Testing & Mocking

API Testing Strategies: Unit, Integration, Contract, and E2E

Overview of API testing levels — when to use unit tests, integration tests with test servers, contract tests, and end-to-end tests, with status code validation at each level.

The Testing Pyramid for APIs

The classic testing pyramid applies to APIs just as it does to UI code, but the layers have different shapes. At the base sit fast, cheap unit tests for individual handlers and business logic. In the middle sit integration tests that spin up a real (or in-process) HTTP server. Above that, contract tests verify the boundary between services. At the peak, a small suite of end-to-end tests exercises the full stack against a live environment.

Layer       | Speed        | Cost           | Confidence
------------|--------------|----------------|-----------------------------
Unit        | Milliseconds | Cheapest       | Logic correctness
Integration | Seconds      | Moderate       | HTTP interface correctness
Contract    | Seconds      | Moderate       | Cross-service compatibility
E2E         | Minutes      | Most expensive | Full-stack correctness

The mistake most teams make is inverting the pyramid — writing too many E2E tests and too few unit tests. E2E tests are slow, brittle, and expensive to maintain. Aim for roughly 70% unit, 20% integration, and 10% contract+E2E.

Unit Testing HTTP Handlers

Unit tests for API handlers should test logic in isolation. Mock all I/O — database calls, external HTTP requests, caches. What you are testing is: given this request, does the handler produce the right status code and response body?

# pytest + unittest.mock example (Django/Flask style)
# `client` is assumed to be the framework's test client (e.g. Flask's
# app.test_client() or Django's test Client), provided by a fixture
from unittest.mock import patch, MagicMock
import pytest

def test_get_user_returns_200_when_found():
    user = MagicMock(id=42, name='Alice', email='alice@example.com')
    with patch('myapp.views.UserService.get_by_id', return_value=user):
        response = client.get('/users/42')
    assert response.status_code == 200
    assert response.json()['name'] == 'Alice'

def test_get_user_returns_404_when_missing():
    with patch('myapp.views.UserService.get_by_id', return_value=None):
        response = client.get('/users/999')
    assert response.status_code == 404
    assert response.json()['error'] == 'User not found'

def test_create_user_returns_422_on_invalid_email():
    response = client.post('/users', json={'name': 'Bob', 'email': 'not-an-email'})
    assert response.status_code == 422
    errors = response.json()['detail']
    assert any('email' in str(e) for e in errors)

Always test the error paths explicitly. A handler that returns 200 for happy-path requests but crashes with a 500 on bad input is broken, yet many test suites skip the error cases.
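
To make the point concrete, here is a framework-agnostic sketch of that failure mode and its fix. The `safe_handler` wrapper and dict-shaped responses are illustrative inventions, not any real framework's API; real frameworks provide equivalent exception-to-response middleware:

```python
def safe_handler(handler):
    """Convert unhandled exceptions into structured error responses."""
    def wrapped(request):
        try:
            return handler(request)
        except KeyError:
            # A missing resource becomes an explicit 404
            return {'status': 404, 'body': {'error': 'Not found'}}
        except Exception:
            # Anything unexpected becomes a 500 with no stack trace leaked
            return {'status': 500, 'body': {'error': 'Internal server error'}}
    return wrapped

@safe_handler
def get_user(request):
    users = {42: {'id': 42, 'name': 'Alice'}}
    return {'status': 200, 'body': users[request['user_id']]}

# Each error path is now an explicit, testable branch:
assert get_user({'user_id': 42})['status'] == 200   # happy path
assert get_user({'user_id': 999})['status'] == 404  # unknown user
assert get_user(None)['status'] == 500              # malformed request
```

The assertions at the bottom are exactly the error-path tests that suites tend to skip: one per branch of the error handling, not just the happy path.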

Testing Authentication

Authentication checks should be tested as a separate concern from business logic:

def test_protected_endpoint_returns_401_without_token():
    response = client.get('/api/profile')
    assert response.status_code == 401
    assert 'WWW-Authenticate' in response.headers

def test_protected_endpoint_returns_403_with_wrong_role():
    token = make_token(user_id=1, role='viewer')
    response = client.get('/api/admin/users', headers={'Authorization': f'Bearer {token}'})
    assert response.status_code == 403

Integration Testing

Integration tests exercise the full HTTP stack — routing, middleware, serialization, and the real database (or a test database). They are slower than unit tests but catch a different class of bugs: misconfigured routes, middleware that strips headers, serializer bugs.

Django TestCase

from django.test import TestCase, Client
from myapp.models import User

class UserAPITests(TestCase):
    def setUp(self):
        self.client = Client()
        self.user = User.objects.create_user(
            username='alice', email='alice@example.com', password='testpass'
        )

    def test_list_users_requires_auth(self):
        response = self.client.get('/api/users/')
        self.assertEqual(response.status_code, 401)

    def test_list_users_returns_200_for_admin(self):
        self.client.force_login(self.user)
        response = self.client.get('/api/users/')
        self.assertEqual(response.status_code, 200)
        self.assertIn('results', response.json())

Supertest (Node.js)

const request = require('supertest');
const app = require('../app');

describe('GET /users/:id', () => {
  it('returns 200 with user data when found', async () => {
    const res = await request(app).get('/users/1');
    expect(res.statusCode).toBe(200);
    expect(res.body).toHaveProperty('id', 1);
  });

  it('returns 404 when user does not exist', async () => {
    const res = await request(app).get('/users/99999');
    expect(res.statusCode).toBe(404);
  });
});

Database Fixtures

Use fixtures or factories to set up known state before each test. Avoid sharing state between tests — each test should create exactly the data it needs:

# factory_boy example
import factory
from myapp.models import Order

class OrderFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Order
    status = 'pending'
    total_cents = factory.Faker('random_int', min=100, max=100000)

def test_cancel_order_returns_200():
    order = OrderFactory(status='pending')
    response = client.post(f'/orders/{order.id}/cancel')
    assert response.status_code == 200
    order.refresh_from_db()
    assert order.status == 'cancelled'

def test_cancel_shipped_order_returns_409():
    order = OrderFactory(status='shipped')
    response = client.post(f'/orders/{order.id}/cancel')
    assert response.status_code == 409

Contract Testing

Contract testing sits between integration tests and E2E tests. Instead of deploying both services and running real requests, you capture the consumer's expectations as a 'pact' and verify the provider satisfies it independently.

This makes contract tests fast (no network), reliable (no environment dependencies), and extremely good at catching breaking changes early.

# Pact consumer test (Python)
from pact import Consumer, Provider, Like

pact = Consumer('order-service').has_pact_with(Provider('user-service'))

def test_get_user_contract():
    (pact
     .given('user 42 exists')
     .upon_receiving('a GET request for user 42')
     .with_request('GET', '/users/42')
     .will_respond_with(200, body={'id': 42, 'name': Like('Alice')}))

    with pact:
        result = user_service_client.get_user(42)
        assert result['id'] == 42

See the dedicated contract testing guide for full Pact broker setup and CI integration.

End-to-End Testing

E2E tests run against a fully deployed environment — real database, real services, real network. They are valuable for testing the paths that matter most to users, but they are slow and prone to flakiness.

# pytest + httpx against a staging environment
import httpx
import pytest

BASE_URL = 'https://staging.example.com'

@pytest.mark.e2e
def test_full_order_flow():
    # Create user
    r = httpx.post(f'{BASE_URL}/users', json={'name': 'Test', 'email': 'e2e-test@example.com'})
    assert r.status_code == 201
    user_id = r.json()['id']

    # Authenticate
    r = httpx.post(f'{BASE_URL}/auth/token', json={'email': 'e2e-test@example.com', 'password': 'pw'})
    assert r.status_code == 200
    token = r.json()['access_token']

    # Place order
    headers = {'Authorization': f'Bearer {token}'}
    r = httpx.post(f'{BASE_URL}/orders', json={'item_id': 1, 'qty': 2}, headers=headers)
    assert r.status_code == 201

Flaky Test Mitigation

E2E tests fail for reasons unrelated to code correctness — network timeouts, race conditions, third-party service outages. Mitigate this with:

  • Retry logic: Re-run failed tests up to 2-3 times before marking them as failed
  • Idempotent setup: Each test creates its own isolated data with unique identifiers
  • Quarantine zone: Flaky tests go to a separate suite that doesn't block deployment
  • Deterministic IDs: Use UUIDs generated from test names to make cleanup reliable
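
The retry and deterministic-ID ideas can be sketched in a few lines. In real suites a plugin such as pytest-rerunfailures (`--reruns 2`) handles retries; the `retry_flaky` decorator and `flaky_check` function below are hypothetical names showing the mechanism:

```python
import functools
import uuid

def retry_flaky(times=3):
    """Re-run a flaky test body up to `times` attempts before failing."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # assertion failures, timeouts, etc.
                    last_exc = exc
            raise last_exc  # all attempts exhausted: surface the last failure
        return wrapper
    return decorator

attempts = {'count': 0}

@retry_flaky(times=3)
def flaky_check():
    attempts['count'] += 1
    # Simulate a transient failure on the first attempt only
    assert attempts['count'] >= 2

flaky_check()  # fails once, succeeds on the second attempt

# Deterministic IDs: derive a stable UUID from the test name so setup
# and cleanup can always find the data a given test created
run_id = uuid.uuid5(uuid.NAMESPACE_DNS, 'test_full_order_flow')
```

Retrying belongs only in the E2E layer; a unit or integration test that needs retries is hiding a real bug.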

Choosing the Right Level

Ask these questions to pick the right test type:

  • Is this testing logic? → Unit test
  • Is this testing HTTP routing, middleware, or serialization? → Integration test
  • Is this testing that two services agree on an interface? → Contract test
  • Is this testing that the whole user journey works? → E2E test (sparingly)

The goal is maximum confidence at minimum cost. Most bugs live in logic and interfaces, not in end-to-end flows — so invest accordingly.
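
One practical way to act on this split is to tag tests by level with pytest markers, so fast layers run on every commit and slower layers run on a schedule. A sketch, assuming pytest (the marker names are a convention, not built-ins):

```ini
# pytest.ini — register markers so each layer can be selected separately
[pytest]
markers =
    integration: tests that exercise the full HTTP stack and a test database
    e2e: slow tests against a deployed environment

# Fast feedback on every commit:
#   pytest -m "not integration and not e2e"
# Integration pass in CI:
#   pytest -m integration
# E2E suite on a schedule, not as a merge gate:
#   pytest -m e2e
```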
