
API Testing with Postman

Theory 45 min

What is Postman?

Postman is a collaborative API development platform that lets you build, test, and document APIs through a visual interface. For AI/ML engineers, it's the fastest way to test your model's API without writing code.

The Control Room Analogy

Think of Postman as a mission control room for your API:

  • You can send any type of signal (GET, POST, PUT, DELETE) to your API
  • You can monitor the response in real-time (status code, body, headers, time)
  • You can automate sequences of signals (collection runner)
  • You can share your control panel with teammates (collections export)

Postman vs Other Testing Tools

| Feature | Postman | curl | httpx (Python) | Thunder Client (VS Code) |
| --- | --- | --- | --- | --- |
| Visual interface | ✅ Full GUI | ❌ CLI only | ❌ Code only | ✅ In VS Code |
| Test scripts | ✅ JavaScript | ❌ Manual check | ✅ Python assertions | ⚠️ Limited |
| Environments | ✅ Multiple | ❌ Manual | ✅ Code-managed | ✅ Basic |
| Collections | ✅ Organized folders | ❌ Shell scripts | ✅ Test classes | ⚠️ Basic |
| Collaboration | ✅ Team workspaces | ❌ None | ✅ Via Git | ❌ None |
| CI/CD integration | ✅ Newman CLI | ✅ Native | ✅ pytest | ❌ None |
| Learning curve | 🟢 Low | 🟡 Medium | 🟡 Medium | 🟢 Low |
| Best for | Manual + automated testing | Quick one-off tests | Programmatic testing | Quick VS Code testing |

When to Use What
  • Postman: Interactive exploration, team sharing, manual testing with automation capabilities
  • curl: Quick one-liner tests in the terminal, CI scripts
  • httpx/TestClient: Programmatic tests in pytest, CI/CD pipelines
  • Thunder Client: Quick checks without leaving VS Code
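To make the comparison concrete, here is what the "programmatic testing" route looks like without any GUI. A minimal sketch using only Python's standard library; the request is constructed but deliberately not sent, since no server is assumed to be running (a real test would use httpx or FastAPI's TestClient):

```python
import json
import urllib.request

# Build the same POST that Postman or curl would send to the prediction endpoint
payload = json.dumps({"features": [5.1, 3.5, 1.4, 0.2, 2.3]}).encode()
req = urllib.request.Request(
    "http://localhost:8000/api/v1/predict",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.method)                      # POST
print(req.full_url)                    # http://localhost:8000/api/v1/predict
print(req.get_header("Content-type"))  # application/json
```

Everything Postman does in the sections below (method, URL, headers, JSON body) maps onto these same four ingredients.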

Getting Started with Postman

Installation

  1. Download from postman.com/downloads
  2. Install and create a free account
  3. Open Postman — you'll see the main workspace

Postman Interface Overview


Creating Requests

Health Check Request (GET)

  1. Click New → HTTP Request
  2. Set method to GET
  3. Enter URL: http://localhost:8000/health
  4. Click Send

Expected response:

{
  "status": "healthy",
  "model_loaded": true,
  "model_version": "1.0.0",
  "timestamp": "2025-01-15T10:30:00Z"
}

Prediction Request (POST)

  1. Set method to POST
  2. Enter URL: http://localhost:8000/api/v1/predict
  3. Go to Body tab → select raw → set type to JSON
  4. Enter the body:
{
  "features": [5.1, 3.5, 1.4, 0.2, 2.3]
}
  5. Click Send

Expected response:

{
  "prediction": 1,
  "confidence": 0.87,
  "model_version": "1.0.0"
}

Batch Prediction Request (POST)

Same setup as the single prediction, but send the request to http://localhost:8000/api/v1/predict/batch with this body:

{
  "instances": [
    {"features": [5.1, 3.5, 1.4, 0.2, 2.3]},
    {"features": [6.7, 3.0, 5.2, 2.3, 1.1]},
    {"features": [4.9, 2.4, 3.3, 1.0, 0.5]}
  ]
}
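Before pasting a large batch body into Postman, it can help to sanity-check its shape; a small standard-library sketch (the 5-feature width is taken from the examples above and is an assumption about the model's input):

```python
import json

batch_body = """
{
  "instances": [
    {"features": [5.1, 3.5, 1.4, 0.2, 2.3]},
    {"features": [6.7, 3.0, 5.2, 2.3, 1.1]},
    {"features": [4.9, 2.4, 3.3, 1.0, 0.5]}
  ]
}
"""

batch = json.loads(batch_body)
# Every instance should carry the same number of features the model expects
widths = {len(inst["features"]) for inst in batch["instances"]}
print(len(batch["instances"]), widths)  # 3 {5}
```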

Environments and Variables

Environments let you switch between local, staging, and production without changing your requests.

Setting Up Environments

  1. Click the Environments tab in the sidebar
  2. Create a new environment called Local
  3. Add variables:
| Variable | Initial Value | Current Value |
| --- | --- | --- |
| base_url | http://localhost:8000 | http://localhost:8000 |
| api_version | v1 | v1 |
| auth_token | test-token-123 | test-token-123 |

  4. Create another environment called Staging:

| Variable | Initial Value | Current Value |
| --- | --- | --- |
| base_url | https://staging-api.example.com | https://staging-api.example.com |
| api_version | v1 | v1 |
| auth_token | staging-token-456 | staging-token-456 |

Using Variables in Requests

Replace hardcoded URLs with variables using double curly braces:

GET {{base_url}}/health
POST {{base_url}}/api/{{api_version}}/predict

Headers with variables:

Authorization: Bearer {{auth_token}}
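Conceptually, Postman substitutes each {{name}} placeholder with the value from the active scope before the request goes out. A simplified illustration of that resolution in Python (not Postman's actual implementation):

```python
import re

# Values from the "Local" environment defined above
env = {
    "base_url": "http://localhost:8000",
    "api_version": "v1",
    "auth_token": "test-token-123",
}

def resolve(template: str, variables: dict) -> str:
    """Replace every {{name}} placeholder with its value from the active environment."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

print(resolve("{{base_url}}/api/{{api_version}}/predict", env))
# http://localhost:8000/api/v1/predict
print(resolve("Bearer {{auth_token}}", env))
# Bearer test-token-123
```

Switching environments just swaps the dictionary the placeholders are resolved against, which is why the requests themselves never change.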

Variable Scopes

Postman has 5 variable scopes (from broadest to narrowest):

  1. Global — available everywhere
  2. Collection — available within a collection
  3. Environment — available when the environment is selected
  4. Data — from CSV/JSON files during collection runs
  5. Local — set in scripts, available only during execution
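When the same name exists in several scopes, Postman resolves it from the narrowest one. The lookup order can be pictured with Python's ChainMap; the values below are made up purely for illustration:

```python
from collections import ChainMap

# Narrowest scope first: local > data > environment > collection > global
global_vars      = {"base_url": "https://prod.example.com", "timeout": "30"}
collection_vars  = {"base_url": "https://staging-api.example.com"}
environment_vars = {"base_url": "http://localhost:8000"}
data_vars        = {}   # would come from CSV/JSON files during a collection run
local_vars       = {}   # would be set in scripts during execution

scopes = ChainMap(local_vars, data_vars, environment_vars, collection_vars, global_vars)
print(scopes["base_url"])  # http://localhost:8000 — environment shadows collection and global
print(scopes["timeout"])   # 30 — not defined anywhere narrower, falls through to global
```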

Writing Test Scripts

Postman uses JavaScript for test scripts. Tests run after the response is received.

Basic Test Assertions

Go to the Scripts tab → Post-response and write:

// Test 1: Status code is 200
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// Test 2: Response time is acceptable
pm.test("Response time is under 500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// Test 3: Response has correct content type
pm.test("Content-Type is JSON", function () {
    pm.response.to.have.header("Content-Type", "application/json");
});

Testing Response Body

// Parse the JSON response
const response = pm.response.json();

// Test prediction format
pm.test("Prediction is an integer", function () {
    pm.expect(response.prediction).to.be.a("number");
    pm.expect(Number.isInteger(response.prediction)).to.be.true;
});

// Test confidence range
pm.test("Confidence is between 0 and 1", function () {
    pm.expect(response.confidence).to.be.at.least(0);
    pm.expect(response.confidence).to.be.at.most(1);
});

// Test model version exists
pm.test("Model version is present", function () {
    pm.expect(response).to.have.property("model_version");
    pm.expect(response.model_version).to.be.a("string");
});

// Test prediction class is valid
pm.test("Prediction is class 0 or 1", function () {
    pm.expect(response.prediction).to.be.oneOf([0, 1]);
});
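For comparison, the same response checks written for the pytest/httpx route from the table earlier look like this in plain Python. The sample response is hard-coded so the sketch runs without a server; in a real test it would come from httpx or TestClient:

```python
import json

# Stand-in for pm.response.json(); a real test would parse an actual HTTP response
response = json.loads('{"prediction": 1, "confidence": 0.87, "model_version": "1.0.0"}')

assert isinstance(response["prediction"], int)     # prediction is an integer
assert 0 <= response["confidence"] <= 1            # confidence is between 0 and 1
assert isinstance(response["model_version"], str)  # model version is present
assert response["prediction"] in (0, 1)            # prediction is class 0 or 1
print("all response-body checks passed")
```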

Testing Error Responses

For invalid input requests:

pm.test("Status code is 422 for invalid input", function () {
    pm.response.to.have.status(422);
});

pm.test("Error detail is present", function () {
    const response = pm.response.json();
    pm.expect(response).to.have.property("detail");
});

Pre-Request Scripts

Pre-request scripts run before the request is sent. Use them to generate dynamic data:

// Generate a random set of features for testing
const features = [];
for (let i = 0; i < 5; i++) {
    features.push(parseFloat((Math.random() * 10).toFixed(2)));
}

pm.variables.set("dynamic_features", JSON.stringify(features));
console.log("Generated features:", features);

Use in the request body:

{
  "features": {{dynamic_features}}
}

Setting Variables from Responses

// After a prediction request, save the result for the next request
const response = pm.response.json();
pm.collectionVariables.set("last_prediction", response.prediction);
pm.collectionVariables.set("last_confidence", response.confidence);

Creating Collections

A collection is an organized group of related requests. Think of it as a test suite for your API.

📁 AI Prediction API
├── 📁 Health & Status
│   ├── GET Health Check
│   └── GET Model Info
├── 📁 Predictions
│   ├── POST Single Prediction
│   ├── POST Batch Prediction
│   └── POST Prediction with Metadata
├── 📁 Error Handling
│   ├── POST Empty Features
│   ├── POST Wrong Types
│   ├── POST Missing Body
│   └── POST Invalid JSON
└── 📁 Performance
    ├── POST Stress Test (10 requests)
    └── POST Large Batch (100 instances)

Creating a Collection

  1. Click New → Collection
  2. Name it AI Prediction API
  3. Add a description:

     Test suite for the AI Prediction API built with FastAPI.
     Base URL: {{base_url}}
     Endpoints:
     - GET /health
     - POST /api/v1/predict
     - POST /api/v1/predict/batch

  4. Add folders and requests as shown above

Collection Runner

The Collection Runner lets you execute all requests in a collection sequentially, with test results for each.

Running a Collection

  1. Click the Run button on your collection
  2. Configure:
    • Iterations: How many times to run the full collection (useful for stress testing)
    • Delay: Milliseconds between each request
    • Environment: Select Local, Staging, or Production
  3. Click Run AI Prediction API

Using Data Files

You can feed different inputs to your requests using CSV or JSON files:

feature_1,feature_2,feature_3,feature_4,feature_5,expected_class
5.1,3.5,1.4,0.2,2.3,1
6.7,3.0,5.2,2.3,1.1,0
4.9,2.4,3.3,1.0,0.5,1

In the request body, reference data variables:

{
  "features": [
    {{feature_1}}, {{feature_2}}, {{feature_3}},
    {{feature_4}}, {{feature_5}}
  ]
}

In the test script:

const expectedClass = parseInt(pm.iterationData.get("expected_class"));
const response = pm.response.json();

pm.test("Prediction matches expected class", function () {
    pm.expect(response.prediction).to.equal(expectedClass);
});
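The same data-driven loop can be reproduced outside Postman. A sketch that reads the CSV above with Python's standard library and checks each row; the prediction is faked with the expected value here so the sketch runs offline (a real version would replace that line with an HTTP call per row):

```python
import csv
import io

# Same rows as the CSV data file above
test_data = """feature_1,feature_2,feature_3,feature_4,feature_5,expected_class
5.1,3.5,1.4,0.2,2.3,1
6.7,3.0,5.2,2.3,1.1,0
4.9,2.4,3.3,1.0,0.5,1
"""

for row in csv.DictReader(io.StringIO(test_data)):
    features = [float(row[f"feature_{i}"]) for i in range(1, 6)]
    expected = int(row["expected_class"])
    # Placeholder for a real call to the prediction endpoint
    prediction = expected  # faked so the sketch is runnable without a server
    assert prediction == expected, f"mismatch for {features}"
print("all iterations passed")
```

This is exactly what the Collection Runner does with a data file: one iteration per row, with each column exposed as a variable.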

Chaining Requests

Some workflows require sequential requests where the output of one becomes the input of the next.

Example: Health Check → Predict → Explain

Request 1 — Post-response script:

pm.test("API is healthy", function () {
    pm.response.to.have.status(200);
    const data = pm.response.json();
    pm.collectionVariables.set("model_version", data.model_version);
});

Request 2 — Pre-request script:

console.log("Using model version:", pm.collectionVariables.get("model_version"));

Request 2 — Post-response script:

const data = pm.response.json();
pm.collectionVariables.set("prediction_result", data.prediction);
pm.collectionVariables.set("prediction_confidence", data.confidence);

Request 3 — Body:

{
  "features": [5.1, 3.5, 1.4, 0.2, 2.3],
  "prediction": {{prediction_result}},
  "model_version": "{{model_version}}"
}

Sharing Collections

Export/Import

Export:

  1. Right-click on the collection → Export
  2. Choose format Collection v2.1
  3. Save as ai-prediction-api.postman_collection.json

Import:

  1. Click Import → drag the JSON file
  2. The collection appears in your workspace

Version Control

Export your Postman collections and environment files to your Git repository. This way, your API tests live alongside your code:

project/
├── app/
├── tests/
├── postman/
│   ├── ai-prediction-api.postman_collection.json
│   ├── local.postman_environment.json
│   └── staging.postman_environment.json
└── README.md

Newman CLI for CI/CD

Newman is the command-line companion for Postman. It runs collections from the terminal — perfect for CI/CD pipelines.

Installation

npm install -g newman
npm install -g newman-reporter-htmlextra # optional: HTML reports

Running Collections

# Basic run
newman run postman/ai-prediction-api.postman_collection.json

# With environment
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json

# With iterations and data file
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -n 5 \
  -d postman/test-data.csv

# With HTML report
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -r htmlextra \
  --reporter-htmlextra-export reports/api-test-report.html

Newman in GitHub Actions

# .github/workflows/api-tests.yml

name: API Tests (Postman/Newman)

on:
  push:
    branches: [main]

jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Start API server
        run: |
          pip install -r requirements.txt
          uvicorn app.main:app --host 0.0.0.0 --port 8000 &
          sleep 5

      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra

      - name: Run Postman tests
        run: |
          newman run postman/ai-prediction-api.postman_collection.json \
            -e postman/local.postman_environment.json \
            -r cli,htmlextra \
            --reporter-htmlextra-export reports/report.html

      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: api-test-report
          path: reports/report.html

Newman Output Example

AI Prediction API

→ Health Check
  GET http://localhost:8000/health [200 OK, 234B, 45ms]
  ✓ Status code is 200
  ✓ Response time is under 500ms
  ✓ API is healthy

→ Single Prediction
  POST http://localhost:8000/api/v1/predict [200 OK, 189B, 123ms]
  ✓ Status code is 200
  ✓ Prediction is an integer
  ✓ Confidence is between 0 and 1
  ✓ Prediction is class 0 or 1

→ Empty Features (Error)
  POST http://localhost:8000/api/v1/predict [422 Unprocessable Entity, 312B, 12ms]
  ✓ Status code is 422 for invalid input
  ✓ Error detail is present
┌──────────────┬──────────┬────────┐
│              │ executed │ failed │
├──────────────┼──────────┼────────┤
│ iterations   │        1 │      0 │
├──────────────┼──────────┼────────┤
│ requests     │        8 │      0 │
├──────────────┼──────────┼────────┤
│ test-scripts │       16 │      0 │
├──────────────┼──────────┼────────┤
│ assertions   │       22 │      0 │
└──────────────┴──────────┴────────┘

Advanced Postman Techniques

Request Authorization

For APIs with authentication:

  1. Go to the Authorization tab
  2. Select Bearer Token
  3. Enter {{auth_token}}

This automatically adds the header: Authorization: Bearer <token>

Monitor APIs

Postman Monitors run your collections on a schedule (e.g., every 5 minutes):

  1. Select your collection → Monitor → Create a Monitor
  2. Set frequency and environment
  3. Get alerts when tests fail

API Documentation

Generate beautiful documentation from your collection:

  1. Open your collection → click View Documentation
  2. Publish it — get a shareable URL
  3. Include examples, descriptions, and test snippets

Key Takeaways

  1. Postman is the visual control room for testing APIs — no code required for basic testing
  2. Use environments to switch between local, staging, and production effortlessly
  3. Write test scripts (JavaScript) to automate response validation
  4. Organize requests into collections with descriptive folders
  5. Use pre-request scripts to generate dynamic data and chain requests
  6. Export collections to Git for version control
  7. Use Newman to run Postman tests in CI/CD pipelines
  8. Choose the right tool for the job: Postman for exploration, pytest for automation, curl for quick checks

Additional Resources