API Testing with Postman
What is Postman?
Postman is a collaborative API development platform that lets you build, test, and document APIs through a visual interface. For AI/ML engineers, it's the fastest way to test your model's API without writing code.
The Control Room Analogy
Think of Postman as a mission control room for your API:
- You can send any type of signal (GET, POST, PUT, DELETE) to your API
- You can monitor the response in real-time (status code, body, headers, time)
- You can automate sequences of signals (collection runner)
- You can share your control panel with teammates (collections export)
Postman vs Other Testing Tools
| Feature | Postman | curl | httpx (Python) | Thunder Client (VS Code) |
|---|---|---|---|---|
| Visual interface | ✅ Full GUI | ❌ CLI only | ❌ Code only | ✅ In VS Code |
| Test scripts | ✅ JavaScript | ❌ Manual check | ✅ Python assertions | ⚠️ Limited |
| Environments | ✅ Multiple | ❌ Manual | ✅ Code-managed | ✅ Basic |
| Collections | ✅ Organized folders | ❌ Shell scripts | ✅ Test classes | ⚠️ Basic |
| Collaboration | ✅ Team workspaces | ❌ None | ✅ Via Git | ❌ None |
| CI/CD integration | ✅ Newman CLI | ✅ Native | ✅ pytest | ❌ None |
| Learning curve | 🟢 Low | 🟡 Medium | 🟡 Medium | 🟢 Low |
| Best for | Manual + automated testing | Quick one-off tests | Programmatic testing | Quick VS Code testing |
- Postman: Interactive exploration, team sharing, manual testing with automation capabilities
- curl: Quick one-liner tests in the terminal, CI scripts
- httpx/TestClient: Programmatic tests in pytest, CI/CD pipelines
- Thunder Client: Quick checks without leaving VS Code
Getting Started with Postman
Installation
- Download from postman.com/downloads
- Install and create a free account
- Open Postman — you'll see the main workspace
Creating Requests
Health Check Request (GET)
- Click New → HTTP Request
- Set method to GET
- Enter URL: http://localhost:8000/health
- Click Send
Expected response:
{
    "status": "healthy",
    "model_loaded": true,
    "model_version": "1.0.0",
    "timestamp": "2025-01-15T10:30:00Z"
}
Prediction Request (POST)
- Set method to POST
- Enter URL: http://localhost:8000/api/v1/predict
- Go to the Body tab → select raw → set type to JSON
- Enter the body:
{
    "features": [5.1, 3.5, 1.4, 0.2, 2.3]
}
- Click Send
Expected response:
{
    "prediction": 1,
    "confidence": 0.87,
    "model_version": "1.0.0"
}
Batch Prediction Request (POST)
Follow the same steps, but send the request to http://localhost:8000/api/v1/predict/batch with this body:
{
    "instances": [
        {"features": [5.1, 3.5, 1.4, 0.2, 2.3]},
        {"features": [6.7, 3.0, 5.2, 2.3, 1.1]},
        {"features": [4.9, 2.4, 3.3, 1.0, 0.5]}
    ]
}
Environments and Variables
Environments let you switch between local, staging, and production without changing your requests.
Setting Up Environments
- Click the Environments tab in the sidebar
- Create a new environment called Local
- Add variables:
| Variable | Initial Value | Current Value |
|---|---|---|
| base_url | http://localhost:8000 | http://localhost:8000 |
| api_version | v1 | v1 |
| auth_token | test-token-123 | test-token-123 |
- Create another environment called Staging:
| Variable | Initial Value | Current Value |
|---|---|---|
| base_url | https://staging-api.example.com | https://staging-api.example.com |
| api_version | v1 | v1 |
| auth_token | staging-token-456 | staging-token-456 |
Using Variables in Requests
Replace hardcoded URLs with variables using double curly braces:
GET {{base_url}}/health
POST {{base_url}}/api/{{api_version}}/predict
Headers with variables:
Authorization: Bearer {{auth_token}}
Postman has 5 variable scopes (from broadest to narrowest):
- Global — available everywhere
- Collection — available within a collection
- Environment — available when the environment is selected
- Data — from CSV/JSON files during collection runs
- Local — set in scripts, available only during execution
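When the same name exists in several scopes, the narrowest one wins. As a mental model only, the lookup can be sketched in plain JavaScript (this is not Postman's actual implementation, and the sample values are made up):

```javascript
// Sketch of Postman's variable-resolution order: the narrowest
// scope that defines a name wins. Illustrative only; not
// Postman's real implementation.
const scopes = {
  global:      { base_url: "https://fallback.example.com", api_version: "v1" },
  collection:  { api_version: "v1" },
  environment: { base_url: "http://localhost:8000", auth_token: "test-token-123" },
  data:        {},                          // filled per iteration from CSV/JSON
  local:       { auth_token: "override-from-script" },
};

// Check scopes from narrowest to broadest and return the first hit.
function resolveVariable(name) {
  for (const scope of ["local", "data", "environment", "collection", "global"]) {
    if (name in scopes[scope]) return scopes[scope][name];
  }
  return undefined;
}

console.log(resolveVariable("auth_token")); // local wins over environment
console.log(resolveVariable("base_url"));   // environment wins over global
```

This ordering explains why a value set in a script (local scope) can temporarily shadow an environment variable of the same name.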
Writing Test Scripts
Postman uses JavaScript for test scripts. Tests run after the response is received.
Basic Test Assertions
Go to the Scripts tab → Post-response and write:
// Test 1: Status code is 200
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// Test 2: Response time is acceptable
pm.test("Response time is under 500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

// Test 3: Response has the correct content type
// (include, not equal, so a charset suffix doesn't break the test)
pm.test("Content-Type is JSON", function () {
    pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
});
Testing Response Body
// Parse the JSON response
const response = pm.response.json();

// Test prediction format
pm.test("Prediction is an integer", function () {
    pm.expect(response.prediction).to.be.a("number");
    pm.expect(Number.isInteger(response.prediction)).to.be.true;
});

// Test confidence range
pm.test("Confidence is between 0 and 1", function () {
    pm.expect(response.confidence).to.be.at.least(0);
    pm.expect(response.confidence).to.be.at.most(1);
});

// Test model version exists
pm.test("Model version is present", function () {
    pm.expect(response).to.have.property("model_version");
    pm.expect(response.model_version).to.be.a("string");
});

// Test prediction class is valid
pm.test("Prediction is class 0 or 1", function () {
    pm.expect(response.prediction).to.be.oneOf([0, 1]);
});
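The same response contract can be captured as one reusable validator. This is a plain-JavaScript sketch, not a Postman API; the function name and error messages are illustrative:

```javascript
// Validate the prediction-response contract in one place.
// Returns a list of violations (empty list = valid).
// Illustrative sketch; reusable in any Node-based test harness.
function validatePrediction(response) {
  const errors = [];
  if (!Number.isInteger(response.prediction)) {
    errors.push("prediction must be an integer");
  }
  if (![0, 1].includes(response.prediction)) {
    errors.push("prediction must be class 0 or 1");
  }
  if (typeof response.confidence !== "number" ||
      response.confidence < 0 || response.confidence > 1) {
    errors.push("confidence must be a number in [0, 1]");
  }
  if (typeof response.model_version !== "string") {
    errors.push("model_version must be a string");
  }
  return errors;
}

console.log(validatePrediction({ prediction: 1, confidence: 0.87, model_version: "1.0.0" })); // []
```

Centralizing the checks like this keeps single, batch, and data-driven tests asserting the exact same rules.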
Testing Error Responses
For invalid input requests:
pm.test("Status code is 422 for invalid input", function () {
    pm.response.to.have.status(422);
});

pm.test("Error detail is present", function () {
    const response = pm.response.json();
    pm.expect(response).to.have.property("detail");
});
Pre-Request Scripts
Pre-request scripts run before the request is sent. Use them to generate dynamic data:
// Generate a random set of features for testing
const features = [];
for (let i = 0; i < 5; i++) {
    features.push(parseFloat((Math.random() * 10).toFixed(2)));
}
pm.variables.set("dynamic_features", JSON.stringify(features));
console.log("Generated features:", features);
Use in the request body:
{
    "features": {{dynamic_features}}
}
Setting Variables from Responses
// After a prediction request, save the result for the next request
const response = pm.response.json();
pm.collectionVariables.set("last_prediction", response.prediction);
pm.collectionVariables.set("last_confidence", response.confidence);
Creating Collections
A collection is an organized group of related requests. Think of it as a test suite for your API.
Recommended Collection Structure
📁 AI Prediction API
├── 📁 Health & Status
│   ├── GET Health Check
│   └── GET Model Info
├── 📁 Predictions
│   ├── POST Single Prediction
│   ├── POST Batch Prediction
│   └── POST Prediction with Metadata
├── 📁 Error Handling
│   ├── POST Empty Features
│   ├── POST Wrong Types
│   ├── POST Missing Body
│   └── POST Invalid JSON
└── 📁 Performance
    ├── POST Stress Test (10 requests)
    └── POST Large Batch (100 instances)
Creating a Collection
- Click New → Collection
- Name it AI Prediction API
- Add a description:
  Test suite for the AI Prediction API built with FastAPI.
  Base URL: {{base_url}}
  Endpoints:
  - GET /health
  - POST /api/v1/predict
  - POST /api/v1/predict/batch
- Add folders and requests as shown above
Collection Runner
The Collection Runner lets you execute all requests in a collection sequentially, with test results for each.
Running a Collection
- Click the Run button on your collection
- Configure:
  - Iterations: How many times to run the full collection (useful for stress testing)
  - Delay: Milliseconds between each request
  - Environment: Select Local, Staging, or Production
- Click Run AI Prediction API
Using Data Files
You can feed different inputs to your requests using CSV or JSON files:
feature_1,feature_2,feature_3,feature_4,feature_5,expected_class
5.1,3.5,1.4,0.2,2.3,1
6.7,3.0,5.2,2.3,1.1,0
4.9,2.4,3.3,1.0,0.5,1
In the request body, reference data variables:
{
    "features": [
        {{feature_1}}, {{feature_2}}, {{feature_3}},
        {{feature_4}}, {{feature_5}}
    ]
}
In the test script:
const expectedClass = parseInt(pm.iterationData.get("expected_class"));
const response = pm.response.json();

pm.test("Prediction matches expected class", function () {
    pm.expect(response.prediction).to.equal(expectedClass);
});
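Under the hood, each CSV row becomes that iteration's data object, and the {{...}} placeholders in the body are filled from it. A simplified sketch of that substitution (illustrative only; this is not Newman's actual code):

```javascript
// Simplified sketch of how one CSV row becomes iteration data and
// fills {{...}} placeholders in a request-body template.
const header = "feature_1,feature_2,feature_3,feature_4,feature_5,expected_class";
const row    = "5.1,3.5,1.4,0.2,2.3,1";

// Build the iteration-data object (roughly what pm.iterationData exposes)
const keys = header.split(",");
const values = row.split(",");
const iterationData = Object.fromEntries(keys.map((k, i) => [k, values[i]]));

// Substitute {{name}} placeholders in the body template
const template =
  '{"features": [{{feature_1}}, {{feature_2}}, {{feature_3}}, {{feature_4}}, {{feature_5}}]}';
const body = template.replace(/\{\{(\w+)\}\}/g, (_, name) => iterationData[name]);

console.log(body); // {"features": [5.1, 3.5, 1.4, 0.2, 2.3]}
```

Note that CSV values arrive as strings, which is why the test script above calls parseInt on expected_class before comparing.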
Chaining Requests
Some workflows require sequential requests where the output of one becomes the input of the next.
Example: Health Check → Predict → Explain
Request 1 — Post-response script:
pm.test("API is healthy", function () {
    pm.response.to.have.status(200);
    const data = pm.response.json();
    pm.collectionVariables.set("model_version", data.model_version);
});
Request 2 — Pre-request script:
console.log("Using model version:", pm.collectionVariables.get("model_version"));
Request 2 — Post-response script:
const data = pm.response.json();
pm.collectionVariables.set("prediction_result", data.prediction);
pm.collectionVariables.set("prediction_confidence", data.confidence);
Request 3 — Body:
{
    "features": [5.1, 3.5, 1.4, 0.2, 2.3],
    "prediction": {{prediction_result}},
    "model_version": "{{model_version}}"
}
Sharing Collections
Export/Import
Export:
- Right-click on the collection → Export
- Choose format Collection v2.1
- Save as ai-prediction-api.postman_collection.json
Import:
- Click Import → drag the JSON file
- The collection appears in your workspace
Export your Postman collections and environment files to your Git repository. This way, your API tests live alongside your code:
project/
├── app/
├── tests/
├── postman/
│   ├── ai-prediction-api.postman_collection.json
│   ├── local.postman_environment.json
│   └── staging.postman_environment.json
└── README.md
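For reference, a trimmed environment export looks roughly like this (the exact field set varies by Postman version, so treat it as a sketch rather than the canonical schema):

```json
{
  "name": "Local",
  "values": [
    { "key": "base_url", "value": "http://localhost:8000", "enabled": true },
    { "key": "api_version", "value": "v1", "enabled": true },
    { "key": "auth_token", "value": "test-token-123", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```

With these files committed, a new teammate can clone the repo, import two JSON files, and run the full suite in minutes.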
Newman CLI for CI/CD
Newman is the command-line companion for Postman. It runs collections from the terminal — perfect for CI/CD pipelines.
Installation
npm install -g newman
npm install -g newman-reporter-htmlextra # optional: HTML reports
Running Collections
# Basic run
newman run postman/ai-prediction-api.postman_collection.json

# With environment
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json

# With iterations and data file
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -n 5 \
  -d postman/test-data.csv

# With HTML report
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -r htmlextra \
  --reporter-htmlextra-export reports/api-test-report.html
Newman in GitHub Actions
# .github/workflows/api-tests.yml
name: API Tests (Postman/Newman)

on:
  push:
    branches: [main]

jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Start API server
        run: |
          pip install -r requirements.txt
          uvicorn app.main:app --host 0.0.0.0 --port 8000 &
          sleep 5

      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra

      - name: Run Postman tests
        run: |
          newman run postman/ai-prediction-api.postman_collection.json \
            -e postman/local.postman_environment.json \
            -r cli,htmlextra \
            --reporter-htmlextra-export reports/report.html

      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: api-test-report
          path: reports/report.html
Newman Output Example
AI Prediction API

→ Health Check
  GET http://localhost:8000/health [200 OK, 234B, 45ms]
  ✓ Status code is 200
  ✓ Response time is under 500ms
  ✓ API is healthy

→ Single Prediction
  POST http://localhost:8000/api/v1/predict [200 OK, 189B, 123ms]
  ✓ Status code is 200
  ✓ Prediction is an integer
  ✓ Confidence is between 0 and 1
  ✓ Prediction is class 0 or 1

→ Empty Features (Error)
  POST http://localhost:8000/api/v1/predict [422 Unprocessable Entity, 312B, 12ms]
  ✓ Status code is 422 for invalid input
  ✓ Error detail is present
┌─────────────────────────┬────────────────┬───────────────┐
│ │ executed │ failed │
├─────────────────────────┼────────────────┼───────────────┤
│ iterations │ 1 │ 0 │
├─────────────────────────┼────────────────┼───────────────┤
│ requests │ 8 │ 0 │
├─────────────────────────┼────────────────┼───────────────┤
│ test-scripts │ 16 │ 0 │
├─────────────────────────┼────────────────┼───────────────┤
│ assertions │ 22 │ 0 │
└─────────────────────────┴────────────────┴───────────────┘
Advanced Postman Techniques
Request Authorization
For APIs with authentication:
- Go to the Authorization tab
- Select Bearer Token
- Enter {{auth_token}}
This automatically adds the header: Authorization: Bearer <token>
Monitor APIs
Postman Monitors run your collections on a schedule (e.g., every 5 minutes):
- Select your collection → Monitor → Create a Monitor
- Set frequency and environment
- Get alerts when tests fail
API Documentation
Generate beautiful documentation from your collection:
- Open your collection → click View Documentation
- Publish it — get a shareable URL
- Include examples, descriptions, and test snippets
Key Takeaways
- Postman is the visual control room for testing APIs — no code required for basic testing
- Use environments to switch between local, staging, and production effortlessly
- Write test scripts (JavaScript) to automate response validation
- Organize requests into collections with descriptive folders
- Use pre-request scripts to generate dynamic data and chain requests
- Export collections to Git for version control
- Use Newman to run Postman tests in CI/CD pipelines
- Choose the right tool for the job: Postman for exploration, pytest for automation, curl for quick checks