
TP8 - API Testing with Postman

Practical Lab · 45 min · Intermediate

Objectives

By the end of this lab, you will be able to:

  • Create a Postman collection for your AI prediction API
  • Write test scripts that validate responses automatically
  • Use environment variables to switch between local and production
  • Run the complete collection with the Collection Runner
  • Export the collection for version control and CI/CD with Newman

Prerequisites

  • Postman installed (free download from postman.com)
  • Your FastAPI prediction API running locally (from TP3 or TP7)
  • API running on http://localhost:8000

Start Your API First

Before beginning this lab, start your API in a terminal:

uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

Verify it's running: open http://localhost:8000/docs in your browser.
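For reference, the test scripts in Step 3 assume the /health endpoint returns a JSON body shaped like the following (field names come from the TP3/TP7 API; adjust the scripts if your implementation differs):

```json
{
  "status": "healthy",
  "model_loaded": true,
  "model_version": "1.0.0"
}
```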


Step 1 — Create the Environment

1.1 Create a "Local" Environment

  1. In Postman, click Environments in the left sidebar
  2. Click + to create a new environment
  3. Name it AI API - Local
  4. Add the following variables:
| Variable | Type | Initial Value | Current Value |
|---|---|---|---|
| base_url | default | http://localhost:8000 | http://localhost:8000 |
| api_version | default | v1 | v1 |
| expected_features_count | default | 5 | 5 |
  5. Click Save
  6. Select this environment in the top-right dropdown

1.2 (Optional) Create a "Staging" Environment

| Variable | Type | Initial Value |
|---|---|---|
| base_url | default | https://staging-api.yourapp.com |
| api_version | default | v1 |
| expected_features_count | default | 5 |

Step 2 — Create the Collection

2.1 Initialize the Collection

  1. Click Collections → + (Blank Collection)
  2. Name it AI Prediction API Tests
  3. Add a description:
Comprehensive test suite for the AI Prediction API.
Tests health checks, predictions, error handling, and edge cases.

2.2 Create Folder Structure

Right-click on the collection → Add Folder for each:

  1. Health & Status
  2. Valid Predictions
  3. Error Handling
  4. Edge Cases

Step 3 — Add Requests and Test Scripts

3.1 Health Check (GET)

  1. Right-click Health & Status → Add Request
  2. Name: Health Check
  3. Method: GET
  4. URL: {{base_url}}/health

Post-response Script:

pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("Response time is under 500ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(500);
});

pm.test("Content-Type is JSON", function () {
  pm.response.to.have.header("Content-Type", "application/json");
});

const data = pm.response.json();

pm.test("Status is healthy", function () {
  pm.expect(data.status).to.equal("healthy");
});

pm.test("Model is loaded", function () {
  pm.expect(data.model_loaded).to.be.true;
});

pm.test("Model version is present", function () {
  pm.expect(data).to.have.property("model_version");
  pm.expect(data.model_version).to.be.a("string");
});

// Save model version for other requests
pm.collectionVariables.set("model_version", data.model_version);
console.log("Model version:", data.model_version);

3.2 Single Prediction (POST)

  1. Add to Valid Predictions folder
  2. Name: Single Prediction
  3. Method: POST
  4. URL: {{base_url}}/api/{{api_version}}/predict
  5. Body → raw → JSON:
{
  "features": [5.1, 3.5, 1.4, 0.2, 2.3]
}

Post-response Script:

pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("Response time is under 1000ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(1000);
});

const data = pm.response.json();

pm.test("Prediction is present", function () {
  pm.expect(data).to.have.property("prediction");
});

pm.test("Prediction is an integer", function () {
  pm.expect(Number.isInteger(data.prediction)).to.be.true;
});

pm.test("Prediction is class 0 or 1", function () {
  pm.expect(data.prediction).to.be.oneOf([0, 1]);
});

pm.test("Confidence is present and valid", function () {
  pm.expect(data).to.have.property("confidence");
  pm.expect(data.confidence).to.be.a("number");
  pm.expect(data.confidence).to.be.at.least(0);
  pm.expect(data.confidence).to.be.at.most(1);
});

pm.test("Model version matches", function () {
  pm.expect(data).to.have.property("model_version");
  pm.expect(data.model_version).to.be.a("string");
});

// Save for chaining
pm.collectionVariables.set("last_prediction", data.prediction);
pm.collectionVariables.set("last_confidence", data.confidence);
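The assertions above can also be mirrored as a standalone validator in plain JavaScript, handy for sanity-checking a captured response outside Postman. This is a sketch: the field names (prediction, confidence, model_version) follow the response format assumed throughout this lab.

```javascript
// Validate a prediction response object against the same rules the
// Postman tests assert: integer class 0 or 1, confidence in [0, 1],
// and a string model_version. Returns a list of violation messages.
function validatePrediction(data) {
  const errors = [];
  if (!Number.isInteger(data.prediction)) {
    errors.push("prediction must be an integer");
  }
  if (![0, 1].includes(data.prediction)) {
    errors.push("prediction must be 0 or 1");
  }
  if (typeof data.confidence !== "number" || data.confidence < 0 || data.confidence > 1) {
    errors.push("confidence must be a number in [0, 1]");
  }
  if (typeof data.model_version !== "string") {
    errors.push("model_version must be a string");
  }
  return errors;
}

// A well-formed response passes with no errors...
console.log(validatePrediction({ prediction: 1, confidence: 0.92, model_version: "1.0.0" }).length); // 0
// ...while a malformed one accumulates one message per violated rule.
console.log(validatePrediction({ prediction: "yes", confidence: 2 }).length); // 4
```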

3.3 Prediction with Random Features (POST)

  1. Add to Valid Predictions folder
  2. Name: Random Features Prediction
  3. Method: POST
  4. URL: {{base_url}}/api/{{api_version}}/predict

Pre-request Script:

const features = [];
for (let i = 0; i < 5; i++) {
  features.push(parseFloat((Math.random() * 10 - 5).toFixed(3)));
}
pm.variables.set("random_features", JSON.stringify(features));
console.log("Generated random features:", features);

Body → raw → JSON:

{
  "features": {{random_features}}
}

Post-response Script:

pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

const data = pm.response.json();

pm.test("Prediction is valid class", function () {
  pm.expect(data.prediction).to.be.oneOf([0, 1]);
});

pm.test("Confidence is between 0 and 1", function () {
  pm.expect(data.confidence).to.be.at.least(0);
  pm.expect(data.confidence).to.be.at.most(1);
});
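The pre-request script's feature generation is ordinary JavaScript and can be tried outside Postman; only the pm.variables call is Postman-specific. A small sketch:

```javascript
// Generate n pseudo-random features in [-5, 5], rounded to 3 decimals,
// matching the pre-request script above.
function randomFeatures(n = 5) {
  const features = [];
  for (let i = 0; i < n; i++) {
    features.push(parseFloat((Math.random() * 10 - 5).toFixed(3)));
  }
  return features;
}

// Serialize the array exactly as the {{random_features}} body expects.
console.log(JSON.stringify(randomFeatures())); // five values in [-5, 5]
```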

3.4 Error: Empty Features (POST)

  1. Add to Error Handling folder
  2. Name: Empty Features
  3. Method: POST
  4. URL: {{base_url}}/api/{{api_version}}/predict
  5. Body:
{
  "features": []
}

Post-response Script:

pm.test("Status code is 422 (validation error)", function () {
  pm.response.to.have.status(422);
});

pm.test("Error detail is present", function () {
  const data = pm.response.json();
  pm.expect(data).to.have.property("detail");
});

3.5 Error: Missing Body (POST)

  1. Add to Error Handling folder
  2. Name: Missing Request Body
  3. Method: POST
  4. URL: {{base_url}}/api/{{api_version}}/predict
  5. Body: none (leave empty)

Post-response Script:

pm.test("Status code is 422", function () {
  pm.response.to.have.status(422);
});

3.6 Error: Wrong Data Types (POST)

  1. Add to Error Handling folder
  2. Name: String Features
  3. Body:
{
  "features": ["a", "b", "c", "d", "e"]
}

Post-response Script:

pm.test("Status code is 422 for string features", function () {
  pm.response.to.have.status(422);
});

pm.test("Error mentions type issue", function () {
  const data = pm.response.json();
  const detail = JSON.stringify(data.detail).toLowerCase();
  pm.expect(detail).to.include("type");
});

3.7 Error: Too Few Features (POST)

  1. Add to Error Handling folder
  2. Name: Too Few Features
  3. Body:
{
  "features": [1.0, 2.0]
}

Post-response Script:

pm.test("Status code is 422 for insufficient features", function () {
  pm.response.to.have.status(422);
});

3.8 Edge Case: All Zeros (POST)

  1. Add to Edge Cases folder
  2. Name: All Zero Features
  3. Body:
{
  "features": [0.0, 0.0, 0.0, 0.0, 0.0]
}

Post-response Script:

pm.test("Status code is 200 (zeros are valid)", function () {
  pm.response.to.have.status(200);
});

const data = pm.response.json();

pm.test("Prediction is still valid", function () {
  pm.expect(data.prediction).to.be.oneOf([0, 1]);
});

3.9 Edge Case: Negative Values (POST)

  1. Add to Edge Cases folder
  2. Name: Negative Features
  3. Body:
{
  "features": [-10.0, -5.0, -2.5, -1.0, -0.5]
}

Post-response Script:

pm.test("Status code is 200 (negatives are valid)", function () {
  pm.response.to.have.status(200);
});

pm.test("Prediction is valid", function () {
  const data = pm.response.json();
  pm.expect(data.prediction).to.be.oneOf([0, 1]);
});

3.10 Edge Case: Very Large Values (POST)

  1. Add to Edge Cases folder
  2. Name: Very Large Features
  3. Body:
{
  "features": [999999.0, 999999.0, 999999.0, 999999.0, 999999.0]
}

Post-response Script:

pm.test("API handles extreme values gracefully", function () {
  pm.expect(pm.response.code).to.be.oneOf([200, 400]);
});

Step 4 — Run the Collection

4.1 Using the Collection Runner

  1. Click the Run button (▶️) on the collection
  2. In the Runner configuration:
    • Environment: AI API - Local
    • Iterations: 1
    • Delay: 100 ms
  3. Click Run AI Prediction API Tests

4.2 Analyze Results

After the run completes, you'll see:

  • ✅ Green = test passed
  • ❌ Red = test failed
  • Total pass/fail counts
  • Response times for each request

If Tests Fail
  1. Check that your API is running (http://localhost:8000/health)
  2. Verify the environment is selected (top-right dropdown)
  3. Check the Console (bottom panel) for detailed error messages
  4. Ensure your API matches the expected response format

4.3 Run Multiple Iterations

Re-run with Iterations: 5 to stress-test your API. The random features request will generate different inputs each time.


Step 5 — Export the Collection

5.1 Export Collection

  1. Right-click on the collection → Export
  2. Choose Collection v2.1 format
  3. Save as postman/ai-prediction-api.postman_collection.json

5.2 Export Environment

  1. Go to Environments → click the three dots on AI API - Local
  2. Export
  3. Save as postman/local.postman_environment.json

5.3 Add to Version Control

project/
├── app/
├── tests/
├── postman/
│ ├── ai-prediction-api.postman_collection.json
│ └── local.postman_environment.json
└── ...

Step 6 — Automate with Newman

6.1 Install Newman

npm install -g newman

6.2 Run from Command Line

newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json
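Newman can also replay the multi-iteration stress run from Step 4.3 on the command line; -n sets the iteration count and --delay-request the pause between requests in milliseconds:

```shell
newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -n 5 --delay-request 100
```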

6.3 Run with HTML Report

npm install -g newman-reporter-htmlextra

newman run postman/ai-prediction-api.postman_collection.json \
  -e postman/local.postman_environment.json \
  -r htmlextra \
  --reporter-htmlextra-export reports/postman-report.html
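For the CI/CD goal from the objectives, here is a minimal sketch of a pipeline job, assuming GitHub Actions, a requirements.txt at the repo root, and the file paths from Step 5 (adapt names to your project):

```yaml
name: api-tests
on: [push]
jobs:
  postman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Start the API in the background, then give it a moment to boot.
      - run: uvicorn app.main:app --port 8000 &
      - run: sleep 3
      - run: npm install -g newman
      - run: |
          newman run postman/ai-prediction-api.postman_collection.json \
            -e postman/local.postman_environment.json
```

Newman exits non-zero when any assertion fails, so a failing test fails the job.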

6.4 Expected Newman Output

AI Prediction API Tests

❏ Health & Status
↳ Health Check
  GET http://localhost:8000/health [200 OK, 256B, 23ms]
  ✓ Status code is 200
  ✓ Response time is under 500ms
  ✓ Content-Type is JSON
  ✓ Status is healthy
  ✓ Model is loaded
  ✓ Model version is present

❏ Valid Predictions
↳ Single Prediction
  POST http://localhost:8000/api/v1/predict [200 OK, 178B, 89ms]
  ✓ Status code is 200
  ✓ Response time is under 1000ms
  ✓ Prediction is present
  ✓ Prediction is an integer
  ✓ Prediction is class 0 or 1
  ✓ Confidence is present and valid
  ✓ Model version matches

❏ Error Handling
↳ Empty Features
  POST http://localhost:8000/api/v1/predict [422 Unprocessable Entity, 312B, 8ms]
  ✓ Status code is 422 (validation error)
  ✓ Error detail is present

...

┌─────────────────────────┬──────────┬──────────┐
│                         │ executed │   failed │
├─────────────────────────┼──────────┼──────────┤
│              iterations │        1 │        0 │
├─────────────────────────┼──────────┼──────────┤
│                requests │       10 │        0 │
├─────────────────────────┼──────────┼──────────┤
│            test-scripts │       20 │        0 │
├─────────────────────────┼──────────┼──────────┤
│              assertions │       28 │        0 │
└─────────────────────────┴──────────┴──────────┘

Validation Checklist

Before moving to the next lab, verify:

  • Environment "AI API - Local" created with base_url and api_version
  • Collection with 4 folders: Health, Valid Predictions, Error Handling, Edge Cases
  • At least 10 requests created with test scripts
  • Collection Runner: all tests pass (✅ green)
  • Collection exported to JSON
  • Newman works from the command line

Summary of Created Tests

| # | Request | Method | Expected Status | Tests |
|---|---|---|---|---|
| 1 | Health Check | GET | 200 | Status, model loaded, version |
| 2 | Single Prediction | POST | 200 | Prediction, confidence, schema |
| 3 | Random Features | POST | 200 | Valid class, confidence range |
| 4 | Empty Features | POST | 422 | Error detail present |
| 5 | Missing Body | POST | 422 | Validation error |
| 6 | String Features | POST | 422 | Type error detail |
| 7 | Too Few Features | POST | 422 | Validation error |
| 8 | All Zeros | POST | 200 | Valid prediction |
| 9 | Negative Values | POST | 200 | Valid prediction |
| 10 | Very Large Values | POST | 200/400 | Graceful handling |

Well done!

You can now use Postman to test your API. Combined with pytest (TP7), you have two complementary approaches: pytest for CI/CD automation, Postman for exploration and documentation.