
LIA - Complete AI Deployment Project

Project · 15 hours · Advanced

What is the LIA?

The LIA (Learning Integration Assessment) is the capstone project of the AI Model Deployment course. It represents 30% of your final grade and requires you to demonstrate mastery of every skill learned throughout the course by deploying a complete AI-powered service from scratch.

This is an individual project — you will work alone to design, build, test, document, and present a production-ready AI prediction service.

Why a LIA?

The LIA is designed to simulate a real-world professional scenario. In industry, AI engineers must deliver complete, working systems — not isolated notebooks. This project proves you can bridge the gap from model training to production deployment.


Complete Project Pipeline

Your LIA must cover the entire deployment lifecycle, from data preparation through oral presentation.

Each stage maps directly to a module you studied during the course:

| Pipeline Stage | Course Module | Weight |
|---|---|---|
| Data & Model Training | Module 2 — Model Training & Evaluation | 20% |
| API Development | Module 3 — Building APIs for AI Models | 25% |
| Testing & Validation | Module 5 — Testing & Explainability | 15% |
| Explainability (LIME/SHAP) | Module 5 — Testing & Explainability | 15% |
| Documentation & Report | All Modules | 10% |
| Oral Presentation & Demo | All Modules | 15% |

Learning Objectives

By completing the LIA, you will demonstrate your ability to:

  1. Select and prepare a dataset appropriate for a supervised learning problem
  2. Train, evaluate, and compare at least two ML models using rigorous metrics
  3. Serialize the best model for production use
  4. Build a REST API (FastAPI or Flask) with proper validation, error handling, and documentation
  5. Test the API with pytest (unit + integration) and Postman
  6. Explain model predictions using LIME and/or SHAP
  7. Document the entire system with a professional technical report
  8. Present and defend your work in an oral presentation with live demo
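
Objectives 2 and 3 can be illustrated with a minimal sketch. It assumes scikit-learn and uses a built-in toy dataset; your own dataset, candidate models, and metrics will differ.

```python
# Sketch: compare two candidate models and serialize the winner.
# Toy dataset only -- substitute your own data and metrics.
import pickle

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

candidates = {
    "logreg": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Evaluate each candidate on held-out data with at least two metrics.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    scores[name] = {
        "accuracy": accuracy_score(y_test, preds),
        "f1": f1_score(y_test, preds),
    }

best_name = max(scores, key=lambda n: scores[n]["f1"])

# Serialize the best model so the API can load it later.
with open("best_model.pkl", "wb") as fh:
    pickle.dump(candidates[best_name], fh)
```

In your report, justify the winning model with the full metric table, not just a single score.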

Deliverables

You must submit the following artifacts:

Submission Checklist

| # | Deliverable | Format | Required |
|---|---|---|---|
| 1 | Git repository (code + model + tests) | GitHub / GitLab link | Yes |
| 2 | Trained model file | .pkl, .joblib, or .onnx | Yes |
| 3 | Working API with /predict, /health, /model-info | Python source | Yes |
| 4 | Test suite (≥ 10 tests, coverage > 70%) | pytest files | Yes |
| 5 | Postman collection | .json export | Yes |
| 6 | Explainability analysis (LIME and/or SHAP) | Notebook or script + visuals | Yes |
| 7 | README.md | Markdown | Yes |
| 8 | Technical report (5-8 pages) | PDF | Yes |
| 9 | Swagger / OpenAPI documentation | Auto-generated | Yes |
| 10 | Slide deck | PDF or PPTX | Yes |
| 11 | Dockerfile | Dockerfile | ⭐ Bonus |

Timeline and Milestones

The LIA spans 3 weeks (15 hours of class time + personal work). Follow this timeline to stay on track:

Detailed Milestone Breakdown

| Week | Milestone | Expected Output | Hours |
|---|---|---|---|
| Week 1 | M1 — Data & Models | Dataset selected, EDA complete, ≥ 2 models trained and compared, best model serialized | 5h |
| Week 2 | M2 — API & Testing | Working API with 3 endpoints, ≥ 10 tests passing, Postman collection, LIME/SHAP analysis | 5h |
| Week 3 | M3 — Documentation & Presentation | README, technical report, slide deck, rehearsed demo | 5h |

Don't Leave Documentation for the End

Many students spend all their time coding and rush the report the night before. The documentation and report account for 25% of your grade (10% report + 15% presentation). Budget time accordingly.


Team Organization

This is an individual project. You are solely responsible for all deliverables.

| Aspect | Details |
|---|---|
| Team size | 1 (individual) |
| Collaboration | You may discuss concepts with classmates, but all code and writing must be your own |
| AI tools | You may use GitHub Copilot, ChatGPT, or Cursor as assistants, but you must understand and explain every line of code |
| Plagiarism | Code similarity will be checked; identical submissions will receive 0 |

Academic Integrity

Using AI tools is allowed and encouraged, but you must be able to explain every decision during your oral presentation. If you cannot explain your code or methodology during Q&A, points will be deducted regardless of code quality.


Grading Rubric — Overview

| Component | Weight | Key Criteria |
|---|---|---|
| Model Training | 20% | Dataset choice, ≥ 2 models, ≥ 3 metrics, serialization, methodology |
| API Service | 25% | 3 endpoints, validation, error handling, Swagger docs, clean code |
| Testing | 15% | ≥ 10 tests, coverage > 70%, Postman collection, edge cases |
| Explainability | 15% | LIME and/or SHAP, ≥ 3 visualizations, interpretation |
| Documentation & Report | 10% | README, technical report (5-8 pages), clarity, structure |
| Oral Presentation | 15% | 15-min presentation, live demo, Q&A, communication |
| **Total** | **100%** | |

Grading Scale

| Grade | Range | Description |
|---|---|---|
| A | 90-100% | Exceptional work. Production-quality code, insightful analysis, polished presentation. |
| B | 80-89% | Strong work. All requirements met with good quality. Minor issues. |
| C | 70-79% | Satisfactory. Core requirements met but lacking depth or polish. |
| D | 60-69% | Below expectations. Several requirements missing or poorly executed. |
| F | < 60% | Unsatisfactory. Major components missing or non-functional. |

Detailed Rubric

| Criterion | Excellent (90-100%) | Good (80-89%) | Satisfactory (70-79%) | Insufficient (< 70%) |
|---|---|---|---|---|
| Model Training | ≥ 3 models compared, thorough EDA, optimal hyperparameters, clear methodology | 2 models compared, good EDA, reasonable hyperparameters | 2 models but minimal comparison, basic EDA | 1 model only, no EDA, poor methodology |
| API Service | Clean architecture, all endpoints work, robust validation, comprehensive error handling | All endpoints work, basic validation, some error handling | Most endpoints work, minimal validation | Endpoints missing or broken, no validation |
| Testing | > 80% coverage, edge cases tested, clear test organization | > 70% coverage, good test variety | > 60% coverage, basic tests only | < 60% coverage or < 10 tests |
| Explainability | LIME + SHAP, ≥ 5 visualizations, deep interpretation | LIME or SHAP, ≥ 3 visualizations, good interpretation | One method, 2-3 visualizations, surface-level interpretation | No explainability analysis |
| Report | Professional quality, complete sections, clear writing | All sections present, minor gaps | Most sections present, some unclear writing | Missing sections, poorly written |
| Presentation | Confident delivery, excellent demo, handles Q&A well | Good delivery, working demo, answers most questions | Adequate delivery, demo works partially | Poor delivery, demo fails, cannot answer questions |
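
To make the testing criteria concrete, here is a sketch of the kind of unit tests the rubric rewards. `validate_features` is an illustrative helper, not part of any required API; your real tests will target your own `src/` modules.

```python
# Sketch: pytest-style unit tests for a small hypothetical validation helper.
import pytest


def validate_features(features):
    """Reject payloads the model cannot handle; normalize the rest."""
    if not isinstance(features, list) or not features:
        raise ValueError("features must be a non-empty list")
    if not all(isinstance(x, (int, float)) for x in features):
        raise ValueError("features must be numeric")
    return [float(x) for x in features]


def test_valid_payload_is_normalized():
    assert validate_features([1, 2.5]) == [1.0, 2.5]


def test_empty_payload_is_rejected():
    with pytest.raises(ValueError):
        validate_features([])


def test_non_numeric_payload_is_rejected():
    with pytest.raises(ValueError):
        validate_features(["a", "b"])
```

Note how the edge cases (empty and non-numeric input) get their own tests; with the pytest-cov plugin, `pytest --cov=src` reports the coverage figure the rubric asks for.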

Bonus Points

Earn up to 5 bonus points by going beyond the requirements:

| Bonus | Points | Description |
|---|---|---|
| Docker deployment | +2 | Working Dockerfile that runs the complete service |
| CI/CD pipeline | +1 | GitHub Actions workflow running tests automatically |
| Advanced monitoring | +1 | Logging, request tracking, or performance metrics |
| Additional explainability | +1 | Both LIME and SHAP with comparative analysis |

Project Repository Structure

Your Git repository should follow this recommended structure:

```
my-lia-project/
├── README.md
├── requirements.txt
├── Dockerfile              # Bonus
├── .gitignore
├── data/
│   └── dataset.csv         # Or link in README
├── notebooks/
│   ├── 01_eda.ipynb
│   ├── 02_model_training.ipynb
│   └── 03_explainability.ipynb
├── src/
│   ├── __init__.py
│   ├── app.py              # FastAPI / Flask application
│   ├── model.py            # Model loading and prediction logic
│   ├── schemas.py          # Pydantic models (if FastAPI)
│   └── utils.py            # Helper functions
├── models/
│   └── best_model.pkl      # Serialized model
├── tests/
│   ├── __init__.py
│   ├── test_model.py
│   ├── test_api.py
│   └── test_schemas.py
├── postman/
│   └── collection.json
├── docs/
│   └── report.pdf
└── presentation/
    └── slides.pdf
```
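
If you attempt the Docker bonus, a sketch of the Dockerfile for the structure above might look like this. It assumes a FastAPI app in `src/app.py` served by uvicorn; adjust paths, the Python version, and the entrypoint to your project.

```dockerfile
# Sketch: containerize the prediction service (bonus deliverable).
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy only what the service needs at runtime.
COPY src/ src/
COPY models/ models/

EXPOSE 8000
CMD ["uvicorn", "src.app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run it with `docker build -t lia-api .` followed by `docker run -p 8000:8000 lia-api`, then verify `/health` responds.
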
Start Early, Iterate Often

Don't try to build everything at once. Start with a minimal working pipeline (data → model → simple API), then iteratively add testing, explainability, and documentation. Commit early and commit often.


Getting Help

| Resource | When to Use |
|---|---|
| Course materials | Review modules 1-5 for specific techniques |
| Office hours | Ask your instructor during scheduled office hours |
| Lab sessions | Use lab time to work on your project |
| Classmates | Discuss concepts (but write your own code) |
| AI assistants | Use for debugging and learning (cite usage in report) |

Important Dates
  • Project start: Week 13
  • Milestone M1 check-in: End of Week 13
  • Milestone M2 check-in: End of Week 14
  • Final submission: Week 15 (code + report + Postman collection)
  • Oral presentations: Week 15 (scheduled time slots)
