LIA - Complete AI Deployment Project
What is the LIA?
The LIA (Learning Integration Assessment) is the capstone project of the AI Model Deployment course. It represents 30% of your final grade and requires you to demonstrate mastery of every skill learned throughout the course by deploying a complete AI-powered service from scratch.
This is an individual project — you will work alone to design, build, test, document, and present a production-ready AI prediction service.
The LIA is designed to simulate a real-world professional scenario. In industry, AI engineers must deliver complete, working systems — not isolated notebooks. This project proves you can bridge the gap from model training to production deployment.
Complete Project Pipeline
Your LIA must cover the entire deployment lifecycle, from data preparation and model training through API development, testing, and explainability, to documentation and presentation.
Each stage maps directly to a module you studied during the course:
| Pipeline Stage | Course Module | Weight |
|---|---|---|
| Data & Model Training | Module 2 — Model Training & Evaluation | 20% |
| API Development | Module 3 — Building APIs for AI Models | 25% |
| Testing & Validation | Module 5 — Testing & Explainability | 15% |
| Explainability (LIME/SHAP) | Module 5 — Testing & Explainability | 15% |
| Documentation & Report | All Modules | 10% |
| Oral Presentation & Demo | All Modules | 15% |
Learning Objectives
By completing the LIA, you will demonstrate your ability to:
- Select and prepare a dataset appropriate for a supervised learning problem
- Train, evaluate, and compare at least two ML models using rigorous metrics
- Serialize the best model for production use
- Build a REST API (FastAPI or Flask) with proper validation, error handling, and documentation
- Test the API with pytest (unit + integration) and Postman
- Explain model predictions using LIME and/or SHAP
- Document the entire system with a professional technical report
- Present and defend your work in an oral presentation with live demo
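To make the training, comparison, and serialization objectives concrete, here is a hedged sketch using scikit-learn and joblib. The Iris dataset, the two candidate models, and the metric choices are placeholders; substitute your own project's data and models. The output path `models/best_model.pkl` matches the recommended repository structure.

```python
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Placeholder dataset; substitute your own project's data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train and compare at least two models, as the rubric requires.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=42),
}
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    scores[name] = {
        "accuracy": accuracy_score(y_test, preds),
        "f1_macro": f1_score(y_test, preds, average="macro"),
    }

# Serialize the best model (by accuracy here) for the API to load.
best_name = max(scores, key=lambda n: scores[n]["accuracy"])
Path("models").mkdir(exist_ok=True)
joblib.dump(candidates[best_name], "models/best_model.pkl")
```

Report all metrics for all candidates in your technical report, not just the winner; the comparison methodology is graded, not only the final score.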
Deliverables
You must submit the following artifacts:
Submission Checklist
| # | Deliverable | Format | Required |
|---|---|---|---|
| 1 | Git repository (code + model + tests) | GitHub / GitLab link | ✅ |
| 2 | Trained model file | .pkl, .joblib, or .onnx | ✅ |
| 3 | Working API with /predict, /health, /model-info | Python source | ✅ |
| 4 | Test suite (≥ 10 tests, coverage > 70%) | pytest files | ✅ |
| 5 | Postman collection | .json export | ✅ |
| 6 | Explainability analysis (LIME and/or SHAP) | Notebook or script + visuals | ✅ |
| 7 | README.md | Markdown | ✅ |
| 8 | Technical report (5-8 pages) | PDF | ✅ |
| 9 | Swagger / OpenAPI documentation | Auto-generated | ✅ |
| 10 | Slide deck | PDF or PPTX | ✅ |
| 11 | Dockerfile | Dockerfile | ⭐ Bonus |
Timeline and Milestones
The LIA spans 3 weeks (15 hours of class time + personal work). Follow this timeline to stay on track:
Detailed Milestone Breakdown
| Week | Milestone | Expected Output | Hours |
|---|---|---|---|
| Week 1 | M1 — Data & Models | Dataset selected, EDA complete, ≥ 2 models trained and compared, best model serialized | 5h |
| Week 2 | M2 — API & Testing | Working API with 3 endpoints, ≥ 10 tests passing, Postman collection, LIME/SHAP analysis | 5h |
| Week 3 | M3 — Documentation & Presentation | README, technical report, slide deck, rehearsed demo | 5h |
Many students spend all their time coding and rush the report the night before. The documentation and report account for 25% of your grade (10% report + 15% presentation). Budget time accordingly.
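For milestone M2's testing requirement, here is a hedged pytest sketch. The `predict_one` helper is a hypothetical stand-in for your own wrapper in `src/model.py`; the point is the test structure: one happy-path test and one edge-case test that expects a clear failure.

```python
import pytest


def predict_one(features):
    """Hypothetical stand-in for your prediction wrapper in src/model.py."""
    if len(features) != 4:
        raise ValueError("expected exactly 4 features")
    return 0 if sum(features) < 10 else 1


def test_predict_returns_valid_class():
    # Happy path: a well-formed input yields a known class label.
    assert predict_one([1.0, 2.0, 3.0, 4.0]) in (0, 1)


def test_predict_rejects_wrong_feature_count():
    # Edge case: malformed input must fail loudly, not silently.
    with pytest.raises(ValueError):
        predict_one([1.0, 2.0])
```

To check the coverage requirement, run the suite with the pytest-cov plugin, e.g. `pytest --cov=src`, and confirm the reported total exceeds 70%.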
Team Organization
This is an individual project. You are solely responsible for all deliverables.
| Aspect | Details |
|---|---|
| Team size | 1 (individual) |
| Collaboration | You may discuss concepts with classmates, but all code and writing must be your own |
| AI tools | You may use GitHub Copilot, ChatGPT, or Cursor as assistants, but you must understand and explain every line of code |
| Plagiarism | Code similarity will be checked; identical submissions will receive 0 |
Using AI tools is allowed and encouraged, but you must be able to explain every decision during your oral presentation. If you cannot explain your code or methodology during Q&A, points will be deducted regardless of code quality.
Grading Rubric — Overview
| Component | Weight | Key Criteria |
|---|---|---|
| Model Training | 20% | Dataset choice, ≥ 2 models, ≥ 3 metrics, serialization, methodology |
| API Service | 25% | 3 endpoints, validation, error handling, Swagger docs, clean code |
| Testing | 15% | ≥ 10 tests, coverage > 70%, Postman collection, edge cases |
| Explainability | 15% | LIME and/or SHAP, ≥ 3 visualizations, interpretation |
| Documentation & Report | 10% | README, technical report (5-8 pages), clarity, structure |
| Oral Presentation | 15% | 15-min presentation, live demo, Q&A, communication |
| Total | 100% | |
Grading Scale
| Grade | Range | Description |
|---|---|---|
| A | 90-100% | Exceptional work. Production-quality code, insightful analysis, polished presentation. |
| B | 80-89% | Strong work. All requirements met with good quality. Minor issues. |
| C | 70-79% | Satisfactory. Core requirements met but lacking depth or polish. |
| D | 60-69% | Below expectations. Several requirements missing or poorly executed. |
| F | < 60% | Unsatisfactory. Major components missing or non-functional. |
Detailed Rubric
| Criterion | Excellent (90-100%) | Good (80-89%) | Satisfactory (70-79%) | Insufficient (< 70%) |
|---|---|---|---|---|
| Model Training | ≥ 3 models compared, thorough EDA, optimal hyperparameters, clear methodology | 2 models compared, good EDA, reasonable hyperparameters | 2 models but minimal comparison, basic EDA | 1 model only, no EDA, poor methodology |
| API Service | Clean architecture, all endpoints work, robust validation, comprehensive error handling | All endpoints work, basic validation, some error handling | Most endpoints work, minimal validation | Endpoints missing or broken, no validation |
| Testing | > 80% coverage, edge cases tested, clear test organization | > 70% coverage, good test variety | > 60% coverage, basic tests only | < 60% coverage or < 10 tests |
| Explainability | LIME + SHAP, ≥ 5 visualizations, deep interpretation | LIME or SHAP, ≥ 3 visualizations, good interpretation | One method, 2-3 visualizations, surface-level interpretation | No explainability analysis |
| Report | Professional quality, complete sections, clear writing | All sections present, minor gaps | Most sections present, some unclear writing | Missing sections, poorly written |
| Presentation | Confident delivery, excellent demo, handles Q&A well | Good delivery, working demo, answers most questions | Adequate delivery, demo works partially | Poor delivery, demo fails, cannot answer questions |
Bonus Points
Earn up to 5 bonus points by going beyond the requirements:
| Bonus | Points | Description |
|---|---|---|
| Docker deployment | +2 | Working Dockerfile that runs the complete service |
| CI/CD pipeline | +1 | GitHub Actions workflow running tests automatically |
| Advanced monitoring | +1 | Logging, request tracking, or performance metrics |
| Additional explainability | +1 | Both LIME and SHAP with comparative analysis |
Project Repository Structure
Your Git repository should follow this recommended structure:
```
my-lia-project/
├── README.md
├── requirements.txt
├── Dockerfile               # Bonus
├── .gitignore
├── data/
│   └── dataset.csv          # Or link in README
├── notebooks/
│   ├── 01_eda.ipynb
│   ├── 02_model_training.ipynb
│   └── 03_explainability.ipynb
├── src/
│   ├── __init__.py
│   ├── app.py               # FastAPI / Flask application
│   ├── model.py             # Model loading and prediction logic
│   ├── schemas.py           # Pydantic models (if FastAPI)
│   └── utils.py             # Helper functions
├── models/
│   └── best_model.pkl       # Serialized model
├── tests/
│   ├── __init__.py
│   ├── test_model.py
│   ├── test_api.py
│   └── test_schemas.py
├── postman/
│   └── collection.json
├── docs/
│   └── report.pdf
└── presentation/
    └── slides.pdf
```
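For the Docker bonus, a minimal Dockerfile sketch is shown below. It assumes a FastAPI `app` object in `src/app.py` and pinned dependencies (including `uvicorn`) in `requirements.txt`; adjust the Python version and entry point to match your project.

```dockerfile
# Sketch only: assumes src/app.py exposes a FastAPI object named `app`.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "src.app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run locally with `docker build -t lia-service .` followed by `docker run -p 8000:8000 lia-service`, and verify the `/health` endpoint responds before your demo.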
Don't try to build everything at once. Start with a minimal working pipeline (data → model → simple API), then iteratively add testing, explainability, and documentation. Commit early and commit often.
Getting Help
| Resource | When to Use |
|---|---|
| Course materials | Review modules 1-5 for specific techniques |
| Office hours | Ask your instructor during scheduled office hours |
| Lab sessions | Use lab time to work on your project |
| Classmates | Discuss concepts (but write your own code) |
| AI assistants | Use for debugging and learning (cite usage in report) |
Key Dates
- Project start: Week 13
- Milestone M1 check-in: End of Week 13
- Milestone M2 check-in: End of Week 14
- Final submission: Week 15 (code + report + Postman collection)
- Oral presentations: Week 15 (scheduled time slots)
Quick Navigation
- 06.1 LIA Overview: deploy a complete AI service end-to-end with training, API, testing, explainability, and documentation
- 06.2 Project Requirements: comprehensive requirements, rubrics, and checklists for each LIA deliverable
- 06.3 Project Ideas: curated project ideas with datasets, expected models, and difficulty levels for your LIA
- 06.4 Report Template: complete template and guidelines for writing your LIA technical report (5-8 pages)
- 06.5 Presentation Guide: structure, tips, rubric, and preparation guide for your 15-minute LIA oral presentation