# Assess and Baseline

Conduct a comprehensive AI discovery and risk assessment to establish a quantitative baseline for QAG implementation.

## Phase 1 Implementation Progress (Weeks 1-12)
- **AI Inventory:** Discover and catalog all AI/ML assets across the enterprise (a code-scanning sketch follows this list).
- **Risk Audit:** Comprehensive risk assessment using the QAG taxonomy.
- **Tiger Team:** Cross-functional team formation and pilot selection.
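Code scanning is one of the discovery methods cited in the inventory below. A minimal sketch of such a scan, assuming a Python codebase; the framework list, regex, and repo root are illustrative, not the assessment's actual tooling:

```python
import re
from pathlib import Path

# Frameworks to flag; extend as needed. List and repo root are assumptions.
ML_IMPORT = re.compile(
    r"^\s*(?:import|from)\s+(sklearn|tensorflow|torch|xgboost|lightgbm|keras)\b",
    re.MULTILINE,
)

def scan_repo(root: str) -> dict[str, set[str]]:
    """Map each Python file under root to the ML frameworks it imports."""
    hits: dict[str, set[str]] = {}
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        found = set(ML_IMPORT.findall(text))
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for file, frameworks in scan_repo(".").items():
        print(f"{file}: {', '.join(sorted(frameworks))}")
```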
## Assessment Summary

| Measure | Value |
|---|---|
| Current governance maturity | 3.2 out of 5.0 |
| Critical-risk models | 23 high-risk |
| Shadow AI discovered | 41 ungoverned |
| Compliance gap | $2.4M risk exposure |
## AI Model Inventory & Risk Heat Map

| Model Name | Business Unit | Model Type | Risk Score | Status | Discovery Method |
|---|---|---|---|---|---|
| Credit-Risk-Scorer-v3 | Financial Services | Classification | High (8.2) | Production | Code Scanning |
| Customer-Churn-Predictor | Marketing | Classification | Medium (5.7) | Staging | Survey Response |
| Fraud-Detection-Engine | Security | Anomaly Detection | High (7.9) | Production | Cloud Discovery |
| Product-Recommender-v2 | E-commerce | Collaborative Filtering | Low (3.2) | Production | Code Scanning |
| Sentiment-Analyzer | Customer Service | NLP Classification | Medium (4.8) | Development | Survey Response |
| Price-Optimization-Model | Sales | Regression | Medium (6.1) | Production | Code Scanning |
| Resume-Screening-AI | Human Resources | Classification | High (8.7) | Pilot | Survey Response |
| Supply-Chain-Optimizer | Operations | Optimization | Low (2.9) | Production | Cloud Discovery |
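The High/Medium/Low labels line up with score bands (every High row scores 7.0 or above, every Low row below 4.0), and a risk scoring engine is slated for in-house build later in this section. A sketch of a weighted composite score under those inferred bands; the factors and weights are illustrative assumptions:

```python
# Factor weights are illustrative assumptions; the band cutoffs (>= 7.0 High,
# >= 4.0 Medium, else Low) are inferred from the inventory table above.
FACTOR_WEIGHTS = {
    "data_sensitivity": 0.30,
    "decision_impact": 0.30,
    "regulatory_exposure": 0.25,
    "model_opacity": 0.15,
}

def risk_score(factors: dict[str, float]) -> float:
    """Composite 0-10 risk score from per-factor ratings (each 0-10)."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def risk_band(score: float) -> str:
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

score = risk_score({
    "data_sensitivity": 9, "decision_impact": 9,
    "regulatory_exposure": 8, "model_opacity": 6,
})
print(f"{risk_band(score)} ({score:.1f})")  # -> High (8.3)
```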
### Risk Distribution

Of the 247 models inventoried: 23 high risk, 89 medium risk, 135 low risk.

### Discovery Methods

[Chart: models by discovery method, covering code scanning, survey response, and cloud discovery]
## Baseline Metrics Establishment

### Current State Assessment

| Metric | Current State | Target | Gap Severity |
|---|---|---|---|
| Model Documentation Coverage | 23% | 100% | Large |
| Bias Testing Coverage | 12% | 100% | Critical |
| Performance Monitoring | 34% | 95% | Large |
| Explainability Implementation | 8% | 80% | Critical |
| Risk Assessment Completion | 66% | 100% | Medium |
| Governance Automation | 5% | 85% | Critical |
| Audit Trail Completeness | 41% | 100% | Large |
| Stakeholder Training | 18% | 90% | Critical |
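The severity labels appear to key off current coverage rather than the raw gap: every Critical row sits below 20% coverage, every Large row between 20% and 50%, and the lone Medium row above 50%. A short sketch of that inferred rule:

```python
# Thresholds are inferred from the table above, not a documented standard.
def gap_severity(current_pct: float) -> str:
    if current_pct < 20:
        return "Critical"
    if current_pct <= 50:
        return "Large"
    return "Medium"

# A few rows from the baseline table as (current %, target %) pairs.
baseline = {
    "Model Documentation Coverage": (23, 100),
    "Bias Testing Coverage": (12, 100),
    "Risk Assessment Completion": (66, 100),
}
for metric, (current, target) in baseline.items():
    print(f"{metric}: {target - current} pts to target, {gap_severity(current)}")
```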
## Pilot Project Selection & Criteria

### Selection Criteria

- High business visibility
- Manageable risk scope
- Clear success metrics
- Executive sponsor
- 3-6 month timeline
- Cross-functional impact

### Top Pilot Candidates

- **Credit Risk Scoring Model:** high visibility, regulated, clear ROI metrics
- **Customer Recommendation Engine:** medium visibility, manageable complexity
- **HR Resume Screening:** high bias risk, good learning opportunity
## Technology Stack Assessment: Build vs. Buy Analysis

| Component | Build Score | Buy Score | Recommendation | Rationale |
|---|---|---|---|---|
| Governance Dashboard | 6.2 | 8.4 | Buy | Mature vendor solutions available, faster time-to-value |
| Risk Scoring Engine | 8.1 | 5.7 | Build | Highly organization-specific, competitive advantage |
| Model Registry | 4.3 | 8.9 | Buy | Standard functionality, proven solutions (MLflow) |
| Policy-as-Code Engine | 7.8 | 6.2 | Build | Custom rules needed, integration complexity |
| Bias Detection Tools | 3.4 | 9.1 | Buy | Excellent open source options (FairLearn, AIF360) |
| Explainability Framework | 4.1 | 8.6 | Buy | Mature libraries (SHAP, LIME), standardized |
| Monitoring & Alerting | 5.8 | 7.9 | Buy | Infrastructure complexity, vendor expertise |
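As a sketch of the open-source leverage recommended above, the bias and explainability checks can run together as a pre-registration gate. This assumes scikit-learn, fairlearn, and shap are installed; the synthetic data, sensitive attribute, and 0.10 tolerance are illustrative, and the commented-out MLflow call only marks where registration would slot in:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_difference
import shap

# Synthetic stand-in data; in practice use the model's validation set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
group = rng.integers(0, 2, size=500)  # hypothetical sensitive attribute

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Bias check (FairLearn): 0.0 means equal selection rates across groups.
dpd = demographic_parity_difference(y, pred, sensitive_features=group)

# Explainability check (SHAP): global importance as mean |SHAP value|.
explainer = shap.Explainer(model, X)
importance = np.abs(explainer(X).values).mean(axis=0)

print(f"demographic parity difference: {dpd:.3f}")
print(f"mean |SHAP| per feature: {np.round(importance, 3)}")

if dpd < 0.10:  # tolerance is an illustrative policy choice
    print("bias gate passed; eligible for registry promotion")
    # e.g. mlflow.register_model("runs:/<run_id>/model", "Credit-Risk-Scorer-v3")
```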
## Recommended Hybrid Approach

- **Buy (core platform):** governance dashboard, monitoring
- **Build (custom logic):** risk scoring, policy rules
- **Leverage (open source):** MLflow, FairLearn, SHAP

## Key Vendor Candidates