Take Assessment

Risk Score in Minutes

GenAI Risk Scorecard Assistant

McKinsey 5/3/1 risk scoring, automated in under 5 minutes

Get the Scorecard Assistant

AI Governance & Risk Control Stack



Features List

5

Weighted Risk Factors

3

Control Tiers

5 Min

To a Governance-Grade Scorecard

Why Risk and Governance Teams Use This CustomGPT

A Score, Not a Judgment Call

Every GenAI use case gets a numeric risk score out of 100, calculated using McKinsey's weighted 5/3/1 framework. Customer Exposure and HITL Oversight each carry 25% of the score. Financial Impact and Complexity carry 20% each. Development Stage carries 10%. You get a defensible number, not a subjective tier assignment based on gut feel.

Red, Amber, or Green in 5 Minutes

Score 70 or above and the use case routes to joint committee review. Between 40 and 69, standard controls apply with targeted enhancements. Below 40, baseline controls are sufficient. The Advisor tells you which tier applies, which factors drove the score, and which of the four control layers each recommendation falls under — in a single conversation.

Audit Trail in Every Output

Each scorecard includes a timestamp and GPT version in the footer for audit traceability. The output header states whether the recommendation is based on your own governance policy or the McKinsey baseline. Every control recommendation states which specific factor triggered it. Your governance team has a documented, traceable record from day one.

Portfolio View Across Use Cases

Run the Advisor across multiple use cases and it aggregates them into a portfolio summary table — use case name, owner, risk score, tier, next review date, third-party flag, and controls overdue. Score changes of more than 10 points since the last assessment are flagged automatically. Your AI risk committee sees the full picture, not individual snapshots.

Adapts to Your Own Policies

Upload your own AI governance policy document and the Advisor uses it as the primary source for control recommendations. If no policy is available, it defaults to the McKinsey baseline logic. Either way, every recommendation is evidence-tagged and the source is cited in the output header. The GPT never invents controls or policies.

What You Get

  • CustomGPT Design Blueprint (PDF)

    The complete build specification: GPT name, full description, and 9-section system instructions covering input collection, scoring framework, control mapping, third-party assessment, portfolio management, knowledge hierarchy, output format, tone, and quality rules. The recommended model (GPT-5) and enabled capabilities (Code Interpreter and Data Analysis) are specified.

  • Output A: Use-Case Scorecard Report

    A structured, governance-grade report for each use case covering: use case name and owner, numeric risk score out of 100, Red/Amber/Green tier, factor-by-factor breakdown, dominant risk drivers, recommended controls across all four control layers, committee routing instructions, third-party vendor TCI summary, next review due date, and a one-line executive summary finding.

  • Output B: Portfolio Summary Table

    A structured table aggregating all assessed use cases with columns for Use Case, Owner, Risk Score, Tier, Next Review, Third-Party flag, and Controls Overdue status. Timestamp and GPT version appear in the footer for audit traceability. Score changes exceeding 10 points since the prior assessment are flagged automatically for risk committee attention.

  • McKinsey 5/3/1 Scoring Framework

    The complete weighted scoring methodology: 5 factors (Customer Exposure 25%, HITL Oversight 25%, Financial Impact 20%, Model/Data/Tech Complexity 20%, Development Stage 10%), a 5/3/1 scale normalized to 1.0/0.6/0.2, and a composite formula of Σ(weight × normalized factor) × 100. Tier thresholds: Red at 70+, Amber at 40-69, Green below 40.
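As a minimal illustration, the composite formula and tier thresholds can be sketched in Python. The weights and the 1.0/0.6/0.2 normalization come from the framework description; the dictionary keys and function names are illustrative, not the product's implementation:

```python
# Weights and 5/3/1 normalization as described in the framework.
WEIGHTS = {
    "customer_exposure": 0.25,
    "hitl_oversight": 0.25,
    "financial_impact": 0.20,
    "complexity": 0.20,
    "development_stage": 0.10,
}
NORMALIZED = {5: 1.0, 3: 0.6, 1: 0.2}  # 5/3/1 scale -> 1.0/0.6/0.2

def composite_score(factors: dict) -> float:
    # Composite = sum(weight * normalized factor) * 100
    return sum(WEIGHTS[f] * NORMALIZED[v] for f, v in factors.items()) * 100

def tier(score: float) -> str:
    # Red at 70+, Amber at 40-69, Green below 40
    if score >= 70:
        return "Red"
    if score >= 40:
        return "Amber"
    return "Green"
```

A use case scoring 5 on every factor comes out at 100 (Red); all 3s gives 60 (Amber); all 1s gives 20 (Green).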

  • 4-Layer Control Mapping Matrix

    Controls mapped to risk factors across four layers: Business Controls (e.g., joint MRM/Legal/Privacy/Cyber review for high customer exposure), Procedural Controls (e.g., post-release validation for HITL level 3+), Manual Controls (e.g., extra review cycles, golden question tests), and Automated Controls (e.g., PII scrubbing, prompt logging, eval automation). Each recommendation cites the triggering factor.

  • Third-Party Vendor TCI Assessment

    When a third-party LLM or vendor is used, the Advisor computes a Comfort Index score using the same TCI formula as AICFO-010: (0.5 x Documentation Completeness) + (0.3 x Transparency Score) + (0.2 x Regulatory Alignment). Buy at 0.75+, Hold at 0.50-0.74, No-Buy below 0.50. Missing model cards, security certifications, or subprocessor lists are flagged automatically.
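The TCI arithmetic is simple enough to sketch directly. This is a minimal illustration, assuming each input is a score between 0 and 1; the parameter and function names are assumptions:

```python
def tci(documentation: float, transparency: float, regulatory: float) -> float:
    # (0.5 x Documentation Completeness) + (0.3 x Transparency Score)
    # + (0.2 x Regulatory Alignment)
    return 0.5 * documentation + 0.3 * transparency + 0.2 * regulatory

def tci_verdict(score: float) -> str:
    # Buy at 0.75+, Hold at 0.50-0.74, No-Buy below 0.50
    if score >= 0.75:
        return "Buy"
    if score >= 0.50:
        return "Hold"
    return "No-Buy"
```

For example, a vendor with complete documentation (1.0) but middling transparency (0.5) and regulatory alignment (0.5) scores 0.5 + 0.15 + 0.1 = 0.75, landing at the Buy threshold; documentation carries half the weight by design.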

  • Logical Consistency Validation

    Built-in validation rules catch contradictory inputs before scoring: all mandatory fields must be completed, accepted factor values are 1, 3, or 5 only, and logical conflicts are flagged (e.g., a Full Production stage cannot coexist with an Internal-Only Exposure profile). The Advisor asks up to 3 clarifying questions if inputs are incomplete or contradictory.
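The validation rules amount to a few checks that are easy to picture in code. Field names here are assumptions for illustration; the GPT's actual mandatory-input list is defined in the blueprint:

```python
FACTOR_FIELDS = ["customer_exposure", "hitl_oversight",
                 "financial_impact", "complexity", "development_stage"]

def validate(inputs: dict) -> list[str]:
    """Return a list of issues; an empty list means the inputs pass."""
    issues = []
    for field in FACTOR_FIELDS:
        if field not in inputs:
            issues.append(f"missing mandatory field: {field}")
        elif inputs[field] not in (1, 3, 5):
            issues.append(f"{field} must be 1, 3, or 5, got {inputs[field]}")
    # Logical-conflict rule from the text: a Full Production stage (5)
    # cannot coexist with an Internal-Only exposure profile (1).
    if inputs.get("development_stage") == 5 and inputs.get("customer_exposure") == 1:
        issues.append("conflict: Full Production with Internal-Only exposure")
    return issues
```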

  • Portfolio Re-Assessment Triggers

    The Advisor tracks when reassessment is due: every 3 months on a scheduled basis, or immediately when a model, data source, or vendor changes. Score changes greater than 10 points are highlighted in the portfolio summary. Trend insights for board and risk committee reviews are included in portfolio outputs.
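As a rough sketch of the trigger logic, assuming the three-month interval is approximated as 90 days (function names are illustrative):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # "every 3 months", approximated

def reassessment_due(last_assessed: date, changed: bool, today: date) -> bool:
    # Due on schedule, or immediately when a model, data source,
    # or vendor changes.
    return changed or (today - last_assessed) >= REVIEW_INTERVAL

def flag_score_change(prior: float, current: float) -> bool:
    # Changes greater than 10 points are highlighted in the portfolio summary.
    return abs(current - prior) > 10
```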

When to Use This CustomGPT

Scoring a new GenAI use case before deployment

A business unit wants to deploy a customer-facing AI chatbot. Before sign-off, the AI Risk Officer runs it through the Scorecard Assistant. The GPT collects 8 mandatory inputs, computes a composite risk score, assigns a Red/Amber/Green tier, maps controls across four layers, and produces an auditable scorecard — all in a single conversation, in under 5 minutes.

Preparing a risk summary for the AI risk committee

Your risk committee meets monthly and needs a structured view of all active GenAI use cases. Run each use case through the Advisor and request the portfolio summary output. The table gives the committee a single view of every use case — score, tier, next review date, third-party flags, and overdue controls — formatted for a risk committee agenda.

Standardizing risk assessment across business units

Different teams are evaluating AI risk with different criteria. The Scorecard Assistant gives every business unit the same McKinsey 5/3/1 methodology, the same factor weights, the same tier thresholds, and the same output format. Risk assessments are now comparable across units, and the governance team can aggregate them into a consistent portfolio view.

Reassessing a use case after a model or vendor change

Your AI provider has released a new model version, or you've switched vendors. The Advisor flags this as a reassessment trigger. Re-run the use case with updated inputs. The new score is compared to the prior assessment — any change exceeding 10 points is highlighted, and updated control recommendations are generated based on the revised inputs.

Uploading your own governance policy as the control source

Your organization has its own AI governance framework — the GenAI Governance Policy (AICFO-007) or an internal equivalent. Upload it to the Advisor before scoring. The GPT treats it as the primary source for control recommendations, cites it in every output header, and only falls back to McKinsey baseline logic for any gaps your policy doesn't cover.

How the McKinsey 5/3/1 Scoring Works

| Risk Factor | Weight | Score 5 (High) | Score 1 (Low) | Triggers When Score=5 |
| --- | --- | --- | --- | --- |
| Customer Exposure | 25% | Direct customer-facing output | Internal use only | Joint MRM/Legal/Privacy/Cyber review, mandatory HITL, PII scrubbing |
| HITL Oversight | 25% | No human review in workflow | Full human review at every step | Define review roles, post-release validation, extra review cycle, prompt logging |
| Financial Impact | 20% | High financial / regulatory consequence | Minimal financial exposure | Executive sign-off required, change control log, golden question tests, eval automation |
| Model/Data/Tech Complexity | 20% | Complex model, sensitive data, novel tech | Simple model, public data | AI governance owner assigned, model version tracking, expert review, vulnerability scans |
| Development Stage | 10% | Full production deployment | Idea stage only | Full validation and committee approval path applies |
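To make the "Triggers When Score=5" column concrete, the factor-to-control mapping can be pictured as a simple lookup keyed by factor. This is a simplified sketch using a subset of the controls named above; the layer labels are illustrative assignments against the 4-layer model, and the full matrix ships with the product:

```python
# Subset of the trigger mapping from the table above. Illustrative only.
CONTROLS_WHEN_5 = {
    "customer_exposure": [
        ("Business", "Joint MRM/Legal/Privacy/Cyber review"),
        ("Procedural", "Mandatory HITL"),
        ("Automated", "PII scrubbing"),
    ],
    "hitl_oversight": [
        ("Procedural", "Post-release validation"),
        ("Manual", "Extra review cycle"),
        ("Automated", "Prompt logging"),
    ],
}

def recommended_controls(factors: dict) -> list[str]:
    recs = []
    for factor, value in factors.items():
        if value == 5:
            for layer, control in CONTROLS_WHEN_5.get(factor, []):
                # Every recommendation cites the factor that triggered it.
                recs.append(f"[{layer}] {control} (trigger: {factor}=5)")
    return recs
```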


Stop Scoring GenAI Risk by Feel. Use the Framework.

McKinsey 5/3/1 risk scoring automated in under 5 minutes. Board-ready scorecards. Updated for 2026.

Get the Scorecard Assistant — $69