Understanding Facility Risk Scores

CTWise 483 Intelligence assigns risk scores (0-100) to FDA-registered facilities using a 5-factor composite model based on inspection history, citation patterns, recency, severity, and peer benchmarking.

Overview of the Risk Scoring Model

What is a Risk Score?

A risk score is a composite assessment (0-100) that quantifies a facility's inspection compliance risk using publicly available FDA data. Higher scores indicate greater compliance risk based on historical inspection patterns.

How Scores are Calculated (v2.0)

Risk scores combine 5 weighted factors into a single composite score:

Risk Score = (OAI Ratio × 40%)
+ (Citation Frequency × 25%)
+ (Recency × 15%)
+ (Severity × 10%)
+ (Peer Benchmark × 10%)

Each factor produces a contribution value, and the weighted sum gives the final composite score [0-100].

Data Foundation

Risk scores are calculated from:

  • 25,500+ citation records from FDA Data Dashboard
  • 8,100+ facility profiles with inspection history
  • 1,400+ CFR references with severity classification
  • Industry benchmarks by product type, state, and program area

Score Factors Explained

1. OAI Ratio (40% Weight)

The most heavily weighted factor: a weighted classification ratio in which OAI, VAI, and NAI inspections receive different weights.

Why 40%? OAI is the most significant outcome in FDA inspection history -- it indicates violations serious enough to warrant regulatory action.

How It Works

Classifications are weighted: OAI=1.0, VAI=0.5, NAI=0.1

Weighted Ratio = (OAI×1.0 + VAI×0.5 + NAI×0.1) / Total Inspections
OAI Ratio Contribution = Weighted Ratio × 40 (max contribution: 40)
Classification Mix    Contribution (max 40)   Interpretation
All NAI               ~4                      Clean inspection record
Mixed NAI/VAI         10-20                   Moderate issues
Majority VAI          15-25                   Recurring findings
Any OAI               25-40                   Significant risk
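As a quick illustration of the formula above (a minimal sketch, not the production implementation):

```python
def oai_ratio_contribution(oai: int, vai: int, nai: int) -> float:
    """Weighted classification ratio scaled to its 40-point maximum."""
    total = oai + vai + nai
    if total == 0:
        return 0.0  # no inspection history, no contribution
    weighted_ratio = (oai * 1.0 + vai * 0.5 + nai * 0.1) / total
    return weighted_ratio * 40  # max contribution: 40

# An all-NAI record contributes ~4 points, matching the table above
print(oai_ratio_contribution(oai=0, vai=0, nai=5))  # 4.0
```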

2. Citation Frequency (25% Weight)

Measures the facility's citation count, normalized by its years of inspection history.

Why 25%? A consistently high rate of citations per year suggests broader compliance gaps rather than isolated findings.

How It Works

Citation Frequency Contribution = normalized(Total Citations / Inspection Years) × 25

Facilities with many citations per year score higher than those with few.
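The normalization step is not specified in the public methodology. Purely as a hypothetical sketch, a capped linear normalization might look like this (the `cap` parameter is an assumption, not a documented value):

```python
def citation_frequency_contribution(total_citations: int,
                                    inspection_years: float,
                                    cap: float = 10.0) -> float:
    """Hypothetical sketch: citations per year, clamped to [0, 1], scaled to 25.

    The actual normalization used by the model is not published; `cap`
    (the citations-per-year rate treated as maximal) is assumed here.
    """
    if inspection_years <= 0:
        return 0.0
    rate = total_citations / inspection_years
    return min(rate / cap, 1.0) * 25  # max contribution: 25

# 20 citations over 4 years = 5/year; at half the assumed cap, half of 25
print(citation_frequency_contribution(20, 4))  # 12.5
```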

3. Recency (15% Weight)

Applies exponential decay to weight recent inspections more heavily than older ones.

Why 15%? Recent findings are more indicative of current facility conditions. The model uses a 2-year half-life -- findings from 2 years ago contribute half as much as current findings.

How It Works

Recency = exponential_decay(days_since_last_inspection, half_life=2_years)
Recency Contribution = Recency × 15 (max contribution: 15)
Time Since Last Inspection   Decay Factor   Interpretation
< 6 months                   ~0.85          Very recent -- high weight
1 year                       ~0.71          Recent
2 years                      ~0.50          Half-life point
4 years                      ~0.25          Historical
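The decay factors in the table follow directly from the 2-year half-life. A minimal sketch (approximating 2 years as 730 days, which is an assumption about the implementation):

```python
def recency_contribution(days_since_last_inspection: float) -> float:
    """Exponential decay with a 2-year (~730-day) half-life, scaled to 15."""
    half_life_days = 2 * 365  # calendar approximation
    decay = 0.5 ** (days_since_last_inspection / half_life_days)
    return decay * 15  # max contribution: 15

# Half-life point: an inspection exactly 2 years ago contributes 7.5 of 15
print(round(recency_contribution(730), 2))  # 7.5
```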

4. Severity (10% Weight)

Evaluates the severity of cited CFR sections, weighting more critical regulatory areas higher.

Why 10%? Not all citations are equal -- data integrity violations (21 CFR 211.180) are more serious than labeling issues (21 CFR 211.125).

CFR Severity Categories

CFR Category                                  Relative Severity
Data Integrity (211.180, 211.188, 211.194)    Highest
Sterility/Contamination (211.113, 211.28)     High
Testing & Release (211.84, 211.165)           High
Equipment Validation (211.68, 211.160)        Moderate
Procedures (211.100, 211.192)                 Moderate
Labeling (211.122, 211.125)                   Lower

5. Peer Benchmark (10% Weight)

Compares the facility's performance against its product-type peer group average.

Why 10%? Context matters. A facility manufacturing sterile injectables should be compared against other sterile injectable manufacturers, not the overall population.

How It Works

Peer Benchmark Contribution = (facility_score - peer_avg) / peer_range × 10

Facilities performing worse than their peer group average score higher on this factor, while those performing better than peers score lower.
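A minimal sketch of the formula above. How the model handles facilities scoring below their peer average (a negative raw value) is not documented; this sketch clamps the contribution to [0, 10] as an assumption:

```python
def peer_benchmark_contribution(facility_score: float, peer_avg: float,
                                peer_range: float) -> float:
    """Sketch of the documented formula; clamping behavior is assumed."""
    if peer_range == 0:
        return 0.0  # degenerate peer group, no signal
    raw = (facility_score - peer_avg) / peer_range * 10
    return max(0.0, min(raw, 10.0))  # max contribution: 10

# A facility 20 points worse than a peer average spread over a 50-point range
print(peer_benchmark_contribution(60, 40, 50))  # 4.0
```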

Risk Levels

Risk scores map to 4 risk levels:

Risk Level   Score Range   Interpretation             Recommended Actions
low          0-24          Strong compliance record   Standard monitoring
medium       25-49         Some issues, manageable    Enhanced monitoring, periodic audits
high         50-74         Significant concerns       Detailed review, supplier audit, CAPA required
critical     75-100        Serious risk               Consider alternate sourcing, immediate action
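Using the thresholds from the API methodology string (low < 25, medium < 50, high < 75, critical >= 75), the mapping can be expressed as:

```python
def risk_level(score: float) -> str:
    """Map a composite score [0, 100] to one of the 4 documented risk levels."""
    if score < 25:
        return "low"
    if score < 50:
        return "medium"
    if score < 75:
        return "high"
    return "critical"

print(risk_level(42.5))  # medium
```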

Interpreting the API Response

The risk score API returns a detailed breakdown of all 5 factors:

{
  "fei_number": "3005012345",
  "risk_score": 42.5,
  "risk_level": "medium",
  "methodology_version": "v2.0",
  "factors": {
    "oai_ratio": 16.0,
    "citation_frequency": 8.5,
    "recency": 9.0,
    "severity": 4.5,
    "peer_benchmark": 4.5
  },
  "methodology": "Composite score [0,100] = oai_ratio(40%) + citation_frequency(25%) + recency(15%) + severity(10%) + peer_benchmark(10%). Risk levels: low (<25), medium (25-50), high (50-75), critical (>=75).",
  "factor_descriptions": {
    "oai_ratio": "Weighted classification ratio (OAI=1.0, VAI=0.5, NAI=0.1)",
    "citation_frequency": "Citation frequency normalized by years of inspection history",
    "recency": "Exponential recency decay (half-life = 2 years)",
    "severity": "Weighted severity of CFR violations cited",
    "peer_benchmark": "Comparison to product-type peer group average"
  },
  "query_metadata": {
    "execution_time_ms": 85
  }
}

Reading the Factor Breakdown

Each value in the factors object is the factor's contribution to the composite score (not a raw 0-100 score). The maximum contribution for each factor is:

Factor               Max Contribution   Weight
oai_ratio            40                 40%
citation_frequency   25                 25%
recency              15                 15%
severity             10                 10%
peer_benchmark       10                 10%
Total                100                100%

The composite score is simply the sum of all factor contributions.
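A quick sanity check: the factor contributions in an API response should sum to the reported composite score. Using the example values from the API response above:

```python
factors = {
    "oai_ratio": 16.0,
    "citation_frequency": 8.5,
    "recency": 9.0,
    "severity": 4.5,
    "peer_benchmark": 4.5,
}

# Sum of contributions equals the reported risk_score
composite = sum(factors.values())
print(composite)  # 42.5
```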

Use Cases

1. Pre-Inspection Preparation

Scenario: Your facility is due for FDA inspection. Use risk scores to identify focus areas.

import os
import requests

API_KEY = os.getenv("CTWISE_API_KEY")
BASE_URL = "https://api.ctwise.ai/v1"

# Get your facility's risk score
your_fei = "3005012345"

response = requests.get(
    f"{BASE_URL}/483/risk-scores/{your_fei}",
    headers={"X-Api-Key": API_KEY}
)

risk = response.json()

print(f"Your Facility Risk Score: {risk['risk_score']:.1f}")
print(f"Risk Level: {risk['risk_level']}")
print(f"Methodology: {risk['methodology_version']}")

# Identify which factors are driving risk
print("\n=== RISK FACTOR BREAKDOWN ===")

factors = risk["factors"]
descriptions = risk["factor_descriptions"]
max_values = {"oai_ratio": 40, "citation_frequency": 25, "recency": 15, "severity": 10, "peer_benchmark": 10}

for name, value in factors.items():
    max_val = max_values[name]
    pct = (value / max_val) * 100 if max_val > 0 else 0
    # Flag factors at or above 60% of their maximum contribution
    indicator = "!!" if pct >= 60 else "  "
    print(f"{indicator} {name}: {value:.1f}/{max_val} ({pct:.0f}%) - {descriptions.get(name, '')}")

Output:

Your Facility Risk Score: 42.5
Risk Level: medium
Methodology: v2.0

=== RISK FACTOR BREAKDOWN ===
oai_ratio: 16.0/40 (40%) - Weighted classification ratio (OAI=1.0, VAI=0.5, NAI=0.1)
citation_frequency: 8.5/25 (34%) - Citation frequency normalized by years of inspection history
!! recency: 9.0/15 (60%) - Exponential recency decay (half-life = 2 years)
severity: 4.5/10 (45%) - Weighted severity of CFR violations cited
peer_benchmark: 4.5/10 (45%) - Comparison to product-type peer group average

Action: Recency is the main driver at 60% of its max. Recent inspection findings are weighing heavily. Focus on addressing the specific CFR areas cited in the most recent inspection.

2. Supplier Due Diligence Scoring

Scenario: Evaluate 3 potential CMOs for a new product launch.

import os
import requests

API_KEY = os.getenv("CTWISE_API_KEY")
BASE_URL = "https://api.ctwise.ai/v1"

# Candidate suppliers
candidates = [
{"name": "CMO Alpha", "fei": "1000234567"},
{"name": "CMO Beta", "fei": "1000345678"},
{"name": "CMO Gamma", "fei": "1000456789"}
]

# Evaluate each
print(f"{'Supplier':<20} {'Score':>6} {'Level':<10} {'OAI':>5} {'Freq':>5} {'Recency':>8} {'Severity':>9} {'Peer':>5}")
print("-" * 75)

for candidate in candidates:
    response = requests.get(
        f"{BASE_URL}/483/risk-scores/{candidate['fei']}",
        headers={"X-Api-Key": API_KEY}
    )

    if response.status_code == 200:
        risk = response.json()
        f = risk["factors"]
        print(f"{candidate['name']:<20} {risk['risk_score']:>5.1f} {risk['risk_level']:<10} "
              f"{f['oai_ratio']:>5.1f} {f['citation_frequency']:>5.1f} "
              f"{f['recency']:>8.1f} {f['severity']:>9.1f} {f['peer_benchmark']:>5.1f}")

3. Compare Against Industry Benchmarks

Scenario: See how your facility compares to product-type peers.

import os
import requests

API_KEY = os.getenv("CTWISE_API_KEY")
BASE_URL = "https://api.ctwise.ai/v1"

# Get your facility's risk score
fei = "3005012345"
risk = requests.get(
    f"{BASE_URL}/483/risk-scores/{fei}",
    headers={"X-Api-Key": API_KEY}
).json()

# Get benchmark for your product type
bench = requests.get(
    f"{BASE_URL}/483/analytics/benchmarks",
    headers={"X-Api-Key": API_KEY},
    params={"grouping": "product_type", "group_value": "Drugs"}
).json()["results"][0]

metrics = bench["metrics"]
my_score = risk["risk_score"]

print(f"Your Score: {my_score:.1f}/100 ({risk['risk_level']})")
print(f"Drugs Peer Group ({metrics['total_facilities']} facilities):")
print(f" Average: {metrics['avg_risk_score']:.1f}")
print(f" Median: {metrics['median_risk_score']:.1f}")
print(f" P25: {metrics['percentile_25']:.1f}")
print(f" P75: {metrics['percentile_75']:.1f}")
print(f" P90: {metrics['percentile_90']:.1f}")

if my_score < metrics["percentile_25"]:
    print("\nYou are in the top quartile (below P25) - Strong compliance")
elif my_score < metrics["median_risk_score"]:
    print("\nYou are below the median - Better than average")
elif my_score < metrics["percentile_75"]:
    print("\nYou are above the median - Room for improvement")
else:
    print("\nYou are above P75 - Priority attention needed")

Limitations and Disclaimers

Model Limitations

  1. Historical Data Only: Scores based on past inspections, not real-time facility conditions
  2. FDA Reporting Lag: 483s may take 30-90 days to appear in FDA database
  3. Unannounced Inspections: Model cannot predict timing of future inspections
  4. Facility Changes: Recent improvements not reflected until next inspection

Not a Predictive Guarantee

CTWise risk scores are informational tools, not guarantees.

  • A "Low Risk" facility can still receive OAI on next inspection
  • A "High Risk" facility may have already implemented corrective actions
  • Risk scores do not replace:
    • On-site audits
    • Quality agreement reviews
    • Regulatory judgment
    • Professional due diligence

DO use risk scores for:

  • Prioritizing supplier audits
  • Identifying facilities requiring enhanced monitoring
  • Benchmarking suppliers against peers
  • Supporting data-driven decision-making

DO NOT use risk scores as:

  • Sole basis for supplier rejection
  • Replacement for audits
  • Guaranteed prediction of future events
  • Legal or regulatory advice

Regulatory Perspective

FDA expects you to:

  1. Assess supplier risks using available information (21 CFR 211.84)
  2. Monitor supplier compliance through audits and reviews
  3. Document risk-based decisions in quality systems
  4. Take action when supplier issues arise

CTWise risk scores support these obligations but do not replace them.


API Reference

Get Risk Score for a Facility

GET https://api.ctwise.ai/v1/483/risk-scores/{fei_number}

See Risk Scores API Reference for full documentation.

List Risk Scores (Paginated)

GET https://api.ctwise.ai/v1/483/risk-scores?risk_level=high&limit=20&offset=0

Filter by risk_level (low, medium, high, critical), min_score, max_score, limit, and offset.


Next Steps

  1. Retrieve risk scores for your key suppliers
  2. Analyze factor breakdowns to identify what's driving risk
  3. Compare against benchmarks to contextualize scores
  4. Integrate scores into supplier qualification workflows
  5. Review periodically to track risk trends

For more information: