Decision Layer
Moderate Integrity
71/100
2026-04-17 · policy · organizer
This framework works well as a practitioner tool for audiences who already sense the problem - but it underdelivers on three fronts: creating urgency, distinguishing AI governance from existing data governance, and standing alone for cold audiences.
Before using this text, ask whether your nonprofit audience already feels the AI governance problem - or needs to be convinced it exists and is distinct from what they already do.
Grounding & Scope Fit
18/20
Assumptions & Context
13/20
Framing & Audience Openness
16/20
Positionality & Power
13/20
Key tension: The framework is designed for audiences who already feel the AI governance problem - but its stated purpose is to create urgency and action in organizations that don't yet feel it.
Next step: Add one focused paragraph to the Central Argument section that explicitly names how AI governance differs from existing PIPEDA/data governance compliance.
Evidence Needed Next
Check-worthy means worth verifying, not presumed false; these claims have not been externally checked.
A factual claim from the draft about staff AI literacy
Type: factual · Priority: medium
Evidence needed: Survey data or sector research on staff AI literacy across Canadian nonprofits
Likely sources: sector surveys, workforce research, nonprofit technology reports
Suggested queries: Canadian nonprofit staff AI literacy survey, nonprofit employee data governance awareness
Support/contrast: needs_evidence
Uncertainty: Likely directionally correct, but the scope is overstated - some orgs already have sophisticated data governance training
A causal claim from the draft about organizational trust
Type: causal · Priority: not_ranked
Evidence needed: Cases of Canadian nonprofits that experienced AI-related trust erosion and their outcomes
Likely sources: sector news, case studies, charity sector reports
Suggested queries: nonprofit trust crisis AI Canada, Canadian nonprofit AI reputational harm
Support/contrast: not_ranked_for_truth
Uncertainty: True for the most vulnerable org types this framework serves; overstated as universal
Responsible AI governance is an organizational change challenge
Type: normative · Priority: not_ranked
Evidence needed: Comparative outcomes from orgs using change-leadership vs. compliance-first AI governance approaches
Likely sources: implementation records, sector research, case studies
Suggested queries: change management AI governance nonprofit outcomes, compliance vs culture AI policy effectiveness
Support/contrast: not_ranked_for_truth
Uncertainty: This is the document's core argument - it is a values claim, not a factual one, and should be evaluated as such
Reasoning Trap Checks
Causation-leap risk
Risk: Reading timing or association as if it proves direction or cause.
Next check: Ask what comparison would separate coincidence from cause.
Severity: high
Missing-denominator risk
Risk: Treating a number as meaningful before knowing the denominator, sample size, or comparison group.
Next check: Ask what the number is out of and which comparison group makes it interpretable.
Severity: high
Comfortable-answer risk
Risk: Treating the easiest answer as settled before pressure-testing alternatives.
Next check: Ask what evidence would make this answer harder to defend.
Severity: medium
Binary-frame risk
Risk: Collapsing a messy decision into a yes-or-no frame and missing a middle path.
Next check: Ask what third option, delay, or partial step has been left out.
Severity: medium
Base-rate neglect risk
Risk: Judging the claim without comparing it to the normal rate or background pattern.
Next check: Ask what baseline rate would make this finding look large, small, or ordinary.
Severity: medium