Redacted Nonprofit Policy Draft - Critical Compass Example Audit


Note: This public example is redacted to protect unpublished source material.


Decision Layer

Moderate Integrity 71/100

2026-04-17 · policy · organizer

This framework works well as a practitioner tool for audiences who already sense the problem - but it underdelivers on creating urgency, differentiating AI governance from data governance, and offering standalone actionability for cold audiences.
Before using this text, ask whether your nonprofit audience already feels the AI governance problem - or needs to be convinced it exists and is distinct from what they already do.
Grounding & Scope Fit: 18/20
Assumptions & Context: 13/20
Bias Transparency: 11/20
Framing & Audience Openness: 16/20
Positionality & Power: 13/20
Key tension: The framework is designed for audiences who already feel the AI governance problem - but its stated purpose is to create urgency and action in organizations that don't yet feel it.

Argument Map

Central claim: The draft argues that responsible AI governance should be treated as an organizational change challenge, not only a technical or compliance task.

Claim type: policy · Confidence: medium

Supporting reasons
  • [redacted source claim] (redacted source section)
  • [redacted source claim]
  • Canadian legal context (no AI-specific law) creates both freedom and responsibility to self-govern
  • The digital divide inside organizations makes AI literacy a workplace equity issue
Objections
  • Basic data governance (PIPEDA) may already cover what nonprofits need - the framework doesn't directly rebut this
  • Two-layer governance assumes organizational capacity many small nonprofits don't have
  • The redacted implementation section is thin for standalone actionability without the paid follow-up
Assumptions
  • The reader already accepts that AI governance is distinct from data governance
  • Named accountability ownership works across all org sizes
  • The redacted source pattern set represents the full distribution of governance approaches
Missing voices
  • Service recipients and community members who receive nonprofit services
  • Indigenous organizations operating under OCAP and data sovereignty frameworks
  • Micro-orgs (under 5 staff) who cannot implement the two-layer model
Evidence gaps
  • No concrete case made for how AI governance differs from existing data governance compliance
  • No evidence of outcomes from organizations that adopted this framework
  • Failure patterns drawn from consulting-engaged orgs - selection bias unacknowledged
Next questions
  • What would a skeptical nonprofit board need to see to be convinced AI governance is distinct from their existing data governance practice?
  • What does successful implementation look like - is there an org that used this framework and can serve as a case example?

Evidence Needed Next

Check-worthy means worth verifying, not false. No external lookup has been run.

A factual claim from the draft about staff AI literacy

Type: factual · Priority: medium

Evidence needed: Survey data or sector research on staff AI literacy across Canadian nonprofits

Likely sources: sector surveys, workforce research, nonprofit technology reports

Suggested queries: Canadian nonprofit staff AI literacy survey, nonprofit employee data governance awareness

Support/contrast: needs_evidence

Uncertainty: Likely directionally correct but scope is overstated - some orgs have sophisticated data governance training

A causal claim from the draft about organizational trust

Type: causal · Priority: not_ranked

Evidence needed: Cases of Canadian nonprofits that experienced AI-related trust erosion and their outcomes

Likely sources: sector news, case studies, charity sector reports

Suggested queries: nonprofit trust crisis AI Canada, Canadian nonprofit AI reputational harm

Support/contrast: not_ranked_for_truth

Uncertainty: True for the most vulnerable org types this framework serves; overstated as universal

Responsible AI governance is an organizational change challenge

Type: normative · Priority: not_ranked

Evidence needed: Comparative outcomes from orgs using change-leadership vs. compliance-first AI governance approaches

Likely sources: implementation records, sector research, case studies

Suggested queries: change management AI governance nonprofit outcomes, compliance vs culture AI policy effectiveness

Support/contrast: not_ranked_for_truth

Uncertainty: This is the document's core argument - it is a values claim, not a factual one, and should be evaluated as such

Reasoning Trap Checks

Causation-leap risk

Risk: Reading timing or association as if it proves direction or cause.

Next check: Ask what comparison would separate coincidence from cause.

Severity: high

Missing-denominator risk

Risk: Treating a number as meaningful before knowing the denominator, sample size, or comparison group.

Next check: Ask what the number is out of and which comparison group makes it interpretable.

Severity: high

Comfortable-answer risk

Risk: Treating the easiest answer as settled before pressure-testing alternatives.

Next check: Ask what evidence would make this answer harder to defend.

Severity: medium

Binary-frame risk

Risk: Collapsing a messy decision into a yes-or-no frame and missing a middle path.

Next check: Ask what third option, delay, or partial step has been left out.

Severity: medium

Base-rate neglect risk

Risk: Judging the claim without comparing it to the normal rate or background pattern.

Next check: Ask what baseline rate would make this finding look large, small, or ordinary.

Severity: medium

Questions To Bring To The Text

Probing

  • Could you hand this to a nonprofit executive director who currently thinks 'our data governance covers AI' and have them walk away convinced otherwise - and if not, what's missing?
  • What does governance look like for a 3-person nonprofit that cannot implement the two-layer structure - and does the framework leave them better or worse off than before they read it?
  • Given that the follow-up is gated behind a paywall, does this document give a nonprofit enough to take one concrete action - or does it primarily create awareness of a problem without a path forward?

Follow-up

  • What specific scenario or example would make the AI-vs-data-governance distinction immediately clear to a skeptical nonprofit board?
  • How does the framework handle orgs that have already adopted AI tools widely - is 'redacted implementation section' the right frame for them, or do they need a different entry point?
  • What would the author say to a nonprofit that reads this and concludes 'this is good but we'll wait for the paid guide before doing anything' - is that an acceptable outcome for this document?