# Redacted Nonprofit Policy Draft - Critical Compass Example Audit

> This public example is redacted to protect unpublished source material.

Title: Redacted Nonprofit Policy Draft - Critical Compass Example Audit
Date: 2026-04-17
Text Type: policy
Knowledge Tradition: hybrid
Audience: organizer
Mode: reader

## Critical Integrity Snapshot

Label: Moderate Integrity
Points: 71/100 (matches the sum of the five dimension scores in the Dimension Deep-Dive: 18 + 13 + 11 + 16 + 13)

## Plain-Language Read

This framework works well as a practitioner tool for audiences who already sense the problem - but it underdelivers on creating urgency, differentiating AI governance from existing data governance, and offering standalone actionability for cold audiences.

## Before Using This Text

Before using this text, ask whether your nonprofit audience already feels the AI governance problem - or needs to be convinced it exists and is distinct from what they already do.

## Quick-Scan Findings

### Top Biases Identified

- Overgeneralization (Critical)
  KB ID: ct:biases:methodology:overgeneralization
  Where: [redacted source excerpt]
  Why it matters: Presented as settled fact across all nonprofit contexts.
- WEIRD Bias (Compassionate)
  KB ID: ct:biases:methodology:weird-bias
  Where: [redacted source excerpt]
  Why it matters: The model of 'human connection' and 'community trust' reflects Western, educated, institutional organizational contexts.
- Selection Bias (Critical)
  KB ID: ct:biases:methodology:selection-bias
  Where: [redacted source excerpt]
  Why it matters: The failure patterns reflect orgs that engaged a consultant - self-selected, with some capacity to seek advisory services.
- Social Desirability Bias (Critical)
  KB ID: ct:biases:methodology:social-desirability-bias
  Where: [redacted source excerpt]
  Why it matters: Framing compliance-first governance as the wrong approach may cause readers to perform alignment with the change-leadership frame without actually adopting it.

### Key Tension

The framework is designed for audiences who already feel the AI governance problem - but its stated purpose is to create urgency and action in organizations that don't yet feel it.

### Recommended Next Step

Add one focused paragraph to the Central Argument section that explicitly names how AI governance differs from existing PIPEDA/data governance compliance.

## Argument Map

- Central Claim: The draft argues that responsible AI governance should be treated as an organizational change challenge, not only a technical or compliance task.
- Claim Type: policy
- Confidence: medium
- Status: authored

### Supporting Reasons

- [redacted source claim] (redacted source section)
- [redacted source claim]
- Canadian legal context (no AI-specific law) creates both freedom and responsibility to self-govern
- The digital divide inside organizations makes AI literacy a workplace equity issue

### Objections

- Basic data governance (PIPEDA) may already cover what nonprofits need - the framework doesn't directly rebut this
- Two-layer governance assumes organizational capacity many small nonprofits don't have
- The redacted implementation section is thin for standalone actionability without the paid follow-up

### Assumptions

- The reader already accepts that AI governance is distinct from data governance
- Named accountability ownership works across all org sizes
- The redacted source pattern set represents the full distribution of governance approaches

### Missing Voices

- Service recipients and community members who receive nonprofit services
- Indigenous organizations operating under OCAP (ownership, control, access, possession) and data sovereignty frameworks
- Micro-orgs (under 5 staff) who cannot implement the two-layer model

### Evidence Gaps

- No concrete case made for how AI governance differs from existing data governance compliance
- No evidence of outcomes from organizations that adopted this framework
- Failure patterns drawn from consulting-engaged orgs - selection bias unacknowledged

### Next Questions

- What would a skeptical nonprofit board need to see to be convinced AI governance is distinct from their existing data governance practice?
- What does successful implementation look like - is there an org that used this framework and can serve as a case example?

### Markdown Export

```markdown
# Argument Map

**Plain-language summary:** The framework argues that AI policy fails when treated as a compliance or technology problem, and succeeds when treated as a change leadership problem. It provides a structured practitioner system for Canadian nonprofits to build governance that protects mission, community trust, and charitable status.

**Central claim:** The draft argues that responsible AI governance should be treated as an organizational change challenge, not only a technical or compliance task.

**Claim type:** policy

**Confidence:** medium

## Supporting reasons

- [redacted source claim] (redacted source section)
- [redacted source claim]
- Canadian legal context (no AI-specific law) creates both freedom and responsibility to self-govern
- The digital divide inside organizations makes AI literacy a workplace equity issue

## Objections

- Basic data governance (PIPEDA) may already cover what nonprofits need - the framework doesn't directly rebut this
- Two-layer governance assumes organizational capacity many small nonprofits don't have
- The redacted implementation section is thin for standalone actionability without the paid follow-up

## Assumptions

- The reader already accepts that AI governance is distinct from data governance
- Named accountability ownership works across all org sizes
- The redacted source pattern set represents the full distribution of governance approaches

## Missing voices

- Service recipients and community members who receive nonprofit services
- Indigenous organizations operating under OCAP (ownership, control, access, possession) and data sovereignty frameworks
- Micro-orgs (under 5 staff) who cannot implement the two-layer model

## Evidence gaps

- No concrete case made for how AI governance differs from existing data governance compliance
- No evidence of outcomes from organizations that adopted this framework
- Failure patterns drawn from consulting-engaged orgs - selection bias unacknowledged

## Next questions

- What would a skeptical nonprofit board need to see to be convinced AI governance is distinct from their existing data governance practice?
- What does successful implementation look like - is there an org that used this framework and can serve as a case example?
```

### Argdown Export

```argdown
(The draft argues that responsible AI governance should be treated as an organizational change challenge, not only a technical or compliance task.)
  + [redacted source claim] (redacted source section)
  + [redacted source claim]
  + Canadian legal context (no AI-specific law) creates both freedom and responsibility to self-govern
  + The digital divide inside organizations makes AI literacy a workplace equity issue
  - Basic data governance (PIPEDA) may already cover what nonprofits need - the framework doesn't directly rebut this
  - Two-layer governance assumes organizational capacity many small nonprofits don't have
  - The redacted implementation section is thin for standalone actionability without the paid follow-up
  ? The reader already accepts that AI governance is distinct from data governance
  ? Named accountability ownership works across all org sizes
  ? The redacted source pattern set represents the full distribution of governance approaches
  ! No concrete case made for how AI governance differs from existing data governance compliance
  ! No evidence of outcomes from organizations that adopted this framework
  ! Failure patterns drawn from consulting-engaged orgs - selection bias unacknowledged
```
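Reading note: the `+` and `-` prefixes follow Argdown's standard support/attack convention. The `?` (assumptions) and `!` (evidence gaps) prefixes appear to be this export's own markers rather than standard Argdown syntax; they mirror the Assumptions and Evidence Gaps lists in the Argument Map above.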

## Evidence Needed Next / Checkworthy Claims

Note: check-worthy means worth verifying, not that the claim is false.

### Factual Claims To Verify

- A factual claim from the draft about staff AI literacy
  Verification Priority: medium
  Why Check: Stated as universal fact - the evidence base (which orgs, what size, what sector) is invisible
  Evidence Needed: Survey data or sector research on staff AI literacy across Canadian nonprofits
  Suggested Queries: Canadian nonprofit staff AI literacy survey; nonprofit employee data governance awareness
  Likely Source Types: sector surveys; workforce research; nonprofit technology reports
  Support/Contrast Status: needs_evidence
  Uncertainty Note: Likely directionally correct but scope is overstated - some orgs have sophisticated data governance training
  Source Anchor: Principle 3 - Data Governance Must Be Explicit
  Confidence: medium

### Causal Claims

- A causal claim from the draft about organizational trust
  Why Check: Absolute framing - trust erosion can be existential, but it is not always so. Large diversified nonprofits have survived trust crises. The claim is most accurate for small community-based orgs.
  Evidence Needed: Cases of Canadian nonprofits that experienced AI-related trust erosion and their outcomes
  Suggested Queries: nonprofit trust crisis AI Canada; Canadian nonprofit AI reputational harm
  Likely Source Types: sector news; case studies; charity sector reports
  Support/Contrast Status: not_ranked_for_truth
  Uncertainty Note: True for the most vulnerable org types this framework serves; overstated as universal
  Source Anchor: Canadian Legal & Regulatory Context section
  Confidence: medium

### Normative Claims

- Responsible AI governance is an organizational change challenge
  Why Check: This is the central reframe - worth examining whether change leadership framing produces better governance outcomes than compliance-first approaches
  Evidence Needed: Comparative outcomes from orgs using change-leadership vs. compliance-first AI governance approaches
  Suggested Queries: change management AI governance nonprofit outcomes; compliance vs culture AI policy effectiveness
  Likely Source Types: implementation records; sector research; case studies
  Support/Contrast Status: not_ranked_for_truth
  Uncertainty Note: This is the document's core argument - it is a values claim, not a factual one, and should be evaluated as such
  Source Anchor: Central Argument section
  Confidence: medium

## Improvement Prompts

- To strengthen Grounding & Scope Fit: Add one sentence naming the evidence type - e.g. practitioner observation from consulting engagements rather than sector-wide survey data.
- To strengthen Assumptions & Context: Add a single paragraph - ideally in the Central Argument - that names the specific ways AI governance differs from PIPEDA/data governance compliance.
- To strengthen Bias Transparency: Own the standpoint explicitly.
- To strengthen Framing & Audience Openness: Add 'diverse' or 'inclusive' to the redacted source principle language.
- To strengthen Positionality & Power: Add a brief author bio or 'About this framework' note at the start.

## Dimension Deep-Dive

### Grounding & Scope Fit (18/20)

Well-grounded for its stated scope - Canadian nonprofits, practitioner experience. Broad claims like '[redacted source excerpt]' outrun the evidence base but are appropriate for a framework making an argument, not a research synthesis.
Finding traced to: [redacted source evidence; analysis retained]
Improvement prompt: [redacted source-specific prompt; recommendation retained in summary]

### Assumptions & Context (13/20)

Universal language ('most staff don't understand,' 'AI literacy is becoming professionally essential') does structural work without acknowledgment. The framework assumes the reader already agrees AI governance is distinct from data governance - but that assumption is precisely what skeptical nonprofits will contest.
Finding traced to: [redacted source evidence; analysis retained]
Improvement prompt: [redacted source-specific prompt; recommendation retained in summary]

### Bias Transparency (11/20)

The framework presents advocacy as neutral analysis. The redacted source section positions four alternative models as straw men without engaging their strongest versions. The author's consulting standpoint - and its inherent selection bias toward orgs that hired a consultant - is not disclosed.
Finding traced to: [redacted source evidence; analysis retained]
Improvement prompt: [redacted source-specific prompt; recommendation retained in summary]

### Framing & Audience Openness (16/20)

Intentionally concise framing is appropriate for the document's purpose. Small targeted additions - the word 'diverse' in human judgment language, a brief acknowledgment that org size affects governance design - would open the door to broader applicability without bloating the document.
Finding traced to: [redacted source evidence; analysis retained]
Improvement prompt: [redacted source-specific prompt; recommendation retained in summary]

### Positionality & Power (13/20)

The framework speaks about affected communities but not with them. Service recipients, Indigenous data sovereignty frameworks, and racialized or disabled staff are absent from both the evidence base and the governance model. The author's consulting standpoint is implied but unnamed.
Finding traced to: [redacted source evidence; analysis retained]
Improvement prompt: [redacted source-specific prompt; recommendation retained in summary]

## Assumption Audit

- Assumption: The reader already agrees that AI governance is distinct from basic data governance compliance
  Testable: testable
  What would test it: A nonprofit that says 'our data governance covers this' would test it - and the framework currently has no answer for them
- Assumption: Named accountability ownership improves governance outcomes regardless of org size
  Testable: testable
  What would test it: Comparison of governance outcomes in micro-orgs with named vs. distributed accountability
- Assumption: The redacted source pattern set represent the full distribution of nonprofit AI governance approaches
  Testable: testable
  What would test it: Evidence from orgs that self-governed successfully without consulting engagement
- Assumption: AI literacy becoming professionally essential is a settled fact rather than an argument
  Testable: not testable
  What would test it: Nothing internal to the document - this statement functions as the argument itself, not as a premise requiring independent support

## Alternative Frames

- Cynefin Framework - Complex vs. Complicated (ct:thinking-lenses:systems:cynefin-framework) [Critical]
  The framework treats AI governance as a complicated problem (expert knowledge + best practices = right answer). Many nonprofits are actually in the complex domain (emergent, context-dependent), where Cynefin prescribes probe-sense-respond rather than best practice. This reframe suggests the redacted source process may need a probe-sense-respond variant for orgs still in that emergent, early-adoption stage.
- Cross-Cultural Anti-Flattening (ct:methodology-checks:preflight:cross-cultural-anti-flattening) [Compassionate]
  The framework's universalizing language flattens genuine organizational diversity across Canadian nonprofits. Indigenous-led orgs, Francophone organizations, and newcomer-serving agencies have distinct accountability and relational models that the current framing doesn't accommodate.

## Questions To Bring To This Text

- Could you hand this to a nonprofit executive director who currently thinks 'our data governance covers AI' and have them walk away convinced otherwise - and if not, what's missing?
- What does governance look like for a 3-person nonprofit that cannot implement two-layer structure - and does the framework leave them better or worse off than before they read it?
- If the follow-up guide sits behind a paywall, does this document give a nonprofit enough to take one concrete action - or does it primarily create awareness of a problem without a path forward?
- What specific scenario or example would make the AI-vs-data-governance distinction immediately clear to a skeptical nonprofit board?
- How does the framework handle orgs that have already adopted AI tools widely - is 'redacted implementation section' the right frame for them, or do they need a different entry point?
- What would the author say to a nonprofit that reads this and concludes 'this is good but we'll wait for the paid guide before doing anything' - is that an acceptable outcome for this document?

## Next Steps For Deeper Thinking

- Add a focused paragraph to the Central Argument that names at least two concrete ways AI governance differs from PIPEDA/data governance compliance.
- Add a brief positionality or author bio note at the start of the document.
- Add capacity-tier guidance for micro-orgs (under 5 staff or no formal governance structure) in the redacted implementation section.

## Reasoning Trap Checks

### Causation-leap risk

- Risk: Reading timing or association as if it proves direction or cause.
- Next Check: Ask what comparison would separate coincidence from cause.

### Missing-denominator risk

- Risk: Treating a number as meaningful before knowing the denominator, sample size, or comparison group.
- Next Check: Ask what the number is out of and which comparison group makes it interpretable.

### Comfortable-answer risk

- Risk: Treating the easiest answer as settled before pressure-testing alternatives.
- Next Check: Ask what evidence would make this answer harder to defend.

### Binary-frame risk

- Risk: Collapsing a messy decision into a yes-or-no frame and missing a middle path.
- Next Check: Ask what third option, delay, or partial step has been left out.

### Base-rate neglect risk

- Risk: Judging the claim without comparing it to the normal rate or background pattern.
- Next Check: Ask what baseline rate would make this finding look large, small, or ordinary.
