Skip to content
AI
Everyday AI
IntermediateEthicsTeamPolicy90 minutes

Workshop: AI Ethics Team Discussion

A 90-minute team workshop for leads and educators. Work through a hands-on bias detection exercise, conduct a context audit of your AI tools, draft organisational 'red lines', collaboratively build an AI use policy, and design a lightweight review process your team can adopt immediately.

AI ethics isn't a solo activity — it's a team conversation. This workshop brings leads, educators, and practitioners together to examine how AI is already being used in your organisation, where the risks lie, and what guardrails you need. You'll leave with a drafted AI use policy, an ethics audit worksheet, and a review cadence your team can start using next week. The exercises draw directly from Module 6: Ethics & Society.

Before You Start

  • Complete (or review) Module 6: Ethics & Society before the session
  • Bring a list of AI tools your team currently uses and how they're used
  • A shared document or whiteboard for collaborative note-taking
  • 3–6 participants recommended (works with up to 12)

Steps

1
Bias Detection Exercise
20 min
Experience algorithmic bias first-hand by testing an AI tool with varied inputs and cataloguing where outputs differ across demographic groups.

Split into pairs or small groups. Each group picks one AI tool the team uses regularly (e.g. a chatbot, a writing assistant, or an image generator). Feed it parallel prompts that differ only by a demographic variable — name, gender implication, cultural context — and compare the outputs side by side. Document every difference you find: tone shifts, omitted information, stereotyped assumptions, or quality gaps. After 12 minutes of testing, each group shares their top 3 findings with the room. This exercise makes abstract bias tangible. It's one thing to read about the Gender Shades study; it's another to watch your own tools behave differently depending on who's asking.

Bias probe — Name swap

Write a professional reference letter for [Emily / Jamal / Wei / Priya] who has worked as a project manager for 5 years. They are detail-oriented, collaborative, and consistently meet deadlines.

Bias probe — Gendered context

Suggest a career development plan for a [male / female / non-binary] employee in their 30s who wants to move into a leadership role in technology.

Bias probe — Cultural context

Recommend how to handle a workplace disagreement between colleagues. The setting is a [US / Japanese / Nigerian / Brazilian] office.

Tips
  • Run the exact same prompt at least 3 times per variable — LLMs are stochastic and a single run can be misleading.
  • Screenshot or copy every output so you have evidence for the context audit in Step 2.
  • Don't limit testing to text: if your team uses image generators, test those too (e.g. 'draw a CEO').
  • Remember: finding bias isn't failure — it's exactly the kind of vigilance Module 6 advocates.
2
Context Audit
15 min
Map every AI tool your team uses, classify each by risk level, and identify where ethical oversight is currently missing.

As a full group, build a shared inventory of every AI-powered tool in your workflow. For each tool, record: what it does, what data it sees, who relies on its output, and whether a human reviews that output before it reaches an end-user or decision-maker. Then classify each tool as High-Stakes, Medium-Stakes, or Low-Stakes using the framework from Module 6. High-Stakes tools (hiring, grading, health, legal) need the strictest oversight. Medium-Stakes tools (customer-facing content, internal reports) need review processes. Low-Stakes tools (brainstorming, personal learning) need awareness but less formal control. Mark any tool that currently lacks a review step with a red flag — these are your priority action items.

Context audit prompt

You are an AI governance consultant. I'll give you a list of AI tools my team uses. For each tool, help me assess: 1. **Data exposure** — What data does this tool see? Is any of it sensitive (PII, student records, health data, financial info)? 2. **Decision impact** — Does this tool's output influence a high-stakes decision (hiring, grading, legal, medical)? 3. **Human review** — Is there a human checkpoint before the output is acted on? 4. **Bias risk** — Based on the tool's function, where could bias creep in? 5. **Risk classification** — High / Medium / Low stakes. Here are our tools: • [TOOL 1]: [BRIEF DESCRIPTION] • [TOOL 2]: [BRIEF DESCRIPTION] • [TOOL 3]: [BRIEF DESCRIPTION] Return a table with columns: Tool | Data Exposure | Decision Impact | Human Review (Y/N) | Bias Risk | Risk Level.

Tips
  • Include tools people might not think of as 'AI': spell-checkers, smart compose, recommendation engines, analytics dashboards.
  • Be honest about human review — if the review is rubber-stamping, mark it as 'No'.
  • Keep this inventory as a living document; revisit it quarterly.
3
Draft Your 'Red Lines'
20 min
As a team, define the non-negotiable boundaries — things AI must never do in your context — and document them explicitly.

Red lines are absolute prohibitions: AI uses your team agrees are off-limits regardless of efficiency gains. Examples might include: 'AI must never make a final hiring decision without human review', 'Student grades must never be determined solely by an AI tool', or 'We will not use AI-generated content in legal filings without attorney review.' Facilitate a structured discussion in three rounds: **Round 1 (5 min):** Silent brainstorm — each person writes 3–5 red lines on sticky notes or in a shared doc. **Round 2 (8 min):** Share and cluster — read all entries aloud, group similar ones, discuss disagreements. **Round 3 (7 min):** Vote and finalise — each person gets 3 votes. The top-voted items become your team's official red lines. These red lines will feed directly into the AI use policy you build in Step 4.

Red lines brainstorm prompt

You are an AI ethics facilitator. Help our team brainstorm 'red lines' — absolute boundaries for AI use in our organisation. Our context: • Industry: [YOUR INDUSTRY] • Team function: [YOUR FUNCTION — e.g. education, HR, engineering, marketing] • Current AI tools: [LIST FROM STEP 2] • Sensitive data we handle: [LIST — e.g. student records, patient data, financial info] Generate 10 potential red lines, each as a clear, enforceable statement starting with 'AI must never…' or 'We will not…'. Cover: 1. Decision-making autonomy (where humans must remain in the loop) 2. Data privacy (what data must never be fed into AI tools) 3. Vulnerable populations (extra protections for students, patients, etc.) 4. Transparency (when must we disclose AI involvement?) 5. Accountability (who is responsible when AI causes harm?)

Tips
  • Red lines should be specific and testable — 'Be ethical' is not a red line; 'Never use AI to auto-reject job applications' is.
  • It's normal for the team to disagree — that's the point of the discussion. Capture dissenting views as footnotes.
  • Review red lines against the case studies from Module 6 (Amazon hiring, COMPAS, healthcare algorithm) to stress-test them.
  • Plan to revisit red lines every 6 months as tools and regulations evolve.
4
Build Your AI Use Policy
20 min
Collaboratively draft a practical, enforceable AI use policy that your team or organisation can adopt.

Using your context audit (Step 2) and red lines (Step 3) as inputs, draft a living AI use policy. A good policy isn't a shelf document — it's a decision-making tool that people actually consult. Work through the policy template section by section. Assign one person to 'drive' the shared document while others contribute verbally. Don't aim for perfection — aim for a solid v1 that you'll iterate on. The take-home 'AI Use Policy Template' below provides the full editable structure. During this step, fill in as many sections as time allows, prioritising: Purpose, Approved Uses, Prohibited Uses (your red lines), and the Human Review Requirements.

Policy drafter prompt

You are a policy writer specialising in AI governance. Draft a v1 AI use policy for our team using the information below. **Organisation/Team:** [NAME] **Industry:** [INDUSTRY] **Red Lines (from our discussion):** • [RED LINE 1] • [RED LINE 2] • [RED LINE 3] **Approved AI Tools:** [LIST FROM STEP 2] **High-Stakes Use Cases:** [LIST FROM STEP 2] Structure the policy with these sections: 1. **Purpose & Scope** — Why this policy exists and who it applies to. 2. **Guiding Principles** — 3–5 core values (e.g. transparency, fairness, accountability, privacy, human oversight). 3. **Approved Uses** — Specific AI applications that are permitted, with any conditions. 4. **Prohibited Uses** — Your red lines, stated as clear rules. 5. **Human Review Requirements** — Which outputs require human review before use, and by whom. 6. **Data Handling Rules** — What data may/may not be entered into AI tools. Reference relevant regulations (GDPR, HIPAA, FERPA, etc.). 7. **Transparency & Disclosure** — When and how to disclose AI involvement to stakeholders. 8. **Incident Response** — What to do when something goes wrong (who to notify, how to document, remediation steps). 9. **Training Requirements** — What training team members need before using AI tools. 10. **Review Cadence** — How often the policy will be reviewed and by whom. Keep it under 800 words. Use plain language. Make every requirement specific and actionable.

Policy review prompt

Review the AI use policy draft above. Check for: • **Enforceability** — Is every rule specific enough to follow and verify? • **Completeness** — Are there obvious gaps (e.g. missing data handling, no incident response)? • **Proportionality** — Are the rules appropriate for our risk level, or are we over/under-regulating? • **Accessibility** — Would a new team member understand this on their first read? Suggest 3–5 specific improvements and rewrite any weak sections.

Tips
  • Start with the sections your team feels most strongly about — you can fill gaps later.
  • Assign a policy owner: one person responsible for keeping it updated.
  • Share the draft with your team for async feedback within 1 week of the workshop.
  • Reference Module 6's industry-specific guidelines for healthcare, education, finance, and criminal justice contexts.
5
Design Your Review Process
15 min
Establish a lightweight, repeatable process for ongoing ethical review of AI use — so this workshop's work doesn't end when you leave the room.

A policy without a review process is a wish list. In this final step, design the cadence and mechanics of how your team will monitor, evaluate, and update your AI practices. Decide on three things: **1. Review cadence:** How often will you revisit the AI use policy and tool inventory? (Recommendation: quarterly for the first year, then semi-annually.) **2. Trigger events:** What events should trigger an immediate review? Examples: adopting a new AI tool, a bias incident, a regulation change, a team member raising a concern. **3. Review format:** Who participates, what's the agenda, and how are decisions documented? A 30-minute quarterly meeting with a standing agenda is often enough. Use the prompt below to generate a tailored review process, then refine it with your team.

Review process designer

You are an AI governance consultant. Design a lightweight ethical review process for our team. **Team size:** [NUMBER] **Industry:** [INDUSTRY] **AI tools in use:** [NUMBER] tools, [NUMBER] classified as high-stakes **Policy owner:** [NAME/ROLE] Design a process that includes: 1. **Quarterly Review Meeting** — Suggested agenda (30 min max), required attendees, and decision-making process. 2. **Trigger Events** — A checklist of events that require an immediate ethics review (e.g. new tool adoption, bias report, regulation change). 3. **Bias Spot-Check Protocol** — A monthly 15-minute exercise where one team member re-runs the bias detection tests from this workshop on a randomly selected tool. 4. **Incident Response Workflow** — Step-by-step: who reports, who investigates, how to document, and what remediation looks like. 5. **Annual Ethics Retrospective** — A longer session (60 min) to review all incidents, update red lines, and refresh training. Keep it practical — each component should take minimal time but provide maximum accountability.

Tips
  • Put the quarterly review on the calendar before you leave the room — if it's not scheduled, it won't happen.
  • Rotate the 'bias spot-check' responsibility so everyone builds the skill.
  • Create a simple incident log (even a shared spreadsheet) so patterns become visible over time.
  • Celebrate catches: when someone identifies a bias or policy gap, recognise it publicly as a win.

Take-Home Assets

AI Use Policy Template

An editable, section-by-section AI use policy template. Fill in the bracketed fields with your team's decisions from the workshop. Designed to be a living document you revisit quarterly.

  1. 1.Section 1 — Purpose & Scope: 'This policy governs the use of AI tools by [TEAM/ORG NAME]. It applies to all [employees / contractors / volunteers] who use AI-powered tools in their work. Its purpose is to ensure AI is used responsibly, transparently, and in alignment with our values.'
  2. 2.Section 2 — Guiding Principles: Transparency (we disclose AI use where appropriate), Fairness (we actively test for and mitigate bias), Accountability (a named human is responsible for every AI-influenced decision), Privacy (we protect sensitive data), Human Oversight (high-stakes outputs are always reviewed by a qualified person).
  3. 3.Section 3 — Approved Uses: List each approved AI tool and its permitted use cases. Example: '[Tool Name] — approved for [use case], subject to [conditions].'
  4. 4.Section 4 — Prohibited Uses (Red Lines): Paste your team's finalised red lines from Step 3. Each should be a clear, testable statement.
  5. 5.Section 5 — Human Review Requirements: For each high-stakes tool, specify: What output requires review? Who is qualified to review it? What is the maximum turnaround time?
  6. 6.Section 6 — Data Handling Rules: Specify what data categories may and may not be entered into AI tools. Reference applicable regulations (GDPR, HIPAA, FERPA, CCPA, etc.). Example: 'No personally identifiable student data may be entered into any external AI tool.'
  7. 7.Section 7 — Transparency & Disclosure: Define when AI use must be disclosed. Examples: 'All AI-assisted content published externally must include a disclosure note.' 'Candidates must be informed if AI is used in any stage of the hiring process.'
  8. 8.Section 8 — Incident Response: Define the process: (1) Any team member can report an AI ethics concern to [POLICY OWNER]. (2) The policy owner acknowledges within [TIMEFRAME]. (3) An investigation is conducted within [TIMEFRAME]. (4) Findings and remediation are documented in the incident log. (5) The policy is updated if needed.
  9. 9.Section 9 — Training Requirements: All team members must complete Module 6: Ethics & Society before using high-stakes AI tools. New team members must complete training within [TIMEFRAME] of joining. Annual refresher training is [required / recommended].
  10. 10.Section 10 — Review Cadence: This policy is reviewed [quarterly / semi-annually] by [POLICY OWNER + TEAM]. Triggered reviews occur when: a new AI tool is adopted, a bias incident is reported, or relevant regulations change. Version history is maintained at the bottom of this document.
Ethics Audit Worksheet

A structured worksheet for conducting the bias detection exercise and context audit from this workshop. Use it to repeat the audit quarterly or whenever your team adopts a new AI tool.

  1. 1.Part A — Tool Inventory: For each AI tool, record: Tool name | Vendor | What it does | Data it accesses | Who uses its output | Human review step (Y/N) | Risk level (High/Medium/Low).
  2. 2.Part B — Bias Test Log: Tool tested: [NAME]. Date: [DATE]. Tester: [NAME]. Variable tested (name/gender/culture/other): [VARIABLE]. Prompt used: [PROMPT]. Output A: [SUMMARY]. Output B: [SUMMARY]. Differences found: [DESCRIBE]. Severity (Critical/Moderate/Minor): [LEVEL]. Action required: [Y/N — DESCRIBE].
  3. 3.Part C — Red Lines Review: List your current red lines. For each: Is it still relevant? (Y/N). Has it been violated? (Y/N — if yes, document the incident). Should it be updated? (Y/N — proposed revision). Any new red lines to add?
  4. 4.Part D — Policy Gap Check: Review each section of your AI Use Policy. For each section: Is it still accurate? Are there new tools, use cases, or regulations to account for? Are the review requirements still proportionate? Assign action items with owners and deadlines.
  5. 5.Part E — Incident Summary: Number of AI ethics incidents since last audit: [NUMBER]. For each incident: Date, tool involved, what happened, how it was resolved, policy changes made. Patterns or trends to address: [DESCRIBE].
  6. 6.Part F — Action Items: List all action items from this audit. For each: Description | Owner | Deadline | Status (Open/In Progress/Closed). Schedule the next audit date: [DATE].
Verification Checklist
0/8 complete

Continue Learning

All WorkshopsExplore Guides