Workshop: Build Your Prompt Library
A 75-minute workshop for regular AI users. Audit your AI usage, identify your top 5 recurring tasks, craft reusable prompts using the Role + Task + Context + Critique framework, organize them in Notion or Google Docs, then test and iterate until each prompt reliably delivers.
You already use AI regularly — but are you retyping the same instructions every time? This workshop helps you turn ad-hoc prompting into a personal prompt library: a curated set of reusable, tested prompts you can pull up in seconds. By the end you will have five polished prompts, a Notion or Google Doc library, and a maintenance routine to keep it fresh.
Before You Start
- ✓ An active account on at least one AI chatbot (ChatGPT, Claude, Gemini, etc.)
- ✓ A Notion account or Google Doc — free tier is fine
- ✓ At least two weeks of casual AI usage so you have patterns to audit
Steps
Open your AI chat history (or recall from memory) and list every distinct task you have used AI for in the past two weeks. Don't filter — quantity matters more than quality right now. Group similar requests together (e.g. 'rewrite email' and 'make this more polite' are the same category). The goal is a raw inventory of your AI habits.
I want to audit how I use AI. Here is a brain-dump of every task I have used you (or another AI) for in the last two weeks: 1. … 2. … 3. … Group these into categories, count how often each category appears, and rank them from most to least frequent.
- Check your chat history — most AI tools let you scroll back through past conversations.
- Include small tasks too: 'fix this grammar' and 'summarize this article' both count.
- If you use multiple AI tools, audit all of them.
From your audit, pick five tasks that are (a) frequent — you do them at least weekly, and (b) formulaic enough that a template prompt would work across instances. Avoid one-off creative tasks; focus on tasks where consistency and speed matter. Write a one-sentence description of each.
From the categories you just identified, help me choose the top 5 that would benefit most from a reusable prompt template. For each, tell me: the task name, why it's a good candidate, and a one-sentence description of what the prompt should do.
- Good candidates: weekly reports, email rewrites, meeting prep, data summaries, code reviews.
- Bad candidates: 'write my novel' — too open-ended for a reusable template.
- Rank by frequency × time saved, not by complexity.
Every strong reusable prompt has four layers: Role (who the AI should be), Task (what it should do), Context (background info and constraints), and Critique (how it should self-check). This framework — Role + Task + Context + Critique — turns vague instructions into precise, repeatable templates. Each layer is a slot you fill in; the structure stays the same across tasks.
You are a senior email copywriter (ROLE). Rewrite the rough draft below into a concise, professional email (TASK). The recipient is a client who missed a payment deadline; tone should be firm but polite; max 150 words (CONTEXT). After writing, list three things a skeptical reader might misinterpret and revise to fix them (CRITIQUE). Rough draft: Hey, you haven't paid yet. Can you send the money?
- Role changes tone and vocabulary — experiment with different expertise levels.
- Context is where most prompts fail: be specific about audience, length, tone, and format.
- The Critique layer is optional but dramatically improves first-pass quality.
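To make the four layers easy to reuse, here is a generic fill-in skeleton (the bracketed tokens are placeholders you supply each time; the labels and wording are illustrative, not a fixed standard):

```
ROLE:     You are a [EXPERTISE_LEVEL] [JOB_TITLE].
TASK:     [VERB] a [DELIVERABLE] from the input below.
CONTEXT:  Audience: [AUDIENCE]. Length: [LENGTH]. Tone: [TONE]. Format: [FORMAT].
CRITIQUE: Before finalizing, check the output against [CRITERIA] and revise once.
INPUT:    [PASTE_MATERIAL_HERE]
```

Keeping the layer labels visible while you draft makes it obvious which slot is underspecified when an output misses the mark.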
Take the top task from step 2 and write a full R+T+C+C prompt for it. Use placeholder tokens like [TOPIC], [AUDIENCE], or [DRAFT] for the parts that change each time. The rest of the prompt stays fixed. Test it immediately — paste it into your AI tool with a real example and evaluate the output.
Help me build a reusable prompt template for this task: [describe your #1 task here]. Structure it using the R+T+C+C framework: • Role: who should the AI be? • Task: what exactly should it produce? • Context: what constraints, audience, and format apply? • Critique: how should it self-check the output? Use [PLACEHOLDER] tokens for the parts that change each time I use it.
- Good placeholder names are descriptive: [CLIENT_NAME], [MEETING_DATE], [ROUGH_DRAFT].
- Test with a real example right now — don't wait until later.
- If the output is off, adjust Context first: that is usually the weakest layer.
Repeat the process for tasks 2 through 5. You should be faster now that you understand the framework. Spend about 3 minutes per prompt: draft the template, run one test, and note any adjustments. Don't aim for perfection — a solid 80% prompt that you refine later beats a never-finished masterpiece.
I need to build four more reusable prompt templates quickly. For each task below, generate an R+T+C+C template with [PLACEHOLDER] tokens: Task 2: [describe task] Task 3: [describe task] Task 4: [describe task] Task 5: [describe task] Keep each template under 120 words. Use consistent formatting.
- Reuse successful patterns: if a Role worked well in prompt 1, try it in prompt 3.
- If a task doesn't fit R+T+C+C neatly, it might not be the right candidate — swap it out.
- Keep a quick note of what worked and what didn't for the iteration step.
Before organizing, take a few minutes to browse our prompt cheat sheets. They contain proven prompt patterns across categories like writing, analysis, coding, and brainstorming. You may discover techniques — like output-format pinning or persona stacking — that you can fold into the templates you just built.
- Look for patterns you haven't tried: few-shot examples, structured output (JSON/Markdown), chain-of-thought.
- Steal shamelessly — adapt any cheat-sheet prompt to your own context.
- Bookmark anything interesting; you can revisit after the workshop.
Create a new page in Notion (or a Google Doc) titled 'My Prompt Library'. Add a table or section for each prompt with columns: Name, Category, Template Text, Placeholders, Last Tested, and Notes. Paste in your five templates. A consistent structure makes it easy to find, share, and update prompts over time.
Create a Markdown table template for a personal prompt library. Columns: Prompt Name, Category, R+T+C+C Template (abbreviated), Placeholders, Last Tested Date, Notes. Pre-fill it with 5 example rows using generic task names. I will replace them with my real prompts.
- Notion databases let you filter and tag — ideal for a growing library.
- Google Docs work too: use headings + a table of contents for quick navigation.
- Add a 'Status' column: Draft → Tested → Proven → Retired.
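If you go the Google Doc or plain-Markdown route, the table might look something like this (the example row is purely illustrative; replace it with your own prompts):

```markdown
| Name          | Category | Template Text (abbrev.)             | Placeholders          | Last Tested | Status | Notes            |
|---------------|----------|-------------------------------------|-----------------------|-------------|--------|------------------|
| Email Rewrite | Writing  | Senior copywriter rewrites draft... | [ROUGH_DRAFT], [TONE] | this week   | Tested | Keep under 150 w |
```

One row per prompt keeps the library scannable; the abbreviated template column is just a reminder — store the full template text in the row's detail page or directly below the table.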
Go through each of your five prompts and run them with real data. Score each output on a 1–5 scale for accuracy, tone, and completeness. For anything scoring 3 or below, tweak the Context or Critique layer and retest. Finally, set a calendar reminder to review your library every two weeks — retire prompts that no longer serve you and add new ones.
I just tested my prompt template and here is the output: [paste AI output] Rate this output on accuracy (1–5), tone (1–5), and completeness (1–5). For any dimension scoring 3 or below, suggest a specific change to the prompt template that would improve it. Show the revised template.
- The first version of a prompt is never the best — expect 2–3 iterations.
- Keep old versions in your Notes column so you can roll back.
- A bi-weekly review takes 10 minutes and keeps your library sharp.
Take-Home Assets
Run through this checklist every time you create or update a reusable prompt. Print it or keep it next to your prompt library.
1. Role — Have I told the AI who to be? (expertise, tone, perspective)
2. Task — Is the desired output crystal clear? (format, length, deliverable)
3. Context — Have I supplied audience, constraints, background, and examples?
4. Critique — Did I ask the AI to self-review before finalizing?
5. Placeholders — Are variable parts marked with [TOKENS] for easy reuse?
6. Test — Have I run the prompt with real data at least once?
7. Score — Does the output score 4+ on accuracy, tone, and completeness?
8. Store — Is the prompt saved in my library with metadata (category, date, notes)?
A pocket reference for the Role + Task + Context + Critique prompt structure. Keep it visible while prompting.
1. Role — 'You are a [senior/expert/friendly] [job title]…' Sets expertise and tone.
2. Task — 'Your job is to [verb] a [deliverable]…' Defines the output.
3. Context — Audience, length, format, tone, constraints, examples. The more specific, the better.
4. Critique — 'After completing the task, review your output for [criteria] and revise.' Self-correction loop.
A ready-made Notion database template with columns for Name, Category, Template, Placeholders, Status, and Notes. Duplicate it into your workspace to get started instantly.