Few-shot prompting means showing the model one to five input-output example pairs before asking for the real one. In 2026 it is still the most reliable way to lock in format and tone.
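A common way to deliver those examples is as alternating user/assistant turns in a chat request. The sketch below builds such a message list; the role/content dict shape follows the widely used chat-completions convention, so adapt it to whatever SDK you actually call.

```python
def build_few_shot_messages(instruction, examples, query):
    """Turn (input, output) example pairs into alternating
    user/assistant messages, ending with the real query."""
    messages = [{"role": "system", "content": instruction}]
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": query})
    return messages

# Example: the sentiment-classification prompt from this article.
msgs = build_few_shot_messages(
    "Classify sentiment as Positive, Negative, or Neutral. "
    "Respond with only the label.",
    [("The product broke in a week.", "Negative"),
     ("Works exactly as advertised.", "Positive")],
    "Shipping was slow but the quality is great.",
)
```

Passing examples as prior turns (rather than cramming them into one string) tends to make the format stick, since the model sees its "own" earlier answers following the pattern.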
Classify the sentiment of the following reviews as Positive, Negative, or Neutral. Respond with only the label. Example 1: "The product broke in a week." -> Negative. Example 2: "Works exactly as advertised." -> Positive. Example 3: "It's okay I guess." -> Neutral. Now: "Shipping was slow but the quality is great." ->
Convert the following user requests into SQL queries against the orders table (id, user_id, total, created_at). Example: "How much did I spend in July?" -> SELECT SUM(total) FROM orders WHERE user_id = :me AND created_at BETWEEN '2026-07-01' AND '2026-07-31'; Example: "Show my last 5 orders." -> SELECT * FROM orders WHERE user_id = :me ORDER BY created_at DESC LIMIT 5; Now: "What was my biggest purchase this year?"
Rewrite the following sentences in the style of Hemingway (short, concrete, no adverbs). Example: "She was walking quickly through the incredibly foggy streets." -> "She walked fast. The fog was thick." Example: "He felt extraordinarily sad about losing his beloved dog." -> "His dog was gone. He was sad." Now: [paste sentence].
Extract entities from news headlines. Output as JSON with keys "person", "org", "location". Example: "Tim Cook Announces Apple Vision Pro 2 in Cupertino" -> {"person": "Tim Cook", "org": "Apple", "location": "Cupertino"}. Example: "Reuters: Sam Altman Meets World Leaders in Davos" -> {"person": "Sam Altman", "org": "Reuters", "location": "Davos"}. Now: [paste headline].
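When the examples demand JSON output, validate the reply before trusting it. A minimal sketch (the `parse_entities` helper is illustrative, not from any library): parse with the standard `json` module and check that all required keys are present, raising so the caller can retry the request.

```python
import json

REQUIRED_KEYS = {"person", "org", "location"}

def parse_entities(raw: str) -> dict:
    """Parse the model's JSON reply and verify required keys;
    raises ValueError on malformed output so callers can retry."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# Example reply matching the headline prompt above.
reply = '{"person": "Tim Cook", "org": "Apple", "location": "Cupertino"}'
entities = parse_entities(reply)
```

A retry loop around this check is usually enough; with two or three JSON examples in the prompt, malformed replies become rare.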
Write email subject lines for cold outreach. Format: under 40 chars, one specific fact, no emojis, ends with a hook. Example: Context: "prospect got Series A, we sell churn tools." -> "Series A congrats + your first churn cliff". Example: Context: "prospect just launched a podcast, we do transcription." -> "Your new podcast + 3 minutes saved per ep". Now: Context: [paste].
Summarize research papers into structured abstracts. Example: Input: "Paper on LLM hallucination rates…" -> Title: [title]. Problem: hallucinations in LLMs. Method: benchmark of 12 models on 5 tasks. Finding: best model hallucinated 2% of the time. Limitation: only English tasks. Now: [paste paper excerpt].
Generate Git commit messages following Conventional Commits. Example: Diff: "added login button." -> "feat(auth): add login button to header". Example: Diff: "fixed typo in readme." -> "docs(readme): fix typo in setup instructions". Now: Diff: [paste].
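You can also lint the model's output against the expected shape. The regex below is a simplified check for the `type(scope): description` form shown in the examples, covering the common Conventional Commits types; it is a sketch, not the full specification.

```python
import re

# Simplified "type(scope): description" check for the
# Conventional Commits shape used in the examples above.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore)"
    r"(\([a-z0-9-]+\))?: .+"
)

def is_conventional(message: str) -> bool:
    """Return True if the message matches the simplified shape."""
    return bool(COMMIT_RE.match(message))
```

Rejecting and re-prompting on a failed check keeps commit history clean even when the model occasionally drifts from the format.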
Translate product descriptions into Spanish while keeping brand names untouched and preserving markdown. Example: "Acme Pro is the best CRM for teams." -> "Acme Pro es el mejor CRM para equipos." Example: "## Features\n- Fast\n- Secure" -> "## Características\n- Rápido\n- Seguro". Now: [paste].
Write user stories in the format "As a [role], I want [goal] so that [benefit]." Example: feature: dark mode toggle -> "As a user who reads at night, I want a dark mode toggle so that I can reduce eye strain." Example: feature: CSV export -> "As a finance analyst, I want to export data to CSV so that I can run custom analysis in Excel." Now: feature: [paste].
| Tool | Strength | Free Tier | Best Use Case |
|---|---|---|---|
| GPT-5 | Fast few-shot | Yes | Any task |
| Claude 4.6 | Long examples (1M ctx) | Yes | Document transforms |
| Gemini 2.5 Pro | Multimodal few-shot | Yes | Image + text |
| DSPy | Auto-optimize prompts | Yes | Production |
| PromptHub | Versioning | Yes | Teams |
How many examples are optimal? 3 for most tasks. 1 is often enough with newer models. >5 has diminishing returns.
Zero-shot vs few-shot? Zero-shot for simple tasks; few-shot for format-sensitive or niche domains.
Can examples cause bias? Yes — ensure diversity. Classifier examples must include all labels.
Does few-shot work on reasoning models (o1, o3)? Yes, but less impact — they already reason well. Still useful for format control.
How do I automate example selection? RAG + few-shot: retrieve 3 most similar examples per query. DSPy does this natively.
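The retrieval step can be sketched in a few lines. In a real system you would embed queries and examples with an embedding model; here the vectors are toy hand-made embeddings, and `top_k_examples` is an illustrative helper, not a library API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_examples(query_vec, example_bank, k=3):
    """example_bank: list of (embedding, example_text) pairs.
    Returns the k example texts most similar to the query."""
    ranked = sorted(example_bank,
                    key=lambda pair: cosine(query_vec, pair[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

# Toy 2-d embeddings standing in for real model embeddings.
bank = [((1.0, 0.0), "refund request example"),
        ((0.0, 1.0), "shipping question example"),
        ((0.9, 0.1), "chargeback example")]
picked = top_k_examples((1.0, 0.0), bank, k=2)
```

The selected examples are then slotted into the prompt in place of a fixed set, so each query sees the demonstrations most relevant to it.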
Few-shot vs fine-tuning? Few-shot for < 10k examples or quick iteration; fine-tune for production at scale.
Does order of examples matter? Yes — most recent examples have highest influence. Put your best example last.
Few-shot prompting is deceptively simple — show, don't tell. Master it and you'll get consistent JSON, cleaner classifications, and brand-matching copy without fine-tuning.