Key Idea
In one line: Few-shot means stuffing a handful of "input → output" examples in the prompt so the model pattern-matches and follows. No training, no fine-tuning — a zero-cost way to lock output format and style.
What it is#
By number of examples:
- Zero-Shot — no examples, just ask.
- One-Shot — one example.
- Few-Shot — 2–5 (returns diminish past that).
```text
Classify the following email as [spam / personal / work]:

Email: "Congrats! Click to claim..."
Label: spam

Email: "Project meeting moved to 3pm tomorrow"
Label: work

Email: "Want to go hiking this weekend?"
Label: personal

Email: "iPhone flash sale, 3 hours left!"
Label:
```
The model will continue with "spam".
Analogy#
Few-shot is handing a new hire three templates: "all future contracts follow this format." They just swap fields and submit — no retraining required.
Key concepts#
In-Context Learning
The model does not update weights — it infers the task purely from the examples in the prompt. One of LLMs' strongest capabilities.
Schema Inference
Show a few JSON examples, and the output naturally arrives in that JSON shape.
Order Bias
Example order subtly tilts the output. The last example has the strongest pull.
Diminishing Returns
3–5 examples are usually enough; more burns context for little gain.
Practical notes#
- Diversify examples. Three "spam" examples in a row will tilt the model — show each class at least once.
- Right placement. Put structural constraints in the system message; concrete examples feel more natural in the first user message.
- Combine with CoT. Include the reasoning in your examples (Few-Shot + CoT) — the model will imitate the reasoning too, beating either technique alone.
- Format is a contract. Use `{"label": "spam"}` in examples and the model will almost always return that exact structure.
- Mind the token cost. Examples consume context on every request. For high-volume calls, consider fine-tuning so the pattern is baked into weights.
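The "diversify examples" and order-bias points can be handled together by interleaving examples round-robin across labels, so no class forms a run. A small helper sketch (the name `interleave_by_label` is mine):

```python
from collections import defaultdict
from itertools import zip_longest

def interleave_by_label(examples: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Reorder (text, label) pairs round-robin by label, so consecutive
    examples come from different classes whenever possible."""
    buckets: dict[str, list] = defaultdict(list)
    for ex in examples:
        buckets[ex[1]].append(ex)
    # Take one example per class per round; zip_longest pads short classes.
    rounds = zip_longest(*buckets.values())
    return [ex for rnd in rounds for ex in rnd if ex is not None]

shots = [("a", "spam"), ("b", "spam"), ("c", "work"), ("d", "personal")]
mixed = interleave_by_label(shots)
# label order is now spam, work, personal, spam — no back-to-back run
```

This doesn't eliminate order bias (the last example still pulls hardest), but it avoids the worst case of several same-class examples in a row.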
Easy confusions#
Few-Shot (runtime)
Examples are pasted in **every request**.
Zero setup cost, but burns context.
Fine-tuning (train time)
**Train once**; examples are baked into weights.
Has training cost, but saves tokens long-term.
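The runtime-vs-train-time trade-off is just arithmetic: few-shot pays its example tokens on every call, fine-tuning pays once. A back-of-the-envelope break-even sketch (all numbers below are placeholders, not real pricing):

```python
import math

def breakeven_calls(example_tokens: int, price_per_1k: float, finetune_cost: float) -> int:
    """Number of requests after which a one-off fine-tune is cheaper than
    resending `example_tokens` of few-shot examples with every call."""
    per_call = example_tokens / 1000 * price_per_1k  # cost of the examples per request
    return math.ceil(finetune_cost / per_call)

# e.g. 600 tokens of examples at $0.01 / 1k tokens vs. a $50 fine-tune run
calls = breakeven_calls(600, 0.01, 50.0)
```

Below the break-even volume, few-shot wins on flexibility and zero setup; above it, the recurring token cost starts to dominate.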
Zero-Shot
Relies on the prompt description + the model's world knowledge.
Drifts on complex formats.
Few-Shot
Examples act as a **strong format constraint**.
Output stability shoots up.
Further reading#
- CoT — Few-Shot + CoT is the "most powerful combo"
- System Prompt — examples in system or user message?
- Fine-tuning / SFT — when examples accumulate enough, just train