Artificial Intelligence
Foundational concepts — the shortest path into the topic. 24 topics across 5 chapters.
LLM Basics (5 topics)
LLM (Large Language Model)
The 'brain' of generative AI — a 'next-token probability machine' trained on massive text corpora.
Token
The smallest piece of text an AI handles — and the unit you get billed in.
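For a concrete feel of characters versus tokens, here is a minimal sketch using the open-source tiktoken tokenizer (an assumption; each model family ships its own tokenizer, so the counts differ between models):

```python
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models (assumption).
enc = tiktoken.get_encoding("cl100k_base")

text = "Large language models read tokens, not characters."
token_ids = enc.encode(text)

print(len(text), "characters ->", len(token_ids), "tokens")  # billing follows the token count
print(token_ids[:8])          # the integer IDs the model actually sees
print(enc.decode(token_ids))  # decoding round-trips back to the original text
```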
Context Window
The model's short-term-memory cap — the maximum tokens it can see in one inference.
Parameters
What 7B / 72B actually mean — the most direct yardstick of model size and capability.
Hallucination
When the model confidently makes things up — an inherent side-effect of probabilistic generation.
Prompts & Control (4 topics)
System Prompt
The model's global setup — identity, behaviour, and output format, set in stone for the conversation.
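A hedged sketch of where the system prompt lives in an API call, using the OpenAI Python SDK; the model name, the product, and the persona wording are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        # The system prompt fixes identity, behaviour, and output format
        # for every turn that follows in this conversation.
        {
            "role": "system",
            "content": "You are a terse support agent for the (hypothetical) AcmeDB product. "
                       "Only answer questions about AcmeDB, and reply as short bullet points.",
        },
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```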
Temperature & Top-P
The two core knobs that trade randomness, creativity, and rigour in an LLM's output.
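A small sketch of the two knobs, using the same assumed SDK setup; the values are chosen only to make the contrast visible:

```python
from openai import OpenAI

client = OpenAI()
prompt = "Suggest one name for a note-taking app."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 is near-deterministic; higher values add randomness
        top_p=0.9,                # sample only from the smallest set covering 90% of probability mass
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

Provider documentation generally advises tuning one of the two knobs at a time rather than both together.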
Few-Shot Prompting
Show the model a few examples — it imitates and produces the same shape of answer.
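One common way to wire this into a chat API is to encode the examples as earlier user/assistant turns, as in this sketch (the reviews and labels are made up):

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Classify each review as positive or negative. Reply with one word."},
    # Two worked examples ("shots") whose shape the model will imitate.
    {"role": "user", "content": "Review: The battery lasts all week."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: It broke after two days."},
    {"role": "assistant", "content": "negative"},
    # The real query, answered in the same one-word format.
    {"role": "user", "content": "Review: Setup was painless and the screen is gorgeous."},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)  # expected: "positive"
```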
CoT (Chain of Thought)
One phrase — 'think step by step' — gets the model to lay out its intermediate reasoning, markedly improving accuracy on multi-step problems.
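A minimal sketch of the idea; the exact wording of the instruction is an assumption, and many newer models apply this kind of step-by-step reasoning on their own:

```python
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 09:40 and arrives at 13:05. How long is the journey?"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": question + "\n\nThink step by step, then give the final answer on the last line.",
    }],
)
print(response.choices[0].message.content)
```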
Agent Core (5 topics)
Agent
Upgrading 'chatbot' to 'AI that can actually get things done.'
ReAct (Reason + Act)
The core agent pattern — think one step, act, observe, then think again.
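A stripped-down sketch of that loop. The single stub tool and the plain-text 'Action: ... / Final Answer: ...' protocol are illustrative assumptions; real agent frameworks usually use structured tool calls instead:

```python
from openai import OpenAI

client = OpenAI()

def search_wiki(query: str) -> str:
    # Stub tool; a real agent would call an actual search API here.
    return "Mount Everest is 8,849 m tall."

history = [
    {"role": "system", "content": "Answer the question. Reply with either "
                                  "'Action: search_wiki: <query>' or 'Final Answer: <answer>'."},
    {"role": "user", "content": "How tall is Mount Everest?"},
]

for _ in range(5):  # cap the think-act-observe iterations
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})                          # think
    if text.strip().startswith("Final Answer:"):
        print(text)
        break
    if "Action: search_wiki:" in text:
        query = text.split("Action: search_wiki:", 1)[1].strip()
        observation = search_wiki(query)                                             # act
        history.append({"role": "user", "content": f"Observation: {observation}"})   # observe
```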
Planning
Agents handle complex goals by first writing a checklist, then executing it.
Multi-Agent
Split a task across multiple role-specialised agents and let them collaborate to ship work too complex for one agent alone.
Workflow
Constrain AI behaviour with fixed nodes and edges — the deterministic counterpart to agents.
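A toy sketch of the contrast: each node below could wrap an LLM call, but the edges are hard-wired, so the path through the graph is decided by code rather than by the model. The node names and the single-dict state are assumptions:

```python
def extract(state: dict) -> dict:
    state["facts"] = f"key facts pulled from: {state['input']}"   # imagine an LLM call here
    return state

def draft(state: dict) -> dict:
    state["draft"] = f"summary written from: {state['facts']}"    # and another here
    return state

def review(state: dict) -> dict:
    state["final"] = state["draft"] + " (reviewed)"
    return state

NODES = {"extract": extract, "draft": draft, "review": review}
EDGES = {"extract": "draft", "draft": "review", "review": None}   # fixed, deterministic edges

state, node = {"input": "raw meeting notes"}, "extract"
while node is not None:
    state = NODES[node](state)
    node = EDGES[node]
print(state["final"])
```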
Tools & Skills (4 topics)
Function Calling
The low-level protocol that lets AI use tools — the model emits a JSON call, and your program executes it and returns the result.
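A sketch of the round trip with the OpenAI SDK: declare a tool schema, read the JSON call the model emits, execute it locally, and hand the result back. The weather tool is a made-up stub and the model name is an assumption:

```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"22 °C and sunny in {city}"  # stub; a real tool would hit a weather API

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

call = response.choices[0].message.tool_calls[0]   # the JSON tool call the model emitted
args = json.loads(call.function.arguments)
result = get_weather(**args)                       # our program executes it

# Feed the result back so the model can phrase the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```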
MCP (Model Context Protocol)
Standardised 'peripherals interface' for models — USB-C-style plug-and-play for tools and data.
Skills
Bundle prompt + tools into a reusable 'capability' — the agent era's 'app.'
Code Interpreter
A safe sandbox where the model writes code, runs it, and reads the result.
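A toy sketch of the shape of that loop: run model-written code in a separate Python process with a timeout and capture the output to hand back as the observation. Real interpreters add containerised filesystem and network isolation on top:

```python
import subprocess
import sys
import tempfile

model_written_code = "print(sum(i * i for i in range(10)))"  # pretend the LLM produced this

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(model_written_code)
    path = f.name

result = subprocess.run(
    [sys.executable, "-I", path],   # -I runs Python in isolated mode
    capture_output=True, text=True, timeout=5,
)
print("stdout:", result.stdout.strip())   # would be fed back to the model
print("stderr:", result.stderr.strip())
```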
Knowledge & Memory (6 topics)
RAG (Retrieval-Augmented Generation)
The standard architecture for plugging an external knowledge base into an LLM — retrieve, then generate, so answers are grounded in real source material.
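A minimal retrieve-then-generate sketch. The three-document "knowledge base", the embedding model name, and the chat model name are assumptions; a real system would retrieve from a vector database rather than a Python list:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Our refund window is 30 days from delivery.",
    "Support is available Monday to Friday, 9am to 6pm CET.",
    "Shipping to the EU takes 3 to 5 business days.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
question = "How long do I have to return an item?"
q_vec = embed([question])[0]

# Retrieve: pick the document whose embedding is closest to the question (cosine similarity).
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(np.argmax(scores))]

# Generate: the answer is grounded in the retrieved snippet, not in the model's memory.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context.\n\nContext: {context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```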
Embeddings
Compress text into 'semantic coordinates' so a computer can measure how close two pieces of text are in meaning.
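The comparison itself is just vector math, usually cosine similarity. The three-dimensional vectors below are hand-made toys; real embeddings come from an embedding model and have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cat     = np.array([0.90, 0.10, 0.00])   # pretend embedding of "cat"
kitten  = np.array([0.85, 0.20, 0.05])   # pretend embedding of "kitten"
invoice = np.array([0.00, 0.10, 0.95])   # pretend embedding of "invoice"

print(cosine(cat, kitten))    # near 1.0: similar meaning
print(cosine(cat, invoice))   # near 0.0: unrelated meaning
```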
Vector Database
Specialised infrastructure for large-scale similarity search over high-dimensional vectors.
Chunking
Slice long documents into snippets that can be retrieved and fed to the model.
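A sketch of the simplest strategy, fixed-size chunks with a small overlap; the sizes are arbitrary, and production pipelines often split on headings, paragraphs, or sentences instead:

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap   # the overlap preserves context that straddles a boundary
    return chunks

document = "All work and no play makes Jack a dull boy. " * 200   # stand-in for a long document
pieces = chunk(document)
print(len(pieces), "chunks; first chunk has", len(pieces[0]), "characters")
```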
Short-term Memory
Managing context across turns — the engineering trade-off between sliding windows and summarisation.
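A sketch of the sliding-window half of that trade-off: keep the system prompt, drop the oldest turns once an estimated token count exceeds a budget. The four-characters-per-token estimate is a rough assumption; a real implementation would count with the model's tokenizer, and the summarisation variant would compress the dropped turns instead of discarding them:

```python
def estimate_tokens(message: dict) -> int:
    return len(message["content"]) // 4 + 4   # crude estimate: roughly 4 characters per token

def trim_history(messages: list[dict], budget: int = 3000) -> list[dict]:
    system, turns = messages[0], messages[1:]
    while turns and sum(estimate_tokens(m) for m in [system, *turns]) > budget:
        turns = turns[1:]   # drop the oldest user/assistant turn first
    return [system, *turns]
```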
Long-term Memory
Cross-session 'fact store' so the AI actually 'knows you.'