Vol.01 · No.10 CS · AI · Infra April 8, 2026

AI Glossary

Products & Platforms · LLM & Generative AI · Deep Learning

Claude

Claude is the name for Anthropic’s family of large language models and the AI assistant and tools powered by them. Built on transformer-based deep learning, it is designed to handle long context and assist with writing, analysis, and coding, guided by a Constitutional AI approach for safer, more principled behavior.


Plain Explanation

Many AI tools used to lose track when conversations or documents got long, or they followed rules inconsistently, which led to confusing or unsafe answers. Claude addresses this by combining a long context window with a clear set of written principles.

Analogy: Think of Claude as a careful editor with a very large desk. It can lay out hundreds of pages at once, read them together, and then follow a style guide to produce a consistent, safe result.

How it works concretely:

  • Claude is built on a transformer architecture, a deep learning design that looks at relationships between all the words in a passage at once. This global view helps it reason over long or complex inputs.

  • It supports very large context windows (reported around 200,000 tokens per interaction), so it can keep more of a long conversation or document in working memory without forgetting earlier parts.

  • Its behavior is shaped by Anthropic’s Constitutional AI approach: the model learns to follow a set of written principles (like privacy and fairness) during training and refinement. Instead of only copying from feedback, it uses those principles to choose safer, more consistent responses.
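The context-window point above can be made concrete with a quick back-of-the-envelope check. This sketch assumes the common ~4-characters-per-token heuristic for English text; real tokenizers and the exact window size vary by model.

```python
# Rough check of whether a document fits a large context window.
# Assumes ~4 characters per token (a common English-text heuristic);
# real tokenizers and the exact ~200k figure vary by model.

def estimate_tokens(text: str) -> int:
    """Estimate token count with the ~4-chars-per-token heuristic."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, window: int = 200_000) -> bool:
    """True if the estimated token count fits the given window."""
    return estimate_tokens(text) <= window

# A ~300-page document at ~2,000 characters per page:
doc = "x" * (300 * 2_000)
print(estimate_tokens(doc))   # 150000 estimated tokens
print(fits_in_context(doc))   # True: comfortably inside ~200k
```

A check like this is only a planning aid; for billing or hard limits you would count tokens with the provider's own tokenizer.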

Example & Analogy

Real-world scenarios

  • Long technical review packet: A product team pastes a multi-hundred-page spec and meeting notes into one session. Claude stays coherent across the whole bundle because of its large context window (around 200k tokens reported). The transformer architecture helps it track references across sections and summarize trade-offs without losing earlier details.

  • Policy drafting with guardrails: An HR team drafts sensitive internal guidelines and asks Claude to propose neutral, respectful language. Constitutional AI nudges the model to apply written safety principles, reducing biased or unsafe phrasing while still generating clear text.

  • Large codebase refactor with Claude Code: An engineer opens Claude Code in the terminal to scan a repository, edit files, run tests, and handle Git workflows. Because the tool can read and operate on many related files and commands, Claude can plan multi-step fixes while keeping the broader code context in mind, then execute routine steps safely.

  • Deep research notes consolidation: An analyst feeds long interview transcripts and reports into one conversation. The transformer’s token-level pattern recognition helps Claude align repeated themes and contradictions across documents, while the long context lets it keep prior sources visible when producing a single, structured brief.
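The codebase scenario above hinges on how much of a repository fits into one context window. The sketch below walks a source tree and estimates its total token count; the `estimate_repo_tokens` helper and the 4-chars-per-token heuristic are illustrative assumptions, not part of Claude Code itself.

```python
import os
import tempfile

def estimate_repo_tokens(root: str, exts=(".py", ".md")) -> int:
    """Walk a source tree and estimate total tokens (~4 chars/token).

    Illustrative helper, not part of any Anthropic tool.
    """
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total_chars += len(f.read())
    return total_chars // 4

# Demo on a throwaway tree with one 4,000-character file:
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "app.py"), "w") as f:
        f.write("#" * 4_000)
    print(estimate_repo_tokens(root))  # 1000
```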

At a Glance

  • Claude models: speed vs balance vs depth
  • Constitutional AI vs ad-hoc rules → built-in principles vs patch-on filters
  • Claude Code vs general chat → agentic coding in the terminal vs text-only assistance

| | Claude Haiku | Claude Sonnet | Claude Opus |
| --- | --- | --- | --- |
| Core idea | Optimized for speed/efficiency | Balanced quality and speed | Frontier performance for complex tasks |
| Typical use | Quick iterations, lightweight tasks | Day-to-day work with solid reasoning | Hard problems: complex coding, deep research, nuanced explanations |
| Context handling | Long-context capable | Long-context capable | Long-context capable |
| Trade-off | Fastest, but less powerful | Middle ground | Strongest reasoning, higher reported cost |

| | Constitutional AI | Ad-hoc safety rules |
| --- | --- | --- |
| Safety approach | Principles are embedded into model behavior during training | External filters or scattered rules applied after the fact |
| Result | More consistent, principled responses | Can feel inconsistent across topics |

| | Claude Code (terminal) | General chat assistant |
| --- | --- | --- |
| Interaction | Operates directly in your terminal; can read/edit files, run commands, and tests | Text conversation only |
| Focus | Agentic software engineering and codebase operations | Broad Q&A, writing, and brainstorming |

Why You Should Know This

  • Long-context work became practical: Claude is reported to handle around 200k tokens per interaction, enabling multi-hundred-page reading, summarizing, and cross-referencing in one session.

  • Safety principles moved into training: Constitutional AI popularized the idea of baking written principles into model behavior, not just adding filters afterward.

  • Clear model tiers for real trade-offs: The Haiku–Sonnet–Opus lineup gives teams explicit choices among speed, balanced performance, and frontier reasoning.

  • Coding moved from chat to command line: With Claude Code, agentic coding tasks (reading files, running tests, handling Git) can happen directly in the terminal via natural language.
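The critique-then-revise pattern behind Constitutional AI can be illustrated with a toy loop: a draft answer is checked against written principles and revised when one is violated. The principles and naive substring checks here are placeholders; Anthropic's real method applies this idea during training, not as a runtime filter.

```python
# Toy illustration of the critique-then-revise loop described in
# Constitutional AI. The principles and the crude substring checks
# are placeholders, nothing like Anthropic's actual pipeline (which
# shapes the model during training, not at answer time).

PRINCIPLES = {
    "privacy": lambda text: "ssn:" not in text.lower(),
    "civility": lambda text: "idiot" not in text.lower(),
}

def critique(draft: str) -> list[str]:
    """Return the names of any principles the draft violates."""
    return [name for name, ok in PRINCIPLES.items() if not ok(draft)]

def revise(draft: str, violations: list[str]) -> str:
    """Crude revision: flag the draft rather than repeat it."""
    return f"[revised: removed content violating {', '.join(violations)}]"

def respond(draft: str) -> str:
    violations = critique(draft)
    return revise(draft, violations) if violations else draft

print(respond("The capital of France is Paris."))
print(respond("Her SSN: 123-45-6789"))
```

The point of the pattern is that the rules live in one written, inspectable place instead of being scattered across ad-hoc filters.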

Where It's Used

  • Claude assistant (Anthropic): Used for writing, analysis, decision support, and coding through natural language, with emphasis on long, detailed inputs and clear, structured output. Source: Grammarly overview.

  • Claude 3 family — Haiku, Sonnet, Opus: Publicly described tiers for speed (Haiku), balance (Sonnet), and complex tasks (Opus). Source: model overview guides.

  • Claude Opus 4.5: Reported by IBM Think to be significantly more cost-effective than Opus 4.1/4.0 while targeting frontier performance use cases like complex coding and deep research.

  • Claude Code: Anthropic’s agentic coding platform that runs in the terminal and can access files, run shell commands and tests, and operate tools for codebase workflows. Sources: IBM Think page and Anthropic’s Claude Code GitHub repository.

Curious about more?
  • When You See This in the News
  • Common Misconceptions
  • Understanding Checklist
  • How It Sounds in Conversation
  • What should I learn next?
  • Role-Specific Insights
  • Go Deeper

When You See This in the News

  • When news says “Constitutional AI” → it means the model is trained to follow written principles (like fairness or privacy) to guide safer responses.
  • When news says “200K-token context” → it means Claude can process roughly 350 pages of text in one go, which helps with very long documents.
  • When news says “Opus 4.5 is more cost-effective” → it refers to reports that Opus 4.5 reduces cost compared with Opus 4.1/4.0 for similar frontier tasks.
  • When news says “Claude Code CLI” → it means a terminal-based assistant that can read/edit files, run commands and tests, and help with code workflows using natural language.
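The “roughly 350 pages” figure follows from common rules of thumb; a quick arithmetic check (the 0.75 words-per-token and 430 words-per-page constants are typical assumptions, not official numbers):

```python
# Back-of-the-envelope: how many pages is a 200k-token context?
tokens = 200_000
words_per_token = 0.75   # typical for English text (assumption)
words_per_page = 430     # dense single-spaced page (assumption)

words = tokens * words_per_token   # 150,000 words
pages = words / words_per_page     # ≈ 349 pages
print(round(pages))                # 349, i.e. "roughly 350 pages"
```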

Common Misconceptions

  • ❌ Myth: “Claude browses the entire web freely.” → ✅ Reality: Its web access is restricted; results depend on what you provide or approved tools.
  • ❌ Myth: “Claude remembers everything forever.” → ✅ Reality: It remembers within a session’s context window (reported around 200k tokens). Content outside that window drops from working memory.
  • ❌ Myth: “Constitutional AI means no bias or mistakes.” → ✅ Reality: It aims for safer, more principled behavior, but errors and biases can still occur.
  • ❌ Myth: “There’s just one Claude model.” → ✅ Reality: There are multiple tiers (Haiku, Sonnet, Opus) with different speed, capability, and cost trade-offs.

Understanding Checklist

□ Why does the transformer architecture help Claude keep track of relationships across long text?
□ What practical benefit does a ~200k-token context window give in real projects?
□ How does Constitutional AI differ from simply adding safety filters after training?
□ When would you pick Haiku vs Sonnet vs Opus for your team’s workload?
□ What specific tasks can Claude Code perform in a terminal that a chat-only assistant cannot?

How It Sounds in Conversation

  • Infra chat: “The PRD bundle is 160k tokens. Let’s use Claude Sonnet so we stay within context and keep latency reasonable; switch to Opus only for the final reasoning pass.”

  • Dev stand-up: “I’ll let Claude Code scan the repo and run tests. If it suggests a multi-file refactor, I’ll review the diff before it commits.”

  • Research sprint: “We pasted three reports (~300 pages total). Claude kept the citations aligned because everything fit in one session. Next, we’ll ask for a contradictions table.”

  • Cost review: “For the heavy analysis step, try Opus 4.5—it’s reported to be more cost-effective than earlier Opus versions, which helps with our monthly budget.”

  • Support enablement: “Web is restricted. Feed Claude the approved policy PDFs directly; it does much better with long, provided documents.”
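The model-routing habits in these conversations can be sketched as a tiny helper. The tier names mirror the Haiku/Sonnet/Opus table above, but the thresholds, routing rule, and model identifiers are invented for illustration, not Anthropic guidance.

```python
def pick_model(prompt_tokens: int, hard_reasoning: bool) -> str:
    """Route a request to a Claude tier.

    Thresholds and model names are illustrative, not real API ids.
    """
    if hard_reasoning:
        return "claude-opus"    # frontier reasoning, highest cost
    if prompt_tokens > 50_000:
        return "claude-sonnet"  # balanced quality for long inputs
    return "claude-haiku"       # fastest for lightweight tasks

# The 160k-token PRD bundle from the infra chat above:
print(pick_model(160_000, hard_reasoning=False))  # claude-sonnet
print(pick_model(160_000, hard_reasoning=True))   # claude-opus
```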

Related Terms

  • Transformer — The neural network design behind Claude; enables parallel attention over long text, unlike older sequential models.

  • Constitutional AI — Claude’s training approach that bakes written principles into behavior; contrasts with relying only on post-hoc filters.

  • RLHF (Reinforcement Learning from Human Feedback) — Often used with LLMs; Constitutional AI is presented as a complementary or alternative alignment method.

  • Context window — Practical memory limit per interaction; Claude’s reported ~200k tokens enables multi-hundred-page workflows.

  • Claude Code — Terminal-based, agentic coding assistant; can run commands and tests, versus chat-only tools that cannot operate your environment directly.

  • ChatGPT — Another assistant built on LLMs; general-purpose like Claude but without the same emphasis on Constitutional AI in the materials cited here.

Role-Specific Insights

  • Junior Developer: Try Claude Code for routine tasks: search the repo, run tests, and generate diffs. Keep changes small and review every edit before merging.
  • Product Manager/Planner: Use Claude’s long context to run one-thread document reviews (PRDs, research, feedback) and request structured outputs like pros/cons or risk maps.
  • Senior/Lead Engineer: Match model to task: Haiku for quick drafts, Sonnet for balanced analysis, Opus for the hardest reasoning. Establish guardrails by providing source documents instead of relying on web access.
  • Legal/Compliance: Leverage Constitutional AI’s principled style for first-pass policy language, but require human review. Keep sensitive documents in-session and verify that outputs reflect approved sources.

Go Deeper

Essential resources

  • What is Claude AI? (IBM Think) (blog/overview) — Clear summary of Claude models, use cases (e.g., complex coding, deep research), and notes on Opus 4.5 cost-effectiveness.

  • Claude AI Explained (Grammarly) (blog) — Beginner-friendly explanation of how Claude works (transformer, deep learning) and where it excels (long, detailed inputs).

  • What is Claude and How to Use It for AI Agents (MindStudio) (blog) — Practical view of Claude model tiers and how they support multi-step, tool-using agents.

Next terms

  1. Transformer — Understand the core architecture that lets Claude analyze relationships across long text.
  2. Constitutional AI — Learn how written principles are used to shape safer, more consistent model behavior.
  3. Context Window (Tokens) — Grasp why token limits matter for long documents and multi-step conversations.