ML Fundamentals
Classical ML algorithms, learning theory, evaluation methods
16 terms
Agentic workflows
에이전트 워크플로우
Agentic workflows are dynamic workflows in which multiple specialized AI agents collaborate to plan, reason, use tools, and take actions toward a goal with minimal human intervention.
BERT
버트
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model introduced by Google in 2018 that reads text bidirectionally, attending to both left and right context.
F1-Score
F1 점수
F1 Score is a single number that balances two things a classifier must get right: precision (how many predicted positives are truly positive) and recall (how many actual positives the model finds). It is the harmonic mean of the two.
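The harmonic-mean relationship above can be sketched in a few lines of Python; the counts in the usage example are made up for illustration:

```python
def f1_score(tp, fp, fn):
    """F1 from raw confusion-matrix counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # share of predicted positives that were correct
    recall = tp / (tp + fn)     # share of actual positives that were found
    return 2 * precision * recall / (precision + recall)
```

For example, with 8 true positives, 2 false positives, and 4 false negatives, `f1_score(8, 2, 4)` gives 8/11 ≈ 0.727, sitting between the precision of 0.8 and the recall of about 0.667.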
GPU
그래픽 처리 장치
A GPU (Graphics Processing Unit) is a processor built with thousands of small cores to execute many operations in parallel, which makes it well suited to the matrix arithmetic that dominates deep learning workloads.
LLM
대규모 언어 모델
A Large Language Model (LLM) is a deep learning model trained on massive text corpora to understand and generate human language.
MARL
다중 에이전트 강화학습
Multi-Agent Reinforcement Learning (MARL) is an AI technique where multiple agents learn by interacting with each other and with a shared environment, so that each agent's reward can depend on the actions of the others.
multi-stage training
다단계 학습
Multi-stage training is a method for developing AI models—especially large language models (LLMs)—by progressively improving them through a sequence of distinct training phases, each with its own data and objective.
NLP
자연어 처리
Natural Language Processing (NLP) is a branch of artificial intelligence that enables computers to read, understand, and generate human language.
post-training
후 훈련
Post-training refers to the set of processes and techniques applied to a machine learning model after it has been initially pre-trained, such as supervised fine-tuning and RLHF.
pre-training
사전 훈련
Pre-training is the process of initializing a machine learning model by training it on a large, generic dataset before fine-tuning it for a specific downstream task.
RAG
검색 증강 생성
Retrieval-Augmented Generation (RAG) is an architecture that improves LLM outputs by retrieving relevant information from an external knowledge source and injecting it into the model's context before generation.
reinforcement learning
강화 학습
Reinforcement learning is a type of machine learning where AI agents learn to achieve optimal results through feedback from their environment, in the form of rewards and penalties for their actions.
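This reward-driven loop can be illustrated with an ε-greedy multi-armed bandit, one of the simplest reinforcement-learning settings (the arm means and hyperparameters below are made up for illustration):

```python
import random

def run_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Agent learns which arm pays best purely from reward feedback."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n    # how often each arm was pulled
    values = [0.0] * n  # running estimate of each arm's mean reward
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)  # explore: random arm
        else:
            a = max(range(n), key=lambda i: values[i])  # exploit: best estimate
        reward = true_means[a] + rng.gauss(0, 1)  # noisy feedback from environment
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean update
    return values, counts
```

After enough steps the agent concentrates its pulls on the arm with the highest true mean, despite only ever observing noisy rewards.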
RLHF
인간 피드백 기반 강화학습
Reinforcement Learning from Human Feedback is a method where AI learns better behaviors by using human-provided evaluations of its outputs as a reward signal for reinforcement learning.
Self-Attention
셀프 어텐션
Self-attention is a mechanism where each element in an input sequence compares itself with all other elements to compute attention weights indicating how relevant each other element is to it.
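A minimal NumPy sketch of this mechanism, assuming single-head scaled dot-product attention with made-up weight shapes:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); W*: (d_model, d_model) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # every token scored against every other
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights              # weighted mix of value vectors
```

Row *i* of `weights` is token *i*'s attention distribution over the whole sequence; its output is the correspondingly weighted combination of the value vectors.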
supervised fine-tuning
지도 미세 조정
Supervised fine-tuning is the process of further training a pre-trained AI model using additional labeled data, where human-curated examples of inputs and desired outputs teach the model the intended behavior.
Transformer
트랜스포머
A Transformer is a neural network architecture that uses self-attention so each token in a sequence can look at every other token directly, enabling the whole sequence to be processed in parallel.