Deep Learning
Neural network architectures, training techniques, vision, audio
19 terms
BERT
버트
BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based language model introduced by Googl…
diffusion model
확산 모델
A diffusion model is a deep learning-based generative model in AI that creates new data by gradually denoising random no…
embedding
임베딩
An embedding is a mathematical method of representing complex data in a lower-dimensional space to make it easily proces…
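A minimal sketch of the idea, assuming a hypothetical 3-dimensional embedding table with hand-picked (not trained) values: related items end up close together, measured here with cosine similarity.

```python
import math

# Hypothetical toy embedding table; real embeddings are learned, not hand-set.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine similarity: closer to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit closer together in the embedding space.
assert cosine_similarity(embeddings["king"], embeddings["queen"]) > \
       cosine_similarity(embeddings["king"], embeddings["apple"])
```

The same lookup-table-plus-distance pattern scales up to the hundreds or thousands of dimensions used in production models.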
FlashAttention-4
플래시어텐션-4
FlashAttention-4 is a highly optimized GPU kernel for computing 'attention' operations in large-scale AI models, deliver…
Gemini
제미나이
Gemini is Google’s family of multimodal generative AI models and the chatbot/app built on them. Unlike text-only systems…
Gemma 4
젬마 4
Gemma 4 is the latest generation of Google DeepMind’s lightweight, open-weight AI models designed to run efficiently on …
GPU
그래픽 처리 장치
A GPU (Graphics Processing Unit) is a processor built with thousands of small cores to execute many operations in parall…
grouped-query attention
그룹 쿼리 어텐션
Grouped-query attention is a method used in large language models (LLMs) and transformer-based AI systems to process sev…
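The core of the trick is the head routing: several query heads share one key/value head, which shrinks the KV cache. A minimal sketch, assuming a toy configuration of 8 query heads and 2 key/value heads (illustrative numbers, not from any particular model):

```python
# Grouped-query attention head routing (toy configuration).
num_query_heads = 8
num_kv_heads = 2
group_size = num_query_heads // num_kv_heads  # 4 query heads per KV head

def kv_head_for(query_head: int) -> int:
    """Return the shared key/value head a given query head reads from."""
    return query_head // group_size

routing = {q: kv_head_for(q) for q in range(num_query_heads)}
# Query heads 0-3 share KV head 0; heads 4-7 share KV head 1, so the
# KV cache holds 2 heads' worth of keys/values instead of 8.
```

With `num_kv_heads = 1` this reduces to multi-query attention; with `num_kv_heads = num_query_heads` it is ordinary multi-head attention.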
hallucination
환각
AI hallucination is a phenomenon where a large language model or other generative AI system produces outputs that seem c…
Latent MoE
잠재적 전문가 혼합
Latent MoE (Latent Mixture of Experts) is a variant of sparse Mixture of Experts where each expert operates in a smaller…
LLM
대규모 언어 모델
A Large Language Model (LLM) is a deep learning model trained on massive text corpora to understand and generate human l…
mixture of experts
전문가 혼합
A mixture of experts is an AI architecture that combines several specialized models (called 'experts') and decides which…
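A minimal sketch of top-1 routing, assuming two toy "experts" (plain functions standing in for neural sub-networks) and hand-picked router weights; in a real model both the experts and the gate are learned:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Two toy experts standing in for specialized sub-networks.
experts = [
    lambda x: 2 * x,    # expert 0: doubles the input
    lambda x: x + 10,   # expert 1: shifts the input
]

# Hypothetical router weights, one per expert (a real gate is learned).
router_weights = [1.0, -1.0]

def route(x):
    """Score each expert for this input, then run only the top-1 pick."""
    probs = softmax([w * x for w in router_weights])
    best = max(range(len(experts)), key=lambda i: probs[i])
    return experts[best](x)
```

Because only the selected expert runs per input, compute stays roughly constant even as the total number of experts (and parameters) grows.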
multimodal model
멀티모달 모델
A multimodal model is an artificial intelligence model capable of simultaneously understanding and processing multiple t…
NLP
자연어 처리
Natural Language Processing (NLP) is a branch of artificial intelligence that enables computers to read, understand, and…
recurrent mechanism
순환 메커니즘
A recurrent mechanism refers to an architectural design in AI models where the output from a previous step is fed back a…
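The feedback loop can be sketched in a few lines, assuming a toy scalar state and hand-picked (untrained) weights:

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    """One recurrent step: the previous state h is fed back in
    alongside the new input x (toy scalar weights, not trained)."""
    return math.tanh(w_h * h + w_x * x)

hidden = 0.0
for token_value in [1.0, -0.5, 0.25]:
    hidden = rnn_step(hidden, token_value)  # state carries across steps
```

The `tanh` squashing keeps the state bounded, which is why the same pattern can run over arbitrarily long sequences.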
self-attention
셀프 어텐션
Self-attention is a mechanism where each element in an input sequence compares itself with all other elements to compute…
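That compare-against-everything step can be sketched in pure Python. Simplifying assumption: queries, keys, and values are the input vectors themselves (a real layer would apply learned projections first):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(seq):
    """Scaled dot-product self-attention over a list of vectors."""
    d = len(seq[0])
    out = []
    for q in seq:
        # Compare this element against every element (including itself).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # Output: attention-weighted mix of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(tokens)
```

Each output is a convex combination of the inputs, weighted by how strongly the elements match, which is what lets every position see the whole sequence in one step.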
Sora video model
소라 비디오 모델
The Sora video model is an AI system developed by OpenAI that generates high-quality videos from text prompts. You simpl…
Transformer
트랜스포머
A Transformer is a neural network architecture that uses self-attention so each token in a sequence can look at every ot…
vision-language model
비전-언어 모델
A vision-language model is an artificial intelligence model designed to simultaneously understand and process both visua…