What is a Foundation Model?
TL;DR
A general-purpose AI model pre-trained on large-scale data, adaptable to a wide variety of downstream tasks.
Foundation Model: Definition & Explanation
A Foundation Model is a general-purpose AI model that has been pre-trained on massive datasets and can be adapted to diverse downstream tasks through fine-tuning or prompt engineering. The term was coined in 2021 by Stanford University's Institute for Human-Centered Artificial Intelligence (HAI). Examples include the GPT series, Claude, Gemini, and LLaMA (all large language models), as well as Stable Diffusion (image generation).

Because a single model can be repurposed for many different applications, foundation models eliminate the need to develop a specialized model for each individual task, improving AI development efficiency and reducing costs.

There are also societal concerns, however, including the risk that biases in the training data propagate to downstream tasks and the potential for market dominance by a small number of companies. As foundation model performance improves, it lifts the quality of the entire AI application ecosystem, driving intense competition among developers.
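The "one model, many tasks" idea behind prompt-based adaptation can be sketched in a few lines. This is an illustrative sketch only: `call_foundation_model` is a hypothetical stand-in for a real LLM call (an API request or a locally loaded checkpoint), and the task names and templates are invented for the example.

```python
def call_foundation_model(prompt: str) -> str:
    """Hypothetical stand-in for a real foundation-model call (e.g. an LLM API)."""
    return f"<model output for: {prompt[:40]}...>"

# The same pre-trained model handles different downstream tasks
# purely by changing the prompt template, with no retraining.
TASK_TEMPLATES = {
    "summarize": "Summarize the following text in one sentence:\n{text}",
    "translate": "Translate the following text into French:\n{text}",
    "classify":  "Label the sentiment of this text as positive or negative:\n{text}",
}

def run_task(task: str, text: str) -> str:
    # Fill the task-specific template, then reuse the one general-purpose model.
    prompt = TASK_TEMPLATES[task].format(text=text)
    return call_foundation_model(prompt)

for task in TASK_TEMPLATES:
    print(task, "->", run_task(task, "Foundation models are reshaping AI development."))
```

Fine-tuning follows the same pattern at the weight level: instead of swapping prompts, the shared pre-trained weights are further trained briefly on task-specific data.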