What is LoRA?

TL;DR

A method for efficiently fine-tuning large models by training only a small number of additional parameters.

LoRA: Definition & Explanation

LoRA (Low-Rank Adaptation) is a technique for efficiently fine-tuning large AI models. Traditional fine-tuning updates all of a model's parameters, which demands enormous computational resources and memory. LoRA instead freezes the original weights and adds a pair of small low-rank matrices alongside each target weight matrix, training only those additions; the weight update is represented as the product of the two small matrices, so the number of trainable parameters shrinks dramatically. This allows task-specific adaptation without modifying the original model parameters, at a fraction of the computational cost. The sharp reduction in required GPU memory makes fine-tuning feasible for individuals and small businesses. LoRA has become especially popular in the image generation space (e.g., Stable Diffusion), where many community-created LoRA adapters trained on specific art styles or characters are publicly shared. QLoRA, which combines quantization of the base model with LoRA, makes training even more memory-efficient.
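The core idea can be shown in a few lines of NumPy: freeze a large weight matrix W, and learn only a low-rank correction B·A. This is a minimal sketch, not a real training setup; the dimensions, rank, and alpha scaling value below are illustrative assumptions (production implementations such as Hugging Face's peft library apply this to PyTorch modules).

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 1024, 1024, 8          # layer dimensions, and a small rank r << d
W = rng.standard_normal((d, k))  # frozen pretrained weight (never trained)

# LoRA update: two small trainable matrices whose product has rank at most r.
# B starts at zero, so the adapted model initially behaves exactly like the base model.
A = rng.standard_normal((r, k)) * 0.01
B = np.zeros((d, r))
alpha = 16                       # illustrative scaling hyperparameter

def adapted_forward(x):
    """Forward pass: base output plus the scaled low-rank correction."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, k))
# With B = 0 the adapter is a no-op: outputs match the frozen model exactly.
assert np.allclose(adapted_forward(x), x @ W.T)

full_params = W.size             # 1024 * 1024 = 1,048,576
lora_params = A.size + B.size    # 8*1024 + 1024*8 = 16,384
print(f"trainable: {lora_params} of {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Note the memory saving: only A and B are trained here (about 1.6% of the full matrix's parameters at rank 8), which is why LoRA adapters are small enough to share as standalone files.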