
LoRA: Low-Rank Adaptation of LLMs

A quick explanation of LoRA

Imagine you have a super-smart robot friend who has read all the books in the world and knows a lot. This robot uses its vast knowledge to help you with various tasks, like homework or answering trivia questions. We can think of this robot as a Large Language Model (LLM), which is a type of computer program trained on vast amounts of text to understand and generate human-like language.

Now, say you have a unique hobby, like collecting rare stamps or understanding the language of dolphins. Your robot friend might not know much about these specific topics because they are so niche. So, instead of retraining your robot entirely (which would be like making it read all the books in the world again), you decide to give it a "mini-course" on the topic. This mini-course is like a shortcut for the robot to quickly understand and adapt to this specific knowledge.

LoRA (Low-Rank Adaptation) is like this mini-course. Instead of retraining the entire model (robot) from scratch, we tweak it a little bit so that it becomes an expert in a specific topic. The "Low-Rank" in LoRA refers to the fact that we're only adjusting a small part of the robot's brain, keeping most of its vast knowledge unchanged. This makes the adaptation process faster and more efficient.

In technical terms, the "rank" of a matrix is the number of linearly independent rows or columns it has. "Low-Rank Adaptation" means the update applied to the model's weight matrices is constrained to have a low rank: instead of learning a full-sized change, we learn two much smaller matrices whose product represents that change, while the original weights stay frozen. This keeps the number of trainable parameters small, so we can quickly customize our LLM for specific tasks or knowledge areas without spending a lot of time or resources.
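For readers who want a peek under the hood: LoRA freezes each original weight matrix W and learns its update as the product of two thin matrices B and A, so the adapted weight is effectively W + BA, with the rank r much smaller than the matrix dimensions. Below is a minimal sketch of that idea in PyTorch; the class and parameter names are our own, chosen for illustration, not taken from any particular library.

```python
# Minimal, illustrative LoRA-style layer (hypothetical names, not an official implementation).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # The original pre-trained weight stays frozen: the robot's existing knowledge.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Two small matrices, A (rank x in) and B (out x rank), hold the "mini-course".
        # They are the only parameters trained during adaptation.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path: what the model already knows.
        base = x @ self.weight.T
        # Low-rank update: equivalent to replacing W with W + B @ A (scaled).
        update = (x @ self.lora_A.T) @ self.lora_B.T * self.scaling
        return base + update
```

To see why this is cheap: for a 1,000 × 1,000 weight matrix, full fine-tuning would update 1,000,000 numbers, while a rank-8 LoRA update trains only 8 × 1,000 + 1,000 × 8 = 16,000, roughly 1.6% as many.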

So, in summary:

  • LLM (Large Language Model): A super-smart robot friend.
  • LoRA (Low-Rank Adaptation): A shortcut or mini-course to make the robot even smarter in a specific topic without retraining it from scratch.

August 23, 2023
