Parameter Efficient Fine Tuning: LoRA

By: Sanjay

Introduction

In the world of artificial intelligence, the capabilities of language models have grown by leaps and bounds. These models, known as Large Language Models (LLMs), have grown into colossal giants, with model weights often measured in hundreds of gigabytes. While these behemoths are undoubtedly powerful, they pose a significant challenge when it comes to fine-tuning them for specific tasks. Full fine-tuning, the conventional approach, not only demands vast storage capacity but also consumes large amounts of memory for optimizer states, gradients, activations, and temporary data throughout training.

However, the paradigm is shifting towards a more memory-efficient approach: Parameter-Efficient Fine-Tuning (PEFT). PEFT is a game-changer, as it doesn’t just make model adaptation more manageable, but also mitigates the problem of catastrophic forgetting often associated with full fine-tuning. In this blog, we’ll delve into the world of PEFT, exploring its core principles and the various methods at your disposal for making it work seamlessly.

The Essence of Parameter-Efficient Fine-Tuning

PEFT revolves around the idea of not updating the entirety of the LLM’s parameters during fine-tuning. Instead, it focuses on a smaller subset of the existing model parameters, drastically reducing the memory footprint required for training. In some cases, as little as 15-20% of the original LLM parameters are altered, rendering PEFT a feasible option even on a single GPU.

One of the key advantages of PEFT is its efficiency in handling multiple tasks. In contrast to full fine-tuning, which results in a brand-new model for each task (each as massive as the original), PEFT’s approach allows for the creation of task-specific parameters that can be effortlessly swapped during inference. This adaptability to various tasks with minimal computational overhead is a significant breakthrough.

Exploring PEFT Methods

PEFT comprises several methods, each with its own trade-offs and nuances. Here, we’ll categorize them into three main classes:

Selective Methods: These methods fine-tune only a subset of the original LLM parameters. The granularity of selection can vary from specific layers to individual parameter types. While these methods provide some degree of parameter efficiency, they often come with trade-offs in compute efficiency and performance.

Reparameterization Methods: Reparameterization techniques work with the original LLM parameters but reduce the number of parameters that need to be trained. One popular method, LoRA (Low-Rank Adaptation), achieves this by learning low-rank matrices whose product approximates the update to the network weights. This method is explored in detail in the next section.

Additive Methods: Additive methods, on the other hand, keep the original LLM weights frozen and introduce new trainable components. There are two primary approaches within this category:

Adapter Methods: These methods add new trainable layers to the model architecture, often inside the encoder or decoder components, after the attention or feed-forward layers.

Soft Prompt Methods: Soft prompts keep the model architecture fixed and manipulate the input to enhance performance. This can involve adding trainable parameters to prompt embeddings or retraining the embedding weights while keeping the input unchanged.
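
To make the soft prompt flavor of additive methods concrete, here is a minimal sketch using the Hugging Face peft library. The FLAN-T5 checkpoint and the choice of 20 virtual tokens are assumptions for illustration, and exact class names can vary across library versions.

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# Load a base model; its weights stay frozen and only the virtual prompt tokens train.
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Soft prompt: prepend 20 trainable virtual tokens to every input.
prompt_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,
    num_virtual_tokens=20,
)

model = get_peft_model(base_model, prompt_config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable
```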

The Transformer Anatomy

Before we dive into LoRA, let’s revisit the fundamental components of the transformer architecture. In essence, a transformer processes an input prompt by converting it into tokens, which are then mapped to embedding vectors. These embeddings pass through the encoder and decoder components of the transformer, where two critical neural networks come into play: self-attention and feed-forward networks. During pre-training, the weights of these networks are learned.

Unlocking the Power of LoRA

LoRA operates on a simple yet ingenious principle: it minimizes the parameters that need training during fine-tuning.

Freezing the Original Weights: LoRA begins by locking down all the original model parameters, keeping them intact.

Introducing Rank Decomposition: To achieve parameter efficiency, LoRA injects a pair of rank decomposition matrices alongside the original weights. The dimensions of these smaller matrices are carefully chosen so that their product generates a matrix with the same dimensions as the weights they are modifying.

Training the Low-Rank Matrices: During the fine-tuning process, you train these smaller matrices using standard supervised learning, just as in full fine-tuning. This efficiently adapts the model to the specific task at hand.

Inference with LoRA: When it’s time for inference, you multiply the two low-rank matrices together, creating a matrix of the same size as the frozen weights. You then add this result to the original weights, replacing them in the model. Voilà! You now have a LoRA fine-tuned model ready to tackle your specific task.

Remarkably, because the merged matrix has the same dimensions as the weights it replaces, the fine-tuned model ends up with the same number of parameters as the original, so there is little to no impact on inference latency. This elegant approach also allows for substantial memory savings during training.
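
To make these steps concrete, here is a minimal PyTorch sketch of a LoRA-augmented linear layer. It illustrates the idea rather than reproducing the paper’s reference implementation; the layer sizes, rank, and scaling factor alpha are assumptions chosen for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Stand-in for the pre-trained weight; kept frozen during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight)

        # Rank decomposition matrices: A projects down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.zeros(rank, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        nn.init.normal_(self.lora_A, std=0.01)  # B stays zero, so training starts from the base model
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank update path.
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

    def merge(self) -> torch.Tensor:
        # For inference: fold B @ A into the frozen weight so no extra latency is added.
        return self.weight + self.scaling * (self.lora_B @ self.lora_A)


# Example: a 512-by-64 projection with rank 8 trains only 8*512 + 64*8 = 4,608 parameters.
layer = LoRALinear(in_features=512, out_features=64, rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 4608
```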

Where to Apply LoRA

LoRA’s flexibility shines through in its applicability to various components of the LLM. However, it’s worth noting that most of the LLM’s parameters reside in the attention layers, making them the prime candidates for LoRA. Researchers have found that LoRA applied solely to the self-attention layers can yield remarkable parameter reductions and task-specific enhancements.
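
In practice, you select which weight matrices receive LoRA by naming them as target modules. Below is a minimal sketch using the Hugging Face peft library, assuming a FLAN-T5 base checkpoint; the module names "q" and "v" refer to T5-style attention projections, and other architectures use different names (for example "q_proj" and "v_proj").

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Apply LoRA only to the query and value projections of the self-attention layers.
lora_config = LoraConfig(
    r=8,                        # rank of the decomposition matrices
    lora_alpha=32,              # scaling factor applied to the low-rank update
    target_modules=["q", "v"],  # T5-style attention projection names
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```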

Practical Example

To illustrate the power of LoRA, let’s consider a practical example using the transformer architecture from the “Attention is All You Need” paper.

These transformers have weight matrices with dimensions of 512 by 64, totaling 32,768 trainable parameters per matrix. If you employ LoRA with a rank of eight, you instead train two small rank decomposition matrices: one with dimensions 8 by 64 (512 parameters) and one with dimensions 512 by 8 (4,096 parameters). That is 4,608 trainable parameters instead of the original 32,768, a reduction of roughly 86%.

This significant reduction in trainable parameters enables you to perform parameter-efficient fine-tuning with a single GPU, eliminating the need for a distributed GPU cluster. Moreover, because LoRA matrices have minimal memory requirements, you can easily swap them out for different tasks during inference.
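
Because each set of LoRA weights is tiny compared to the base model, an inference setup can keep a single copy of the frozen model in memory and switch adapters per task. A rough sketch of that workflow with the Hugging Face peft library is shown below; the adapter paths and names are hypothetical.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Load one task-specific LoRA adapter, then register a second one.
model = PeftModel.from_pretrained(base_model, "adapters/summarization", adapter_name="summarization")
model.load_adapter("adapters/question-answering", adapter_name="qa")

# Switch between tasks without reloading the multi-gigabyte base model.
model.set_adapter("summarization")
# ... run summarization requests ...
model.set_adapter("qa")
# ... run question-answering requests ...
```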

Performance Comparison

To gauge the effectiveness of LoRA, we turn to the ROUGE metric, which measures the overlap between a generated summary and a human-written reference. Let’s focus on fine-tuning a FLAN-T5 model for dialogue summarization:

FLAN-T5 Base Model: Starting with the base model, its ROUGE-1 score serves as the baseline.

Full Fine-Tuning: Fine-tuning the FLAN-T5 model for dialogue summarization with the traditional method increases the ROUGE-1 score by 0.19, a significant improvement in performance.

LoRA Fine-Tuning: Applying LoRA for fine-tuning increases the ROUGE-1 score by 0.17, slightly lower than full fine-tuning. However, considering the substantial reduction in trainable parameters and compute, this trade-off is well justified.

Source: Hu et al., 2021, “LoRA: Low-Rank Adaptation of Large Language Models”
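
As a side note, ROUGE scores like the ones above can be computed with the Hugging Face evaluate library (which relies on the rouge_score package). The predictions and references below are placeholder strings, not outputs from the experiment.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder model outputs and human-written reference summaries.
predictions = ["the customer asked to cancel the order"]
references = ["the customer wants to cancel their order"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"])  # unigram overlap between prediction and reference
```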

Optimizing LoRA Rank

Selecting the appropriate rank for LoRA matrices is crucial. The choice depends on the task and the desired balance between parameter reduction and performance preservation. Researchers have found that ranks between 4 and 32 strike an excellent balance, with little improvement observed for larger ranks.
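
As a quick sanity check on how rank affects the parameter budget, the sketch below counts the extra trainable parameters LoRA adds to the 512-by-64 weight matrix from the earlier example at several ranks; the count grows linearly with the rank.

```python
# Trainable parameters added by LoRA for a single d_out x d_in weight matrix:
# A has shape (r, d_in) and B has shape (d_out, r), so the total is r * (d_in + d_out).
d_in, d_out = 512, 64

for r in (4, 8, 16, 32, 64):
    lora_params = r * (d_in + d_out)
    print(f"rank {r:>2}: {lora_params:>6,} trainable parameters "
          f"({lora_params / (d_in * d_out):.1%} of the original {d_in * d_out:,})")
```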

Conclusion

LoRA, a reparameterization technique within the realm of PEFT, empowers parameter-efficient fine-tuning while preserving model performance. Its versatility, memory savings, and ease of integration make it a potent tool for practitioners aiming to optimize LLMs and other models. As the AI community continues to explore and refine LoRA’s potential, it promises to play a pivotal role in the future of efficient model adaptation.
