LoRA in Action: Insights from Finetuning LLMs with Low-Rank Adaptation

Dec 15, 2023

About

Low-rank adaptation (LoRA) stands as one of the most popular and effective methods for efficiently training custom large language models (LLMs). As practitioners of open-source LLMs, we regard LoRA as a crucial technique in our toolkit. In this talk, I will share practical insights gained from running hundreds of experiments with LoRA, addressing questions such as: How much memory can quantized LoRA save? How memory-intensive are Adam optimizers? Should we train for multiple epochs? How do we choose the LoRA rank? The talk will also include ideas for future experiments and talking points to stimulate discussion in the workshop, such as mechanisms for avoiding overfitting with LoRA and strategies for combining LoRA weights from multiple experiments.
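
For context on the mechanics the abstract refers to: LoRA freezes a pretrained weight matrix W and learns a low-rank update, so the effective weight becomes W + (alpha / r) * B A, where only A and B are trained. Below is a minimal PyTorch sketch of this idea; the class and parameter names are illustrative, not taken from the talk or any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update.

    The forward pass computes linear(x) + (alpha / r) * x @ A.T @ B.T,
    where only A and B receive gradients.
    """
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # freeze the pretrained weights

        in_f, out_f = linear.in_features, linear.out_features
        # A starts with small random values and B with zeros, so the
        # update B @ A is zero at initialization and training starts
        # from the pretrained model's behavior.
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

In practice such a wrapper would replace, for example, the attention projection layers of a transformer. The rank r controls the number of trainable parameters added per layer, r * (in_features + out_features), which is why choosing the rank (one of the questions above) directly trades off memory savings against adaptation capacity.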
