LoRA Colab:
Blog Post:
LoRA Paper:
In this video I look at how to use PEFT to fine-tune any decoder-style GPT model. It covers the basics of LoRA fine-tuning and how to upload the result to the Hugging Face Hub.
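Not from the video itself, but as a rough illustration: a minimal sketch of the kind of PEFT/LoRA setup walked through in the code section, assuming a small decoder checkpoint ("bigscience/bloom-560m") and illustrative hyperparameters rather than the exact values used in the notebook.

```python
# Minimal LoRA fine-tuning setup sketch (assumed checkpoint and hyperparameters)
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "bigscience/bloom-560m"  # assumed example decoder-style model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA configuration: low-rank adapters are injected into the attention layers
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update matrices
    lora_alpha=16,   # scaling applied to the adapter output
    lora_dropout=0.05,
)

# Wrap the base model; only the small adapter weights are trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ... train with the usual transformers Trainer or a custom loop ...

# Push just the trained adapter to the Hugging Face Hub (hypothetical repo name)
# model.push_to_hub("your-username/bloom-560m-lora")
```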
My Links:
Twitter -
LinkedIn -
Github:
00:00 - Intro
00:04 - Problems with fine-tuning
00:48 - Introducing PEFT
01:11 - PEFT other cool techniques
01:51 - LoRA Diagram
03:25 - Hugging Face PEFT Library
04:06 - Code Walkthrough