Abhijeet-LightweightFineTuning-Project

This project is an example of parameter-efficient fine-tuning with LoRA applied to a sequence classification model on a downstream dataset.

Low-rank adaptation (LoRA) reduces the number of trainable parameters, making it possible to run fine-tuning on commodity hardware with a smaller compute and memory footprint.

Check out: https://github.com/huggingface/peft
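
As a rough illustration, the sketch below shows how a LoRA adapter can be attached to a sequence classification model with the peft library. The model name (distilbert-base-uncased), the dataset (GLUE SST-2), and the hyperparameters are placeholders for illustration, not necessarily what this project uses.

from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # placeholder foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model with LoRA adapters; only the small adapter matrices are trained.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # prints the fraction of weights that will be updated

# Tokenize a small downstream text-classification dataset (placeholder: GLUE SST-2).
dataset = load_dataset("glue", "sst2")
dataset = dataset.map(lambda b: tokenizer(b["sentence"], truncation=True,
                                          padding="max_length", max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-seq-cls", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()

Only the LoRA adapter weights (plus the classification head) receive gradients, a small fraction of the full model's parameters, which is what keeps the compute and memory footprint small.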

Prerequisites

A development environment with at least 16 GB of RAM.

Background reading

Foundation model

Training dataset

Install dependencies in your virtual environment using:

pip install -r requirements.txt

Check out: https://stackoverflow.com/questions/41427500/creating-a-virtualenv-with-preinstalled-packages-as-in-requirements-txt

Run the script

The script's entry point is a main method. You can run it from PyCharm or from the command line.

Current Limitation

Quantization is not used, since the quantization library (bitsandbytes) currently only supports CUDA and lacks support for Metal (the GPU backend available on Macs).
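
For reference, on a CUDA machine quantization could be combined with LoRA roughly as sketched below (QLoRA-style), using BitsAndBytesConfig from transformers. This is a hypothetical sketch with a placeholder model name and is not part of this project.

import torch
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# Load the base weights in 4-bit NF4; requires a CUDA GPU and the bitsandbytes package.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",            # placeholder model name
    num_labels=2,
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)   # prepares the quantized model for training
model = get_peft_model(model, LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16))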
