Professional work-related project
In this project, I provide code and a Colaboratory notebook for fine-tuning an Alpaca-style model with roughly 350 million parameters. Alpaca was originally developed at Stanford University; this 350M variant is one of the smaller Alpaca models (and smaller than my previous fine-tuned model).
The model uses low-rank adaptation (LoRA) to reduce the number of trainable parameters and the compute required. bitsandbytes is used to load the model in an 8-bit format so it can run on Colaboratory, and the PEFT library from HuggingFace handles the fine-tuning.
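The sketch below shows, in outline, how such an 8-bit setup is typically wired together; it is not the exact notebook code, and the base-model identifier is a placeholder. Older PEFT releases expose `prepare_model_for_int8_training`; newer ones rename it `prepare_model_for_kbit_training`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import prepare_model_for_int8_training

BASE_MODEL = "path/or/hub-id-of-alpaca-350m"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# load_in_8bit relies on bitsandbytes; device_map="auto" places layers across
# whatever GPU/CPU memory the Colab runtime provides.
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    load_in_8bit=True,
    device_map="auto",
)

# Freezes the base weights and casts layer norms (and the LM head) to fp32 so
# that 8-bit training is numerically stable.
model = prepare_model_for_int8_training(model)
```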
Hyperparameters:
- MICRO_BATCH_SIZE = 4 (4 works with a smaller GPU)
- BATCH_SIZE = 32
- GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
- EPOCHS = 2 (Stanford's Alpaca uses 3)
- LEARNING_RATE = 2e-5 (same as Stanford's Alpaca)
- CUTOFF_LEN = 256 (Stanford's Alpaca uses 512, but 256 accounts for 96% of the data and runs far quicker)
- LORA_R = 4
- LORA_ALPHA = 16
- LORA_DROPOUT = 0.05
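As a rough illustration, these values would map onto a PEFT `LoraConfig` and Hugging Face `TrainingArguments` along the lines below. This is a sketch, not the notebook itself: `target_modules` is an assumption (the correct module names depend on the base model's architecture), and `model` refers to the 8-bit model prepared in the previous sketch.

```python
import transformers
from peft import LoraConfig, get_peft_model

MICRO_BATCH_SIZE = 4
BATCH_SIZE = 32
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
EPOCHS = 2
LEARNING_RATE = 2e-5
CUTOFF_LEN = 256  # used when tokenizing: truncate prompts/responses to 256 tokens
LORA_R = 4
LORA_ALPHA = 16
LORA_DROPOUT = 0.05

lora_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=LORA_DROPOUT,
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # `model` from the 8-bit loading sketch

training_args = transformers.TrainingArguments(
    per_device_train_batch_size=MICRO_BATCH_SIZE,
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
    num_train_epochs=EPOCHS,
    learning_rate=LEARNING_RATE,
    fp16=True,
    logging_steps=20,
    output_dir="alpaca-350m-lora",
)
# These arguments, the LoRA-wrapped model, and a dataset tokenized with
# truncation to CUTOFF_LEN would then be passed to transformers.Trainer.
```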
Credit for Original Model: Qiyuan Ge
Fine-Tuned Model: RyanAir/Alpaca-350M-Fine-Tuned (HuggingFace)
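A short usage sketch for the fine-tuned model is below. It assumes the Hub repository holds full model weights; if it only stores LoRA adapter weights, load the base model first and attach the adapter with `peft.PeftModel.from_pretrained`. The prompt format is assumed to follow Stanford's Alpaca instruction template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RyanAir/Alpaca-350M-Fine-Tuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Alpaca-style instruction prompt (template assumed, not taken from this repo).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```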