
Setting up models

  1. Download the weights for each model;
  2. Change the paths in the corresponding code to point to your download locations.

Note: to download weights from Hugging Face, you can refer to utils/download_meta_llama2.py.
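The download step can also be sketched with the `huggingface_hub` library. The repo ids below are the ones listed in this guide; the helper name and the `./weights` directory layout are illustrative assumptions, not part of this repo:

```python
# Hedged sketch: fetch full model snapshots from the Hugging Face Hub.
# The repo ids come from this guide; `download` and the ./weights layout
# are assumptions. meta-llama repos are gated, so you must be logged in
# and approved before downloading them.

# Repo ids listed in this guide.
REPO_IDS = {
    "llama-1-7b": "decapoda-research/llama-7b-hf",
    "llama-1-13b": "decapoda-research/llama-13b-hf",
    "llama-2-7b": "meta-llama/Llama-2-7b-hf",
    "llama-2-13b": "meta-llama/Llama-2-13b-hf",
}

def download(model: str, root: str = "./weights") -> str:
    """Download the snapshot for `model` and return its local path."""
    # Deferred import so the mapping above is usable without the package.
    from huggingface_hub import snapshot_download
    repo_id = REPO_IDS[model]
    return snapshot_download(repo_id=repo_id,
                             local_dir=f"{root}/{repo_id.split('/')[1]}")
```

For example, `download("llama-2-7b")` would place the weights under `./weights/Llama-2-7b-hf`, which is then the path to substitute into the code.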

LLaMA-1

Download weights:

LLaMA-1-7B: decapoda-research/llama-7b-hf, LLaMA-1-13B: decapoda-research/llama-13b-hf

LLaMA-2

Download weights:

LLaMA-2-7B: meta-llama/Llama-2-7b-hf, LLaMA-2-13B: meta-llama/Llama-2-13b-hf

OpenFlamingo

We refer to the official repo OpenFlamingo.

Download weights:

OpenFlamingo

MiniGPT4

We refer to the official repo MiniGPT4.

Download weights:

MiniGPT4 (Vicuna 13B), Vicuna 13B.

mPLUG-Owl

We refer to the official repo mPLUG-Owl.

Download weights:

mplug-owl-llama-7b.

LLaMA-Adapter V2

We refer to the official repo LLaMA-Adapter.

Download weights:

LLaMA-7B.

VPGTrans

We refer to the official repo VPGTrans.

Download weights:

vicuna-7b.

LLaVA

We refer to the official repo LLaVA.

For LLaVA-7B, download weights:

LLaVA-7b-delta-v0.

For LLaVA-13B, download weights:

LLaVA-13b-delta-v0.

Convert the delta weights by referring to the "Legacy Models (delta weights)" instructions.
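The delta-weight conversion above can be sketched by invoking the `apply_delta` entry point shipped with the LLaVA repo in a subprocess. The module path and flags follow the repo's "Legacy Models (delta weights)" instructions; all filesystem paths below are placeholders you should replace with your own:

```python
# Hedged sketch: merge LLaVA delta weights into the base LLaMA checkpoint
# by calling the LLaVA repo's apply_delta module. The module path and
# flags follow the repo's "Legacy Models (delta weights)" docs; all
# filesystem paths are placeholders.
import subprocess

def build_cmd(base: str, target: str, delta: str) -> list:
    """Assemble the apply_delta command line."""
    return ["python3", "-m", "llava.model.apply_delta",
            "--base", base, "--target", target, "--delta", delta]

def apply_delta(base: str, target: str, delta: str) -> None:
    """Run the conversion; requires the LLaVA repo to be installed."""
    subprocess.run(build_cmd(base, target, delta), check=True)
```

For instance, `apply_delta("/path/to/llama-7b", "/path/to/LLaVA-7B-v0", "/path/to/LLaVA-7b-delta-v0")` writes the merged weights to the target directory, which is then the path to use in the code.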

Multimodal-GPT

We refer to the official repo Multimodal-GPT.

Download weights:

llama-7b-hf, OpenFlamingo-9B, mmgpt-lora-v0-release.pt.

LaVIN

We refer to the official repo LaVIN.

For LaVIN-7B, download weights:

LLaMA-7B, sqa-llama-7b-lite.pth.

For LaVIN-13B, download weights:

LLaMA-13B, sqa-llama-13b-lite.pth.

Lynx-llm

We refer to the official repo Lynx-llm.

Download weights:

EVA01_g_psz14.pt, vicuna-7b, finetune_lynx.pt.

Flan-T5-XXL, BLIP2, InstructBLIP, and Fromage can be downloaded automatically.

Other models will be updated soon.