PEFT version conflict #13

Closed
ignaciocearuiz opened this issue Dec 8, 2024 · 1 comment

Comments

@ignaciocearuiz

Hi! I'm facing library conflicts while trying to fine-tune and generate sequences using this repository. Here's a breakdown of my setup and the issue:

Setup

  • Environment: Google Colab Pro (NVIDIA A100 40GB GPU)
  • Steps I followed:
    1. Cloned the repo and ran pip install -r requirements.txt.
    2. Replaced example.json with my train_split.json file (shown below) in the instruction_tuning_dataset folder.
    3. Installed missing dependencies (deepspeed and datasets) manually (commands sketched after this list).
    4. Successfully executed run_it.sh, saving the fine-tuned model in the save_dir folder.
    5. Ran the inference script with:
      CUDA_VISIBLE_DEVICES=0 python ProLLaMA/scripts/infer.py --model "save_dir/sft_lora_model/" --interactive
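
For reference, the installation in steps 1 and 3 amounts to roughly the following (a sketch; the exact working directory is an assumption):

# Step 1: install the repo's pinned dependencies
pip install -r requirements.txt

# Step 3: add the two packages requirements.txt does not cover
pip install deepspeed datasets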

train_split.json Example

[
    {
        "instruction": "[Generate by protein family]",
        "input": "family=<Zinc Fingers family>",
        "output": "Seq=<MSENSDEG...>"
    },
    {
        "instruction": "[Generate by protein family]",
        "input": "family=<Zinc Fingers family>",
        "output": "Seq=<MRHNQAKSLAQ...>"
    }
]
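
As a quick sanity check before fine-tuning (a hypothetical helper, not part of the repo), the file can be validated like this:

import json

# Load the instruction-tuning file and confirm each record carries the
# three keys used in the examples above.
with open("instruction_tuning_dataset/train_split.json") as f:
    records = json.load(f)

for i, rec in enumerate(records):
    missing = {"instruction", "input", "output"} - rec.keys()
    assert not missing, f"record {i} is missing keys: {missing}"

print(f"{len(records)} records look OK")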

Error Encountered

While running the inference script, I encountered the following error:

Traceback (most recent call last):
  File "/content/ProLLaMA/scripts/infer.py", line 42, in <module>
    model = LlamaForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3553, in from_pretrained
    model.load_adapter(
  File "/usr/local/lib/python3.10/dist-packages/transformers/integrations/peft.py", line 137, in load_adapter
    check_peft_version(min_version=MIN_PEFT_VERSION)
  File "/usr/local/lib/python3.10/dist-packages/transformers/utils/peft_utils.py", line 120, in check_peft_version
    raise ValueError(
ValueError: The version of PEFT you are using is not compatible, please use a version that is greater than 0.5.0
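
A quick way to confirm which versions the Colab runtime actually resolved (not part of the original report, but useful before pinning anything) is:

python -c "import peft; print(peft.__version__)"
pip show peft huggingface_hub transformers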

Issue Description

Installing a newer version of PEFT to resolve the error then creates compatibility issues with huggingface_hub. I’m unsure how to resolve this version conflict without breaking other dependencies.

Can someone provide guidance on how to proceed? Thanks in advance!

@Lyu6PosHao
Member

Hello!
I ran infer.py in my own environment successfully. My environment:
transformers==4.43.1
peft==0.13.1
torch==2.5.1

So if you only need to run inference, you don't have to follow requirements.txt strictly; the pinned versions are flexible.
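
For example, one way to reproduce that environment in a fresh Colab runtime (the exact command is an assumption, not part of the reply) would be:

pip install transformers==4.43.1 peft==0.13.1 torch==2.5.1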
