```shell
export CUDA_VISIBLE_DEVICES=0
python main.py \
    --do_train \
    --train_file D:/LLM/yuanzhoulvpi/AdvertiseGen/train.json \
    --validation_file D:/LLM/yuanzhoulvpi/AdvertiseGen/dev.json \
    --preprocessing_num_workers 10 \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path chatglm2-6b_model \
    --output_dir output/adgen-chatglm2-6b-lora_version \
    --overwrite_output_dir \
    --max_source_length 64 \
    --max_target_length 128 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 3000 \
    --logging_steps 10 \
    --save_steps 100 \
    --learning_rate 2e-5 \
    --lora_r 32 \
    --model_parallel_mode True
```

I don't see an epoch count anywhere in these arguments. Does that mean LoRA can only be trained for a single pass?
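For what it's worth, in Hugging Face `TrainingArguments` a positive `--max_steps` overrides `num_train_epochs`, so the run length is set in optimizer steps rather than epochs. The number of passes over the data can still be estimated from the flags above; a minimal sketch, where `dataset_size` is a hypothetical value I picked for illustration (it is not stated in the issue):

```python
# Rough estimate of how many passes (epochs) a step-based run makes over
# the training set. Flags mirror the command in the issue: --max_steps 3000,
# --per_device_train_batch_size 1, --gradient_accumulation_steps 16.
# dataset_size and n_gpus are assumptions, not values from the issue.

def effective_epochs(max_steps: int,
                     per_device_batch: int,
                     grad_accum: int,
                     dataset_size: int,
                     n_gpus: int = 1) -> float:
    """Examples consumed per optimizer step, divided into the dataset size."""
    examples_per_step = per_device_batch * grad_accum * n_gpus
    return max_steps * examples_per_step / dataset_size

# With a hypothetical 100k-example train.json, 3000 steps cover
# 3000 * 1 * 16 = 48,000 examples, i.e. well under one epoch.
print(effective_epochs(3000, 1, 16, dataset_size=100_000))  # 0.48
```

So with a large training file, `--max_steps 3000` at this batch size may not even complete one full epoch; raising `--max_steps` (or dropping it in favor of `--num_train_epochs`) is how you train for more passes.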