fix: ValueError argument bug

Remove the duplicate `load_in_4bit`/`load_in_8bit` kwargs that conflict with the `quantization_config` argument:

`ValueError: You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.`
This commit is contained in:
muhtasham 2024-08-25 01:37:02 +02:00 committed by GitHub
parent 66edeee5a4
commit d00af39123


@@ -184,8 +184,6 @@ def build_model(model_args, training_args, checkpoint_dir):
     compute_dtype = (torch.bfloat16 if training_args.bf16 else torch.float16)
     model = transformers.AutoModelForCausalLM.from_pretrained(
         model_args.model_name_or_path,
-        load_in_4bit=model_args.bits == 4,
-        load_in_8bit=model_args.bits == 8,
         quantization_config=BitsAndBytesConfig(
             load_in_4bit=model_args.bits == 4,
             load_in_8bit=model_args.bits == 8,
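
The failure mode can be sketched without loading a model. The snippet below is a simplified, hypothetical stand-in for the guard inside `from_pretrained` (not transformers' actual source): when quantization flags arrive both as top-level kwargs and inside `quantization_config`, the library cannot tell which setting wins, so it raises the ValueError quoted in the commit message. After this fix, the flags live only in `BitsAndBytesConfig`.

```python
# Simplified illustration of the conflict check; the real logic lives in
# transformers' from_pretrained and is more involved.

def load_model_check(load_in_4bit=False, load_in_8bit=False,
                     quantization_config=None):
    """Mimic the guard that produced the ValueError in this commit."""
    if quantization_config is not None and (load_in_4bit or load_in_8bit):
        raise ValueError(
            "You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg "
            "when passing `quantization_config` argument at the same time."
        )
    # With no conflict, the effective settings come from one source only.
    if quantization_config is not None:
        return quantization_config
    return {"load_in_4bit": load_in_4bit, "load_in_8bit": load_in_8bit}

# Post-fix call pattern: quantization options only in quantization_config.
cfg = load_model_check(quantization_config={"load_in_4bit": True})
```

With the duplicate kwargs removed, `quantization_config` is the single source of truth for the 4-bit/8-bit choice.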