ZeroDivisionError: integer division or modulo by zero #166

Open
Wa-lead opened this issue Mar 25, 2023 · 3 comments

Wa-lead commented Mar 25, 2023

Hello,

I have encountered a "division by zero" error while attempting to run generator.py on my Windows machine. I have made the necessary modifications to bitsandbytes and added the required .dll files, but the error persists.

I would appreciate any assistance in identifying the cause of this issue.

Error:
```
Output exceeds the size limit. Open the full output data in a text editor

ZeroDivisionError                         Traceback (most recent call last)
Cell In[1], line 36
     29 if device == "cuda":
     30     model = LlamaForCausalLM.from_pretrained(
     31         BASE_MODEL,
     32         load_in_8bit=LOAD_8BIT,
     33         torch_dtype=torch.float16,
     34         device_map="auto",
     35     )
---> 36     model = PeftModel.from_pretrained(
     37         model,
     38         LORA_WEIGHTS,
     39         torch_dtype=torch.float16,
     40     )
     41 elif device == "mps":
     42     model = LlamaForCausalLM.from_pretrained(
     43         BASE_MODEL,
     44         device_map={"": device},
     45         torch_dtype=torch.float16,
     46     )

File c:\Users\walee\miniconda3\lib\site-packages\peft\peft_model.py:167, in PeftModel.from_pretrained(cls, model, model_id, **kwargs)
    165     no_split_module_classes = model._no_split_modules
    166     if device_map != "sequential":
...
    457 #   - the size of no split block (if applicable)
    458 #   - the mean of the layer sizes
    459 if no_split_module_classes is None:

ZeroDivisionError: integer division or modulo by zero
```
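For context on where the division happens: the bottom of the traceback is inside accelerate's balanced-memory computation, which peft triggers when it builds a device map for the adapter. If PyTorch cannot see any CUDA device (easy to end up with on Windows when bitsandbytes has been patched by hand), the number of devices it divides by can be zero. Below is a minimal sketch of a guard, assuming the usual `BASE_MODEL` / `LORA_WEIGHTS` variables from the generation script; the checkpoint names are placeholders, not the repo's exact code, and the diagnosis is an assumption based on the traceback.

```python
# Sketch of a guard (assumption: cause is device_map="auto" with zero visible GPUs).
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

BASE_MODEL = "decapoda-research/llama-7b-hf"  # placeholder base checkpoint
LORA_WEIGHTS = "tloen/alpaca-lora-7b"         # placeholder LoRA adapter

if torch.cuda.is_available():
    # GPU path: same shape as the snippet in the traceback above.
    model = LlamaForCausalLM.from_pretrained(
        BASE_MODEL,
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map="auto",
    )
    model = PeftModel.from_pretrained(model, LORA_WEIGHTS, torch_dtype=torch.float16)
else:
    # CPU fallback: skip device_map entirely so accelerate never has to
    # balance the model across zero CUDA devices.
    model = LlamaForCausalLM.from_pretrained(BASE_MODEL, low_cpu_mem_usage=True)
    model = PeftModel.from_pretrained(model, LORA_WEIGHTS)
```

If `torch.cuda.is_available()` already returns False on this machine, the CUDA/bitsandbytes setup is probably the thing to fix rather than the loading code.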

@PetreVane

Use ChatGPT 4 to understand why this happens

zsc commented Mar 30, 2023

This happens when batch_size < micro_batch_size * num_gpu. Try lowering the factors on the right (use fewer GPUs or decrease micro_batch_size).

This issue is about generator.py, though, so the cause may be quite different.
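To make the arithmetic behind that concrete: integer division drives the accumulation-step count to zero whenever batch_size < micro_batch_size * num_gpu, and any later modulo or division by it then raises exactly this error. The variable names below are assumptions loosely modelled on the finetuning script, not a quote of it.

```python
# Worked example of the failing configuration zsc describes (names assumed).
batch_size, micro_batch_size, num_gpus = 32, 8, 8

grad_accum_steps = batch_size // micro_batch_size // num_gpus  # 32 // 8 // 8 == 0

try:
    _ = 100 % grad_accum_steps  # stands in for the trainer's step bookkeeping
except ZeroDivisionError as e:
    print(e)  # "integer division or modulo by zero"
```

Raising batch_size, or lowering micro_batch_size or the GPU count, keeps the quotient above zero.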

@zoemaestra

#260 (comment)
