Issues: huggingface/peft
#2141: When I use peft to finetune llama2, the GPU memory keeps growing (opened Oct 10, 2024 by xuanzhangyang)
#2134: PEFT doesn't inject virtual tokens into the generate forward pass (opened Oct 6, 2024 by Kami-chanw)
#2132: Key mismatch when trying to load a LoRA adapter into an X-LoRA model (opened Oct 5, 2024 by p4arth)
#2123: PeftModelForCausalLM.generate ignores prompt tuning parameters unless use_cache=False (opened Oct 2, 2024 by mattlgarber)
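For context on #2134 and #2123, which touch the same mechanism: prompt tuning works by prepending learned "virtual token" embeddings to the input embeddings, but with KV caching each later generation step passes only the newest token, so the injection must happen only on the first forward pass. A minimal sketch of that logic, with plain Python lists standing in for embedding tensors (the function and parameter names here are illustrative, not PEFT's internal API):

```python
# Illustrative sketch of prompt-tuning injection during cached generation.
# Lists stand in for embedding tensors; names are ours, not PEFT's API.

def inject_virtual_tokens(input_embeds, virtual_embeds, past_length):
    """Prepend soft-prompt embeddings only on the first forward pass.

    With use_cache=True, steps after the first receive only the newest
    token (past_length > 0), so prepending again would be wrong; but
    skipping injection entirely reproduces the reported bug.
    """
    if past_length == 0:
        return virtual_embeds + input_embeds
    return input_embeds

# First pass: virtual tokens [1, 2] are prepended to the prompt [3, 4].
first = inject_virtual_tokens([3, 4], [1, 2], past_length=0)   # [1, 2, 3, 4]
# Later cached step: only the new token [5] goes through unchanged.
later = inject_virtual_tokens([5], [1, 2], past_length=2)      # [5]
```

The reported behaviour suggests the cached path skips the first branch even when there is no past, which is why the parameters only take effect with use_cache=False.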
#2119: Request to Include Named Entity Recognition and Relation Extraction Model Finetuning Examples and Guidance (opened Oct 1, 2024 by HarikrishnanK9) [contributions-welcome, good first issue]
#2115: Ineffective Fine-Tuning Bug: Using get_peft_model() Before Loading LoRA Produces Outputs Identical to the Base Model (opened Sep 30, 2024 by Hoper-J)
#2111: Could not fine-tune Gemma 2 9B with LoRA and FSDP (opened Sep 29, 2024 by imadoualid)
#2107: Optimize DoRA computation when there is no dropout (opened Sep 27, 2024 by BenjaminBossan) [contributions-welcome]
#2105: merge_and_unload docs do not clarify behaviour for quantized base models (opened Sep 26, 2024 by RonanKMcGovern)
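For readers hitting #2105: merging folds each adapter delta into the base weight, conceptually W_merged = W + (alpha / r) * B @ A. A minimal plain-Python sketch of that arithmetic, with nested lists standing in for tensors (function and variable names are ours, not PEFT internals):

```python
# Illustrative sketch of the arithmetic behind LoRA merging:
# W_merged = W + (alpha / r) * B @ A. Plain Python lists stand in for
# tensors; names are ours, not PEFT's API.

def lora_merge(W, A, B, alpha, r):
    """Fold a rank-r LoRA update into a base weight matrix W."""
    scaling = alpha / r
    rows, cols = len(W), len(W[0])
    return [
        [
            W[i][j] + scaling * sum(B[i][t] * A[t][j] for t in range(r))
            for j in range(cols)
        ]
        for i in range(rows)
    ]

# Example: 2x2 base weight, rank-1 update with alpha=2.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # r x in_features
B = [[1.0], [0.0]]  # out_features x r
merged = lora_merge(W, A, B, alpha=2, r=1)
# merged == [[3.0, 2.0], [0.0, 1.0]]
```

The addition requires a full-precision W to add into; with a quantized base model the stored weights are not in that form, so merging typically involves dequantizing first, and what exactly happens in that case is the behaviour the issue asks the docs to spell out.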
#2100: Questions about original_module and modules_to_save.default (opened Sep 26, 2024 by dengchengxifrank)
#2097: loftq_utils.py depends on huggingface_hub.errors, which is missing from some versions of huggingface_hub (opened Sep 25, 2024 by mashoutsider)
#2067: Does peft support custom selection of trainable parameters (for example, some params in word_embeddings)? (opened Sep 13, 2024 by dongdongzhaoUP)
#2060: When using accelerate + deepspeed, the code does not work (opened Sep 11, 2024 by githubwqj)
#2055: Loading LoRA weights for the FLUX pipeline is extremely slow (opened Sep 8, 2024 by nachoal)
#2054: Problem with model.merge_and_unload: the saved model is almost empty (40 KB) (opened Sep 7, 2024 by Oxi84)
#2052: LoRA support for image classification and segmentation (opened Sep 6, 2024 by namrahrehman) [documentation]