How to load a LoRA model into a SentenceTransformer model? #2465
Comments
bump
+1, I have the same use-case. I'm fine-tuning my embedding model using the HF Trainer with PEFT, but when I try to save the checkpoints for sentence-transformers usage, I run into this exact same issue. @tomaarsen can you please take a look?
+1 for the same issue |
Dear UKPlab team,
My team and I are working on a RAG project, and we are currently fine-tuning a retrieval model with the `peft` library. The issue is that once the model is fine-tuned, we cannot load the local config and checkpoints using `SentenceTransformer`. Here is the hierarchy of the local path of the PEFT model:
When I look into the `sentence-transformers` package, the issue comes from the `Transformer.py` class, which does not handle the case where the model path points to a PEFT model:

`config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)`
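To illustrate the failure mode with a minimal sketch (the path below is a placeholder, not our actual checkpoint): a PEFT checkpoint directory contains `adapter_config.json` rather than a full `config.json`, so `AutoConfig` cannot resolve it, while `PeftConfig` can.

```python
from transformers import AutoConfig
from peft import PeftConfig

adapter_path = "path/to/peft-checkpoint"  # placeholder for the local adapter directory

# Works: PeftConfig reads adapter_config.json from the adapter directory
peft_config = PeftConfig.from_pretrained(adapter_path)

# Fails: the adapter directory has no config.json, so AutoConfig raises an error
config = AutoConfig.from_pretrained(adapter_path)
```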
So we have to comment out that `AutoConfig` line, drop the `config` attribute entirely, and in the `_load_model` method keep only this code:

`self.auto_model = AutoModel.from_pretrained(model_name_or_path, cache_dir=cache_dir)`
Sincere request: could you please fix this issue, or tell us the correct way to load a PEFT model with the `SentenceTransformer` class?
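One possible workaround that avoids patching `sentence-transformers` (a minimal sketch with placeholder paths, assuming the adapter was saved with `peft`'s `save_pretrained`, and not an official sentence-transformers API): merge the LoRA weights into the base model, save the merged model as a plain Hugging Face checkpoint, and load that directory with `SentenceTransformer`.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModel, AutoTokenizer
from sentence_transformers import SentenceTransformer, models

adapter_path = "path/to/peft-checkpoint"  # placeholder: directory with adapter_config.json
merged_path = "path/to/merged-model"      # placeholder: output directory for the merged model

# Load the base model named in the adapter config, then apply the LoRA adapter
peft_config = PeftConfig.from_pretrained(adapter_path)
base_model = AutoModel.from_pretrained(peft_config.base_model_name_or_path)
peft_model = PeftModel.from_pretrained(base_model, adapter_path)

# Merge the LoRA weights into the base weights and save a regular HF checkpoint
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained(merged_path)
AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path).save_pretrained(merged_path)

# The merged directory now contains a normal config.json, so Transformer.py can load it unmodified
word_embedding_model = models.Transformer(merged_path)
pooling = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding_model, pooling])
```

Note that merging folds the adapter into the dense weights, so the adapter-only checkpoint should be kept separately if further PEFT training is needed.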