Llama 3.1 fine-tune deployment error #3443
It looks like you're encountering a FailedPrecondition error while trying to deploy the LLaMA model using vLLM. This typically indicates that the model server is failing to start up or execute properly.
Expected Behavior
To be able to deploy the example notebook without modifications
Actual Behavior
All of the log links in the output show an empty log.
Steps to Reproduce the Problem
Specifications