Not working with LLM #396
Comments
I think this is llama2's inability to follow the prompt and return the response in the correct format. Try another LLM, like Claude 3 (which gives you $5 of free credit initially) or Gemini (which I believe is free until May).
I can reproduce this issue. Looks like the LLM's response is not in the intended format.
I have submitted a pull request for a modification that I believe will improve on the issue. This can be somewhat corrected by modifying the prompt to explicitly inform the model to adhere to the requested output format and not include any extra sections in its output. This is done because some models like llama2 provide sections such as "Explanation:" or "Justification:" after the JSON response. This is not a surefire fix, though: the model (llama2) will still include explanation and justification sections at times despite this explicit instruction in the prompt.
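To illustrate the idea, here is a minimal sketch of the kind of post-processing that can help, assuming the model's reply is expected to contain a single JSON object that may be followed by extra prose such as an "Explanation:" section. The function name and example reply are hypothetical and not part of the Devika codebase:

```python
import json

def extract_json(raw_response: str):
    """Pull the first complete JSON object out of a model reply,
    ignoring any trailing text such as "Explanation:" sections."""
    start = raw_response.find("{")
    if start == -1:
        return None
    decoder = json.JSONDecoder()
    try:
        # raw_decode stops at the end of the first valid JSON value,
        # so anything the model appends afterwards is simply ignored.
        obj, _ = decoder.raw_decode(raw_response[start:])
        return obj
    except json.JSONDecodeError:
        return None

# Example: llama2 appends a justification after the JSON.
reply = '{"step": "init pygame"}\nExplanation: I chose pygame because...'
print(extract_json(reply))  # {'step': 'init pygame'}
```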
@stan1233 are you using Ollama? Use the latest Devika version; it was working with Ollama.
This is exactly my problem.
@brPreetham How can you see the raw Ollama output? I'm new to the project and would like to know where to look to debug exactly what my model is formatting incorrectly.
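Not the project's official debugging path, but one quick way to inspect what a model actually returns is to call Ollama's local HTTP API directly and print the raw text. This is a minimal sketch assuming Ollama is running on its default port 11434 and the model name matches one you have pulled; the prompt is just an example:

```python
import requests

# Send a single non-streaming generation request to the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",  # any model you have pulled locally
        "prompt": 'Respond only with JSON: {"greeting": "..."}',
        "stream": False,    # return one complete reply instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()

# The "response" field holds the model's raw text exactly as generated,
# so you can see any stray "Explanation:" sections it tacks on.
print(resp.json()["response"])
```

From there you can compare the raw text against the format Devika's parser expects.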
I am currently working on the problem here :)
Fixed. Fetch the latest changes. For more, read the changelog on Discord.
I still face this issue with llama3-8B-Instruct-Q8.
Describe the bug
I've tested the prompt "Implement Conway's Game of Life in Python using Pygame", but I always encounter "Invalid response from the model, trying again...".
I've attempted using llama2:70b and qwen:72b; both yielded identical outcomes.