Not working with LLM #396

Closed · Tracked by #469

stan1233 opened this issue Apr 8, 2024 · 10 comments

Comments

@stan1233

stan1233 commented Apr 8, 2024

Describe the bug
I've tested the prompt "Implement Conway's Game of Life in Python using Pygame", but I always encounter "Invalid response from the model, trying again..."

I've attempted using llama2:70b and qwen:72b - both yielded identical outcomes.

[screenshots of the error attached]

@obliviousz
Contributor

I think this is llama2's inability to follow the prompt and return the response in the correct format. Try another LLM, such as Claude 3 (which comes with $5 of free credit initially) or Gemini, which I believe is free until May.

@brPreetham

I can reproduce this issue. Looks like the LLM's response is not in the intended format.

brPreetham referenced this issue Apr 8, 2024 (commit message excerpt):

…in its output. This is done because some models like llama2 provide sections such as "Explanation:" or "Justification:" after the JSON response. This is not a surefire fix, though: the model (llama2) will still include explanation and justification sections at times despite this explicit instruction in the prompt.
@brPreetham

I have submitted a pull request with a modification that I believe will improve the issue.
The Llama2 model is not providing responses in the expected format:
JSON { ...... }
Instead, it includes additional explanation/justification sections in some of its responses, causing the JSON validation to fail:
Llama2 output: JSON{...} explanation: ....

This can be somewhat corrected by modifying the prompt to explicitly instruct the model to adhere to the requested output format.
Please have a look at the prompt.jinja file modification here: ea40769
Check and confirm whether the suggested modification helps.
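
Until the model complies reliably, a defensive option on the parsing side is to extract just the leading JSON object before validating, so any trailing sections are ignored. A minimal sketch of that idea (illustrative only, not Devika's actual validation code; `extract_leading_json` is a hypothetical helper):

```python
import json

def extract_leading_json(text: str):
    """Return the first JSON object in `text`, ignoring anything after it."""
    start = text.find("{")
    if start == -1:
        return None
    try:
        # raw_decode parses one JSON value and ignores whatever follows it,
        # so trailing "Explanation:" / "Justification:" sections are dropped.
        obj, _ = json.JSONDecoder().raw_decode(text[start:])
        return obj
    except json.JSONDecodeError:
        return None

# A llama2-style response with prose appended after the JSON:
raw = '{"plan": ["step 1", "step 2"]} Explanation: I chose these steps because...'
print(extract_leading_json(raw))  # -> {'plan': ['step 1', 'step 2']}
```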

@hqnicolas

@stan1233 are you on Ollama?

Use:
command-r:35b-v0.1-q4_K_M
openchat:7b-v3.5-1210-q5_K_M
mistral-openorca:7b-q6_K

The latest Devika version was working with Ollama; I have frozen it in my repo:

```bash
git clone https://github.com/hqnicolas/devika
cd devika
# create the config file
# edit the docker compose file to point at your Ollama URL
sudo docker compose build
sudo docker compose up
```

@start-life

This is exactly my problem

@daniel-luper

> I have submitted a pull request with a modification that I believe will improve the issue. The Llama2 model is not providing responses in the expected format: JSON { ...... } Instead, it includes additional explanation/justification sections in some of its responses, causing the JSON validation to fail: Llama2 output: JSON{...} explanation: ....
>
> This can be somewhat corrected by modifying the prompt to explicitly instruct the model to adhere to the requested output format. Please have a look at the prompt.jinja file modification here: ea40769 Check and confirm whether the suggested modification helps.

@brPreetham How can you see the raw Ollama output?

I'm new to the project and would like to know where to look to debug exactly what my model is formatting incorrectly.
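
One way to see the raw model output without digging into Devika's internals is to query the Ollama REST API directly with the same model. A minimal sketch (assumes Ollama is running locally on its default port 11434; the prompt below is just a stand-in, not Devika's actual prompt):

```python
import json
import urllib.request

# Query the local Ollama server directly to see the raw, unparsed model output.
payload = {
    "model": "llama2:70b",  # swap in whichever model Devika is configured to use
    "prompt": "Respond with a JSON object describing a plan.",
    "stream": False,        # return one complete response instead of chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# "response" holds the raw model text -- any stray "Explanation:" sections
# that break the JSON validation will be visible here.
print(body["response"])
```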

@hjamet

hjamet commented Apr 22, 2024

I am currently working on the problem here :)

@stan1233
Author

> I am currently working on the problem here :)

Wow, that sounds great!

@ARajgor
Collaborator

ARajgor commented Apr 25, 2024

Fixed. Fetch the latest changes. For more details, read the changelog in Discord.

@ARajgor ARajgor closed this as completed Apr 25, 2024
@iChristGit

> Fixed. Fetch the latest changes. For more details, read the changelog in Discord.

I still face this issue with llama3-8B-Instruct-Q_8
