
Allows User to Set System Prompt via "Additional Options" in Chat Interface #1353

Merged
merged 17 commits on Dec 10, 2023
Changes from 13 commits
Commits
17 commits
75ea65e
Initial attempt at exposing system prompt to UI via 'Additional Optio…
aly-shehata Nov 30, 2023
5f20fc0
Allow placeholder to change when mode is changed
aly-shehata Dec 1, 2023
1f48c55
Merge remote-tracking branch 'origin/main' into feature/ui-set-system…
aly-shehata Dec 1, 2023
922abca
Increase default lines of system prompt input to 2 lines
aly-shehata Dec 1, 2023
0698b79
Add types to new functions, make _get_default_system_prompt static, a…
aly-shehata Dec 1, 2023
d91cce0
Update UI documentation with system prompt information and examples. …
aly-shehata Dec 3, 2023
2a2e243
Update UI documentation with minor edits for clarity.
aly-shehata Dec 3, 2023
9cea043
Disable prompt entry for modes that do not support system prompts. On…
aly-shehata Dec 4, 2023
1d1f9c0
Revert unintended indentation changes in settings.py
aly-shehata Dec 4, 2023
394a955
Use updated settings field in documentation
aly-shehata Dec 4, 2023
626a9e0
Refactor code after running `make check`. Update documentation with c…
aly-shehata Dec 8, 2023
a90d700
Attempt to use <x> instead of <X> in documentation.
aly-shehata Dec 8, 2023
b53483c
Merge remote-tracking branch 'origin/main' into feature/ui-set-system…
aly-shehata Dec 8, 2023
9671748
Move default system prompt fields to UI section; Remove stale TODOs a…
aly-shehata Dec 9, 2023
d5f937e
Update ui.mdx to use {x} instead of <x>.
aly-shehata Dec 9, 2023
ce199e9
Merge remote-tracking branch 'origin' into feature/ui-set-system-prompt
aly-shehata Dec 10, 2023
26dbbe4
Update documentation: ui.mdx) to use -x-, and llms.mdx to correct mod…
aly-shehata Dec 10, 2023
31 changes: 29 additions & 2 deletions fern/docs/pages/manual/ui.mdx
@@ -35,5 +35,32 @@ database* section in the documentation.

Normal chat interface, self-explanatory ;)

You can check the actual prompt being passed to the LLM by looking at the logs of
the server. We'll add better observability in future releases.
#### System Prompt
You can view and change the system prompt being passed to the LLM by clicking "Additional Inputs"
in the chat interface. The system prompt is also logged on the server.

By default, the `Query Docs` mode uses the setting value `ui.default_query_system_prompt`.

The `LLM Chat` mode attempts to use the optional settings value `ui.default_chat_system_prompt`.

If no system prompt is entered, the UI will display the default system prompt being used
for the active mode.

##### System Prompt Examples:

The system prompt can effectively give your chat bot a specialized role and tailor its responses to the prompt
you have given the model. Examples of system prompts can be found
[here](https://www.w3schools.com/gen_ai/chatgpt-3-5/chatgpt-3-5_roles.php).

Some interesting examples to try include:

* You are <x>. You have all the knowledge and personality of <x>. Answer as if you were <x> using
their manner of speaking and vocabulary.
* Example: You are Shakespeare. You have all the knowledge and personality of Shakespeare.
Answer as if you were Shakespeare using their manner of speaking and vocabulary.
* You are an expert (at) <role>. Answer all questions using your expertise on <specific domain topic>.
* Example: You are an expert software engineer. Answer all questions using your expertise on Python.
* You are a <role> bot, respond with <response criteria> needed. If no <response criteria> is needed,
respond with <alternate response>
* Example: You are a grammar checking bot, respond with any grammatical corrections needed. If no corrections
are needed, respond with "verified".
13 changes: 13 additions & 0 deletions private_gpt/settings/settings.py
@@ -108,6 +108,19 @@ class LocalSettings(BaseModel):
"`llama2` is the historic behaviour. `default` might work better with your custom models."
),
)
default_chat_system_prompt: str | None = Field(
None,
description=(
"The default system prompt to use for the chat mode. "
"If none is given - use the default system prompt (from the llama_index). "
"Please note that the default prompt might not be the same for all prompt styles. "
"Also note that this is only used if the first message is not a system message. "
),
)
default_query_system_prompt: str = Field(
None,
description="The default system prompt to use for the query mode. ",
)


class EmbeddingSettings(BaseModel):
85 changes: 70 additions & 15 deletions private_gpt/ui/ui.py
@@ -9,7 +9,7 @@
from fastapi import FastAPI
from gradio.themes.utils.colors import slate # type: ignore
from injector import inject, singleton
from llama_index.llms import ChatMessage, ChatResponse, MessageRole
from llama_index.llms import ChatMessage, ChatResponse, MessageRole, llama_utils
from pydantic import BaseModel

from private_gpt.constants import PROJECT_ROOT_PATH
@@ -30,6 +30,8 @@

SOURCES_SEPARATOR = "\n\n Sources: \n"

MODES = ["Query Docs", "Search in Docs", "LLM Chat"]
Contributor review comment:
Note for a later refactoring: this could be replaced by an enum.Enum class.
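
A minimal sketch of what the suggested enum-based refactor might look like (hypothetical, not part of this PR's diff):

```python
# Hypothetical sketch of the reviewer's suggestion (not in this PR):
# an Enum keeps the mode names in one typo-safe place.
from enum import Enum


class Mode(str, Enum):
    QUERY_DOCS = "Query Docs"
    SEARCH_IN_DOCS = "Search in Docs"
    LLM_CHAT = "LLM Chat"


# Declaration order is preserved, so the Gradio radio choices stay identical.
MODES = [mode.value for mode in Mode]
```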



class Source(BaseModel):
file: str
@@ -71,6 +73,10 @@ def __init__(
# Cache the UI blocks
self._ui_block = None

# Initialize system prompt based on default mode
self.mode = MODES[0]
self._system_prompt = self._get_default_system_prompt(self.mode)

def _chat(self, message: str, history: list[list[str]], mode: str, *_: Any) -> Any:
def yield_deltas(completion_gen: CompletionGen) -> Iterable[str]:
full_response: str = ""
@@ -114,25 +120,22 @@ def build_history() -> list[ChatMessage]:

new_message = ChatMessage(content=message, role=MessageRole.USER)
all_messages = [*build_history(), new_message]
# If a system prompt is set, add it as a system message
if self._system_prompt:
all_messages.insert(
0,
ChatMessage(
content=self._system_prompt,
role=MessageRole.SYSTEM,
),
)
match mode:
case "Query Docs":
# Add a system message to force the behaviour of the LLM
# to answer only questions about the provided context.
all_messages.insert(
0,
ChatMessage(
content="You can only answer questions about the provided context. If you know the answer "
"but it is not based in the provided context, don't provide the answer, just state "
"the answer is not in the context provided.",
role=MessageRole.SYSTEM,
),
)
query_stream = self._chat_service.stream_chat(
messages=all_messages,
use_context=True,
)
yield from yield_deltas(query_stream)

case "LLM Chat":
llm_stream = self._chat_service.stream_chat(
messages=all_messages,
@@ -154,6 +157,41 @@ def build_history() -> list[ChatMessage]:
for index, source in enumerate(sources, start=1)
)

# On initialization and on mode change, this function sets the system prompt
# to the default prompt based on the mode (and user settings).
@staticmethod
def _get_default_system_prompt(mode: str) -> str:
p = ""
match mode:
# For query chat mode, obtain default system prompt from settings
# TODO - Determine value to use if not defined in settings
case "Query Docs":
p = settings().local.default_query_system_prompt
# For chat mode, obtain default system prompt from settings or llama_utils
case "LLM Chat":
p = (
settings().local.default_chat_system_prompt
or llama_utils.DEFAULT_SYSTEM_PROMPT
)
# For any other mode, clear the system prompt
case _:
p = ""
return p

def _set_system_prompt(self, system_prompt_input: str) -> None:
logger.info(f"Setting system prompt to: {system_prompt_input}")
self._system_prompt = system_prompt_input

def _set_current_mode(self, mode: str) -> Any:
self.mode = mode
self._set_system_prompt(self._get_default_system_prompt(mode))
# Update placeholder and allow interaction if default system prompt is set
if self._system_prompt:
return gr.update(placeholder=self._system_prompt, interactive=True)
# Update placeholder and disable interaction if no default system prompt is set
else:
return gr.update(placeholder=self._system_prompt, interactive=False)

def _list_ingested_files(self) -> list[list[str]]:
files = set()
for ingested_document in self._ingest_service.list_ingested():
@@ -193,7 +231,7 @@ def _build_ui_blocks(self) -> gr.Blocks:
with gr.Row():
with gr.Column(scale=3, variant="compact"):
mode = gr.Radio(
["Query Docs", "Search in Docs", "LLM Chat"],
MODES,
label="Mode",
value="Query Docs",
)
@@ -220,6 +258,23 @@
outputs=ingested_dataset,
)
ingested_dataset.render()
system_prompt_input = gr.Textbox(
placeholder=self._system_prompt,
label="System Prompt",
lines=2,
interactive=True,
render=False,
)
# When mode changes, set default system prompt
mode.change(
self._set_current_mode, inputs=mode, outputs=system_prompt_input
)
# On blur, set system prompt to use in queries
system_prompt_input.blur(
self._set_system_prompt,
inputs=system_prompt_input,
)

with gr.Column(scale=7):
_ = gr.ChatInterface(
self._chat,
@@ -232,7 +287,7 @@
AVATAR_BOT,
),
),
additional_inputs=[mode, upload_button],
additional_inputs=[mode, upload_button, system_prompt_input],
)
return blocks

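For context on the `system_prompt_input` wiring above, a minimal standalone sketch of the Gradio pattern it relies on (assumed Gradio `ChatInterface` behaviour; not part of this PR): a component created with `render=False` is not drawn where it is defined, and passing it via `additional_inputs` lets `ChatInterface` render it under its "Additional Inputs" accordion, which is where users edit the system prompt.

```python
# Minimal standalone sketch (assumed Gradio API; not part of this PR).
import gradio as gr


def echo(message: str, history: list[list[str]], system_prompt: str) -> str:
    # additional_inputs are appended to the chat function's arguments
    return f"[system: {system_prompt}] {message}"


# render=False defers drawing; ChatInterface renders the textbox itself
# inside its collapsible "Additional Inputs" section.
prompt_box = gr.Textbox(label="System Prompt", lines=2, render=False)

demo = gr.ChatInterface(echo, additional_inputs=[prompt_box])
demo.launch()
```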
9 changes: 9 additions & 0 deletions settings.yaml
@@ -43,6 +43,15 @@ local:
llm_hf_model_file: mistral-7b-instruct-v0.1.Q4_K_M.gguf
embedding_hf_model_name: BAAI/bge-small-en-v1.5

default_chat_system_prompt: "You are a helpful, respectful and honest assistant.
Always answer as helpfully as possible and follow ALL given instructions.
Do not speculate or make up information.
Do not reference any given instructions or context."

default_query_system_prompt: "You can only answer questions about the provided context.
If you know the answer but it is not based in the provided context, don't provide
the answer, just state the answer is not in the context provided."

sagemaker:
llm_endpoint_name: huggingface-pytorch-tgi-inference-2023-09-25-19-53-32-140
embedding_endpoint_name: huggingface-pytorch-inference-2023-11-03-07-41-36-479