ValueError: We cannot use hole to handle this generation the number of desired tokens exceeds the models max length #249

Closed
pseudotensor opened this issue Jun 7, 2023 · 0 comments · Fixed by #255
Comments

@pseudotensor (Collaborator)

  File "/home/ubuntu/h2ogpt/gradio_runner.py", line 1067, in bot
    for output_fun in fun1(*tuple(args_list)):
  File "/home/ubuntu/h2ogpt/generate.py", line 1056, in evaluate
    for r in run_qa_db(query=query,
  File "/home/ubuntu/h2ogpt/gpt_langchain.py", line 1250, in _run_qa_db
    raise thread.exc
  File "/home/ubuntu/h2ogpt/utils.py", line 314, in run
    self._return = self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
    output, extra_return_dict = self.combine_docs(
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 87, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate
    return self.llm.generate_prompt(
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/llms/base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/llms/base.py", line 191, in generate
    raise e
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/llms/base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/llms/base.py", line 436, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
    response = self.pipeline(prompt)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 209, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1109, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1115, in run_single
    model_inputs = self.preprocess(inputs, **preprocess_params)
  File "/home/ubuntu/h2ogpt/h2oai_pipeline.py", line 79, in preprocess
    return super().preprocess(prompt_text, prefix=prefix, handle_long_generation=handle_long_generation,
  File "/home/ubuntu/miniconda3/envs/h2ollm/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 228, in preprocess
    raise ValueError(
ValueError: We cannot use `hole` to handle this generation the number of desired tokens exceeds the models max length
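
This error comes from the handle_long_generation="hole" path in the transformers text-generation pipeline: "hole" drops tokens from the front of the prompt to make room for generation, but it raises this ValueError when the number of requested new tokens alone already reaches the tokenizer's model_max_length, leaving no room to keep any prompt tokens. A minimal workaround sketch, assuming a Hugging Face tokenizer and a hypothetical helper name (this is not necessarily the fix merged in #255), is to clamp max_new_tokens below the context size and pre-truncate the prompt so prompt plus generation fits:

def clamp_for_generation(tokenizer, prompt, max_new_tokens, model_max_length=2048):
    # model_max_length=2048 is an assumed context size; use the actual model limit.
    # Never request more new tokens than the context window can hold.
    max_new_tokens = min(max_new_tokens, model_max_length - 1)
    # Tokenize once to see how much of the window the prompt already uses.
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    keep = model_max_length - max_new_tokens
    if input_ids.shape[-1] > keep:
        # Keep only the last `keep` prompt tokens so prompt + generation fits.
        prompt = tokenizer.decode(input_ids[0, -keep:], skip_special_tokens=True)
    return prompt, max_new_tokens

# Example use with a text-generation pipeline `pipe`:
# prompt, max_new_tokens = clamp_for_generation(pipe.tokenizer, prompt, 512)
# pipe(prompt, max_new_tokens=max_new_tokens, handle_long_generation="hole")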
@pseudotensor pseudotensor self-assigned this Jun 7, 2023
@pseudotensor pseudotensor mentioned this issue Jun 8, 2023
pseudotensor added a commit that referenced this issue Jun 8, 2023
pseudotensor added a commit that referenced this issue Jun 8, 2023