AttributeError: 'GenerationConfig' object has no attribute '_eos_token_tensor' #1299

Open
zzlTim opened this issue Jul 26, 2024 · 3 comments

zzlTim commented Jul 26, 2024

System Info / 系統信息

print(torch.__version__)
2.3.1+cu121
print(torch.version.cuda)
12.1
print(torch.backends.cudnn.version())
8902
print(transformers.__version__)
4.43.2

Who can help? / 谁可以帮助到您?

@abmfy

python api_server.py
Setting eos_token is not supported, use the default one.
Setting pad_token is not supported, use the default one.
Setting unk_token is not supported, use the default one.
Loading checkpoint shards: 100%|█████████████████████| 7/7 [00:05<00:00, 1.24it/s]
INFO: Started server process [229171]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:34408 - "GET / HTTP/1.1" 404 Not Found
2024-07-26 19:50:03.803 | DEBUG | __main__:create_chat_completion:244 - ==== request ====
{'messages': [ChatMessage(role='user', content='你好', name='string', function_call=FunctionCallResponse(name='string', arguments='string'))], 'temperature': 0.8, 'top_p': 0.8, 'max_tokens': 1024, 'echo': False, 'stream': False, 'repetition_penalty': 1.1, 'tools': None}
INFO: 202.120.87.49:55407 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/home/ubuntu/users/zzl/ChatGLM3/openai_api_demo/api_server.py", line 300, in create_chat_completion
    response = generate_chatglm3(model, tokenizer, gen_params)
  File "/home/ubuntu/users/zzl/ChatGLM3/openai_api_demo/utils.py", line 165, in generate_chatglm3
    for response in generate_stream_chatglm3(model, tokenizer, params):
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/ubuntu/users/zzl/ChatGLM3/openai_api_demo/utils.py", line 81, in generate_stream_chatglm3
    for total_ids in model.stream_generate(**inputs, eos_token_id=eos_token_id, **gen_kwargs):
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/ubuntu/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py", line 1145, in stream_generate
    logits_processor = self._get_logits_processor(
  File "/home/ubuntu/anaconda3/envs/chatglm3-demo/lib/python3.10/site-packages/transformers/generation/utils.py", line 871, in _get_logits_processor
    and generation_config._eos_token_tensor is not None
AttributeError: 'GenerationConfig' object has no attribute '_eos_token_tensor'
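
For context: recent transformers releases (the reporter is on 4.43.2) reworked special-token handling, so generate() now stores tensor copies of the special tokens, including _eos_token_tensor, on the GenerationConfig before _get_logits_processor runs. ChatGLM3's custom stream_generate (loaded via trust_remote_code) calls _get_logits_processor directly with a config that never went through that step, hence the AttributeError. Besides downgrading transformers (see the replies below), a minimal sketch of a local workaround, assuming the private _prepare_special_tokens helper of transformers >= 4.42 is available (internal API, may change between releases):

# In ~/.cache/huggingface/modules/transformers_modules/chatglm3-6b/modeling_chatglm.py,
# inside stream_generate(), just before the self._get_logits_processor(...) call:
if not hasattr(generation_config, "_eos_token_tensor"):
    # generate() normally runs this private helper; it fills in the
    # _eos_token_tensor / _pad_token_tensor attributes on the config.
    self._prepare_special_tokens(
        generation_config,
        kwargs_has_attention_mask=True,  # assumption: the inputs include attention_mask
        device=input_ids.device,
    )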

URL
http://202.120.87.24:8000/v1/chat/completions
Request body
{
  "model": "string",
  "messages": [
    {
      "role": "user",
      "content": "你好",
      "name": "string",
      "function_call": {
        "name": "string",
        "arguments": "string"
      }
    }
  ],
  "temperature": 0.8,
  "top_p": 0.8,
  "max_tokens": 0,
  "stream": false,
  "functions": {},
  "repetition_penalty": 1.1
}
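
For reference, the failing request can be reproduced with a plain curl call (a sketch trimmed from the body above; the placeholder name/function_call/functions fields are dropped, and max_tokens is set to 1024 to match the logged request):

curl -X POST http://202.120.87.24:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "string", "messages": [{"role": "user", "content": "你好"}], "temperature": 0.8, "top_p": 0.8, "max_tokens": 1024, "stream": false, "repetition_penalty": 1.1}'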

Information / 问题信息

  • The official example scripts / 官方的示例脚本
  • My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

cd ChatGLM3/openai_api_demo
python api_server.py

Expected behavior / 期待表现

I hope the server can be deployed and run normally.

@penchy-zju

Your transformers version is too new. If you are using ChatGLM3, you can downgrade transformers to 4.40.

@chaoStart

> Your transformers version is too new. If you are using ChatGLM3, you can downgrade transformers to 4.40.

I tried that and switched to 4.40, but chatglm2-6 still will not run.

@qinhuangdaoStation

Downgrading to 4.40.2 worked for me.
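
For anyone hitting the same AttributeError, the pin reported working above can be applied in a pip-managed environment with:

pip install "transformers==4.40.2"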
