
Deploying with FastDeploy is slower than wrapping PaddleOCR inference with Flask #13780

Open
3 tasks done
jzm930203 opened this issue Aug 29, 2024 · 1 comment

Comments

@jzm930203

🔎 Search before asking

  • I have searched the PaddleOCR Docs and found no similar bug report.
  • I have searched the PaddleOCR Issues and found no similar bug report.
  • I have searched the PaddleOCR Discussions and found no similar bug report.

🐛 Bug (Description)

Could someone explain why deploying with FastDeploy is slower than wrapping PaddleOCR inference in a Flask service? With the current Flask-wrapped PaddleOCR setup on 8 P100 GPUs, TPS only reaches about 35, so I was hoping to improve this with the C++ inference pipeline.
Single-image inference:
  • On the FastDeploy-deployed service, running on GPU with the paddle backend, a gRPC call to the service takes about 0.25 s.
  • On the Flask-wrapped PaddleOCR service, also on GPU, an HTTP call takes about 0.15 s.
Are there any other ways to speed this up? Would dynamic_batching {} help, and how exactly should that setting be filled in (see the sketch below)? Many thanks.
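
For context on the dynamic_batching question: FastDeploy's serving component is based on Triton Inference Server, so dynamic batching is configured in the model's config.pbtxt. Below is a minimal sketch using standard Triton syntax; the max_batch_size, preferred batch sizes, and queue delay are illustrative assumptions that would need tuning for this model and hardware, not values taken from this deployment.

  # Sketch of a dynamic-batching section in a Triton config.pbtxt (values are assumptions).
  max_batch_size: 16
  dynamic_batching {
    preferred_batch_size: [ 4, 8 ]       # batch sizes the scheduler prefers to form
    max_queue_delay_microseconds: 5000   # wait up to 5 ms for more requests before running a smaller batch
  }

Note that dynamic batching only helps if the exported model accepts batched input (max_batch_size > 0) and the clients send enough concurrent requests for the scheduler to actually form batches; a single sequential client will see no benefit.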

🏃‍♂️ Environment

PaddleOCR v3

🌰 Minimal Reproducible Example

registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-gpu-cuda11.4-trt8.4-21.10

@GreatV
Collaborator

GreatV commented Aug 30, 2024
