We suggest reporting this to FastDeploy: https://github.com/PaddlePaddle/FastDeploy/issues?q=sort%3Aupdated-desc+is%3Aissue+is%3Aopen
🔎 Search before asking
🐛 Bug (Problem Description)
A question: why is FastDeploy deployment slower than wrapping PaddleOCR inference in a Flask service? Currently, with Flask-wrapped PaddleOCR on 8 P100 GPUs, TPS only reaches about 35, and I was hoping to improve it with a C++ inference pipeline.
Single-image inference:
On the FastDeploy-deployed service, running on GPU (with the paddle backend), a gRPC call takes 0.25 s.
On the Flask-wrapped PaddleOCR service, also on GPU, an HTTP call takes 0.15 s.
Are there other ways to speed this up? Would using dynamic_batching{} help, and how exactly should that parameter be filled in? Many thanks.
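FastDeploy's serving component is built on Triton Inference Server, so `dynamic_batching` is configured in the model's `config.pbtxt`. A minimal sketch is below; the batch sizes and queue delay are illustrative assumptions, not values taken from this issue, and should be tuned against your own load:

```protobuf
# config.pbtxt (sketch) -- enabling Triton dynamic batching for a model.
# max_batch_size must be > 0 for the dynamic batcher to apply.
max_batch_size: 16

dynamic_batching {
  # Batch sizes the scheduler should prefer to form (illustrative values).
  preferred_batch_size: [ 4, 8 ]
  # Maximum time (microseconds) a request may wait in the queue while the
  # scheduler tries to fill a preferred batch (illustrative value).
  max_queue_delay_microseconds: 5000
}
```

Note that dynamic batching raises throughput under concurrent load by merging requests into batches; it does not reduce single-request latency, and can add up to `max_queue_delay_microseconds` of queueing delay per request, so it will not close the 0.25 s vs 0.15 s single-image gap described above.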
🏃‍♂️ Environment
paddleocrV3
🌰 Minimal Reproducible Example
registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-gpu-cuda11.4-trt8.4-21.10