Latency under load #200
This was referenced May 17, 2021.

sadiqj pushed a commit to ocaml-multicore/retro-httpaf-bench that referenced this issue on May 19, 2021:
* cohttp_lwt_unix: make benchmark more realistic.
  This change is based on inhabitedtype/httpaf#200, in which @vouillon reasonably points out that it is unrealistic to assume that the request handler responds immediately. The benchmark is modified to yield inside the request handler in order to simulate the effect of a hypothetical db request.
  Signed-off-by: Marcello Seri <[email protected]>
* Add yield in the handler of the lwt examples.
  Signed-off-by: Marcello Seri <[email protected]>
* Add yield in the nethttp-go example.
  The function is documented here: https://golang.org/pkg/runtime/#Gosched
  Signed-off-by: Marcello Seri <[email protected]>
* effects_benchmark: reindent and comment out unused code, mainly to reduce the number of warnings in the build.
  Signed-off-by: Marcello Seri <[email protected]>
The benchmark in mirage/ocaml-cohttp#328 seems unrealistic to me, since the request handler responds immediately. Usually, some database requests are performed to build the response. So, I have investigated what happens when an `Lwt.yield` is added inside the request handler. The code is here: https://gist.github.com/vouillon/5002fd0a8c33eb0634fb08de6741cec0
I'm using the following command to perform the benchmark. Compared to mirage/ocaml-cohttp#328, I had to significantly raise the request rate to overwhelm the web servers.
Cohttp is significantly slower than http/af, as expected. But http/af seems to exhibit some queueing as well, with a median latency of almost 10 seconds.
So, I'm wondering whether I'm doing something wrong. Or maybe this is just to be expected, since there is no longer any backpressure to limit the number of concurrent requests being processed?