Huge latency for some use case scenarios #3727
Comments
Diagnostic report -
@aavasthy Can you check the Diagnostics and see if the SDK behaved as expected? Did we retry on 429s? Which are the user configurations that can affect this behavior? @AbhiroopLMK 429s have nothing to do with the SDK, this is throttling: https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/troubleshoot-request-rate-too-large?tabs=resource-specific
Thanks @ealsur
@AbhiroopLMK Whether or not an operation (any operation) is throttled (receives a 429 response) is outside the scope of the SDKs and this repository. Evaluating utilized RU per second against the RU budget provisioned on the container is a service behavior; there is nothing the library in this repository can change to prevent a 429 response when the RU budget is exhausted, because it is not a programmatic behavior of the library.
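What the SDK does control is how 429s are retried. A minimal sketch of the relevant Microsoft.Azure.Cosmos v3 client options (the values shown match the documented defaults; they are illustrative, not recommendations for this case):

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Sketch: the v3 SDK retries throttled (429) requests internally.
// These CosmosClientOptions are the user configurations that affect
// that behavior; the values below match the documented defaults.
CosmosClientOptions options = new CosmosClientOptions
{
    // How many times a 429 is retried before a CosmosException surfaces
    MaxRetryAttemptsOnRateLimitedRequests = 9,
    // Total wait-time budget across those retries
    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
};

CosmosClient client = new CosmosClient("<connection-string>", options);
```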
@ealsur Thank you for the explanation. But I still don't understand why the same query fetches proper results in older versions and gets throttled in this specific version.
@AbhiroopLMK SDKs are not responsible for RU calculation and enforcement (returning 429s), so there is nothing we can help with there; you could be getting 429s because of other unrelated operations being performed on the container. We can only analyze whether the SDK behaved as expected in terms of handling the 429s. @aavasthy any results on that? @neildsh can you help understand if the query is ok to be doing 10 requests? Seems like it is prefetching.
Hello @AbhiroopLMK , can you please rewrite the query to:
The
Thanks @neildsh @ealsur. We are using LINQ filters and IQueryable objects to query the database, so the SDK is converting the LINQ statements into these queries. We use filters where the ParentIds are GUIDs that we convert to strings. This conversion, and the end-to-end flow, works fine with older SDKs but not with this version.
Hi @AbhiroopLMK, the newer version of the SDK had some fixes for LINQ that are now translating the
Hi @neildsh, we have to use ToString() in order to convert the GUID to a string for the comparison. We would expect this to work in the latest SDK version, as it has worked in all previous versions.
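For illustration only (the model, property names, and `container` variable below are assumptions, not code from this issue): since JSON has no GUID type, `parentId` is already stored as a string in the document, so a `Guid`-typed .NET property can be compared directly without `ToString()`. The LINQ provider can then translate the filter to a plain equality that the index can serve:

```csharp
using System;
using System.Linq;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Linq;

// Hypothetical document model (names are assumptions for this sketch).
public class Doc
{
    public string id { get; set; }
    public Guid ParentId { get; set; }        // serialized as a JSON string
    public DateTime CreatedDate { get; set; }
}

// Instead of: .Where(d => d.ParentId.ToString() == parentId.ToString()),
// which can translate to ToString(root["parentId"]) = "..." and prevent
// index use, compare the Guid directly:
Guid parentId = Guid.Parse("563741f4-f06f-4091-a048-59470249b6d0");
IQueryable<Doc> query = container.GetItemLinqQueryable<Doc>()
    .Where(d => d.ParentId == parentId)
    .OrderByDescending(d => d.CreatedDate);
```

The intent is that the generated SQL filters on `root["parentId"] = "563741f4-..."` rather than wrapping the field in `ToString()`.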
@AbhiroopLMK can you see if this repros after the last SDK release? It includes a fix for LINQ queries.
@ealsur Tested with Cosmos 3.32.2. The issue still exists.
@neildsh seems like the issue is not related to the LINQ regression; the query in the new version seems to consume far more RUs than on 3.31.2. Any ideas on what else it could be?
@AbhiroopLMK , like I recommended a while back, just remove the
Debug Trace:
Describe the bug
In a particular scenario, where we query for results based on a transaction ID and the query returns null, we see huge latency. The feed iterator diagnostics suggest it is spiralling out of control: it makes 10 calls returning 200 OK followed by 8 calls returning 429. All other scenarios, including ones that return nulls using a different ID, take 2 direct calls with 200 OK.
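For context, a sketch of how this per-page pattern can be observed from the feed iterator (`container` and `Doc` are assumed to exist; this is illustrative, not the reporter's code):

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Sketch: walk the query page by page and dump the status code, request
// charge, and diagnostics for each round trip, which is where the
// 200/429 pattern described above shows up.
QueryDefinition query = new QueryDefinition(
        "SELECT VALUE root FROM root " +
        "WHERE ToString(root[\"parentId\"]) = @pid " +
        "ORDER BY root[\"createdDate\"] DESC")
    .WithParameter("@pid", "563741f4-f06f-4091-a048-59470249b6d0");

FeedIterator<Doc> iterator = container.GetItemQueryIterator<Doc>(query);
while (iterator.HasMoreResults)
{
    FeedResponse<Doc> page = await iterator.ReadNextAsync();
    Console.WriteLine($"Status: {page.StatusCode}, RU: {page.RequestCharge}");
    Console.WriteLine(page.Diagnostics.ToString());
}
```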
To Reproduce
This is the query it executes against Cosmos DB:

```sql
SELECT VALUE root
FROM root
WHERE (ToString(root["parentId"]) = "563741f4-f06f-4091-a048-59470249b6d0")
ORDER BY root["createdDate"] DESC
```
Expected behavior
Should be a quick response back with 2 direct responses 200 OK.
Actual behavior
Getting 10 responses with 200 OK and 8 with 429.
Environment summary
Microsoft.Azure.Cosmos - 3.32.0
.NET Core - 3.1.0
Additional context
Will be attaching the diagnostic report.