TS mutex deadlock under Sidekiq #1051
I'm really not sure why this deadlock is happening… I'd expect the first run to preload the indices, and then everything else to be allowed to run after that. Can you try adding the following to an initialiser? It will preload the indices fully when the app boots, and hopefully that'll avoid the deadlocks when the jobs are being processed by Sidekiq.
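A minimal sketch of such an initialiser, assuming the `ThinkingSphinx::Configuration.instance.preload_indices` call available in Thinking Sphinx 3.x (the same method that is guarded by the mutex in `configuration.rb`); the file path is a conventional choice, not prescribed:

```ruby
# config/initializers/thinking_sphinx.rb
#
# Force index definitions to load once, on the boot thread, before any
# Sidekiq or Puma threads start. After this, the mutex-guarded lazy load
# is a no-op and threads no longer race for it.
Rails.application.config.after_initialize do
  ThinkingSphinx::Configuration.instance.preload_indices
end
```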
Thank you for your help.
Otherwise, concurrent threads (e.g. Sidekiq, Puma) can deadlock while racing to acquire the mutex at https://github.com/pat/thinking-sphinx/blob/v3.4.2/lib/thinking_sphinx/configuration.rb#L78. This bug was fixed in Thinking Sphinx v4.3.0+. See pat/thinking-sphinx#1051 and pat/thinking-sphinx#1132.
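Given the fix landed in v4.3.0, the simplest remedy is pinning the gem accordingly; a sketch assuming a standard Bundler setup:

```ruby
# Gemfile — require a release that includes the mutex fix
gem 'thinking-sphinx', '>= 4.3.0'
```

Then run `bundle update thinking-sphinx`.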
- Ruby: 2.3.1
- Thinking Sphinx: 3.3.0
- Sidekiq: 4.2.10
- Rails: 5.0.0, 5.0.2
I have models with Thinking Sphinx indices. Sidekiq runs the attached worker as several jobs at the same time, and they all block while updating a record, which calls into Thinking Sphinx. As you can see in the attached trace for one of the jobs, ActiveRecord's `update_attribute` invokes the Thinking Sphinx callback, which tries to load the index configuration at thinking_sphinx/configuration.rb#L78.
I have no idea how to use Sidekiq and Thinking Sphinx together. As I understand it, the configuration block inside the mutex takes a relatively long time, but why do all the threads deadlock?
As a test, I commented out `@@mutex.synchronize` and the update succeeded. This issue only affects the Sidekiq workers; calling the same worker manually from the console works fine.
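One plausible failure mode behind traces like this (a hedged illustration, not the confirmed cause in this issue): Ruby's `Mutex` is not reentrant, so if code running inside the guarded configuration load ends up calling back into the same guarded method on the same thread, that thread deadlocks against itself and raises `ThreadError`; `load_config` and `depth` below are hypothetical names for the sketch:

```ruby
# Minimal reproduction of a non-reentrant mutex self-deadlock.
mutex = Mutex.new

def load_config(mutex, depth)
  mutex.synchronize do
    # Inside the lock, loading index definitions can trigger code
    # (e.g. a model callback) that re-enters the same guarded method
    # on the same thread — which Ruby's Mutex refuses.
    load_config(mutex, depth + 1) if depth.zero?
  end
end

begin
  load_config(mutex, 0)
rescue ThreadError => e
  puts "deadlock: #{e.message}"
end
```

Other threads waiting on the same mutex then stall behind the stuck owner, which matches the symptom of every Sidekiq thread blocking at once.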