
base_exception: side-effect in @api.constrains lead to no retry #1642

Closed
guewen opened this issue Aug 12, 2019 · 15 comments
Labels
stale PR/Issue without recent activity, it'll be soon closed automatically.

Comments

@guewen
Member

guewen commented Aug 12, 2019

Introduction

The base_exception module states that we have to call _check_exception in an @api.constrains method.

    @api.multi
    def _check_exception(self):
        """
        This method must be used in a constraint that must be created in the
        object that inherits from base.exception.
        for sale:
        @api.constrains('ignore_exception',)
        def sale_check_exception(self):
            ...
            ...
            self._check_exception()
        """
        exception_ids = self.detect_exceptions()
        if exception_ids:
            exceptions = self.env['exception.rule'].browse(exception_ids)
            raise ValidationError('\n'.join(exceptions.mapped('name')))

And it's what's been done in the sale_exception module:

https://github.com/OCA/sale-workflow/blob/8428446c79981aee281eb1773ff84c5a32d02067/sale_exception/models/sale.py#L49-L53

    @api.constrains('ignore_exception', 'order_line', 'state')
    def sale_check_exception(self):
        orders = self.filtered(lambda s: s.state == 'sale')
        if orders:
            orders._check_exception()

So what?

One of the methods called in _check_exception, detect_exceptions, has side effects on the database: it writes on exception.rule, on a field which must be a Many2many with the record being checked (for instance, sale_ids for sales). The write issues an UPDATE on exception.rule (for the write_date) and an INSERT in the Many2many relation.

A PR has already been created to limit the effect of concurrency here, as 2+ orders with an exception won't be able to call the method at the same time due to the write_date change.

The subject of this issue is about the usage of @api.constrains with side-effects, so what's the problem?

Summarizing:

Any create or write done in a method decorated by @api.constrains will never be retried when a retryable OperationalError (such as "could not serialize access due to concurrent update") occurs, because the OperationalError is shadowed by a ValidationError.
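The shadowing can be reproduced with a self-contained sketch (plain-Python stand-ins, not Odoo's actual internals): a constrains-style wrapper converts every failure into a ValidationError, so a retry loop that only retries OperationalError never gets a chance.

```python
class OperationalError(Exception):  # stands in for psycopg2.OperationalError
    pass

class ValidationError(Exception):   # stands in for odoo's ValidationError
    pass

def constrains_wrapper(check):
    # Odoo-like behavior: any failure inside a constraint surfaces
    # as a ValidationError, losing the original exception type.
    def wrapped():
        try:
            check()
        except Exception as exc:
            raise ValidationError(str(exc)) from exc
    return wrapped

def run_with_retry(func, max_retries=3):
    # Simplified retry loop: only OperationalError is considered retryable.
    for _attempt in range(max_retries):
        try:
            return func()
        except OperationalError:
            continue  # serialization failure: safe to retry
    raise OperationalError("still failing after retries")

calls = []

def flaky_check():
    # First call hits a concurrent update, the second one succeeds.
    calls.append(1)
    if len(calls) == 1:
        raise OperationalError(
            "could not serialize access due to concurrent update")

# Without the wrapper, the retry loop recovers:
calls.clear()
run_with_retry(flaky_check)
assert len(calls) == 2

# With the wrapper, the OperationalError is shadowed and never retried:
calls.clear()
try:
    run_with_retry(constrains_wrapper(flaky_check))
except ValidationError:
    pass
assert len(calls) == 1
```

The second run stops after one attempt: the retry machinery never sees the retryable error type.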

Solutions

We should either ensure there is no write in the check method, or change the documentation and the implementation of sale_exception to call _check_exception in the create and write methods instead of in an @api.constrains. Considering the current code, the second option seems better to me.
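A minimal sketch of the second option, with plain-Python stubs and hypothetical names standing in for the Odoo model: the check runs inside create/write, so any error it raises keeps its original type.

```python
class FakeOrder:
    """Stand-in for a model inheriting base.exception (not real Odoo code)."""

    def __init__(self):
        self.vals = {}
        self.checked = []

    def _check_exception(self):
        # In the real module this calls detect_exceptions() and raises
        # ValidationError on hits; here we only record that it ran.
        self.checked.append(dict(self.vals))

    def create(self, vals):
        self.vals.update(vals)
        self._check_exception()  # check runs here: no exception shadowing
        return self

    def write(self, vals):
        self.vals.update(vals)
        self._check_exception()  # and here
        return True

order = FakeOrder().create({'state': 'draft'})
order.write({'state': 'sale'})
assert len(order.checked) == 2
```

The point of the design is only where the check is invoked: an OperationalError raised during the check now propagates as-is instead of being re-raised as ValidationError.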

cc @florian-dacosta

@florian-dacosta
Contributor

Hi @guewen
Thanks for the detailed report.

If I understand correctly, there are 2 issues.
The fact that the write is always done on the exception rule, instead of on the sale order, causes concurrent update errors more frequently.
The fact that this error can happen in the api.constrains, which leads to no retry (and also to raising an exception not linked to the real error, I guess).

I wonder, with the PR made to limit the effect of concurrency, do we still have concurrency issues in this case? How is it possible?

Also, wouldn't it be possible to avoid the modification of the exception.rule by ignoring the log_access field?
Indeed, we don't really need to update the write_date/write_uid of the exception rule in this particular case; this information makes no sense...
I am not sure if it could have side effect to do something like this, in _detect_exceptions

    rule._log_access = False
    rule.write({'sale_ids': [...]})
    rule._log_access = True

It would avoid all the concurrent update issues. (which is better than lock and wait or retry)
The write_date is not important when only the linked exceptions are updated.

Anyway, even if this concurrent update issue is resolved, I think it would still be better to call the _check_exception method in write and create instead of in the api.constrains.
I am not comfortable with a write inside an api.constrains; as this decorator was not designed for this, it could have other side effects.

@guewen
Member Author

guewen commented Aug 12, 2019

If I understand correctly, there are 2 issues.

Yes

I wonder, with the PR made to limit the effect of concurrency, do we still have concurrency issues in this case? How is it possible?

PR #1638 makes it very unlikely, if not impossible, to happen indeed.

Also, wouldn't it be possible to avoid the modification of the exception.rule, by ignoring the log_access field?

I remember @gurneyalex considered it but discarded it; I don't remember the reason, maybe to limit the change as much as possible.
May people be using record rules on them?

Anyway, even if this concurrent update issue is resolved, I think it would still be better to call the _check_exception method in write and create instead of in the api.constrains.
I am not comfortable with a write inside an api.constrains; as this decorator was not designed for this, it could have other side effects.

Yep, plus it breaks the principle of least astonishment.

@florian-dacosta
Contributor

Well, I guess we agree the _check_exception should be refactored to avoid being in an api.constrains decorator.

Independently, I'd like to have @gurneyalex's opinion about the log_access removal in this specific case, as it seems to me the best solution for this particular case.

May people be using record rules on them?

I did not understand this part.
Even if it changes the behavior a bit, like I said before, in this case the write_date information on the exception.rule makes no sense IMHO.

@hparfr FYI

@hparfr
Contributor

hparfr commented Aug 12, 2019

implementation of sale_exceptions to call _check_exception in create and write methods instead of an @api.constrains.

+1 on this.

@guewen
Member Author

guewen commented Aug 13, 2019

My test of #1638 is not positive.

I am not sure if it could have side effect to do something like this, in _detect_exceptions

    rule._log_access = False
    rule.write({'sale_ids': [...]})
    rule._log_access = True

I tried this, and even tried to put _log_access = False globally on exception.rule, and still had concurrent update errors; I'm looking for various solutions.

@florian-dacosta were you the author of this part?

    rule.write({reverse_field: to_remove_list + to_add_list})

Do you remember why the write happens on the rule with a reverse_field and not directly on the "target" model (such as SaleOrder.exception_ids)? This would greatly reduce the risk of concurrency errors, I guess?

@florian-dacosta
Contributor

@guewen
Yes.
For a while now, there have been 2 ways to check the exception rules:

  • By Python code, as it has been forever
  • By domain (I am not sure since when)
    A "domain rule" is good because it allows checking a rule against a lot of records very quickly.
    When evaluating a domain rule, the module was writing on the rule (I guess it is more efficient / less complex code).

So, if I remember correctly, the module was sometimes writing on the target recordset (Python code rule) and sometimes writing on the rule (domain rule), and the code started to become really complex.
So the idea was to simplify the code again, while keeping the new stuff (mainly the domain rules) and improving performance (writing on the rule).

This could maybe be changed by making 2 writes on the target recordset instead, like:

    to_remove.write({'exception_ids': [(3, rule.id, _)]})
    to_add.write({'exception_ids': [(4, rule.id, _)]})

But from a performance point of view, in case of large recordsets, this may be really worse.
While we currently do just 1 write per rule, we would go back to doing multiple writes on large recordsets, which can lead to performance issues.
@hparfr had done some tests on big databases and saw a significant improvement by writing on the rule.

Anyway, it is strange that _log_access = False did not work.
At this stage, I am not sure one solution is better than another.
On one side, performance is better when writing on the rule (on huge recordsets), which is a case we encounter in many projects here.
On the other side, in an environment with many concurrent exception detections, the performance on large recordsets may be less important...

What could be done is to extract this part:

            to_remove_list = [(3, x.id, _) for x in to_remove]
            to_add_list = [(4, x.id, _) for x in to_add]
            rule.write({reverse_field: to_remove_list + to_add_list})

into a separate method (like _apply_exceptions or something).
This way, it would be really easy to override it to write on the other object.
But that's far from perfect...
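The to_remove/to_add computation behind that write can be sketched in plain Python (the helper name m2m_commands is hypothetical; (3, id, _) and (4, id, _) are the ORM's unlink/link command tuples, shown here with 0 as the placeholder third element):

```python
def m2m_commands(currently_linked_ids, matching_ids):
    """Build one command list updating a rule's record links in a single write.

    (3, id, 0) removes the relation row, (4, id, 0) adds it; both only touch
    the Many2many relation table, whichever side the write is done on.
    """
    to_remove = set(currently_linked_ids) - set(matching_ids)
    to_add = set(matching_ids) - set(currently_linked_ids)
    return ([(3, rec_id, 0) for rec_id in sorted(to_remove)]
            + [(4, rec_id, 0) for rec_id in sorted(to_add)])

# Orders 1 and 2 are linked to the rule; the rule now matches orders 2 and 3:
commands = m2m_commands([1, 2], [2, 3])
assert commands == [(3, 1, 0), (4, 3, 0)]
```

Extracting this into its own method would make it easy to redirect the resulting write to either side of the relation.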

guewen added a commit to guewen/sale-workflow that referenced this issue Aug 13, 2019
The method called by 'sale_check_exception' has a side effect, it writes
on 'exception.rule' + on the Many2many relation between it and
sale.order(.line). When decorated by @api.constrains, any error during
the method will be caught and re-raised as "ValidationError".
This part of code is very prone to concurrent updates as 2 sales having
the same exception will both write on the same 'exception.rule'.
A concurrent update (OperationalError) is re-raised as ValidationError,
and then is not retried properly.

Calling the same method in create/write has the same effect as
@api.constrains, without shadowing the exception type.

Full explanation:
OCA/server-tools#1642
guewen added a commit to guewen/server-tools that referenced this issue Aug 13, 2019
In the documentation.

The method called by '_check_exception' has a side effect, it writes
on 'exception.rule' + on the Many2many relation between it and
the related model (such as sale.order). When decorated by
@api.constrains, any error during the method will be caught and
re-raised as "ValidationError".  This part of code is very prone to
concurrent updates as 2 sales having the same exception will both write
on the same 'exception.rule'.  A concurrent update (OperationalError) is
re-raised as ValidationError, and then is not retried properly.

Calling the same method in create/write has the same effect as
@api.constrains, without shadowing the exception type.

Full explanation:
OCA#1642
@guewen
Member Author

guewen commented Aug 13, 2019

Thanks for the explanation! The pity is that we should only be adding or removing entries in the relation table, so writing on one side or the other should change nothing at all.

I opened 2 pull requests which do not solve the concurrency issue but remove the @api.constrains, so at least such errors are retried properly.

@guewen
Member Author

guewen commented Aug 14, 2019

I tried this, and even tried to put _log_access = False globally on exception.rule and still had concurrent update errors

Found why. Using _log_access removes the write on exception_rule, so there is no concurrent update on exception_rule.
But it fails because every sale order linked to an exception is updated to set the main_exception_id field when one of them changes.

In my scenario, I have sale orders already linked with the configured exception rules, and I create a new sale order (id 693334 in the logs). The logs (filtered on this query) show this:

db_1       | 2019-08-14 09:02:44 UTC [44]: [240-1] LOG:  duration: 1.239 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (693334)
db_1       | 2019-08-14 09:02:44 UTC [44]: [421-1] LOG:  duration: 0.778 ms  statement: UPDATE "sale_order" SET "main_exception_id"=14,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (693306, 693334)
db_1       | 2019-08-14 09:02:44 UTC [44]: [438-1] LOG:  duration: 29.634 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (601345, 670891, 621572, 558721, 548507, 600992, 567585, 600994, 600987, 558372, 578986, 586887, 669358, 605107, 589237, 592314, 559422, 637473, 562768, 571735, 641753, 592352, 683874, 606197, 544382)
db_1       | 2019-08-14 09:02:44 UTC [44]: [464-1] LOG:  duration: 12.401 ms  statement: UPDATE "sale_order" SET "main_exception_id"=4,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (547713, 614563, 564522, 585869, 563694, 677874, 561339, 670870, 681146, 610459, 608412, 561341, 562110)
db_1       | 2019-08-14 09:02:44 UTC [44]: [486-1] LOG:  duration: 32.484 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (605440, 552449, 661765, 610186, 553687, 685330, 537494, 652186, 558495, 606754, 558638, 580402, 617396, 641717, 537402, 611132, 583741, 569919, 624450, 645572, 583750, 656455, 600521, 652243, 641879, 627544, 569947, 558522, 636511, 570721, 676332, 597613, 692209, 612856, 636479, 583294)
db_1       | 2019-08-14 09:02:44 UTC [44]: [523-1] LOG:  duration: 8.712 ms  statement: UPDATE "sale_order" SET "main_exception_id"=11,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (624643, 678871, 597612, 545043, 649111, 536279, 642426, 656318, 630783)
db_1       | 2019-08-14 09:02:44 UTC [44]: [533-1] LOG:  duration: 2.212 ms  statement: UPDATE "sale_order" SET "main_exception_id"=4,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (681146)
db_1       | 2019-08-14 09:02:44 UTC [44]: [543-1] LOG:  duration: 5.282 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (672608, 540796, 669284, 667413)
db_1       | 2019-08-14 09:02:44 UTC [44]: [548-1] LOG:  duration: 1.481 ms  statement: UPDATE "sale_order" SET "main_exception_id"=20,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (597135)
db_1       | 2019-08-14 09:02:44 UTC [44]: [558-1] LOG:  duration: 1.255 ms  statement: UPDATE "sale_order" SET "main_exception_id"=7,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (539829)
db_1       | 2019-08-14 09:02:44 UTC [44]: [560-1] LOG:  duration: 1.370 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (671503)
db_1       | 2019-08-14 09:02:44 UTC [44]: [570-1] LOG:  duration: 28.031 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (677634, 611467, 671503, 567185, 580115, 689185, 669483, 661680, 538545, 580403, 647092, 538552, 641465, 689210, 643390, 582337, 689222, 665296, 689234, 591443, 680916, 638939, 681693, 565727, 679216, 638948, 584940, 562925, 597106, 623507, 611939, 620791, 688749, 606333)
db_1       | 2019-08-14 09:02:44 UTC [44]: [605-1] LOG:  duration: 8.936 ms  statement: UPDATE "sale_order" SET "main_exception_id"=24,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (642401, 626563, 651205, 629643, 568716, 572755, 645236, 634526, 584939, 662206)
db_1       | 2019-08-14 09:02:44 UTC [44]: [624-1] LOG:  duration: 1.144 ms  statement: UPDATE "sale_order" SET "main_exception_id"=12,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (539473)
db_1       | 2019-08-14 09:02:44 UTC [44]: [634-1] LOG:  duration: 1.118 ms  statement: UPDATE "sale_order" SET "main_exception_id"=6,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (630666)
db_1       | 2019-08-14 09:02:44 UTC [44]: [636-1] LOG:  duration: 3.181 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (606777, 672170, 567971, 553893)
db_1       | 2019-08-14 09:02:45 UTC [44]: [647-1] LOG:  duration: 0.645 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (665150)
db_1       | 2019-08-14 09:02:45 UTC [44]: [655-1] LOG:  duration: 1.703 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (541558)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1251-1] LOG:  duration: 1.053 ms  statement: UPDATE "sale_order" SET "main_exception_id"=14,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (693306, 693334)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1273-1] LOG:  duration: 14.993 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (601345, 670891, 621572, 558721, 548507, 600992, 567585, 600994, 600987, 558372, 578986, 586887, 669358, 605107, 589237, 592314, 559422, 637473, 562768, 571735, 641753, 592352, 683874, 606197, 544382)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1299-1] LOG:  duration: 4.811 ms  statement: UPDATE "sale_order" SET "main_exception_id"=4,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (547713, 614563, 564522, 585869, 563694, 677874, 561339, 670870, 681146, 610459, 608412, 561341, 562110)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1320-1] LOG:  duration: 18.247 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (605440, 552449, 661765, 610186, 553687, 685330, 537494, 652186, 558495, 606754, 558638, 580402, 617396, 641717, 537402, 611132, 583741, 569919, 624450, 645572, 583750, 656455, 600521, 652243, 641879, 627544, 569947, 558522, 636511, 570721, 676332, 597613, 692209, 612856, 636479, 583294)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1357-1] LOG:  duration: 4.553 ms  statement: UPDATE "sale_order" SET "main_exception_id"=11,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (624643, 678871, 597612, 545043, 649111, 536279, 642426, 656318, 630783)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1367-1] LOG:  duration: 0.581 ms  statement: UPDATE "sale_order" SET "main_exception_id"=4,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (681146)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1377-1] LOG:  duration: 3.234 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (672608, 540796, 669284, 667413)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1382-1] LOG:  duration: 1.224 ms  statement: UPDATE "sale_order" SET "main_exception_id"=20,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (597135)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1392-1] LOG:  duration: 0.528 ms  statement: UPDATE "sale_order" SET "main_exception_id"=7,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (539829)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1394-1] LOG:  duration: 0.646 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (671503)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1403-1] LOG:  duration: 17.928 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (677634, 611467, 671503, 567185, 580115, 689185, 669483, 661680, 538545, 580403, 647092, 538552, 641465, 689210, 643390, 582337, 689222, 665296, 689234, 591443, 680916, 638939, 681693, 565727, 679216, 638948, 584940, 562925, 597106, 623507, 611939, 620791, 688749, 606333)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1438-1] LOG:  duration: 3.561 ms  statement: UPDATE "sale_order" SET "main_exception_id"=24,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (642401, 626563, 651205, 629643, 568716, 572755, 645236, 634526, 584939, 662206)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1456-1] LOG:  duration: 1.227 ms  statement: UPDATE "sale_order" SET "main_exception_id"=12,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (539473)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1466-1] LOG:  duration: 0.798 ms  statement: UPDATE "sale_order" SET "main_exception_id"=6,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (630666)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1468-1] LOG:  duration: 3.432 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (606777, 672170, 567971, 553893)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1479-1] LOG:  duration: 0.936 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (665150)
db_1       | 2019-08-14 09:02:46 UTC [44]: [1487-1] LOG:  duration: 1.079 ms  statement: UPDATE "sale_order" SET "main_exception_id"=NULL,"write_uid"=1,"write_date"=(now() at time zone 'UTC') WHERE id IN (541558)

@guewen
Member Author

guewen commented Aug 14, 2019

This could maybe be changed by making 2 writes on the target recordset instead, like:

    to_remove.write({'exception_ids': [(3, rule.id, _)]})
    to_add.write({'exception_ids': [(4, rule.id, _)]})

But from a performance point of view, in case of large recordsets, this may be really worse.
While we currently do just 1 write per rule, we would go back to doing multiple writes on large recordsets, which can lead to performance issues.

I'm surprised; I don't really see what would take more time, since what happens is:

case 1, when we write on ExceptionRule.sale_ids:

  • insert or delete in the M2m relation table for the current sale
  • write on exception_rule.write_date in case of _log_access
  • cascading updates on every sale.order linked with the rule (because the computed main_exception_ids depends on exception_ids)

case 2, when we write on SaleOrder.exception_ids:

  • insert or delete in the M2m relation table for the current sale
  • write on sale_order.write_date in case of _log_access
  • only main_exception_ids of the current sale is updated

EDIT: got it, when detect_exceptions is called on a recordset of many records (probably not the main use case though?)
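The two cases can be mimicked with a toy model (plain Python, hypothetical class and function names, not the ORM) that counts how many records get their main_exception_id-like field recomputed:

```python
class Order:
    def __init__(self, oid):
        self.oid = oid
        self.exception_ids = set()   # rule ids linked to this order
        self.recomputed = 0

    def recompute_main_exception(self):
        # Mimics the computed field: pick one linked rule, or None.
        self.main_exception_id = (
            min(self.exception_ids) if self.exception_ids else None)
        self.recomputed += 1

def write_on_rule(rule_id, linked_orders, new_order):
    # Case 1: writing rule.sale_ids makes the ORM recompute the
    # dependent field on *all* orders linked to the rule.
    new_order.exception_ids.add(rule_id)
    for order in linked_orders + [new_order]:
        order.recompute_main_exception()

def write_on_order(rule_id, linked_orders, new_order):
    # Case 2: writing order.exception_ids only recomputes that order.
    new_order.exception_ids.add(rule_id)
    new_order.recompute_main_exception()

# Five orders already carry rule 14; a sixth one triggers the same rule.
old = [Order(i) for i in range(5)]
for o in old:
    o.exception_ids.add(14)
new = Order(99)
write_on_rule(14, old, new)
assert new.recomputed == 1 and all(o.recomputed == 1 for o in old)  # 6 updates

old2 = [Order(i) for i in range(5)]
for o in old2:
    o.exception_ids.add(14)
new2 = Order(99)
write_on_order(14, old2, new2)
assert new2.recomputed == 1 and all(o.recomputed == 0 for o in old2)  # 1 update
```

The extra updates in case 1 are exactly the rows that show up locked in the logs above.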

@florian-dacosta
Contributor

So, I did not personally run the performance tests, so it is hard to be categorical about this.

EDIT: got it, when detect_exceptions is called on a recordset of many records (probably not the main use case though?)

Detecting exceptions on a large recordset seems quite usual to me (a cron that checks all records with exceptions, for instance).
Also, it is on these big checks that we really feel the slowness. If you check 1 record, the performance issue is probably negligible.

But for sure, if main_exception_id is recomputed for the whole linked recordset, that's not good. (I am surprised it is; if we only add or remove 1 record from the rule, it should be recomputed only for this record...)

The thing is, if we proceed rule by rule (like today: for each rule, we check a recordset), it seems much more logical to write on the rule, once per rule.
On the contrary, if we proceed record by record (for each record, we check all rules), it then seems more logical to write on the record...

The domain on the rule kind of forces us to proceed rule by rule (otherwise it makes no sense to have a domain...)

About what to do, like I said earlier, I think we could make the write (on the rule or the linked recordset) in a separate method, so that if, for a specific use case, it makes a big difference in terms of performance, we can still change the behavior easily.

If we continue writing on the rule, we may also manage main_exception_id as a "normal" (not computed) field, since the standard behavior is not the best. We could recompute main_exception_id only on the to_remove and to_add recordsets. We could even launch this recompute at the end of the check of all rules.

I personally don't have a strong opinion about the way to do it. Maybe doing more performance tests, on small and very large recordsets, would be a good idea.
I won't be able to do it before the end of next week though (holidays).

It could be nice to have other points of view also, now that the problems are quite clear...
@hparfr @sebastienbeau @yvaucher @gurneyalex

@guewen
Member Author

guewen commented Aug 14, 2019

@florian-dacosta it's easy to streamline all the records to add/write for a rule while writing on the comodel (sale.order, ...) instead of the rule; working on it.

guewen added a commit to guewen/server-tools that referenced this issue Aug 14, 2019
The goal of the modified method is to create or remove the relationship
(in the M2m relation table) between the tested model (such as
sale_order) and the exception rules. When the ORM writes on
ExceptionRule.sale_ids (using the example of sale_exception), it
first proceeds with these updates:

* an UPDATE on exception_rule to set the write_date
* INSERT or DELETE on the relation table
* but then, as "write" is called on the exception rule, the ORM will
  trigger the api.depends to recompute all the "main_exception_ids"
  of the records (sales, ...) related to it, leading to an UPDATE
  for each sale order

We end up with RowExclusiveLock on such records:

* All the records of the relation table added / deleted for the current
  sale order
* All the records of exception_rule matching the current sale order
* All the records of sale_order related to the exception rules matching
  the current sale order

The first one is expected, the next 2 are not. We can remove the lock on
the exception_rule table by removing `_log_access`, however in any case,
the main_exception_ids computed field will continue to lock many sale
orders, effectively preventing 2 sales orders with the same exception
to be confirmed at the same time.

Reversing the write by writing on SaleOrder instead of ExceptionRule
fixes the 2 unexpected locks. It should not result in more queries: the
"to remove" part generates a DELETE on the relation table for the rule
to remove and the "to add" part generates an INSERT for the rule to add,
both will be exactly the same in both cases.

Related to OCA#1642
Replaces OCA#1638
@guewen
Member Author

guewen commented Aug 14, 2019

@florian-dacosta here it is #1647, thanks for your help!

@yvaucher
Member

@florian-dacosta IMO the concurrency issue on the write is more troublesome than the performance.

@gurneyalex's change could reduce the number of UPDATE queries done on rules plus add a row-level lock, but the lock is not enough with simultaneous exceptions, as we don't only write on the rules but also on more sales than expected. So it's less likely to fail, but the Python exceptions are still not of the right type. And as I understand it, the write has other side effects in sale_exception that could lead to more concurrency errors, with data written where we don't need it: either write_date on the rules or main_exception_id on the sales.

The fix that moves all possible writes out of api.constrains is the way to go to get the proper Python exception in a queue_job context.

For the issue with the field main_exception_id, my take is that we are talking about exceptions, which should happen rarely, last temporarily and be read few times.
To me, the field main_exception_id doesn't need to be stored for performance reasons.

I understand it was made that way to be able to order/search on it in the list view. But it has a major drawback on concurrency. For searching, you can probably use filters on the many2many toward the rules.

@guewen
Member Author

guewen commented Aug 14, 2019

@yvaucher our messages crossed, but my PR fixes the concurrency errors without using any locks, and I claim it does not have performance penalties.

guewen added a commit to guewen/server-tools that referenced this issue Aug 14, 2019
guewen added a commit to guewen/sale-workflow that referenced this issue Aug 14, 2019
guewen added a commit to guewen/server-tools that referenced this issue Aug 14, 2019
The goal of the modified method is to create or remove the relationship
(in the M2m relation tabel) between the tested model (such as
sale_order) and the exception rules. When the ORM writes on
ExceptionRule.sale_ids (using the example of sale_exception), it will
first proceeds with these updates:

* an UPDATE on exception_rule to set the write_date
* INSERT or DELETE on the relation table
* but then, as "write" is called on the exception rule, the ORM will
  trigger the api.depends to recompute all the "main_exception_ids"
  of the records (sales, ...) related to it, leading to an UPDATE
  for each sale order

We end up with RowExclusiveLock on such records:

* All the records of the relation table added / deleted for the current
sale order
* All the records of exception_rule matching the current sale order
* All the records of sale_order related to the exception rules matching
the current sale order

The first one is expected, the next 2 are not. We can remove the lock on
the exception_rule table by removing `_log_access`, however in any case,
the main_exception_ids computed field will continue to lock many sale
orders, effectively preventing 2 sales orders with the same exception
to be confirmed at the same time.

Reversing the write by writing on SaleOrder instead of ExceptionRule
fixes the 2 unexpected locks. It should not result in more queries: the
"to remove" part generates a DELETE on the relation table for the rule
to remove and the "to add" part generates an INSERT for the rule to add,
both will be exactly the same in both cases.

Related to OCA#1642
Replaces OCA#1638
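The difference in lock footprint between the two write directions can be sketched with a toy model (plain Python, not Odoo; all function and table names are illustrative). `rule_sales` maps a rule id to the sale ids whose `main_exception_ids` currently depend on it.

```python
# Toy model of the row locks taken by each write direction.
# Not Odoo code: locks are modelled as sets of (table, id...) tuples.

def locks_writing_on_rule(rule_id, new_sale_id, rule_sales):
    locks = {("exception_rule", rule_id)}          # UPDATE of write_date
    locks.add(("relation", rule_id, new_sale_id))  # INSERT in the m2m table
    # api.depends recompute: every sale related to the rule is updated
    for sale_id in rule_sales[rule_id] | {new_sale_id}:
        locks.add(("sale_order", sale_id))
    return locks

def locks_writing_on_sale(sale_id, rule_ids):
    # Only the m2m rows of this sale and the sale itself are touched.
    locks = {("relation", rule_id, sale_id) for rule_id in rule_ids}
    locks.add(("sale_order", sale_id))
    return locks

# Two orders (30 and 40) hitting the same rule 1, which already
# matches sales 10 and 20:
rule_sales = {1: {10, 20}}
a = locks_writing_on_rule(1, 30, rule_sales)
b = locks_writing_on_rule(1, 40, rule_sales)
c = locks_writing_on_sale(30, [1])
d = locks_writing_on_sale(40, [1])
```

Writing on the rule side, `a` and `b` share the `exception_rule` row and the recomputed sale rows, so one transaction blocks the other; writing on the sale side, `c` and `d` are disjoint and the two confirmations can proceed concurrently.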
guewen added a commit to guewen/server-tools that referenced this issue Aug 15, 2019
guewen added a commit to guewen/sale-workflow that referenced this issue Sep 13, 2019
guewen added a commit to guewen/server-tools that referenced this issue Oct 14, 2019
Tardo pushed a commit to Tecnativa/server-tools that referenced this issue Dec 5, 2019
jaredkipe pushed a commit to hibou-io/oca-sale-workflow that referenced this issue Dec 6, 2021
nicomacr pushed a commit to adhoc-dev/server-tools that referenced this issue Feb 4, 2022
augusto-weiss pushed a commit to adhoc-dev/server-tools that referenced this issue Feb 10, 2022
augusto-weiss pushed a commit to adhoc-dev/server-tools that referenced this issue Feb 16, 2022
augusto-weiss pushed a commit to adhoc-dev/server-tools that referenced this issue Feb 16, 2022
damdam-s pushed a commit to damdam-s/server-tools that referenced this issue Mar 23, 2022
damdam-s pushed a commit to damdam-s/sale-workflow that referenced this issue Mar 23, 2022
gfcapalbo pushed a commit to gfcapalbo/server-tools that referenced this issue Apr 13, 2022
In the documentation.

The method called by '_check_exception' has a side effect, it writes
on 'exception.rule' + on the Many2many relation between it and
the related model (such as sale.order). When decorated by
@api.constrains, any error during the method will be caught and
re-raised as "ValidationError".  This part of code is very prone to
concurrent updates as 2 sales having the same exception will both write
on the same 'exception.rule'.  A concurrent update (OperationalError) is
re-raised as ValidationError, and then is not retried properly.

Calling the same method in create/write has the same effect as
@api.constrains, without shadowing the exception type.

Full explanation:
OCA#1642
JuaniFreedoo pushed a commit to JuaniFreedoo/sale-workflow that referenced this issue May 10, 2022
gorkaegui pushed a commit to gorkaegui/server-tools that referenced this issue May 10, 2022
TxemaSaezUna pushed a commit to TxemaSaezUna/server-tools that referenced this issue May 10, 2022
DsolanoRuiz pushed a commit to DsolanoRuiz/server-tools that referenced this issue May 10, 2022
OSKMCC pushed a commit to OSKMCC/sale-workflow that referenced this issue May 10, 2022
KKKARLOS pushed a commit to KKKARLOS/server-tools that referenced this issue May 10, 2022
TxemaSaezUna pushed a commit to TxemaSaezUna/sale-workflow that referenced this issue May 10, 2022
DsolanoRuiz pushed a commit to DsolanoRuiz/sale-workflow that referenced this issue May 10, 2022
OSKMCC pushed a commit to OSKMCC/server-tools that referenced this issue May 10, 2022
cesar-tecnativa pushed a commit to Tecnativa/server-tools that referenced this issue Jun 6, 2022
cesar-tecnativa pushed a commit to Tecnativa/server-tools that referenced this issue Jun 28, 2022
nikul-serpentcs pushed a commit to nikul-serpentcs/sale-workflow that referenced this issue Oct 10, 2022
The method called by 'sale_check_exception' has a side effect: it writes
on 'exception.rule' and on the Many2many relation between it and
sale.order(.line). When decorated by @api.constrains, any error raised
during the method is caught and re-raised as a "ValidationError".
This part of the code is very prone to concurrent updates, as 2 sales
having the same exception will both write on the same 'exception.rule'.
A concurrent update (OperationalError) is then re-raised as a
ValidationError and is not retried properly.

Calling the same method in create/write has the same effect as
@api.constrains, without shadowing the exception type.

Full explanation:
OCA/server-tools#1642
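The retry problem can be reproduced with a minimal, self-contained sketch. The classes and functions below are simplified stand-ins for psycopg2's OperationalError, Odoo's ValidationError, @api.constrains, and the serialization-failure retry loop in the RPC dispatcher; none of this is actual Odoo code, it only models the behavior described in the commit message.

```python
class OperationalError(Exception):
    """Stand-in for psycopg2.OperationalError (concurrent update)."""

class ValidationError(Exception):
    """Stand-in for odoo.exceptions.ValidationError."""

def constrains(func):
    """Simplified @api.constrains: any error raised by the checker is
    re-raised as ValidationError, shadowing the original type."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            raise ValidationError(str(exc)) from exc
    return wrapper

def retry_on_serialization_failure(func, attempts=3):
    """Simplified dispatcher retry loop: ONLY OperationalError is
    retried; any other exception type propagates immediately."""
    for _ in range(attempts):
        try:
            return func()
        except OperationalError:
            continue  # concurrent update: replay the transaction
    raise OperationalError("gave up")
```

When a flaky check raising OperationalError once is called directly (as from create/write), the loop retries and succeeds; once wrapped in `constrains`, the same failure surfaces as ValidationError and is never retried.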
matiasperalta1 pushed a commit to adhoc-dev/sale-workflow that referenced this issue Nov 17, 2022
sbejaoui pushed a commit to acsone/sale-workflow that referenced this issue Dec 28, 2022
sonhd91 pushed a commit to sonhd91/sale-workflow that referenced this issue Mar 30, 2023
nguyenminhchien pushed a commit to nguyenminhchien/sale-workflow that referenced this issue Dec 15, 2023
nguyenminhchien pushed a commit to nguyenminhchien/sale-workflow that referenced this issue Feb 2, 2024
There hasn't been any activity on this issue in the past 6 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
If you want this issue to never become stale, please ask a PSC member to apply the "no stale" label.

@github-actions github-actions bot added the stale PR/Issue without recent activity, it'll be soon closed automatically. label Feb 25, 2024
nguyenminhchien pushed a commit to nguyenminhchien/sale-workflow that referenced this issue Apr 11, 2024
Deriman-Alonso pushed a commit to Deriman-Alonso/sale-workflow that referenced this issue Aug 5, 2024