#2377 - Wrong order in batch save - Fix to transaction depth #2378
Conversation
The guts of this fix is in DefaultPersister.saveAssocOne() with the change to use the new transaction.depthDecrement() method. For our failing test cases this changes the depth from -1, 0 to be 0, 1. We can see the impact of this change in the batch ordering via the logging of: io.ebean.SUM - txn[1001] BatchControl flush ...
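To make the ordering effect concrete, here is a minimal, hypothetical sketch (not the ebean source; names are illustrative): batched persist requests are grouped by transaction depth and flushed in ascending depth order, so a request pushed to depth -1 always executes before independent requests already queued at depth 0, regardless of save order.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative only: model "flush in ascending depth order" with a sorted map.
public class DepthOrderingDemo {
  public static void main(String[] args) {
    // Before the fix: a cascaded save could land at depth -1, so it
    // flushed before independent requests already queued at depth 0.
    Map<Integer, String> beforeFix = new TreeMap<>();
    beforeFix.put(0, "independent requests (saved first)");
    beforeFix.put(-1, "cascaded save (saved later)");
    System.out.println(beforeFix.firstKey()); // prints -1: it jumps the queue

    // After the fix the same requests use depths 0 and 1, so the
    // lowest depth is 0 and nothing jumps ahead of earlier
    // depth-0 requests.
    Map<Integer, String> afterFix = new TreeMap<>();
    afterFix.put(0, "cascaded save");
    afterFix.put(1, "independent requests");
    System.out.println(afterFix.firstKey()); // prints 0
  }
}
```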
@@ -1123,7 +1125,7 @@ private void saveAssocOne(PersistRequestBean<?> request) {
      && !prop.isReference(detailBean)
      && !request.isParent(detailBean)) {
    SpiTransaction t = request.transaction();
-   t.depth(-1);
+   t.depthDecrement();
The magic here: effectively we only decrement the depth when it is already greater than zero. The impact is that it no longer makes the depth negative (and hence no longer orders it before other persist requests at depth zero).
With this change we now need to ensure we reset the depth back to zero after each top-level insert and update occurs.
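As a minimal sketch of the semantics described above (illustrative class name, not the actual ebean SpiTransaction implementation): the decrement is a no-op at depth zero, so the depth can never go negative.

```java
// Illustrative sketch of the depthDecrement() semantics; not ebean source.
public class TransactionDepthDemo {
  private int depth;

  int depth() { return depth; }

  void depthIncrement() { depth++; }

  // Only decrement when the depth is already greater than zero,
  // so the depth never becomes negative.
  void depthDecrement() {
    if (depth > 0) {
      depth--;
    }
  }

  public static void main(String[] args) {
    TransactionDepthDemo t = new TransactionDepthDemo();
    t.depthDecrement();            // no-op at depth 0
    System.out.println(t.depth()); // prints 0, not -1
    t.depthIncrement();
    t.depthDecrement();            // normal decrement from 1 back to 0
    System.out.println(t.depth()); // prints 0
  }
}
```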
Q: Is it OK to increment the depth two lines below, if we did not decrement it?

int decremented = t.depthDecrement(); // returns 0 or 1
saveRecurse(...)
t.depth(decremented);

Might other places where the depth is modified also be affected by this kind of bug (e.g. deleteAssocMany, deleteList, ...)?
Q: Is it OK to increment the depth two lines below, if we did not decrement it?
Yes, I think so. I actually started off with code like what you have (I called decremented "delta"), and that just left things at depth 0 and was ok, but I felt it was better to instead ~ "move everything down 1" in terms of depth.
deleteAssocMany, deleteList ...
I am not sure there is the same ordering issue with deletes yet. I'd prefer to have a failing test first.
@Override
public final int depth() {
  return depth;
}

@Override
public final void depthDecrement() {
This effectively means it won't take the depth negative. So, relative to other independent persist requests also at depth zero, these will execute in the order that they are persisted.
Fixes testMultiSave4() and testMultiSave4_usingRelation4()
@@ -417,7 +418,7 @@ public void update(EntityBean entityBean, Transaction t) {
  } else {
    update(req);
  }
+ req.resetDepth();
Reset the depth after each top level insert and update. This means that all our top level requests at depth zero are treated in the order that they are saved().
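A hypothetical sketch of the effect (illustrative names, not the ebean source): without the reset, a cascade during one top-level save could leave the transaction at a non-zero depth, so the next top-level save would not start at depth zero and would lose its place in save order.

```java
// Illustrative sketch of resetDepth() after a top-level persist; not ebean source.
public class ResetDepthDemo {
  private int depth;

  int depth() { return depth; }

  // Stand-in for a top-level save whose cascade bumped the depth.
  void saveWithCascade() {
    depth++;
  }

  void resetDepth() { depth = 0; }

  public static void main(String[] args) {
    ResetDepthDemo t = new ResetDepthDemo();
    t.saveWithCascade();
    System.out.println(t.depth()); // prints 1: leftover depth from the cascade
    t.resetDepth();                // done after each top-level insert/update
    System.out.println(t.depth()); // prints 0: next top-level save starts at depth zero
  }
}
```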
FYI @rPraml
With this additional change testMultiSave4 looks like BatchControl flush [Relation3:0 i:1, Relation2:1 i:1, Relation1:100 i:1] ... the resulting ordering is 0, 1 and 100.
return false;
BatchedBeanHolder beanHolder = getBeanHolder(request, alreadyInBatch);
int ordering = depthOrder.orderingFor(depth);
return beanHolder.getOrder() > ordering;
Can you check whether what I've done here really makes sense? ;)
Yes, I think it looks ok, but I probably need to put a breakpoint here and see some examples, so I'll do that.
Ah, so it turns out we can revert this back (we don't need this change) now that we have the other change to "fix the depth" per se. So I'll revert this part of the change.
With the fix to the depth we no longer need these changes to BatchControl
This extends the fix and tests in PR #2377 ... with an additional fix to transaction depth.
Our failing tests failed because -1 will always order before 0. The tests got the -1 ordering via saveAssocOne(), so our fix effectively shifts the depth computation by +1.