PersistenceException when calling executeBatch() on SqlUpdate #2110

Merged · 1 commit into ebean-orm:master on Nov 25, 2020

Conversation

jonasPoehler (Contributor)

Hello Rob,

we have discovered a problem in a routine in our application that fills two batched updates in one for-loop and afterwards calls executeBatch() on both of them. This seems to work perfectly in almost all cases, except when the number of batched updates equals the maxBatchSize (or number % maxBatchSize == 0). In that case we receive the following exception (reproducible with testTwoParallelBatches):

javax.persistence.PersistenceException: No batched statement found for key update uuone set name = ? where 0=1

	at io.ebeaninternal.server.persist.BatchedPstmtHolder.execute(BatchedPstmtHolder.java:91)
	at io.ebeaninternal.server.persist.BatchControl.execute(BatchControl.java:355)
	at io.ebeaninternal.server.persist.DefaultPersister.executeBatch(DefaultPersister.java:135)
	at io.ebeaninternal.server.core.DefaultServer.executeBatch(DefaultServer.java:1958)
	at io.ebeaninternal.server.core.DefaultSqlUpdate.executeBatch(DefaultSqlUpdate.java:167)
	at org.tests.update.TestSqlUpdateBatch.testTwoParallelBatches(TestSqlUpdateBatch.java:41)
	...
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
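
For context, a reproduction along the lines of testTwoParallelBatches might look roughly like the sketch below. The batch size, loop count and the second table (uutwo) are illustrative assumptions; only uuone appears in the exception above.

```java
import io.ebean.DB;
import io.ebean.SqlUpdate;
import io.ebean.Transaction;

public class TwoParallelBatchesRepro {

  public static void main(String[] args) {
    int batchSize = 20; // illustrative; the failure needs count % batchSize == 0

    try (Transaction txn = DB.beginTransaction()) {
      txn.setBatchSize(batchSize);

      SqlUpdate one = DB.sqlUpdate("update uuone set name = ? where 0=1");
      SqlUpdate two = DB.sqlUpdate("update uutwo set name = ? where 0=1"); // second table name assumed

      // fill both batches in the same loop; when the count reaches a multiple of
      // batchSize an implicit flush runs and (pre-fix) clears the statement map
      for (int i = 0; i < batchSize; i++) {
        one.setParameter(1, "name-" + i).addBatch();
        two.setParameter(1, "name-" + i).addBatch();
      }

      // pre-fix this throws:
      // PersistenceException: No batched statement found for key update uuone set name = ? where 0=1
      one.executeBatch();
      two.executeBatch();

      txn.commit();
    }
  }
}
```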

When we were trying to fix this, our first intention was not to throw this exception and instead just return and do nothing (the updates were already flushed). But that brought up the question of why the stmtMap in BatchedPstmtHolder is cleared in the first place. At every flush the map of BatchedPstmt objects is cleared, so a new BatchedPstmt is created on the next call to addBatch(). Since this results in an incorrect array of change counts being returned from executeBatch() (see the second test, testBatchReturnArrayLength), this didn't look correct either.

A possible fix might be not to clear the map, but rather to clear the list of "elements still to flush" for every BatchedPstmt and not recreate the statement after each flush. But since I am not entirely sure about the overall architectural considerations here, I decided to just provide these two tests and maybe start a discussion.
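
Roughly, the idea would be something like the following deliberately simplified, hypothetical model (not the real BatchedPstmtHolder/BatchedPstmt code): keep the per-statement entry across flushes, clear only its list of pending elements, and accumulate the change counts so a later executeBatch() still finds the statement and can report all rows.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, heavily simplified model of the proposed behaviour.
class SimplifiedBatchHolder {

  static final class Entry {
    final List<Object[]> pending = new ArrayList<>();     // bind values still to flush
    final List<Integer> changeCounts = new ArrayList<>();  // counts from earlier flushes
  }

  private final Map<String, Entry> bySql = new LinkedHashMap<>();

  void addBatch(String sql, Object... bindValues) {
    bySql.computeIfAbsent(sql, k -> new Entry()).pending.add(bindValues);
  }

  // implicit flush: execute what is pending but keep the map entries,
  // instead of clearing the whole map and losing the statement key
  void flushAll() {
    bySql.values().forEach(this::flush);
  }

  // explicit executeBatch(): the statement is always still known, and the
  // returned array covers every batched row, not just those since the last flush
  int[] executeBatch(String sql) {
    Entry entry = bySql.get(sql);
    if (entry == null) {
      throw new IllegalStateException("No batched statement found for key " + sql);
    }
    flush(entry);
    return entry.changeCounts.stream().mapToInt(Integer::intValue).toArray();
  }

  private void flush(Entry entry) {
    // the real code would execute the PreparedStatement batch here;
    // for illustration, pretend each pending row changed one record
    entry.pending.forEach(bind -> entry.changeCounts.add(1));
    entry.pending.clear();
  }
}
```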

I hope I have made the problem clear enough; the code might explain it even better.
Looking forward to your thoughts on this!

All the best
Jonas

@rbygrave (Member)

our first intention was not to throw this exception and instead just return and do nothing

Yes.

A possible fix might be not to clear the map, but rather to clear the list of "elements still to flush" for every BatchedPstmt and not recreate the statement after each flush

Yes, that seems like the conceptually better approach. I have started on this and should be able to confirm shortly that it is the right path, but this approach has more impact.

Note that with the second test we can just bump up transaction.setBatchSize(100) so that there isn't an implicit batch flush based on the transaction batch size - perhaps not what you were after with explicit addBatch() / executeBatch() use - hmmm.
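
That workaround would look roughly like this (a sketch only, using the same io.ebean imports as the reproduction above; the loop count is illustrative):

```java
try (Transaction txn = DB.beginTransaction()) {
  // large enough that no implicit flush happens before executeBatch()
  txn.setBatchSize(100);

  SqlUpdate update = DB.sqlUpdate("update uuone set name = ? where 0=1");
  for (int i = 0; i < 30; i++) {
    update.setParameter(1, "name-" + i).addBatch();
  }

  // expect a change-count array covering all 30 batched rows
  int[] changeCounts = update.executeBatch();
  txn.commit();
}
```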

rbygrave added a commit that referenced this pull request Nov 24, 2020
- Changes to generally not close BatchedPstmt on executeBatch() and instead closed on commit/rollback
rbygrave self-assigned this on Nov 24, 2020
rbygrave added the bug label on Nov 24, 2020
rbygrave merged commit ce60eac into ebean-orm:master on Nov 25, 2020
rbygrave added a commit that referenced this pull request Nov 25, 2020
jonasPoehler deleted the bug-ebean/sql_update_batch_flush branch on July 29, 2021