| Summary: | Flush impact optimistic locking, strategy to get current data should have no functional impact | ||
|---|---|---|---|
| Product: | z_Archived | Reporter: | Sebastien Tardif <SebTardif> |
| Component: | Eclipselink | Assignee: | Nobody - feel free to take it <nobody> |
| Status: | NEW --- | QA Contact: | |
| Severity: | normal | ||
| Priority: | P2 | CC: | douglas.clarke, hansharz_bugzilla, peter.krogh, tom.ware |
| Version: | unspecified | ||
| Target Milestone: | --- | ||
| Hardware: | PC | ||
| OS: | Windows XP | ||
| Whiteboard: | |||
|
Description
Sebastien Tardif
Flush increments the version. If flush did not increment the version, and the version were instead incremented only at commit, this issue would not occur. When many flushes occur, they all execute in the same transaction, so the version cannot change due to an external transaction; optimistic locking therefore does not require the version check to be performed more than once.
When the version is a field of the persistent instance, it can be manipulated by application code and by merge, which creates issues.
So there are two possible fixes to handle our use case of doing a merge after a flush:
1- Have flush skip the version check and perform it only at commit. This would also help applications that do not expect an OptimisticLockException before commit.
2- Ignore the version of the persistent object after the first flush, then set it correctly at commit.
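To make the failure mode concrete, here is a minimal standalone simulation of it, not EclipseLink code: `Entity`, `MiniUnitOfWork`, and the exception message are hypothetical names, and flush is reduced to "increment the managed copy's version", mirroring the behavior described above. A detached copy merged once succeeds; after a flush, merging the same detached copy fails even though no external transaction touched the row:

```java
// Hypothetical miniature model of the failure mode: flush bumps the managed
// copy's version, so a second merge of the same detached copy now "conflicts".
final class Entity {
    long id;
    long version;
    String data;
    Entity(long id, long version, String data) { this.id = id; this.version = version; this.data = data; }
    Entity copy() { return new Entity(id, version, data); }
}

final class MiniUnitOfWork {
    private final Entity managed;           // the working copy inside the transaction

    MiniUnitOfWork(Entity fromDb) { this.managed = fromDb.copy(); }

    // merge a detached copy into the working copy, with an optimistic version check
    void merge(Entity detached) {
        if (detached.version != managed.version) {
            throw new IllegalStateException("OptimisticLockException: expected version "
                    + managed.version + " but detached copy has " + detached.version);
        }
        managed.data = detached.data;
    }

    // flush writes pending changes and increments the version immediately,
    // mirroring the EclipseLink behavior this bug describes
    void flush() { managed.version++; }
}

public class FlushMergeDemo {
    public static void main(String[] args) {
        Entity db = new Entity(1L, 5L, "original");
        MiniUnitOfWork uow = new MiniUnitOfWork(db);

        Entity detached = db.copy();        // client holds a detached copy at version 5
        detached.data = "first change";
        uow.merge(detached);                // fine: versions match (5 == 5)

        uow.flush();                        // managed version becomes 6 before commit

        detached.data = "second change";
        try {
            uow.merge(detached);            // spurious failure: 5 != 6, same transaction
            System.out.println("merge succeeded");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Both proposed fixes remove this spurious failure: fix 1 by not checking at flush time, fix 2 by tolerating the pre-flush version within the same transaction.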
The fix we have provided so far follows approach 2; below is a continuation of it, covering all the scenarios we have found. The code should execute from a DescriptorEventAdapter when an update occurs:
RepeatableWriteUnitOfWork repeatableWriteUnitOfWork = (RepeatableWriteUnitOfWork) descriptorEvent.getSession();
Field field = RepeatableWriteUnitOfWork.class.getDeclaredField("cumulativeUOWChangeSet");
field.setAccessible(true);
UnitOfWorkChangeSet unitOfWorkChangeSet = (UnitOfWorkChangeSet) field.get(repeatableWriteUnitOfWork);
// if null then we never flushed before, so we have nothing to fix
if (unitOfWorkChangeSet != null) {
    ObjectChangeSet objectChangeSet = (ObjectChangeSet) unitOfWorkChangeSet.getObjectChangeSetForClone(descriptorEvent.getSource());
    if (objectChangeSet != null) {
        Long initialWriteLockValue = (Long) objectChangeSet.getInitialWriteLockValue();
        if (initialWriteLockValue != null) {
            RelationalDescriptor descriptor = (RelationalDescriptor) descriptorEvent.getDescriptor();
            // CTVersionLockingPolicy is our application's custom locking policy
            CTVersionLockingPolicy ctVersionLockingPolicy = (CTVersionLockingPolicy) descriptor.getOptimisticLockingPolicy();
            Long newLockValue = (Long) ctVersionLockingPolicy.lockValueFromObject(descriptorEvent.getSource());
            Long latestUsedLockValue = (Long) objectChangeSet.getWriteLockValue();
            if (latestUsedLockValue == null) {
                latestUsedLockValue = initialWriteLockValue;
            }
            // reject versions older than the initial read or ahead of the next write
            if (newLockValue.compareTo(initialWriteLockValue) < 0
                    || newLockValue.compareTo(latestUsedLockValue + 1) > 0) {
                throw new OptimisticLockException();
            }
            // keep track of the version changes everywhere
            // (versionNoRelationalMappings holds the version field's mappings, defined elsewhere in the adapter)
            descriptorEvent.updateAttributeWithObject(versionNoRelationalMappings.iterator().next().getAttributeName(), latestUsedLockValue + 1);
            descriptorEvent.getQuery().getTranslationRow().put(versionNoRelationalMappings.iterator().next().getField(), latestUsedLockValue);
            descriptorEvent.getRecord().put(versionNoRelationalMappings.iterator().next().getField(), latestUsedLockValue + 1);
            objectChangeSet.setWriteLockValue(latestUsedLockValue + 1);
        }
    }
}
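The acceptance window that the event code enforces can be distilled into a standalone check, shown here in plain Java with no EclipseLink types; the method name and the sample values are ours. A merged version is tolerated if it is no older than the version read at the start of the transaction and no newer than the last flushed version plus one:

```java
public class LockWindowCheck {
    /**
     * Returns true if the version carried by a merged object is plausible within
     * the current transaction: no older than the version read at the start, and
     * no newer than the last flushed version plus one.
     */
    static boolean isAcceptable(long newLockValue, long initialWriteLockValue, Long latestUsedLockValue) {
        // before any flush there is no write-lock value yet; fall back to the initial read
        long latest = (latestUsedLockValue != null) ? latestUsedLockValue : initialWriteLockValue;
        return newLockValue >= initialWriteLockValue && newLockValue <= latest + 1;
    }

    public static void main(String[] args) {
        // initial version read was 5; one flush already wrote version 6
        System.out.println(isAcceptable(5, 5, 6L));  // true: the pre-flush value is tolerated
        System.out.println(isAcceptable(7, 5, 6L));  // true: matches the next write (6 + 1)
        System.out.println(isAcceptable(4, 5, 6L));  // false: older than the initial read
        System.out.println(isAcceptable(8, 5, 6L));  // false: ahead of anything this transaction wrote
    }
}
```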
cdelahun said: EclipseLink increments the version number in the object at the same time it issues the update statement to the database, as you have seen.

Reply: We keep finding regressions due to flush. The new regressions are:
1- The version is incremented more than once per transaction. This is pointless, and it also makes it impossible to know in advance what the version in the database will be before commit happens. The version is needed before commit, for example to set it on a browser response or in a JMS message.
2- Similarly, we don't know whether myPersistentObject.getVersion() returns the incremented version or the original value when called before commit. When the strategy to get current data from a query used conform-in-unit-of-work, getVersion() always returned the NOT-yet-incremented value before commit occurred; now it is random!

Setting target and priority. See the following page for details of what this means: http://wiki.eclipse.org/EclipseLink/Development/Bugs/Guidelines

The bug still exists in Build 2.1.1.v20100817-r8050. Steps to reproduce:
- Use Oracle as the backend with the Oracle10Platform
- Create a simple persistent class Foo with a @Version field
- Create two instances of Foo
- Detach both objects from the session
- Change a value in Foo1 and merge it (works fine)
- Change a value in Foo2 and merge it
Result: an OptimisticLockException in MergeManager.mergeChangesOfCloneIntoWorkingCopy. Investigation showed that the version field of Foo2 is somehow updated twice. This happens only with an Oracle backend; tests on H2, Postgres, and DB2 worked fine. The suggested fix (adapted to the current code state) works for us.

From James S.: Comparing with the original value in the change set, and not throwing the error, is probably fine. Not incrementing the lock version, or not throwing lock errors on flush, would be wrong and cause lots of issues.
Resending the object back to the client after flush is preferable; otherwise repeated merges could cause the user's own changes to be overwritten (i.e. anything set on the server, anything set through events, or anything returned from the database will be cleared if you remerge the old data from the client). The EclipseLink project has moved to GitHub: https://github.com/eclipse-ee4j/eclipselink