A LockObjectsRequest implicitly refreshes the objects being locked, and since normally only a small subset of all objects viewed by the client get locked, this means locking an object comes down to a *partial* refresh. Partial refreshes are inherently dangerous because they make it possible for only one end of a containment or eOpposite relationship to be changed in the client's graph. (That is, when the object holding one end of the relationship is refreshed, but the object holding the other end isn't.) Will attach testcase shortly.
Note: this is primarily a problem for sessions with passiveUpdate=false, but I guess in theory it could even be a problem for PU=true because there's no guarantee that the passiveUpdate info will arrive before the refresh.
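To make the hazard concrete, here is a minimal, self-contained sketch (plain Java with invented types, not the CDO API) of how refreshing only one end of a bidirectional, eOpposite-style relationship can leave the client's graph inconsistent:

```java
// Hypothetical illustration only -- not CDO code. It models a client graph
// in which a bidirectional (eOpposite-like) relationship is refreshed on one
// side only, leaving the two ends disagreeing with each other.
import java.util.ArrayList;
import java.util.List;

class Node {
    final String name;
    Node container;                                  // one end of the relationship
    final List<Node> contents = new ArrayList<>();   // the opposite end

    Node(String name) { this.name = name; }
}

public class PartialRefreshDemo {
    public static void main(String[] args) {
        // Client's view: parent contains child.
        Node parent = new Node("parent");
        Node child = new Node("child");
        parent.contents.add(child);
        child.container = parent;

        // Another client moves 'child' under 'newParent' and commits.
        // Our client then locks (and thereby refreshes) only 'child':
        Node newParent = new Node("newParent");
        child.container = newParent;   // the 'child' end is refreshed...

        // ...but 'parent' was not refreshed, so it still lists 'child'.
        boolean consistent = parent.contents.contains(child)
                == (child.container == parent);
        System.out.println("Graph consistent: " + consistent); // prints false
    }
}
```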
Created attachment 181705 [details] Patch v1 (test case only) Test case, as a patch.
I can think of 3 possible solutions so far:

(1) Locking an object does not require refreshing that object. This is a simple, clean solution, but the regrettable side effect is that holding a write lock would no longer by itself guarantee that the commit of the locked object will succeed, as it might still get rejected for being a modification of a revised revision. This isn't breakage, but it would make the semantics of CDO locks a bit odd.

(2) Locking one or more objects requires refreshing ALL viewed objects. This seems like a straightforward solution at first, but it's at odds with the sticky (i.e. timestamp-pegged) behavior that we recently introduced for views with passiveUpdate=false (and supporting audit data). The second drawback is that it turns what's supposed to be a lightweight op (locking one or a few objects) into a heavyweight op that could involve a big data transfer across the network.

(3) Locking a set of objects requires refreshing at least those objects, plus a computed additional set including all objects that were "touched" by changes to the primary set. This would be a very complicated solution. The problem is that the client cannot compute the set of additional objects, since it has no (certain) knowledge about what changes have been made. The server can compute the set (I think...), but the server is in itself unaware of which objects the client is viewing. So in order to avoid the server sending refresh data for objects that aren't even being viewed, the client would still have to identify in its lock request ALL of the objects it's viewing. Example of this approach: say the client wants to lock A. The server checks whether it has a revision of A newer than the requesting client's revision. If so, the server compares that latest revision with the client's revision to see whether there have been changes to any of A's containment and eOpposite relationships. If so, the "other end" of those relationships must, in principle, be included in the response, unless those other ends aren't viewed by that particular client. (But come to think of it, this solution seems even harder to reconcile with sticky views.)
Another option: (4) Locking a set of objects requires refreshing only those objects *initially*, but when receiving the response the client checks for inconsistencies arising from the refresh and requests additional fresh revisions from the server to deal with those. This is similar to (3), except that the computational burden is on the client. The obvious drawback is that it could lead to an unpredictably long sequence of LoadRevision requests.
What about this: when a lock is to be requested and passive updates are off, we just check whether the client's last update time is up to date (i.e. equal to the server's last commit time of the viewed branch). If the client's view is based on old data (even if that client doesn't use/see this old data), we reject the lock request and throw an exception, indicating to the user that a refresh() is needed first.
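A minimal sketch of that check, with invented names (this is not the actual CDO server code): the lock request carries the client's last update time, and the server refuses the lock when that time is older than the branch's last commit time.

```java
// Hypothetical sketch of the proposed server-side check (all names invented,
// not the CDO API): when passive updates are off, a lock request from a view
// based on stale data is rejected outright instead of being refreshed.
public class LockPrecheckSketch {

    static class StaleViewException extends RuntimeException {
        StaleViewException(String message) { super(message); }
    }

    /** Last commit time of the viewed branch, as the server would track it. */
    private long lastCommitTimeOfBranch;

    void handleLockRequest(long clientLastUpdateTime, boolean passiveUpdates) {
        if (!passiveUpdates && clientLastUpdateTime < lastCommitTimeOfBranch) {
            // The client's view is based on old data, even if the client
            // never touched the changed objects. Refuse to grant the lock.
            throw new StaleViewException(
                "View is stale; call refresh() before locking");
        }
        // ...grant the lock without sending any revisions back...
    }

    public static void main(String[] args) {
        LockPrecheckSketch server = new LockPrecheckSketch();
        server.lastCommitTimeOfBranch = 2000L;
        try {
            server.handleLockRequest(1000L, false); // client is behind
        } catch (StaleViewException ex) {
            System.out.println("Lock refused: " + ex.getMessage());
        }
    }
}
```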
(In reply to comment #5)
> When a lock is to be requested and passive updates are off, we just check
> whether the client's last update time is up to date (i.e. equal to the
> server's last commit time of the viewed branch). If the client's view is
> based on old data (even if that client doesn't use/see this old data), we
> reject the lock request and throw an exception, indicating to the user
> that a refresh() is needed first.

Hmm yeah, that's good actually. I defer :-) You want me to implement this, or will you?
(In reply to comment #6)
> (In reply to comment #5)
> > When a lock is to be requested and passive updates are off, we just check
> > whether the client's last update time is up to date (i.e. equal to the
> > server's last commit time of the viewed branch). If the client's view is
> > based on old data (even if that client doesn't use/see this old data), we
> > reject the lock request and throw an exception, indicating to the user
> > that a refresh() is needed first.
>
> Hmm yeah, that's good actually. I defer :-)

No sarcasm??

> You want me to implement this, or will you?

I'd be happy if you can tackle this ;-)
(In reply to comment #7) OK. Saulius suggested the following optimization: instead of simply comparing the client's latest update TS with the latest commit time on the server, let's check whether for any of the objects the client is trying to lock, he (the client) does not have the latest revision. This is better, because it doesn't unnecessarily refuse locks because of unrelated commits by other clients (i.e. commits that didn't affect any of the objects the client is trying to lock).
> Saulius suggested the following optimization: instead of simply
> comparing the client's latest update TS with the latest commit
> time on the server, let's check whether for any of the objects the
> client is trying to lock, he (the client) does not have the latest
> revision.

Isn't that almost the same effort as a complete refresh()?
(In reply to comment #9)
> Isn't that almost the same effort as a complete refresh()?

No, it's still much less effort, because, first, the check will only be performed for the objects being locked, which is typically a small subset of all the objects the client is viewing. And second, even if stale revisions are detected, the server will only throw an exception; it will not send back the new revisions as it would in the case of a refresh. Anyway, the point is that the client shouldn't be prevented from locking a set of objects because of totally unrelated commits. (E.g. commits in other resources by people (colleagues?) he might not even know and who are working on different projects -- a likely scenario if a CDO-based app is used in a large organization.)
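For illustration, here is a hedged sketch of that per-object check (invented names and ids, not the CDO implementation): the server compares, only for the objects named in the lock request, the revision version the client holds against its own latest version, so unrelated commits never block the lock.

```java
// Hypothetical sketch of the suggested optimization (all names invented):
// refuse the lock only if the client holds an outdated revision of one of
// the objects it actually wants to lock.
import java.util.Map;

public class PerObjectStalenessSketch {

    /** Latest revision version known to the server, keyed by object id. */
    private final Map<Long, Integer> latestVersions;

    PerObjectStalenessSketch(Map<Long, Integer> latestVersions) {
        this.latestVersions = latestVersions;
    }

    /**
     * @param requested object id -> revision version the client holds,
     *                  restricted to the objects it wants to lock
     * @return true if the lock may be granted
     */
    boolean canLock(Map<Long, Integer> requested) {
        for (Map.Entry<Long, Integer> entry : requested.entrySet()) {
            Integer latest = latestVersions.get(entry.getKey());
            if (latest != null && latest > entry.getValue()) {
                return false; // client holds a stale revision of a locked object
            }
        }
        return true; // unrelated commits elsewhere do not block the lock
    }

    public static void main(String[] args) {
        PerObjectStalenessSketch server =
            new PerObjectStalenessSketch(Map.of(1L, 3, 2L, 1, 3L, 7));
        // Client locks objects 1 and 2 and holds their latest versions:
        System.out.println(server.canLock(Map.of(1L, 3, 2L, 1))); // true
        // Object 1 was revised on the server (version 3 > 2):
        System.out.println(server.canLock(Map.of(1L, 2)));        // false
    }
}
```

The scoping is the point of the example: object 3 changes on the server, yet a lock on objects 1 and 2 still succeeds.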
Created attachment 182013 [details] Patch including testcases Here's a patch and a few new tests in LockingManagerTest, which make my earlier testcase for this Zilla obsolete.
Created attachment 182015 [details] Patch including testcases

> ... which make my earlier testcase for this Zilla obsolete.

But I still included it :-( Here's another version of the patch, without the obsoleted testcase.
Looks like we'll have to deprecate Bugzilla_294850_Test, since with this patch, a LockObjectsRequest no longer brings in any revisions, and so it can't cause any conflicts.
Your patch looks good. Unfortunately, Bugzilla_294850_Test fails in all scenarios here. I guess it's easy for you to fix. BTW, I fear that my hotfix for bug 329005 conflicts with your LockingManagerTest changes. Can you bring this class up to date? I wonder if we can remove the synchronization on the view's invalidation lock, now that objects don't change anymore...
Created attachment 182128 [details] Patch v3 Yeah, Bugzilla_294850_Test.java has to be removed; see my earlier comment. Uploading a new patch that includes the removal.
(In reply to comment #14)
> BTW, I fear that my hotfix for bug 329005 conflicts with
> your LockingManagerTest changes. Can you bring this class up to date?

See comments in bug 329005.
IIRC patch v3 in *this* bugzilla contains changes to LockingManagerTest.java, but they don't match HEAD anymore due to my changes to the same class (see bug 329005). I guess your changes are important; that's why I asked you to bring patch v3 in sync with the newer HEAD ;-)
Patch v3 in this bugzilla still works with current HEAD.
Please go ahead and commit...
Committed to HEAD
Available in R20110608-1407