CacheAdapter.adapting is updated by addAdapter and checked by setTarget. I don't know when (or whether) setTarget is ever called, but addAdapter is invoked when creating a stereotype application, so if two threads are creating stereotype applications at the same time, the first one to complete addAdapter will reset the adapting flag to false before the other completes.

Environment: Eclipse 3.5.2, UML2 3.0.1, org.eclipse.uml2.common_1.5.0.v200905041045

I have no indication that this is actually causing any issues in practice; I just found it while investigating another issue.
Please read 'adapt(Notifier)' where I wrote 'addAdapter'.
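To make the suspected interleaving concrete, here is a minimal, hypothetical sketch (not the actual org.eclipse.uml2.common code) of one shared flag on a singleton adapter, written by adapt() and read by setTarget():

// Minimal, hypothetical sketch (not the actual CacheAdapter source) of the
// interleaving described above: one shared flag on a singleton adapter.
public class SharedFlagSketch {

    // Shared across all callers because the adapter is a singleton.
    private boolean adapting = false;

    public void adapt(Object notifier) {
        adapting = true;
        try {
            // ... attach this adapter to the notifier's contents; this in turn
            // triggers setTarget() callbacks that consult 'adapting' ...
        } finally {
            // If thread A reaches this line while thread B is still inside the
            // try block above, B's remaining setTarget() callbacks see a stale
            // 'false' even though B is still adapting.
            adapting = false;
        }
    }

    public void setTarget(Object newTarget) {
        if (adapting) {
            // ... behavior intended only while an adapt() is in progress ...
        }
    }
}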
Created attachment 187793: proposed changes

Do these changes address the problem? To try them, pass -Dorg.eclipse.uml2.common.CacheAdapter.ThreadLocal as a VM argument when running Eclipse.
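The sketch below shows only the general shape of such a change; the real patch is in attachment 187793, and its names and exact property handling may differ. When the system property is enabled, the singleton accessor hands out one instance per thread instead of a single shared one:

// Sketch of the general technique only (not the attached patch).
public class ThreadLocalCacheSketch {

    // Whether the option is keyed on the property merely being defined or on
    // the value "true" is an assumption here; a later comment says "set to true".
    private static final boolean USE_THREAD_LOCAL =
        Boolean.getBoolean("org.eclipse.uml2.common.CacheAdapter.ThreadLocal");

    private static final ThreadLocalCacheSketch SHARED_INSTANCE =
        new ThreadLocalCacheSketch();

    private static final ThreadLocal<ThreadLocalCacheSketch> THREAD_INSTANCE =
        new ThreadLocal<ThreadLocalCacheSketch>() {
            @Override
            protected ThreadLocalCacheSketch initialValue() {
                return new ThreadLocalCacheSketch();
            }
        };

    public static ThreadLocalCacheSketch getInstance() {
        // Per-thread instances mean each thread also gets its own 'adapting'
        // flag, which removes the race described in this bug.
        return USE_THREAD_LOCAL ? THREAD_INSTANCE.get() : SHARED_INSTANCE;
    }
}

The trade-off, discussed further down in this thread, is that each thread then also gets its own caches and inverse cross-referencer.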
Kenn, sorry I never replied. I have been running with similar changes for a year now and am looking forward to seeing them in an upcoming release. Cheers.
The changes have been committed and pushed to git.
The changes are now available in an integration build.
With thread-local cache adapters, now if multiple threads actually need to coöperatively share access to a model, the entire CacheAdapter state, including the inverse cross-referencer, will be replicated in each thread. There is no way for the threads to share that state. Even if one thread just hands off a UML model to another without the first thread ever needing to look at it again, a new cache will be created.

Also, each thread's cache adapter has to be added to the same adapter lists of the same objects when threads are sharing a model. That still has to be protected by synchronization at the application level.

Instead of thread-local cache adapters, can we build synchronization into the CacheAdapter to protect access to its internal caches and the inverse cross-referencer? Would there be enough time in this release if I were to volunteer to tackle that? Do we have a test suite with which to test such a solution against the thread-local adapters?

Or, maybe the replication of caches isn't cause for concern? How common is the case of multiple threads sharing models compared to multiple threads working on separate models? How big is a cache adapter, anyway, in comparison with the size of the actual model content?
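One possible shape of the synchronization-based alternative being proposed here, as a purely illustrative sketch (none of these names are existing UML2 API): keep a single shared adapter, but serialize access to its internal caches and to the inverse cross-referencer.

import java.util.HashMap;
import java.util.Map;

// Purely illustrative: a single shared cache whose internal state is
// protected by locking instead of being replicated per thread.
public class SynchronizedCacheSketch {

    private final Map<Object, Object> cache = new HashMap<Object, Object>();

    public synchronized Object get(Object key) {
        return cache.get(key);
    }

    public synchronized void put(Object key, Object value) {
        cache.put(key, value);
    }

    public synchronized void clear(Object key) {
        cache.remove(key);
    }

    // The inverse cross-referencer mutates shared maps while walking the model,
    // so its updates would need the same lock (or a finer-grained scheme).
    public synchronized void updateCrossReferences(Runnable update) {
        update.run();
    }
}

Whether coarse locking like this performs acceptably for large models is exactly the kind of question a comparative test suite would help answer.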
(In reply to comment #6)
> With thread-local cache adapters, now if multiple threads actually need to
> coöperatively share access to a model, the entire CacheAdapter state,
> including the inverse cross-referencer, will be replicated in each thread.
> There is no way for the threads to share that state. Even if one thread just
> hands off a UML model to another without the first thread ever needing to
> look at it again, a new cache will be created.
>
> Also, each thread's cache adapter has to be added to the same adapter lists
> of the same objects when threads are sharing a model. That still has to be
> protected by synchronization at the application level.

Indeed.

> Instead of thread-local cache adapters, can we build synchronization into
> the CacheAdapter to protect access to its internal caches and the inverse
> cross-referencer? Would there be enough time in this release if I were to
> volunteer to tackle that? Do we have a test suite with which to test such a
> solution against the thread-local adapters?

We could pursue such a solution, but we've run out of runway in Juno. Note that the support for thread-local cache adapters is optional (it has to be turned on with a system property) and, in a sense, orthogonal to how thread-safe the singleton is.

> Or, maybe the replication of caches isn't cause for concern? How common is
> the case of multiple threads sharing models compared to multiple threads
> working on separate models? How big is a cache adapter, anyway, in
> comparison with the size of the actual model content?

I don't have any data on this... perhaps others do?
> Instead of thread-local cache adapters, can we build synchronization into
> the CacheAdapter to protect access to its internal caches and the inverse
> cross-referencer?

I don't think a synchronization-based approach is viable for server-side applications, which want to be stateless and share nothing.

Christian, just curious: since the change in behavior is only enabled if the org.eclipse.uml2.common.util.CacheAdapter.ThreadLocal property is set to true (which is not the case for IDE-based usage), what concerns do you have with this approach?

> How common is the case of multiple threads sharing models compared to
> multiple threads working on separate models?

I'd say sharing models may make sense for desktop/IDE apps, but not for stateless server apps. These two scenarios are quite different and impose very different constraints.
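For server applications that cannot easily control JVM launch arguments, the option can presumably also be set programmatically at startup, provided this happens before any code touches the cache adapter. A hedged sketch (the bootstrap class is hypothetical, and note that the property name has two slightly different spellings in this thread, with and without the ".util" segment, so check the org.eclipse.uml2.common version you actually ship):

// Hypothetical server-side bootstrap; class and method names are placeholders.
public final class Uml2ServerBootstrap {

    private Uml2ServerBootstrap() {
    }

    public static void enableThreadLocalCacheAdapters() {
        // Must run before the cache adapter class is initialized, otherwise
        // the property has no effect on the already-chosen behavior.
        System.setProperty(
            "org.eclipse.uml2.common.util.CacheAdapter.ThreadLocal", "true");
    }

    public static void main(String[] args) {
        enableThreadLocalCacheAdapters();
        // ... start request handling; each worker thread then works against
        // its own cache adapter, matching a share-nothing design ...
    }
}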
In our IDE we encountered deadlocks when a background job and the UI thread were loading a model from the same file (the same problem as described in #332057), so we enabled the ThreadLocal option for the CacheAdapter, which solved those issues.

However, we want to load the model in a ProgressMonitorDialog and then use it in the UI thread. That leads to two different CacheAdapter instances, and the inverse cross-references are not resolved in the UI thread (the hand-off is sketched below). Any idea how to solve this? Does anyone know of a thread-safe CacheAdapter that we could use instead?
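For reference, this is the load-then-use pattern in question, as a hedged sketch (class name, file path and helper method are placeholders, not our actual code):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.jface.dialogs.ProgressMonitorDialog;
import org.eclipse.jface.operation.IRunnableWithProgress;
import org.eclipse.swt.widgets.Shell;

// With the ThreadLocal option enabled, the resource is loaded on the dialog's
// worker thread, so that thread's cache adapter holds the inverse
// cross-references, while the UI thread that later reads the model has its
// own, empty cache adapter.
public class LoadThenUseExample {

    private final ResourceSet resourceSet = new ResourceSetImpl();
    private Resource resource;

    public void loadAndUse(Shell shell) throws Exception {
        new ProgressMonitorDialog(shell).run(true /* fork: worker thread */, false,
            new IRunnableWithProgress() {
                public void run(IProgressMonitor monitor) {
                    // Loading here populates the *worker thread's* cache adapter.
                    resource = resourceSet.getResource(
                        URI.createFileURI("/path/to/model.uml"), true);
                }
            });

        // Back on the UI thread: this thread's cache adapter never saw the
        // load, so lookups relying on cached inverse cross-references come up empty.
        useModelInUi(resource);
    }

    private void useModelInUi(Resource loaded) {
        // ... UI code that navigates the model ...
    }
}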