| Summary: | [region] copy and replace should have a consistency check | | |
|---|---|---|---|
| Product: | [Eclipse Project] Equinox | Reporter: | Thomas Watson <tjwatson> |
| Component: | Components | Assignee: | Thomas Watson <tjwatson> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | | |
| Priority: | P3 | CC: | glyn.normington |
| Version: | 3.7 | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Mac OS X - Carbon (unsup.) | | |
| Whiteboard: | | | |
|
Description
Thomas Watson
I released a fix. Instead of trying to maintain a timestamp, which would require a global lock on the digraph to continually check and update it, I decided to create an extra copy with each digraph copy that is a snapshot of the state of the digraph at the time the copy was taken. This extra frozen copy is then checked on replace to make sure the current digraph is still equal to the frozen copy (i.e. the current digraph has not changed since the copy).

Comment 2 (Glyn Normington)
Taking another copy of the digraph incurs a cost proportional to the size of the digraph, both on copy and on check. Also, wouldn't it cause a memory leak if a caller of the API copies the digraph and then fails before replacing it? I would have thought updates to the digraph will be relatively infrequent, so a global lock would be relatively efficient and would avoid this memory leak.

Comment 3 (Thomas Watson)
(In reply to comment #2)
> Taking another copy of the digraph incurs a cost proportional to the size of
> the digraph both on copy and on check.

Yes, on copy we take a double copy for the check, but I was betting that the copy/replace operation would be rare. Is that not the case?

> Also, wouldn't it cause a memory leak if a caller of the API copies the
> digraph and then fails before replacing it?

If a caller fails to replace the digraph with their copy, there is no way to recover aside from taking another copy, performing the same operations on the new copy, and trying the replace again with it. I would have thought the old, failed copy would simply be thrown away for garbage collection; I don't consider that a leak.

> I would have thought updates to the digraph will be relatively infrequent, so a
> global lock would be relatively efficient and would avoid this memory leak.

I agree that updates to the digraph via copy/replace should be infrequent. This is why I did not introduce the global lock for all other operations. I am open to using some kind of stamp if you think it is simpler.
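The frozen-copy check being debated above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Equinox code: all names (`Digraph`, `Copy`, `connect`, `replace`) are invented, and the graph state is reduced to a simple map.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the frozen-copy consistency check described in the
// bug (all names invented; the real Equinox region digraph API differs).
final class Digraph {
    // toy representation of the graph's state
    private Map<String, String> edges = new HashMap<>();

    synchronized void connect(String from, String to) {
        edges.put(from, to);
    }

    // copy() hands back a working copy plus a second, frozen snapshot of the
    // origin taken at the same moment -- the "double copy" discussed above.
    synchronized Copy copy() {
        return new Copy(new HashMap<>(edges), new HashMap<>(edges));
    }

    // replace() succeeds only if the current state still equals the frozen
    // snapshot, i.e. the digraph has not changed since the copy was taken.
    synchronized boolean replace(Copy copy) {
        if (!edges.equals(copy.frozenOrigin))
            return false; // a concurrent update happened; caller must retry
        edges = copy.working;
        return true;
    }

    static final class Copy {
        final Map<String, String> working;      // the caller mutates this
        final Map<String, String> frozenOrigin; // never mutated; check only
        Copy(Map<String, String> working, Map<String, String> frozenOrigin) {
            this.working = working;
            this.frozenOrigin = frozenOrigin;
        }
    }
}
```

Note how this matches both comments: the check costs a full deep copy plus a full equality comparison (Glyn's point), while a failed `Copy` is simply dropped and garbage collected (Thomas's point).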
But when I looked at doing that, I was worried about deadlock between the global lock required when making a copy and the monitor locks required when updating the bundle ids in a region.

Comment 4 (Thomas Watson)
I discussed this with Glyn. While the deep double copy makes replace safe, it is still not ideal, and there is also a race between threads adding bundles to regions and threads performing copy/replace operations. We decided to go with a single monitor lock per digraph, shared by all regions belonging to that digraph. This eliminates the complication of maintaining a lock hierarchy between the region monitors and the digraph monitor, but it results in a coarse-grained single monitor for all update operations on the digraph and its regions. We felt this approach simplified the locking strategy and prevented possible deadlocks.

With this single monitor lock for the complete digraph we can now use a timestamp approach. Each digraph has two timestamps: 1) its current timestamp, and 2) the timestamp of the digraph it originated from (was copied from). On a copy operation, the copied digraph keeps a reference to the origin digraph along with the origin's timestamp at the time of the copy. This lets the replace operation do a timestamp check to make sure the origin has not changed between the time the copy was made and the time replace is called. I released these changes.
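The final design, a single shared monitor plus origin timestamps in place of the frozen deep copy, might look roughly like this. Again a hedged sketch with invented names (`StampedDigraph`, `Copy`), not the released Equinox implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the timestamp scheme (names invented): one monitor
// guards the whole digraph, every mutation bumps a timestamp, and a copy
// records a reference to the origin plus the origin's timestamp at copy time.
final class StampedDigraph {
    private final Object monitor = new Object(); // shared by all regions
    private long timestamp;                      // bumped on every mutation
    private Map<String, String> edges = new HashMap<>();

    void connect(String from, String to) {
        synchronized (monitor) {
            edges.put(from, to);
            timestamp++;
        }
    }

    Copy copy() {
        synchronized (monitor) {
            // no second deep copy: just remember the origin and its stamp
            return new Copy(this, timestamp, new HashMap<>(edges));
        }
    }

    boolean replace(Copy copy) {
        synchronized (monitor) {
            if (copy.origin != this || copy.originTimestamp != timestamp)
                return false; // origin mutated since the copy was taken
            edges = copy.working;
            timestamp++;
            return true;
        }
    }

    static final class Copy {
        final StampedDigraph origin;       // digraph this was copied from
        final long originTimestamp;        // origin's stamp at copy time
        final Map<String, String> working; // the caller mutates this
        Copy(StampedDigraph origin, long originTimestamp,
                Map<String, String> working) {
            this.origin = origin;
            this.originTimestamp = originTimestamp;
            this.working = working;
        }
    }
}
```

The replace check is now an O(1) comparison of two longs instead of a deep graph equality, and because every region shares the digraph's single monitor, the stamp can be read and bumped without any lock hierarchy.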