Bug 360811 - Rework ZK Preferences Synchronization
Summary: Rework ZK Preferences Synchronization
Status: RESOLVED FIXED
Alias: None
Product: z_Archived
Classification: Eclipse Foundation
Component: gyrex
Version: unspecified
Hardware: All
OS: All
Importance: P3 enhancement
Target Milestone: ---
Assignee: Gunnar Wagenknecht CLA
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on: 360505
Blocks: 358210
Reported: 2011-10-13 08:59 EDT by Gunnar Wagenknecht CLA
Modified: 2018-03-19 11:59 EDT
CC List: 2 users

See Also:


Description Gunnar Wagenknecht CLA 2011-10-13 08:59:52 EDT
Cloud preferences is implemented based on ZooKeeperBasedPreferences, which is based on ZooKeeperBasedService. A disconnect and re-connect on a server with a fair amount of data saved in the preferences involves a lot of listeners as well as watches. ZooKeeperBasedService registers listeners with ZooKeeperGate. ZooKeeperBasedPreferences registers watches with ZooKeeper.

We might not be able to reduce the number of watches in ZooKeeper. However, we should investigate using just one singleton watcher for preferences synchronization. It would receive all watch events and batch the necessary model updates, which would then be processed asynchronously. Updates for children could be combined into a single tree update (which often happens anyway). Thus, a watch event storm would not result in duplicate refresh events.
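
A minimal sketch of what such a singleton watcher could look like, assuming a hypothetical PreferencesWatcher class and refreshTree helper (neither name is taken from the Gyrex code base); it only illustrates collecting watch events into a set of dirty paths and coalescing children into their closest dirty ancestor before refreshing asynchronously:

import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

public class PreferencesWatcher implements Watcher {

    // paths whose preference nodes need a refresh; a sorted set lets us
    // collapse children into their closest dirty ancestor
    private final Set<String> dirtyPaths = new ConcurrentSkipListSet<>();
    private final ScheduledExecutorService refreshExecutor =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    public void process(final WatchedEvent event) {
        final String path = event.getPath();
        if (path == null) {
            return; // connection state events are handled elsewhere
        }
        // only remember the path; the actual refresh happens asynchronously
        if (dirtyPaths.add(path)) {
            // small delay so a storm of events is coalesced into one pass
            refreshExecutor.schedule(this::refreshDirtyTrees, 200, TimeUnit.MILLISECONDS);
        }
    }

    private void refreshDirtyTrees() {
        String lastRefreshedRoot = null;
        for (final String path : dirtyPaths) {
            dirtyPaths.remove(path);
            // skip children already covered by a refreshed ancestor
            if (lastRefreshedRoot != null && path.startsWith(lastRefreshedRoot + "/")) {
                continue;
            }
            refreshTree(path); // hypothetical: re-reads the node and its children
            lastRefreshedRoot = path;
        }
    }

    private void refreshTree(final String path) {
        // placeholder for re-reading data/children from ZooKeeper and
        // updating the in-memory preference nodes
    }
}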

This work also needs to take DISCONNECT as well as SESSION_EXPIRED into account (bug 360505). Preferences may continue to work (except flush/sync) on DISCONNECT until SESSION_EXPIRED is triggered.
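
A hedged sketch of how the DISCONNECT/SESSION_EXPIRED distinction could be handled on top of the plain ZooKeeper watcher API; PreferencesConnectionMonitor and checkConnected() are made-up names for illustration, not the actual ZooKeeperGate integration:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.Watcher.Event.KeeperState;

public class PreferencesConnectionMonitor implements Watcher {

    private volatile boolean connected = true;

    @Override
    public void process(final WatchedEvent event) {
        if (event.getPath() != null) {
            return; // only interested in connection state events here
        }
        final KeeperState state = event.getState();
        if (state == KeeperState.Disconnected) {
            // keep cached preferences usable; remote operations must wait
            connected = false;
        } else if (state == KeeperState.SyncConnected) {
            connected = true;
        } else if (state == KeeperState.Expired) {
            // session is gone: local state may be stale; a reconnect and a
            // full refresh of the preference trees would be needed (not shown)
            connected = false;
        }
    }

    /** Called by flush()/sync(); plain reads do not perform this check. */
    void checkConnected() {
        if (!connected) {
            throw new IllegalStateException("ZooKeeper connection is down; flush/sync unavailable");
        }
    }
}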
Comment 1 Gunnar Wagenknecht CLA 2011-10-26 14:42:15 EDT
I committed a rework away from many listeners to a single client-side watcher implementation. I did not implement the batching and asynchronous refresh of nodes. However, it's now far easier to implement later *if* we realize that it has benefits we need.