Summary:          [performance] hotspot in org.eclipse.osgi.container.ModuleDatabase.Persistence.store()
Product:          [Eclipse Project] Equinox
Reporter:         Carsten Hammer <carsten.hammer>
Component:        Framework
Assignee:         equinox.framework-inbox <equinox.framework-inbox>
Status:           CLOSED WONTFIX
QA Contact:
Severity:         normal
Priority:         P3
CC:               alex.blewitt, Lars.Vogel
Version:          4.16
Target Milestone: ---
Hardware:         All
OS:               All
See Also:         https://git.eclipse.org/r/163858
Whiteboard:
Bug Depends on:
Bug Blocks:       563542
Attachments:      profile output jdt.ui start (attachment 283100)
Created attachment 283100 [details]
profile output jdt.ui start

The method org.eclipse.osgi.container.ModuleDatabase.Persistence.store() appears to be a performance hotspot. At startup of a plain jdt.ui run configuration, Eclipse spends 94 ms of CPU time in it (the remaining time is waiting time, e.g. for the hard disk). I was told that this method cannot (or should not) be called at startup, so maybe my test environment is not entirely valid for this investigation. Perhaps someone can suggest a better way to measure it.

One third of the CPU time is spent in ModuleDatabase$Persistence.addMap(): half of that third in java.lang.String.hashCode() and the other half in java.util.AbstractMap.equals(java.lang.Object). But see for yourself; I attach the profiling results. This suggests that the methods called from within store() and Storage.saveGenerations() should be fast. Maybe it is possible to get rid of the copies completely and use a kind of visitor pattern to collect and process the information (see the sketches below).

What I can also see is that some HashSet data structures have a high number of entries at the end. So maybe we can simply increase their initial size to at least reduce the pressure on the GC caused by the resize events.
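
To make the pre-sizing idea concrete, here is a minimal sketch (not the actual Equinox code; the expected element count and the helper name newHashSet are assumptions) of allocating a HashSet large enough up front so it never rehashes while the database is written out:

import java.util.HashSet;
import java.util.Set;

public class PresizedSetExample {

    // Returns a HashSet that can hold expectedSize elements without resizing,
    // accounting for the default load factor of 0.75.
    static <T> Set<T> newHashSet(int expectedSize) {
        int initialCapacity = (int) (expectedSize / 0.75f) + 1;
        return new HashSet<>(initialCapacity);
    }

    public static void main(String[] args) {
        // Instead of new HashSet<>() followed by repeated resize events,
        // allocate the backing table once with room for the expected entries.
        Set<String> ids = newHashSet(10_000);
        for (int i = 0; i < 10_000; i++) {
            ids.add("bundle.symbolic.name." + i);
        }
        System.out.println("entries: " + ids.size());
    }
}

And a hypothetical sketch of the visitor-style idea, where the database hands each record directly to the writer instead of building intermediate copies. ModuleRecord, ModuleVisitor, ModuleStore and persistTo are invented names for illustration and the sample data is arbitrary; this is only meant to show the direction, not a patch against the real Persistence/Storage code:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class VisitorStoreExample {

    // One record of module data; the fields are placeholders.
    static class ModuleRecord {
        final String symbolicName;
        final String version;

        ModuleRecord(String symbolicName, String version) {
            this.symbolicName = symbolicName;
            this.version = version;
        }
    }

    interface ModuleVisitor {
        void visit(ModuleRecord record) throws IOException;
    }

    static class ModuleStore {
        private final List<ModuleRecord> records;

        ModuleStore(List<ModuleRecord> records) {
            this.records = records;
        }

        // Walk the records once and let the visitor process each one in place,
        // so no intermediate copy of the whole database is built up.
        void accept(ModuleVisitor visitor) throws IOException {
            for (ModuleRecord r : records) {
                visitor.visit(r);
            }
        }

        void persistTo(DataOutputStream out) throws IOException {
            accept(r -> {
                out.writeUTF(r.symbolicName);
                out.writeUTF(r.version);
            });
        }
    }

    public static void main(String[] args) throws IOException {
        ModuleStore store = new ModuleStore(Arrays.asList(
                new ModuleRecord("org.example.bundle.a", "1.0.0"),
                new ModuleRecord("org.example.bundle.b", "2.1.0")));
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        store.persistTo(new DataOutputStream(buffer));
        System.out.println("persisted " + buffer.size() + " bytes");
    }
}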