eclipse.buildId=4.7.0.I20161101-0800

java.lang.NullPointerException
    at org.eclipse.pde.internal.core.target.IUBundleContainer.generateBundle(IUBundleContainer.java:426)
    at org.eclipse.pde.internal.core.target.IUBundleContainer.generateResolvedBundles(IUBundleContainer.java:406)
    at org.eclipse.pde.internal.core.target.IUBundleContainer.cacheBundles(IUBundleContainer.java:305)
    at org.eclipse.pde.internal.core.target.IUBundleContainer.synchronizerChanged(IUBundleContainer.java:331)
    at org.eclipse.pde.internal.core.target.P2TargetUtils.notify(P2TargetUtils.java:804)
    at org.eclipse.pde.internal.core.target.P2TargetUtils.synchronize(P2TargetUtils.java:742)
    at org.eclipse.pde.internal.core.target.IUBundleContainer.resolveFeatures(IUBundleContainer.java:177)
    at org.eclipse.pde.internal.core.target.AbstractBundleContainer.resolve(AbstractBundleContainer.java:82)
    at org.eclipse.pde.internal.core.target.TargetDefinition.resolve(TargetDefinition.java:282)
    at org.eclipse.pde.internal.ui.preferences.TargetPlatformPreferencePage$1.run(TargetPlatformPreferencePage.java:357)
    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Sergey, is there a way to recreate this one?
It happened to me once and didn't happen again when I repeated the same action.
The only way this could happen is something very similar to http://stackoverflow.com/questions/21754098/java-hashmap-returning-null-on-get-call

A simple null check should fix this issue. Making it thread-safe could be overkill: it is just a cache that is being used here, and a null check should work without any potential problems.
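For illustration only, here is roughly what the risky containsKey/get lookup and the proposed null-check variant could look like; the class, field and key types below are simplified, hypothetical stand-ins, not the actual PDE code:

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    class CacheLookupSketch {
        // Plain HashMap shared between threads -- the unsafe part in this scenario.
        static final Map<String, Map<String, File>> cache = new HashMap<>();

        // Risky: containsKey() and get() are two separate calls, and an unsynchronized
        // HashMap that is modified by another thread can return null from get() even
        // though containsKey() just reported true -- the NPE seen in the stack trace.
        static File riskyLookup(String artifactKey, String repo) {
            if (cache.containsKey(artifactKey)) {
                return cache.get(artifactKey).get(repo);
            }
            return null;
        }

        // Proposed: fetch once and null-check before dereferencing.
        static File checkedLookup(String artifactKey, String repo) {
            Map<String, File> repoMap = cache.get(artifactKey);
            return repoMap != null ? repoMap.get(repo) : null;
        }
    }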
Created attachment 265828 [details] fix
*** Bug 509273 has been marked as a duplicate of this bug. ***
Using a ConcurrentHashMap or a synchronized block could solve this issue. However, in our case a null check is the best solution, since it is the fastest and achieves the purpose. The other alternatives could slow down the process.
(In reply to Vikas Chandra from comment #6)
> Using a ConcurrentHashMap or a synchronized block could solve this issue.
> However, in our case a null check is the best solution, since it is the
> fastest and achieves the purpose. The other alternatives could slow down
> the process.

The proposed patch does not fix the problem. As far as I see, the NPE is caused by

    P2TargetUtils.fgArtifactKeyRepoFile.get(artifactKey).get(repo)

because get(artifactKey) returns null.

To solve the NPE, you need to replace the containsKey/get pairs with this pattern:

    Object value = map.get(key);
    if (value != null) {
        // use value
    }

However, even this leaves the maps accessible from fgArtifactKeyRepoFile vulnerable. HashMaps aren't prepared for concurrent access and will throw random exceptions when accessed from multiple threads. Therefore, the maps should be replaced with thread-safe implementations. Or you may want to synchronize all access to these maps, in which case the containsKey/get pattern can remain unchanged.
Created attachment 265903 [details] updated NPE fix
Updated fix based on the previous comment. Between the containsKey check and the subsequent get call we could run into the same issue; that issue is fixed by this patch.

>> HashMaps aren't prepared for concurrent access and will throw random exceptions
>> when accessed from multiple threads.

However, a put and a get by different threads can still pose a problem (though not an NPE with this fix). Other options include using a ConcurrentHashMap, or synchronizing fgArtifactKeyRepoFile's get and put on top of this updated patch.
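As a rough sketch of the second option mentioned above (synchronizing the cache's get and put on a common lock), with simplified, hypothetical names rather than the actual patch:

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    class SynchronizedCacheSketch {
        private static final Map<String, Map<String, File>> cache = new HashMap<>();

        // All reads and writes go through the same lock, so a put from one thread
        // can never interleave with (and corrupt) a get from another thread.
        static File cacheAndGet(String artifactKey, String repo, File file) {
            synchronized (cache) {
                Map<String, File> repoMap = cache.get(artifactKey);
                if (repoMap == null) {
                    repoMap = new HashMap<>();
                    cache.put(artifactKey, repoMap);
                }
                repoMap.put(repo, file);
                return repoMap.get(repo);
            }
        }
    }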
I will investigate the performance impact of ConcurrentHashMap vs. HashMap for this issue before making a call, and will post the results on this bug.
(In reply to Vikas Chandra from comment #10)
> I will investigate the performance impact of ConcurrentHashMap vs. HashMap
> for this issue before making a call, and will post the results on this bug.

I think there is no need to investigate the performance impact. Using synchronized maps, or synchronizing the code that accesses the maps, is strictly necessary. Otherwise, seemingly random exceptions will keep occurring when P2TargetUtils.fgArtifactKeyRepoFile is accessed from multiple threads.

See also comment #7.
http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=efb882e5afae88d94765bcfa78c2ecee492bfa67

Thanks Rüdiger.

1) Replaced the containsKey/get pairs with a get and a null check (much simpler code).
2) Used a ConcurrentHashMap, which is thread-safe.

This should fix everything.
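Roughly the shape of that change, as a sketch; the generic types are simplified assumptions, the actual code is in the commit linked above:

    import java.io.File;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class ConcurrentCacheSketch {
        // Thread-safe outer map: its internal structure cannot be corrupted by
        // concurrent puts and gets the way a plain HashMap can.
        private static final Map<String, Map<String, File>> fgArtifactKeyRepoFile =
                new ConcurrentHashMap<>();

        // A single get plus null check instead of a containsKey/get pair.
        static File lookup(String artifactKey, String repo) {
            Map<String, File> repoMap = fgArtifactKeyRepoFile.get(artifactKey);
            return repoMap != null ? repoMap.get(repo) : null;
        }
    }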
(In reply to Vikas Chandra from comment #12)
> http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=efb882e5afae88d94765bcfa78c2ecee492bfa67
>
> 1) Replaced the containsKey/get pairs with a get and a null check (much simpler code).
> 2) Used a ConcurrentHashMap, which is thread-safe.
>
> This should fix everything.

The Map that is held in the values of fgArtifactKeyRepoFile is still unsynchronized.

Besides that, I would recommend using abstract types in declarations, e.g. prefer

    Map<IFileArtifactRepository, File> repoFile = new HashMap<>();

instead of

    HashMap<IFileArtifactRepository, File> repoFile = new HashMap<>();
    ^^^^^^^

This reads shorter and gives you the freedom to choose an implementation without affecting your clients.
Thanks, Rüdiger! I missed the inner map. Fixed via

http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=e9436258d99d5d8998404060e9728238bf72720e
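A sketch of what fixing the inner map amounts to: the value maps are also created as ConcurrentHashMap (again with simplified, assumed types, not the actual commit):

    import java.io.File;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class InnerMapSketch {
        private static final Map<String, Map<String, File>> fgArtifactKeyRepoFile =
                new ConcurrentHashMap<>();

        static void cacheFile(String artifactKey, String repo, File file) {
            Map<String, File> repoMap = fgArtifactKeyRepoFile.get(artifactKey);
            if (repoMap == null) {
                // The inner map must be thread-safe as well; a plain HashMap here
                // would reintroduce the original problem for concurrent callers.
                repoMap = new ConcurrentHashMap<>();
                fgArtifactKeyRepoFile.put(artifactKey, repoMap);
            }
            repoMap.put(repo, file);
        }
    }

Note that the check-then-put shown above is still not atomic, which is the loophole discussed in the next comment.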
(In reply to Vikas Chandra from comment #14)
> Thanks, Rüdiger! I missed the inner map. Fixed via
> http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=e9436258d99d5d8998404060e9728238bf72720e

I am afraid I found another loophole. If another thread sneaks in between determining the file variable and calling fgArtifactKeyRepoFile.put(), its data will be lost. Hence, the entire block, starting with File file = null down to fgArtifactKeyRepoFile.put(), needs to be synchronized.

The ConcurrentHashMap declarations of P2TargetUtils::fgArtifactKeyRepoFile could also be 'shortened' to Map.
>> If another thread sneaks in between determining the file variable and calling
>> fgArtifactKeyRepoFile.put(), its data will be lost.

You mean with the same key? In that case the worst-case scenario is an overwrite of that key's data, which we can probably avoid with putIfAbsent.

Since fgArtifactKeyRepoFile is supposed to cache the keys and values, the values are not expected to change (between the two competing threads), so an overwrite probably won't matter on the ConcurrentHashMap, but putIfAbsent would be better.

Please let me know your views on this.
(In reply to Vikas Chandra from comment #16)
> You mean with the same key? In that case the worst-case scenario is an
> overwrite of that key's data, which we can probably avoid with putIfAbsent.
>
> Since fgArtifactKeyRepoFile is supposed to cache the keys and values, the
> values are not expected to change (between the two competing threads), so an
> overwrite probably won't matter on the ConcurrentHashMap, but putIfAbsent
> would be better.
>
> Please let me know your views on this.

'Overwrite of data' is what I meant in my previous comment, that is, the previous data is lost thereby. I don't know whether that is acceptable in this situation. Anyway, putIfAbsent() and leaving the ConcurrentHashMaps in place seems like a decent solution to me and avoids the need to synchronize the entire block.
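A minimal sketch of the putIfAbsent() approach being agreed on here, again with simplified names and types rather than the actual commit:

    import java.io.File;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class PutIfAbsentSketch {
        private static final Map<String, Map<String, File>> fgArtifactKeyRepoFile =
                new ConcurrentHashMap<>();

        static File cacheFile(String artifactKey, String repo, File file) {
            // putIfAbsent() only inserts when no entry exists yet, so a value that a
            // competing thread has already cached is never overwritten and lost.
            Map<String, File> fresh = new ConcurrentHashMap<>();
            Map<String, File> existing = fgArtifactKeyRepoFile.putIfAbsent(artifactKey, fresh);
            Map<String, File> repoMap = existing != null ? existing : fresh;
            repoMap.putIfAbsent(repo, file);
            return repoMap.get(repo);
        }
    }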
Thanks, Rüdiger! Handled the corner case of a possible accidental overwrite via

http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=fa732c9a35958d46d517db34714196aab1a49127
(In reply to Vikas Chandra from comment #18)
> Thanks, Rüdiger! Handled the corner case of a possible accidental overwrite via
> http://git.eclipse.org/c/pde/eclipse.pde.ui.git/commit/?id=fa732c9a35958d46d517db34714196aab1a49127

Thanks, Vikas, for looking into and fixing this!
Thanks, Rüdiger, for your insightful comments!
Rüdiger/Sergey, can you please verify that this works for you now?
Never happened again after the fix.