| Summary: | Add Memory Dump Button | | |
|---|---|---|---|
| Product: | [Tools] MAT | Reporter: | Manuel Selva <manuel.selva> |
| Component: | GUI | Assignee: | Krum Tsvetkov <krum.tsvetkov> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | andreas.buchen, andrew_johnson, apupier, dominik.stadler, jcayne, mstorer3772, taras.tielkes |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Windows XP | | |
| Whiteboard: | | | |
| Bug Depends on: | | | |
| Bug Blocks: | 290338 | | |
| Attachments: | | | |
Description
Manuel Selva
+1 SAP ships its own JDK (a Sun derivative) which has yet another way to acquire heap dumps on demand. This would be an extension using this dialog.

Hi,
I checked out the MAT plug-ins and started to investigate the way we will have to implement this new feature. I didn't deeply investigate the plug-in architecture, but I think I'll start by adding an Extension Point (as suggested by Andreas ;o) to the org.eclipse.mat.ui plug-in. This extension point will require the following:
+ A name
+ Maybe an optional icon
+ A class implementing a given interface with one method looking something like: public Path acquireDump(Shell shell);

The UI plug-in will add a new drop-down Acquire Dump button to the toolbar. This drop-down will contain one entry for each extension to our new extension point. The entry name is the name given by the extending plug-in. When the user clicks on a given entry, the associated class is loaded and is asked for the path of the generated memory dump file using the acquireDump method. A shell is given here in order to leave to the client the responsibility for the dialog used to select the VM to create a heap dump from, since some "dumpers" may not be able to dump memory for all running VMs.

Feel free to comment/help me ;o)
Regards
Manu

Hi Manu,
I was wondering if the user really wants/needs to know which methods are used to acquire the heap dump. What do you think about this?
interface HeapDumpWriter {
List<VmInfo> getAvailableInfos();
void acquireDump(VmInfo info, File target, IProgressListener listener) throws Exception [A]
}
final class VmInfo {
int pid;
String description;
boolean canDump; [B]
VmInfo source;
}
<heapDumpWriter class="x.y.z.HotspotsMBean" />
Now, if the user clicks the acquire button, all HeapDumpWriters are asked for the available VMs. The list is displayed in a dialog. The user selects one and the HeapDumpWriter is asked to write it to the target file/directory.
[A] The exception is needed because the VM may be terminated between listing the VMs and selecting one to dump.
[B] The boolean flag would indicate the VM is visible but the heap cannot be dumped. It's similar to JConsole: the user knows we found the VM but there is a reason he cannot dump the heap.
Andreas.
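A minimal sketch of how the UI side might aggregate the registered writers, assuming the HeapDumpWriter and VmInfo types sketched above and MAT's IProgressListener; the AcquireDumpAction class and its method names are made up for illustration:

```java
import java.io.File;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical UI-side helper: collects the VMs reported by every registered
// HeapDumpWriter so a single, writer-agnostic list can be shown in the dialog.
public class AcquireDumpAction {

    private final List<HeapDumpWriter> writers; // filled from the extension registry

    public AcquireDumpAction(List<HeapDumpWriter> writers) {
        this.writers = writers;
    }

    // Ask every writer for its VMs and remember which writer reported each one.
    public Map<VmInfo, HeapDumpWriter> collectAvailableVms() {
        Map<VmInfo, HeapDumpWriter> result = new HashMap<VmInfo, HeapDumpWriter>();
        for (HeapDumpWriter writer : writers) {
            for (VmInfo info : writer.getAvailableInfos()) {
                result.put(info, writer);
            }
        }
        return result;
    }

    // Called after the user picked a VM in the dialog.
    public void dumpSelected(VmInfo selected, Map<VmInfo, HeapDumpWriter> vms,
            File target, IProgressListener listener) throws Exception {
        vms.get(selected).acquireDump(selected, target, listener);
    }
}
```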
Hi Andreas,

First, I totally agree with you concerning the user's point of view. He doesn't care about the method used to get the dump.

Then I have one question about your suggestion from the implementation side. Let's say we have two plug-ins extending our extension point:

<heapDumpWriter class="x.y.z.HotspotsMBean" />
<heapDumpWriter class="x.y.z.JMap" />

How will we be able to present the available VmInfos to the user? These two implementations will certainly return different VmInfos related to the same JVM. We mustn't present these "different" infos to the user but have to give the user one entry per JVM, no?

Maybe we can compare PIDs ... but it's quite unpredictable since they come from the client's implementation and may not be exactly the VM process id ... ? If we suppose we can rely on these PIDs, then when building the list of available VMs to dump we just use the first available way to dump this VM if one exists, otherwise we present it to the user but grayed out (as in JConsole, as you said).

Thanks for your help
Manu

(In reply to comment #4)
> Maybe we can compare PIDs ... but it's quite unpredictable since they come from the client's implementation and may not be exactly the VM process id ... ?

Why not require the heap dump writers to return the PID given by the operating system to the VM? I think jps/jmap and HotspotDiagnostic work that way.

> If we suppose we can rely on these PIDs then when building the list of available VMs to dump we just use the first available way to dump this VM if one exists, otherwise we present it to the user but grayed out ..

I didn't think about this yet... The user will get a heap dump but cannot decide which technical method is used in the background. I think that is OK. If we find out later that there are significant technical differences, we could add a "priority" attribute to the extension point.

Manu, thanks for your input!

(In reply to comment #5)
> Why not require the heap dump writers to return the PID given by the operating system to the VM? I think jps/jmap and HotspotDiagnostic work that way.

Ok, I will start this way.
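A sketch of the PID-based merging agreed on above, assuming every writer reports the operating-system process id in VmInfo.pid; the VmInfoMerger helper is made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper: merge the VmInfo lists returned by several writers into
// one entry per JVM, keyed by OS process id. An entry that can actually be
// dumped is preferred; otherwise the VM is still listed, just grayed out.
public final class VmInfoMerger {

    public static Map<Integer, VmInfo> mergeByPid(List<List<VmInfo>> perWriterResults) {
        Map<Integer, VmInfo> byPid = new LinkedHashMap<Integer, VmInfo>();
        for (List<VmInfo> infos : perWriterResults) {
            for (VmInfo info : infos) {
                VmInfo existing = byPid.get(info.pid);
                if (existing == null || (!existing.canDump && info.canDump)) {
                    byPid.put(info.pid, info); // keep the first writer that can dump this VM
                }
            }
        }
        return byPid;
    }
}
```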
> I didn't think about this yet... The user will get a heap dump but cannot decide which technical method is used in the background. I think that is OK. If we find out later that there are significant technical differences, we could add a "priority" attribute to the extension point.

Ok

Manu

Hi,

Such a feature was requested already several times through different channels (bugzilla, mailing list, etc...) and we feel it is important to provide it. I found the discussions here, which happened more than one year ago, very helpful and I would like to continue with the implementation. Having an extension point for the functionality should make it easier to also enable dumps from IBM VMs at a later time. (I added Andrew to CC.)

But before I start doing anything, I wanted to ask if there was any progress on this since the last discussion, or if I should start at the place the discussion finished.

Hi,

Unfortunately I didn't progress at all on this enhancement since it's not "directly" related to my work ... Shame on me :-( I started some investigations in my personal time but nothing relevant enough to be contributed. It will be faster for you to start from scratch. Thanks for taking over the implementation of this feature.

Regards,
Manu

(In reply to comment #7)

No problem. Thanks for the suggestion and the discussions! They are already helpful. I'll start working on the implementation then, and I'll keep this bug updated.

Created attachment 153051 [details]
patch containing the definition of the extension point
Created attachment 153052 [details]
patch containing a jmap based implementation of the IHeapDumpProvider
Created attachment 153053 [details]
UI components for acquiring the heap dumps
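For illustration, roughly how a jmap-based provider like the one in the attachment above could trigger a dump for a chosen PID; this is a hedged sketch, not the attached patch, and the class and method names are invented:

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of a jmap-based dump: spawn
// "jmap -dump:format=b,file=<target> <pid>" and wait for it to finish.
// Passing the arguments as a list avoids shell quoting issues with spaces in paths.
public class JmapDumpSketch {

    public static File dumpHeap(File jdkBinDir, int pid, File target)
            throws IOException, InterruptedException {
        String jmap = new File(jdkBinDir, "jmap").getAbsolutePath();
        ProcessBuilder pb = new ProcessBuilder(
                jmap,
                "-dump:format=b,file=" + target.getAbsolutePath(),
                String.valueOf(pid));
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process p = pb.start();

        // Drain the output so the child process cannot block on a full pipe.
        InputStream in = p.getInputStream();
        byte[] buf = new byte[4096];
        while (in.read(buf) != -1) { /* discard */ }

        int exitCode = p.waitFor();
        if (exitCode != 0 || !target.exists()) {
            throw new IOException("jmap failed with exit code " + exitCode);
        }
        return target;
    }
}
```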
I have created an initial implementation of the feature. I haven't submitted it yet, it's just attached here as a patch, or more precisely 3 patches:
- org.eclipse.mat.api.patch - contains the definition of the extension point and the interface to be implemented
- org.eclipse.mat.hprof.patch - contains a jps + jmap based implementation. I tested it so far on Windows and Ubuntu Linux; Andreas promised to give it a try on a Mac :)
- org.eclipse.mat.ui.patch - a couple of classes collecting the registered extensions and providing the UI for acquiring the heap dumps. In the File menu the "Acquire Heap Dump ..." entry should appear.

Feedback is highly appreciated!

One problem I had with the jmap implementation is that right now I just call "jmap ..." and it picks up the one from the JDK MAT runs on. When I then tried to get a dump from a 64-bit process while MAT itself was running as a 32-bit process, I ran into errors. We need to either have some (global) preferences for this, or, before triggering the dump, suggest a default jmap to be used and leave the user the opportunity to change it...

@Andrew: Can you please have a look and see if this interface will be enough to also get dumps from an IBM VM? Or will we need to come up with a way to provide parameters (similar to my jmap issue)?

The IBM VMs don't have a direct equivalent of jps and jmap. The Java Attach API (available with Java 6 SR6) could be useful to generate dumps. http://publib.boulder.ibm.com/infocenter/javasdk/v6r0/index.jsp?topic=/com.ibm.java.doc.user.lnx.60/user/attachapi.html Inside of Eclipse it might be possible to find running VMs through Eclipse APIs.

This looks great! For the Heap Dump Providers, it might be useful to have a description so similar provider nodes could be grouped in the acquire heap dump wizard. This would allow, for example, grouping by JVM vendor (e.g. Sun, SAP, IBM, etc.). In the VmInfo, it might also be useful to have a long description so additional information about the application could be provided. For example, hovering on the description could provide the qualified class name for the method shown. With regards to finding the JVM used by an Eclipse, perhaps org.eclipse.debug.core.Launch might be of use as it can return a list of processes.

I just committed the initial implementation (which I had attached as patches some time ago). I hope it is now easier for others to try it out. It definitely needs further work to get it all correct, e.g. for the moment I'm just calling 'jps' and 'jmap', and if they are not found then the user has no chance to specify their location. I will come up with some way to provide properties to the heap dump provider. @Joel - thanks for your comments. I'll try to include these suggestions.

I'm having some success using the IBM attach API to generate dumps.

1. The com.ibm.tools.attach API is IBM VM specific and will only run on IBM VMs. It also needs a separate jar to load into the target VM. I therefore think it is better to put all the attach code into a separate plug-in from the DTFJ plug-in. The DTFJ plug-in can then be compiled and run from a non-IBM VM provided the DTFJ feature is available. The IBM attach plug-in would need to be run with MAT started with an IBM VM. Is that a reasonable restriction?

2. Acquire Dialog: probably shouldn't have 'dialog' in the title. The '...' in the specify-folder field is not very helpful. Use a more direct action rather than 'Finish'.

3. IBM VMs have a choice of dump type, e.g. system dump or heap dump. Should this choice be given to the user?
The only way now is to change the description. I think the provider name (from the Eclipse extension or from the com.ibm.tools.attach.AttachProvider) should be a separate field, as should the dump type.

4. The VirtualMachineDescriptor and additional data, e.g. the dump type, are needed to generate the dump. How should this be saved from the list query and passed back to generate the dump? VmInfo is final, so it can't be extended, and there is no way to attach dump provider data to it. The only way is via VmInfo.setHeapDumpProvider, where the data could be saved in different implementations of IHeapDumpProvider. Is that the intended way of using the API?

5. The process id of VmInfo is an integer. VirtualMachineDescriptor.id() is a String, though it does seem to be an integer so it can be converted. Is an int always suitable for a process identifier?

6. Should IHeapDumpProvider.acquireDump() throw a general Exception, or should it throw SnapshotException and rely on the provider to understand its generated exceptions?

*** Bug 290338 has been marked as a duplicate of this bug. ***

Comments from 290338:

Mark Storer 2009-09-23 19:47:11 EDT
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)
Build Identifier: 3.4.2

The summary pretty much says it all. It would be nice to be able to (for example) right-click on a process in the debug perspective and click on "Analyse Memory" to whip up a memory analysis from it right then and there. I'm not at all experienced with plug-in development in/for Eclipse, so I don't know if this would be an enhancement within the debug perspective itself, or if MAT could insert something into other components' menus.

Reproducible: Always

Steps to Reproduce:
JConsole/jmap are fine, but IDEs are all about convenience/development speed. If you can turn "5 clicks, type in a path, 2 clicks, browse-to-the-dump-file, click" into "click click", I for one would be a Happy Camper. If you could automagically catch the output of -XX:HeapDumpOn* running in Eclipse's debugger and open it, that'd be Quite Awesome as well.

(In reply to comment #17)
> 1. The com.ibm.tools.attach API is IBM VM specific and will only run on IBM VMs. [...]

I think this will complicate things a little. But to be honest I don't know how users analyzing IBM dumps start MAT - if they use the IBM VM or something else. Assuming that we do the dumps only from the same box (which is the case) should mean that there is an IBM VM on the box, and the user just has to reconfigure MAT. Could be OK. We have to figure out how to compile this adapter in the Eclipse build environment.

> 2. Acquire Dialog: probably shouldn't have 'dialog' in the title. [...]

I submitted some changes today - renamed the dialog to "Acquire Heap Dump", renamed the button to "Browse..." and fixed the broken layout. The "Finish" comes from the Wizard; I personally don't know how to change it.

> 5. The process id of VmInfo is an integer. VirtualMachineDescriptor.id() is a String, though it does seem to be an integer so it can be converted. Is an int always suitable for a process identifier?

On all OSs I've seen the PID was always an integer, but I can't claim there is no OS where the PID is something else. Having an integer will allow us to sort properly by PID (the table can't be sorted in any way still). I don't see another advantage at the moment of having an int instead of a String. If you think having an int is a limitation and the sorting is not enough as an argument, we can change the type.

> 3. IBM VMs have a choice of dump type, e.g. system dump or heap dump. Should this choice be given to the user? [...]
> 4. The VirtualMachineDescriptor and additional data, e.g. the dump type, are needed to generate the dump. How should this be saved from the list query and passed back to generate the dump? [...]

I am thinking right now of following the concept we have for IQuery - whatever information is needed for the heap dump provider should be declared via annotations and injected from outside. I guess we could reuse:
- the @Argument annotation to pass things like the dump type (IBM), the location of a proper JDK with jmap, etc...
- @Name to pass the provider name
- @Help to give some help, etc...

I was hoping to reuse the ArgumentsWizard we have for queries, but it seems to be tightly coupled to the query context. What do you think about reusing the annotations? If we decide on this idea, I will check if some parts of the ArgumentsWizard could be separated and reused.

> 6. Should IHeapDumpProvider.acquireDump() throw a general Exception, or should it throw SnapshotException and rely on the provider to understand its generated exceptions?

I guess a SnapshotException is better. This will "force" the provider to do some exception handling. Shall I change the exception type?

I've checked in a new project, org.eclipse.mat.ibmdumps, to generate dumps from IBM VMs. It's in a different project as it has a dependency on com.ibm.tools.attach.*, which is available from recent IBM VMs. This means it won't compile with a Sun compiler, but it is unlikely that a developer would want to modify this plug-in without access to an IBM VM. I'm hoping it will compile with Athena CBI as that uses an IBM VM. At run time it tries to call the com.ibm.tools.attach.* APIs directly, but if this fails (e.g. if Memory Analyzer is run with a Sun VM or a backlevel IBM VM) then a small utility jar is built and run with a user-selected IBM VM as a separate process (rather like jps/jmap). We may need a CQ for a works-with dependency on the IBM VM. The VM level on the Eclipse build machines is quite old, so we get errors:
java.fullversion=J2RE 1.5.0 IBM J9 2.3 Linux ppc-32 j9vmxp3223-20071007 (JIT enabled)
[javac] 2. ERROR in /opt/users/hudsonbuild/.hudson/jobs/cbi-mat-nightly/workspace/build/N201003091224/eclipse/plugins/org.eclipse.mat.ibmdumps/src/org/eclipse/mat/ibmvm/acquire/IBMDumpProvider.java (at line 32)
[javac] import com.ibm.tools.attach.AgentInitializationException;
[javac] ^^^^^^^^^^^^^
[javac] The import com.ibm.tools cannot be resolved
[javac] ----------
[javac] 3. ERROR in /opt/users/hudsonbuild/.hudson/jobs/cbi-mat-nightly/workspace/build/N201003091224/eclipse/plugins/org.eclipse.mat.ibmdumps/src/org/eclipse/mat/ibmvm/acquire/IBMDumpProvider.java (at line 33)
[javac] import com.ibm.tools.attach.AgentLoadException;
[javac] ^^^^^^^^^^^^^
[javac] The import com.ibm.tools cannot be resolved
Don't worry about these errors:
[javac] 30. ERROR in /opt/users/hudsonbuild/.hudson/jobs/cbi-mat-nightly/workspace/build/N201003091224/eclipse/plugins/org.eclipse.mat.ibmdumps/src/org/eclipse/mat/ibmvm/acquire/IBMSystemDumpProvider.java (at line 32)
[javac] String agentCommand() {
[javac] ^^^^^^^^^^^^^^
[javac] Cannot reduce the visibility of the inherited method from IBMDumpProvider
https://bugs.eclipse.org/bugs/show_bug.cgi?id=298238
I think we'll have to exclude ibmdumps from the feature for the moment.
The IBM dump provider now compiles on Athena CBI: https://build.eclipse.org/hudson/view/Athena%20CBI%20%28SVN%29/job/cbi-mat-nightly/116/

The build is a little unusual. The code relies on the com.ibm.tools.attach APIs, which aren't always present. The code is split into a src/ directory (normal) and src2/ (needs com.ibm.tools.attach). There is a bin/ directory (output from src/) and classes/ (precompiled versions of the results from src2/). The PDE build only compiles src/; bin/ and classes/ are packaged. At run time classes are found from bin/ and, failing that, classes/. If files in src2/ are changed then this must be done on a developer's machine with an IBM VM installed. The compiled class files from src2/ get put in bin/; these should then be copied into classes/ and checked into SVN. At run time, if com.ibm.tools.attach isn't available then the necessary classes are packaged into a temporary jar and an IBM Java 6 VM is started to do the necessary work.

Krum has updated the acquire code to add annotations. All HeapDumpProviders now need to be registered as extensions. The IBM dump code used to generate different providers for the VmInfo and the acquireDump than those provided to getAvailableVMs; this had to change. There are now 4 providers: IBMHeapDumpProvider, IBMSystemDumpProvider, IBMExecHeapDumpProvider and IBMExecSystemDumpProvider. The first two make direct use of the attach API. The second two use a separate IBM Java 6 process; they only return VMs if the first two fail. There is the possibility of just having 2 providers (exec and non-exec) and selecting the dump type (heap/system) using an enumeration argument.

For the exec providers an IBM JVM has to be found. This needs to be done at the getAvailableVMs stage (rather like jps). The VM for the dump generation stage can be the same, so there isn't much advantage in selecting it. If it were selectable, then the provider would update the VM File when finding VMs, ProviderArgumentsTable.processSelected would need to update the defaults based on the selected provider, and FileOpenDialogEditor would set the default based on the table, not the last directory or user.dir.

Andrew, you ran into several problems with my proposed solution. Because the IHeapDumpProvider is instantiated by the UI, you have no suitable way to pre-configure it. Additionally, the annotations are currently used to set some parameters only after the list of VMs is returned. This has the disadvantage that you can't "ask" for some configuration needed to display the list. I also ran into this recently - there is no way currently to specify the VM which contains the proper jps. Therefore I suggest the following changes. Please have a look and comment on them:

1) Redesign the extension point: the extension could be an IHeapDumpProvider factory instead of the IHeapDumpProvider itself. This way the providers will be instantiated by the factory (e.g. by the plug-in providing the extension) and the providers can be initialized in a proper way.

2) Change the semantics of a field within IHeapDumpProvider marked with the @Argument annotation: currently such fields are set (via GUI) once a concrete VM process is selected. I suggest that we use fields in IHeapDumpProvider annotated with @Argument to mark data which is needed for the IHeapDumpProvider to return the list of VMs. Having 1), you will be able to fill in some reasonable defaults, and still, if some information is missing (argument left null), the user can fill it in. The GUI code will move out of your adapter.
3) Allow annotating fields in VmInfo with @Argument. This should mean that before executing the real trigger operation the GUI should take care that all mandatory arguments are in place, i.e. what is currently done using the IHeapDumpProvider fields. VmInfo is not final any longer, so one can extend it, put some more fields inside and mark them as @Argument if the configuration GUI should open.

Do you think this will be flexible enough to solve the current issues?

1. For IConfigurationElement the documentation says: if the specified class implements the IExecutableExtensionFactory interface, the method IExecutableExtensionFactory.create() is invoked,
so I think factories are already available.
2. That may be enough.
E.g. for IBM VMs there are two main modes:
i. the current VM is a Java 6 SR6 or later, so it can directly invoke the attach API;
ii. the current VM is not, so we need to find a suitable helper IBM VM (or specify that none is available).
So the steps would be:
a. Call the IBM factory.
b. The factory decides if the attach API is available and returns a direct HeapDumpProvider, or an indirect HeapDumpProvider - using a guess as to a suitable helper VM, or a saved version from last time?
c. Populate the provider with user arguments for the indirect HeapDumpProvider path to the helper VM.
d. MAT calls the provider to get the list of VMs - what happens if the helper VM is unavailable? Should getAvailableVMs have a listener to show a progress bar, or log errors?
Save the suitable helper VM in the VmInfo.
e. If an error occurs - does it just return null, an empty list, or an exception?
f. The user decides on a VM.
g. Populate the VmInfo from user input - should the user decide between system and heap dump at this point? Should the user have an option to change the helper VM?
h. Call the HeapDumpProvider to generate the dump.
3. As VmInfo is no longer final, should the signature be:
public List<? extends VmInfo> getAvailableVMs()
or
public List<? extends VmInfo> getAvailableVMs(IProgressListener listener)
Is there a security problem allowing arbitrary files to be executed? I guess no more so than the file open dialog, which also allows files to be executed.
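A rough sketch of the factory idea from point 1), using Eclipse's IExecutableExtensionFactory; the factory and the two nested provider classes are placeholders invented for illustration, and the attach-API check simply probes for a class named in this discussion:

```java
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IExecutableExtensionFactory;

// Hypothetical factory registered in the extension point instead of a concrete
// provider. It decides at creation time whether the attach API can be used
// directly or whether a helper-VM based provider is needed.
public class DumpProviderFactory implements IExecutableExtensionFactory {

    public Object create() throws CoreException {
        if (attachApiAvailable()) {
            return new DirectAttachDumpProvider();
        }
        return new HelperVmDumpProvider();
    }

    private boolean attachApiAvailable() {
        try {
            // Probe for the IBM attach API mentioned above; illustrative only.
            Class.forName("com.ibm.tools.attach.AttachProvider");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    // Placeholder provider types for the sketch; the real ones would implement
    // the IHeapDumpProvider interface discussed in this bug.
    static class DirectAttachDumpProvider { }
    static class HelperVmDumpProvider { }
}
```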
I have submitted some changes as discussed above. I changed the signatures of both methods on IHeapDumpProvider. Now both methods get an IProgressListener as a parameter and may throw a SnapshotException:

public List<? extends VmInfo> getAvailableVMs(IProgressListener listener) throws SnapshotException;
public File acquireDump(VmInfo info, File preferredLocation, IProgressListener listener) throws SnapshotException;

Additionally, I changed the UI so that it asks for the parameters annotated in the IHeapDumpProvider before the list of VMs is requested. Any dump/process specific parameters can be annotated in the VmInfo subclasses. Let me know if the changes work fine for you.

I've updated the IBM dump provider to match the new API, and it works as-is. There are still a few minor problems:
1. A help annotation is needed for the provider or a null pointer exception results, which is hard to debug.
2. The dump file name needs a parent directory or an exception results, which is hard to debug.
3. The file dialog fields are not prepopulated with the values in the provider/vminfo object.
4. The Acquire Heap Dump Dialog lists the arguments for the provider, but not the provider description.
5. The configure heap dump provider panel has no title.
6. The configure heap dump provider panel has the help for the provider, but not the arguments, which remain on the Acquire Heap Dump Dialog. I think help for the arguments would be more useful below the configure panel. The description might be useful too, although it is already in the upper part, as long descriptions get lost.
7. The Heap Dump Provider arguments panel says 'vminfo'. Why? Is that meant to be the name for the VmInfo?
8. The help for the VmInfo doesn't appear above the arguments.
9. The configure heap dump provider panel only selects the name column in the row, not the whole row, which is more usual for a read-only table.
10. If you select the IBM dump (using helper VM) provider, see the SYSTEM/HEAP and javaexecutable option, then go back, then select the HPROF jmap dump provider, then this provider gets a spurious SYSTEM/HEAP option left over from the IBM dump.
11. Selecting different providers in the Acquire Heap Dump Dialog sometimes appends the file name to the folder field, rather than replacing it.
12. The dialog says to specify a 'folder' but actually seems to take the file. Which is it?
13. Would a FileFieldEditor simplify any of the panels?

Thanks for taking the time to provide this detailed list! I'll work over the points.

Hi,
I tried to acquire a heap dump with this version: MemoryAnalyzer-Incubation-0.8.0.20100408-win32.win32.x86. I'm using Sun JVM 1.6u18. I read the comments quickly and it seems that it is working only for the IBM JVM, is that right? If yes, when is it planned to make it work for other VMs (especially Sun ^^)? If it should work for the Sun VM, I can provide a log.
regards,

Hi,
It should be possible to trigger a heap dump also from a Sun VM. The functionality is still in development, but it should already be possible to get dumps. With the preview version currently on our download page, what you have to do is:
1) Run MAT with a JDK, not a JRE. You can specify it in the MemoryAnalyzer.ini file by adding -vm <path to \bin folder within JDK>. The JDK is needed in order to find the jps and jmap executables.
2) When the popup for choosing an IBM VM appears - just close it, click Abort.
If everything works fine you should see a list of processes and be able to trigger a dump.
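To make the revised contract concrete, a skeleton provider against the two signatures quoted above might look roughly like this; the class name and the jdkHome argument are invented, and the MAT package names in the imports are assumptions rather than verified references:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// MAT types discussed in this bug; the package names here are assumptions.
import org.eclipse.mat.SnapshotException;
import org.eclipse.mat.query.annotations.Argument;
import org.eclipse.mat.snapshot.acquire.IHeapDumpProvider;
import org.eclipse.mat.snapshot.acquire.VmInfo;
import org.eclipse.mat.util.IProgressListener;

// Hypothetical provider skeleton: the @Argument field is asked for by the UI
// before getAvailableVMs() is called, as described above.
public class ExampleDumpProvider implements IHeapDumpProvider {

    @Argument
    public File jdkHome; // e.g. which JDK's jps/jmap to use (invented argument)

    public List<? extends VmInfo> getAvailableVMs(IProgressListener listener) throws SnapshotException {
        List<VmInfo> result = new ArrayList<VmInfo>();
        // ... discover running VMs (e.g. via jps under jdkHome) and add one VmInfo per process ...
        return result;
    }

    public File acquireDump(VmInfo info, File preferredLocation, IProgressListener listener)
            throws SnapshotException {
        // ... trigger the dump for the selected VM and return the file that was written ...
        return preferredLocation;
    }
}
```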
Meanwhile we have made some further changes - one can configure the Sun JDK with jps/jmap without starting the tool with it, i.e. step 1) above is not needed any longer. One has to click "Configure ..." and then configure the HPROF dump provider. The popup from step 2) is also gone.

You can also try updating to our latest stable build: https://build.eclipse.org/hudson/job/cbi-mat-nightly/lastSuccessfulBuild/artifact/ then navigate to the Nxxxxxxx folder. The file MemoryAnalyzer-Incubation-Update-Nxxxxxxxxxxxxx.zip contains a zipped update site. Or you can wait a bit more for our next preview downloadable.

Thanks for trying the feature out. We'll be happy to get some early feedback!

Ok, I nearly got it working with my version and with HEAD of the project (after removing the plug-in for IBMdumps). The .hprof is generated, but when I click Finish an error dialog pops up:
"Head dump file was not created. jmap exit code = 0 stdout: Dumping heap to F:\java_pidXXX.hprof ... Head dump file created"
BTW the dump is correct, I can analyse it :) There is no stack in the .log, just an INFO to tell that it runs the jmap command.

Another thing that I noticed: the default name of the dump file begins with "/". I'm running on Windows, so I wondered where it would create it, and in fact it created it at the root of my system. This can be disconcerting for people on Windows.

> We'll be happy to get some early feedback!
you're welcome :)

I've fixed 4, 7, 8, 9 of my list in comment #28.

Andrew, I merged your changes from today and also fixed the following:
- 1, 5, 10 from comment #28
- externalized all newly introduced strings
Is there anything open from your side? I will now look into the comments from Aurelien (comment #32).

I've fixed 2 & 3. 6 was fixed by my changes last night. I'll work around 11 by giving a file name only for IBM dumps. The name can change if the dump type changes, but the name is based on the dump type in IBMVmInfo, and IBMVmInfo only gets updated when the dump starts; before then just the ArgumentSet is updated. Perhaps the IBM dump provider should have a default type too, which is copied into the IBMVmInfo.

(In reply to comment #32)
I submitted some changes which should solve both problems - if the user has not made any selection for the directory to save the heap dump, then the user home is proposed instead of nothing. I think the other problem you reported was also related to this one. The changes will be available on our nightly build by tomorrow, I guess... I also changed the jmap heap dump provider to ask only for a JDK directory and not for the full path to jps or jmap.

I've made updates to the IBM dump provider. The IBM dump provider now takes account of the supplied destination directory/file. The destination directory is used as a location to copy the results of getting the dump to. The supplied destination file name is used, assuming the file extension matches that for the dump type; otherwise the default for the type is used. The date/timestamp from the actual file is substituted into the supplied file if they are of the same format, i.e. heapdump.yyyyMMdd.HHmmss.<pid>.<seq>.phd, core.yyyyMMdd.HHmmss.<pid>.<seq>.dmp.zip, etc. The Heap Dump arguments panel now has a compress option as well as the dump type. This generates .phd.gz for heap dumps, and .dmp.zip for system dumps. The heap dump provider configuration has a default option for dump type and for compression. This saves setting up the options for each dump.

Oddities:
- If the dump file name is changed from the default, and then the dump type/compression is changed on the next page, then the default file name will be used if the supplied file extension is wrong.
- Javacore dumps do not open directly from the acquire dialog (though opening them afterwards does work).

Andrew, I have now also tested getting dumps from an IBM VM and it works fine. I think we now have a good new feature and I don't see any immediate changes needed. Let's see if there is some feedback from users.

HeapDumpProviderRegistry and VmInfoDescriptor have strings that need to be internationalized. VmInfo.isHeapDumpEnabled() is never called, so disabling an instance of VmInfo does not work. Fortunately the current IBM and HPROF providers don't disable any.

I externalized the strings. About the isHeapDumpEnabled() method - shall we fully remove it? Initially the thinking was that we would be able to know in advance if dumping the heap would be possible. I haven't found a way to do it, and for the IBM provider it is also not needed. What do you think? I made another change for the jmap provider - now the heap dump file name in the jmap command line is surrounded with quotes. Without this, paths with spaces were causing trouble.

On Linux systems with the IBM system dumps there is sometimes a message saying it cannot find the 'core' file. This may happen because the VM generates a file called 'core' and then renames it to core.date.time.pid.seq.dmp. To avoid this we need to remove files which cease to exist from the new-files list.

getHeapDumpEnabled on VmInfo is now used - see Bug 533915.

I think this bug is now fixed. Marking as fixed - any small enhancements should be done under a new work item.