You can see this in the console log for the test, as created during "performance runs", such as

http://download.eclipse.org/eclipse/downloads/drops4/S-4.5M4-201412102000/baseline/linux.gtk.x86_64_8.0/org.eclipse.pde.api.tools.tests.ApiToolsPerformanceTestSuite.txt

It says:

Error occurred during initialization of VM
Initial heap size set to a larger value than the maximum heap size

And the test.xml for the org.eclipse.pde.api.tools.tests bundle states:

<property name="vmargs" value="-Xmx300M" />

It does this for both unit tests and performance tests. So my guess is that for the performance tests, "the test framework" sets the minimum/maximum heap to some larger values by default. [Which was probably something I added ... though I thought I had removed it?]

So, either
a) the performance framework has to change to not specify anything for -Xms, or
b) any test that sets -Xmx should also set -Xms.

It seems "b" would always be a good idea, even if I make some change in the performance framework. Any advice or opinions?
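To illustrate the failure mode, here is a minimal sketch (my own illustration, not actual framework code): the VM refuses to start whenever the initial heap (-Xms) ends up larger than the maximum heap (-Xmx), which is exactly what happens when a framework-supplied -Xms1024m meets a suite's -Xmx300M.

```python
# Hypothetical sketch: detect the -Xms/-Xmx conflict that produces
# "Initial heap size set to a larger value than the maximum heap size".
# The JVM requires the initial heap (-Xms) to be <= the maximum heap (-Xmx).

import re

def parse_heap_mb(flag_value: str) -> int:
    """Convert a JVM heap size like '300M', '1024m', or '1G' to megabytes."""
    match = re.fullmatch(r"(\d+)([kKmMgG])", flag_value)
    if not match:
        raise ValueError(f"unrecognized heap size: {flag_value}")
    number, unit = int(match.group(1)), match.group(2).lower()
    return {"k": number // 1024, "m": number, "g": number * 1024}[unit]

def heap_conflict(vmargs: str) -> bool:
    """Return True if the combined vmargs would make the VM fail to start."""
    ms = mx = None
    for arg in vmargs.split():
        if arg.startswith("-Xms"):
            ms = parse_heap_mb(arg[4:])
        elif arg.startswith("-Xmx"):
            mx = parse_heap_mb(arg[4:])
    return ms is not None and mx is not None and ms > mx

# Framework default -Xms1024m combined with the suite's -Xmx300M:
print(heap_conflict("-Xms1024m -Xmx300M"))  # True -> VM init error
print(heap_conflict("-Xms300M -Xmx300M"))   # False -> starts fine
```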
I see now that I did remove the setting from the test framework itself (after I initially added it), but I see it is still set in the Hudson script that invokes the performance tests:

-Xms1024m -Xmx1024m

I assumed performance tests were "better" if garbage collection was "minimized" ... but am happy to change if "the team" thinks otherwise.
(In reply to David Williams from comment #1)
> I see now I did remove the setting from the test framework itself (after I
> initially added it) but I see it is still set in the Hudson script that
> invokes the performance tests:
>
> -Xms1024m -Xmx1024m
>
> I assumed performance tests were "better" if garbage collection was
> "minimized" ... but, am happy to change if "the team" thinks otherwise.

FWIW, I did remove any such setting from the test framework and our "Hudson scripts" to avoid this problem ... am trying local builds today to see if there might be other issues with this particular suite.

Also, I see that http://wiki.eclipse.org/Performance/Automated_Tests says:

Add -Xms256M -Xmx256M or similar to the VM arguments to avoid memory pressure during the measurements.

So that's what I was trying to do, times 4.
Adding some other "senior people" that might have an opinion, or memories!, about how the "performance framework" should handle this. Should "we" specify "equal" min heap and max heap (-Xms -Xmx), and if so, what's the right value: 256M or 1024M on "today's equipment"? Or ... just leave it to each suite to set what they'd like?
(In reply to David Williams from comment #2)
> FWIW, I did remove any setting from test framework, and our "Hudson scripts"
> to avoid this problem ... am trying local builds today, to see if there
> might be other issues with this particular suite.

FWIW, it did run locally once I fixed (removed) the "memory pressure avoidance" settings.

I'll put this on the discussion topics for Wednesday's meeting to get it settled, but note that even if we used -Xms256M -Xmx256M, I would still recommend that all tests that set -Xmx also set -Xms. I'll have to find a good wiki page to document that.
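That recommendation could even be checked mechanically. A hypothetical lint for it (names are mine, not from the framework): flag any test.xml vmargs value that sets -Xmx without also setting -Xms, since that is the only combination where a larger framework-level -Xms can sneak in above the suite's -Xmx.

```python
# Hypothetical lint: a vmargs string that sets -Xmx should also set -Xms,
# so a framework-supplied -Xms can never exceed the suite's -Xmx.

def sets_both_heap_flags(vmargs: str) -> bool:
    """Return False only for the risky case: -Xmx present without -Xms."""
    args = vmargs.split()
    has_ms = any(a.startswith("-Xms") for a in args)
    has_mx = any(a.startswith("-Xmx") for a in args)
    return has_ms or not has_mx

print(sets_both_heap_flags("-Xmx300M"))           # False: mx without ms
print(sets_both_heap_flags("-Xms300M -Xmx300M"))  # True
```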
In general, performance tests should use the values that we set in the eclipse.ini (currently -Xms40m -Xmx512m), so that we test what the user experiences. Using something below that is definitely wrong. I've removed the corresponding settings from:

org.eclipse.jdt.text.tests
org.eclipse.pde.api.tools.tests
org.eclipse.team.tests.core

The settings were there because back in the old days those test suites ran out of memory with the default settings. We may well have to do that again for some test suites that fail using 512m.

The performance framework should not set any values different from those specified in the eclipse.ini. Moving to Releng to ensure that.
(In reply to Dani Megert from comment #5)
> In general, performance tests should use the values that we set in the
> eclipse.ini (currently -Xms40m -Xmx512m), so that we test what the user
> experiences. Using something below that is definitely wrong. I've removed
> the corresponding settings from:
>
> org.eclipse.jdt.text.tests
> org.eclipse.pde.api.tools.tests
> org.eclipse.team.tests.core
>
> The settings were there because back in the old times those test suites ran
> out of memory with the default settings. We might as well have to do that
> again for some test suites that fail using 512m.
>
> The performance framework should not set any (different) values than
> specified by the eclipse.ini. Moving to Releng to ensure that.

I can see it either way, but it would be best if you stated this as a change in philosophical approach since the writing of

http://wiki.eclipse.org/Performance/Automated_Tests

that is, "we" no longer think it appropriate to "avoid memory pressure", as it says:

Add -Xms256M -Xmx256M or similar to the VM arguments to avoid memory pressure during the measurements.

= = = = =

My own thinking was that the tests were designed purely to test for regressions in our code (and leave the complications and differences of VM GC out of it). But GC methods have gotten a lot more complicated since that was written, and it would take more than setting -Xms and -Xmx equal to keep GC from being engaged.

= = = = = = =

Assuming that part of the wiki should be removed ... or changed, perhaps to say "the same settings as used in the eclipse.ini should be used, unless the test specifically requires more memory".

= = = = = = = =

Just a reminder: we no longer use the eclipse executable to run the performance tests. I only mention this since there are some comments in the code (I believe in the overall test.xml) that say we do. Hence, I believe the framework would have to set the "eclipse.ini" values itself.
As a minor aside, should the same "standard" memory settings also be used for unit tests?

I believe the reason I originally set them in Hudson was so that they would only apply to performance tests. So I'm just trying to decide whether I should set them in Hudson, or in the framework itself.
(In reply to David Williams from comment #6)
> I can see it either way, but would be best if you stated this was a change
> in philosophical approach, since the time of writing of
>
> http://wiki.eclipse.org/Performance/Automated_Tests
>
> and "we" no longer think it appropriate to "avoid memory pressures" as it
> says:
>
> Add -Xms256M -Xmx256M or similar to the VM arguments to avoid memory
> pressure during the measurements.

That particular comment is in the section about launching via launch configuration from the IDE. I don't know whether the framework back then explicitly set the values to -Xms256M -Xmx256M. Did it?

> = = = = =
>
> My own thinking was the tests were designed purely to test for regressions
> in our code (and leave complications and differences of VMs GC out of it).

Not purely. We always want(ed) to focus on common user scenarios, and to get a feeling for how those perform on different machines with different memory, plus of course, detect regressions.

> = = = = = = =
>
> Assuming that part of wiki should be removed ... or changed, perhaps to say
> "same settings as used by eclipse.ini should be used, unless the test
> specifically requires more memory".

Yes, can you do the update, please? Thanks.

> Just a reminder, we no longer use the eclipse executable to run the
> performance tests. I only mention this since there are some comments in code
> (I believe in overall test.xml) that say it is. Hence, I believe the
> framework would have to set the "eclipse.ini" values.

That's fine, but we should read the values from the eclipse.ini file, and remove the outdated comments.
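Reading the values from the eclipse.ini file could be fairly simple. A sketch, under the assumption that the file follows the usual launcher format (one argument per line, with everything after the -vmargs marker passed to the VM); the function name and sample content are illustrative, not actual framework code:

```python
# Sketch of extracting the heap flags from an eclipse.ini-style file:
# one argument per line, VM arguments follow the "-vmargs" marker line.

def heap_flags_from_ini(lines):
    """Return the -Xms/-Xmx lines that appear after the -vmargs marker."""
    in_vmargs = False
    flags = []
    for line in lines:
        line = line.strip()
        if line == "-vmargs":
            in_vmargs = True
        elif in_vmargs and (line.startswith("-Xms") or line.startswith("-Xmx")):
            flags.append(line)
    return flags

# Illustrative eclipse.ini content with the values mentioned above:
ini = """-startup
plugins/org.eclipse.equinox.launcher.jar
-vmargs
-Xms40m
-Xmx512m
""".splitlines()

print(heap_flags_from_ini(ini))  # ['-Xms40m', '-Xmx512m']
```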
(In reply to David Williams from comment #7)
> As a minor aside, should the same "standard" memory settings also be used
> for unit tests?
>
> I believe the reason I originally set them in Hudson was so that they only
> applied to performance tests. So, I'm just trying to decide if I should set
> them in Hudson, or the framework itself.

For the unit tests it's fine to tweak the VM arguments, if that makes the tests run faster. Otherwise, I'd also just use the defaults from eclipse.ini.
I've confirmed that for unit tests (and now performance tests) the values for "java-tests" are being set to what the eclipse.ini has. (We don't actually read the eclipse.ini, so if/when that changes, the test framework (in the library.xml file) should be updated also.)

I've also updated the wiki, replacing the old line with:

Adjust -Xms and -Xmx if needed. Note that during automated production runs, the settings are set to what the eclipse.ini has. If your test needs something different (larger or smaller), be sure to set both (ms and mx) in your test.xml file, using the vmargs property.

So, I think we are done here ... except to see who runs out of memory; they will have to fix that in their test.xml files. (Or fix their code if it's due to a memory leak :)
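For anyone wondering why a suite's vmargs can safely override the framework defaults: on HotSpot, when the same -Xms/-Xmx flag appears more than once on the command line, the last occurrence wins, so appending the suite's test.xml vmargs after the framework's defaults gives the suite the final say. A small sketch of that merge semantics (function name is mine, not from library.xml):

```python
# Illustrative merge: later -Xms/-Xmx flags override earlier ones, mirroring
# HotSpot's "last occurrence wins" behavior for duplicate command-line flags.

def effective_heap_flags(framework_args, suite_vmargs):
    """Merge flags, with later entries overriding earlier ones per prefix."""
    merged = {}
    for arg in framework_args + suite_vmargs.split():
        for prefix in ("-Xms", "-Xmx"):
            if arg.startswith(prefix):
                merged[prefix] = arg
    return [merged[p] for p in ("-Xms", "-Xmx") if p in merged]

framework = ["-Xms40m", "-Xmx512m"]  # framework defaults, per eclipse.ini
print(effective_heap_flags(framework, "-Xms256M -Xmx1024M"))
# ['-Xms256M', '-Xmx1024M']
```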
Marking as fixed. But please comment if more discussion is needed, or if I missed anything.