| Summary: | New job for CDT | | |
|---|---|---|---|
| Product: | Community | Reporter: | Andrew Gvozdev <angvoz.dev> |
| Component: | CI-Jenkins | Assignee: | Eclipse Webmaster <webmaster> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | | |
| Priority: | P3 | | |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
Description
Andrew Gvozdev
Comment 1 (Eclipse Webmaster)
Done. I don't think there really is any one 'good' time, but presumably later in the evening (EST) and weekends are good choices. -M.

Comment 2 (Andrew Gvozdev)
> # java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

Am I out of space here?

    init:
    @dot.nestedJars:
    [unzip] Expanding: <https://hudson.eclipse.org/hudson/job/cdt-sd80/ws/all/org.eclipse.cdt.releng/results/eclipse/plugins/org.eclipse.equinox.registry_3.5.100.v20110321.jar> into <https://hudson.eclipse.org/hudson/job/cdt-sd80/ws/all/org.eclipse.cdt.releng/results/nestedJars/org.eclipse.equinox.registry_3.5.100.v20110321>
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    # java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
    #
    # Internal Error (allocation.cpp:117), pid=14094, tid=4137491312
    # Error: ChunkPool::allocate
    #
    # JRE version: 6.0_21-b06
    # Java VM: Java HotSpot(TM) Server VM (17.0-b16 mixed mode linux-x86 )
    # An error report file with more information is saved as:
    # <https://hudson.eclipse.org/hudson/job/cdt-sd80/ws/all/org.eclipse.cdt.releng/hs_err_pid14094.log>
    #
    # If you would like to submit a bug report, please visit:
    # http://java.sun.com/webapps/bugreport/crash.jsp
    #
    /tmp/hudson3455261505414658444.sh: line 31: 14094 Aborted   java -Xms800M -Xmx2048M -jar tools/org.eclipse.releng.basebuilder/plugins/org.eclipse.equinox.launcher.jar -application org.eclipse.ant.core.antRunner -DupdateSiteLocation=../../update-site update-site
    Archiving artifacts
    Recording test results

Comment 3 (Eclipse Webmaster)
It's possible, if lots of jobs are running (or not cleaning up), that tmp space could be exhausted. Is this continuing to happen? -M.

Comment 4 (Andrew Gvozdev)
(In reply to comment #3)
> > # java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate.
> It's possible, if lots of jobs are running (or not cleaning up), that tmp space
> could be exhausted. Is this continuing to happen?

I have come to think that this is actually an OutOfMemory error, as it says. I am not able to finish a single build after switching to basebuilder R37_M7 to be able to use the git fetch factory. Is it possible to add some memory to a Hudson job? Typically I am getting errors like these, always in different spots:

a)

    /opt/public/jobs/cdt-sd80/workspace/all/org.eclipse.cdt.releng/results/plugins/org.eclipse.cdt.core.aix/build.xml:179: The following error occurred while executing this line:
    /opt/public/jobs/cdt-sd80/workspace/all/org.eclipse.cdt.releng/results/plugins/org.eclipse.cdt.core.aix/build.xml:86: java.lang.OutOfMemoryError

b)

    mmap failed for CEN and END part of zip file
    mmap failed for CEN and END part of zip file
    mmap failed for CEN and END part of zip file
    mmap failed for CEN and END part of zip file

Comment 5 (Eclipse Webmaster)
I don't see anything in the Hudson config that indicates we're 'limiting' memory usage, and the slave you're running the build on has 20+G of RAM (it's currently showing 4G free). -M.

Comment 6 (Andrew Gvozdev)
Hmm, I do not have that issue running builds locally. I am pretty much stuck at this point, waiting for ideas to arrive.

One odd thing to mention: I cannot remove build #4, see https://hudson.eclipse.org/hudson/job/cdt-sd80/. I am getting:

    java.io.IOException: Unable to delete /opt/users/hudsonbuild/.hudson/jobs/cdt-sd80/builds/.2011-05-18_12-28-51/.nfs0000000077dd2bea000050ed
        at hudson.Util.deleteFile(Util.java:261)

How is it possible for me to run the script outside of Hudson, to rule Hudson out?
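One way to run the script outside of Hudson is to replay the launcher invocation from the failing shell step in the log above by hand. A minimal sketch, assuming a local checkout of org.eclipse.cdt.releng with the same basebuilder layout as the job workspace; the checkout path is a placeholder, not from the thread:

```sh
#!/bin/sh
# Sketch: replay the job's build step by hand, outside Hudson, to rule
# Hudson out. The checkout path is a placeholder; the java invocation is
# copied verbatim from the failing step in the log above.
cd /path/to/org.eclipse.cdt.releng || exit 1

# Same JVM sizing as the job. Note the slave runs a 32-bit VM
# (linux-x86 per the crash banner), so -Xmx2048M leaves little address
# space for native allocations -- consistent with the
# ChunkPool::allocate failure and the "mmap failed for CEN and END part
# of zip file" errors above, even though the machine has plenty of RAM.
java -Xms800M -Xmx2048M \
  -jar tools/org.eclipse.releng.basebuilder/plugins/org.eclipse.equinox.launcher.jar \
  -application org.eclipse.ant.core.antRunner \
  -DupdateSiteLocation=../../update-site update-site
```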
Comment 7 (Eclipse Webmaster)
I've cleared the dangling nfs file.

About the only place you could build it here would be build.eclipse.org, but since that is separate hardware (and a different config), the results may not be 'helpful' in solving the issue. Is there any way to have your job check the 'amount' of space (RAM) the java process is consuming, or what's available before the run starts? Would connecting the job to a java debugger help? -M.
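As for checking what's available before the run starts: a few diagnostic commands prepended to the job's shell build step would capture that in the console log. A minimal sketch, assuming a Linux slave and Hudson's standard WORKSPACE environment variable; none of this comes from the actual job configuration:

```sh
#!/bin/sh
# Sketch: print resource state at the start of the build step, so
# OutOfMemoryError runs can be correlated with what was actually free.
echo "=== Memory (MB) ==="
free -m                      # physical RAM and swap on the slave
echo "=== Disk space ==="
df -h /tmp "$WORKSPACE"      # /tmp can be exhausted by other jobs
echo "=== Process limits ==="
ulimit -a                    # per-process limits (address space, etc.)
```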