| Summary: | OutOfMemory Exception | | |
|---|---|---|---|
| Product: | z_Archived | Reporter: | sam va <vavesw> |
| Component: | BIRT | Assignee: | Birt-ReportEngine-inbox <Birt-ReportEngine-inbox> |
| Status: | NEW --- | QA Contact: | Xiaoying Gu <bluesoldier> |
| Severity: | normal | | |
| Priority: | P3 | CC: | bluesoldier, jouyang, schlosna, vavesw, wenfeng.fwd, wyan |
| Version: | 3.7.0 | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Attachments: | | | |
Description
sam va
Can you attach your test driver code? We want to see if there is any BIRT engine/task instance not released.

We are running this in a server environment, so we keep the engine up for the lifetime of the server. If we run small reports it is fine, but if we run large reports in parallel we get these issues. We have a lot of our custom code integrated, so I am not sure I can post it here; I will try to segregate the engine code. I now get the following errors:

```
java.lang.OutOfMemoryError: nativeGetNewTLA
    at org.eclipse.birt.data.engine.executor.cache.ResultObjectUtil.readData(ResultObjectUtil.java:175)
    at org.eclipse.birt.data.engine.executor.cache.disk.DataFileReader.read(DataFileReader.java:104)
    at org.eclipse.birt.data.engine.executor.cache.disk.RowFile.readRowFromFile(RowFile.java:248)
    at org.eclipse.birt.data.engine.executor.cache.disk.RowFile.fetch(RowFile.java:209)
    at org.eclipse.birt.data.engine.executor.cache.disk.DiskCacheResultSet.nextRow(DiskCacheResultSet.java:102)
    at org.eclipse.birt.data.engine.executor.cache.disk.DiskCache.next(DiskCache.java:174)
    at org.eclipse.birt.data.engine.executor.cache.SmartCache.next(SmartCache.java:155)
    at org.eclipse.birt.data.engine.executor.transform.CachedResultSet.next(CachedResultSet.java:436)
    at org.eclipse.birt.data.engine.impl.ResultIterator.hasNextRow(ResultIterator.java:569)
    at org.eclipse.birt.data.engine.impl.ResultIterator.nextRow(ResultIterator.java:510)
    at org.eclipse.birt.data.engine.impl.ResultIterator.next(ResultIterator.java:460)
    at org.eclipse.birt.data.engine.impl.ResultIterator.close(ResultIterator.java:969)
    at org.eclipse.birt.data.engine.impl.ResultIterator$1.dataEngineShutdown(ResultIterator.java:200)
    at org.eclipse.birt.data.engine.impl.DataEngineImpl.shutdown(DataEngineImpl.java:565)
    at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.shutdown(DataRequestSessionImpl.java:509)
    at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.shutdown(AbstractDataEngine.java:348)
    at org.eclipse.birt.report.engine.data.dte.DataGenerationEngine.shutdown(DataGenerationEngine.java:151)
    at org.eclipse.birt.report.engine.executor.ExecutionContext.close(ExecutionContext.java:480)
    at org.eclipse.birt.report.engine.api.impl.EngineTask.close(EngineTask.java:1540)
    at org.eclipse.birt.report.engine.api.impl.RunTask.close(RunTask.java:309)
```

And I have set the following on the task (as mentioned in the BirtMetal.ppt):

```java
task.getAppContext().put(DataEngine.MEMORY_USAGE, DataEngine.MEMORY_USAGE_CONSERVATIVE);
task.getAppContext().put(DataEngine.DATA_SET_CACHE_ROW_LIMIT, "0");
task.getAppContext().put(DataEngine.MEMORY_BUFFER_SIZE, "2");
```

I set the corresponding options on the EngineConfig as well. I am starting the test with 2 reports in parallel and will slowly run more reports if everything goes fine. One report runs fine without any issues, but takes 1 hr. FYI, we generate the report and PDF/Excel along with it.

(In reply to comment #2)
> put the code here... will try to segregate the engine code

We also have a server that uses the BIRT engine to run reports and generate PDF/Excel in parallel, and we do not see memory issues with large reports (tens of gigabytes each, with millions of records). Your segregated code that uses the BIRT engine would help isolate the issue.

2 reports with 25000 records completed successfully, but it took 1 hr 45 min. The reports generated Excel and PDF documents along with the rptdocument. I am trying to segregate the code and then reproduce the issue with the sample database. Thanks

(In reply to comment #5)
> 2 reports with 25000 records completed successfully. But it took 1hr 45m. The
> reports generated excel and pdf documents along with rptdocument

What is the size of the rptdocument, and what is the size of the Excel file? Is it possible for you to break down the time into 1) DB query time, 2) rptdocument generation time, 3) Excel generation time, 4) PDF generation time? 1 hr 45 min seems too long for a 25k-record report generation.

Following are the sizes of the files:

```
-rw-r--r-- 1 beadmin beadmin 521756672 Oct 5 04:02 MPM_1675.rptdocument
-rw-r--r-- 1 beadmin beadmin 571666432 Oct 5 04:03 MPM_1674.rptdocument
-rw-r--r-- 1 beadmin beadmin   2723649 Oct 5 04:46 MPM_1675.pdf
-rw-r--r-- 1 beadmin beadmin   2971403 Oct 5 04:58 MPM_1674.pdf
-rw-r--r-- 1 beadmin beadmin  26937266 Oct 5 05:20 MPM_1675.xls
-rw-r--r-- 1 beadmin beadmin  29201596 Oct 5 05:38 MPM_1674.xls
```

We had a different tool earlier that completed the entire report in 45 min, and the query used to take 6 min. The rptdocument is generated quickly, maybe 15 min, but as you can see, Excel takes 45 min and PDF 35 min. I am trying to put together sample code for you to reproduce the issue, but the earliest I can manage is Oct 11th (Tue). Do you prefer an app server (like Tomcat)? We are currently using WebLogic 10.0.2. Thanks

Here are the stats for 3 parallel reports with HTML/PDF/Excel (25000 records):

```
-rw-r--r-- 1 beadmin beadmin   2332190 Oct 7 03:37 MPM_1703.pdf
-rw-r--r-- 1 beadmin beadmin 520826880 Oct 7 02:26 MPM_1703.rptdocument
-rw-r--r-- 1 beadmin beadmin  26937266 Oct 7 04:41 MPM_1703.xls
-rw-r--r-- 1 beadmin beadmin   2541239 Oct 7 03:14 MPM_1704.pdf
-rw-r--r-- 1 beadmin beadmin 570642432 Oct 7 02:28 MPM_1704.rptdocument
-rw-r--r-- 1 beadmin beadmin  29201596 Oct 7 04:29 MPM_1704.xls
-rw-r--r-- 1 beadmin beadmin   2971808 Oct 7 04:11 MPM_1705.pdf
-rw-r--r-- 1 beadmin beadmin 599187456 Oct 7 02:29 MPM_1705.rptdocument
-rw-r--r-- 1 beadmin beadmin  30694806 Oct 7 05:14 MPM_1705.xls
```

Thanks

Created attachment 204941 [details]
The eclipse war project

This contains the test code (file: BirtLoadTest.zip). The URL of the application will be as follows:

    http://localhost:8001/BirtLoadTest/ReportingServlet?count=5

`count` can be any number; it specifies how many jobs the scheduler will schedule to run in parallel at (current time) + 1 min.

Created attachment 204942 [details]
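The scheduling pattern the test servlet describes (schedule `count` jobs to start together after a fixed delay, then run them in parallel) can be sketched with the standard library alone. This is a minimal illustration, not code from the attached BirtLoadTest.zip: `ReportJob` is a hypothetical stand-in for the real BIRT run-and-render work, and the one-minute delay is shortened so the sketch finishes quickly.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ParallelJobScheduler {

    // Schedules `count` jobs to start after `delayMillis`, mirroring the
    // servlet's "(current time) + 1 min" behaviour, and waits for all of
    // them to finish. Returns the number of jobs that completed.
    public static int runJobs(int count, long delayMillis) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(count);
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            pool.schedule(() -> {
                // Placeholder for the real per-report work: open the design,
                // create a RunAndRenderTask, set the app-context memory
                // options, run, and close the task in a finally block.
                done.countDown();
            }, delayMillis, TimeUnit.MILLISECONDS);
        }
        done.await();
        pool.shutdown();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runJobs(5, 50)); // prints 5
    }
}
```

Closing each task (and the pool) deterministically matters here, since the thread above opens with the question of whether any engine/task instance is left unreleased.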
test design file

Please place this in the C drive; the outputs will also be created in the C drive.
This test design file may create a 20 MB rptdocument, but for us, in reality, the rptdocument is 550 MB, hence the failure with OutOfMemory. This test design has 27K records from the sample database, but it runs fine locally with 5 reports in parallel. Maybe we need to create a more complex report; will try that. Thanks

(In reply to comment #11)
> this test design file may create a 20mb rptdocument. but for us, in reality,
> the rptdocument is 550mb. hence failing with outofmemory. this test design has
> 27K records from sample database. but it runs fine. locally with 5 reports in
> parallel.

Do you see memory usage continue to increase without stabilizing at a ceiling with this test report?

Sam, what is your memory setting for the JDK?

Sam, could you please reproduce the problem with a standalone Java program? In my test environment, with a single thread, running the test design takes only 27 s, and rendering it to XLS takes 43 ms. Setting the data engine options gives almost the same result. My JDK setting is "-Xms40m -Xmx384m". If you can reproduce the problem in a standalone program with the test report, it would help a lot. Thanks

Correction: the XLS rendering above takes 43 s, not 43 ms.

1. We tested a document of more than 500 MB and found that if we do three operations (a. run document, b. render PDF, c. render XLS) together in one thread, the total time cost is about 1 hr.
2. Please note that rendering such a document to XLS might cost up to 1024 MB of memory, and rendering several XLS outputs together would need several times that memory. So if you render XLS with 5 threads together, you had better set the maximum memory to 5120m.
3. Generating the document and rendering PDF require much less memory than XLS.
4. Our test generates a much bigger output than yours: pdf: 500m, xls: 2600m.

Theoretically, it should be faster and require less memory in your case. Anyway, it is not possible for us to estimate exactly how much memory and time are needed for your particular case. To do so, we need to know:
1. In a single thread: how much memory and time are needed to run the document and to render PDF/XLS separately?
2. Have you set enough memory so that several operations can be done together?
Thanks

Actually the rptdocument being generated should not be 500 MB for a 50K-record report. We are in the process of tuning/optimizing the report. Thanks

Developer is handling this bug.--fengfu.liu
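The heap-sizing advice in the reply above (roughly 1024 MB per concurrent XLS rendering of a document this size, so 5 parallel renders need about 5120 MB) can be written as a small helper. This is a rule-of-thumb sketch using the figures from that comment, not a measured BIRT memory model; `MB_PER_XLS_RENDER` is an assumption taken directly from the comment.

```java
public class XlsHeapEstimate {

    // Assumed from the comment above: each concurrent XLS rendering of a
    // ~500 MB rptdocument may need up to 1024 MB of heap.
    static final int MB_PER_XLS_RENDER = 1024;

    // Suggested -Xmx value in megabytes for `threads` parallel XLS renders.
    public static int suggestedXmxMb(int threads) {
        return threads * MB_PER_XLS_RENDER;
    }

    public static void main(String[] args) {
        // 5 parallel XLS renders -> "-Xmx5120m", matching the comment's advice.
        System.out.println("-Xmx" + suggestedXmxMb(5) + "m");
    }
}
```

With the reporter's setting of "-Xms40m -Xmx384m" mentioned earlier in the thread, even a single large XLS rendering would be far over budget under this rule of thumb.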