| Summary: | goalFile Keeps Growing with Open Source BIRT 2.3.2 Provided with Maximo 7.1.1.5 | | |
|---|---|---|---|
| Product: | z_Archived | Reporter: | David Borland <dborland129065mi> |
| Component: | BIRT | Assignee: | Mingxia Wu <mwu> |
| Status: | NEW | QA Contact: | weiming tang <weiming.tang> |
| Resolution: | --- | | |
| Severity: | critical | | |
| Priority: | P3 | CC: | bluesoldier, jouyang, mccafferty, mwu, weiming.tang |
| Version: | 2.3.2 | | |
| Target Milestone: | --- | | |
| Hardware: | Other | | |
| OS: | Linux | | |
| Whiteboard: | | | |
Description
David Borland
Comment 1

David, please try to port to the latest BIRT first. What is the name of the temp file, "goalFile"? Could you please confirm whether this happens when exporting a document in any format?

Comment 2

(In reply to comment #1)
> David, please try to port to the latest BIRT first.
> What is the name of the temp file, "goalFile"? Could you please confirm
> whether this happens when exporting a document in any format?

We cannot update to the latest BIRT, since IBM would walk away from supporting us if we were not at the version bundled with Maximo.

Looking in the /tmp directory, I see three directories created by BIRT:

- DataEngine_151521544_1
- DataEngine_383784672_5
- DataEngine_649733818_2

Each has a subdirectory named something like BirtDataTemp12997659559291, and each of those has a subdirectory named something like session_12997659559301. Inside that directory is a growing file called goalFile. The file is binary.

I exported a document and it did not cause the file to be created. We pulled the vendor reports from our environment, and the goalFile (and its subdirectories) is not being created today.

Comment 3

Mingxia, please take a look.

Comment 4

(In reply to comment #3)
> Mingxia, please take a look.

The goalFile was recreated today just before noon. It appears that when someone runs a BIRT report that returns a lot of data, the data is written out to the goalFile. I'm seeing data from multiple plants.

Comment 5

(In reply to comment #4)
> The goalFile was recreated today just before noon. It appears that when
> someone runs a BIRT report that returns a lot of data, the data is written
> out to the goalFile. I'm seeing data from multiple plants.

What can we do to diagnose what is causing this? We currently have two growing goalFiles. Please advise.

Comment 6

I'm able to reproduce the issue now in my QA environment. I ran the Work Order Summary Report against the whole workorder table starting at 17:37; at 17:46 a DataEngine directory, its subdirectories, and a goalFile were created.

I'm wondering if this comes down to users running reports against too many records: when BIRT doesn't have room in memory to hold the data, it starts writing it out to the /tmp directory. What is BIRT designed to do when a report is run against every record in the database? Does it create a goalFile in the /tmp directory as a place to hold the records that can't be held in memory? How many records trigger the creation of the goalFile? I'm in a 32-bit environment; would that trigger a goalFile sooner?

Comment 7

Mingxia, how can we stop BIRT from writing these large files?

Comment 8

This is a bug in BIRT 2.3.2, but we have fixed it in 2.6.2. How did you integrate BIRT with Maximo? Did you write code that calls BIRT's API, or did you deploy BIRT directly under your web application server? Once I know this, I may be able to give you a solution that avoids generating those temp files.

Comment 9

The Maximo integration uses both the sample viewer and the API (for converting directly to PDF). Would it be possible to provide information for both scenarios?

Comment 10

For the API layer, you can configure your memory usage in EngineConfig, for example:

    EngineConfig ec = new EngineConfig( );
    ec.getAppContext( ).put( DataEngine.MEMORY_BUFFER_SIZE, 0 );

This will load all rows in memory, which will not trigger the disk I/O. For the BIRT viewer, this setting is not configurable.
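[Editor's note] For readers embedding the engine the same way, here is a minimal, self-contained sketch of where that setting fits in the standard BIRT report engine bootstrap. The design file name (workorder_summary.rptdesign) and output path are hypothetical placeholders, and the MEMORY_BUFFER_SIZE value of 0 follows comment #10's advice to keep all rows in memory rather than spilling to /tmp.

```java
import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.data.engine.api.DataEngine;
import org.eclipse.birt.report.engine.api.EngineConfig;
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportEngineFactory;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;
import org.eclipse.birt.report.engine.api.PDFRenderOption;

public class RenderWithoutSpill {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        // Per comment #10: 0 keeps all rows in memory, so the data engine
        // should not spill a goalFile into /tmp. The trade-off is that a
        // very large result set now has to fit on the heap.
        config.getAppContext().put(DataEngine.MEMORY_BUFFER_SIZE, 0);

        Platform.startup(config);
        try {
            IReportEngineFactory factory = (IReportEngineFactory) Platform
                    .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
            IReportEngine engine = factory.createReportEngine(config);

            // Hypothetical report design; substitute your own .rptdesign path.
            IReportRunnable design = engine.openReportDesign("workorder_summary.rptdesign");
            IRunAndRenderTask task = engine.createRunAndRenderTask(design);

            PDFRenderOption options = new PDFRenderOption();
            options.setOutputFormat("pdf");
            options.setOutputFileName("workorder_summary.pdf");
            task.setRenderOption(options);

            task.run();
            task.close();
            engine.destroy();
        } finally {
            Platform.shutdown();
        }
    }
}
```

Note that this covers only the API path; per comment #10, the bundled viewer does not expose the setting.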
Comment 11

Thank you for the information. After making this change, is report execution terminated if there is not enough memory to hold the report contents, or will this cause an out-of-memory failure? Is the answer to that question the same for the fix in 2.6.2? Would you provide information about how the fix is implemented in that version?

Comment 12

(In reply to comment #11)
> Thank you for the information. After making this change, is report
> execution terminated if there is not enough memory to hold the report
> contents, or will this cause an out-of-memory failure?
> Is the answer to that question the same for the fix in 2.6.2? Would you
> provide information about how the fix is implemented in that version?

Mingxia, can you answer ajleonhard's question? It seems like the fix might cause even more issues if the server doesn't have enough memory to load the data.

Comment 13

I tried running a big report with the 3.7.0 release and could not reproduce the issue. You can configure your memory usage in EngineConfig, for example:

    EngineConfig ec = new EngineConfig( );
    ec.getAppContext( ).put( DataEngine.MEMORY_BUFFER_SIZE, 0 );

This will load all rows in memory, which will not trigger the disk I/O. If the problem still occurs, you can use 3.7.0; since 2.6.2, the default setting is to load all rows in memory.
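[Editor's note] Separately, while a deployment remains on 2.3.2, the spill files can at least be watched. The following is a hypothetical diagnostic sketch, not part of BIRT or Maximo: it scans /tmp for the DataEngine_* directories described in comment #2 and prints the size of any goalFile it finds, so growth can be correlated with specific report runs.

```java
import java.io.File;

public class GoalFileMonitor {
    // /tmp is where the DataEngine_* directories were observed in this report.
    private static final File TMP = new File("/tmp");

    public static void main(String[] args) {
        File[] engineDirs = TMP.listFiles(
                (dir, name) -> name.startsWith("DataEngine_"));
        if (engineDirs == null) {
            return; // /tmp missing or unreadable
        }
        for (File engineDir : engineDirs) {
            report(engineDir);
        }
    }

    // Walk the BirtDataTemp*/session_* subdirectories and print every goalFile.
    private static void report(File dir) {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        for (File child : children) {
            if (child.isDirectory()) {
                report(child);
            } else if (child.getName().equals("goalFile")) {
                System.out.printf("%s: %,d bytes%n",
                        child.getAbsolutePath(), child.length());
            }
        }
    }
}
```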