| Summary: | Provide problem/reference counts in xml from Analysis and Use Scan tasks |
|---|---|
| Product: | [Eclipse Project] PDE |
| Reporter: | Peter Parapounsky <pjparapo> |
| Component: | API Tools |
| Assignee: | Curtis Windatt <curtis.windatt.public> |
| Status: | VERIFIED FIXED |
| QA Contact: | |
| Severity: | enhancement |
| Priority: | P3 |
| CC: | curtis.windatt.public, Michael_Rennie |
| Version: | 3.7 |
| Keywords: | noteworthy |
| Target Milestone: | 3.7 M7 |
| Hardware: | PC |
| OS: | Windows XP |
| Whiteboard: | |
| Attachments: | |
Description
Peter Parapounsky
To produce the summary xml, the task would have to do essentially all the same work that the report conversion task does. What benefit is there to putting this work into the use scan task? Is the report conversion task a problem to run, performance- or result-wise?

> To produce the summary xml, the task would have to do essentially all the same
> work that the report conversion task does.

I see; I assumed that the task producing the XML reports would also produce the XML summary.

The XML summary is ultimately what we really need, so we can easily retrieve the total errors/warnings. If there is no other way but to run the conversion task, that is OK too.

We haven't really done any performance analysis, so I don't know whether the conversion task could be a performance problem, though I haven't seen anything obvious. My point was that the conversion tasks could be eliminated by using XSLT.
(In reply to comment #2)
> We haven't really done any performance analysis so I don't really know whether
> the conversion task could be a performance problem, though I haven't seen
> anything obvious. I guess my point was that the conversion tasks could be
> eliminated by using XSLT.

This has been previously investigated, but the prior decision was to go with a separate conversion task. There could certainly be some benefits to having more metadata in the xml report. However, changing how the task runs would be a significant time investment, and we would need to support both the old and new xml reports in the conversion task. There need to be some clear wins before investing in it.

I understand. So will it be possible for the conversion task to generate an XML summary?

Mike is investigating this for M5. We only plan on providing a minimal amount of metadata in xml (totals of reference/problem counts). If more detailed information is needed, we can look at making the html report easier to scrape information from.

Created attachment 191471 [details]
Proposed Fix
This fix provides a very simple count xml file in the root directory of the xml report. The xml has the same format for both the use scan (total references found) and analysis scan (total problems found). Note that the html converter for the use scans can combine elements which may result in a smaller count in the html report.
I have committed the patch to HEAD so Peter can try out the fix. The xml file is called counts.xml and the format is as follows:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<reportedcount total="685"/>
```

I do have some additional information that I can put in the summary, but I figured I should start with the absolute minimum of information. If there is something specific that is needed, Peter, please let me know.

Thanks Curtis. If I understand correctly, the fix provides only the total number of references. What we are actually more interested in is the number of errors and warnings (if any), so we can display that number on a build page, something like: Errors: 4, Warnings: 12. Would that be possible?

(In reply to comment #8)
> Thanks Curtis. If I understand correctly the fix provides only the total number
> of references. What we are more interested in actually is the number of errors
> and warnings (if any) so we can display that number on a build page, something
> like say: Errors: 4, Warnings: 12. Would that be possible?

Use scans don't have error/warning severities. I can collect information based on what is in IReference, such as the number of illegal references. For analysis scans, I'm not sure when we apply the severity preferences; Mike may know. If the severities are available in the IApiProblem, or somewhere else we can access from the xml reporter, we can provide counts of problems with warning/error severities. The task documentation lists other files that are generated, so we'll have to update the doc plug-in as well.

(In reply to comment #9)
> For analysis scans, I'm not sure when we apply the severity preferences.

Answered my own question: we do store the severity directly in the xml (in my example it was just hidden by the longer message string). The IApiProblem does not know its severity, but we can access the ApiProblemFactory to get the expected information. I'll work on fixing this asap.

Created attachment 191559 [details]
WIP
- Adds support for listing error/warning totals for the analysis task.
- Adds support for listing illegal/internal totals for the use task.
- No longer increments the counter for entries that are removed by the collator before writing.
- Fixes documentation.
There is still an issue with the counts in the use scan: the counts we get are higher than what is actually written out to the xml. I found that the collator was removing some entries under the covers, but there must also be something in the xml descriptor writer that will skip or overwrite entries.
Created attachment 191560 [details]
Fix
Took a long time to debug, but I eventually figured out how the collator was removing 'like' references and how to work around it in the counts.
Fixed in HEAD. Verified in I20110424-2000