Bug 431143 - "testConfigs" are hard coded into ant task, instead of being settable.
Summary: "testConfigs" are hard coded into ant task, instead of being settable.
Status: RESOLVED FIXED
Alias: None
Product: Platform
Classification: Eclipse Project
Component: Releng (show other bugs)
Version: 4.4
Hardware: PC Linux
Importance: P2 major
Target Milestone: 4.6 M7
Assignee: David Williams CLA
QA Contact:
URL:
Whiteboard: SR1
Keywords:
Depends on:
Blocks: 390986
 
Reported: 2014-03-25 12:56 EDT by David Williams CLA
Modified: 2016-05-06 15:19 EDT (History)
0 users

See Also:


Attachments

Description David Williams CLA 2014-03-25 12:56:50 EDT
This bug is to reflect long term improvements needed after encountering bug 431047. 

"testConfigs" are hard-coded into the Java code that summarizes JUnit results, but to be accurate yet flexible, they need to be settable at test time. (For example, at times we might want to test with both Java 7 and Java 8, on the same platform, for the same build, but other times not. Or, we may eventually want the ability to sometimes test on SUSE, and sometimes Red Hat ... or whatever.)

This code, where it is hard-coded, is in TestResultsGenerator.java in the eclipse.platform.releng.buildtools project.

Plus, this issue is complicated by the fact that at test time, the suffixes (that are to match "testConfigs") are semi-hard-coded in a number of places ... and it is hard to know which one is actually used. So, that needs to be spec'd, and a simple system defined to determine, or "compute", it. The computation should be relatively simple, since it's merely ${os}.${ws}.${arch}_${VM}, which is all specified (separately) anyway.
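As a rough sketch of the computation described above (class and method names are hypothetical; the real code lives in TestResultsGenerator.java), it is just string concatenation of the four independently specified parameters:

```java
// Hypothetical sketch: compute the testConfig suffix from the four
// parameters that are already specified separately at build time.
public class TestConfigComputer {

    // e.g. computeConfig("linux", "gtk", "x86", "8.0") -> "linux.gtk.x86_8.0"
    static String computeConfig(String os, String ws, String arch, String vm) {
        return os + "." + ws + "." + arch + "_" + vm;
    }

    public static void main(String[] args) {
        System.out.println(computeConfig("linux", "gtk", "x86", "8.0"));
    }
}
```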
Comment 1 David Williams CLA 2014-03-25 12:58:49 EDT
I think it is worth straightening this out before M7, as it will make other fixes/changes much easier.
Comment 2 David Williams CLA 2014-03-25 16:27:40 EDT
Another aspect of this issue is the web pages where we display console logs. We currently do that with a PHP function such as displayList(<some directory name>) ... so whatever the "platform" is, it needs to be passed to the test PHP page as well, so it can display the correct directories.
Comment 3 David Williams CLA 2014-03-25 16:39:23 EDT
The display of the "logs" is done in "logs.php" ... it currently looks like this (near the end of the file): 

listDegailedLogs("linux.gtk.x86_6.0");
listDegailedLogs("win32.win32.x86_7.0");
listDegailedLogs("macosx.cocoa.x86_5.0");
Comment 4 David Williams CLA 2014-03-25 17:08:33 EDT
Just so I don't forget, see also bug 431047 comment 5 for the earliest point we need to know the "test platform" (which is before the framework starts).
Comment 5 David Williams CLA 2014-03-25 20:54:37 EDT
I see a lot of this is rediscovering what I discovered two years ago, documented in bug 390986.
 
I'll use this bug specifically for fixing the ant task to accept a "list of test platforms" as input, and the original for all the remaining issues. 
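A minimal sketch of what "accepting a list of test platforms as input" might look like (hypothetical class and attribute names; a real Ant task would extend org.apache.tools.ant.Task and receive the attribute via a bean-style setter, omitted here to keep the sketch self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a settable "expectedConfigs" attribute replacing the
// hard-coded array. Ant would call the setter for an attribute such as
//   expectedConfigs="linux.gtk.x86_8.0,win32.win32.x86_8.0"
public class TestConfigsSetter {
    private final List<String> expectedConfigs = new ArrayList<>();

    public void setExpectedConfigs(String commaSeparated) {
        expectedConfigs.clear();
        for (String config : commaSeparated.split(",")) {
            String trimmed = config.trim();
            if (!trimmed.isEmpty()) {
                expectedConfigs.add(trimmed);   // input order is preserved
            }
        }
    }

    public List<String> getExpectedConfigs() {
        return expectedConfigs;
    }

    public static void main(String[] args) {
        TestConfigsSetter task = new TestConfigsSetter();
        task.setExpectedConfigs("linux.gtk.x86_8.0, win32.win32.x86_8.0");
        System.out.println(task.getExpectedConfigs());
    }
}
```

Keeping the configs in a plain List (rather than a sorted set) is what would let the order passed in drive the column order on the summary page.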

And, as I think about it, there was another bug: the "generator" assumed all tests were available at the same time, or else the last one to run would "wipe out" previous reports (such as, if they'd been deleted or lost due to a disk crash). Plus, the way we do things now, when we go out to generate a report, we know exactly which machine's results we are looking for. So, while a much larger change ... I will consider passing in just one "test platform" string, and generating the column data needed for that machine.

Plus, while looking for that bug, I ran across bug 182955, which is old but still valid, and which I think is saying, in part, that some part of the "testPlatform" string should be user settable, even if other parts of it are better computed (from the other parameters, for os, ws, arch, and the (main) VM running the tests).
Comment 6 David Williams CLA 2014-04-30 13:48:17 EDT
I've not really investigated what this will take, so I still consider it high priority, but will un-target to be clear it may not be done in time for Luna.
Comment 7 David Williams CLA 2016-05-06 15:19:02 EDT
This was fixed in M6 or M7. 

The "expectedConfigs" can still be passed into the "generate index" task, but it also writes back the "found configs", which can be used to fine-tune test results pages, "file not found" handling, etc.

The order "passed in" to the task determines the order they appear on the test summary page (previously the order had to be alphabetical).

There is a potential limitation: no "minimum" or "maximum" is checked. While some column widths are correctly adjusted based on the number of "columns" expected (where a column is a testConfig), there would be some practical limits on squeezing in more than 4 or 6. Plus, if there is only one, for example, the summary table may look funny. We may find occasion to improve this in the future.

A small bug was introduced: a dash ('-') is printed in a table cell for a missing file only if another test config has that file in its results. The "missing file" is still listed below the summary table; it is just that "results" drive the table, not "expected". I do not think this is worth opening a bug, since in practice it almost never happens -- and when it does, it is typically temporary until some more results are obtained, and then the dash is printed so the table stays balanced.

This general scheme allowed two big improvements. First, there is no longer a long list of "missing files" listed, when it is simply waiting for results from that platform. (I think there may be a bug for that?) Second, if there are "test results found" but they are not listed in the "expected files" in the testManifest.xml file, then those files are listed in a separate table, so the testManifest.xml file can be corrected. This is expected to happen when developers add new test suites.
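The expected-vs-found comparison described above can be sketched with two set differences (file names below are made up for illustration; the real expected list comes from testManifest.xml):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: split results into "still waiting" (expected, but no
// results yet) and "unexpected" (results found, but missing from
// testManifest.xml -- typically a newly added test suite).
public class ExpectedVsFound {
    public static void main(String[] args) {
        Set<String> expected = new LinkedHashSet<>(Arrays.asList(
                "org.eclipse.core.tests.xml", "org.eclipse.ui.tests.xml"));
        Set<String> found = new LinkedHashSet<>(Arrays.asList(
                "org.eclipse.core.tests.xml", "org.eclipse.newsuite.tests.xml"));

        Set<String> waiting = new LinkedHashSet<>(expected);
        waiting.removeAll(found);          // expected, but no results yet

        Set<String> unexpected = new LinkedHashSet<>(found);
        unexpected.removeAll(expected);    // found, but not in testManifest.xml

        System.out.println("waiting=" + waiting);
        System.out.println("unexpected=" + unexpected);
    }
}
```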