This bug is to track long-term improvements needed after encountering bug 431047. "testConfigs" are hard-coded into the Java code that summarizes JUnit results, but to be accurate yet flexible, they need to be set at test time. (For example, at times we might want to test with both Java 7 and Java 8, on the same platform, for the same build, but other times not. Or, we may eventually want the ability to sometimes test on SUSE, and sometimes RedHat ... or whatever.) The code where this is hard-coded is in TestResultsGenerator.java in the eclipse.platform.releng.buildtools project. Plus, this issue is complicated by the fact that at test time the suffixes (that are to match "testConfigs") are semi-hard-coded in a number of places ... and it is hard to know which one is actually used. So, that needs to be spec'd and a simple system defined to determine, or "compute", it. The computation should be relatively simple, since it is merely ${os}.${ws}.${arch}_${VM}, all of which is already specified (separately) anyway. (A sketch of that computation follows.)
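A minimal sketch of that computation, in Java since the generator code is Java. The class and method names here are hypothetical, not part of TestResultsGenerator.java:

    public class TestConfigUtil {

        // Builds the testConfig suffix as ${os}.${ws}.${arch}_${VM}.
        // For example, computeTestConfig("linux", "gtk", "x86_64", "8.0")
        // returns "linux.gtk.x86_64_8.0".
        public static String computeTestConfig(String os, String ws, String arch, String vm) {
            return os + "." + ws + "." + arch + "_" + vm;
        }
    }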
I think it is worth straightening this out before M7, as it will make other fixes/changes much easier.
Another aspect of this issue is the web pages where we display console logs. We currently do that with a PHP function such as displayList(<some directory name>) ... so whatever the "platform" is needs to be "passed to" the test PHP page as well, so it can display the correct directories.
The display of the "logs" is done in logs.php ... it currently looks like this (near the end of the file):

    listDetailedLogs("linux.gtk.x86_6.0");
    listDetailedLogs("win32.win32.x86_7.0");
    listDetailedLogs("macosx.cocoa.x86_5.0");

(One possible way to avoid that hard-coding is sketched below.)
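A hedged sketch of one approach, assuming the generator knows the configs at generation time; the file name and approach are hypothetical, not what was actually implemented. The Java side would write the list of configs to a small file that logs.php reads and loops over, instead of hard-coding one call per platform:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;

    public class WriteTestConfigs {
        public static void main(String[] args) throws IOException {
            // Hypothetical: the same strings currently hard-coded in logs.php.
            List<String> configs = Arrays.asList(
                    "linux.gtk.x86_6.0",
                    "win32.win32.x86_7.0",
                    "macosx.cocoa.x86_5.0");
            // Write one config per line; logs.php could then read this file
            // and call listDetailedLogs() in a loop.
            Files.write(Paths.get("testConfigs.txt"), configs);
        }
    }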
Just so I don't forget, see also bug 431047 comment 5 for the "earliest" point we need to know the "test platform" (which is before the framework starts).
I see a lot of this is rediscovering what I discovered two years ago, documented in bug 390986. I'll use this bug specifically for fixing the ant task to accept a "list of test platforms" as input (see the sketch below), and the original for all the remaining issues. And, as I think about it, there was another bug where the "generator" assumed all tests were available at the same time, or else the last one to run would "wipe out" previous reports (such as, if they'd been deleted or lost due to a disk crash) ... plus, the way we do things now, when we go out to "generate a report", we know exactly which machine's results we are looking for. So, while a much larger change ... I will consider passing in just one "test platform" string, and generating "the column data" needed for that machine. Plus, while looking for that bug, I ran across bug 182955, which is old but still valid, and which I think is saying, in part, that some part of the "testPlatform" string should be user settable, even if other parts of it are better "computed" (from the other parameters for os, ws, arch, and the (main) VM running the tests).
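A hedged sketch of what "accept a list of test platforms as input" could look like for the ant task, assuming a comma-separated attribute. The class and setter names are illustrative only; the real task lives in TestResultsGenerator.java:

    import java.util.ArrayList;
    import java.util.List;

    public class GenerateIndexTask {
        private final List<String> expectedConfigs = new ArrayList<>();

        // Ant maps an attribute such as
        //   expectedConfigs="linux.gtk.x86_6.0,win32.win32.x86_7.0"
        // to this setter; the order given in the attribute is preserved.
        public void setExpectedConfigs(String commaSeparated) {
            for (String config : commaSeparated.split(",")) {
                String trimmed = config.trim();
                if (!trimmed.isEmpty()) {
                    expectedConfigs.add(trimmed);
                }
            }
        }
    }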
I've not really investigated what this will take, so I still consider it high priority, but will un-target it to be clear it may not be done in time for Luna.
This was fixed in M6 or M7. The "expectedConfigs" can still be passed into the "generate index" task, but it also writes back "found configs", which can be used to fine-tune test results pages, "file not found" handling, etc. The order "passed in" to the task determines the order they appear on the test summary page (previously it had to be alphabetical).

There is a potential limitation: there is no "minimum" or "maximum" checked. While some column widths are correctly adjusted based on the number of "columns" expected (where a column is a testConfig), there would be some practical limits on squeezing in more than 4 or 6. Plus, if there is only one, for example, the summary table may look funny. May find occasion to improve this in the future.

A small bug was introduced: a dash ('-') is printed in a table cell for a missing file only if another test config has that file in its results. The "missing file" is still listed below the summary table; it is just that "results" drive the table, not "expected". I do not think this is worth opening a bug, since in practice it almost never happens -- and when it does, it is typically temporary until some more results are obtained, and then the dash is printed so the table stays balanced.

This general scheme allowed two big improvements. First, there is no longer a long list of "missing files" listed when it is simply waiting for results from that platform. (I think there may be a bug for that?) Second, if there are "test results found" but they are not listed in the "expected files" in the testManifest.xml file, then those files are listed in a separate table, so the testManifest.xml file can be corrected. This is expected to happen when developers add new test suites. (The expected-vs-found comparison is sketched below.)
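For the record, a minimal sketch of that expected-vs-found comparison; the names are hypothetical and this shows the general idea, not the actual implementation:

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    public class ResultFileCheck {

        // Result files that were found but are not listed in
        // testManifest.xml; these would go in the separate table so the
        // manifest can be corrected.
        public static Set<String> unexpectedResults(List<String> expectedFiles, List<String> foundFiles) {
            Set<String> unexpected = new LinkedHashSet<>(foundFiles);
            unexpected.removeAll(expectedFiles);
            return unexpected;
        }

        // Files that were expected but not found; these are listed below
        // the summary table rather than driving it.
        public static Set<String> missingResults(List<String> expectedFiles, List<String> foundFiles) {
            Set<String> missing = new LinkedHashSet<>(expectedFiles);
            missing.removeAll(foundFiles);
            return missing;
        }
    }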