On the shared Hudson instance, we have a special job that is triggered after a test completes; that job runs only on a slave that has access to /shared/eclipse. This allows it to write data to a small file in /shared/eclipse/sdk/testjobdata. A cron job then checks every 10 minutes to see whether there are any new data sets to pull from Hudson, summarize, and upload to the download server.

This scheme doesn't work well on the performance machine, because the "collect" job can sometimes land in the queue behind several other performance test jobs, so it can be a day before the collect job runs. (The performance machine has only one executor and is not part of a master/slave configuration.) I tried using two executors and locks so that one collect job could always run quickly, but that does not work well, since locks do not seem to behave reliably on that Hudson instance (bug 454736).

On that one Linux instance, we do not really need the complication of a separate job (we were just trying to follow a consistent pattern), since the machine itself has access to /shared/eclipse; as a final build step, each job can simply write the file to /shared/eclipse/sdk/testjobdata itself.
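As a rough illustration, the 10-minute "collect" pass could look like the sketch below. Only the /shared/eclipse/sdk/testjobdata path comes from the notes above; the function name and the "summarize + upload" placeholder are hypothetical stand-ins for the actual Eclipse scripts.

```shell
#!/bin/sh
# Hypothetical sketch of the cron-driven collect step: scan the shared
# data directory for files left behind by finished test jobs, process
# each one, and remove it so the next run does not pick it up again.
collect_testjobdata() {
    dir="${1:-/shared/eclipse/sdk/testjobdata}"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue      # glob matched nothing: directory empty
        echo "collecting $f"         # placeholder for summarize + upload
        rm -f "$f"                   # mark this data set as consumed
    done
}
```

A crontab entry such as `*/10 * * * * /path/to/collect.sh` would give the 10-minute cadence described above; deleting each file after processing is what makes the scan idempotent between runs.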
Fixed last week.