| Summary: | Release Train Cascade / RSS Notification & Response | ||
|---|---|---|---|
| Product: | Community | Reporter: | Nick Boldt <nboldt> |
| Component: | Cross-Project | Assignee: | Nick Boldt <nboldt> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | ||
| Priority: | P3 | CC: | davidms, david_williams, hkyleung, jeffliu, kim.moir, naci.dai, ppshah, richard.gronback, sonia_dimitrov, steven.wasleski, thatnitind, tlroche |
| Version: | unspecified | ||
| Target Milestone: | --- | ||
| Hardware: | PC | ||
| OS: | Windows XP | ||
| Whiteboard: | |||
| Bug Depends on: | 85485 | ||
| Bug Blocks: | 124172, 141152 | ||
| Attachments: | |||
|
Description
Nick Boldt
Created attachment 30194 [details]
Current XSD for the RSS build feed
Dave has suggested a number of improvements to the XML (and its backing XSD,
which is attached), including:
+ fix XML so that another namespace is used for extensions added to the
<summary> (an eclipse one, perhaps?) instead of implying that we're extending
the atom namespace by only defining one namespace in the XML/XSD
+ add new content into <summary> or a new tag, eg. <bundles> (?) in order to
provide a list of the available drivers for a given build, so downstream
projects can pick/choose the one(s) they need. In most cases, the need would be
most likely for the Runtime or SDK bundle, but I could see a case where
downstream projects might want to use the mechanism to collect and run the
JUnit tests associated w/ a given project. For the case of the platform, this
would let downstream builds easily pick the right driver for their build
environment, rather than just assuming it's linux/gtk or failing that, just
deferring to whatever the upstream project used.
Of course, in order to make this work with mirror support, I suppose these
ought to be only paths, not URLs, so that for whatever mirror you get the feed
from, you could then also get that mirror's copy of the zip or tar.gz file you
need.
<bundles>
<bundle type="SDK">/tools/emf/downloads/drops/...SDK...zip</bundle>
<bundle type="runtime">...</bundle>
...
</bundles>
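To illustrate the mirror-friendly idea above (path-only values in the feed, mirror chosen by the consumer), here is a minimal sketch; the mirror URL and bundle path below are hypothetical examples, not values from the actual feed.

```shell
#!/bin/sh
# Sketch: compose a download URL from a consumer-chosen mirror plus the
# path-only <bundle> value taken from the feed. Both values are examples.
MIRROR="http://download.eclipse.org"                    # or any mirror you prefer
BUNDLE_PATH="/tools/emf/downloads/drops/emf-SDK.zip"    # hypothetical feed value
URL="${MIRROR}${BUNDLE_PATH}"
echo "$URL"
```

Because the feed carries only the path, each consumer resolves it against whatever mirror served them the feed, so no entry ever hard-codes a single download host.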
First let me say that, judging from the content of this bug and related bug 85485, it appears we have a good start on this. Looking at this from a Callisto point of view, I see a few issues to be tackled. BTW, by "Callisto point of view" I mean: how can we leverage this work to improve the odds that we produce solid Callisto milestones as defined at http://www.eclipse.org/projects/callisto.php, including the common GM date at the end of June. The issues:

1) The dependency information in the RSS feed entries is specific to the platform on which the project built. For example, the EMF entries contain references to the platform SDK Linux drop. A project that depends on EMF may build on Windows and will need the platform Windows drop. I don't think this is a big issue, since I believe it is relatively straightforward to derive the Windows download name from the Linux one for the platform. However, I am not sure all the projects with platform-specific drops follow the same naming conventions, so a team that depends on more than one of these would need to be aware of that; or better yet, this exercise may drive better consistency. Alternatively, could the "bundles" mechanism (should that be "drops" or "drivers" or some such, since "bundle" already means something else in Eclipse?) be extended to include platform info for each "bundle"? I think that would solve this problem too.

2) I think we need to add Callisto milestone information to this. That is, in support of which Callisto milestone (or RC) is this build a part? For example, the platform is currently working on M5, which is a part of Callisto M5. Right now, RSS entries from the platform should indicate that they are part of Callisto M5. Why, you may ask? When the platform declares their M5, their work on Callisto M5 will be complete; however, the other Callisto projects may not complete their Callisto M5 work for a couple more weeks.
The downstream projects will want to remain on platform M5 (and all their other Callisto M5 targeted dependencies) until they are done with their Callisto M5 targeted work. They will not want to pick up the Callisto RC0 targeted work that the platform and other dependencies will be publishing. Once the downstream project completes their Callisto M5 work, they can move on to their Callisto RC0 work and start picking up the corresponding dependencies via the feeds. This tagging will need to take into account that there will be coordinated releases after Callisto too (functional and maintenance).

3) How should projects with multiple peer dependencies handle kicking off their builds, and can support for the proper solution be added to this code? For example, VE depends on both EMF and GEF. In this automated world, the EMF and GEF builds would be running in parallel after the platform publishes. The VE build needs to know that it has the proper builds of both of these. I don't think they would be able to trigger off the completion of just one or the other. I suppose that the VE polling of the feeds could look to see if it has a relatively new build from each that depends on the same platform build. That would probably work, but it feels a bit kludgy to me since it will be hard to define exactly what "relatively new" means.

Thanks for your input, Steve. My thoughts on your thoughts:

1. Agreed: for the dependencies to be useful, the feed should include all platform-specific versions that could be used under the release. Of course, in some cases, the dependency might have a single version for all platforms (EMF, for example).

2. Yup, I think this makes sense, too.

3. My first idea on this was that projects should be able to define a set of dependencies, with a type for each. For instance, the appearance of a new build of a "kick" dependency (which would typically be the platform) would make the system try to kick off a new build.
A "hold" dependency would prevent it from doing the build until a new one appeared. I would define "new" as "newer than the active kick dependency." I could also imagine specifying whether a build with test failures or pending test results would actually satisfy the dependency or not. An "ignore" dependency would be used in the build, but wouldn't figure into deciding whether or not to kick it off at all.

From what I understand, a component is going to get "built" when its upstream component changes, but when the component itself has not changed (if it had changed, it would be built using the existing mechanisms). So, the only interesting outcome of rebuilding a component that has already been built is to see if dependencies caused either A) compile errors, or B) test failures. In either case, it doesn't make sense to call this a "build"; it's really just a test. If the result is "success", the actual binaries should be the same as the last time the component ran its normal build. Am I misunderstanding the purpose?

Further discussion in bug 124172.

Will the XSD/documentation for what is proposed to be in the RSS feed be updated? It doesn't currently have "type of build", right? E.g. milestone, I-build, etc. Will that be added? (I get a little lost in all the threads of numbered discussions.) Also, it seems to me a critical part of this scheme is the "cron job" that looks for and reads the RSS feed. Anyone have one to contribute? Or is that obvious to others? Lastly, I'm a bit confused (or concerned) about the "dependencies" being in this feed. It seems redundant with the information that should already be in the plugins and features, so I'd hate to see a "middle man" data structure here. (And apologies if I've misunderstood what it means or its purpose.)

I'm going to assume no one has responded to comment 4 because it's pretty much accurate.
So, rather than uploading tons of identical builds to eclipse.org and its mirrors, and misleading downloaders into downloading identical content, wouldn't it make more sense to go back to the last build and update its list of dependencies? For example, the GEF M4 build could say: GEF M4 requires platform M4. This build has also been tested with xxxx integration build... etc., link to post-M4 platform build, etc.

comment 7 makes an interesting point. If the purpose of Callisto / stack build automation is to ensure that the latest upstream deps still work for the latest downstream deps, then yes, updating the feed entries with "Build X (eg., UML2) works with Upstream Build Y+1 instead of Upstream Build Y (eg., EMF)" would certainly cause the downstream build cascade to continue w/o the need for extra unneeded uploads... assuming, that is, that there weren't any bugs fixed or other code changed AS WELL between Y and Y+1. If the builds are weekly, it's IMHO a safe bet to assume that code's changed in Y+1, so it needs a new published build, not just an update to its feed entry.

re: comment 6 ... type of build is implied in the URL of the build, provided that builds are named w/ I, M, N, R, or S somewhere in their name. If not, then yes, I could add a field for that too. (The schema will be updated once I have changes to apply to it.) I have a few examples of cron entries and the actual shell scripts involved, but they're not really ready for public consumption yet.
For the EMF->UML2 autobuild stack, which was turned on Wed night in order to cause EMF to build at midnight and UML2 a couple hours later, there are currently three files:

* start_cron.sh (builds EMF on a schedule (00:00h) to start the process)
* promote_emf_feed.sh (promotes EMF on a schedule (00:49h) to automate promotion & feed update)
* build_uml2_cron.sh (every hour from 1am to 4am, checks the EMF feed for changes, compares to local copy, and if different, kicks a build & updates the cached local copy)

Here's a sample of the crontab entries:

# www-data crontab: an I build every Thu (4) at 00:00h
0 0 * * 4 $HOME/emf-build/scripts/start_cron.sh -buildType I -tagBuild true -branchCVS HEAD -noperf -runJDK14Tests -runJDK50Tests -runOldTests 1> $HOME/cron_logs/start_cron.sh.I22.log.txt 2>&1

# runs as nickb: promote the build when done building
49 0 * * 4 /home/nickb/tmp/promote_emf_feed.sh -announce -email codeslave@ca.ibm.com > $HOME/cron_logs/promote_emf_feed.sh.log.txt

# Runs as www-data, the apache user: listen for new EMF build
0 1,2,3,4 * * 4 $HOME/emf-build/scripts/build_uml2_cron.sh -f $HOME/emf-build/scripts/promoteToEclipse.uml2.properties 1> $HOME/cron_logs/start_cron.sh.uml2.I.log.txt 2>&1

Two of these are in CVS, if you want to read 'em: http://dev.eclipse.org/viewcvs/indextools.cgi/emf-home/emf-build/scripts/

The last one, promote_emf_feed.sh, is just a wrapper for promoteToEclipse.sh which simplifies its use and assumes a few values to pass to that script. As it's changing soon, it's not in CVS yet. All three need to be renamed, optimized, and tweaked to support listening for multiple feeds (eg., EMFT builds w/ multiple dependencies), and also need to be changed to run 24/7 but using sleep() instead of simply running on a schedule. Ideally, there'd be just two scripts, usable by any project in the stack - one to listen + build, and one to listen + promote (or just listen + rss-feed-update).
This way it can be BOTH purely keyed to RSS notification or on a schedule, and can respond to BOTH scheduled and impromptu builds. I'd also like to implement a naming convention such that all builds in the stack use the same timestamp instead of the ACTUAL time they're built, to make them easier to tie together. The first build (in this case, EMF) would define the timestamp of the build as yyyymmddhhMM, and all the rest would follow suit. Once we're listening to Eclipse builds, we'd adopt their builds' timestamps instead, and pass that on down the stack.

But builds are *not* weekly. The platform itself builds nightly, and more often when things go wrong. Let's assume you have a single upstream component, and it builds as frequently as you do. This would mean that half the time you are building because the upstream component changed, and half the time you changed (normal build process). So, 50% waste/dupes. For leaf components, I would guess that 90% of their builds would be automatically initiated due to upstream changes. By definition, all cascaded builds are recreations of a previous build, and would be dupes. It would be risky for a component like WTP to say "we're going to release a bunch of fixes into CVS, but we aren't going to try to build until an upstream component produces a build we've never looked at".

> If the builds are weekly, it's
> IMHO a safe bet to assume that code's changed in Y+1 so it needs a new
> published build, not just an update to its feed entry.
OK, maybe I don't understand something. Which builds are weekly? I thought the purpose was to trigger a build at any time that any dependency promotes a new build.
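The trigger mechanism being discussed (poll the upstream feed, kick a build only when it changes) can be sketched roughly as follows. This is not the actual build_uml2_cron.sh; the file names are stand-ins and kick_build is a placeholder for the real start script.

```shell
#!/bin/sh
# Sketch of the listen-and-respond idea: compare the just-fetched copy of the
# upstream feed against a cached copy; if they differ (or there is no cache
# yet), kick a downstream build and refresh the cache.
feed_changed() {
  # success (0) when the two files differ or the cache doesn't exist yet
  ! cmp -s "$1" "$2" 2>/dev/null
}

# Simulated fetch: in real use, /tmp/feed.new would come from wget'ing the feed.
printf '<feed>build-200602020000</feed>\n' > /tmp/feed.new
printf '<feed>build-200601260000</feed>\n' > /tmp/feed.cached

if feed_changed /tmp/feed.new /tmp/feed.cached; then
  RESULT="kick build"               # the real script would run start.sh here
  cp /tmp/feed.new /tmp/feed.cached # remember what we already responded to
else
  RESULT="no change"
fi
echo "$RESULT"
```

The cached copy is what prevents the cron from kicking duplicate builds on every poll: once a feed change has been acted on, it is saved, and subsequent polls see no difference until the upstream project publishes again.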
I am commenting on this build cascade feature from the point of view of TPTP. The TPTP project depends on the platform, EMF/XSD, WTP and BIRT. TPTP publishes a build to eclipse daily. Since the TPTP project is very active, and there are always changes in TPTP that need to be integrated into the build, we won't wait for our dependencies to have the next good build before kicking off a build. So, the scenario where project B depends on project A, and if there is no new project A build in the feed, no new project B build is required, does not apply to our project. However, it is desirable to always build with the latest good builds of our dependencies. From my experience, TPTP needs to build with dependencies that were built "reasonably recently", such as the last stable build, or last week's development build dropped to IES. I have not received the requirement to always build with last night's EMF build, for example. The only time when a true build cascade happens is when we are about to release and we want to build with the release candidates of our dependencies. Currently, I have a file in our build system to specify the versions and URLs of our dependencies. I like to view this build cascade feature as a mechanism of letting other projects know which is your latest good build, as opposed to a real-time notification that will trigger builds in another project. The mechanism can be as simple as publishing an XML config file that contains information like the URL for downloading the latest build that is suitable for use by downstream products.

Randy, I wouldn't have thought that every build should cause a cascade. That does seem wasteful. I would have thought that some regular scheduled builds (e.g. weekly integration builds) would cascade, the release build, of course, and perhaps any extra builds that might be required right before it.

So, the purpose of the cascade mechanism is to help me modify the following 2 lines of text in the build.cfg file for GEF?
eclipseURL=http://download.eclipse.org/downloads/drops/S-3.2M3-200511021600/
eclipseBuildID=3.2M3

At one point, I thought we were talking about finding errors caused by changes in dependencies "early", where early is defined as prior to the time when I'm ready to declare a new/interesting build for the community.

re: comment 13 - there are multiple purposes. 1) [PUSH] to advertise via an RSS feed (or XML file, or properties file, whatever's most convenient) that a new build is available. 2) [PULL] to be able to respond to news of upstream dependency changes by listening to the above feed. Whether that means for your specific project that you'll build a new N build every night that EMF or UML2 or EMFT does so, or just respond to our weekly I/M/S/R build feed updates, depends on your project's specific needs. I don't want this to be too stringent, restrictive, or arbitrary. My intent is to build scripts/crons that will allow projects to publish feeds and listen to feeds. How they choose to respond to that data is up to them. For the case of EMF-UML2-EMFT, the intent is to have automated N and I builds, and to even synch up the versions so that it's easier to see who depends on which version; eg., if the cascade starts with EMF at midnight doing its 200602020000 build, I'd like to have all the components who want to autobuild based on that new driver also be numbered 200602020000, even if they start an hour or two later. Of course this would be built in as an option in the shell script so that the datestamp would either be the actual server time at the start of the build, or the "inherited" one from the RSS feed. Note also that as far as N builds go, EMF, UML2 and EMFT are not currently PUBLISHING their N builds, so while it would be possible to respond to the N build feed INTERNALLY, projects which build outside IBM will not (as yet) be able to listen and respond to N build feed changes for the above projects. For those projects, there's the I/M/S/R feeds.
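The shared-timestamp option described above might look something like this in a build script. The INHERITED value simulates a stamp read from the upstream feed; in the first build of a cascade it would be empty and the actual server time would be used instead.

```shell
#!/bin/sh
# Sketch of the inherited-timestamp idea: cascade builds reuse the upstream
# build's yyyymmddhhMM stamp; only the first build in the cascade generates one.
INHERITED="200602020000"   # hypothetical stamp taken from the upstream RSS feed
if [ -n "$INHERITED" ]; then
  STAMP="$INHERITED"            # cascade build: adopt the upstream stamp
else
  STAMP="$(date +%Y%m%d%H%M)"   # first build in the cascade: use server time
fi
echo "$STAMP"
```

With this in place, every driver produced by one cascade run carries the same build ID, so it's immediately obvious which downstream builds were produced against which upstream build.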
oh, one other clarification. Just because you're listening to a feed (or feeds) of upstream components to decide that it's time to autobuild your project doesn't mean you can't initiate your own builds too. And, if you don't publish those builds, no one downstream will know. So if you're thrashing away one week and kicking N builds every 2 hrs to resolve some problem, or because your project is VERY active and changes are coming in frequently, you won't be causing your downstream builds to thrash away with umpteen unnecessary N builds too - unless you choose to, by publishing those builds (into the feed and to a location from which they can be downloaded). Once again, the purpose here is to simplify getting builds done where desired/possible, not to create extra unneeded builds "just 'cuz."

> Just because you're listening to a feed (or feeds) of upstream components to
> decide that it's time to autobuild your project, doesn't mean you can't
> initiate your own builds too.
But, if you auto-build your project, you are creating duplicates. I think the auto-building and auto-testing are great, but instead of publishing the resulting build, the original build should just be updated to indicate that all tests passed using more recent versions of the dependencies.
re: comment 16 - I agree. So then an option on the listener-response script would be something like -updateDepsOnly, so that the feed would be changed w/ the new deps, not the new build. This would also require updating that build's index.html. Enabling this option would be a way of telling the cron that you're not doing anything at the moment that's build-worthy except testing upstream dependencies. Without this option, it would be assumed that every new build would produce new feed data (and optionally, a new promoted build, too). If this option is turned on, should the autobuilt build be deleted when the feed's updated, since you're not using it for anything except a build test? Or should that too be another optional flag, -deleteBuildWhenDone, so that if you wanted to look at it, you could do so?

The discussion in here is very interesting. I think I see four key themes emerging:

1) Purpose of implementation from comment 14: "I don't want this to be too stringent, restrictive, or arbitrary. My intent is to build scripts/crons that will allow projects to publish feeds and listen to feeds. How they choose to respond to that data is up to them."
2) Interesting ways the development teams can use this for their own purposes; see comment 14 (right after the above quote), the second paragraph of comment 4, comment 15, and probably some others.
3) Callisto needs; see the second paragraph of comment 2.
4) Implementation, discussed at various points throughout the bug (like comment 16 and comment 17) and linked to the first three themes.

These are all important. I think we all agree on theme 1, Nick has a good start on theme 4, and theme 2 will by its nature continue to evolve over time. What we need to reach agreement on is how to accomplish theme 3. This is the bit that has to work across the projects. If we figure this out, the implementation can be completed (at least for now). I will stop this comment at this point without diving into how we can accomplish theme 3.
I would first like to make sure we have consensus that this summary of where we are at is correct. Then we can work through the details.

Great summary, Steve. Thanks. +1.

I was talking to Nick about this... What would be nice is if we can reuse this infrastructure for other tasks within the same project: for example, unit testing, performance testing, API scans, etc. The idea is that your feed will look something like this:

<tests type="performance">
  <results platform="Linux">PASS</results>
</tests>

When a Windows performance machine reads this feed, it will say, oh, performance for this build has yet to be run on Windows, so I better kick it off now. After the performance tests complete, it will upload the performance results and update the feed:

<tests type="performance">
  <results platform="Linux">PASS</results>
  <results platform="Windows">PASS</results>
</tests>

So we are not only using this for building upstream projects, but also using it to manage tasks within the same project.

Created attachment 35321 [details]
current code for reading/viewing feeds, listening to & responding to feeds, etc.

(This is a copy of the information I sent to Jeff, in case anyone else wants to benefit.)

0. Source: In the attached zip, you'll find:
a) dev.eclipse.org_emf-home_emf-build_scripts.zip (cvs extract)
b) org.eclipse.emf.rss.zip (complete source for RSS viewer plugin, feed updater, etc.)

1. To update a feed after a build is done (at a set time), in the attached zip you'll find:
a) promote_emf_feed.sh (weekly I builds)
b) promote_emf_N_feed.sh (nightly N builds)

which are cron'd like this:

49 0 * * 0,1,2,3,5,6 /home/nickb/tmp/promote_emf_N_feed.sh > $HOME/cron_logs/promote_emf_N_feed.sh.log.txt
20 1 * * 4 /home/nickb/tmp/promote_emf_feed.sh -announce -email codeslave@ca.ibm.com > $HOME/cron_logs/promote_emf_feed.sh.log.txt

which are themselves just wrappers for emf-home/emf-build/scripts/promoteToEclipse.sh (because crontab entries die if you pass too many params).

2. To watch a feed & respond by kicking a build: emf-home/emf-build/scripts/build_uml2_cron.sh, which is cron'd like this:

#UML2 nightly builds based on EMF ones (RSS build cascade)
0 1,2,3,4 * * 4 $HOME/emf-build/scripts/build_uml2_cron.sh -f $HOME/emf-build/scripts/promoteToEclipse.uml2.properties 1> $HOME/cron_logs/start_cron.sh.uml2.I.log.txt 2>&1
0 1,2,3,4 * * 0,1,2,3,5,6 $HOME/emf-build/scripts/build_uml2_cron.sh -f $HOME/emf-build/scripts/promoteToEclipse.uml2.N.builds.properties 1> $HOME/cron_logs/start_cron.sh.uml2.N.log.txt 2>&1

which then runs uml2-home/uml2-build/scripts/start.sh (see code in CVS in /cvsroot/tools) to produce the next UML2 downstream build.

I've posted some sample feeds here, prior to completing a new schema and ant task to generate said feed XML: http://wiki.eclipse.org/index.php/Eclipse_Build_Available_RSS_Feeds

If you have comments or feedback for those feeds, or want to suggest different things to include, please post comments to the Wiki.

Moving to Callisto.

Created attachment 39003 [details]
org.eclipse.build.tools\src\org.eclipse.releng.generators\RSSFeed*.java
Includes four Ant tasks for:
1. CreateFeed (create an XML document containing an Atom 1.0 RSS <feed/>)
2. AddEntry (add an <entry/> to an existing <feed/>, and create the <feed/> if necessary)
3. GetProperty (find & return the value of a text field or attribute using XPath syntax)
4. UpdateEntry (change a text field or attribute value in an existing feed, using XPath syntax)
Additionally, includes an ant script for running all four of the above tasks, plus some XPath query examples for both search and replacement, a build script for compiling the feedTools.jar archive, and sample properties files for running the ant script using Eclipse, EMF, and UML2 configuration values.
Coming soon, an updated XSD to back the Atom 1.0 RSS XML files generated.
Please review and comment. Open secondary bugs / feature requests if necessary.
Thanks!
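As a rough illustration of the kind of XPath query the GetProperty and UpdateEntry tasks perform (this is not the Ant tasks themselves), libxml2's xmllint can run the same sort of expression from the command line. The feed file and its contents below are tiny hypothetical stand-ins; a real Atom feed declares a namespace that the XPath expression would need to account for.

```shell
#!/bin/sh
# Sketch: locate a text value in a feed document via XPath, the same style of
# query the GetProperty task uses. The feed here is a minimal toy example.
cat > /tmp/feed.xml <<'EOF'
<feed>
  <entry>
    <title>EMF build 200602020000</title>
  </entry>
</feed>
EOF
TITLE="$(xmllint --xpath 'string(/feed/entry/title)' /tmp/feed.xml)"
echo "$TITLE"
```

This assumes xmllint is installed; the point is only to show the shape of the query, e.g. that `string(/feed/entry/title)` pulls a single text value out of the document the way GetProperty returns a field's value.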
Added code to releng_test branch of org.eclipse.releng.basebuilder. I'll test incorporating it into platform builds.

Nick, would it be possible to use 1.4 libraries for parsing etc. instead of the 1.5 ones you are using? The issue is that today we are using a 1.4 vm to compile that project, and it is rather awkward to use a 1.5 vm just to compile the RSS portion.

Nick and I talked on IM yesterday and the conclusion of our conversation was:
- keep using 1.5 libraries for RSS code
- platform will run the RSS generator after the main build exits, in a separate process using a 1.5 vm
- when the build vm changes, we will run the RSS generator using the same vm as the main process.

Created attachment 39855 [details]
org.eclipse.build.tools\src\org.eclipse.releng.*\RSSFeed*.java
* misc fixes/tweaks, strings externalized into messages.properties
* new shell & ant scripts (jar/zip builders and ant task runners)
* new services.RSSFeedPublisherTask for either CVS or SCP feed publishing (or both)
* shell scripts for running feedManipulation.xml with both org.apache.ant.launcher.Launcher (commandline) and org.eclipse.ant.internal.ui.antsupport.InternalAntRunner (no Eclipse)
- solves compatibility problem between:
com.sun.org.apache.xerces (Sun JDK 1.5, rt.jar) and
org.apache.xerces (Ant 1.6.5, xercesImpl.jar)
- since jars are built with JDK 1.5, must exclude xercesImpl.jar from classpath)
Created attachment 39861 [details]
org.eclipse.build.tools\src\org.eclipse.releng.*\RSSFeed*.java
bug in builder script; previous zip contained duplicate content; buildFeedTools*.xml fixed; other contents the same
changes released to releng_test branch of basebuilder

Created attachment 41813 [details]
org.eclipse.build.tools\src\org.eclipse.releng.*\RSSFeed*.java
This update includes code for listening to and responding to feeds, as well as an updated XSD for the feeds.
Created attachment 41816 [details]
org.eclipse.build.tools.emf.feed.validator
Ant task / shell script to validate a given XML document (build feed) against the latest available XSD (build feed schema).
Kept separate from the previous attachment due to the fact that this depends on EMF (emf.common.jar, emf.ecore.jar, emf.ecore.xmi.jar, xsd.jar).
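As a lighter-weight aside (my suggestion, not part of the attached tool): where the EMF jars aren't handy, libxml2's xmllint can also validate an XML document against an XSD. The schema and document below are tiny stand-ins, not the real build-feed schema.

```shell
#!/bin/sh
# Sketch: schema-validate a feed document with xmllint instead of the
# EMF-based validator. Both files here are minimal hypothetical examples.
cat > /tmp/feed.xsd <<'EOF'
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="feed" type="xs:string"/>
</xs:schema>
EOF
printf '<feed>build-200602020000</feed>\n' > /tmp/feed.xml

if xmllint --noout --schema /tmp/feed.xsd /tmp/feed.xml 2>/dev/null; then
  RESULT=valid
else
  RESULT=invalid
fi
echo "$RESULT"
```

The EMF-based validator remains the authoritative check for the real schema; this is just a quick smoke test that requires nothing beyond libxml2.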
Nick, regarding comment #31, I released this into the releng_test branch of basebuilder. Sonia refactored it so it's in its own package, because this was easier for us to export our buildTools.jar, which only requires 1.4, etc. Hope this is okay. The tag of the project that we're using in the build is v20060518a. If it would be easier for you to commit code directly to basebuilder instead of providing patches, let me know.

I put a test RSS feed here for RC4: http://download.eclipse.org/downloads/builds-eclipse.xml

Other notes... I'm not sure if this is the correct id: <id>http://www.eclipse.org/news/builds.xml</id>

I noticed that the scripts don't really generate much if you comment out the test results bits in the properties file. The reason I mention this is... our build page currently needs to be seriously refactored to generate an XML page of test results which can then be parsed via PHP. A summer project. Anyways, as a result, there currently isn't really support in our builder to identify in a properties file which drops have test failures associated with them. To add to that issue, we haven't had a build in months without a test failure. There are always test failures due to networking issues, intermittent timing issues, or problems with the tests themselves. As well, we need to update our builder to generate an event when the performance test results are available. Today the PHP looks for a file and displays the results if they are available. If it is useful for Callisto, I can update our feed after each subsequent release candidate. Let me know.

(In reply to comment #33)
> v20060518a. If it would be easier for you to commit code directly to
> basebuilder instead of providing patches, let me know.

Sure! That would certainly be easier.

> I'm not sure if this is the correct id
> <id>http://www.eclipse.org/news/builds.xml</id>

Well, probably not, since that URL goes to a 404.
I'd recommend using an ID that's unique to the feed, and actually exists, like http://download.eclipse.org/downloads/builds-eclipse.xml

> Our build page currently needs to be seriously refactored to generate an xml
> page of test results which can then be parsed via php. A summer project.
> Anyways, as a result, there currently isn't really support in our builder to
> identify in a properties file which drops have test failures associated with
> them. To add to that issue, we haven't had a build in months without a test
> failure. There are always test failures due to networking issues, intermittent
> timing issues or problems with the test themselves.

It might be valuable to merge these two goals into one - that is, the index.php in each build's folder could simply be an XSLT wrapper to turn index.xml into HTML. index.xml would in fact be the RSS feed, but containing ONLY the single build, rather than multiple entries. Thus there'd be a central feed for all builds, the same XML for the builds' index.xml pages, and in both cases the same XSLT could be used to prettyprint the feed data. The only wrinkle here is that using client- or server-side XSLT means the server (the mirror) or the browser must support doing so. If the XML is turned into HTML via an Ant script (like PDE does for test results) that runs after every update to the feed, we would get valid static HTML instead of dynamically created stuff, and eliminate cross-mirror and cross-browser issues.

> As well, we need to update our builder to generate an event when the
> performance test results are available. Today the php looks for a file and
> displays them if they are available.

You could have the same PHP that checks for a file also update the feed, provided that it only does the update once. It's probably cleaner to have the builder make the update after the files are rsync'd or something.

> If it is useful for Callisto, I can update our feed after each subsequent
> release candidate. Let me know.
Yes, please do. I'll do the same for EMF and beyond.

Created attachment 43014 [details]
org.eclipse.build.tools RSS tools
New schema, new generator code, improved documentation.
Created attachment 43015 [details]
org.eclipse.build.tools RSS Feed validator tool (requires EMF)
New schema, new validation tool, new scripts (for use with Windows too).
(In reply to comment #35) Changes released to releng_test branch of org.eclipse.releng.basebuilder/plugins/org.eclipse.build.tools. Kim - if you can implement the new feed (regenerate for RC4 thru RC6), I can implement watching that feed for EMF and we can finally put this bug to bed. ;-)

Nick, would it be possible to change JUnitTestResults so it accepts os,ws,arch,status instead of just os,ws,status? I notice that you have appended the arch to some of the ws values in the example properties file. We have many architectures associated with certain os-ws combinations, for instance:

linux-gtk-x86
linux-gtk-ppc
linux-gtk-x86_64
win32-win32-x86
win32-win32-x86_64
win32-win32-ia64

I have regenerated the feed for rc4-rc6: http://download.eclipse.org/eclipse/downloads/builds-eclipse.xml

(In reply to comment #39)
> I have regenerated the feed for rc4-rc6
> http://download.eclipse.org/eclipse/downloads/builds-eclipse.xml

Updated: schema, sample data, properties, generation code, messages, etc. in CVS. I will no longer post zips here. See http://wiki.eclipse.org/index.php/Eclipse_Build_Available_RSS_Feeds for updates. BTW, you can use newlines to make long properties more readable, like for tests and releases. See the updated updater.eclipse.properties for an example. Please regenerate the feed once more, if you'd like these properties to appear.

thanks, regenerated.

The next time the feed refreshes (or an N feed is available) I can verify my implementation works and close this bug. Proof of concept & implementation completed for UML2 -> EMF; same functional requirements for EMF -> Platform. Will implement shortly for EMFT stack -> EMF, etc. Closing.