We need to work with Bjorn (or whoever is running the ganymede-o-matic) to integrate p2 support. This needs to happen quickly, preferably by M5.
Changing the milestone. Figured we should do this for 3.4 ;-)
Bjorn, I'm looking at the Ganymede build scripts and will provide a patch that generates p2 metadata as part of the Ganymede build.

As part of that effort, we were wondering how often the Ganymede build moves up to a new version of Buckminster - how often is the builder updated? Is there a mechanism to run a test build?

Also, I was looking at the Ganymede build scripts and I couldn't find the portion that generates the packed jars and the site digest. Could you provide a pointer?

Pascal, further to our conversation: in each project's contribution to Ganymede (org.eclipse.ganymede.sitecontributions in /cvsroot/callisto), the contributing team specifies the categories for each of its features. There is also a template site.xml in the same project which specifies all the categories across the projects.
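For anyone who wants to inspect those contributions, a minimal sketch (the module name and repository path are as given above; the anonymous pserver access is an assumption):

  # Log in anonymously (empty password) and check out the site contributions
  $ cvs -d :pserver:anonymous@dev.eclipse.org:/cvsroot/callisto login
  $ cvs -d :pserver:anonymous@dev.eclipse.org:/cvsroot/callisto \
      checkout org.eclipse.ganymede.sitecontributions
  # The per-project category assignments and the template site.xml live here
  $ ls org.eclipse.ganymede.sitecontributions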
Bjorn, do you have any information for the questions in comment #2?
(In reply to comment #2)
> Bjorn, I'm looking at the Ganymede build scripts and will provide a patch
> that generates p2 metadata as part of the Ganymede build.

Thanks.

> As part of that effort, we were wondering how often the Ganymede build moves
> up to a new version of Buckminster - how often is the builder updated? Is
> there a mechanism to run a test build?

When it seems appropriate (i.e. bugs or new features), I move up to the new version of Buckminster. This is the same policy I used for the Europa-matic: I tried not to change anything unless there was a reason to.

There are two things that I haven't finished yet for Ganymatic: (a) the instructions on the web page for how to run your own version of Ganymatic, and (b) the pack200 integration. I have them both on scraps of paper by my computer, but haven't had a moment to type them in yet. Sorry.
(In reply to comment #4)
> (a) the instructions on the web page for how to run your own version of
> Ganymatic,

This is done, AFAIK. http://wiki.eclipse.org/Ganymede/Build#Creating_Ganymatic

Re: integrating metadata generation, here's a shell script [1] you can use to simplify things a little. More info here [2].

[1] http://dev.eclipse.org/viewcvs/index.cgi/releng-common/tools/scripts/buildUpdateSiteMetadata.sh?root=Modeling_Project&content-type=text%2Fplain&view=co
[2] http://wiki.eclipse.org/Equinox_p2_Metadata_Generator#Generating_metadata_from_an_update_manager_site

I'm willing to lend a hand here if need be.
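For reference, generating metadata from an existing update site boils down to a launcher invocation along these lines (a sketch based on the wiki page in [2]; the launcher jar version and the Ganymede site paths are assumptions):

  $ java -jar plugins/org.eclipse.equinox.launcher_*.jar \
      -application org.eclipse.equinox.p2.metadata.generator.EclipseGenerator \
      -updateSite /path/to/ganymede/site/ \
      -site file:/path/to/ganymede/site/site.xml \
      -metadataRepository file:/path/to/ganymede/site/ \
      -artifactRepository file:/path/to/ganymede/site/ \
      -metadataRepositoryName "Ganymede Update Site" \
      -artifactRepositoryName "Ganymede Artifacts" \
      -compress -reusePack200Files -noDefaultIUs \
      -vmargs -Xmx256m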
I'm getting this integrated into ganymatic today (Friday, 5/9) and have come up with a few questions.

One conceptual question: what advantage would you say the end user will get out of this? Faster updates? More reliable updates?

One technical question: upon reading the instructions at http://wiki.eclipse.org/Equinox_p2_Metadata_Generator#Generating_metadata_from_an_update_manager_site I am wondering what, exactly, -reusePack200Files implies (or requires). Does this determine whether it creates them or not? Or does it mean more, and actually require the pack200 files to exist at the time EclipseGenerator is run?

I ask because the basic ganymatic build takes 5 or 10 minutes to run (on build.eclipse.org), and the digest creation about a minute -- so we do that automatically each build. But packing all the jars takes about 2 hours, so we do not do that automatically each build. So I'm wondering: is it useful to run EclipseGenerator each build? Or only after the pack200 files are created? I'm assuming the latter for now, but thought I'd ask just to be sure.

In my first (local) test, EclipseGenerator took about 10 minutes, so it's feasible to run each build, if that's valid and useful without the pack200 files. Thanks,
Thanks David, this is really good news!

About the advantages:
- Improved user experience when first connecting to the site, because we don't have to generate metadata on the client's machine.
- Precise dependency resolution, causing the user to install only what is needed, but also preventing the installation of things that will not work.
- The possibility to browse the site content in the user's language.

About -reusePack200: specifying -reusePack200 does not require you to have pack200 files on the server, nor does it cause pack200 files to be created. When this option is specified, the generator looks for pack.gz files and, if available, creates an entry for each in the artifacts.jar.

It is necessary to run the generator on each build, otherwise new versions of features will not be visible.

I'm surprised by the time spent in the generation, because the equivalent operation for the ganymedeM5 site takes much less than this on my T42.
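For context, creating the pack.gz files the generator looks for is a separate step done with the JDK's pack200 tool; a minimal sketch (the jar name is hypothetical):

  # Normalize the jar first so it still verifies after unpacking, then pack it;
  # the generator will pick up the resulting .pack.gz via -reusePack200Files
  $ pack200 --repack org.example.feature_1.0.0.jar
  $ pack200 org.example.feature_1.0.0.jar.pack.gz org.example.feature_1.0.0.jar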
(In reply to comment #7)
> About -reusePack200
> Specifying -reusePack200 does ...

Added to wiki.

> I'm surprised by the time spent in the generation, because the equivalent
> operation for the ganymedeM5 site takes much less than this on my T42.

Me too. My latest runs took as long as 60 seconds for the EMF site (563 jars, 150M footprint), and as little as 11 seconds for EMFT (345 jars, 65M footprint). Ganymede *IS* much larger, however -- 3378 jars, 967M footprint.

$ find . -name "*.jar" | grep jar -c; du -shc $(find . -name "*.jar") | grep total
3378
967M total
There is now a version of artifacts.jar and content.jar on /releases/ganymede/staging. Can interested parties give it a quick test to confirm it's what you expect? I have not run pack200 yet ... but it sounds like that's not critical for an initial test.

BTW, this run (too) took over 600 seconds (613 to be exact). But in many of the local builds I was running, the time seemed to vary quite a bit ... from 3 minutes to 10 minutes. Not sure if that's just machine load, or if there's something variable about the process.
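One quick way to sanity-check the staging repo without a full install, as a sketch (the download.eclipse.org URL mapping for /releases/ganymede/staging is an assumption):

  # Confirm the repo indexes exist and are non-trivially sized
  $ curl -sI http://download.eclipse.org/releases/ganymede/staging/content.jar | grep -i Content-Length
  $ curl -sI http://download.eclipse.org/releases/ganymede/staging/artifacts.jar | grep -i Content-Length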
(In reply to comment #9)
> There is now a version of artifacts.jar and content.jar on
> /releases/ganymede/staging

w00t!

> BTW, this run (too) took over 600 seconds (613 to be exact).
> Not sure if that's just machine load

I'd guess load. My runs have been known to run faster on my ThinkPad or on our build servers than on build.eclipse.org ... and build.eclipse has boatloads of RAM and is quad-core. Maybe we need to get Denis to share some of his new power [1] with the Ganymatic?

[1] http://eclipsewebmaster.blogspot.com/2008/05/16-cpus-for-bugzilla-all-to-myself.html
There is a problem. The artifacts.jar is empty (or close to it, since it does not refer to any artifact); once filled, its size should be about 50K.
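A quick check for this condition, as a sketch (the element name in artifacts.xml is an assumption based on the artifact repository format):

  # Count artifact entries in the repository index; a count near zero means
  # the generator produced an empty artifact repository
  $ unzip -p artifacts.jar artifacts.xml | grep -c "<artifact "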
David, which version of the metadata generator bundle do you use? I just tried again, running the application straight out of the M7 drop on a local copy of the Ganymede M5 site, and the artifacts.jar got properly produced. I suspect that the builder you use is missing some ECF bundles (probably org.eclipse.ecf.provider.filetransfer or org.eclipse.ecf.filetransfer). Which ones do you have in your installation? Could you try again, deleting the artifacts.jar and the content.jar? Thx.

As for the performance, my run of the generator took 206 seconds, and the process is not really variable, since we simply open every jar and every feature to read the various manifests.
(In reply to comment #12)

If you are just now checking, you missed the "good ones" from yesterday :) The main difference is that the current set are based on running after the jars have been pack200'ed. I'll post after this one, with some data on that, but to answer your questions first:

> David, which version of the metadata generator bundle do you use?

M7

> I just tried again, running the application straight out of the M7 drop on a
> local copy of the Ganymede M5 site, and the artifacts.jar got properly
> produced. I suspect that the builder you use is missing some ECF bundles
> (probably org.eclipse.ecf.provider.filetransfer or
> org.eclipse.ecf.filetransfer). Which ones do you have in your installation?

You mean what's in M7?

$ ll *org.eclipse.ecf*
79K 2008-05-02 03:48 org.eclipse.ecf_2.0.0.v20080428-0800.jar
42K 2008-05-02 03:48 org.eclipse.ecf.filetransfer_2.0.0.v20080428-0800.jar
47K 2008-05-02 03:48 org.eclipse.ecf.identity_2.0.0.v20080428-0800.jar
93K 2008-05-02 03:48 org.eclipse.ecf.provider.filetransfer_2.0.0.v20080428-0800.jar
8K 2008-05-02 03:48 org.eclipse.ecf.provider.filetransfer.ssl_1.0.0.v20080428-0800.jar
11K 2008-05-02 03:48 org.eclipse.ecf.ssl_1.0.0.v20080428-0800.jar

> Could you try again, deleting the artifacts.jar and the content.jar? Thx.

I always do delete them before generating.

> As for the performance, my run of the generator took 206 seconds, and the
> process is not really variable, since we simply open every jar and every
> feature to read the various manifests.

I am not really worried about 600 seconds ... I'm merely providing data for you, in case it's cause for concern.
Created attachment 99612 [details]
excerpts from latest runs

Parts of logs, showing file sizes before running pack200, and then after running pack200 and re-generating the p2 repository.
Whoa, yeah ... I just noticed that, after the last pack200 and re-generate run, ALL THE JARS WERE MISSING from the site ... that's some optimization! Not sure when that happened: during the pack200 step, or during the generate-p2-repo step! (pack200 has worked fine in the past, but this is the first time with M7.)
Another conceptual and process-policy issue ... does install behavior change with a p2 repository vs. UM? There is, perhaps, an implication of this added p2 repository data that I want to be clear on, and depending on the answers, we may need to make some announcement on cross-project and some quick decisions about whether we (Ganymede contributors) are ready for this functionality, or whether instead projects may have "bugs" in their feature definitions that will lead to incorrect installations with seemingly hard-to-explain "missing" plugins.

To illustrate, I'll give a simplified (stylized) example from WTP. Let's say there are three features in WTP, called JPT, JST, and WST, and that they depend on each other, in that order, where JPT requires many but not all of the plugins in JST, and JST requires many but not all of the plugins in WST. In all cases we do not specify feature dependencies in terms of features, but instead in terms of just a long list of plugins that are required, so it's more accurate to say JPT is, conceptually, dependent on JST, and JST is, conceptually, dependent on WST.

UM behavior: In the past, I think we were (and still are) a bit "sloppy" in our list of plugins that a feature was dependent on. They'd be accurate at some point, but we did not necessarily manually update them every time it was required, and there is no automatic way to do it (that I know of). Hence, they'd get stale -- some required plugins are not listed explicitly in feature.xml -- but it didn't matter ... because the UM behavior was to take the list of plugins, figure out the feature that best provided those, and then install that whole feature.

p2 behavior: I think p2 does not install all the "whole features" it "infers" ... it only installs, literally, just the plugins the feature tells it to ... and any plugins those plugins depend on, right? The "lower level" feature definitions are irrelevant, right?

If I'm right about the above, then someone who selects, say, JPT will get fewer things installed with p2 than they would have with UM. I'll try to confirm that empirically, but if true, then I think we need to be more conservative about changing behavior at the last minute of M7.
> p2 behavior: I think p2 does not install all the "whole features" it
> "infers" ... it only installs, literally, just the plugins the feature tells
> it to ... and any plugins those plugins depend on, right? The "lower level"
> feature definitions are irrelevant, right?

Correct. p2 follows the dependencies expressed in features and in plug-ins. There is no "inference". It purely and simply uses the dependencies expressed in the MANIFEST.MF and feature.xml to figure out the set of plug-ins to install. The presence or absence of p2 metadata on the server does not affect this behavior, since the absence of server-side metadata is compensated for by generation on the client side. So, in short, with or without metadata the behavior will not be different from what could have been experienced in M6.

Another point: while UM gives features the ability to depend on plug-ins, it is to allow plug-ins to be installed beforehand by a feature that is out of your control. For example, one could depend on org.eclipse.core.runtime, but this does not necessarily mean that the org.eclipse.platform feature is present. In this case the only guarantee provided is that runtime is present and in a runnable state (aka resolved). p2 offers the same guarantee. Where it performs differently and better than UM is that, in the presence of p2 metadata on the server, the dependencies of the plug-ins included in the features are also considered in the resolution (which means that the expression of plug-in requirements in features is redundant, except in some obscure use cases).

To conclude, I agree that sending a carefully phrased, non-scary note to the cross-project mailing list is interesting, but I disagree with delaying the availability of p2 metadata.
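Given the "sloppy feature.xml" concern above, here's a sketch of how a team could eyeball what a feature actually declares versus what its plug-ins need (the feature jar name is hypothetical):

  # List the plug-in requirements a feature declares in its feature.xml;
  # compare against the Require-Bundle/Import-Package headers of its plug-ins
  # to spot stale entries before relying on p2 resolution
  $ unzip -p features/org.example.jst_1.0.0.jar feature.xml | grep '<import plugin='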
There might have been one point in my script where I needed to add a "delete artifacts and delete content" step ... since I run EclipseGenerator twice, once every build and once after running pack200, I was not deleting artifacts.jar and content.jar before the pack200 run. I've managed to recreate everything, and all seems intact. I will now run once again, from the top, to ensure all the semi-automatic scripts run as expected. (During the recreation, I manually ran some of the scripts.)
Some notes on this.

- We need to start the Ganymede p2 site off with the contents of the p2 site as generated by the associated Eclipse project build. Without that, the metadata repo and artifact repo will not contain the proper information related to the root files and executables. That is, the repos will be incomplete, and installation/build using p2 will not work.

- The jar-missing and pack200 thing is confusing. Why do you need to run the generator twice? I recall something about the time required for packing being too long. So when all the packed things are available, they should be added into the artifact repo. There is no need to touch the metadata repo; we just have to add the things into the artifact repo. We can create a simple app that does this. Alternatively, there is an optimizer app that will do the packing for you and update the artifact repo.
> We need to start the Ganymede p2 site off with the contents of the p2 site
> as generated by the associated Eclipse project build.

I agree that ideally we want to do this. However, to start with, and to have a site equivalent to what was available in Europa, this is not necessary.

> We can create a simple app that does this. Alternatively, there is an
> optimizer app that will do the packing for you and update the artifact repo.

Do you have them in a production-ready form?
> We need to start the Ganymede p2 site off with the contents of the p2 site
> as generated by the associated Eclipse project build.

On second thought, since repos can refer to other repos, we should simply point to the platform repos.
True on the repo references. Two things are needed:

- a way of adding the platform repo references to the Ganymede repos
- stable platform repos. We have a milestone repo; we would have to ensure that it does not go away.

For adding references, manual hacking of the Ganymede repo is possible but not reproducible. Is there a better story?
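For the record, the manual (non-reproducible) approach would look something like this sketch; the exact shape of the references element in the metadata format is an assumption here:

  # Unpack the metadata index, splice in references to the platform repo by
  # hand, and repack -- workable once, but not something a build should rely on
  unzip content.jar content.xml
  $EDITOR content.xml   # add a <references> element pointing at the platform milestone repo
  zip content.jar content.xml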
Urgent question on the internal format of artifacts and content ... when we move the ganymede "staging" files to their final resting spot on "releases", do we need to re-run the p2 repo tools? Or are all the locations relative?
Everything is relative.
(In reply to comment #24)
> Everything is relative.

Actually, quite by accident, by virtue of working on bug 230924, I've learned this is not quite true. Both files contain a "mirror URL" that is not relative. Plus, upon thinking about it, I've realized the 'released' site may have more content than the 'staging' site (e.g. previous versions left in place), hence the p2 repository data (and the digest.zip file) should always be re-generated. Lesson learned for RC1.
David, could you detail in which file (and in which section) you see the non-relative paths so we can correct it? Thx.
(In reply to comment #26)
> David, could you detail in which file (and in which section) you see the
> non-relative paths so we can correct it? Thx.

I'm not sure you can, but it's in both artifacts and content ...

<property name='p2.mirrorsURL' value='http://www.eclipse.org/downloads/download.php?file=/releases/ganymede/staging/&format=xml&protocol=http'/>

which is what's produced when we create the repository on 'staging', but which should be

<property name='p2.mirrorsURL' value='http://www.eclipse.org/downloads/download.php?file=/releases/ganymede/&format=xml&protocol=http'/>

on the released site.
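If re-generating on the final location isn't practical, a post-move fix-up could be sketched like this (re-generating, as noted above, is the safer route; the paths and the in-place sed are assumptions):

  # Rewrite the staging mirrors URL in both repository indexes after the move
  for f in artifacts content; do
    unzip -o $f.jar $f.xml
    sed -i 's|/releases/ganymede/staging/|/releases/ganymede/|' $f.xml
    zip $f.jar $f.xml
  done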
Marking as Fixed since this is now integrated.
*** Bug 230985 has been marked as a duplicate of this bug. ***