| Summary: | Loading problem of p2 metadata on jenkins.eclipse.org for Wild Web Developer | | |
|---|---|---|---|
| Product: | Community | Reporter: | Gautier de SAINT MARTIN LACAZE <gautier.desaintmartinlacaze> |
| Component: | CI-Jenkins | Assignee: | CI Admin Inbox <ci.admin-inbox> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | blocker | | |
| Priority: | P3 | CC: | denis.roy, mistria, webmaster |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Linux | | |
| Whiteboard: | | | |

Description
Gautier de SAINT MARTIN LACAZE
This is likely on our end. Under certain conditions, the downloads filesystem can become overly stressed, leading to a timeout. For now, perhaps increase the timeouts or retries? Mind you -- I've examined your failure rates, and that frequency is not at all expected from the system load we're seeing. I think something else may be at play. I'll look into it.

FWIW, this build runs on a Kubernetes agent -> https://github.com/eclipse/wildwebdeveloper/blob/master/Jenkinsfile#L6 . Maybe that affects the filesystem/network resolution.

This issue is becoming a blocker for Wild Web Developer. We cannot deliver snapshots to the community members who are eager to test them, and this feedback is extremely important in the last weeks before 2018-12.
> For now, perhaps increase the timeouts or retries?
I didn't find a way to control that with Tycho.
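Since Tycho doesn't seem to expose a retry knob directly, one possible workaround (a sketch at the CI level, not a Tycho feature) is to wrap the whole build invocation in a small retry loop so that a transient p2 metadata timeout doesn't fail the job outright:

```shell
#!/bin/sh
# Retry a command up to $max_tries times, sleeping between attempts.
# This wraps the whole build invocation; it does not change Tycho's
# own p2 transport behaviour.
retry() {
  max_tries=5
  delay=10
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max_tries" ]; then
      echo "command failed after $n attempts: $*" >&2
      return 1
    fi
    n=$((n + 1))
    echo "retrying in ${delay}s..." >&2
    sleep "$delay"
  done
}

# Hypothetical usage in a CI step:
#   retry mvn clean verify
retry true && echo "build step succeeded"
```

The downside is that a failed resolution restarts the entire Maven run rather than just the one download, but for an intermittently unreachable mirror that is often good enough.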
(In reply to Mickael Istria from comment #3)
> FWIW, this build runs on a Kubernetes agent ->
> https://github.com/eclipse/wildwebdeveloper/blob/master/Jenkinsfile#L6 .
> Maybe that affects the filesystem/network resolution.

That is, indeed, the issue. For some reason, the pod is not even connecting to download.e.o. Can you run your build on the master for now?

(In reply to Denis Roy from comment #6)
> https://jenkins.eclipse.org/wildwebdeveloper/job/Wildwebdeveloper/job/master/27/ is successful.

Awesome! Is this based on a tweak on the build side or a fix on the cluster side? Basically, is there anything about this fix that's worth knowing for users?

> Awesome! Is this based on a tweak on the build side or a fix on the cluster side?
> Basically, is there anything about this fix that's worth knowing for users?

Embarrassingly, I don't know how I fixed this. Other pods on the same OpenShift node were having similar issues. I rsh'd into the WWD pod and ran wget http://download.eclipse.org/..... and that worked. Since then, access to download.e.o has been working.

(In reply to Denis Roy from comment #8)
> Embarrassingly, I don't know how I fixed this.
> Other pods on the same OpenShift node were having similar issues. I rsh'd
> into the WWD pod and ran wget http://download.eclipse.org/..... and that
> worked. Since then, access to download.e.o has been working.

OK, so it seems to be on the OpenShift side then, and there is nothing to improve in the project build itself. Thanks for this magic trick then ;)

Exactly. Now -- we have been making changes to our core routing recently, so there may have been something stale at play with that node specifically. If the issue comes back, please reopen and we'll look into it more deeply.
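For reference, the debugging steps described above (checking reachability of download.eclipse.org from inside the affected build pod) look roughly like this on OpenShift; the pod name here is hypothetical, and the exact p2 repository path would depend on the build:

```
# Open a remote shell inside the affected build pod (pod name is hypothetical).
oc rsh wildwebdeveloper-build-pod

# From inside the pod, check that download.eclipse.org resolves and responds
# without downloading anything (-S prints server headers, --spider skips the body).
wget -S --spider http://download.eclipse.org/releases/2018-12/
```

If the `wget` hangs or fails to resolve the host while the same URL works from the Jenkins master, that points at pod-level networking (DNS or routing) rather than at the build configuration.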