| Summary: | ecd.theia theia-example Linux pod resources | | |
|---|---|---|---|
| Product: | Community | Reporter: | Rob Moran <rob.moran> |
| Component: | CI-Jenkins | Assignee: | CI Admin Inbox <ci.admin-inbox> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | | |
| Priority: | P3 | CC: | frederic.gurr, webmaster |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Mac OS X | | |
| Whiteboard: | | | |
|
Description
Rob Moran

Could you confirm the maximum memory we can request for the build in this pod?

Frederic Gurr

(In reply to Rob Moran from comment #0)
> Could you confirm the maximum memory we can request for the build in this
> pod?

You can request up to 8GB per pod/container. All concurrently running builds (max. 2) also share a total of 8GB of RAM. So if you run a build that requests the full 8GB of RAM, no other build can run at the same time. If you request 6GB of RAM, a second build can run with a max request of 2GB, etc. The default request is 4GB.

Rob Moran

Thanks Frederic, I'll try this out with 8GB. A couple of follow-up questions...

- I assume I can set both the 'limits' and 'requests' to the maximum 8GB?
- Is there a simple way to ensure Jenkins only runs one concurrent job across all agents? I have the Windows and Mac agents set up to only allow one job at a time, but I'd like to do the same for the Linux pod (so it always uses the full 8GB, but jobs run in series).

Cheers,
Rob

Frederic Gurr

(In reply to Rob Moran from comment #2)
> - I assume I can set both the 'limits' and 'requests' to the maximum 8GB?

Yes.

> - Is there a simple way to ensure jenkins only runs one concurrent job
> across all agents? I have the Windows and Mac agents set up to only allow
> one job at a time, but I'd like to do the same for the linux pod (so it
> always uses the full 8GB, but jobs are in series)

Just to clarify: the memory limits I mentioned only apply to the dynamic cluster-based Linux agents. The Windows and Mac agents have their own, independent memory limits, since both of them are static VMs. We could limit the number of concurrently running Linux pods, but I'm not sure this is necessary or beneficial.

Rob Moran

> We could limit the number of concurrently running linux pods, but I'm not sure this is necessary or beneficial.
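For reference, setting both 'requests' and 'limits' to the 8GB maximum in a Jenkins Kubernetes pod template would look roughly like the fragment below. This is a hypothetical sketch: the container name `jnlp` and the overall pod shape are illustrative assumptions, not taken from this bug's actual configuration.

```
# Hypothetical pod template fragment; container name is illustrative.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      resources:
        requests:
          memory: "8Gi"   # guaranteed allocation
        limits:
          memory: "8Gi"   # hard cap, same as the request
```

Setting request and limit to the same value makes the pod's memory footprint predictable, which matches the "one build uses the full 8GB" setup discussed here.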
OK, what happens in the following scenario:

1. A build is started (perhaps due to a PR) and all 3 hosts (the Linux pod and 2 agents) start working on it. The builds take ~1.5 hours.
2. Another PR is created, triggering another build. The 2 agents are set to only handle one build at a time, so parts of the new build are queued.

Does another Linux pod start (with zero memory available) and fail the new build, or does it know to queue it? I'm worried it will still run and fail.
Frederic Gurr

(In reply to Rob Moran from comment #4)
> Does another linux pod start (with zero memory) and fail the new build or
> does it know to queue this? I'm worried it will still run and fail.

No, it will try to spawn a new Linux pod, but since the resource limitations do not allow it, the spawning will fail and the build will have to wait in the queue. So you might see failing build _agents_, but not failing builds.

If you want to prevent a specific build job from running concurrently across all nodes/agents, you can set this in the job configuration/pipeline:

```
pipeline {
    options {
        disableConcurrentBuilds()
    }
}
```

Rob Moran

Great, thanks!
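Putting the two pieces of this thread together, a Jenkinsfile could combine `disableConcurrentBuilds()` with an inline Kubernetes agent definition that pins the pod at the 8GB maximum. The sketch below is an assumption about how one might wire this up with the Jenkins Kubernetes plugin; the stage name, `echo` step, and container name are illustrative placeholders, not the project's actual build.

```
// Hypothetical Jenkinsfile sketch; stage contents and names are illustrative.
pipeline {
    agent {
        kubernetes {
            // Inline pod spec: request and limit both set to the 8GB maximum
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      resources:
        requests:
          memory: "8Gi"
        limits:
          memory: "8Gi"
'''
        }
    }
    options {
        // New runs of this job wait in the queue instead of running in parallel
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                echo 'build steps here'
            }
        }
    }
}
```

With this, a second triggered run of the same job queues rather than spawning a pod that the cluster's resource quota would reject.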