Most make manuals, when discussing the -j option for parallelizing builds, say that the optimal number of jobs is the number of processors plus one. I notice that when I choose the "optimal" number on my single-processor system, the number actually used is 1. I think 2 would be better, since a compile is often waiting on disk I/O and could share the CPU in the meantime.
On my dual-core system the "optimal" number is also 1 by default. It appears to be a static value, and not a very good default under any circumstances.
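For comparison, a rough sketch of what the "processors plus one" rule looks like on the command line (this assumes GNU make and the nproc tool from coreutils, neither of which is specific to Eclipse):

    # one more job than there are CPUs, so one job can use the CPU
    # while another is blocked on disk I/O
    make -j$(( $(nproc) + 1 ))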
I have even seen Eclipse create a load average of 300 and more on my Linux system. In such a case the box is frozen and completely unresponsive. If I'm lucky and able to keep a process viewer running, I see 20-30 compiler jobs in parallel; that is too much even for a quad-core CPU.

The affected project is a managed C++ project where Eclipse generates the Makefiles, and there are currently three builds performed (Debug, Release and a test build). If Eclipse chooses 2 * number of cores as the default for each build and runs all builds together, that would explain such large job counts. Using -j3 fixes this.

The severity of this bug should be increased, as a reboot is often the only practical solution (waiting a few hours may also help, but since the system no longer reacts fast enough, even an ssh connection to kill the processes doesn't work).
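As a stopgap when building from the command line, something like the following keeps the machine responsive even if too many jobs get requested (a sketch assuming GNU make; -l is its load-average limit, and the value 4 here is only an example):

    # at most 3 parallel jobs, and no new jobs once the load average exceeds 4
    make -j3 -l4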
I thought the "optimal" number we were using *was* the number of processors +1? I guess it's not... but it should be.
(In reply to comment #3)
> I thought the "optimal" number we were using *was* the number of processors +1?
>
> I guess it's not... but it should be.

Probably you know that ordinary make evaluates the MAKEFLAGS variable, which I set to j8 ... But in this case I start only a single make process.
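For reference, a minimal sketch of that setup (assuming GNU make, which reads MAKEFLAGS from the environment at startup):

    export MAKEFLAGS=-j8
    make    # behaves as if it had been invoked with -j8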
Closing as duplicate.

*** This bug has been marked as a duplicate of bug 259768 ***