Gentoo: Portage’s new --jobs feature

Yesterday, zmedico wrote about building multiple packages in parallel with Portage-2.2_rc2. In Gentoo Prefix, we had a sneak peek at this feature, so I have had some time to play with it on my dual quad-core box. Here are some timing results you may find interesting:

emerge -e system (excluding sys-devel/gcc)

As a baseline:

With --jobs=1 and MAKEOPTS=16, load-average=9:
real 77m54.290s
user 41m46.086s
sys 29m14.598s
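
For reference, the baseline settings above would correspond to something like the following make.conf fragment. This is only a sketch: the option values come from the trial, but expressing them via EMERGE_DEFAULT_OPTS (rather than on the command line) is an assumption.

```shell
# Hypothetical /etc/portage/make.conf fragment for the --jobs=1 baseline.
# Only the numbers come from the trial above; the layout is assumed.
MAKEOPTS="-j16"
EMERGE_DEFAULT_OPTS="--jobs=1 --load-average=9"
```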

Because I was skeptical of what --jobs could really do, I decided to start with a small number of parallel jobs:

With --jobs=3, MAKEOPTS=16, load-average=9:
real 61m30.181s
user 42m23.398s
sys 32m32.009s

While that was running, I noticed a significant amount of time during which my cores sat idle, thanks to the handy little xfce-extra/xfce4-cpugraph widget. So I turned --jobs up again:

With --jobs=5, MAKEOPTS=16, load-average=9:
real 58m5.388s
user 42m35.721s
sys 34m46.950s

Meh, not much improvement there. Surprising, but I suspect I may be reaching the limits of parallelization (dependencies, etc.).

With --jobs=10, MAKEOPTS=16, load-average=9:
real 58m9.824s
user 42m43.525s
sys 37m57.234s

(And actually, a quick visual scan showed load averages staying below 4. Only a few times did I see the average above 8.)

Relying solely on load-average to keep my system usable:

With --jobs=40, MAKEOPTS=40 load-average=15:
real 58m45.106s
user 43m15.129s
sys 40m47.949s

The highest load average I saw was 23, and the load average stayed above 4 most of the time. So my procs are obviously getting used more, but I must have hit another bottleneck.

- emerge -pe system was performed before each time trial to ensure the depgraph was in cache.
- 84 packages total
- no ccache/distcc running
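
The procedure in the notes above can be sketched as follows. The exact command lines are an assumption; only the warm-cache step and the option values come from the post.

```shell
# Warm the dependency cache so depgraph calculation is excluded from timing.
emerge -pe system > /dev/null

# Then time the actual rebuild for a given trial, e.g. --jobs=5:
time MAKEOPTS="-j16" emerge --jobs=5 --load-average=9 -e system
```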

Conclusion: 20 minutes saved? That is roughly a 25% speedup. Wow. Good! Quite significant even. Assuming you have cores/procs to spare, go ahead and crank up those --jobs. It is nice that the ./configure step is no longer the bottleneck. ;-) I will keep testing to see if I can get the time down even further (although that seems unlikely, based on the last time trial).
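
As a quick sanity check, the headline numbers work out as follows, using the "real" wall-clock times reported in the trials above:

```python
# Compare the --jobs=1 baseline against the best trial (--jobs=5),
# using the "real" times reported above.
def to_seconds(minutes, seconds):
    return minutes * 60 + seconds

baseline = to_seconds(77, 54.290)  # --jobs=1, real time
best = to_seconds(58, 5.388)       # --jobs=5, real time
saved = baseline - best
speedup = saved / baseline
print(f"saved {saved / 60:.0f} min, speedup {speedup:.0%}")
# → saved 20 min, speedup 25%
```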

Test requests? Please leave a comment.

11 Responses to Gentoo: Portage’s new --jobs feature

  1. cynyr says:

    I was wondering how well the --jobs option handles build-time dependencies, run-time ones, and keeping the number of jobs up. It would seem that if a package's dependency tree were completely serial (every package is needed for the next), --jobs would do nothing to speed things up, and you would want MAKEOPTS=-j$(cores); but if something has 12 deps that can all be built at once, you wouldn't want each of them to have the same MAKEOPTS=-j$(cores). So can it balance the two cases?

  2. Duncan says:

    You don’t mention… how much memory do you have, and where do you have PORTAGE_TMPDIR pointing? Is it, perchance, on a tmpfs or similar?

    I’ve found that makes a big difference here (nothing like your “toy”, but dual Opteron 290, so dual dual-core 2.8 GHz, 8 gig RAM, and a 4-way RAID setup: kernel/mdp RAID-6 for system and user data, mdp RAID-0 for PORTDIR and ccache, and 4-way striped swap).

    I’ve been doing manual parallel compiles for some time now, and really, with per-user kernel scheduling and PORTAGE_NICENESS=19 (to get the batch priority timeslice length bonus in addition to the better interactivity), I’ve found load average is pretty much irrelevant to system responsiveness. I can have a load average of hundreds (kernel and IIRC glibc builds are nice for their many parallelizable jobs) and still have a nicely usable system, as long as I’m running my interactive stuff mostly as a different user, and don’t bog down the kernel in I/O-wait.

    Pointing PORTAGE_TMPDIR at tmpfs definitely helps with I/O, but then of course one has to track memory usage. With the 4-way-striped swap, I’ve found I can go a gig into swap before things start noticeably slowing down. But I use job limits and load-average control more as an indirect way of limiting memory and tmpfs usage than to really limit load average, which, as I said, is pretty much irrelevant in and of itself, provided I’m interacting as a different user with its own kernel-enforced time-share.
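
    A minimal sketch of the tmpfs setup described here. The mount point and tmpfs size are assumptions; the variable names and the niceness value of 19 come from the comment.

    ```shell
    # /etc/fstab entry for a tmpfs-backed build directory (size is assumed):
    #   tmpfs  /var/tmp/portage  tmpfs  size=4G,mode=0775  0 0

    # /etc/portage/make.conf:
    PORTAGE_TMPDIR="/var/tmp/portage"
    PORTAGE_NICENESS=19   # batch-priority timeslice bonus mentioned above
    ```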

    I’m running the 2.2 rcs but haven’t yet updated this week, so am still on rc1. Today’s update day, tho, so I’ll soon be playing with the new options too! =8^)

  3. jolexa says:

    @cynyr: Assuming all deps are serial, yes, --jobs will do nothing. Also, if all of the jobs hit the compile phase at the exact same time (unlikely, IME), you will bottleneck on the CPU again.

    @Duncan: 4 gigs of RAM, but no tmpfs. I do understand that there would be an improvement there, but I do not have the time to set that up. Plus, this is actually on RHEL4, which is annoying enough to deal with ;). Looks like it is time to try tmpfs on my home box again.

  4. jolexa says:

    It was suggested that I was thrashing my CPU with MAKEOPTS set so high. I agree with that for the last trial, where MAKEOPTS was set to 40, but I did not agree with it for any of the other trials, based solely on load average. It was also suggested that I may be thrashing my I/O. The suggested test was --jobs=8 and MAKEOPTS="-j5", to keep both I/O and CPU busy without thrashing either. I obliged:

    With --jobs=8, MAKEOPTS=5, load-average=10:
    real 58m41.048s
    user 41m43.281s
    sys 34m6.764s

  5. Aniruddha says:

    So --jobs=8, MAKEOPTS=3, load-average=10 is the recommended setting for a dual-core processor? What do you mean by “thrashing your CPU”?

  6. jolexa says:

    @Aniruddha: For a dual-core proc, I would probably recommend --jobs=2 MAKEOPTS=-j3. It depends on a lot of things, though: RAM, I/O wait, etc. Please try out a few things, but --jobs=8 is high for only 2 cores. (My host has 8 cores.)

    Thrashing the CPU is really asking more of your CPU than you have available, i.e. trying to do far more than it can handle and making it switch processes constantly. Think of the standard definition of disk thrashing, applied to CPUs.
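
    That dual-core recommendation would look something like this in make.conf. A sketch only: the values come from the comment above, but setting --jobs via EMERGE_DEFAULT_OPTS is an assumption.

    ```shell
    # Hypothetical make.conf for a dual-core box, per the recommendation above.
    MAKEOPTS="-j3"
    EMERGE_DEFAULT_OPTS="--jobs=2"
    ```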

  7. Aniruddha says:

    Thanks for the answer. I noticed that running multiple jobs is problematic when you need to accept a license. My emerge world could not continue because I needed to accept the ut2004 license, and I could not see the “accept license” screen.

  8. jolexa says:

    @Aniruddha, nice find. I spoke with zmedico and filed a bug at his request.

  9. Aniruddha says:

    Lol, I didn’t even realize this was a bug. Thanks for filing the bug report!

