Cameron Kaiser: IonPower: phase 5!

Progress! I got IonPower past the point PPCBC ran aground at -- it can now jump in and out of Baseline and Ion code on PowerPC without crashing or asserting. That's already worth celebrating, but as the judge who gave me the restraining order on behalf of Scarlett Johansson remarked, I always have to push it. So I tried our iterative π calculator again and really gave it a workout by forcing 3 million iterations. Just to be totally unfair, I've compared the utterly unoptimized IonPower (in full Ion mode) versus the fully optimized PPCBC (Baseline) in the forthcoming TenFourFox 31.6. Here we go (Quad G5, Highest Performance mode):

% /usr/bin/time /Applications/ --no-ion -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,3000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
0.48 real 0.44 user 0.03 sys

% /usr/bin/time ../../../obj-ff-dbg/dist/bin/js --ion-offthread-compile=off -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,3000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
0.37 real 0.21 user 0.16 sys
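
For readability, here is the same inline script unminified. It sums the Leibniz series for π (4 - 4/3 + 4/5 - 4/7 + ...); console.log stands in for the JS shell's print, and a return value is added so the result can be inspected:

```javascript
// Leibniz series approximation of π: 4 - 4/3 + 4/5 - 4/7 + ...
function next(pi, top, bot, minus, num) {
  for (var i = 0; i < num; i++) {
    // Alternately subtract and add top/bot, stepping bot by 2 each time.
    pi += minus ? -(top / bot) : (top / bot);
    minus = !minus;
    bot += 2;
  }
  console.log(pi);          // print() in the SpiderMonkey shell
  return pi;
}
var result = next(4, 4, 3, true, 3000000);
```

The loop is dominated by floating-point divides and adds, which is exactly where the two compilers differ.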

No, that's not a typo. The unoptimized IonPower, even in its primitive state, is 23 percent faster than PPCBC on this test, largely due to its superior use of floating point. The gap gets even wider when we do 30 million iterations:

% /usr/bin/time /Applications/ --no-ion -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
4.20 real 4.15 user 0.03 sys

% /usr/bin/time ../../../obj-ff-dbg/dist/bin/js --ion-offthread-compile=off -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
1.55 real 1.38 user 0.16 sys

That's 63 percent faster. And I haven't even gotten to fun things like leveraging the G5's square root instruction (the G3 and G4 versions will use David Kilbridge's software square root from JaegerMonkey), parallel compilation on the additional cores, or even some of the low-hanging fruit in branch optimization; and on top of all that, IonPower is still running all its debugging code and sanity checks. I think this qualifies as IonPower phase 5 (basic operations), so now the final summit will be getting the test suite to pass in both sequential and parallel modes. When it does, it's time for TenFourFox 38!
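
As a quick check on those percentages (a trivial sketch using the wall-clock "real" times reported above):

```javascript
// speedup = (old - new) / old, as a rounded percentage
const speedup = (baseline, contender) =>
  Math.round(((baseline - contender) / baseline) * 100);

const threeMillion = speedup(0.48, 0.37);   // PPCBC vs. IonPower, 3M iterations
const thirtyMillion = speedup(4.20, 1.55);  // PPCBC vs. IonPower, 30M iterations
console.log(threeMillion, thirtyMillion);   // 23 63
```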

By the way, for Ben's amusement, how does it compare to our old, beloved and heavily souped up JaegerMonkey implementation? (17.0.11 was our fastest version here; 19-22 had various gradual degradations in performance due to Mozilla's Ion development screwing around with methodjit.)

% /usr/bin/time /Applications/ -m -n -e 'var pi=4,top=4,bot=3,minus = true;next(pi,top,bot,minus,30000000);function next(pi,top,bot,minus,num){for(var i=0;i<num;i++){pi += (minus == true)?-(top/bot):(top/bot);minus = \!minus;bot+=2;}print(pi);}'
4.15 real 4.11 user 0.02 sys

Yup. I'm that awesome. Now I'm gonna sit back and go play some well-deserved Bioshock Infinite on the Xbox 360 (tri-core PowerPC, thank you very much, and I look forward to cracking the firmware one of these days) while the G5 is finishing the 31.6 release candidates overnight. They should be ready for testing tomorrow, so watch this space.

Jordan Lund: Mozharness is moving into the forest

Since its beginnings, Mozharness has been living in its own world (repo). That's about to change. Next quarter we are going to be moving it in-tree.

what's Mozharness?

it's a configuration-driven script harness.

why in tree?
  1. First and foremost: transparency.
    • There is an overarching goal to provide developers the keys to manage and stand up their own builds & tests (AKA self-serve). Having the automation step logic side by side with the compile and test step logic gives developers transparency and a sense of determinism, which leads to reason number 2.
  2. deterministic builds & tests
    • This is somewhat already in place thanks to Armen's work on pinning specific Mozharness revisions to in-tree revisions. However, the pins can fall behind the latest Mozharness revisions, so we often end up landing multiple Mozharness changes at once against a single in-tree revision.
  3. Mozharness automated build & test jobs are no longer managed by Buildbot alone. Taskcluster is starting to take the weight off Buildbot's shoulders, and given how Taskcluster operates, Mozharness is better suited in-tree.
  4. ateam is going to put effort this quarter into unifying how we run tests locally vs. in automation. Having Mozharness in-tree should make this easier.

this sounds great. why wouldn't we want to do this?

There are downsides. It arguably puts extra strain on Release Engineering for managing infra health. Though issues will be more isolated, it becomes trickier to keep a high-level view of when and where Mozharness changes land.

In addition, there is going to be more friction for deployments. This is because a number of our Mozharness scripts are not directly related to continuous integration jobs: e.g. releases, vcs-sync, b2g bumper, and merge tasks.

why wasn't this done yester-year?

Mozharness now handles more than 90% of our build and test jobs. Its internal components (config, script, and log logic) are starting to mature. However, this wasn't always the case.

When it was being developed and its uses were unknown, it made sense to develop it on the side and tie it closely to Buildbot deployments.

okay. I'm sold. can we simply hg add mozharness?

Integrating Mozharness in-tree comes with a few challenges:

  1. chicken and egg issue

    • currently, for build jobs, Mozharness is in charge of managing version control of the tree itself. How can Mozharness check out a repo if it itself lives within that repo?
  2. test jobs don't require the src tree

    • test jobs only need a binary and a tests archive. It doesn't make sense to keep a copy of our branches on each machine that runs tests. In line with that, putting Mozharness inside the tree also leads us back to a similar 'chicken and egg' issue.
  3. which branch and revisions do our release engineering scripts use?

  4. how do we handle releases?

  5. how do we not cause extra load on hg.m.o?

  6. what about integrating into Buildbot without interruption?

it's easy!

This shouldn't be too hard to solve. Here is a basic outline of my plan of action and roadmap for this goal:

  • land copy of mozharness on a project branch
  • add an endpoint on relengapi with the following logic
    1. the endpoint URL will contain 'mozharness' and a '$REVISION'
    2. look in s3 for an equivalent mozharness archive
    3. if not present: download a sub-repo dir archive from hg.m.o, run tests, and push that archive to s3
    4. finally, return the url of the s3 archive
  • integrate the endpoint into buildbot
    • call endpoint before scheduling jobs
    • add a builder step that downloads and unpacks the archive on the slave
  • for machines that run mozharness based releng scripts
    • add manifest that points to 'known good s3 archive'
    • fix the deploy model to listen for manifest changes and download/unpack Mozharness in a similar manner to builds + tests
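
The endpoint-and-cache steps above can be sketched roughly as follows. This is a hypothetical illustration only (written in JavaScript purely for consistency with the rest of this page); the storage object, URL scheme, and helper functions are stand-ins, not real relengapi or S3 APIs:

```javascript
// Hypothetical sketch of the caching endpoint: look up the archive in S3,
// build and cache it on a miss, and always answer with the S3 URL.
function getMozharnessArchive(revision, s3, hgFetch, runTests) {
  const key = `mozharness-${revision}.tar.gz`;   // illustrative key scheme
  if (!s3.exists(key)) {
    const archive = hgFetch(revision); // sub-repo dir archive from hg.m.o
    runTests(archive);                 // gate the archive on its tests
    s3.upload(key, archive);
  }
  return s3.url(key);
}

// In-memory stand-in for S3, for illustration only.
const fakeS3 = {
  store: new Map(),
  exists(key) { return this.store.has(key); },
  upload(key, data) { this.store.set(key, data); },
  url(key) { return `https://s3.example/mozharness/${key}`; },
};

const url = getMozharnessArchive("abc123", fakeS3,
  rev => `archive-for-${rev}`,   // pretend hg.m.o download
  archive => {});                // pretend test run
console.log(url);
```

In this model a Buildbot scheduler would call the endpoint before scheduling jobs, and a builder step would then download and unpack the returned archive on the slave.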

This is a loose outline of the integration strategy. What I like about this approach:

  1. no code changes required within Mozharness itself
  2. there is very little code change within Buildbot
  3. allows Taskcluster to use Mozharness in whatever way it likes
  4. no chicken and egg problem: in the Buildbot world, Mozharness will exist before the tree exists on the slave
  5. no need to manage multiple repos and keep them in sync

I'm sure I am not taking into account many edge cases and I look forward to hitting those edges head on as I start this in Q2. Stay tuned for further developments.

One day, I'd like to see Mozharness (at least its internal parts) be made into isolated python packages installable by pip. However, that's another problem for another day.

Questions? Concerns? Ideas? Please comment here or in the tracking bug.

Doug Belshaw: Weeknote 13/2015

This week I’ve been:


  • Finishing off my part of the Hive Toronto Privacy badges project. GitHub repo here.
  • Submitting my final expenses and health & wellness invoices.
  • Writing about Web Literacy Map v1.5 (my last post on the Webmaker blog!)
  • Editing the Learning Pathways whitepaper. I’ll do as much as I can, but it’s up to Karen Smith to shepherd from this point forward!
  • Backing up everything.
  • Catching-up one to one with a few people.
  • Leaving Mozilla. I wrote about that here. Some colleagues gave me a Gif tribute send-off and dressed up an inflatable dinosaur in a party hat. Thanks guys!

Dynamic Skillset

  • Helping out DigitalMe with an event in Leeds around Open Badges. I wrote that up here.
  • Preparing my presentation for a keynote next week.
  • Collaborating on a proposal to scope out Open Badges for UK Scouting.
  • Replying to lots of people/organisations who’d like to work with me! :)
  • Finalising things for next week when I start working with City & Guilds for most (OK, nearly all) of my working week.
  • Getting to grips with Xero (which is what I'm using for accounting/invoicing).


Next week I’m spending most of Monday with my family before heading off to London. I’ll be keynoting and running a workshop at the London College of Fashion conference on Tuesday. On Wednesday and Thursday I’ll be working from the City & Guilds offices, getting to know people and putting things into motion!

Image CC BY Kenny Louie