Planet Mozilla Automation

January 27, 2015

William Lachance

mozregression updates

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) in the last few months. Here’s some highlights:

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)

January 27, 2015 11:52 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

January 27, 2015 05:34 PM

January 22, 2015

Armen Zambrano G. (@armenzg)

Backed out - Pinning for Mozharness is enabled for the fx-team integration tree

EDIT: We had to back out this change since it caused issues for PGO Talos jobs. We will try again after further testing.

Pinning for Mozharness [1] has been enabled for the fx-team integration tree.
Nothing should be changing. This is a no-op change.

We're still using the default mozharness repository, and the "production" branch is what is being checked out. This has been enabled on Try and Ash for almost two months and all issues have been ironed out. You can tell if a job is using pinning of Mozharness if you see "repository_manifest.py" in its log.

If you notice anything odd please let me know in bug 1110286.

If by Monday we don't see anything odd happening, I would like to enable it for mozilla-central for a few days before enabling it on all trunk trees.

Again, this is a no-op change; however, I want people to be aware of it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

January 22, 2015 08:57 PM

Henrik Skupin

Firefox Automation report – week 49/50 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 49 and 50.

Highlights

During the first week of December the all-hands work week happened in Portland. Those were some great and inspiring days, full of talks, discussions, and conversations about various things. Given that I do not see my colleagues that often in real life, I took this opportunity to talk to everyone who is partly or fully involved in projects of our automation team. There are various big goals in front of us, so clearing up questions and finding the next steps to tackle ongoing problems was really important. In the end we came out with a long list of todo items and more clarity about previously unclear tasks.

In week 50 we got some updates landed for Mozmill CI. Due to a regression from the blacklist landing, our l10n tests hadn't been executed for any locale of the Firefox Developer Edition. Since the fix landed, we have seen problems with access keys in nearly every locale for a new test, which covers the context menu of web content.

Also we would like to welcome Barbara Miller to our team. She joined us as an intern via the FOSS outreach program run by GNOME. She will be with us until March and will mainly work on testdaybot and the conversion of Mozmill tests to Marionette. The latter project is called m21s, and details can be found on its project page. Soon I will post more details about it.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 49 and week 50.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 48. Due to the Mozilla all-hands workweek there was no meeting in week 49.

January 22, 2015 07:19 AM

January 20, 2015

Simarpreet Singh

Mozharness

In the recent past most of my work has involved writing code in Python. For the most part, I haven't been working on codebases large enough to warrant writing a full-blown test harness. But recently, while working on a network simulator, my partner @clouisa and I faced a problem: how do you debug Python code that is non-deterministic in nature?

Following the project guidelines, the code was supposed to generate a random, uniformly distributed sequence of events which, depending on their values, would result in different code branches being taken. For example, consider the following code:

import math
import random

# ARRIVAL_RATE and TICK_DURATION are module-level constants of the simulator
def next_gen_time(current_tick):
    gen_number = random.random()
    gen_time = (-1.0 / ARRIVAL_RATE) * math.log(1 - gen_number)
    gen_tick = math.ceil(gen_time / TICK_DURATION)
    return int(gen_tick + current_tick)

This is an example of a Markovian distribution: the formula performs inverse transform sampling of an exponential distribution, yielding memoryless inter-arrival times.
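A first step toward debugging such non-deterministic code is making runs reproducible: draw from a seeded generator so every run replays the exact same event sequence. A minimal standalone sketch of the idea (illustrative only, not taken from the simulator itself):

```python
import random

def reproducible_events(seed, n):
    """Return n pseudo-random draws from a generator with a fixed seed."""
    rng = random.Random(seed)   # a private RNG avoids touching global state
    return [rng.random() for _ in range(n)]

# Two runs with the same seed replay the exact same "random" sequence,
# so a bug that depends on a particular event ordering can be reproduced.
assert reproducible_events(42, 5) == reproducible_events(42, 5)
# Different seeds explore different orderings.
assert reproducible_events(42, 5) != reproducible_events(43, 5)
```

The same trick applies directly to the simulator above: seed once at startup, record the seed in the log, and any failing run can be replayed exactly.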

What's Mozharness?

Mozharness is a Python test harness that serves a two-fold purpose. It is essentially a set of scripts that can be used by the author of the code and by testers alike. This eliminates the need for a separate test-specific team, as the developers themselves are capable of running full tests locally on their dev machines.

Currently mozharness is used in the Mozilla community to test the nightly releases of Firefox. It is language agnostic: as long as the tests have defined pass/fail conditions, it will work just fine regardless of what language the codebase under test is written in.

There are three main parts to the Mozharness infrastructure:

BaseLogger

The BaseLogger component is responsible for logging all runs of the mozharness test harness and storing them under logs/. Usefully, the logs are also stored per stream, so it's easy to look at just the relevant stream (e.g. ERROR or INFO) when needed.

The output looks something like this:

16:23:06     INFO - Reading from file tmpfile_stdout
16:23:06     INFO - Output received:
16:23:06     INFO -  /home/simar/mozilla/mozharness/build/application/firefox/firefox
16:23:06     INFO - Running post-action listener: _resource_record_post_action
16:23:06     INFO - #####
16:23:06     INFO - ##### Running run-tests step.
16:23:06     INFO - #####
16:23:06     INFO - Running pre-action listener: _resource_record_pre_action
16:23:06     INFO - Running main action method: run_tests
16:23:06     INFO - Running command: ['/home/simar/mozilla/mozharness/build/venv/bin/python', '--version']
16:23:06     INFO - Copy/paste: /home/simar/mozilla/mozharness/build/venv/bin/python --version
16:23:06     INFO -  Python 2.7.6
16:23:06     INFO - Return code: 0
16:23:06     INFO - mkdir: /home/simar/mozilla/mozharness/build/blobber_upload_dir
16:23:06     INFO - ENV: MOZ_UPLOAD_DIR is now /home/simar/mozilla/mozharness/build/blobber_upload_dir

BaseConfig

Config files define the set of tests that are required to run. These config files can then be expanded and built on top of the standard, so-called sanity tests. BaseConfig does just that: it provides a set of known, commonly used test configurations.

It also provides methods to parse various command-line parameters, which can be used to pass additional arguments into scripts.

def _create_config_parser(self, config_options, usage):
    self.config_parser.add_option(
        "--work-dir", action="store", dest="work_dir",
        type="string", default="build",
    )
    self.config_parser.add_option(
        "--base-work-dir", action="store", dest="base_work_dir",
        type="string", default=os.getcwd(),
    )
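The snippet above is abridged from mozharness; the general pattern can be sketched with plain optparse (the function name here is illustrative, not part of the real API): flags parse into typed options, with defaults applying whenever a flag is omitted.

```python
import os
from optparse import OptionParser

def create_config_parser():
    # Mirrors the abridged mozharness snippet above: each flag gets a
    # destination name and a default that applies when it is omitted.
    parser = OptionParser()
    parser.add_option("--work-dir", action="store", dest="work_dir",
                      type="string", default="build")
    parser.add_option("--base-work-dir", action="store", dest="base_work_dir",
                      type="string", default=os.getcwd())
    return parser

parser = create_config_parser()
options, args = parser.parse_args(["--work-dir", "obj-debug"])
assert options.work_dir == "obj-debug"   # explicit flag wins
options, args = parser.parse_args([])
assert options.work_dir == "build"       # default applies
```

In mozharness the parsed values are merged with whatever the --cfg config files provide, so a script's behaviour can be adjusted without editing any config file.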

BaseScript

BaseScript inherits from both BaseConfig and BaseLogger and is used to define the high-level tasks/tests that need to be run. These may include definitions for tests in a Python list:

['clobber',
 'read-buildbot-config',
 'download-and-extract',
 'clone-talos',
 'create-virtualenv',
 'install',
 'run-tests',
]
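Conceptually, each hyphenated action name in that list maps to a method on the script object, and the harness runs them in order with logging around each step. A toy sketch of the idea (schematic only, not actual mozharness code):

```python
class MiniScript:
    """Toy version of a BaseScript-style action runner."""

    def __init__(self, all_actions):
        self.all_actions = all_actions
        self.log = []

    def run(self):
        # An action name like 'run-tests' maps to a method run_tests();
        # actions execute in list order, and logging brackets each one.
        for action in self.all_actions:
            method = getattr(self, action.replace("-", "_"))
            self.log.append("##### Running %s step." % action)
            method()

    def clobber(self):
        self.log.append("clobbered work dir")

    def run_tests(self):
        self.log.append("tests passed")

script = MiniScript(["clobber", "run-tests"])
script.run()
assert script.log == [
    "##### Running clobber step.",
    "clobbered work dir",
    "##### Running run-tests step.",
    "tests passed",
]
```

This also explains the "##### Running run-tests step." banner seen in the BaseLogger output earlier: each entry in the action list becomes one logged step.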

So how does it work?

A standard call to mozharness might look like this:

python scripts/talos_script.py --suite chromez --add-option --webServer,localhost --branch-name Firefox-Non-PGO --system-bits 64 --cfg talos/linux_config.py --download-symbols ondemand --use-talos-json --blob-upload-branch Firefox-Non-PGO --cfg developer_config.py --installer-url http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-37.0a1.en-US.linux-x86_64.tar.bz2

Breaking it down:

January 20, 2015 06:21 AM

January 19, 2015

Mark Côté

BMO 2014 Statistics

Everyone loves statistics! Right? Right? Hello?

tap tap

feedback screech

Well anyway, here are some numbers from BMO in 2014:

BMO Usage:

33 243 new users registered
45 628 users logged in
23 063 users performed an action
160 586 new bugs filed
138 127 bugs resolved
100 194 patches attached

BMO Development:

1 325 bugs filed
1 214 bugs resolved

Conclusion: there are a lot of dedicated Mozillians out there!

January 19, 2015 01:09 AM

January 15, 2015

Andrew Halberstadt

The New Mercurial Workflow - Part 2

This is a continuation of my previous post called The New Mercurial Workflow. It assumes that you have at least read and experimented with it a bit. If you haven't, stop right now, read it, get set up and try playing around with bookmarks and mozreview a bit.

I've had several requests for examples of more advanced usage with this workflow. The previous post covered the basics, but it skipped many important concepts for the sake of brevity. Well that and the fact that I'm still figuring out all of this myself. Rather than a step by step tutorial, each section is its own independent concept which you can either use or choose to ignore. After all, there is more than one way to skin a cat (apparently), and I make no claims that my way is the best.

Pushing to Try

Probably the biggest thing I left out of the last post is how to push to try. The easiest way is to simply edit the commit message of the topmost commit in your bookmark:

$ hg update my_feature
$ hg commit --amend -m "try: -b o -p linux -u mochitest-1 -t none"
$ hg push -f try

However this method isn't ideal for two reasons:

  1. you have to re-edit your commit message back to something appropriate.
  2. --amend will overwrite your old commit with whatever you have in the working directory. It is easy to accidentally commit unintended changes, and unless you have the evolve extension installed (more on this later) the old commit will be lost forever.

A better approach is to push an empty changeset with try syntax on top of your bookmark. The bad news is that there is no good way to do this without using mq. The good news is that there is an extension that will make this a lot easier (though you'll still need mq installed in your hgrc). I'd recommend sfink's trychooser extension because it lets you choose syntax via a curses ui, or manually (note the original extension of the same name by pbiggar is different). After cloning and installing it, push to try:

$ hg update my_feature
$ hg trychooser

This opens a curses ui from which you can build your syntax (note it may be slightly out of date). Alternatively, just specify the syntax manually:

$ hg update my_feature
$ hg trychooser -m "try: -b o -p linux -u mochitest-1 -t none"

There is a bug on file to move this extension into the standard version-control-tools repository.

The mozreview folks are also working on the ability to autoland changesets pushed to review on try, which should greatly simplify the common use case.

Mutating History and Mozreview

In the last post, I showed you an example of addressing review comments by making an additional commit and then squashing it later. But what if you push multiple commits to review and you intend to land them all separately, without squashing them at the end? Here is the setup:

hg update my_feature
... add foo ...
hg commit -m "Bug 1234567 - Part 1: add the foo api"
... add bar ...
hg commit -m "Bug 1234567 - Part 2: add the bar api"
hg push -r my_feature review

Now you add a reviewer for each of the two commits and publish. Except the reviewer gives an r- to the first commit and r+ to the second. Pushing a third commit to the review will make it difficult to squash later. It is possible with rebasing, but there is a better way.

Mercurial has a new(ish) feature called Changeset Evolution. I won't go into detail here, but you know how with git you can mutate history and then force push with -f and people say don't do that since it could leave someone else in an unrecoverable state? This is because when you mutate history in git, the old changeset is lost forever. With changeset evolution, the old changesets are not thrown out, instead they are marked obsolete. It is then possible to push mutated history and remote repositories can use these obsolescence markers to "do the right thing" without putting someone else into an unrecoverable state. The mozreview repository is set up to use obsolescence markers, which means mutating history and pushing to review is perfectly acceptable.

The first step is to clone and install the evolve extension (update to the stable branch). Going back to the original example, we need to amend the first commit of our review push while leaving the second one intact. First, let's update to the commit we'll be amending:

$ hg update "my_feature^"
... fix review comments ...
$ hg commit --amend
$ hg push -r my_feature review

Remember in the last section I said --amend would cause you to lose your old commit? In this case evolve has actually modified the behaviour of --amend to mark the old changeset obsolete instead. The review repository can then use this information to see that you have amended an existing commit and update the review request accordingly. The end result is the review request will still only contain two commits, but a second entry on the push slider will show up, allowing the reviewer to see the original diff, the full diff and the interdiff with just the review fixes.

Amending is just one way to mutate history with evolve. You can also prune (delete), uncommit and fold (squash). If you are interested in how evolve works, or want more details on what it can do, I'd highly recommend this tutorial.

Tips for Working with Bookmarks

One thing that took me a little while to understand, was that bookmarks are not the same as git branches. Yes, they are very similar and can be used in much the same way. But a bookmark is just a label that automatically updates itself when activated. Unlike a git branch, there is no concept of ownership between a bookmark and a commit. This can lead to some confusion when rebasing bookmarks on a multi-headed repository like the unified firefox repo we set up in the previous post. Let's see what might happen:

$ hg pull -u inbound 
$ hg rebase -b my_feature -d inbound 
$ hg pull -u fx-team
$ hg rebase -b my_feature -d fx-team
abort: can't rebase immutable changeset ad2042b4c668

What happened here? The important thing to understand is that the -b argument to rebase doesn't stand for bookmark, it stands for base. You are telling hg to take every changeset from my_feature all the way back to the common ancestor with fx-team and rebase them all on top of fx-team. In this case, that includes all the public changesets that have landed on inbound, but haven't yet landed on fx-team. And you can't rebase public changesets (rightfully so). Luckily, it's still possible to rebase bookmarks automatically using revsets:

$ hg rebase -r "reverse(only(my_feature) and draft())" -d fx-team

This same revset can be used to log a bookmark and only that bookmark (log -f is useful, but includes all ancestors of the bookmark, so it's not always obvious where the bookmark starts):

$ hg log -r "reverse(only(my_feature) and draft())"

The revset is somewhat long, so it helps to add an alias to your ~/.hgrc:

[revsetalias]
b($1) = reverse(only($1) and draft())

Now you can use it like so:

$ hg log -r "b(my_feature)"

This revset works for most simple cases, but it isn't perfect:

  1. It will show an incorrect range if you pushed your bookmark to a publishing repo (e.g. it is no longer draft).
  2. It will show an incorrect range if you rebase your bookmark on top of draft changesets (e.g. another bookmark).
  3. It is slightly more annoying to write -r "b(my_feature)" than it is to write -r my_feature.

These shortcomings were annoying enough to me that I wrote an extension called logbook. Essentially if you pass in -r <bookmark> to a supported command, logbook will replace the bookmark's revision with a revset containing all changesets "in" the bookmark. So far log, rebase, prune and fold are wrapped by logbook. Logbook will also detect if bookmarks are based on top of one another, and only use commits that actually belong to the bookmark you want to see. For example, the following does what you'd expect:

$ hg rebase -r bookmark_2 -d bookmark_1
$ hg rebase -r bookmark_3 -d bookmark_2
$ hg log -r bookmark_1
$ hg log -r bookmark_2
$ hg log -r bookmark_3

Because logbook only considers draft changesets, the following won't print anything:

$ hg update central
$ hg bookmark new_bookmark
$ hg log -r new_bookmark

If you actually want to treat the bookmark as a label to a revision instead, it's still possible by escaping the bookmark with a period:

$ hg log -r .my_feature

Logbook likely has some bugs to work out, so let me know if you run into problems using it. There are also likely other commands that could make use of the revset. I'll add support for them either as I stumble across them or on request.

Shelving Changes

Finally I'd like to briefly mention hg shelve. It is more or less identical to git stash and is an official extension. To install it add the following to ~/.hgrc:

[extensions]
shelve=

I mostly use it for debug changes that I don't want to commit, but want to test both with and without a particular change. My basic usage is:

... add debug statements ...
... test ...
hg shelve
hg update <rev>
hg unshelve
... test ...
hg revert -a

Closing Words

That more or less wraps up what I've learnt since the first post and I can't remember any other pain points I had to work around. This workflow is still based on a lot of new tools that are still under heavy development, but all things considered I think it has gone remarkably smoothly. The setup involves installing a lot of extensions, but this should hopefully get better over time as they move into core mercurial or version-control-tools. Have you run into any other pain points using this workflow? If so, have you solved them?

January 15, 2015 04:27 PM

January 14, 2015

Henrik Skupin

Firefox Automation report – week 47/48 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 47 and 48.

Highlights

Most of my work during those two weeks was related to getting Jenkins (http://jenkins-ci.org/) upgraded on our Mozmill CI systems to the most recent LTS version, 1.580.1. This was a somewhat critical task given the huge number of issues mentioned in my last Firefox Automation report. On November 17th we were finally able to get all the code changes landed on our production machine after testing them for a couple of days on staging.

The upgrade was not that easy given that lots of code had to be touched, and the new LTS release still showed some weird behavior when connecting slave nodes via JNLP. As a result we had to stop using this connection method in favor of the plain java command. This change was actually not that bad, because the plain command is easier to automate and doesn't bring up the connection warning dialog.

Surprisingly, the huge HTTP session usage reported by the Monitoring plugin was a problem introduced by that plugin itself. A simple upgrade to the latest plugin version solved the problem, so we will no longer leak an additional HTTP connection, never to be released, each time a slave node connects. That once caused a total freeze of the machine.

Another helpful improvement in Jenkins was the fix for a JUnit plugin bug, which caused concurrent builds to hang until the previous build in the queue had finished. This added a large pile of waiting time to our Mozmill test jobs, which was very annoying for QA's release testing work, especially for the update tests. Since this upgrade the problem is gone and we can process builds a lot faster.

Beside the upgrade work, I also noticed that one of the Jenkins plugins in use, the XShell plugin, failed to correctly kill the running application on the slave machine when a job was aborted. As a result, subsequent tests failed on that machine until the orphaned process finished. I filed a Jenkins bug and did a temporary backout of the offending change in that plugin.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 47 and week 48.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 47 and week 48.

January 14, 2015 01:57 PM

January 13, 2015

Armen Zambrano G. (@armenzg)

Name wanted for a retrigger-based bisection tool

Hello all,
I'm looking for a name for the tool that I will be working on this quarter.

This quarter I will be working on creating a prototype of a command-line tool that can be used by sheriffs and others to automate retrigger-based bisection. This could be used to help bisect new intermittent oranges, and to backfill jobs that have been skipped due to coalescing. Integration with Treeherder or other services will be done later.

I'm proposing "TriggerCI" since it shows what it does regardless of what you use it for.

If this works for you, please let me know.
If you have another suggestion, please let me know. I'm interested in fun and creative names, since that part of my brain is dysfunctional :P


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

January 13, 2015 03:39 PM

January 12, 2015

Armen Zambrano G. (@armenzg)

How to uplift changes through the various Mozilla release trees

One of my co-workers asked me a while ago about a clean way of uplifting patches through the different Mozilla release trees. In my case the patch is self-approved since it only affects test jobs and not the product (a=testing).

Once my change landed on one of the integration trees (e.g. mozilla-inbound, fx-team), I waited for it to be merged to mozilla-central (a sheriff made a note on the bug). At that point we can start the uplift process.

In particular, the changeset I want is:
https://hg.mozilla.org/integration/fx-team/rev/a13aa1af5d75

Note

I believe I have the named branches because I have the firefoxtree extension enabled.
You can read more about it in here.

I have a unified checkout from which I can manage all the various release trees (aurora, beta, release, esr31 et al).

Update the tree

We update to the right branch:
hg pull beta
hg update beta

Graft changeset

NOTE: Usage explained in here.
NOTE: Make sure you use --edit if you need to add a=testing to your commit message.

We graft the changeset, carefully check that we're pushing to the right place, and push:
armenzg@armenzg-thinkpad:~/repos/branches/release_branches$ hg id
774733a9b2ad beta/tip

armenzg@armenzg-thinkpad:~/repos/branches/release_branches$ hg graft --edit -r a13aa1af5d75
grafting revision 249741
merging testing/config/mozharness/android_arm_config.py

armenzg@armenzg-thinkpad:~/repos/branches/release_branches$ hg out -r tip beta
comparing with beta
searching for changes
changeset:   249997:93587eeda731
tag:         tip
user:        Armen Zambrano Gasparnian
date:        Mon Jan 12 09:53:51 2015 -0500
summary:     Bug 1064002 - Fix removal of --log-raw from xpcshell. r=chmanchester. a=testing

armenzg@armenzg-thinkpad:~/repos/branches/release_branches$ hg push -r tip beta
pushing to ssh://hg.mozilla.org/releases/mozilla-beta
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: Trying to insert into pushlog.
remote: Inserted into the pushlog db successfully.
remote: You can view your change at the following URL:
remote:   https://hg.mozilla.org/releases/mozilla-beta/rev/93587eeda731

Repeat the same process for the remaining trees.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

January 12, 2015 10:01 PM

January 08, 2015

Byron Jones

changing the face of bugzilla.mozilla.org

background

bugzilla.mozilla.org presents some interesting problems when it comes to UX; while it has a singular view of bug data, most teams use bugzilla in slightly different ways.  some of these differences surface in fields used to track work (keywords, flags, whiteboard, summary), how bugs are prioritised (whiteboard, custom fields, priority fields), and even how inter-bug dependencies are tracked.

the net result is that trying to design a user interface for viewing bugs optimised for how people use bugzilla is a near impossible task.  in light of this, we’ve been working on a framework which allows us to deploy multiple experimental alternative views of bugs.

goals

the goals are to enable alternative bug views in a way which enables rapid development and doesn’t force incomplete or broken implementations upon users.

implemented as a bugzilla extension, alternativeUI provides a harness for alternative bug views, and implements the view selection – currently as a simple user preference.  in the future we expect bugzilla to be able to automatically change view depending on the bug or your relationship to it.  for example, we could present a slightly (or radically!) different view of a bug to the assignee vs. the reviewer vs. someone not involved in the bug.

generic show_bug alternative

of course a framework for alternative bug views would be useless without an actual alternative.  i’ve been working on one which will likely be the basis of future experiments.  show/edit modality is at the core of this design — a bug is loaded with most fields as static text and switching to “edit mode” is required to make changes.

ideas we’re throwing around:

hiding fields

if a field doesn’t have a value set there’s no need for that field to be displayed by default.  removing those fields from the initial display greatly reduces the noise and complexity generally associated with bugzilla.

we can also use the user’s group membership or involvement in the bug as a cue to which data should be initially visible.  an example would be hiding the QE flags from users who are not associated with the bug or members of the QE team.

performance

when a bug is loaded right now in bugzilla, it has to load all the alternatives for many fields (products, components, versions, milestones, flags, …).  in most cases bugzilla’s fine-grained security model results in a measurable cost to generating these alternatives — you can see the difference yourself by logging out of bugzilla or opening a bug in a private tab and comparing the page load times against a logged-in request.

by loading a bug as read-only initially, we can defer loading the alternatives until after the user clicks on ‘edit’.

selected “editable by default”

requiring a mode switch to perform any change to a bug isn’t an ideal situation either – operations that are frequently performed should be easy to do with minimal extraneous steps.

the current implementation allows you to comment, CC, or vote for a bug without switching to edit mode.  this can be extended to, for example, allow the bug’s assignee to change a bug’s status/resolution without needing to switch to edit mode.

screenshots

bmo-modal-1

bmo-modal-2


Filed under: bmo, bugzilla, mozilla

January 08, 2015 02:46 AM

January 07, 2015

Mark Côté

BMO 2014 update part II

The second half of 2014 was spent finishing up some performance work and shifting into usability improvements, which will continue into 2015.

More performance!

By the end of 2014, we’d managed to pick most of the low-to-medium-hanging fruit in the world of Bugzilla server-side performance. The result is approximately doubling the performance of authenticated bug views. Here are graphs from January 2014 and October 2014:

The server now also minifies and concatenates JavaScript and CSS files. This affects cold loads mostly, since these files are cached, but even on reload it saves a few round trips.

As mentioned above, we’re shifting focus away from performance work and towards usability/work-flow improvements, but there will still be perf benefits, both by reducing and delaying loading of content and by making it easier for users to accomplish common tasks.

New, better documentation

We’ve converted the upstream Bugzilla documentation to reStructuredText, massively updated and reorganized it, and, perhaps of most interest to anyone reading this, completely rewrote the API docs, which were very hard to grok.

For BMO specifically, we’ve fixed up the wiki page. We’ve also started a user guide, but it’s just a skeleton at the moment. There are lots of users out there who know the ins and outs of BMO, so feel free to contribute a section!

GMail support

To support Mozilla’s transition to GMail, we added two features. First, we now limit the number of emails sent to a user per minute and per hour, since GMail will temporarily disable accounts that receive too much mail, and some BMO users receive a lot of bugmail. Second, since GMail’s ability to filter mail by headers is limited compared to other email servers, users can now include the X-Bugzilla-* headers in the body via General Preferences.

Other things

2015!

As I’ve said a few times now, in 2015 we’re going to do a lot of work on improving general BMO-user productivity: usability, UX, work flows, whatever you want to call it. I’ll write more about this later, but here are a few things we’re looking into:

As usual, if you have questions or comments, you can leave them here, but an even better place is the mozilla.tools.bmo mailing list, also available as a Google Group and via NNTP.

January 07, 2015 07:31 PM

January 06, 2015

Armen Zambrano G. (@armenzg)

Tooltool fetching can now use LDAP credentials from a file

You can now fetch tooltool files by using an authentication file.
All you have to do is append "--authentication-file file" to your tooltool fetching command.

This is important if you want to use automation to fetch files from tooltool on your behalf.
This was needed to allow Android test jobs to run locally since we need to download tooltool files for it.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

January 06, 2015 04:45 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

January 06, 2015 06:23 AM

January 05, 2015

Armen Zambrano G. (@armenzg)

Run Android test jobs locally

You can now run Android test jobs on your local machine with Mozharness.

As with any other developer capable Mozharness script, all you have to do is:

An example for this is:
python scripts/android_emulator_unittest.py --cfg android/androidarm.py \
  --test-suite mochitest-gl-1 --blob-upload-branch try \
  --download-symbols ondemand --cfg developer_config.py \
  --installer-url http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/latest-mozilla-central-android-api-9/en-US/fennec-37.0a1.en-US.android-arm.apk \
  --test-url http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/latest-mozilla-central-android-api-9/en-US/fennec-37.0a1.en-US.android-arm.tests.zip


Here's the bug where the work happened.
Here's the documentation on how to run Mozharness as a developer.

Please file a bug under Mozharness if you find any issues.

Here are some other related blog posts:


Disclaimers

Bug 1117954 - I think a different SDK or emulator version is needed to run Android API 10 jobs.

I wish we ran all of our jobs in proper isolation!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

January 05, 2015 08:47 PM

December 31, 2014

Geoff Brown

Firefox for Android Performance Measures – 2014 in Review

Let’s review our performance measures for 2014.

Highlights:

- regressions in 2014 for tcheck2, trobopan, and tspaint.

- improvements in tprovider, tsvgx, and tp4m.

- overall regressions in time to throbber start and stop.

- recent checkerboard regressions apparent on Eideticker.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

tcheck

Jan 2014: 4.7

Dec 2014: 18.2

This test seems to be one of our most frequently regressing tests. We had some good improvements this year, but overall we end the year significantly regressed from where we started. Silver lining: Test results are much less noisy now than they have been all year!

(For details on the December regression, see bug 1111565 / bug 1097318).

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.
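Taking the description above literally, the metric could be computed along these lines. This is a hypothetical reconstruction for illustration, not the actual Talos code; the exact threshold handling may differ:

```python
def trobopan_score(frame_delays_ms, threshold_ms=25):
    # Sum the squares of frame delays that exceed the threshold,
    # so a few long pauses dominate many short ones.
    return sum(d * d for d in frame_delays_ms if d > threshold_ms)
```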

tpan

Jan 2014: 28000

Dec 2014: 62000

Again, there are some wins and losses over the year but we end the year significantly regressed. There is a lot of noise in the results.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

tprovider

Jan 2014: 560

Dec 2014: 520

Very steady performance here with a slight improvement in April carrying through to the end of the year.

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

tsvg

Jan 2014: 6150

Dec 2014: 5900

This is great — we’re seeing the best performance of the year.

tp4m

Generic page load test. Lower values are better.

tp4

Jan 2014: 970

Dec 2014: 855

Wow, even better. This tells me someone out there really cares about our page load performance.

ts_paint

Startup performance test. Lower values are better.

tspaint

Jan 2014: 3700

Dec 2014: 4100

You can’t win them all? It feels like we’re slowly losing ground here.

Note that this test is currently hidden on treeherder; it fails very often – bug 1112697.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

throbstart

I could not get cohesive phonedash graphs for the entire year, since we made so many changes to autophone over the year, but here are views for the last 6 months. It looks like we have some work to do on time to throbber start. Time to throbber stop is better, but we have lost ground there too.

throbstop

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Again, I couldn’t generate good graphs for the whole year, but here are some for the last 3 months.

eide1

eide2

eide3

Eideticker startup tests seem to be performing well.

eide4

eide5

But we’ve had some recent regressions in checkerboarding.

Happy New Year!


December 31, 2014 06:16 PM

December 23, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

December 23, 2014 07:15 AM

December 22, 2014

Armen Zambrano G. (@armenzg)

Run mozharness talos as a developer (Community contribution)

Thanks to our contributor Simarpreet Singh from Waterloo we can now run a talos job through mozharness on your local machine (bug 1078619).

All you have to add is the following:
--cfg developer_config.py 
--installer-url http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/firefox-37.0a1.en-US.linux-x86_64.tar.bz2

To read more about running Mozharness locally go here.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

December 22, 2014 08:10 PM

December 19, 2014

Henrik Skupin

Firefox Automation report – week 45/46 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 45 and 46.

Highlights

In our Mozmill-CI environment we had a couple of frozen Windows machines, which were running with 100% CPU load and 0MB of memory used. Those values came from the vSphere client, and didn’t give us that much information. Henrik checked the affected machines after a reboot, and none of them had any suspicious entries in the event viewer either. But he noticed that most of our VMs were running a very outdated version of the VMware tools. So he upgraded all of them, and activated the automatic install during a reboot. Since then the problem has been gone. If you see something similar for your virtual machines, make sure to check which version of the VMware tools is in use!

Further work has been done on Mozmill CI. We were finally able to get rid of all traces of Firefox 24.0ESR, since it is no longer supported. We also set up our new Ubuntu 14.04 (LTS) machines in staging and production, which will soon replace the old Ubuntu 12.04 (LTS) machines. A list of the changes can be found here.

Besides all that, Henrik has started work on the bump to Jenkins v1.580.1 (LTS), the new and more stable release of Jenkins. Lots of work might be necessary here.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 45 and week 46.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 45 and week 46.

December 19, 2014 11:41 AM

December 18, 2014

Henrik Skupin

Firefox Automation report – week 43/44 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 43 and 44.

Highlights

In preparation for the QA-wide demonstration of Mozmill-CI, Henrik reorganized our documentation to allow everyone a simple local setup of the tool. Along that we did the remaining deployment of latest code to our production instance.

Henrik also worked on the upgrade of Jenkins to latest LTS version 1.565.3, and we were able to push this upgrade to our staging instance for observation. Further he got the Pulse Guardian support implemented.

Mozmill 2.0.9 and Mozmill-Automation 2.0.9 have been released, and if you are curious what is included you want to check this post.

One of our major goals over the next 2 quarters is to replace Mozmill as test framework for our functional tests for Firefox with Marionette. Together with the A-Team Henrik got started on the initial work, which is currently covered in the firefox-greenlight-tests repository. More to come later…

Besides all that work, we had to say goodbye to one of our SoftVision team members: October 29th was the last day for Daniel on the project. So thanks for all your work!

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 43 and week 44.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 43 and week 44.

December 18, 2014 11:35 AM

December 17, 2014

Andrew Halberstadt

How to Consume Structured Test Results

You may not know that most of our test harnesses are now outputting structured logs (thanks in large part to :chmanchester's tireless work). Saying a log is structured simply means that it is in a machine readable format, in our case each log line is a JSON object. When streamed to a terminal or treeherder log, these JSON objects are first formatted into something that is human readable, aka the same log format you're already familiar with (which is why you may not have noticed this).
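For instance, a single structured log line might look like the following; the field names follow the mozlog style, while the values here are invented:

```python
import json

# One line of a structured log is a complete JSON object.
line = ('{"action": "test_status", "test": "test_foo.html", '
        '"subtest": "bar", "status": "PASS", '
        '"thread": "main", "time": 1418848928307}')
record = json.loads(line)
print(record["action"], record["status"])
```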

While this might not seem all that exciting it lets us do many things, such as change the human readable formats and add metadata, without needing to worry about breaking any fragile regex based log parsers. We are now in the process of updating much of our internal tooling to consume these structured logs. This will let us move faster and provide a foundation on top of which we can build all sorts of new and exciting tools that weren't previously possible.

But the benefits of structured logs don't need to be constrained to the Tools and Automation team. As of today, anyone can consume structured logs for use in whatever crazy tools they can think of. This post is a brief guide on how to consume structured test results.

A High Level Overview

Before diving into code, I want to briefly explain the process at a high level.

  1. The test harness is invoked in such a way that it streams a human formatted log to stdout, and a structured log to a file.
  2. After the run is finished, mozharness uploads the structured log to a server on AWS using a tool called blobber. Mozharness stores a map of uploaded file names to blobber urls as a buildbot property. The structured logs are just one of several files uploaded via blobber.
  3. The pulse build exchange publishes buildbot properties, though the messages are based on buildbot events and can be difficult to consume directly.
  4. A tool called pulsetranslator consumes messages from the build exchange, cleans them up a bit and re-publishes them on the build/normalized exchange.
  5. Anyone creates a NormalizedBuildConsumer in pulse, finds the url to the structured log and downloads it.

Sound complicated? Don't worry, the only step you're on the hook for is step 5.

Creating a Pulse Consumer

For anyone not aware, pulse is a system at Mozilla for publishing and subscribing to arbitrary events. Pulse has all sorts of different applications, one of which is receiving notifications whenever a build or test job has finished.

The Setup

First, head on over to https://pulse.mozilla.org/ and create an account. You can sign in with Persona, and then create one or more pulse users. Next you'll need to install the mozillapulse python package. Make sure you have pip installed, then:

$ pip install mozillapulse

As usual, I recommend doing this in a virtualenv. That's it, no more setup required!

The Execution

Creating a pulse consumer is pretty simple. In this example we'll download all logs pertaining to mochitests on mozilla-inbound and mozilla-central. This example depends on the requests package, you'll need to pip install it if you want to run it locally:

import json
import sys
import traceback

import requests

from mozillapulse.consumers import NormalizedBuildConsumer

def run(args=sys.argv[1:]):
    pulse_args = {
        # a string to identify this consumer when logged into pulse.mozilla.org
        'applabel': 'mochitest-log-consumer',

        # each message contains a topic. Only messages that match the topic specified here will
        # be delivered. '#' is a wildcard, so this topic matches all messages that start with
        # 'unittest'.
        'topic': 'unittest.#',

        # durable queues will store messages inside pulse even if your consumer goes offline for
        # a bit. Otherwise, any messages published while the consumer is not explicitly
        # listening will be lost forever. Keep it set to False for testing purposes.
        'durable': False,

        # the user you created on pulse.mozilla.org
        'user': 'ahal',

        # the password you created for the user
        'password': 'hunter1',

        # a callback that will get invoked on each build event
        'callback': on_build_event,
    }


    pulse = NormalizedBuildConsumer(**pulse_args)

    while True:
        try:
            pulse.listen()
        except KeyboardInterrupt:
            # without this ctrl-c won't work!
            raise
        except IOError:
            # sometimes you'll get a socket timeout. Just call listen again and all will be
            # well. This is fairly common and probably not worth logging.
            pass
        except:
            # it is possible for rabbitmq to throw other exceptions. You likely
            # want to log them and move on.
            traceback.print_exc()


def on_build_event(data, message):
    # each message needs to be acknowledged. This tells the pulse queue that the message has been
    # processed and that it is safe to discard. Normally you'd want to ack the message when you know
    # for sure that nothing went wrong, but this is a simple example so I'll just ack it right away.
    message.ack()

    # pulse data has two main properties, a payload and metadata. Normally you'll only care about
    # the payload.
    payload = data['payload']
    print('Got a {} job on {}'.format(payload['test'], payload['tree']))

    # ignore anything not from mozilla-central or mozilla-inbound
    if payload['tree'] not in ('mozilla-central', 'mozilla-inbound'):
        return

    # ignore anything that's not mochitests
    if not payload['test'].startswith('mochitest'):
        return

    # ignore jobs that don't have the blobber_files property
    if 'blobber_files' not in payload:
        return

    # this is a message we care about, download the structured log!
    for filename, url in payload['blobber_files'].iteritems():
        if filename == 'raw_structured_logs.log':
            print('Downloading a {} log from revision {}'.format(
                   payload['test'], payload['revision']))
            r = requests.get(url, stream=True)

            # save the log
            with open('mochitest.log', 'wb') as f:
                for chunk in r.iter_content(1024):
                    f.write(chunk)
            break

    # now time to do something with the log! See the next section.

if __name__ == '__main__':
    sys.exit(run())

A Note on Pulse Formats

Each pulse publisher can have its own custom topics and data formats. The best way to discover these formats is via a tool called pulse-inspector. To use it, type in the exchange and routing key, click Add binding then Start Listening. You'll see messages come in which you can then inspect to get an idea of what format to expect. In this case, use the following:

Pulse Exchange: exchange/build/normalized
Routing Key Pattern: unittest.#
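On that exchange, a message might look roughly like the following. The shape is inferred from the fields the consumer above reads, and all of the values here are invented:

```python
import json

# Illustrative (not authoritative) shape of a build/normalized message.
message = json.loads("""{
  "payload": {
    "tree": "mozilla-inbound",
    "test": "mochitest-1",
    "revision": "abcdef123456",
    "buildtype": "opt",
    "blobber_files": {
      "raw_structured_logs.log": "https://example.com/raw_structured_logs.log"
    }
  }
}""")
payload = message["payload"]
print(payload["test"], payload["tree"])
```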

Consuming Log Data

In the last section we learned how to obtain a structured log. Now we learn how to use it. All structured test logs follow the same structure, which you can see in the mozlog documentation. A structured log is a series of line-delimited JSON objects, so the first step is to decode each line:

lines = [json.loads(l) for l in log.splitlines()]
for line in lines:
    # do something

If you have a large number of log lines, you'll want to use a generator. Another common use case is registering callbacks on specific actions. Luckily, mozlog provides several built-in functions for dealing with these common cases. There are two main approaches, registering callbacks or creating log handlers.
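For the generator case, a minimal sketch (not part of mozlog itself) might look like this:

```python
import json

def iter_log(path):
    # Lazily decode one JSON object per line instead of reading the
    # whole log into memory; blank lines are skipped.
    with open(path) as log:
        for line in log:
            line = line.strip()
            if line:
                yield json.loads(line)
```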

Examples

The rest depends on what you're trying to accomplish. It now becomes a matter of reading the docs and figuring out how to do it. Below are several examples to help get you started.

List all failed tests by registering callbacks:

from mozlog.structured import reader

failed_tests = []
def append_if_failed(log_item):
    if 'expected' in log_item:
        failed_tests.append(log_item['test'])

with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    action_map = { 'test_end': append_if_failed }
    reader.each_log(iterator, action_map)

print('\n'.join(failed_tests))

List the time it took to run each test using a log handler:

import json

from mozlog.structured import reader

class TestDurationHandler(reader.LogHandler):
    test_duration = {}
    start_time = None

    def test_start(self, item):
        self.start_time = item['timestamp']

    def test_end(self, item):
        duration = item['timestamp'] - self.start_time
        self.test_duration[item['test']] = duration

handler = TestDurationHandler()
with open('mochitest.log', 'r') as log:
    iterator = reader.read(log)
    reader.handle_log(iterator, handler)

print(json.dumps(handler.test_duration, indent=2))

How to consume the log is really up to you. The built-in methods can be helpful, but are by no means required. Here is a more complicated example that receives structured logs over a socket, and spawns an arbitrary number of threads to process and execute callbacks on them.

If you have questions, comments or suggestions, don't hesitate to speak up!

Finally, I'd also like to credit Ahmed Kachkach, an intern who not only worked on structured logging in mochitest over the summer, but also created the system that manages pulse users and queues.

December 17, 2014 08:45 PM

Mark Côté

Searching Bugzilla

BMO currently supports five—count ‘em, five—ways to search for bugs. Whenever you have five different ways to perform a similar function, you can be pretty sure the core problem is not well understood. Search has been rated, for good reason, one of the least compelling features of Bugzilla, so the BMO team want to dig in there and make some serious improvements.

At our Portland get-together a couple weeks ago, we talked about putting together a vision for BMO. It’s a tough problem, since BMO is used for so many different things. We did, however, manage to get some clarity around search. Gerv, who has been involved in the Bugzilla project for quite some time, neatly summarized the use cases. People search Bugzilla for only two reasons:

That’s it. The fact that BMO has five different searches, though, means either we didn’t know that, or we just couldn’t find a good way to do one, or the other, or both.

We’ve got the functionality of the first use case down pretty well, via Advanced Search: it helps you assemble a set of criteria of almost limitless specificity that will result in a list of bugs. It can be used to determine what bugs are blocking a particular release, what bugs a particular person has assigned to them, or what bugs in a particular Product have been fixed recently. Its interface is, admittedly, not great. Quick Search was developed as a different, text-based approach to Advanced Search; it can be quicker to use but definitely isn’t any more intuitive. Regardless, Advanced Search fulfills its role fairly well.

The second use of Search is how you’d answer the question, “what was that bug I was looking at a couple weeks ago?” You have some hazy recollection of a bug. You have a good idea of a few words in the summary, although you might be slightly off, and you might know the Product or the Assignee, but probably not much else. Advanced Search will give you a huge, useless result set, but you really just want one specific bug.

This kind of search isn’t easy; it needs some intelligence, like natural-language processing, in order to give useful results. Bugzilla’s solutions are the Instant and Simple searches, which eschew the standard Bugzilla::Search module that powers Advanced and Quick searches. Instead, they do full-text searches on the Summary field (and optionally in Comments as well, which is super slow). The results still aren’t very good, so BMO developers tried outsourcing the feature by adding a Google Search option. But despite Google being a great search engine for the web, it doesn’t know enough about BMO data to be much more useful, and it doesn’t know about new or confidential bugs at all.

Since Bugzilla’s search engines were originally written, however, there have been many advances in the field, especially in FLOSS. This is another place where we need to bring Bugzilla into the modern world; MySQL full-text searches are just not good enough. In the upcoming year, we’re going to look into new approaches to search, such as running different databases in tandem to exploit their particular abilities. We plan to start with experiments using Elasticsearch, which, as the name implies, is very good at searching. By standing up an instance beside the main MySQL db and mirroring bug data over, we can refer specific-bug searches to it; even though we’ll then have to filter based on standard bug-visibility rules, we should have a net win in search times, especially when searching comments.
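As a rough illustration of the idea, the "find that one specific bug" case might translate into an Elasticsearch request body along these lines. The field names, the fuzzy match, and the result size are assumptions for the sketch, not BMO's actual schema or plan:

```python
import json

def specific_bug_query(text, product=None):
    # Hypothetical sketch: a fuzzy full-text match on the bug summary,
    # optionally filtered by product. Visibility rules would still be
    # applied to the results afterwards, as described above.
    body = {
        "query": {
            "bool": {
                "must": [
                    {"match": {"summary": {"query": text,
                                           "fuzziness": "AUTO"}}}
                ]
            }
        },
        "size": 10,
    }
    if product:
        body["query"]["bool"]["filter"] = [{"term": {"product": product}}]
    return json.dumps(body)
```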

In sum, Mozilla developers, we understand your tribulations with Bugzilla search, and we’re on it. After all, we all have a reputation to maintain as the Godzilla of Search Engines!

December 17, 2014 03:37 PM

Henrik Skupin

Firefox Automation report – week 41/42 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 41 and 42.

With the beginning of October we also have some minor changes in responsibilities. While our team members from SoftVision mainly take care of any kind of Mozmill test-related requests and related CI failures, Henrik is doing all the rest, including the framework and the maintenance of Mozmill CI.

Highlights

With the support for all locales testing in Mozmill-CI for any Firefox beta and final release, Andreea finished her blacklist patch. With that we can easily mark locales not to be tested, and get rid of the long white-list entries.

We spun up our first OS X 10.10 machine in our staging environment of Mozmill CI for testing the new OS version. We hit a couple of issues, especially some incompatibilities with mozrunner, which need to be fixed first before we can get started in running our tests on 10.10.

In the second week of October Teodor Druta joined the Softvision team, and he will assist all the others with working on Mozmill tests.

But we also had to fight a lot with Flash crashes on our testing machines: we saw about 23 crashes per day on Windows machines, all with the regular release version of Flash, which we had re-installed because a crash we had seen before was fixed. But the healthy period didn’t last long, and we had to revert to the debug version without protected mode. Let’s see how long we have to keep the debug version active.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 41 and week 42.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 41 and week 42.

December 17, 2014 09:22 AM

December 16, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

December 16, 2014 06:47 AM

December 12, 2014

Joel Maher

Tracking Firefox performance as we uplift – the volume of alerts we get

For the last year, I have been focused on ensuring we look at the alerts generated by Talos.  For the last 6 months I have also looked a bit more carefully at the uplifts we do every 6 weeks.  In fact we wouldn’t generate alerts when we uplifted to beta because we didn’t run enough tests to verify a sustained regression in a given time window.

Let’s look at the data, specifically the volume of alerts:

Trend of improvements/regressions from Firefox 31 to 36 as we uplift to Aurora


This is a stacked graph; you can interpret it as showing that Firefox 32 had a lot of improvements and Firefox 33 had a lot of regressions. I think what is more interesting is how many performance regressions are fixed or added when we go from Aurora to Beta. There is minimal data available for Beta. This next image compares alert volume for the same release on Aurora and then on Beta:

Side by side stacked bars for the regressions going into Aurora and then going onto Beta.


One way to interpret this above graph is to see that we fixed a lot of regressions on Aurora while Firefox 33 was on there, but for Firefox 34, we introduced a lot of regressions.

The above is just my interpretation of the data; here are links to a more fine-grained view:

As always, if you have questions, concerns, praise, or other great ideas- feel free to chat via this blog or via irc (:jmaher).


December 12, 2014 03:06 PM

December 10, 2014

Joel Maher

Language hacking – changing the choice of words we use

Have you ever talked to someone who continues to use the same word over and over again?  Then you find that many people you chat with end up using the same choice of words quite frequently.  My wife and I see this quite often, usually with the words ‘amazing’, ‘cool’, and ‘hope’.

Let’s focus on the word “hope”.  There are many places where hope is appropriate, but I find that most people misuse the word.  For example:

I hope to show up at yoga on Saturday

I heard this sentence and wonder:

What could be said is:

I am planning to show up at yoga on Saturday

or:

I have a lot of things going on, if all goes well I will show up at yoga on Saturday

or:

I don’t want to hurt your feelings by saying no, so to make you feel good I will be non committal about showing up to yoga on Saturday even though I have no intentions.

There are many ways to replace the word “hope”, and all of them achieve a clearer communication between two people.

Now with that said, what am I hacking?  For the last few months I have been reducing (almost removing) the words ‘awesome’, ‘amazing’, ‘hate’, and ‘hope’ from my vocabulary.

Why am I writing about this?  I might as well be in the open about this and invite others to join me in being deliberate about how we speak.  Once a month I will post a new word, feel free to join me in this effort and see how thinking about what you say and how you say it impacts your communications.

Also please leave comments on this post about specific words that you feel are overused – I could use suggestions of words.


December 10, 2014 05:40 PM

December 09, 2014

Armen Zambrano G. (@armenzg)

Running Mozharness in developer mode will only prompt once for credentials

Thanks to Mozilla's contributor kartikgupta0909 we now only have to enter LDAP credentials once when running the developer mode of Mozharness.

He accomplished it in bug 1076172.

Thank you Kartik!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

December 09, 2014 09:43 PM

Joel Maher

5 days in Portland with Mozillians and 10 great things that came from it

I took a lot of notes in Portland last week.  One might not know that based on the fact that I talked so much my voice ran out of steam by the second day.  Either way, in chatting with some co-workers yesterday about what we took away from Portland, I realized that there is a long list of awesomeness.

Let me caveat this by saying that some of these ideas have been talked about in the past, but despite our efforts to work with others and field interesting and useful ideas, there is a big list of great things that came to light while chatting in person:

Those are 10 specific topics that, despite everybody knowing how to contact me or the A-Team to share great ideas or frustrations, only came out of being in the same place at the same time.

Thinking through this, when I see these folks in a real office while working from there for a few days or a week, the conversations seem smaller and not as intense — usually just small talk whilst waiting for a build to finish.  I believe the idea that we are not expected to focus on our day-to-day work, and instead make plans for the future, is the real innovation behind getting these topics surfaced.


December 09, 2014 08:00 PM

December 08, 2014

Armen Zambrano G. (@armenzg)

Test mozharness changes on Try

You can now push to your own mozharness repository (even a specific branch) and have it be tested on Try.

A few weeks ago we developed mozharness pinning (aka mozharness.json), and recently we enabled it for Try. Read the blog post to learn how to make use of it.

NOTE: This currently only works for desktop, mobile and b2g test jobs. More to come.
NOTE: We only support named branches, tags, or specific revisions. Do not use bookmarks, as they don't work.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

December 08, 2014 06:59 PM

December 07, 2014

Geoff Brown

New Android job names on treeherder: Update your try pushes!

Treeherder builds are now producing two separate APKs for Android arm, each targeting a different Android API range — see bug 1073772 for discussion and details.

Instead of seeing:

old

You should now see:

new

Notice that instead of a single Android Opt arm build (used for both Android 2.3 opt and Android 4.0 opt tests), there are now two separate builds: Android 2.3 API9 opt and Android 4.0 API10+ opt. There is a very similar change for Android Debug arm builds. Android x86 builds are unchanged.

This change affects try pushes for Android arm, because the builder names have changed. Instead of:

try: -b o -p android

you should use:

try: -b o -p android-api-9,android-api-11 …

Unfortunately, “-p android” no longer matches any builder name, so if you forget to update your try syntax, your push will not build on Android and you won’t run any tests — oops!

oops


December 07, 2014 02:58 PM

December 06, 2014

David Burns

Bugsy 0.4.0 - More search!

I have just released the latest version of Bugsy. It allows you to search bugs via change-history fields within a certain timeframe, so you can find things like bugs created within the last week, as shown below.

I have updated the documentation to get you started.

>>> bugs = bugzilla.search_for\
                    .keywords('intermittent-failure')\
                    .change_history_fields(["[Bug creation]"])\
                    .timeframe("2014-12-01", "2014-12-05")\
                    .search()

You can see the Changelog for more details.

Please raise issues on GitHub

December 06, 2014 12:05 AM

November 28, 2014

Geoff Brown

Firefox for Android Performance Measures – November check-up

My monthly review of Firefox for Android performance measurements for November.

Highlights:

- Significant improvements in tcheck2, tsvgx, and tp4m.

- Phonedash and Eideticker measurements holding steady.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

17.8 (start of period) – 7.6 (end of period)

Significant improvement Nov 5.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

30000 (start of period) – 30000 (end of period)

Significant noise noted.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6100 (start of period) – 4500 (end of period).

Significant improvement Nov 24.

tp4m

Generic page load test. Lower values are better.

880 (start of period) – 800 (end of period).

Improvement Nov 24. This is the best score we have seen all year:

tp4-annual

ts_paint

Startup performance test. Lower values are better.

3850 (start of period) – 3950 (end of period).

No specific regression identified.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Time to throbber start and throbber stop seem steady this month.

throbstart

throbstop

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

eide1

eide2

eide3


November 28, 2014 05:41 PM

November 25, 2014

Henrik Skupin

Firefox Automation report – week 39/40 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 39 and 40.

Highlights

One of our goals for last quarter was to get locale testing enabled in Mozmill-CI for each and every supported locale of Firefox beta and release builds. So Cosmin investigated the timing and other possible side-effects, which can happen when you test about 90 locales across all platforms! The biggest change we had to make was to the retention policy for logs of executed builds, due to disk space issues. We now delete logs not only after a maximum number of builds, but also after 3 full days, which gives us enough time to investigate test failures. Once that was done we were able to enable the remaining 60 locales. For details of all the changes necessary, you can have a look at the mozmill-ci pushlog.

During those two weeks Henrik spent his time on finalizing the Mozmill update tests to support the new signed builds on OS X. Once that was done he also released the new mozmill-automation 2.0.8.1 package.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 39 and week 40.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 39 and week 40.

November 25, 2014 02:22 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

November 25, 2014 07:41 AM

November 24, 2014

Armen Zambrano G. (@armenzg)

Pinning mozharness from in-tree (aka mozharness.json)

Since mozharness came around 2-3 years ago, we have had the same issue where we test a mozharness change against the trunk trees, land it and get it backed out because we regress one of the older release branches.

This is due to the nature of the mozharness setup where once a change is landed all jobs start running the same code and it does not matter on which branch that job is running.

I have recently landed some code that is now active on Ash (and soon on Try) that will read a manifest file pointing your jobs to the right mozharness repository and revision. We call this process "pinning mozharness". In other words, we fix an external factor of our job execution.

This will allow you to point your Try pushes to your own mozharness repository.

In order to pin your jobs to a repository/revision of mozharness you have to change a file called mozharness.json which indicates the following two values:
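For illustration, a minimal manifest might look like the following. The exact key names here are an assumption on my part, so check the mozharness.json in your tree:

```json
{
    "repo": "https://hg.mozilla.org/build/mozharness",
    "revision": "production"
}
```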


This is similar to the concept introduced by talos.json, which locks every job to a specific revision of talos. The original version of it landed in 2011.

Even though we have had a similar concept since 2011, that doesn't mean it was as easy to make it happen for mozharness. Let me explain why:

Coming up:
  • Enable on Try
  • Free up Ash and Cypress
    • They have been used to test custom mozharness patches and the default branch of Mozharness (pre-production)
Long term:
  • Enable the feature on all remaining Gecko trees
    • We would like to see this run at scale for a bit before rolling it out
    • This will allow mozharness changes to ride the trains
If you are curious, the patches are in bug 791924.

Thanks to Rail for all his patch reviews and to Jordan for sparking me to tackle this.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

November 24, 2014 05:35 PM

November 18, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

November 18, 2014 06:51 AM

November 11, 2014

Henrik Skupin

Firefox Automation report – week 37/38 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 37 and 38.

Highlights

After 7 months without a new release we finally were able to release mozdownload 1.12 with a couple of helpful enhancements and fixes.

We released Mozmill 2.0.7 and mozmill-automation 2.0.7, mainly to add support for the v2-signed Firefox application bundles on OS X. Sadly, we quickly had to follow up with a 2.0.8 release of both tools, because a late change in the JS engine completely busted Mozmill. More details can be found in my appropriate blog post.

We were even able to finally release Memchaser 0.6, which is fixing a couple of outstanding bugs and brought in the first changes to fully support Australis.

One of our goals was to get the failure rate of Mozmill tests for release and beta candidate builds under 5%. To calculate that, Cosmin wrote a little script which pulls the test report data for a specific build from our dashboard and outputs the failure rate per executed testrun. We were totally happy to see that the failure rate for all Mozmill tests was around 0.027%!
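The calculation itself is simple. Here is a minimal sketch; the per-testrun data shape (`passed`/`failed` counts) is hypothetical and the real dashboard report format differs:

```python
def failure_rate(testruns):
    """Return the percentage of failed tests across a set of testruns.

    Each testrun is assumed to be a dict with 'passed' and 'failed'
    counts -- the actual dashboard report format is richer than this.
    """
    passed = sum(run["passed"] for run in testruns)
    failed = sum(run["failed"] for run in testruns)
    total = passed + failed
    return 100.0 * failed / total if total else 0.0

runs = [
    {"passed": 1200, "failed": 1},
    {"passed": 2500, "failed": 0},
]
print("failure rate: %.3f%%" % failure_rate(runs))  # prints: failure rate: 0.027%
```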

During the merge process for the Firefox 32 release, Andrei noticed some test inconsistencies between our named branches in the Mozmill-Tests repository. Some changes were never backported and were only present on the default branch for a very long time. He fixed that and also updated our documentation for branch merges.

Something else worth highlighting is bug 1046645. Here our Mozmill tests found a case where Firefox does not correctly show the SSL status of a website if you navigate quickly enough. The fix for this regression, caused by about:newtab, even made it into the release notes.

Last but not least Andreea started planning our Automation Training day for Q3. So she wrote a blog post about this event on QMO.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 37 and week 38.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 37 and week 38.

November 11, 2014 10:58 AM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

November 11, 2014 06:20 AM

November 08, 2014

Andrew Halberstadt

The New Mercurial Workflow

There's a good chance you've heard something about a new review tool coming to Mozilla and how it will change everything. There's an even better chance you've stumbled across one of gps' blog posts on how we use mercurial at Mozilla.

With mozreview entering beta, I decided to throw out my old mq based workflow and try to use all the latest and greatest tools. That means mercurial bookmarks, a unified mozilla-central, using mozreview and completely expunging mq from my workflow.

Making all these changes at the same time was a little bit daunting, but the end result seems to be a much easier and more efficient workflow. I'm writing the steps I took down in case it helps someone else interested in making the switch. Everything in this post is either repeating the mozreview documentation or one of gps' blog posts, but I figured it might help for a step by step tutorial that puts all the pieces together, from someone who is also a mercurial noob.

Setup

Mercurial

Before starting you need to do a bit of setup. You'll need the mercurial reviewboard and firefoxtree extensions and mercurial 3.0 or later. Luckily you can run:

$ mach mercurial-setup

And hitting 'yes' to everything should get you what you need. Make sure you at least enable the rebase extension. In my case, mercurial > 3.0 didn't exist in my package repositories (Fedora 20) so I had to download and install it manually.

MozReview

There is also some setup required to use the mozreview tool. Follow the instructions to get started.

Tagging the Baseline

Because we enabled the firefoxtree extension, anytime we pull a remote repo resembling Firefox from hg.mozilla.org, a local tag will be created for us. So before proceeding further, make sure we have our baseline tagged:

$ hg pull https://hg.mozilla.org/mozilla-central
$ hg log -r central

Now we know where mozilla-central tip is. This is important because we'll be pulling mozilla-inbound on top later.

Create path Aliases

Edit: Apparently the firefoxtree extension provides built-in aliases so there's no need to do this step. The aliases follow the central, inbound, aurora convention.

Typing the url out each time is tiresome, so I recommend creating path aliases in your ~/.hgrc:

[paths]
m-c = https://hg.mozilla.org/mozilla-central
m-i = https://hg.mozilla.org/integration/mozilla-inbound
m-a = https://hg.mozilla.org/releases/mozilla-aurora
m-b = https://hg.mozilla.org/releases/mozilla-beta
m-r = https://hg.mozilla.org/releases/mozilla-release

Learning Bookmarks

It's a good idea to be at least somewhat familiar with bookmarks before starting. Reading this tutorial is a great primer on what to expect.

Start Working on a Bug

Now that we're all set up and we understand the basics of bookmarks, it's time to get started. Create a bookmark for the feature work you want to do:

$ hg bookmark my_feature

Make changes and commit as often as you want. Make sure at least one of the commits has the bug number associated with your work; mozreview will use this later:

... do some changes ...
$ hg commit -m "Bug 1234567 - Fix that thing that is broken"
... do more changes ...
$ hg commit -m "Only one commit message needs a bug number"

Maybe you want to pull central again and rebase your changes on top of it. No problem:

$ hg update central
$ hg pull central
$ hg rebase -b my_feature -d central

Pushing a Bookmark for Review

When you are ready for review, all you do is:

$ hg update my_feature
$ hg push review

Mercurial will automatically push the currently active bookmark to the review repository. This is equivalent (no need to update):

$ hg push -r my_feature review

At this point you should see some links being dumped to the console, one for each commit in your bookmark as well as a parent link to the overall review. Open this last link to see your review request. At this stage the review is unpublished; you'll need to add some reviewers and publish it before anyone else can see it. Instead of explaining how to do this, I highly recommend reading the mozreview instructions carefully. I would have saved myself a lot of time if I had just paid closer attention to them.

Once published, mozreview will automatically update the associated bug with appropriate information.

Fixing Review Comments

If all went well, someone has received your review request. If you need to make some follow up changes, it's super easy. Just activate the bookmark, make a new commit and re-push:

$ hg update my_feature
... fix review comments ...
$ hg commit -m "Address review comments"
$ hg push review

Mozreview will automatically detect which commits have been pushed to the review server and update the review accordingly. In the reviewboard UI it will be possible for reviewers to see both the interdiff and the full diff by moving a commit slider around.

Pushing to Inbound

Once you've received the r+, it's time to push to mozilla-inbound. Remember that firefoxtree makes local tags when you pull from a remote repo on hg.mozilla.org, so let's do that:

$ hg update central
$ hg pull inbound
$ hg log -r inbound

Next we rebase our bookmark on top of inbound. In this case I want to use the --collapse argument to fold the review changes into the original commit:

$ hg rebase -b my_feature -d inbound --collapse

A file will open in your default editor where you can modify the commit message to whatever you want. In this case I'll just delete everything except the original commit message and add "r=".

And now everything is ready! Verify you are pushing what you expect and push:

$ hg outgoing -r my_feature inbound
$ hg push -r my_feature inbound

Pushing to other Branches

The beauty of this system is that it is trivial to land patches on any tree you want. If I wanted to land my_feature on aurora:

$ hg pull aurora
$ hg rebase -b my_feature -d aurora
$ hg outgoing -r my_feature aurora
$ hg push -r my_feature aurora

Syncing work across Computers

You can use a remote clone of mozilla-central to sync bookmarks between computers. Instead of pushing with -r, push with -B. This will publish the bookmark on the remote server:

$ hg push -B my_feature <my remote mercurial server>

From another computer, you can pull the bookmark in the same way:

$ hg pull -B my_feature <my remote mercurial server>

WARNING: As of this writing, Mozilla's user repositories are publishing! This means that when you push a commit to them, they will mark the commit as public on your local clone, so you won't be able to push it to either the review server or mozilla-inbound. If this happens, you need to run:

$ hg phase -f --draft <rev>

This is enough of a pain that I'd recommend avoiding user repositories for this purpose unless you can figure out how to make them non-publishing.

Edit: Mathias points out that to make a repo non-publishing, you need to add:

[phases]
publish=false

to your <repo>/.hg/hgrc file. I don't think normal users have permission to do this for user repositories, but you should do it if you can.

Conclusion

I'll need to play around with things a little more, but so far everything has been working exactly as advertised. Kudos to everyone involved in making this workflow possible!

November 08, 2014 07:45 AM

November 06, 2014

Armen Zambrano G. (@armenzg)

Setting buildbot up a-la-releng (Create your own local masters and slaves)

buildbot is what Mozilla's Release Engineering uses to run the infrastructure behind tbpl.mozilla.org.
buildbot assigns jobs to machines (aka slaves) through hosts called buildbot masters.

All the different repositories and packages needed to set up buildbot are installed through Puppet, and I'm not aware of a way of setting up my local machine through Puppet (I doubt I would want to do that!).
I managed to set this up a while ago by hand [1][2] (it was even more complicated in the past!), however, these one-off attempts were not easy to keep up-to-date and isolated.

I recently landed a few scripts that make it trivial to set up as many buildbot environments as you want, all isolated from each other.

All the scripts have been landed under the "community" directory under the "braindump" repository:
https://hg.mozilla.org/build/braindump/file/default/community

The main two scripts:

If you call create_community_slaves_and_masters.sh with -w /path/to/your/own/workdir you will have everything set up for you. From there on, all you would have to do is this:
  • cd /path/to/your/own/workdir
  • source venv/bin/activate
  • buildbot start masters/test_master (for example)
  • buildslave start slaves/test_slave
Each paired master and slave have been setup to talk to each other.

I hope this is helpful for people out there. It's been great for me when I contribute patches for buildbot (bug 791924).

As always in Mozilla, contributions are welcome!

PS 1 = Only tested on Ubuntu. If you want to port this to other platforms, please let me know and I can give you a hand.

PS 2 = I know that there is a repository with docker images called "tupperware"; however, I had this set of scripts in the works for a while. Perhaps someone wants to figure out how to set up a similar process through the docker images.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

November 06, 2014 02:02 PM

Henrik Skupin

Firefox Automation report – week 35/36 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 35 and 36.

Highlights

Due to a lot of Mozmill test failures related to add-on installation and search, we moved from the addons-dev.allizom.org website to the staging website located at addons.allizom.org. Since then we have been experiencing far fewer test failures; the remaining ones are most likely network related.

In order to keep up with all test failures, and requests for new tests we started our Bug triage meeting on a weekly basis on Friday. For details we have created an etherpad.

If you are interested in helping us with Mozmill tests, you can now find a list of mentored and good first bugs on the bugsahoy web page.

Because of the app bundle changes on OS X, which were necessary due to the v2 signing requirements, we had to fix and enhance a couple of our automation tools. Henrik updated mozversion, mozmill, and mozmill-automation. We were close to releasing Mozmill 2.0.7.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 35 and week 36.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 35 and week 36.

November 06, 2014 10:08 AM

November 04, 2014

David Burns

WebDriver Face To Face - TPAC 2014

Last week was the 2014 W3C TPAC. For those that don't know, TPAC is a conference where a number of W3C working groups get together in the same venue. This allows for a great amount of discussions between groups and also allows people to see what is coming in the future of the web.

The WebDriver Working Group was part of TPAC this year, like previous years, and there was some really great discussions.

The main topics that were discussed were:

The meeting minutes for Thursday and Friday

November 04, 2014 09:57 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

note: there was an issue with sending of encrypted email to recipients with s/mime keys. this resulted in a large backlog of emails which was cleared following today’s push.

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

November 04, 2014 07:37 AM

October 31, 2014

Geoff Brown

Firefox for Android Performance Measures – September/October check-up

I skipped my monthly review for September, so here is a review of Firefox for Android performance measurements for September and October. Highlights:

- significant regression in tcheck2

- minor regressions in tp4m, tspaint

- improvements in trobopan, tsvgx

- checkerboard regressions in eideticker

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

check2

12 (start of period) – 17.8 (end of period)

Significant regression Oct 7 – bug 1086642.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 30000 (end of period)

Distinct improvement around Oct 8.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6100 (end of period).

Distinct improvement October 4.

tp4m

Generic page load test. Lower values are better.

850 (start of period) – 880 (end of period).

Regression on October 22 – bug 1088669.

ts_paint

Startup performance test. Lower values are better.

tspaint

3850 (start of period) – 3850 (end of period).

Regression on Oct 28 – bug 1091664.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

throbberstart

Note the changes in throbber start (and corresponding improvements in throbber stop, below) around Oct 1 — bug 888482. Changes around Oct 28 are being investigated in bug 1091664.

throbberstop

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

cnn

cnn2

cnn3

dirty

taskjs


October 31, 2014 08:42 PM

October 30, 2014

Joel Maher

A case of the weekends?

Case of the Mondays

What was famous 15 years ago as a case of the Mondays has manifested itself in Talos. In fact, I wonder why I get so many regression alerts on Monday as compared to other days. More to the point, we have less noise in our Talos data on weekends.

Take for example the test case tresize:

linux32,

* in fact we see this on other platforms as well linux32/linux64/osx10.8/windowsXP

30 days of linux tresize

Many other tests exhibit this.  What is different about weekends?  Are there just fewer data points?

I do know our volume of tests goes down on weekends, mostly as a side effect of fewer patches being landed on our trees.
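One quick sanity check on that theory is to bucket samples by day type and compare their spread. A minimal sketch, with an entirely hypothetical data shape (real samples would be pulled from graphs.mozilla.org first):

```python
from datetime import datetime
from statistics import pstdev

def noise_by_daytype(points):
    """Split (date-string, value) samples into weekday and weekend
    buckets and return the spread (population stdev) of each."""
    weekday, weekend = [], []
    for day, value in points:
        is_weekend = datetime.strptime(day, "%Y-%m-%d").weekday() >= 5
        (weekend if is_weekend else weekday).append(value)
    return pstdev(weekday), pstdev(weekend)

# Hypothetical tresize-like samples; Oct 24 2014 was a Friday.
samples = [
    ("2014-10-24", 16.1), ("2014-10-25", 15.0), ("2014-10-26", 15.1),
    ("2014-10-27", 14.2), ("2014-10-28", 16.8), ("2014-10-29", 13.9),
]
wd_noise, we_noise = noise_by_daytype(samples)
print("weekday noise:", wd_noise, "weekend noise:", we_noise)
```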

Here are some ideas I have to debug this more:

If you have ideas on how to uncover this mystery, please speak up.  I would be happy to have this gone and make any automated alerts more useful!


October 30, 2014 04:29 PM

October 28, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

October 28, 2014 06:00 AM

October 26, 2014

Andreas Tolfsen

Hi, Mozilla!

It's now been well over four months since I joined Mozilla, and this is the first public account of what the whole experience has been like.

But besides joining Mozilla as a staff engineer I've also gotten engaged, moved to a new country, travelled two and a half times around the globe, and generally had my life turned upside-down in so many unimaginable ways. It's been quite a year.

Joining the Mozilla project was an easy decision because it's all about those things I'm most passionate about: Access to knowledge, platform openness, and driving innovation on the web.

Access to the web determines who may participate in a digital society. Securing this right for our and for future generations is in part about which freedoms the browser vendors and the web platform gives you. In this context Mozilla is playing a more important part now than ever before.

Being able to work together with thousands of other contributors in the open is nothing but an enormous privilege. Yet in a way I'm also very humble to have been given the chance to contribute full-time. Because of the nature of the project, this is a big responsibility.

Because of the somewhat draconian contract I had with my previous employer barring me from contributing to competing browsers, my only regret is not having been able to make this move sooner.

I'll primarily be working on tools and automation code, and so far I've had 40 patches land in mozilla-central, plus a dozen or so more in deliverables and external projects. It's been a very exhilarating experience so far.

I've also had the opportunity to travel quite a bit this past year. My first week of work was in the Toronto office where I got to meet parts of my new team. It's fair to say that one of them is a bit of an arse horse.

Horseplay.

Joining Mozilla has been nothing short of a great adventure so far. I originally intended to write about my experience joining much sooner, but time has just flown by. I'm also very excited about some of the things my team and I are working on, and I have to call out the W3C WebDriver specification in particular.

These four and a half months have been extremely busy, with lots of momentum and basically getting stuff done. I'm also grateful for everyone putting up with all the (maybe not so) stupid questions I've had about the Mozilla way, infrastructure, and processes thus far!

October 26, 2014 11:30 PM

October 16, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

October 16, 2014 08:50 AM

October 02, 2014

Henrik Skupin

Firefox Automation report – week 33/34 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 33 and 34.

Highlights

To make sure that our weekly meetings are more visible to our community, we got them added to the community calendar. If you are interested in what's going on for Firefox Automation, you are welcome to join our team meeting on Mondays.

In regards to the Mozmill project, Henrik landed his patch which makes Mozmill more descriptive in terms of unexpected application shutdowns. Especially in the past weeks we have seen that Firefox does not restart as expected, but simply quits. Bug 1057246 is filed for the underlying problem. So with the patch landed, Mozmill will log that correctly in the results. Besides that, we can also better see when a crash happens, or when a quit is not triggered by Mozmill.

For Mozmill CI we landed a couple of enhancements and fixes. The most important one was the addition of 20 new locales for testing beta and release builds of Firefox across supported platforms. That means we now cover 30 of about 95 active locales. To cover them all, a good amount of follow-up work is still necessary. We also immediately stopped running add-on tests for all branches except Nightly builds, to save time on our machines.

Henrik also continued working on PuppetAgain integration for our staging and production CI systems. One of the blockers was the missing proxy support, but with the landing of the patch on bug 1050268 all proxy-related work should now be done.

We also made progress on the continuous integration for TPS tests. The implementation for Coversheet got far enough that we made the Jenkins branch the active master. There are still features to implement and issues to fix before the Jenkins-driven CI can replace the old hand-made one.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 33 and week 34.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 33 and week 34.

October 02, 2014 01:56 PM

October 01, 2014

Henrik Skupin

Firefox Automation report – week 31/32 2014

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 31 and 32. It's a bit less than usual, mainly because many of us were on vacation.

Highlights

The biggest improvement during week 32 was the set of fixes for the TPS tests. Cosmin spent a bit of time investigating the remaining underlying issues and got them fixed. Since then we have had a constantly green testrun, which is fantastic.

While development for the new TPS continuous integration system continued, we were blocked for a couple of days by the outage of restmail.net due to a domain move. After the DNS entries got fixed, everything was working fine again for Jenkins and Mozilla Pulse based TPS tests.

For Mozmill CI we agreed that the Endurance tests we run across all branches are not that useful, and only take a lot of time to execute – about 2 hours per testrun! Most of the impact from newly landed features will be on Nightly anyway, so Henrik came up with a patch to only run those tests for en-US Nightly builds.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 31 and week 32.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 32. There was no meeting in week 31.

October 01, 2014 04:16 PM

September 30, 2014

Andrew Halberstadt

How many tests are disabled?

tl;dr Look for reports like this in the near future!

At Mozilla, platform developers are culturally bound to tbpl. We spend a lot of time staring at those bright little letters, and their colour can mean the difference between hours, days or even weeks of work. With so many people performing over 420 pushes per day, all watching, praying, rejoicing and cursing, it's paramount that the whole process operates like a well oiled machine.

So when a test starts intermittently failing, and there aren't any obvious changesets to blame, it'll often get disabled in the name of keeping things running. A bug will be filed, some people will be cc'ed, and more often than not, it will languish. People who really care about tests know this. They have an innate and deep fear that there are tests out there that would catch major and breaking regressions, but for the fact that they are disabled. Unfortunately, there was never a good way to see, at a high level, which tests were disabled for a given platform. So these people who care so much have to go about their jobs with a vague sense of impending doom. Until now.

A Concrete Sense of Impending Doom

Test Informant is a new service which aims to bring some visibility into the state of tests for a variety of suites and platforms. It listens to pulse messages from mozilla-central for a variety of build configurations, downloads the associated tests bundle, parses as many manifests as it can and saves the results to a mongo database.

There is a script that queries the database and can generate reports (e.g. like this), including how many tests have been enabled or disabled over a given period of time. This means instead of a vague sense of impending doom, you can tell at a glance exactly how doomed we are.

There are still a few manual steps required to generate and post the reports, but I intend to fully automate the process (including a weekly digest link posted to dev.platform).
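To make the reporting step concrete, here is a minimal sketch of the kind of per-suite summary such a report is built from. The manifest schema (a "disabled" key on skipped test entries) and the function name are assumptions for illustration, not Test Informant's actual data model:

```python
# Hypothetical sketch: count enabled vs. disabled tests per suite, as a
# Test Informant-style report would. The "disabled" key per entry is an
# assumed schema, loosely modelled on how manifests annotate skipped tests.

def summarize(tests):
    """Group parsed manifest entries by suite and count enabled/disabled."""
    report = {}
    for test in tests:
        suite = report.setdefault(test["suite"], {"enabled": 0, "disabled": 0})
        suite["disabled" if test.get("disabled") else "enabled"] += 1
    return report

parsed = [
    {"suite": "mochitest", "name": "test_foo.html"},
    {"suite": "mochitest", "name": "test_bar.html", "disabled": "bug 123456"},
    {"suite": "xpcshell", "name": "test_baz.js"},
]
print(summarize(parsed))
# {'mochitest': {'enabled': 1, 'disabled': 1}, 'xpcshell': {'enabled': 1, 'disabled': 0}}
```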

Over the Hill and Far Away

There are a number of improvements that can be made to this system. We may or may not implement them based on the initial feedback we get from these reports. Possible improvements include:

There are also some known limitations:

If you would like to contribute, or just take a look at the source, it's all on github.

As always, let me know if you have any questions!

September 30, 2014 02:16 PM

September 29, 2014

William Lachance

Using Flexbox in web applications

Over the last few months, I discovered the joy that is CSS Flexbox, which solves the “how do I lay out this set of divs horizontally or vertically” problem. I’ve used it in three projects so far:

When I talk to people about their troubles with CSS, layout comes up really high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, involving hacks with floating divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled “Give up and use tables”.

Flexbox has pretty much put an end to these problems for me. There’s no longer any need to “give up and use tables” because using flexbox is pretty much just *like* using tables for layout, just with more uniform and predictable behaviour. :) It’s so great. I think we’re pretty close to Flexbox being supported across all the major browsers, so it’s fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.

To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.

September 29, 2014 03:07 PM

September 25, 2014

Armen Zambrano G. (@armenzg)

Making mozharness easier to hack on and try support

Yesterday, we presented a series of proposed changes to Mozharness at the bi-weekly meeting.

We're mainly focused on making it easier for developers and allowing for further flexibility.
We will initially focus on the testing side of the automation and lay the groundwork for further improvements down the line.

The set of changes discussed for this quarter are:

  1. Move remaining set of configs to the tree - bug 1067535
    • This makes it easier to test harness changes on try
  2. Read more information from the in-tree configs - bug 1070041
    • This increases the number of harness parameters we can control from the tree
  3. Use structured output parsing instead of regular where it applies - bug 1068153
    • This is part of a larger goal where we make test reporting more reliable, easy to consume and less burdening on infrastructure
    • It establishes uniform criteria for setting a job status based on a test result that depend on structured log data (json) rather than regex-based output parsing
    • "How does a test turn a job red or orange?" 
    • We will then have a simple answer that is the same for all test harnesses
  4. Mozharness try support - bug 791924
    • This will allow us to lock which repo and revision of mozharness is checked out
    • This isolates mozharness changes to a single commit in the tree
    • This gives us try support for user repos (freedom to experiment with mozharness on try)


Even though we feel the pain of #4, we decided that #1 & #2 give developers immediate value, while for #4 we at least know our painful workarounds.
I don't know if we'll complete #4 this quarter; however, we are committed to the first three.
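To illustrate the idea behind #3, here is a toy sketch of deriving a job's status from structured (JSON) log records instead of regex-matching raw output. The record fields and the exact criteria here are assumptions for illustration, not the final design:

```python
import json

# Illustrative sketch of structured-output parsing: a job turns "orange"
# when any test result differs from its expectation, based on JSON log
# records rather than regexes over raw text. Field names are assumed.

def job_status(log_lines):
    """Return 'orange' if any test result was unexpected, else 'green'."""
    for line in log_lines:
        record = json.loads(line)
        if record.get("action") in ("test_status", "test_end"):
            expected = record.get("expected")
            if expected is not None and record.get("status") != expected:
                return "orange"
    return "green"

log = [
    '{"action": "test_end", "test": "test_a", "status": "OK", "expected": "OK"}',
    '{"action": "test_end", "test": "test_b", "status": "FAIL", "expected": "PASS"}',
]
print(job_status(log))  # orange
```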

If you want to contribute to the longer term vision on that proposal please let me know.


In the following weeks we will have more updates with regards to implementation details.


Stay tuned!



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

September 25, 2014 07:42 PM

September 24, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

September 24, 2014 04:18 AM

September 17, 2014

Henrik Skupin

mozdownload 1.12 has been released

The Firefox Automation team would like to announce the release of mozdownload 1.12. After seven months without a release of our universal download tool, a couple of nice new features and bug fixes make this one even more useful. You can upgrade your installation easily via pip, or by downloading it from PyPI.

Changes in 1.12

September 17, 2014 10:11 PM

Mozmill 2.0.7 and 2.0.8 have been released

The Firefox Automation team would like to announce the release of Mozmill 2.0.7 and Mozmill 2.0.8. Both versions had to be released in such a short time frame to ensure continuing support for Firefox. Some recent changes in Firefox Nightly broke Mozmill, or at least made it misbehave. If you run tests with Mozmill, make sure to upgrade to the latest version. You can do this via PyPI, or simply download the already pre-configured environment.

Changes in 2.0.7

Changes in 2.0.8

Please keep in mind that Mozmill 2.0 does not support electrolysis (e10s) builds of Firefox yet. We are working hard to get full support for e10s added, and hope it will be done by the next version bump in mid-October.

Thanks everyone who was helping with those releases!

September 17, 2014 10:04 PM

Byron Jones

we’re changing the default search settings for advanced search

at the end of this week we will change the default search settings on bugzilla.mozilla.org’s advanced search page:

resolution-duplicate

the resolution DUPLICATE will no longer be selected by default – only open bugs will be searched if the resolution field is left unmodified.

this change will not impact any existing saved searches or queries.

 

DUPLICATE has been a part of our default query for as long as i can remember, and was included to accommodate using that form to search for existing bugs.

since the addition of “possible duplicates” to the bug creation workflow, the importance of searching duplicates has lessened, and returning duplicates by default to advanced users is more of a hindrance than a help.  the data reflects this – the logs indicate that over august less than 4% of advanced queries included DUPLICATE as a resolution.

 

this change is being tracked in bug 1068648.


Filed under: bmo

September 17, 2014 03:14 PM

September 16, 2014

Armen Zambrano G. (@armenzg)

Which builders get added to buildbot?

To add/remove jobs on tbpl.mozilla.org, we have to modify buildbot-configs.

Making changes can be learnt by looking at previous patches; however, there's a bit of an art to getting it right.

I just landed a script that sets up buildbot for you inside of a virtualenv and you can pass a buildbot-config patch and determine which builders get added/removed.

You can run this by checking out braindump and running something like this:
buildbot-related/list_builder_differences.sh -j path_to_patch.diff

NOTE: This script does not check that the job has all the right parameters once live (e.g. you forgot to specify the mozharness config for it).
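Conceptually, the script's output boils down to a set difference between the builder lists generated with and without the patch applied. A toy illustration (the real script derives the lists from a buildbot checkout; the builder names below are made up):

```python
# Sketch of the core idea: compare builder name lists before/after a
# buildbot-configs patch and report what was added or removed.

def builder_diff(before, after):
    before, after = set(before), set(after)
    return {"added": sorted(after - before), "removed": sorted(before - after)}

diff = builder_diff(
    before=["Linux mozilla-inbound build", "Win7 mozilla-inbound test mochitest-1"],
    after=["Linux mozilla-inbound build", "Linux mozilla-inbound test mochitest-e10s"],
)
print(diff)
# {'added': ['Linux mozilla-inbound test mochitest-e10s'], 'removed': ['Win7 mozilla-inbound test mochitest-1']}
```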

Happy hacking!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

September 16, 2014 03:26 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

September 16, 2014 06:12 AM

September 15, 2014

William Lachance

mozregression 0.24

I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:

  1. Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer-grained regression ranges. E.g. if you’re pretty sure a regression occurred on fx-team, you can do something like:

    mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14

  2. Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the root cause of the bug is still open (it’s a bit tricky to match mozilla-central commits to that of other branches) but I think this most recent fix should make things work in 99.9% of cases. Let me know if I’m wrong.
  3. Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly
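The parallel-download idea in (3) can be sketched with concurrent.futures: fetch the build metadata for many inbound changesets concurrently instead of one at a time. `fetch_build_info` below is a made-up stand-in for whatever performs the real HTTP request, not mozregression's actual API:

```python
import concurrent.futures

def fetch_build_info(changeset):
    # Placeholder for an HTTP request to the build archive; in reality this
    # would hit the server and parse the response for the given changeset.
    return {"changeset": changeset, "build_url": "https://archive.example/%s" % changeset}

def fetch_all(changesets, max_workers=8):
    # map() preserves input order, so results line up with the changesets
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_build_info, changesets))

infos = fetch_all(["abc123", "def456", "0a1b2c"])
print([i["changeset"] for i in infos])  # ['abc123', 'def456', '0a1b2c']
```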

If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.

September 15, 2014 10:02 PM

September 12, 2014

Geoff Brown

Running my own AutoPhone

AutoPhone is a brilliant platform for running automated tests on physical mobile devices.

:bc maintains an AutoPhone instance running startup performance tests (aka “S1/S2 tests” or “throbber start/stop tests”) on a small farm of test phones; those tests run against all of our Firefox for Android builds and results are reported to PhoneDash, available for viewing at http://phonedash.mozilla.org/.

I have used phonedash.mozilla.org for a long time now, and reported regressions in bugs and in my monthly “Performance Check-up” posts, but I have never looked under the covers or tried to use AutoPhone myself — until this week.

All things considered, it is surprisingly easy to set up your own AutoPhone instance and run your own tests. You might want to do this to reproduce phonedash.mozilla.org results on your own computer, or to check for regressions on a feature before check-in.

Here’s what I did to run my own AutoPhone instance running S1/S2 tests against mozilla-inbound builds:

Install AutoPhone:

git clone https://github.com/mozilla/autophone

cd autophone

pip install -r requirements.txt

Install PhoneDash, to store and view results:

git clone https://github.com/markrcote/phonedash

Create a phonedash settings file, phonedash/server/settings.cfg with content:

[database]
SQL_TYPE=sqlite
SQL_DB=yourdb
SQL_SERVER=localhost
SQL_USER=
SQL_PASSWD=

Start phonedash:

python server.py <ip address of your computer>

It will log status messages to the console. Watch that for any errors, and to get a better understanding of what’s happening.

Prepare your device:

Connect your Android phone or tablet to your computer by USB. Multiple devices may be connected. Each device must be rooted. Check that you can see your devices with adb devices — and note the serial number(s) (see devices.ini below).

Configure your device:

cp devices.ini.example devices.ini

Edit devices.ini, changing the serial numbers to your device serial numbers and the device names to something meaningful to you. Here’s my simple devices.ini for one device I called “gbrown”:

[gbrown]
serialno=01498B300600B008

Configure autophone:

cp autophone.ini.example autophone.ini

Edit autophone.ini to make it your own. Most of the defaults are fine; here is mine:

[settings]
#clear_cache = False
#ipaddr = …
#port = 28001
#cachefile = autophone_cache.json
#logfile = autophone.log
loglevel = DEBUG
test_path = tests/manifest.ini
#emailcfg = email.ini
enable_pulse = True
enable_unittests = False
#cache_dir = builds
#override_build_dir = None
repos = mozilla-inbound
#buildtypes = opt
#build_cache_port = 28008
verbose = True

#build_cache_size = 20
#build_cache_expires = 7
#device_ready_retry_wait = 20
#device_ready_retry_attempts = 3
#device_battery_min = 90
#device_battery_max = 95
#phone_retry_limit = 2
#phone_retry_wait = 15
#phone_max_reboots = 3
#phone_ping_interval = 15
#phone_command_queue_timeout = 10
#phone_command_queue_timeout = 1
#phone_crash_window = 30
#phone_crash_limit = 5

python autophone.py -h provides help on options, which are analogues of the autophone.ini settings.

Configure your tests:

Notice that autophone.ini has a test path of tests/manifest.ini. By default, tests/manifest.ini is configured for S1/S2 tests — it points to configs/s1s2_settings.ini. We need to set up that file:

cd configs

cp s1s2_settings.ini.example s1s2_settings.ini

Edit s1s2_settings.ini to make it your own. Here’s mine:

[paths]
#source = files/
#dest = /mnt/sdcard/tests/autophone/s1test/
#profile = /data/local/tmp/profile

[locations]
# test locations can be empty to specify a local
# path on the device or can be a url to specify
# a web server.
local =
remote = http://192.168.0.82:8080/

[tests]
blank = blank.html
twitter = twitter.html

[settings]
iterations = 2
resulturl = http://192.168.0.82:8080/api/s1s2/

[signature]
id =
key =

Be sure to set the resulturl to match your PhoneDash instance.

If running local tests, copy your test files (like blank.html above) to the files directory. If running remote tests, be sure that your test files are served from the resulturl (if using PhoneDash, copy to the html directory).

Start autophone:

python autophone.py --config autophone.ini

With these settings, autophone will listen for new builds on mozilla-inbound, and start tests on your device(s) for each one. You should start to see your device reboot, then Firefox will be installed and startup tests will run. As more builds complete on mozilla-inbound, more tests will run.

autophone.py will print some diagnostics to the console, but much more detail is available in autophone.log — watch that to see what’s happening.

Check your phonedash instance for results — visit http://<ip address of your computer>:8080. At first this won’t have any data, but as autophone runs tests, you’ll start to see results. Here’s my instance after a few hours:

myphonedash


September 12, 2014 08:02 PM

September 11, 2014

William Lachance

Hacking on the Treeherder front end: refreshingly easy

Over the past two weeks, I’ve been working a bit on the Treeherder front end (our interface to managing build and test jobs from mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.

One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:


git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js

Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS just go ahead and do so with your favorite editor and the changes will be immediately reflected.

We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:

https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking

If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.

September 11, 2014 08:35 PM

Armen Zambrano G. (@armenzg)

Run tbpl jobs locally with Http authentication (developer_config.py) - take 2

Back in July, we deployed the first version of Http authentication for mozharness, however, under some circumstances, the initial version could fail and affect production jobs.

This time around we have:

If you read How to run Mozharness as a developer you should see the new changes.

As quick reminder, it only takes 3 steps:

  1. Find the command from the log. Copy/paste it.
  2. Append --cfg developer_config.py
  3. Append --installer-url/--test-url with the right values
To see a real example visit this
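The three steps amount to a simple transformation of the copied command line. A toy helper makes that concrete (the helper itself and the URLs are hypothetical, purely for illustration; in practice you just edit the pasted command):

```python
# Illustrative only: take a mozharness command copied from a log (step 1)
# and append the developer config (step 2) and URLs (step 3).

def to_developer_command(command, installer_url, test_url):
    return command + [
        "--cfg", "developer_config.py",
        "--installer-url", installer_url,
        "--test-url", test_url,
    ]

cmd = to_developer_command(
    ["python", "scripts/desktop_unittest.py", "--cfg", "unittests/linux_unittest.py"],
    installer_url="https://example.com/firefox.tar.bz2",
    test_url="https://example.com/firefox.tests.zip",
)
print(" ".join(cmd))
```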


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

September 11, 2014 12:45 PM

September 09, 2014

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.

 

the new bugmail filtering ability allows you to filter on specific flags:

bugmail filtering with substrings

these two rules will prevent bugzilla from emailing you the changes to the “qa whiteboard” field or the “qe-verify” flag for bugs where you aren’t the assignee.


Filed under: bmo, mozilla

September 09, 2014 06:41 AM

September 08, 2014

Henrik Skupin

Firefox Automation report – week 29/30 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 29 and 30.

Highlights

During week 29 it was time again to merge the mozmill-tests branches to support the upcoming release of Firefox 31.0. All necessary work has been handled on bug 1036881, which also included the creation of the new esr31 branch. Accordingly we also had to update our mozmill-ci system, and got the support landed on production.

The RelEng team asked us if we could help in setting up Mozmill update tests for testing the new update server aka Balrog. Henrik investigated the necessary tasks, and implemented the override-update-url feature in our tests and the mozmill-automation update script. Finally he was able to release mozmill-automation 2.6.0.2 two hours before heading out for 2 weeks of vacation. That means Mozmill CI could be used to test updates for the new update server.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 29 and week 30.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 29 and week 30.

September 08, 2014 06:37 PM

September 05, 2014

Mark Côté

Review Board preview

I know lots of people are very anxious to see Mozilla’s new code-review tool. It’s been a harrowing journey, but we are finally knocking out our last few blockers for initial deployment (see tracking bug 1021929). While we sort those out, here’s something to whet your palate: a walk through the new review work flow.

September 05, 2014 12:02 AM

September 02, 2014

Geoff Brown

Firefox for Android Performance Measures – August check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

 – Eideticker for Android is back!

 – small regression in ts_paint

 – small improvement in tp4m

 – small regression in time to throbber start / stop

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

12 (start of period) – 12 (end of period)

There was a temporary regression in this test for much of the month, but it seems to be resolved now.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 850 (end of period).

tp4m

Improvement noted around August 21.

ts_paint

Startup performance test. Lower values are better.

3650 (start of period) – 3850 (end of period).

tspaint

Note the slight regression around August 12, and perhaps another around August 27 – bug 1061878.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

throbstart

Note the regression in time to throbber start around August 14 — bug 1056176.

throbstop

The same regression, less pronounced, is seen in time to throbber stop.

Eideticker

Eideticker for Android is back after a long rest – yahoo!!

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

[Eideticker result graphs]


September 02, 2014 07:49 PM

Byron Jones

happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.

bitly


Filed under: bmo, mozilla

September 02, 2014 06:30 AM

August 15, 2014

William Lachance

A new meditation app

I had some time on my hands two weekends ago and was feeling a bit of an itch to build something, so I decided to do a project I’ve had in the back of my head for a while: a meditation timer.

If you’ve been following this blog, you’d know that meditation has been a pretty major interest of mine for the past year. The foundation of my practice is a daily round of seated meditation at home, where I have been attempting to follow the breath and generally try to connect with the world for a set period every day (usually varying between 10 and 30 minutes, depending on how much of a rush I’m in).

Clock watching is rather distracting while sitting so having a tool to notify you when a certain amount of time has elapsed is quite useful. Writing a smartphone app to do this is an obvious idea, and indeed approximately a zillion of these things have been written for Android and iOS. Unfortunately, most are not very good. Really, I just want something that does this:

  1. Select a meditation length (somewhere between 10 and 40 minutes).
  2. Sound a bell after a short preparation to demarcate the beginning of meditation.
  3. While the meditation period is ongoing, do a countdown of the time remaining (not strictly required, but useful for peace of mind in case you’re wondering whether you’ve really only sat for 25 minutes).
  4. Sound a bell when the meditation ends.
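The four steps above can be sketched as a schedule of timed events; a real app would sleep between them and play sounds, but computing the schedule is essentially the whole of the logic. Durations are in seconds, and the event names are made up:

```python
# Toy sketch of the meditation timer's flow: preparation, start bell,
# per-minute countdown, end bell. Not the app's actual implementation.

def meditation_schedule(minutes, prep_seconds=30, countdown_every=60):
    events = [(0, "prepare"), (prep_seconds, "bell: start")]
    total = minutes * 60
    for elapsed in range(countdown_every, total, countdown_every):
        remaining = (total - elapsed) // 60
        events.append((prep_seconds + elapsed, "countdown: %d min left" % remaining))
    events.append((prep_seconds + total, "bell: end"))
    return events

for t, event in meditation_schedule(minutes=2):
    print(t, event)
# 0 prepare
# 30 bell: start
# 90 countdown: 1 min left
# 150 bell: end
```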

Yes, meditation can get more complex than that. In Zen practice, for example, sometimes you have several periods of varying length, broken up with kinhin (walking meditation). However, that mostly happens in the context of a formal setting (e.g. a Zendo) where you leave your smartphone at the door. Trying to shoehorn all that into an app needlessly complicates what should be simple.

Even worse are the apps which “chart” your progress or have other gimmicks to connect you to a virtual “community” of meditators. I have to say I find that kind of stuff really turns me off. Meditation should be about connecting with reality in a more fundamental way, not charting gamified statistics or interacting online. We already have way too much of that going on elsewhere in our lives without adding even more to it.

So, you might ask why the alarm feature of most clock apps isn’t sufficient? Really, it is most of the time. A specialized app can make selecting the interval slightly more convenient and we can preselect an appropriate bell sound up front. It’s also nice to hear something to demarcate the start of a meditation session. But honestly I didn’t have much of a reason to write this other than the fact that I could. Outside of work, I’ve been in a bit of a creative rut lately and felt like I needed to build something, anything and put it out into the world (even if it’s tiny and only a very incremental improvement over what’s out there already). So here it is:

meditation-timer-screen

The app was written entirely in HTML5 so it should work fine on pretty much any reasonably modern device, desktop or mobile. I tested it on my Nexus 5 (Chrome, Firefox for Android)[1], FirefoxOS Flame, and on my laptop (Chrome, Firefox, Safari). It lives on a subdomain of this site or you can grab it from the Firefox Marketplace if you’re using some variant of Firefox (OS). The source, such as it is, can be found on github.

I should acknowledge taking some design inspiration from the Mind application for iOS, which has a similarly minimalistic take on things. Check that out too if you have an iPhone or iPad!

Happy meditating!

[1] Note that there isn’t a way to inhibit the screen/device from going to sleep with these browsers, which means that you might miss the ending bell. On FirefoxOS, I used the requestWakeLock API to make sure that doesn’t happen. I filed a bug to get this implemented on Firefox for Android.

August 15, 2014 02:02 AM

August 13, 2014

Henrik Skupin

Firefox Automation report – week 27/28 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 27 and 28.

Highlights

Henrik continued his work on our QA specific PuppetAgain setup. One of the blockers for us was bug 997721, which is the full proxy support on OS X and Linux. By week 27 we were finally able to get this finished. Further Henrik also got the manifest for installing Java done.

On TPS we also made progress. Cosmin got the Pulse listener script implemented for the Coversheet CI, which triggers TPS tests whenever new nightly builds of Firefox have been made available. Further, a couple of fixes for Mozrunner were necessary, given that the 6.0 release caused a couple of regressions for TPS. As a result we agreed to pin Python package dependencies to specific versions of mozrunner and related packages.

One big thing for our team is also to assist people in deciding whether automated tests are possible for certain Firefox features. The question mainly comes up for tests which cannot be implemented in any of our developer-driven test frameworks due to limitations on buildbot (no network access allowed, restart of the application, and others…). To be more successful in the future, Henrik started a discussion on the dev-quality mailing list. We hope to get the proposed process established for bug verification.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 27 and week 28.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 27 and week 28.

August 13, 2014 09:58 AM

August 07, 2014

Armen Zambrano G. (@armenzg)

mozdevice now mozlogs thanks to mozprocess!



jgraham: armenzg_brb: This new logging in mozdevice is awesome!
armenzg: jgraham, really? why you say that?
jgraham: armenzg: I can see what's going on!

We recently changed the way that mozdevice works. mozdevice is a python package used to talk to Firefox OS or Android devices either through ADB or SUT.

Several developers of the Auto Tools team have been working on the Firefox OS certification suite for partners to determine if they meet the expectations of the certification process for Firefox OS.
When partners have any issues with the cert suite, they can send us a zip file with their results so we can help them out. However, until recently, the output of mozdevice would go to standard out rather than being logged into the zip file they would send us.

In order to use the log manager inside the certification suite, I needed every adb command and its output to be logged rather than printed to stdout. Fortunately, mozprocess has a parameter I can specify which allows me to manipulate any output generated by the process. To benefit from mozprocess and its logging, I needed to replace every subprocess.Popen() call with ProcessHandler().
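The pattern is simple: instead of letting a command's output land on stdout, every line is routed through a callback, which can forward it to a logger. A stripped-down, self-contained illustration of the shape of the idea (mozprocess's ProcessHandler exposes a similar hook via processOutputLine; this toy version uses plain subprocess):

```python
import subprocess
import sys

# Toy version of the "callback per output line" pattern described above.
# Each line of the child process's output goes through on_line, which a
# caller could point at logger.debug instead of a list append.

def run_logged(command, on_line):
    proc = subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        on_line(line.rstrip("\n"))  # e.g. logger.debug(line)
    return proc.wait()

captured = []
status = run_logged(
    [sys.executable, "-c", "print('hello from adb')"], captured.append
)
print(status, captured)  # 0 ['hello from adb']
```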

If you want to "see what's going on" in mozdevice, all you have to do is to request the debug level of logging like this:
DeviceManagerADB(logLevel=mozlog.DEBUG)
As part of this change, we also switched to use structured logging instead of the basic mozlog logging.

Switching to mozprocess also helped us discover an issue in mozprocess specific to Windows.

You can see the patch in here.
You can see other changes inside of the mozdevice 0.38 release in here.

At least with mozdevice you can know what is going on!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

August 07, 2014 03:58 PM

August 03, 2014

Geoff Brown

Firefox for Android Performance Measures – July check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

- No significant regressions or improvements found!

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

[tcheck2 graph]

12 (start of period) – 12 (end of period)

The temporary regression of July 24 was caused by bug 1031107; resolved by bug 1044702.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 940 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3650 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Screenshot from 2014-08-03 13:39:26

Screenshot from 2014-08-03 13:45:17

Screenshot from 2014-08-03 13:49:07

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user-perceived performance of web browsers by capturing video of them in action and then running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

The Eideticker dashboard is slowly coming back to life, but there are still not enough results to show graphs here. We’ll check back at the end of August.


August 03, 2014 07:51 PM

July 29, 2014

Joel Maher

Say hi to Kaustabh Datta Choudhury, a newer Mozillian

A couple of months ago I ran into :kaustabh93 online as he had picked up a couple of good first bugs. Since then he has continued to work very hard and submit a lot of great pull requests to Ouija and Alert Manager (here is his github profile). After working with him for a couple of weeks, I decided it was time to learn more about him, and I would like to share that with Mozilla as a whole:

Tell us about where you live-

I live in a town called Santragachi in West Bengal. The best thing about this place is its ambience. It is not at the heart of the city but the city is easily accessible. That keeps the maddening crowd of the city away and a calm and peaceful environment prevails here.

Tell us about your school-

I completed my schooling at Don Bosco School, Liluah. After graduating from there, I am now pursuing an undergraduate degree in Computer Science & Engineering at MCKV Institute of Engineering.

Right from when it was introduced to me, I was in love with the subject ‘Computer Science’. And introduction to coding was one of the best things that has happened to me so far.

Tell us about getting involved with Mozilla-

I was looking for some exciting real-life projects to work on during my vacation & it was then that the idea of contributing to open source projects struck me. I have been using Firefox for many years now, and that gave me an idea of where to start looking. Eventually I found the volunteer tab, and thus started my wonderful journey with Mozilla.

Right from when I was starting out until now, one thing that I have liked very much about Mozilla is that help was always at hand when needed. On my first day, I popped a few questions in the IRC channel #introduction & after getting the basics of where to start out, I started working on Ouija under the guidance of ‘dminor’ & ‘jmaher’. After a few bug fixes there, Dan recommended I have a look at Alert Manager & I have been working on it ever since. The experience of working for Mozilla has been great.

Tell us what you enjoy doing-

I really love coding. But apart from that, I am also an amateur photographer & enjoy playing computer games & reading books.

Where do you see yourself in 5 years?

In 5 years’ time I prefer to see myself as a successful engineer working on innovative projects & solving problems.

If somebody asked you for advice about life, what would you say?

Rather than following the crowd down the well-worn path, it is always better to explore uncharted territories with a few.

:kaustabh93 is back in school as of this week, but look for activity on bugzilla and github from him.  You will find him online once in a while in various channels, I usually find him in #ateam.


July 29, 2014 07:49 PM

Dave Hunt

Performance testing Firefox OS on reference devices

A while back I wrote about the LEGO harness I created for Eideticker to hold both the device and camera in place. Since then there have been a couple of iterations of the harness. When we started testing against our low-cost prototype device, the harness needed modifying due to the size difference and the position of the USB socket. At this point I tried to create a harness that would fit all of our current devices, with the hope of avoiding another redesign.

Eideticker harness v2.0

If you’re interested in creating one of these yourself, here’s the LEGO Digital Designer file and building guide.

Unfortunately, when I first got my hands on our reference device (codenamed ‘Flame’) it didn’t fit into the harness. I had to go back to the drawing board, and needed to be a little more creative due to the width not matching up too well with the dimensions of LEGO bricks. In the end I used some slope bricks (often used for roof tiles) to hold the device securely. A timelapse video of constructing the latest harness follows.


We are now 100% focused on testing against our reference device, so in London we have two of them dedicated to running our Eideticker tests, as shown in the photo below.

Eideticker harness for Flame

Again, if you want to build one of these for yourself, download the LEGO Digital Designer file and building guide. If you want to learn more about the Eideticker project check out the project page, or if you want to see the dashboard with the latest results, you can find it here.

July 29, 2014 04:08 PM

July 23, 2014

Dave Hunt

A new home for the gaiatest documentation

The gaiatest python package provides a test framework and runner for testing Gaia (the user interface for Firefox OS). It also provides a handy command line tool and can be used as a dependency from other packages that need to interact with Firefox OS.

Documentation for this package has now been moved to gaiatest.readthedocs.org, which is generated directly from the source code whenever there’s an update. In order to make this more useful we will continue to add documentation to the Python source code. If you’re interested in helping us out please get in touch by leaving a comment, or by joining #ateam on irc.mozilla.org and letting us know.
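For context, a typical invocation of the command line tool looks roughly like the following. This is a sketch only: the exact flags, the manifest path, and the testvars file are assumptions on my part, so consult the readthedocs site for the authoritative usage.

```shell
# Install the package, then run a test manifest against a device reachable
# via Marionette (port 2828 is the conventional default). The flags and
# paths below are illustrative, not guaranteed to match the current CLI.
pip install gaiatest
gaiatest --address=localhost:2828 --testvars=testvars.json gaiatest/tests/manifest.ini
```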

July 23, 2014 12:56 PM