Leo McArdleLetter to my MP on DRIP

What follows is a copy of the email I just sent to my MP about the Data Retention and Investigatory Powers Bill (DRIP). I urge you to send a similar email right now.

Dear Robin Walker,

I have no doubt that by now you will have heard of the Data Retention and Investigatory Powers Bill (DRIP) which your Government and the Opposition will try to rail-road through Parliament next week. I also have no doubt that you will have heard of the great deal of criticism surrounding this bill, both from your colleagues within Westminster hailing from all parties, such as David Davis MP and Tom Watson MP, and those outside of Westminster, such as Jim Killock of the Open Rights Group.

In April the European Court of Justice (ECJ) ruled that the Data Retention Directive (DRD) was incompatible with the Charter of Fundamental Rights of the European Union and therefore that the 2006 act enabling the DRD in the UK was a breach of Human Rights. This means what was, and still is, the status quo when it comes to forcing companies to store data on their customers is a breach of fundamental Human Rights. This is the same status quo which the Home Secretary has said that DRIP merely retains. I think it is clear to see why I, and others, have such a problem with DRIP.

The ECJ ruling outlined some very clear ways in which the DRD could be made compatible with Human Rights law, by saying that this cannot be done on a blanket basis and that someone independent must supervise police access. These fundamental points are missing from DRIP.

Furthermore, DRIP goes far further than just retaining the status quo. It makes sweeping amendments to the Regulation of Investigatory Powers Act (RIPA) including the expansion of what a communications service provider is, the extension of these powers to outside the UK and an open door to allow the Government to make new regulations about data retention at will, without the need to debate them fully in Parliament. I am sure you agree that such huge amendments to RIPA need to be subject to full Parliamentary scrutiny.

It is perfectly clear to everybody, including you, I am sure, Mr Walker, that the Government is using the ECJ ruling as a pretext to force through, at great speed, legislation which affects Human Rights, without proper scrutiny or deliberation. The ECJ ruling was in April, and many warned as far back as 2006 that the DRD was flawed. The UK Government has had years to prepare for the DRD being struck down. There is no reason for this emergency legislation, other than to try and sneak sweeping changes under the noses of MPs who have been allowed to go on holiday.

Wherever you stand on where the balance should be between State Security and Civil Liberties (and I would not be surprised if we stand on opposite ends of that balance), you must agree that five days is nowhere near enough time to properly debate and represent all the views on this issue.

It is for this reason that I urge you as my elected representative to vote against DRIP, and do everything you can to urge your colleagues to do the same. At the very least, could you please push for a highly amended bill, with all the sections amending RIPA removed, which serves purely as a stopgap, not for a period of two years, but for a maximum of six months. We need to have this debate now, and not pass the buck on to the next Government in 2016, who will surely pass the buck on again.

In 2015 I will get my first opportunity to vote in a General Election, and while I may feel that this Government has done devastating things to this country, you, Mr Walker, may be able to differentiate yourself from a sea of blue if you stand up for Civil Liberties and Human Rights.

Yours sincerely,
Leo McArdle

Nick CameronRust for C++ programmers - part 8: destructuring

First an update on progress. You probably noticed this post took quite a while to come out. Fear not, I have not given up (yet). I have been busy with other things, and there is a section on match and borrowing which I found hard to write and it turns out I didn't understand very well. It is complicated and probably deserves a post of its own, so after all the waiting, the interesting bit is going to need more waiting. Sigh.

I've also been considering the motivation of these posts. I really didn't want to write another tutorial for Rust, I don't think that is a valuable use of my time when there are existing tutorials and a new guide in the works. I do think there is something to be said for targeting tutorials at programmers with different backgrounds. My first motivation for this series of posts was that a lot of energy in the tutorial was expended on things like pointers and the intuition of ownership which I understood well from C++, and I wanted a tutorial that concentrated on the things I didn't know. That is hopefully where this has been going, but it is a lot of work, and I haven't really got on to the interesting bits. So I would like to change the format a bit to be less like a tutorial and more like articles aimed at programmers who know Rust to some extent, but know C++ a lot better and would like to bring up their Rust skills to their C++ level. I hope that complements the existing tutorials better and is more interesting for readers. I still have some partially written posts in the old style so they will get mixed in a bit. Let me know what you think of the idea in the comments.


Last time we looked at Rust's data types. Once you have some data structure, you will want to get that data out. For structs, Rust has field access, just like C++. For tuples, tuple structs, and enums you must use destructuring (there are various convenience functions in the library, but they use destructuring internally). Destructuring of data structures doesn't happen in C++, but it might be familiar from languages such as Python or various functional languages. The idea is that just as you can create a data structure by filling out its fields with data from a bunch of local variables, you can fill out a bunch of local variables with data from a data structure. From this simple beginning, destructuring has become one of Rust's most powerful features. To put it another way, destructuring combines pattern matching with assignment into local variables.

Destructuring is done primarily through the let and match statements. The match statement is used when the structure being destructured can have different variants (such as an enum). A let expression pulls the variables out into the current scope, whereas match introduces a new scope. To compare:
fn foo(pair: (int, int)) {
    let (x, y) = pair;
    // we can now use x and y anywhere in foo

    match pair {
        (x, y) => {
            // x and y can only be used in this scope
        }
    }
}
The syntax for patterns (used after `let` and before `=>` in the above example) in both cases is (pretty much) the same. You can also use these patterns in argument position in function declarations:
fn foo((x, y): (int, int)) {
    // ...
}

(Which is more useful for structs or tuple-structs than tuples).
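As a small sketch of that (in current Rust syntax, where the post's `int` is written `i32`), destructuring a tuple struct in argument position saves the positional field accesses (`p.0`, `p.1`) you would otherwise need:

```rust
// A tuple struct: without destructuring, its fields are
// accessed by position, as p.0 and p.1.
struct Point(i32, i32);

// The pattern in argument position binds x and y directly.
fn magnitude_squared(Point(x, y): Point) -> i32 {
    x * x + y * y
}

fn main() {
    println!("{}", magnitude_squared(Point(3, 4))); // prints 25
}
```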

Most initialisation expressions can appear in a destructuring pattern and they can be arbitrarily complex. That can include references and primitive literals as well as data structures. For example,
struct St {
    f1: int,
    f2: f32
}

enum En {
    Var1,
    Var2,
    Var3(int),
    Var4(int, St, int)
}

fn foo(x: &En) {
    match x {
        &Var1 => println!("first variant"),
        &Var3(5) => println!("third variant with number 5"),
        &Var3(x) => println!("third variant with number {} (not 5)", x),
        &Var4(3, St{ f1: 3, f2: x }, 45) => {
            println!("destructuring an embedded struct, found {} in f2", x)
        }
        &Var4(_, x, _) => {
            println!("Some other Var4 with {} in f1 and {} in f2", x.f1, x.f2)
        }
        _ => println!("other (Var2)")
    }
}
Note how we destructure through a reference by using `&` in the patterns and how we use a mix of literals (`5`, `3`, `St { ... }`), wildcards (`_`), and variables (`x`).

You can use `_` wherever a variable is expected if you want to ignore a single item in a pattern, so we could have used `&Var3(_)` if we didn't care about the integer. In the first `Var4` arm we destructure the embedded struct (a nested pattern) and in the second `Var4` arm we bind the whole struct to a variable. You can also use `..` to stand in for all fields of a tuple or struct. So if you wanted to do something for each enum variant but don't care about the content of the variants, you could write:
fn foo(x: En) {
    match x {
        Var1 => println!("first variant"),
        Var2 => println!("second variant"),
        Var3(..) => println!("third variant"),
        Var4(..) => println!("fourth variant")
    }
}
When destructuring structs, the fields don't need to be in order and you can use `..` to elide the remaining fields. E.g.,
struct Big {
    field1: int,
    field2: int,
    field3: int,
    field4: int,
    field5: int,
    field6: int,
    field7: int,
    field8: int,
    field9: int,
}

fn foo(b: Big) {
    let Big { field6: x, field3: y, ..} = b;
    println!("pulled out {} and {}", x, y);
}
As a shorthand with structs you can use just the field name which creates a local variable with that name. The let statement in the above example created two new local variables `x` and `y`. Alternatively, you could write
fn foo(b: Big) {
    let Big { field6, field3, ..} = b;
    println!("pulled out {} and {}", field3, field6);
}
Now we create local variables with the same names as the fields, in this case `field3` and `field6`.

There are a few more tricks to Rust's destructuring. Let's say you want a reference to a variable in a pattern. You can't use `&` because that matches a reference, rather than creating one (and thus has the effect of dereferencing the object). For example,
struct Foo {
    field: &'static int
}

fn foo(x: Foo) {
    let Foo { field: &y } = x;
}
Here, `y` has type `int` and is a copy of the field in `x`.

To create a reference to something in a pattern, you use the `ref` keyword. For example,
fn foo(b: Big) {
    let Big { field3: ref x, ref field6, ..} = b;
    println!("pulled out {} and {}", *x, *field6);
}
Here, `x` and `field6` both have type `&int` and are references to the fields in `b`.

One last trick when destructuring is that if you are destructuring a complex object, you might want to name intermediate objects as well as individual fields. Going back to an earlier example, we had the pattern `&Var4(3, St{ f1: 3, f2: x }, 45)`. In that pattern we named one field of the struct, but you might also want to name the whole struct object. You could write `&Var4(3, s, 45)` which would bind the struct object to `s`, but then you would have to use field access for the fields, or if you wanted to only match with a specific value in a field you would have to use a nested match. That is not fun. Rust lets you name parts of a pattern using `@` syntax. For example `&Var4(3, s @ St{ f1: 3, f2: x }, 45)` lets us name both a field (`x`, for `f2`) and the whole struct (`s`).
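A minimal, self-contained sketch of `@` bindings in current Rust syntax (the post's `int` is today's `i32`, and variants now need the `En::` prefix; the `describe` function and `Other` variant here are illustrative, not from the post):

```rust
#[derive(Debug)]
struct St {
    f1: i32,
    f2: f32,
}

enum En {
    Var4(i32, St, i32),
    Other,
}

fn describe(e: &En) -> String {
    match e {
        // `s @ pattern` binds the whole struct to `s` while the
        // nested pattern still constrains the match to f1 == 3.
        En::Var4(3, s @ St { f1: 3, .. }, 45) => {
            format!("f2 = {}, whole struct = {:?}", s.f2, s)
        }
        _ => "no match".to_string(),
    }
}

fn main() {
    let v = En::Var4(3, St { f1: 3, f2: 2.5 }, 45);
    println!("{}", describe(&v));
    println!("{}", describe(&En::Other));
}
```

Note that `s` here binds by reference (thanks to match ergonomics on `&En`), so the struct is not moved out of the enum.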

That just about covers your options with Rust pattern matching. There are a few features I haven't covered, such as matching vectors, but hopefully you know how to use `match` and `let` and have seen some of the powerful things you can do. Next time I'll cover some of the subtle interactions between match and borrowing which tripped me up a fair bit when learning Rust.
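As a small taste of the vector matching mentioned above (a sketch in current Rust syntax, where these are called slice patterns; the syntax has evolved since this post was written, and `shape` is an illustrative name):

```rust
// Match on the shape of a slice: empty, single element,
// or first and last element equal.
fn shape(v: &[i32]) -> &'static str {
    match v {
        [] => "empty",
        [_] => "one element",
        // `..` skips the middle; the guard compares the ends.
        [first, .., last] if first == last => "same first and last",
        _ => "something else",
    }
}

fn main() {
    println!("{}", shape(&[]));           // empty
    println!("{}", shape(&[7]));          // one element
    println!("{}", shape(&[2, 9, 9, 2])); // same first and last
    println!("{}", shape(&[1, 2, 3]));    // something else
}
```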

Anthony RicaudAdopting Omnifocus and GTD

I've tried to adopt the Getting Things Done method a few times already. Every time, it wasn't a success. I wasn't applying most principles and fell back to noting things down on a collection of small papers. This time, I had a huge advantage: at work, I'm sitting next to Étienne, a big proponent of GTD. He inspired me to try again and answered a lot of questions I had during my adoption.

This time, I chose Omnifocus for my GTD experimentation. It's a bit expensive to buy the three flavours, but I was committed. I'll be talking about my experiences via Omnifocus, but you should not focus too much on the software. You can adopt GTD with paper, with other software, whatever works for you.


In January, I started the capture part. That's when you note down in your GTD system everything you'd like to do. You need to create that habit and do it every time something pops into your head. I use three main methods to collect:

  1. When I'm in front of my computer, I use the ^⌥Space shortcut to open the Quick Entry panel
  2. When I'm not in front of my computer, I use the iPod Touch app
  3. When an email requires some action, I send a message to the mail drop address

I ended up with a huge inbox, but I was OK with it because I knew collecting was the first part to get right. There is a big relief in knowing that everything you need or want to do is explicitly written down somewhere. You're not afraid of forgetting something anymore.

Capturing your thoughts like this also allows you to stay focused on the current task. You don't have to do that new task right now, you don't have to explore that idea yet. Just trust the system to remind you of it later.

To get started, you may also want to do a mind sweep: sit down in front of a piece of paper, no distractions, for half an hour, and write down everything that comes to mind.


Once you have this exhaustive list of things you want to do, you process it into contexts and projects. You also flag some items you deem important and set dates for those tasks. I only started doing this in mid-January. The tricky part for me was creating the projects and contexts.


In GTD, contexts are the things you need in order to achieve a task. It could be a location, a person or an object. I'm not really using contexts because, most of the time, I just need to be in front of my computer to accomplish work-related tasks. I may need to tweak this again, but for now I don't feel the need to dive deeper into that area.

My contexts:

  • Errands: When I'm neither at home nor at work
  • Home: I don't have an office context because I can work from anywhere. I have a few tasks that require me to be in an office (like printing) but not enough to warrant a full context.
  • People: A nested list of some people and also a phone context
  • Technology: This is where you'll find most of my tasks. I have a nested email folder.
  • Waiting: When I'm waiting on something else to happen.


Let me give you three examples of real projects:

Fixing a bug

I try to do this a lot :) So I have a template project that I copy when I intend to work on a bug. This is a sequential project, meaning I need to achieve a task before the next one is available.

  1. Find a fix: Well that sounds dumb but this is my first step
  2. Write tests: Even though I may write the tests as I fix the problem, I still keep this reminder to make sure I wrote enough tests
  3. Test on a phone: I will certainly have done this while developing but for small fixes that look obvious, I have been bitten by not testing on a real phone. Hence this reminder.
  4. Put in review: Uploading my patch and explaining my fix.
  5. Wait for review: This is in a waiting context so I can forget about this project until I receive an email about that review. If it's not good, I'll add a task for each comment to address.
  6. Wait for green tests: In a waiting context too because you shouldn't land something if the tests are not green.
  7. Land patch and clean branches: When all is good, I can land my work. This is usually where I'll clean the branches I had to create.
  8. Close bug with link to commit: This is the last step so that people can track the work later.
Feedback on Openweb articles

Nicolas Hoffmann, crazy hard worker that he is, wrote a few articles on modern CSS practices for the OpenWeb group. I told him I wanted to read them and provide some feedback, but I have no idea when I'll get around to doing that. So I created one task per article. It's out of my mind, but I know I'll do it one day because I have this reminder.

Birthday ideas

This is not a project per se. But when someone talks about a topic they like, I try to take a note of it. Then during the review process, I mark it as due a few days before the actual birthday.

In addition to these kinds of projects, I have a few projects called "Work :: Miscellaneous" or "Personal :: Miscellaneous". Those are just things I need to do that don't really fit in a project.

Flags, deferred dates and due dates

This is how I have things pop up for attention. I try to use due dates as little as possible because otherwise, one day you end up with 10 urgent things to do and you get stuck. So only tasks that have a hard deadline (like filing taxes) get a due date.

I use flags for the tasks that are important but without a real deadline. During my weekly review (see below), I'll flag things that I want to do next week.

The capture phase was really refreshing because I knew everything was stored somewhere. Via the process phase, it's even more relaxing because I know the system will remind me when I need to do something. That completely changed how I think about being overwhelmed. Before, I had this blurry collection of things to do in my head. They were all collapsing and I had no sense of what was important to do or if I was forgetting something that matters to me. Now, when I feel overwhelmed, I know it just means I need to spend a bit of time in front of Omnifocus to process my inbox.


In February, I started doing reviews more often: first every two weeks, and now every week. This is another step that gives me a great deal of comfort. This is when I'll decide what I want to do next week and put flags or due dates on things that I consider important for the coming week. I will also delete some items that I don't feel like doing anymore.


And this is the biggest part of GTD: actually doing stuff. If you spend all that time in a tool organising your tasks, it's not for its own sake. That's why I did this gradually, so as not to spend too much time finding the perfect workflow.

I'm really happy with my adoption of the GTD method. It's not perfect; I'm still tweaking here and there.

I encourage you to try it. Reach out to me if you'd like to discuss it, I'd be happy to!

Nigel BabuJinxed!

A couple of weeks ago, I requested L3 access as part of my Sheriffing work and my request was granted. I think I’ve totally jinxed things since then ;)

The tree. IT'S BURNING!

The first Sunday afterward, a patch landed on aurora inadvertently, causing a massive spike in crashes. I saw it myself and suspected that my copy was corrupt, so I downloaded the latest, of course to no avail. I finally noticed the right bug, and Kairo was looking for someone to back it out. I backed it out and triggered a rebuild, which fixed the issue.

The next Saturday, we had mobile imaging failures. This one was fun to fix; I talked to Nick Thomas and Chris Cooper on the phone. All it needed was one command, but it took us some time to get there :-) But hey, it got me mentioned under Friends of Mozilla.

Having more access to fix things somehow makes me feel responsible!

Nigel BabuTraining in Tanzania

On the last Monday of April, I found myself nervously standing in a room of about 15 people from the e-Government Agency and National Bureau of Statistics in Dar es Salaam. They were waiting for me to start training them in Python and CKAN. I’ve been programming in Python since 2011, but I’ve never actually trained people in Python. On the first day, I didn’t have any slides. All I had was one PDF from Wikibooks which I was using as material. I didn’t even cover all the material. By the end of the day, though, I could sense that it was sinking in a bit for the attendees.

It all started with an email from my manager asking me if I was available to do a training in Tanzania in April. After lots of back and forth, we finalized a date and a trainer to assist in the trainings, and I flew in. Dar es Salaam, strangely, reminded me of growing up in Salalah. I got in a day early to prep for the week and settle in; a trainer looking groggy on a Monday does not bode well!

People who train often don’t tell you this: trainings are exhausting. You’re most likely to be on your feet all day, walking around the room helping people who’re lagging behind. Looking back, the training was both fun and exhausting. I enjoyed talking about Python, though I feel like I need more practice to do it well. As for the CKAN training, I was pretty satisfied with the outcome: by the end of the week, the folks from the e-Gov Agency went in and set up a server with CKAN!

Note to self: Write these posts immediately after the trip before I forget :-)

Armen ZambranoIntroducing HTTP authentication for Mozharness.

A while ago, I asked a colleague (you know who you are! :P) of mine how to run a specific type of test job on tbpl on my local machine and he told me with a smirk, "With mozharness!"

I wanted to punch him (HR: nothing to see here! This is not a literal punch, a figurative one), however he was right. He had good reason to say that, and I knew why he was smiling. I had to close my mouth and take it.

Here's the explanation of why he said that: most jobs running on tbpl are driven by Mozharness; however, they're optimized to run within the protected network of Release Engineering. This is good. This is safe. This is sound. However, when we try to reproduce a job outside of the Releng network, it becomes problematic for various reasons.

Many times we have had to guide people who are unfamiliar with mozharness as they try to run it locally with success. (Docs: How to run Mozharness as a developer). However, on other occasions when it comes to binaries stored on private web hosts, it becomes necessary to loan a machine. A loaned machine can reach those files through internal domains since it is hosted within the Releng network.

Today, I have landed a piece of code that does two things:
  • Allows HTTP authentication to download files behind LDAP credentials
  • Changes URLs to point to publicly reachable domains
This change, plus the recently-introduced developer configs for Mozharness, makes it much easier to run mozharness outside of continuous integration infrastructure.

I hope this will help developers have a better experience reproducing the environments used in the tbpl infrastructure. One less reason to loan a machine!

This makes me *very* happy (see below) since I don't have VPN access anymore.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen ZambranoUsing developer configs for Mozharness

To help developers run mozharness, I have landed some configs that can be appended to the command appearing on tbpl.

All you have to do is:
  • Find the mozharness script line in a log from tbpl (search for "script/scripts")
  • Look for the --cfg parameter and add it a second time, but ending in "_dev.py"
    • e.g. --cfg android/androidarm.py --cfg android/androidarm_dev.py
  • Also add the --installer-url and --test-url parameters as explained in the docs
Developer configs have these things in common:
  • They have the same name as the production one but end in "_dev.py"
  • They overwrite the "exes" dict with an empty dict
    • This allows using the binaries in your personal $PATH
  • They overwrite the "default_actions" list
    • The main reason is to remove the action called read-buildbot-configs
  • They fix URLs to point to publicly reachable domains
Here are the currently available developer configs:
You can help by adding more of them!

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Kent JamesThe Thunderbird Tree is Green!

For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building, and passing all tests:

The Thunderbird build tree is green!

The Thunderbird build tree is green!

Many thanks to Joshua Cranmer for all of his hard work to make it so!

Hub FiguièreGithub tracks you by email.

That's right: Github tracks you by email. Each Github notification email contains a beacon in the HTML part. Beacons are usually one-pixel images with a unique URL, used to know who viewed the email; they are triggered by the HTML renderer downloading the image to display it.

Two safeguards against that tracking:

  1. Don't automatically download images in emails; a lot of clients allow this or default to it.
  2. View email only in plain text; impossible with some email systems or clients, like K9 on Android or GMail. (This is what I do in Thunderbird.)

So I complained over Twitter, and according to Github's Zach Holman:

"It’s a pretty rad feature for a ton of our users; reading a notification in one should mark the web UI as read too. We dig it."

Sorry, but there is no opt-out from this tracking. Holman also said:

"you can just disable images. It’s the same functionality in the email as on the web, though. We’re not spying on anything."


"[...] It’s just in this case there’s zero additional information trading hands."

Note that recent events have shown me I can't trust Github's ethics anyway, so I'd rather they not have the information at all than have them claim it never changes hands.

This wouldn't be important if Mozilla didn't mostly require Github to contribute to certain projects. I filed bug 1031899. While I can understand the feature, I believe user privacy should be paramount; therefore, not being able to disable tracking is a serious ethics issue.

Gervase MarkhamWhy Do Volunteers Work On Free Software Projects?

Why do volunteers work on free software projects?

When asked, many claim they do it because they want to produce good software, or want to be personally involved in fixing the bugs that matter to them. But these reasons are usually not the whole story. After all, could you imagine a volunteer staying with a project even if no one ever said a word in appreciation of his work, or listened to him in discussions? Of course not. Clearly, people spend time on free software for reasons beyond just an abstract desire to produce good code. Understanding volunteers’ true motivations will help you arrange things so as to attract and keep them. The desire to produce good software may be among those motivations, along with the challenge and educational value of working on hard problems. But humans also have a built-in desire to work with other humans, and to give and earn respect through cooperative activities. Groups engaged in cooperative activities must evolve norms of behavior such that status is acquired and kept through actions that help the group’s goals.

– Karl Fogel, Producing Open Source Software

Joel MaherI invite you to join me in welcoming 3 new tests to Talos

2 months ago we added session restore tests to Talos; today I am pleased to announce 3 new tests:

  • media_tests – only runs on linux64 and is our first test to measure audio processing. Much thanks to :jesup and Suhas Nandaku from Cisco.
  • tp5o_scroll – imagine if tscrollx and tp5 had a child: not only do we load the page, but we also scroll it. Big thanks go to :avih for tackling this project.
  • glterrain – the first WebGL benchmark to show up in Talos. Credit goes to :avih for driving this and delivering it. There are others coming; this was the easiest to get going.


Stay tuned in the coming weeks as we have 2 other tests in the works:

  • ts_paint_cold
  • mainthreadio detection


Bogomil ShopovIt’s live: Usersnap’s JavaScript error- and XHR log-recorder.

I am so happy. Today we released our console recorder (JavaScript errors and XHR logs) and now every web developer can fight bugs like a superhero.

It’s so awesome!

Pascal ChevrelWhat I did in Q3

A quick recap of what I did in Q3, so that people know what kind of work we do in the l10n-drivers team, and because, as a service team to other departments, a lot of what we do is not necessarily visible.

Tools and code

I spent significantly more time on tools this quarter than in the past. I am also happy to say that Transvision is now a 6-person team and that we will all be in Brussels for the Summit (see my blog post in April). I like this work; I like creating the small tools and scripts that make my life, or localizers' lives, better.

  • Two releases of Transvision (release notes) + some preparatory work for future l20n compatibility
  • Created a mini-dashboard for our team so as to help us follow FirefoxOS work
  • Wrote the conversion script to convert our Serbian Cyrillic string repository to Serbian Latin (see this blog post)
  • Turned my langchecker scripts (key part of the Web Dashboard) into a github project and worked with Flod on improving our management scripts for mozilla.org and FHR. A recent improvement is that we can now automatically import translations done on Locamotion. You can see a list of the changes in the release notes.
  • Worked on scripts allowing us to query Bugzilla without using the official API (because the data I want is specific to the Mozilla customizations we need for locales); they will probably be part of the Web Dashboard soon, so as to be able to extract web localization bugs from multiple components (gist here). Basically, I had the idea of using the CSV export feature for advanced search in Bugzilla as a public read-only API :)
  • Several python patches to mozilla.org to fix l10n bugs or improve our tools to ship localized pages (Bug 891835, Bug 905165, Bug 904703).

Mozilla.org localization

Since we merged all of our major websites (mozilla.org, mozilla.com, mozilla-europe.org, mozillamessaging.com) under the single mozilla.org domain name two years ago, with a new framework based on Django, we have gained in consistency, but localization of several backends under one single domain and a new framework slowed us down for a while. I'd say that we are now mostly back to the old mozilla.com speed of localization; lots of bugs and features were added to Bedrock (nickname of our Django-powered site), we have a very good collaboration with the webdev/webprod teams on the site, and we are more people working on it. I think this quarter localizers felt that a lot more work was asked of them on mozilla.org. I'll try to make sure we don't lose locales on the road; this is a website that hosts content for 90 locales, but we are back to speed with tons of new people!

  • Main Firefox download page (and all the download buttons across the site) finally migrated to Bedrock, our Django instance.  Two major updates to that page this quarter (+50 locales), more to come next quarter, this is part of a bigger effort to simplify our download process, stop maintaining so many different specialized download pages and SEO improvements.
  • Mozilla.org home page is now l10n-friendly and we just shipped it in 28 languages. Depending on your locale, visitor see custom content (news items, calls for contribution or translation...)
  • Several key high-traffic pages (about, products, update) are now localized and maintained at large scale (50+ locales)
  • Newsletter center and newsletter subscription process largely migrated to Bedrock and additional locales supported (but there is still work to do there)
  • The plugincheck web application is also largely migrated to Bedrock (61 locales on bedrock, about 30 more to migrate before we can delete the older backend and maintain only one version)
  • The contribute page scaled up to 28 locales, with local teams of volunteers behind it answering the people who contact us
  • Firefox OS consumer and industry sub-sites released/updated for 10+ locales, with some geoIP in addition to locale detection for tailored content!
  • Many small updates to other existing pages and templates

Community growth

This quarter, I tried to spend some time looking for localizers to work on web content, as well as accompanying the volunteers who contact us. I know that I am good at finding volunteers who share our values and are achievers; unfortunately, I don't have that much time to spend on that. Hopefully I will be able to spend a few days on it every quarter, because we need to grow, and we need to grow with the best open source contributors! :)

  • About 20 people got involved in the following locales: French, Czech, Catalan, Slovenian, Tamil, Bengali, Greek, Spanish (Spain variant) and Swedish. Several became key localizers and will be at the Mozilla Summit
  • A couple of localizers moved from mozilla.org localization to product localization, where their help was more needed. I helped by finding new people to replace them on web localization and/or empowering existing community members to avoid burn-out
  • I spent several days in a row specifically helping the Catalan community, which needed help to scale now that they also do all the mobile work. I opened a #mozilla-cat IRC channel and found 9 brand new volunteers, some of them professional translators, some of them respected localizers from other open source projects. I'll probably spend more time helping them consolidate this growth next quarter. I may keep this strategy every quarter since it seems to be efficient (look for localizers in general, and also help one specific team grow and organize itself to scale up)


  • Significant localization work on the Firefox Health Report, both the Desktop (shipped) and Mobile (soon to be shipped) versions
  • Lots of meetings for lots of different projects for next quarter :)
  • Two work weeks: one focused on tooling in Berlin, one focused on training my new colleagues Peying and Francesco (though to be honest, Francesco didn't need much of it thanks to his 10 years of involvement in Mozilla as a contributor :))
  • A lot of work to adjust my processes to work with my new colleague Francesco Lodolo (also an old-timer in the community; he is the Italian Firefox localizer). Kudos to Francesco for helping me with all of these projects! Now I can go on holiday knowing that I have a good backup :)

French community involvement

  • In the new Mozilla Paris office, I organized a meeting with the LinuxFR admins, because I think it's important to work with the rest of the open source ecosystem
  • With Julien Wajsberg (Gaia developer), we organized a one-day meeting with the Dotclear community, a popular (and purely not-for-profit) blogging platform in France, an alternative to WordPress, because we think it's important to work with projects that build software allowing people to create content on the Web
  • Preparation of more open source events in the Paris office
  • We migrated our server (hosting Transvision, womoz.org, mozfr.org...) to the latest Debian stable, which finally brings us a decent modern version of PHP (5.4). We grew our admin community by two people, Ludo and Geb :). Our server flies!

In a nutshell, a very busy quarter! If you want to speak about some of it with me, I will be at the Mozilla Summit in Brussels this week :)

Pascal ChevrelMy Q2-2014 report

Summary of what I did last quarter (excluding regular l10n-drivers work such as patch reviews, pushes to production, meetings and maintenance of past projects).

Australis release

At the end of April we shipped Firefox 29, our first major redesign of the Firefox user interface since Firefox 4 (released in 2011). Its code name was Australis, and it meant replacing a lot of content on mozilla.org to introduce the new UI and the features that go with it. It also meant we were able to delete a lot of old content that had become obsolete or was duplicated on our support site.

Since this was a major UI change, we decided to show an interactive tour of the new UI to both new users and existing users upgrading to the new version. That tour was fully localized in a few weeks' time in close to 70 languages, which represents 97.5% of our user base. For the last locales not ready on time, we either decided to show a partially translated site (some locales had translated almost everything, or the untranslated strings were not very visible to most users, such as alternative text for images read by screen readers), or let the page fall back to the best available language (like Occitan falling back to French, for example).

Mozilla.org was also updated with 6 new product pages replacing a lot of old content as well as updates to several existing pages. The whole site was fully ready for the launch with 60 languages 100% ready and 20 partially ready, all that done in a bit less than 4 weeks, parallel to the webdev integration work.

I am happy to say that thanks to our webdev team, our amazing l10n community, and the help of my colleagues Francesco Lodolo (also the Italian localizer) and my intern Théo Chevalier (also the French localizer), we were able not only to offer a great upgrade experience for nearly all of our user base, but also to clean up a lot of old content, fix many bugs and prepare the site, from an l10n perspective, for the upcoming releases of our products.

Today, for a big locale covering all of our products and activities, mozilla.org represents about 2,000 strings to translate and maintain (+500 since Q1); for a smaller locale, it is about 800 strings (+200 since Q1). This quarter saw a significant bump in the number of strings added across all locales, but that was closely tied to the Australis launch; we shouldn't see such a rise in strings impacting all locales in the coming quarters.

Transvision releases

Last quarter we did two releases of Transvision with several features targeting our three audiences: localizers, localization tool authors, and current or potential Transvision developers.

For our localizers, I worked on a couple of features. One is quick filtering of search results per component for the desktop repositories (you search for 'home' and, with one click, can filter the results for the browser, mail or calendar, for example). The other is search suggestions when your search yields no results, offering the best similar matches ("your search for 'lookmark' yielded no result, maybe you were searching for 'Bookmark'?").

For the localization tools community (software or web apps like Pontoon, Mozilla Translator, Babelzilla, OmegaT plugins...), I rewrote our old JSON API entirely and extended it to provide more services. The old API was initially created for our own purposes and basically just exposed our search results as a JSON feed on our most popular views. Tools started using it a couple of years ago and we also got requests for API changes from those tool makers, so it was time to rewrite it entirely to make it scalable. Since we don't want to break anybody's workflow, all the old API calls now redirect to the new ones. One significant new service in the API is a translation memory query that returns results with a quality index based on the Levenshtein distance to the searched terms. You can get more information on the new API in our documentation.
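As a rough illustration of that kind of scoring (this is a sketch, not Transvision's actual implementation), a quality index can be derived from the Levenshtein distance like this:

```python
# Illustrative sketch (not Transvision's actual code): scoring
# translation-memory matches with a quality index derived from
# the Levenshtein distance between the query and a candidate.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]

def quality_index(query: str, candidate: str) -> float:
    """100.0 means identical (case-insensitively); lower means further away."""
    distance = levenshtein(query.lower(), candidate.lower())
    longest = max(len(query), len(candidate)) or 1
    return round(100 * (1 - distance / longest), 1)
```

For example, `quality_index("lookmark", "Bookmark")` scores 87.5: one substitution over eight characters.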

I also worked on improving our internal workflow and making it easier for potential developers wanting to hack on Transvision to install and run it locally. We now do continuous integration with Travis CI (all of our unit tests are run on each commit and pull request, in PHP 5.4 and 5.5 environments), we have made a lot of improvements to our unit test suite and coverage, we expose peak memory usage and time per request on all views so as to catch performance problems early, and we now have a "dev" mode that gets Transvision installed and running on the PHP development server in a matter of minutes instead of the hours required for a real production setup. One of the blockers for new developers was the time required to install Transvision locally: since it is a spidering tool looking for localized strings in Mozilla source repositories, it first needed to clone all the repositories it indexes (Mercurial/Git/SVN), which is about 20GB of data and takes hours even with a fast connection. We now provide a snapshot of the final extracted data (still 400MB ;)) every 6 hours, which the dev install mode uses.
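A Travis CI setup along those lines can be as small as the following .travis.yml sketch (illustrative, not the project's actual file):

```yaml
# Sketch of a .travis.yml matching the setup described: run the
# unit test suite on each commit/PR against PHP 5.4 and 5.5.
language: php
php:
  - "5.4"
  - "5.5"
script: phpunit
```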

Check the release notes for 3.3 and 3.4 to see what other features were added by the team (e.g. on-demand TMX generation or the dynamic Gaia comparison view added by Théo, my intern).

Web dashboard / Langchecker

The main improvement I brought to the web dashboard this quarter is probably the deadline field added to all of our .lang files, which lets us better communicate the urgency of projects and gives localizers an extra parameter to prioritize their work.

Théo's first project for his internship was to build a 'project' view on the web dashboard that we can use to get an overview of the translation of a set of pages/files. This was used for the Australis release (e.g. http://l10n.mozilla-community.org/webdashboard/?project=australis_all) but can be used for any other project we want to define; one example is the localization of two Android add-ons I did for the World Cup, which we tracked with .lang files.

We brought other improvements to our maintenance scripts, for example the ability to "bulk activate" a page for all the locales that are ready; we improved our Locamotion import scripts, started adding unit tests, etc. Generally speaking, the web dashboard keeps improving regularly since I rewrote it last quarter, and we regularly experiment with using it for more projects, especially projects which don't fit the usual web/product categories but still need tracking. I am also pretty happy that I now co-own the dashboard with Francesco, who brings his own ideas and code to streamline our processes.

Théo's internship

I mentioned it before: our main French localizer, Théo Chevalier, is doing an internship with me and Delphine Lebédel as mentors. This is the internship that concludes the 3rd year of his engineering studies (in a five-year curriculum). He is based in Mountain View, started in early April, and will be with us until late July.

He is basically working on almost all of the projects Delphine, Flod and I work on.

So far, apart from regular work as an l10n-driver, he has worked with me on three projects: the web dashboard projects view, building TMX files on demand in Transvision, and the Firefox Nightly localized page on mozilla.org. I haven't talked about this last project yet, but he blogged about it recently. In short, the first page shown to users of localized builds of Firefox Nightly can now be localized, and by localized we don't just mean translated: there is a community block managed by the local community inviting Nightly users to join their local team "on the ground". So far we have this page in French, Italian, German and Czech; if your locale's workflow is to translate mozilla-central first, this is a good tool for reaching a potential technical audience and growing your community.


This quarter, I found 7 new localizers (2 French, 1 Marathi, 2 Portuguese/Portugal, 1 Greek, 1 Albanian) to work with me, essentially on mozilla.org content. One of them, Nicolas Delebeque, took the lead on the Australis launch and coordinated the French l10n team, since Théo, our locale leader for French, was starting his internship at Mozilla.

For Transvision, 4 people in the French community (after all, Transvision was created by them initially ;)) expressed interest or sent small patches to the project. Maybe all the effort we put into making the application easy to install and hack is starting to pay off; we'll probably see in Q3/Q4 :)

I spent some time trying to help rebuild the Portugal community, which is now 5 people (instead of 2 before). We recently resurrected the mozilla.pt domain name to actually point to a server, the MozFR one that already hosts the French community and WoMoz (having the French community help the Portuguese one is cool, BTW). A mailing list for Portugal was created (accessible also via NNTP and Google Groups), along with the #mozilla-portugal IRC channel. This is a start; I hope to have time in Q3 to help launch a real Portugal site and help them grow beyond localization, because I think that communities focused on only one activity have no room to grow or renew themselves (you also need coding, QA, events, marketing...).

I also started looking at Babelzilla's rewrite project for their aging platform (https://github.com/BabelZilla/WTS/) to see if I can help Jürgen, the only Babelzilla dev, build a community around the project. Maybe some of the experience I gained through Transvision will transfer to Babelzilla (Transvision was a one-man effort; now 4 people commit regularly out of 10 committers). We'll see in the coming quarters if I can help somehow; so far I have only had time to install the app locally.

In terms of events, this was a quiet quarter. Apart from our l10n-drivers work week, the only localization event I attended was the localization sprint over a whole weekend in the Paris office. Clarista, the main organizer, blogged about it in French. Many thanks to her and the whole community that came over; it was very productive, and we will definitely do it again, maybe as a recurring event.


This quarter was a good balance between shipping, tooling and community building. The beginning of the quarter was really focused on shipping Australis, and as usual with big releases, we created scripts and tools that will help us ship better and faster in the future. Tooling, and in particular Transvision (probably my main project now), took most of my time in the second part of the quarter.

Community building was, as usual, a constant in my work. The one thing I find more difficult now in this area is finding time for it in the evenings and weekends (when most potential volunteers are available for synchronous communication), because it conflicts a bit with my family life. I am trying to recruit more efficiently using asynchronous communication tools (email, forums…), but as long as I can get 5 to 10 additional people per quarter to work with me, scaling our projects should be fine.

Pascal ChevrelMy 2009 yearly report

I am not great at blogging in English and communicating about my work, so I thought that publishing my yearly report would compensate for that ;)

All in all, it has been a busy year; nobody in the localization drivers team or among our localization teams had time to get bored: lots of product releases, lots of pages, lots of travel and events too. I am listing below what I directly led and/or participated in, leaving out projects where I only provided minor help (usually to my colleagues Stas and Delphine).


  • 2 major releases: Firefox 3.5 and Thunderbird 3 (with a new multilingual Mozilla Messaging website)
  • 26 other releases (maintenance, beta and RC releases)

Mozilla Europe website(s):

  • 3 new locales: Serbian, Bulgarian and Swedish; our geographic coverage of Europe is now almost complete
  • New content for 3.5 release, minor releases and many side projects
  • Major cleanups of content and code for easier maintenance (especially maintenance releases) and more features (HTML5 support, per-locale menu navigation, referrer hints on visits for better locale detection...)
  • Site now sharing same metrics application as other mozilla sites
  • More per country news items than previous years (events, new community sites, community meetings...)
  • 46 blog posts written by our European community on blogs.mozilla-europe.org
  • Our events management web application was used for 10 European events (I created it back in summer 2008)

Mozilla.com website

  • We now have a localized landing page for our 74 locales, on top of up-to-date in-product pages
  • Geolocation page for all locales
  • 3.0 and 3.5 major updates offered for all locales
  • Localized beta download pages to encourage beta-testing of non-English versions of Firefox
  • Better code for our localized pages (better right-to-left, language switching, simpler templates...)
  • Whatsnew/firstrun pages now warn users in their language if their Flash plugin is outdated (for better security and stability)
  • Lots of content, css, graphics updates all along the year, everywhere
  • Firefox 3.6 in-product pages (firstrun, whatsnew, major update) localization underway; the plugincheck page is almost technically ready for localization
  • Fennec pages being localized for 1.0 final

Marketing Sites made multilingual

Mozilla Education:

  • Gave a lecture at a university in Madrid about open source, the Mozilla project and community management.
  • MMTC Madrid one-week sprint in July: gave Mozilla classes with Paul Rouget and Vivien Nicolas to 30 students (evaluation TBD)
  • Organized the CoMeTe project at Evry University, France, in October, with Fabien Cazenave and Laurent Jouanneau as teachers

Community work

  • Found new localizers for a dozen locales; helped some of them create blogs, community sites and local events
  • Many community meetings, IRC or IRL
  • Participated in Firefox 3.5 party in Madrid
  • I have been on Twitter since May, communicating about my work and Mozilla in Europe
  • Organized a theming contest in collaboration with the Dotclear project for our community blog, won by Marie Alhomme
  • Created, with Julia, a Mozilla planet for French speakers
  • Lots of web l10n QA with Delphine, plus some personal QA work on 3.6 looking for Linux-specific Firefox bugs
  • Went to 21 events (7 of them internal Mozilla events) such as FOSDEM, MozCamps Chile and Prague, Ubuntu parties, Solutions Linux, a W3C event, the Firefox 5-year anniversary, the Firefox 3.5 party in Madrid, JDLL, the Geneva community meetup... Lots of time abroad and travelling.
  • Blogging in French about the Mozilla project and its place in the FLOSS ecosystem, current focus on Mozilla QA and how to get involved in the project.


  • Some documentation work (mostly on QA of localized pages)
  • Many updates to the webdashboard
  • Helped Delphine set up the WoMoz website and gave general advice on community building
  • Several press interviews for Spain as well as conferences given about the Mozilla project
  • Started this week with Stas and Patrick the localization work needed for the Browser Choice Screen in Windows for Europe
  • Lots of technical self-teaching while building projects; I even wrote my first Jetpack extension this week, yay!
  • A new espresso machine :)

Happy new year 2010 to all mozillians and FOSS lovers in the world :)

Pascal ChevrelTransliterating Serbian Cyrillic to Serbian Latin on Linux with PHP

Mozilla has been shipping Firefox in Serbian for many years, and we ship it in Cyrillic script; our software, our sites and our documentation are all in Cyrillic for Serbian.

You may not know it (especially if you are not European), but Serbian can be written in both Cyrillic and Latin scripts; people live with the two writing systems, a phenomenon called synchronic digraphia.

I was wondering if it would be easy to create a version of Firefox or Firefox OS in Latin script, and since our l10n community server just got an upgrade and now has PHP 5.4, I played a bit with the Transliterator class introduced in that version, which uses the ICU library.

Basically, it works, and it works well. With one caveat though: I found out that the ICU library shipped with Linux distros is old and exposes a bug in Serbian transliteration that was fixed in more recent ICU releases.

How does it work? Here is a code example:

$source = 'Завирите у будућност';
// Create a transliterator for the Serbian-Latin/BGN transform
$t = Transliterator::create('Serbian-Latin/BGN');
print "Serbian (Cyrillic): $source <br>";
print "Serbian (Latin): {$t->transliterate($source)}";

And here is the output:

Serbian (Cyrillic): Завирите у будућност
Serbian (Latin): Zavirite u budućnost

The bug I mentioned earlier is that the Cyrillic letter ј is systematically converted to an uppercase J, even when the letter is inside a word and should be lowercase.

For example, this string: Најгледанији сајтови
should be transliterated to: Najgledaniji sajtovi
but my script transliterated it to: NaJgledaniJi saJtovi
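Outside of ICU, the expected mapping is simple enough to sketch by hand. The following Python sketch (an illustration of the correct behavior, not the ICU-based PHP approach used above) transliterates with a plain character map, keeping ј lowercase inside words:

```python
# Illustrative Python sketch of Serbian Cyrillic -> Latin
# transliteration (not the ICU-based PHP approach), showing the
# expected lowercase handling of ј inside words.
LOWER = {
    'а': 'a', 'б': 'b', 'в': 'v', 'г': 'g', 'д': 'd', 'ђ': 'đ',
    'е': 'e', 'ж': 'ž', 'з': 'z', 'и': 'i', 'ј': 'j', 'к': 'k',
    'л': 'l', 'љ': 'lj', 'м': 'm', 'н': 'n', 'њ': 'nj', 'о': 'o',
    'п': 'p', 'р': 'r', 'с': 's', 'т': 't', 'ћ': 'ć', 'у': 'u',
    'ф': 'f', 'х': 'h', 'ц': 'c', 'ч': 'č', 'џ': 'dž', 'ш': 'š',
}
# Uppercase Cyrillic letters map to capitalized Latin (Љ -> Lj, etc.)
TABLE = dict(LOWER)
TABLE.update({k.upper(): v.capitalize() for k, v in LOWER.items()})

def to_latin(text: str) -> str:
    """Transliterate Serbian Cyrillic text; other characters pass through."""
    return ''.join(TABLE.get(ch, ch) for ch in text)
```

With this map, `to_latin('Најгледанији сајтови')` gives the expected 'Najgledaniji sajtovi'.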

I filed a bug in the PHP ticket system and got an immediate response that my test script actually works on Windows. After some investigation by the PHP developer, it turned out that there is no bug on the PHP side: the bug is in the ICU library that ships with the OS, which happens to be version 48.x on Linux distros, while Windows enjoys the more recent version 50, and the ICU project itself is at version 51.2.

Unfortunately, I couldn't find any .deb package or PPA for Ubuntu proposing a more recent ICU version. Chris Coulson from Canonical pointed me to this ticket in Launchpad: [request] upgrade to icu 50, but it was unassigned.

As a consequence, I had to compile the newer ICU library myself to make it work. Fortunately, I could follow almost all the steps described in this post for a CentOS distro; I only had to adjust the php.ini locations (and also update the php.ini file for the development server) and restart Apache :)

So now I can easily transliterate a full repository from Cyrillic to Latin. I put a gist online with the full script doing the conversion of a repo, if you want to use it.

Soledad PenadesSpeaking at OneShotLondon NodeConf

“Just turn it into a node module,” and other mantras Edna taught me

The story of leaving behind a random mix of Python + php + bash + makefile + Scons scripts to totally embrace using Node, modules, standard callbacks, browserify, and friends to build toys that bleep and bloop with MIDI, WebGL and Web Audio.

As you can maybe deduce, this might not be your average super-expert Node.js talk, but a story of learning with a non-appointed mentor and a spontaneous community, improving and making the most out of Node.js, and how that informed and shaped the rest of my coding philosophy, both inside and outside of Mozilla.

I must confess that I’m really humbled and flattered to be amongst this amazing line up of true node experts.

UUUUUUUHHH THE EXPECTATIONS!—feeling a bit like an impostor now.

Next next Saturday 19th of July. See you there? :-)


Ludovic HirlimannThunderbird 31 coming soon to you and needs testing love

We just released the second beta of Thunderbird 31. Please help us improve Thunderbird quality by uncovering bugs now in Thunderbird 31 beta so that developers have time to fix them.

There are two ways you can help:

- Use Thunderbird 31 beta in your daily activities. For problems that you find, file a bug report that blocks our tracking bug 1008543.

- Use Thunderbird 31 beta to do formal testing. Use the Moztrap testing system: choose "run tests", find the Thunderbird product, and choose the 31 test run.

Visit https://etherpad.mozilla.org/tbird31testing for additional information, and to post your testing questions and results.

Thanks for contributing and helping!

Ludo for the QA team


Henrik SkupinFirefox Automation report – week 23/24 2014

In this post you can find an overview about the work happened in the Firefox Automation team during week 23 and 24.


To continue the training for Mozilla-related test frameworks, we held the 3rd automation training day on June 4th. This time fewer people attended, but we were still able to get a couple of tasks done on One and Done.

Something which had bothered us for a while is that no push_printurl hook was set up for our mozmill-tests repository. As a result, the landed changeset URL did not get printed to the console during landings. Henrik fixed that in bug 1010563, which allows easier copy & paste of the link into our bugs.

Our team started to work on the new continuous integration system for TPS tests. To be able to manage all the upcoming work ourselves, Henrik asked Jonathan Griffin to move the Coversheet repository from his own account to the Mozilla account. That was promptly done.

In week 24, specifically on June 11th, we had our last automation training day for Q2 2014. Given the low attendance, we might have to make some changes for future training days. One change might be to hold the training on another day of the week. Andreea will probably post updates on that soon.

Henrik was also working on getting some big updates out for Mozmill-CI. One of the most important blockers for us was the upgrade of Jenkins to the latest LTS release. With that a couple of issues got fixed, including the long delays in sending out emails for failed jobs. For more details see the full list of changes.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 23 and week 24.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 23 and week 24.

Chris PearceReducing Windows' background CPU load while building Firefox

If you're building Firefox on Windows 8 like I am, you might want to tweak the following settings to reduce the OS's background CPU load while you're building Firefox (some of these settings may also apply to Windows 7, but I haven't tested this):
  1. Add your src/object directory to Windows Defender's list of locations excluded from real-time scans. I realized that the "Antimalware Service Executable" was using up to 40% CPU during builds before I did this. You can add your src/objdir to the exclude list using the UI at: Windows Defender > Settings > Excluded files and locations.
  2. Remove your src/objdir from Windows' list of locations to be indexed. I actually did the inverse, and removed my home directory (which my src/objdir was inside) from the list of indexable locations and re-added the specific subdirectories in my home dir that I wanted indexed (Documents, etc) without re-adding my src/objdir.
Update, 11 July 2014: Recently the "Antimalware Service Executable" started hogging CPU again while building, so I added MSVC's cl.exe and link.exe to the list of "Excluded Processes" in Windows Defender > Settings, and that reduced the "Antimalware Service Executable"'s CPU usage while building.
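On machines where the Windows Defender PowerShell module is available, the same exclusions can also be scripted instead of clicking through the Settings UI (a sketch; the paths below are examples to replace with your own src/objdir):

```powershell
# Sketch: scripting the Defender exclusions described above.
# Run from an elevated PowerShell prompt; paths are examples.
Add-MpPreference -ExclusionPath "C:\mozilla\src", "C:\mozilla\objdir"
Add-MpPreference -ExclusionProcess "cl.exe", "link.exe"
```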

    Christian Heilmann[video+slides] FirefoxOS – HTML5 for a truly world-wide-web (Sapo Codebits 2014)

    Chris Heilmann at SAPO codebits
    Today the good people at Sapo released the video of my Codebits 2014 keynote.

    In this keynote, I talk about FirefoxOS and what it means in terms of bringing mobile web connectivity to the world. I explain how mobile technology is unfairly distributed and how closed environments prevent budding creators from releasing their first app. The slides are available on Slideshare as the video doesn’t show them.

    There’s also a screencast on YouTube.

    Since this event, Google announced their Android One project, and I am very much looking forward to seeing how this world-wide initiative plays out and gets more people connected.

    Photo by José P. Airosa @joseairosa

    Henrik SkupinFirefox Automation report – week 21/22 2014

    In this post you can find an overview about the work happened in the Firefox Automation team during week 21 and 22.


    To assist everyone in our community in learning more about test automation at Mozilla, we targeted 4 full-day automation training days from mid-May to mid-June. The first training day was planned for May 21st and went well. Lots of people were present and actively learning more about automation (https://quality.mozilla.org/2014/05/automation-training-day-may-21st-results/), especially about testing with Mozmill.

    To help community members get in touch with Mozmill testing a bit more easily, we also created a set of one-and-done tasks. Those start with easy tasks, like running Mozmill via the Mozmill Crowd extension, and end with creating your first simple Mozmill test.

    Something which hit us by surprise was that, with the release of Firefox 30.0b3, we no longer ran any automatically triggered Mozmill jobs in our CI. It took a bit of investigation, but Henrik finally found out that the problem was introduced when RelEng renamed the product from 'firefox' to 'Firefox'. A quick workaround fixed it temporarily, but for a long-term stable solution we might need a frozen API for build notifications via Mozilla Pulse.

    One of our goals in Q2 2014 is also to get our machines under the control of PuppetAgain. So Henrik started investigating the first steps and set up the base manifests needed for our nodes, along with the appropriate administrative accounts.

    The second automation training day, also planned by Andreea, took place on May 28th. Again a couple of people were present, and given their feedback on the one-and-done tasks, we fine-tuned them.

    Last but not least, Henrik set up the new Firefox-Automation-Contributor team, which finally allows us to assign people to specific issues. That was necessary because GitHub only lets you assign issues to known team members.

    Individual Updates

    For more granular updates of each individual team member please visit our weekly team etherpad for week 21 and week 22.

    Meeting Details

    If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 21 and week 22.

    Henrik GemalBringing SIMD to JavaScript

    In an exciting collaboration with Mozilla and Google, Intel is bringing SIMD to JavaScript. This makes it possible to develop new classes of compute-intensive applications such as games and media processing, all in JavaScript, without the need to rely on any native plugins or non-portable native code. SIMD.js can run anywhere JavaScript runs. It will, however, run a lot faster and more power-efficiently on platforms that support SIMD. This includes both client platforms (browsers and hybrid mobile HTML5 apps) as well as servers that run JavaScript, for example through the Node.js V8 engine.


    Nicholas NethercoteDipping my toes in the Servo waters

    I’m very interested in Rust and Servo, and have been following their development closely. I wanted to actually do some coding in Rust, so I decided to start making small contributions to Servo.

    At this point I have landed two changes in the tree — one to add very basic memory measurements for Linux, and the other for Mac — and I thought it might be interesting for people to hear about the basics of contributing. So here’s a collection of impressions and thoughts in no particular order.

    Getting the code and building Servo was amazingly easy. The instructions actually worked first time on both Ubuntu and Mac! Basically it’s just apt-get install (on Ubuntu) or port install (on Mac), git clone, configure, and make. The configure step takes a while because it downloads an appropriate version of Rust, but that’s fine; I was expecting to have to install the appropriate version of Rust first so I was pleasantly surprised.

    Once you build it, Servo is very bare-boned. Here’s a screenshot.


    There is no address bar, or menu bar, or chrome of any kind. You simply choose which page you want to display from the command line when you start Servo. The memory profiling I implemented is enabled by using the -m option, which causes memory measurements to be periodically printed to the console.

    Programming in Rust is interesting. I’m not the first person to observe that, compared to C++, it takes longer to get your code past the compiler, but it’s more likely to work once you do. It reminds me a bit of my years programming in Mercury (imagine Haskell, but strict, and with a Prolog ancestry). Discriminated unions, pattern matching, etc. In particular, you have to provide code to handle all the error cases in place. Good stuff, in my opinion.

    One thing I didn’t expect but makes sense in hindsight: Servo has seg faulted for me a few times. Rust is memory-safe, and so shouldn’t crash like this. But Servo relies on numerous libraries written in C and/or C++, and that’s where the crashes originated.

    The Rust docs are a mixed bag. Libraries are pretty well-documented, but I haven’t seen a general language guide that really leaves me feeling like I understand a decent fraction of the language. (Most recently I read Rust By Example.) This is meant to be an observation rather than a complaint; I know that it’s a pre-1.0 language, and I’m aware that Steve Klabnik is now being paid by Mozilla to actively improve the docs, and I look forward to those improvements.

    The spotty documentation isn’t such a problem, though, because the people in the #rust and #servo IRC channels are fantastic. When I learned Python last year I found that simply typing “how to do X in Python” into Google almost always led to a Stack Overflow page with a good answer. That’s not the case for Rust, because it’s too new, but the IRC channels are almost as good.

    The code is hosted on GitHub, and the project uses a typical pull request model. I’m not a particularly big fan of git — to me it feels like a Swiss Army light-sabre with a thousand buttons on the handle, and I’m confident using about ten of those buttons. And I’m also not a fan of major Mozilla projects being hosted on GitHub… but that’s a discussion for another time. Nonetheless, I’m sufficiently used to the GitHub workflow from working on pdf.js that this side of things has been quite straightforward.

    Overall, it’s been quite a pleasant experience, and I look forward to gradually helping build up the memory profiling infrastructure over time.

    John FordNode.js modules to talk to Treestatus and Build API

    Have you ever wanted to talk to treestatus.mozilla.org or the buildapi in your node project?  Well, now you can!  I needed to be able to talk to these two systems so I wrote a couple of wrappers for them today.  If you look at the code, they're remarkably similar.  That's because both of them are basically identical.  The API methods are defined in a basic JavaScript object, which is used to generate the functions of the API wrapper.  Because this makes it a little harder to figure out how to use the modules, I also wrote a function that automatically writes out a README.md file with all the functions and their parameters.
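The idea is roughly this (a sketch, not the actual module code; the endpoint paths and names are invented for illustration): the API is declared as plain data, and the wrapper's functions are generated from that declaration.

```javascript
// API methods declared as plain data; each entry becomes a function.
var api = {
  treeStatus: { path: '/trees/:tree', params: ['tree'] },
  allTrees:   { path: '/trees',       params: [] }
};

// Generate a client whose methods are built from the spec. A real
// wrapper would issue an HTTP request; here we just return the URL
// that would be fetched.
function buildClient(baseUrl, spec) {
  var client = {};
  Object.keys(spec).forEach(function (name) {
    client[name] = function () {
      var args = Array.prototype.slice.call(arguments);
      var path = spec[name].path;
      spec[name].params.forEach(function (param, i) {
        path = path.replace(':' + param, args[i]);
      });
      return baseUrl + path;
    };
  });
  return client;
}

var treestatus = buildClient('https://treestatus.mozilla.org', api);
console.log(treestatus.treeStatus('mozilla-central'));
```

The same spec object can also drive a README generator, since it already lists every function and its parameters.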

    Proof that it works:

    They aren't perfect, but they work well.  What I really should do is split the API data into a JSON file, create a single module that knows how to consume those JSON files and build the API objects and docs, then have both of these APIs use that module to do their work.  Wanna do that?  I swear, I'll look at pull requests!

    Edit: Forgot to actually link to the code.  It's here and here.

    Patrick ClokeMentoring and Time

    No, this is not about being late to places; it’s about respecting people’s time. I won’t go deep into why this is important, as Michael Haggerty wrote an awesome article on this. His thoughts boiled down to a single line of advice:


    I think this applies to any type of mentoring, not only open source work but any formal or informal mentoring! This advice isn’t meant just for GSoC students, interns, or new employees; it also includes things I’d like to remind myself to do when someone is helping me.

    To make this sound positive, I’d reword the above advice as:

    Respect other people’s time!

    Someone is willing to help you, so assume some good faith, but help them help you! Some actions to focus on:

    • Ask focused questions! If you do not understand an answer, do not re-ask the same question, but ask a follow-up question. Show you’ve researched the original answer and attempted to understand it. Write sample code, play with it, etc. If you think the answer given doesn’t apply to your question, reword your question: your mentor probably did not understand.
    • Be cognizant of timezones: if you’d like a question answered (in realtime), ask it when the person is awake! (And this includes realizing if they have just woken up or are going to bed.)
    • Your mentor may not have the context you do: they might be helping many people at once, or even working on something totally different than you! Try not to get frustrated if you have to explain your context to them multiple times or have to clarify your question. You are living and breathing the code you’re working in; they are not.
    • Don’t be afraid to share code. It’s much easier to ask a question when there’s a specific example in front of you. Be specific and don’t talk theoretically.
    • Don’t be upset if you’re asked to change code (e.g. receive an r-)! Part of helping you to grow is telling you what you’re doing wrong.
    • Working remotely is hard. It requires effort to build a level of trust between people. Don’t just assume it will come in time, but work at it and try to get to know and understand your mentor.
    • Quickly respond to both feedback and questions. Your mentor is taking their precious time to help you. If they ask you a question or ask something of you, do it ASAP. If you can’t answer their question immediately, at least let them know you received it and will soon look at it.
    • If there are multiple people helping you, assume that they communicate (without your knowledge). Don’t…
      • …try to get each of them to do separate parts of a project for you.
      • …ask the same question to multiple people hoping for different answers.

    The above is a lot to consider. I know that I have a tendency to do some of the above. Using your mentor’s time efficiently will not only make your mentor happy, but it will probably cause them to want to give you more of their time.

    Mentoring is also hard and a skill to practice. Although I’ve talked a lot about what a mentee needs to do, it is also important that a mentor makes h(im|er)self available and open. A few thoughts on interacting as a mentor:

    • Be cognizant of culture and language (as in, the level at which a mentor and mentee share a common language). In particular, colloquialisms should be avoided whenever possible, at least until a level of trust is reached.
    • Be tactful when giving feedback. Thank people for submitting a patch, give good, actionable feedback quickly. Concentrate more on overall code design and algorithms than nits. (Maybe even point out nits, but fix them yourself for an initial patch.)

    Byron Jonesusing “bugmail filtering” to exclude notifications you don’t want

    a common problem with bugzilla emails (aka bugmail) is there’s too much of it.  if you are involved in a bug or watching a component you receive notification of all changes made to those bugs, even changes you have no interest in.

    earlier this week we pushed a new feature to bugzilla.mozilla.org : bugmail filtering.


    this feature is available on the “bugmail filtering” tab on the “user preference” page.


    there are many reasons why bugzilla may send you notification of a change to a bug:

    • you reported the bug
    • the bug is assigned to you
    • you are the qa-contact for the bug
    • you are on the cc list
    • you are watching the bug’s product/component
    • you are watching another user who received notification of the bug
    • you are a “global watcher”

    dealing with all that bugmail can be time consuming.  one way to address this issue is to use the x-headers present in every bugmail to categorise and separate bugmail into different folders in your inbox.  unfortunately this option isn’t available to everyone (eg. gmail users still cannot filter on any email header).

    bugmail filtering allows you to tell bugzilla to notify you only if it’s a change that you’re interested in.

    for example, you can say:

    don’t send me an email when the qa-whiteboard field is changed unless the bug is assigned to me


    if multiple filters are applicable to the same bug change, include filters override exclude filters.  this interplay allows you to write filters to express “don’t send me an email unless …”

    don’t send me an email for developer tools bugs that i’m CC’ed on unless the bug’s status is changed

    • first, exclude all developer tools emails:


    • then override the exclusion with an inclusion for just the status changes:


    Filed under: bmo

    Pete MooreWeekly review 2014-07-09

    This week I am on build duty.

    At the tail end of last week, I managed to finish off the l10n patches and sent them over to Aki to get them reviewed. He has now reviewed them, and the next step is to process his review comments.

    Other than this, I raised a bug about refactoring the beagle config and created a patch, and am currently in discussions with Aki about it.

    I’m still processing the loaners for Joel Maher (thanks Coop for taking care of the windows loaners) - I hit some problems along the way setting up vnc on Mountain Lion - working through this currently (I also involved Armen to get his expertise).

    After the loaners are done, I also have my own queue of optimisations that I’d like to look at that are related to build duty (open bugs).

    Niko MatsakisAn experimental new type inference scheme for Rust

    While on vacation, I’ve been working on an alternate type inference scheme for rustc. (Actually, I got it 99% working on the plane, and have been slowly poking at it ever since.) This scheme simplifies the code of the type inferencer dramatically and (I think) helps to match our intuitions (as I will explain). It is, however, somewhat less flexible than the existing inference scheme, though all of rustc and all the libraries compile without any changes. The scheme will (I believe) make it much simpler to implement proper one-way matching for traits (explained later).

    Note: Changing the type inference scheme doesn’t really mean much to end users. Roughly the same set of Rust code still compiles. So this post is really mostly of interest to rustc implementors.

    The new scheme in a nutshell

    The new scheme is fairly simple. It is based on the observation that most subtyping in Rust arises from lifetimes (though the scheme is extensible to other possible kinds of subtyping, e.g. virtual structs). It abandons unification and the H-M infrastructure and takes a different approach: when a type variable V is first related to some type T, we don’t set the value of V to T directly. Instead, we say that V is equal to some type U where U is derived by replacing all lifetimes in T with lifetime variables. We then relate T and U appropriately.

    Let me give an example. Here are two variables whose type must be inferred:

    'a: { // 'a --> name of block's lifetime
        let x = 3;
        let y = &x;
    }
    Let’s say that the type of x is $X and the type of y is $Y, where $X and $Y are both inference variables. In that case, the first assignment generates the constraint that int <: $X and the second generates the constraint that &'a $X <: $Y. To resolve the first constraint, we would set $X directly to int. This is because there are no lifetimes in the type int. To resolve the second constraint, we would set $Y to &'0 int – here '0 represents a fresh lifetime variable. We would then say that &'a int <: &'0 int, which in turn implies that '0 <= 'a. After lifetime inference is complete, the types of x and y would be int and &'a int as expected.
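The instantiation step in the example above can be sketched as follows (a toy model with invented names, not rustc code): when relating a concrete type to a type variable, derive U from T by replacing every lifetime in T with a fresh lifetime variable, recording the constraint between old and new lifetimes along the way.

```rust
// Toy model of types with lifetimes: int and &'x T only.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Ref(Lifetime, Box<Ty>), // &'x T
}

#[derive(Clone, Debug, PartialEq)]
enum Lifetime {
    Named(&'static str), // a named lifetime like 'a
    Var(u32),            // a lifetime inference variable like '0
}

struct Infer {
    next_lt: u32,
    // Recorded lifetime constraints, e.g. ('0, 'a) meaning '0 <= 'a.
    constraints: Vec<(Lifetime, Lifetime)>,
}

impl Infer {
    fn fresh_lt(&mut self) -> Lifetime {
        let v = Lifetime::Var(self.next_lt);
        self.next_lt += 1;
        v
    }

    // Derive U from T by replacing every lifetime with a fresh
    // variable, recording the relation between old and new lifetimes.
    fn instantiate(&mut self, t: &Ty) -> Ty {
        match t {
            Ty::Int => Ty::Int,
            Ty::Ref(lt, inner) => {
                let fresh = self.fresh_lt();
                self.constraints.push((fresh.clone(), lt.clone()));
                let inner = self.instantiate(inner);
                Ty::Ref(fresh, Box::new(inner))
            }
        }
    }
}

fn main() {
    let mut cx = Infer { next_lt: 0, constraints: vec![] };
    // Resolving &'a int <: $Y: set $Y to &'0 int and record '0 <= 'a.
    let t = Ty::Ref(Lifetime::Named("a"), Box::new(Ty::Int));
    let u = cx.instantiate(&t);
    println!("{:?}", u);
    println!("{:?}", cx.constraints);
}
```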

    Without unification, you might wonder what happens when two type variables are related that have not yet been associated with any concrete type. This is actually somewhat challenging to engineer, but it certainly does happen. For example, there might be some code like:

    let mut x;        // type: $X
    let mut y = None; // type: Option<$0>
    loop {
        if y.is_some() {
            x = y.unwrap();
        }
    }
    Here, at the point where we process x = y.unwrap(), we do not yet know the values of either $X or $0. We can say that the type of y.unwrap() will be $0 but we must now process the constraint that $0 <: $X. We do this by simply keeping a list of outstanding constraints. So neither $0 nor $X would (yet) be assigned a specific type, but we’d remember that they were related. Then, later, when either $0 or $X is set to some specific type T, we can go ahead and instantiate the other with U, where U is again derived from T by replacing all lifetimes with lifetime variables. Then we can relate T and U appropriately.

    If we wanted to extend the scheme to handle more kinds of inference beyond lifetimes, it can be done by adding new kinds of inference variables. For example, if we wanted to support subtyping between structs, we might add struct variables.

    What advantages does this scheme have to offer?

    The primary advantage of this scheme is that it is easier to think about for us compiler engineers. Every type variable is either set – in which case its type is known precisely – or unset – in which case its type is not known at all. In the current scheme, we track a lower- and upper-bound over time. This makes it hard to know just how much is really known about a type. Certainly I know that when I think about inference I still think of the state of a variable as a binary thing, even though I know that really it’s something which evolves.

    What prompted me to consider this redesign was the need to support one-way matching as part of trait resolution. One-way matching is basically a way of saying: is there any substitution S such that T <: S(U) (whereas normal matching searches for a substitution applied to both sides, like S(T) <: S(U)).

    One-way matching is very complicated to support in the current inference scheme: after all, if there are type variables that appear in T or U which are partially constrained, we only know bounds on their eventual type. In practice, these bounds actually tell us a lot: for example, if a type variable has a lower bound of int, it actually tells us that the type variable is int, since in Rust’s type system there are no super- of sub-types of int. However, encoding this sort of knowledge is rather complex – and ultimately amounts to precisely the same thing as this new inference scheme.

    Another advantage is that there are various places in Rust’s type checker where we query the current state of a type variable and make decisions as a result. For example, when processing *x, if the type of x is a type variable T, we would want to know the current state of T – is T known to be something inherently derefable (like &U or &mut U) or a struct that must implement the Deref trait? The current APIs for doing this bother me because they expose the bounds of U – but those bounds can change over time. This seems “risky” to me, since it’s only sound for us to examine those bounds if we either (a) freeze the type of T or (b) are certain that we examine properties of the bound that will not change. This problem does not exist in the new inference scheme: anything that might change over time is abstracted into a new inference variable of its own.

    What are the disadvantages?

    One form of subtyping that exists in Rust is not amenable to this inference. It has to do with universal quantification and function types. Function types that are “more polymorphic” can be subtypes of functions that are “less polymorphic”. For example, if I have a function type like <'a> fn(&'a T) -> &'a uint, this indicates a function that takes a reference to T with any lifetime 'a and returns a reference to a uint with that same lifetime. This is a subtype of the function type fn(&'b T) -> &'b uint. While these two function types look similar, they are quite different: the former accepts a reference with any lifetime but the latter accepts only a reference with the specific lifetime 'b.

    What this means is that today if you have a variable that is assigned many times from functions with varying amounts of polymorphism, we will generally infer its type correctly:

    fn example<'b>(..) {
        let foo: <'a> |&'a T| -> &'a int = ...;
        let bar: |&'b T| -> &'b int = ...;
        let mut v;
        v = foo;
        v = bar;
        // type of v is inferred to be |&'b T| -> &'b int
    }
    However, this will not work in the newer scheme. Type ascription of some form would be required. As you can imagine, this is not a very common problem, and it did not arise in any existing code.

    (I believe that there are situations in which the newer scheme infers correct types and the older scheme will fail to compile; however, I was unable to come up with a good example.)

    How does it perform?

    I haven’t done extensive measurements. The newer scheme creates a lot of region variables. It seems to perform roughly the same as the older scheme, perhaps a bit slower – optimizing region inference may be able to help.

    Doug BelshawWhy Mozilla cares about Web Literacy [whitepaper]

    One of my responsibilities as Web Literacy Lead at Mozilla is to provide some kind of theoretical/conceptual underpinning for why we do what we do. Since the start of the year, along with Karen Smith and some members of the community, I’ve been working on a Whitepaper entitled Why Mozilla cares about Web Literacy.

    Webmaker whitepaper

    The thing that took time wasn’t really the writing of it – Karen (a post-doc researcher) and I are used to knocking out words quickly – but the re-scoping and design of it. The latter is extremely important as this will serve as a template for future whitepapers. We were heavily influenced by P2PU’s reports around assessment, but used our own Makerstrap styling. I’d like to thank FuzzyFox for all his work around this!

    Thanks also to all those colleagues and community members who gave feedback on earlier drafts of the whitepaper. It’s available under a Creative Commons Attribution 4.0 International license and you can fork/improve the template via the GitHub repository. We’re planning for the next whitepaper to be around learning pathways. Once that’s published, we’ll ensure there’s a friendlier way to access them - perhaps via a subdomain of webmaker.org.

    Questions? Comments? I’m @dajbelshaw and you can email me at doug@mozillafoundation.org.

    Gervase MarkhamThe Latest Airport Security Theatre

    All passengers flying into or out of the UK are being advised to ensure electronic and electrical devices in hand luggage are sufficiently charged to be switched on.

    All electronic devices? Including phones, right? So you must be concerned that something dangerous could be concealed inside a package the size of a phone. And including laptops, right? Which are more than big enough to contain said dangerous phone-sized electronics package in the CD drive bay, or the PCMCIA slot, and still work perfectly. Or, the evilness could even be occupying 90% of the body of the laptop, while the other 10% is taken up by an actual phone wired to the display and the power button which shows a pretty picture when the laptop is “switched on”.

    Or are the security people going to make us all run 3 applications of their choice and take a selfie using the onboard camera to demonstrate that the device is actually fully working, and not just showing a static image?

    I can’t see this as being difficult to engineer around. And meanwhile, it will cause even more problems trying to find charging points in airports. Particularly for people who are transferring from one long flight to another.

    Just BrowsingSpeeding Up Grunt Builds Using Watchify

    Grunt is a great tool for automating boring tasks. Browserify is a magical bundler that allows you to require your modules, Node.js style, in your frontend code.

    One of the most useful plugins for Grunt is grunt-contrib-watch. Simply put, it watches your file system and runs predefined commands whenever a change occurs (i.e. file was deleted, added or updated). This comes in especially handy when you want to run your unit tests every time anything changes.

    Browserify works by parsing your JS files and scanning for require and exports statements. Then it determines the dependency order of the modules and concatenates them together to create a “superbundle” JS file. Hence every time your code changes, you need to tell browserify to rebuild your superbundle or your changes will not be picked up by the browser.

    Watching Your Build: the Naive Approach

    When connecting Grunt with Browserify, it might be tempting to do something like this:

    watch: {
      sources: {
        files: [
          '<%= srcDir %>/**/*.coffee',
          '<%= srcDir %>/**/*.js'
        ],
        tasks: ['browserify', 'unittest']
      }
    }
    And it would work. All your code would be parsed and processed and turned into a superbundle when a file changes. But there’s a problem here. Can you spot it? Hint: all your code would be parsed and processed and turned into a superbundle.

    Yep. Sloooooow.

    On my MacBook Air with SSD and 8GB of RAM, this takes about 4 seconds (and that’s after I made all the big dependencies such as jQuery or Angular external). That’s a long time to wait for feedback from your tests, but not long enough to go grab a coffee. The annoying kind of long, in other words. We can do better.

    Enter Watchify

    Watchify is to Browserify what grunt-contrib-watch is to Grunt. It watches the filesystem for you and recompiles the bundle when a change is detected. There is an important twist, however, and that is caching. Watchify remembers the parsed JS files, making the rebuild much faster (about ten times faster in my case).
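If you drive Browserify through the grunt-browserify plugin, switching to Watchify is, as far as I can tell, a matter of setting its watch option (the target name and paths here are illustrative):

```javascript
browserify: {
  app: {
    src: ['<%= srcDir %>/**/*.js'],
    dest: '<%= buildDir %>/bundle.js',
    options: {
      watch: true  // hand rebuilds over to watchify and its cache
    }
  }
}
```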

    There’s one caveat you have to look out for though. When you’re watching your files in order to run tests (which you still need to do via grunt-contrib-watch because Browserify only takes care of the bundling), make sure you target the resulting (browserified) files and not the source files. Otherwise your changes might not get detected by Grunt watch (on some platforms Watchify seems to “eat” the file system events and they don’t get through to grunt-contrib-watch).

    In other words, do something like this:

    watch: {
      sources: {
        files: [
          '<%= buildDir %>/**/*.coffee',
          '<%= buildDir %>/**/*.js'
        ],
        tasks: ['test']
      }
    }
    where test is an alias for (for example):

    test: [

    You should see a huge improvement in your build times.

    Happy grunting!

    Marco ZeheAccessibility in Google Apps – an overview

    I recently said that I would write a blog series about Google apps accessibility, providing some hints and caveats when it comes to using Google products such as GMail, Docs, and Drive in a web browser.

    However, when I researched this topic further, I realized that the documentation Google provide on each of their products for screen reader users is actually quite comprehensive. So, instead of repeating what they already said, I’m going to provide some high-level tips and tricks, and links to the relevant documentation so you can look the relevant information up yourself if you need to.

    There is really not much difference between Google Drive, GMail and the other consumer-related products and the enterprise-level offerings called Google Apps for Business. All of them are built on the same technology base. The good thing is that there is also no way for administrators to turn accessibility off entirely when they administer the Google Apps for Business setup for their company. And unless they mess with default settings, themes and other low-vision features should also work in both end user and enterprise environments.

    A basic rule: Know your assistive technology!

    This is one thing I notice pretty often when I start explaining certain web-related stuff to people, be they screen reader users or users of other assistive technologies. It is vital for your personal, but even more your professional life, to know your assistive technology! As a screen reader user, just getting around a page by tabbing simply isn’t enough to get around complex web applications efficiently and deal with stuff in a timely fashion. You need to be familiar with concepts such as the difference between a virtual document (or virtual buffer or browse mode document) and the forms or focus mode of your screen reader, especially when on Windows. You need to know at least some quick navigation commands available in most browse/virtual mode scenarios. You should also be familiar with what landmarks are to navigate to certain sections of a page. If you just read this and don’t know what I was talking about, consult your screen reader manual and key stroke reference now! If you are a person who requires training to memorize these things and isn’t good at self-paced learning this, go get help and training for this, especially in professional environments. You will be much more proficient afterwards and provide much better services. And besides, it’ll make you feel better because you will have a feeling of greater accomplishment and less frustrations. I promise!

    Now with that out of the way, let’s move on to some specific accessibility info, shall we?


    GMail

    One of the things you’ll be using most is GMail. If you want to use a desktop or mobile e-mail client because that is easiest for you, you can do so! Talk to your administrator if you’re in a corporate or educational environment, or simply set up your GMail account in your preferred client. Today, most clients won’t even require you to enter an IMAP or SMTP server any more, because they know what servers they need to talk to for GMail. So unless your administrator has turned off IMAP and SMTP access, which they most likely haven’t, you should be able to just use your preferred client of choice. Only if you want to add server-side e-mail filters and change other settings will you then need to enter the web interface.

    If you want to, or have to, use the web interface, don’t bother with the basic HTML view. It is so stripped down in functionality that the experience by today’s standards is less than pleasant. Instead, familiarize yourself with the GMail guide for screen reader users, and also have a look at the shortcuts for GMail. Note that the latter will only work if your screen reader’s browse or virtual mode is turned off. If you’re a screen reader user, experiment with which way works better for you, browse/virtual mode or non-virtual mode.

    Personally, I found the usability of GMail quite improved in recent months compared to earlier times. I am particularly fond of the conversation threading capabilities and the power of the search, which can also be applied to filters.

    Note that in some documentation, it is said that the chat portion of GMail is not accessible. However, I found that this seems to be outdated information, since the above guide very well states that Chat works, and describes some of its features. Best way to find out: Try it!


    Contacts

    Contacts are accessible on the web, too, but again you can use your e-mail client’s capabilities or extensions to sync your contacts as well.


    Calendar

    Google Calendar’s Agenda View can be used with screen readers on the web, but it also allows access from desktop or mobile CalDAV clients. The Google Calendar guide for screen reader users and Keyboard Shortcuts for Google Calendar provide the relevant info.

    Google Docs and Sites

    This is probably the most complicated suite of the Google offerings, but don’t fear, they are accessible and you can actually work with them nowadays. For this to work best, Google recommends using either JAWS or NVDA with Firefox, or IE, or Chrome + ChromeVox. I tested, and while Safari and VoiceOver on OS X also provided some feedback, the experience wasn’t as polished as one would hope. So if you’re on the Mac, using Google Chrome and ChromeVox is probably your best bet.

    Also, all of these apps work best if you do not rely on virtual/browse modes when on Windows. In NVDA, it’s easy to just turn it off by pressing NVDA+Space. For JAWS, the shortcut is JAWSKey+Z. But since this has multiple settings, consult your JAWS manual to make this setting permanent for the Google Drive domain.

    The documentation on Drive is extensive. I suggest starting at this hub and working your way through all the linked documentation top to bottom. It’s a lot at first, but you’ll quickly get around and grasp the concepts, which are pretty consistent throughout.

    Once you’re ready to dive into Docs, Sheets, Slides and the other office suite apps, use the Docs Getting Started document as a springboard to all the others and the in-depth documentation on Docs itself.

    One note: in some places it is said that creating forms is not accessible yet. However, since there is documentation on that, too, those statements are out of date. One of the documents affected, among other things, is the Administrator Guide to Apps Accessibility.

    I found that creating and working in documents and spreadsheets works amazingly well already. There are some problems sometimes with read-only documents which I’ve made the Docs team aware of, but since these are sometimes hard to reproduce, it may need some more time before this works a bit better. I found that, if you get stuck, alt-tabbing out of and back into your browser often clears things up. Sometimes, it might even be enough to just open the menu bar by pressing the Alt key.

    Closing remarks

    Like with any other office productivity suite, Google Docs is a pretty involved product. In a sense, it’s not less feature-rich than a desktop office suite of programs, only that it runs in a web browser. So in order to effectively use Google Apps, it cannot be said enough: Know your browser, and know your assistive technology! Just tabbing around won’t get you very far!

    If you need more information not linked to above, here’s the entry page for all things Google accessibility in any of their apps, devices and services. From here, most of the other pages I mention above can also be found.

    And one more piece of advice: If you know you’ll be switching to Google Apps in the future in your company or government or educational institution, and want to get a head start, get yourself a GMail account if you don’t have one. Once you have that, all of Google Drive, Docs, and others, are available to you as well to play around with. There’s no better way than creating a safe environment and playing around with it! Remember, it’s only a web application, you can’t break any machines by using it! And if you do, you’re up for some great reward from Google! :)


    Christian HeilmannHave we lost our connection with the web? Let’s #webexcite

    I love the web. I love building stuff in it using web standards. I learned the value of standards the hard way: building things when browser choices were IE4 or Netscape 3. The days when connections were slow enough that omitting quotes around attributes made a real difference to end users instead of being just an opportunity to have another controversial discussion thread. The days when you did everything possible – no matter how dirty – to make things look and work right. The days when the basic functionality of a product was the most important part of it – not if it looks shiny on retina or not.

    Let's get excited

    I am not alone. Many out there are card-carrying web developers who love doing what I do. And many have done it for a long, long time. Many of us don a blue beanie hat once a year to show our undying love for the standard work that made our lives much, much easier and predictable and testable in the past and now.

    Enough with the backpatting

    However, it seems we live in a terrible bubble of self-affirmation about just how awesome and ever-winning the web is. We’re lacking proof. We build things to impress one another and seem to forget that what we do should, sooner rather than later, improve the experience of people surfing the web out there.

    In places of perceived affluence (let’s not analyse how much of that is really covered-up recession and living on borrowed money) the web is very much losing mind-share.

    Apps excite people

    People don’t talk about “having been to a web site”; instead they talk about apps and are totally OK if the app is only available on one platform. Even worse, people consider themselves a better class than others when they have iOS over Android which dares to also offer cheaper hardware.

    The web has become mainstream and boring; it is the thing you use, and not where you get your Oooohhhs and Aaaahhhhs.

    Why is that? We live in amazing times:

    • New input types allow for much richer forms
    • Video and audio in HTML5 have matured to a stage where you can embed a video without worrying about showing a broken grey box
    • Canvas allows us to create and manipulate graphics on the fly
    • WebRTC allows for Skype-like functionality straight in the browser
    • With Web Audio we can create and manipulate music in the browser
    • SVG can now be embedded directly in HTML and no longer needs to be its own document, giving us scalable vector graphics (something Flash was damn good at)
    • IndexedDB allows us to store data on the device
    • AppCache, despite all its flaws, allows for basic offline functionality
    • WebGL brings 3D environments to the web (again, let’s not forget VRML)
    • WebComponents hint at finally having a full-fledged widget interface on the web
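    Part of shipping these APIs in real, customer-facing products is checking for them and falling back gracefully. As a minimal sketch (my own, not from the post – the helper and its shape are hypothetical, though the interface names checked are the standard ones):

    ```javascript
    // Check which of the newer web APIs a given global object exposes,
    // so a product can enable enhancements only where they are supported.
    function detectFeatures(global) {
      return {
        webAudio: "AudioContext" in global || "webkitAudioContext" in global,
        webRTC: "RTCPeerConnection" in global,
        indexedDB: "indexedDB" in global,
        webGL: "WebGLRenderingContext" in global,
        canvas: "HTMLCanvasElement" in global,
      };
    }

    // In a browser you would call detectFeatures(window); an empty object
    // reports everything as unsupported.
    console.log(detectFeatures({}));
    ```

    The point is the pattern, not the list: detect, then enhance – rather than assuming the shiny new API is there and showing users a broken grey box.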

    Shown, but never told

    The worry I have is that most of these technologies never really get applied in commercial, customer-facing products. Instead we build a lot of “technology demos” and “showcases” to inspire ourselves and prove that there is a “soon to come” future where all of this is mainstream.

    This becomes even more frustrating when the showcases vanish or never get upgraded. Much of the stuff I showed people just two years ago only worked in WebKit and could easily be upgraded to work across all browsers, but we’re already bored with it and have moved on to the next demo that shows the amazing soon-to-be-real future.

    I’m done with impressing other developers; I want the tech we put in browsers to be used by people out there. If we can’t do that, I think we have failed as passionate web developers. I think we have lost the connection to those we should serve. We don’t even experience the same web they do. We have fast Macs with lots of RAM and Adblock enabled. We get excited about parallax web sites that suck a phone’s battery empty in 5 seconds. We happily look at a loading bar for a minute to get an amazing WebGL demo. Real people don’t do any of that. Let’s not kid ourselves.

    Exciting, real products

    I remember that at the beginning of the standards movement we had showcase sites that highlighted real, commercial, user-facing web sites and praised them for using standards. The first CSS-layout-driven sites, sites using clever roll-over techniques for zooming into product images, sites with very clean and semantic markup – that sort of thing. #HTML on IRCnet had a “site of the day”, there was a “sightings” site explaining an amazing web site each week, and “snyggt” in Sweden showcased sites with tricky scripts and layout solutions.

    I think it may be time to revisit this idea. Instead of impressing one another with CodePens, Dribbble shots and other in-crowd demos, let’s tell one another about great commercial products, aimed not at web developers, that use up-to-date technology in a very useful and beautiful way.

    That way we have an arsenal of beautiful and real things to show to people when they are confused why we like the web so much. The plan is simple:

    • If you find a beautiful example of modern tech used in the wild, tweet or post about it using the #webexcite hash tag
    • We can also set up a repository somewhere on GitHub once we have a collection going

    Gervase Markham: Spending Our Money Twice

    Mozilla Corporation is considering moving its email and calendaring infrastructure from an in-house solution to an outsourced one, seemingly primarily for cost but also for other reasons such as some long-standing bugs and issues. The in-house solution is corporate-backed open source, the outsourced solution under consideration is closed source. (The identities of the two vendors concerned are well-known, but are not relevant to appreciate the point I am about to make.) MoCo IT estimates the outsourced solution as one third of the price of doing it in-house, for equivalent capabilities and reliability.

    I was pondering this, and the concept of value for money. Clearly, it makes sense that we avoid spending multiple hundreds of thousands of dollars that we don’t need to. That prospect makes the switch very attractive. Money we don’t spend on this can be used to further our mission. However, we also need to consider how the money we do spend on this furthers our mission.

    Here’s what I mean: I understand that we don’t want to self-host. IT has enough to do. I also understand that it may be that no-one is offering to host an open source solution that meets our feature requirements. And the “Mozilla using proprietary software or web services” ship hasn’t just sailed, it’s made it to New York and is halfway back, holding an evening cocktail party on the poop deck. However, when we do buy in proprietary software or services, I assert we should nevertheless aim to give our business to companies which are otherwise aligned with our values. That means whole-hearted support for open protocols and data formats, and for the open web. For example, it would be odd to be buying in services from a company who had refused to, or dragged their feet about, making their web sites work on Firefox for Android or Firefox OS.

    If we deploy our money in this way, then we get to “spend it twice” – it gets us the service we are paying for, and it supports companies who will spend it again to bring about (part of) the vision of the world we want to see. So I think that a values alignment between our vendors and us (even if their product is not open source) is something we should consider strongly when outsourcing any service. It may give us better value for money even if it’s a little more expensive.

    Byron Jones: happy bmo push day!

    the following changes have been pushed to bugzilla.mozilla.org:

    • [1033258] bzexport fails to leave a comment when attaching a file using the BzAPI compatibility layer
    • [1003244] Creation of csv file attachement on new Mozilla Reps Swag Request
    • [1033955] pre-load all related bugs during show_bug initialisation
    • [1034678] Use of uninitialized value $_[0] in pattern match (m//) at Bugzilla/Util.pm line 74. The new value for request reminding interval is invalid: must be numeric.
    • [1033445] Certain webservice methods such as Bug.get and Bug.attachments should not use shadow db if user is logged in
    • [990980] create an extension for server-side filtering of bugmail

    server-side bugmail filtering

    accessible via the “Bugmail Filtering” user preference tab, this feature provides fine-grained control over what changes to bugs will result in an email notification.

    for example, to never receive changes made to the “QA Whiteboard” field for bugs where you are not the assignee, add the following filter:

    Field: QA Whiteboard
    Product: __Any__
    Component: __Any__
    Relationship: Not Assignee

    discuss these changes on mozilla.tools.bmo.

    Filed under: bmo, mozilla

    Erik Vold: JPM Beta

    JPM is ready for beta testers!

    If you don’t already know, jpm is a new Node.js-based command line tool to replace the older Python-based cfx tool. It also uses npm for third-party modules instead of our legacy ways of using third-party modules.

    To get started, do the following:

    1. Run npm install jpm
    2. Run jpm init to create a new add-on
    3. Add npm dependencies, and run npm install to install them
    4. Create a cool add-on!
    5. Run the add-on with this command jpm run -b /path/to/Nightly.app
    6. Write tests
    7. Run tests with this command jpm test -b /path/to/Nightly.app

    At the moment you’ll have to use the -b flag and point to your installation of Firefox Nightly.

    You can find more information on JPM here.

    I’ve made a simple example add-on at erikvold/markdown-viewer which is using an npm module for the markdown rendering.

    The project is on GitHub at mozilla/jpm, and at least for now we are managing issues on GitHub as well at https://github.com/mozilla/jpm/issues, so if you have any questions or notice issues please report them there.

    Nick Fitzgerald: Come Work With Me On Firefox Developer Tools

    My team at Mozilla, the half of the larger devtools team that works on JavaScript and performance tools, is looking to hire another software engineer.

    We have members of the devtools team in our San Francisco, London, Vancouver, Paris, Toronto, and Portland offices, but many also work remotely.

    We are responsible for writing the full stack of the tools we create, from the C++ platform APIs exposing SpiderMonkey and Gecko internals, to the JavaScript/CSS/HTML based frontend that you see when you open the Firefox Developer Tools.

    Some of the things we're working on:

    • A JavaScript debugger

    • A performance tool that incorporates a sampling CPU-profiler, platform events tracing, and information from SpiderMonkey's Just-In-Time compiler

    • An allocations and heap usage profiler (this is what I'm working on, and I wrote about it here)

    • A WebGL shader live-editor, and a canvas debugger to step through individual draw calls

    One of the most important things for me is that every line of code we write at Mozilla is Free and Open Source from day one, and we're dedicated to keeping the web open.

    Apply here!

    Christie Koehler: Driving Project-wide Community Growth by Improving the Mozilla Wiki

    At the Mozilla project there are many ways to contribute. Some contributions are directly to our products: Firefox Desktop, Firefox for Android, Firefox OS, Webmaker, etc. Some contributions are to things that make those products better: QA, localization, release engineering, etc. Some contributions are to tools that help us work together better, such as: Pontoon, Bugzilla, Mozillians and the Mozilla Wiki.

    I’ve long had a personal interest in the Mozilla Wiki. When I started as a paid contributor in 2011, it was my main source of information about the many, many Mozilla projects.

    And I’m not alone in this. Contributor Sujith Reddy says:

    The wiki page of Mozilla has got info about every project running around. For instance, being a Rep, I get questioned by many people on mails, What exactly is the ReMo program. I would reply’em with a single link: https://wiki.mozilla.org/ReMo Basically, it makes my work easier to explain people. It is Mozilla-Encyclopedia :)

    And contributor Mark A. Hershberger says:

    Wikis provide the best way for a community with many members to collaborate to disseminate knowledge about their shared interest…The wiki provides one of the easiest ways to start contributing to the shared work and become a contributing member of the Mozilla community.

    And it’s not just volunteer contributors who find the wiki essential. Here’s Benjamin Sternthal from Web Production:

    The Mozilla Wiki is an essential part of how Web Productions manages projects and involves community. The Wiki is particularly valuable for our project hubs, the central place where anyone can view information about a project without having to hunt around in various systems.

    History of the Mozilla Wiki

    The Mozilla Wiki has been around for a long time. According to WikiApiary it was founded in November of 2004, making it nearly 10 years old! It has over 90,000 pages, all of which are public, and roughly 600 daily users.

    During most of its existence the Wiki has been maintained by the community without organized effort. Mozilla IT has supported it on Mozilla’s corporate infrastructure, and various community members, paid and volunteer, have worked to keep it as up-to-date and functional as possible.

    This approach worked fairly well for a long time. But during the last couple of years, as our community has experienced incredible growth, this ad-hoc approach stopped serving us well. The wiki has become harder and harder to use when it should become easier and easier to use.

    Formation of the Wiki Working Group

    And that’s why a group of us came together in March 2014 and formed the Wiki Working Group. It’s been a few months and the group is going very well. We meet twice a month as a full group, and in smaller groups as needed to work through specific issues. There are 25 people on our mailing list and meeting attendance averages 8–12, with a mix of paid and volunteer contributors in about a 1:1 ratio. Of the paid contributors, I am the only one with time dedicated to working on the Wiki.

    In a short amount of time we’ve made some significant accomplishments, including:

    • triaged all open bugs (>100, some open several years without updates)
    • created a formal governance structure by creating a submodule for the Wiki within Websites
    • reduced the clutter and improved usability on the wiki by eliminating new spam (spam accounts and pages previously numbered in the several hundreds per day on average)
    • improved usability of the wiki by fixing a few critical but long-standing bugs, including an issue with table sorting
    • created an About page for the Wiki that clarifies its scope and role in the project, including what is appropriate content and how to report issues

    One of the long-standing bugs was to re-enable the WikiEditor which greatly improves usability by giving users an easy-to-use toolbar to allow page authoring without having to know wiki markup.

    Chris More from Web Productions gave us this feedback on these recent changes:

    With the re-introduction of the visual wikieditor, it has allowed non-technical people to be able to maintain their project’s wiki page without having to learn the common wiki markup language. This has been invaluable with getting the new process adopted across the Engagement team.

    We’ve also worked hard to create a clear vision for the purpose of the Wiki Working Group. Early on we reached consensus that it is not our role to be the only ones contributing to the wiki. Rather, it is our role to enable everyone across the project to feel empowered to participate and collaborate to make the Mozilla Wiki an enjoyable and lively place to document and communicate about our work.

    Where we’re going in 2014

    With that in mind, we’re working towards the following milestones for this year:

    • increasing usability and stability by upgrading to the current version of MediaWiki
    • updating the default skin (theme) to be more usable and mobile-friendly
    • improving the information architecture of the site so content is easier to find and maintain
    • engaging contributors to learn to use the wiki and help us improve it by running a series of “wiki missions”
    • creating compelling visual dashboards that will help us better understand and recognize wiki activity

    We expect these changes to increase participation on the wiki itself considerably, and to increase community activity in other areas of the project by making it easier to document and discover contribution pathways. In this way, the WWG serves all teams at Mozilla in their community building efforts.

    Chris More from Web Production again:

    The use of the wiki has recently been amplified by the introduction of the Integrated Marketing process. The new process is essentially program management best practices to ensure what Engagement is working on is relevant, organized, and transparent. The wiki has been used to document, share, and to be the hub for both the process and every major project Engagement is working on. Without the wiki, Engagement would have no central public location to share our plans with the world and to understand how to get involved.

    So, while our group is small, we are highly engaged. As we continue our work, we’ll enable many, many more people to become contributors and to continue contributing across the project.

    How to Get Involved

    If you’re interested in joining or following the Wiki Working Group, take a look at the How to Participate section on our wiki page for links to our mailing list and meeting schedule.

    If you have general feedback about the Mozilla Wiki, or things you’d like to see improved there, leave comments on this Sandbox page.

    Jennie Rose Halperin: about:Mozilla: more than just a newsletter

    “The about:Mozilla newsletter reaches 70,000 people?” I asked Larissa Shapiro incredulously in March when she suggested that our team assist in reviving the dormant newsletter. Indeed, with about:Mozilla, we have the opportunity to reach the inboxes of 70,000 potential contributors, all of whom have already expressed interest in learning more about our work. Though the newsletter is several years old, the revamp focuses on contribution and community. Its renewal has been a boon for our team and helped us continue working both cross-functionally and with our contributor base.

    Spreading the Mozilla mission by connecting at scale is one of next quarter’s goals, and the about:Mozilla newsletter is a unique and dynamic way for us to do so. The about:Mozilla newsletter brings us back to our roots: We are seeking out the best in contribution activities and delighting a large community of motivated, excited people who love our products, projects and mission. As our Recognition Working Group asserts: “People contribute to Mozilla because they believe in our message.” The newsletter brings that message to new contributors and reminds casual contributors what they can do for Mozilla.

    Reinvigorating the newsletter was a high priority for the Community Building team in Q2, and its success and consistency speaks to the continued collaboration between Community Building and Engagement to create a fantastic, contributor-led newsletter. We’ve released four newsletters since May, and found that with each issue we continue to find our voice, empower new contributions, and seek out relevant, highly engaged channels for new contributors to get involved at scale. The newsletter team, which consists of myself, Jan Bambach, Brian King, Jessilyn Davis, and Larissa Shapiro, seeks to provide readers with the best opportunities to volunteer across Mozilla.

    The easy, digestible, and fun opportunities in the newsletter have been identified by a variety of teams, and every week we present more chances to connect. We’ve given contributors the tools to contribute in a variety of functional areas, from Maker Party to Security to Marketplace to Coding. We have yet to be sure of our return on investment: the newsletter is new and our tracking system is still limited in terms of how we identify new contributions across the organization, but we are excited to see this continue to scale in Q3. We hope to become a staple in the inboxes of contributors and potential contributors around the world.

    Our open rates are stable and at the industry average, with approximately 25% of subscribers opening the newsletter, and our bounce rate is very low. We are working together to improve the quality and click rate of our community news and updates, as well as featuring a diverse set of Mozilla contributors from a variety of different contribution areas. Though our current click rate is 3%, we’re fighting for at least 6%, and the numbers have been getting incrementally better.
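    To put those percentages in absolute terms, here is a quick back-of-the-envelope sketch (my own arithmetic, assuming both rates are measured against the full 70,000-subscriber list):

    ```javascript
    // Rough reach estimates from the figures in the post: 70,000 subscribers,
    // a ~25% open rate, a 3% click rate today, and a 6% target.
    const subscribers = 70000;

    const opens = Math.round(subscribers * 0.25);        // readers per issue
    const clicks = Math.round(subscribers * 0.03);       // clicks per issue today
    const targetClicks = Math.round(subscribers * 0.06); // clicks at the 6% goal

    console.log(opens, clicks, targetClicks); // 17500 2100 4200
    ```

    Doubling the click rate, in other words, would mean roughly 2,100 more people per issue following a contribution opportunity.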

    Identifying bite-sized contribution activities across the organization continues to be a struggle from week to week. We keep our ears open for new opportunities, but would like more teams to submit through our channels in order to identify diverse opportunities. Though we put out a call for submissions at the bi-monthly Grow meeting, we find it difficult to track down teams with opportunities to engage new Mozillians. Submissions remain low despite repeated reminders and outreach.

    My favorite part of the newsletter is definitely our “Featured Contributor” section. We’ve featured people from four countries (the United States, China, India, and the Philippines) and told their varied and inspirational stories. People are excited to be featured in the newsletter, and we are already getting thank-you emails and reposts about this initiative. Thank you also to all the contributors who have volunteered to be interviewed!

    I’d like to encourage all Mozillians to help us connect at scale. Here is what I would like to see in the next quarter:

    • I’d like to see our click rate increase to 8%. I’ve been reading a lot about online newsletters, and we have email experts like Jessilyn Davis on our team, so I think that this can be done.

    • The name about:Mozilla is no longer descriptive, and we would like to discuss a name change to about:Community by the end of the year.

    • I will set up a system for teams to provide feedback on whether or not the newsletter brought in new contributors. Certain teams have done this well: the MoFo Net Neutrality petition from last week contained analytics that tracked if the signature came from the newsletter. (Security-minded folks: I can say honestly that it tracked nothing else!)

    • I would like to see the newsletter and other forms of Engagement become a pathway for new contributors. This newsletter cannot happen without the incredible work of Jan Bambach, a motivated and long-time volunteer from Germany, but I’d love to see others getting involved too. We have a link at the bottom of the page that encourages people to Get Involved, but I think we can do more. The newsletter provides a pathway that can help contributors practice writing for the web, learn about news and marketing cycles, and learn to code in HTML. A few more hands would provide a variety of voices.

    • I will continue to reach out to a variety of teams in new and creative ways to encourage diverse submissions and opportunities. The form seems to be underutilized, and there are definitely other ways to do outreach to teams across the organization.
    • Eventually, I’d love to see the newsletter translated into other languages besides English!

    While the newsletter is only a part of what we do, it has become a symbol for me of how a small group of motivated people can reboot a project to provide consistent quality to an increasingly large supporter base. The about:Mozilla newsletter is not only a success for the Community Building Team, it’s a success for the whole organization because it helps us get the word out about our wonderful work.

    Michael Verdi: Recent work on Bookmarks and Firefox Reset

    I’ve been working on a number of things over the last couple of months and I wanted to share two of them. First, bookmarks. Making this bookmarks video for Firefox 29 reminded me of a long-standing issue that’s bothered me. By default, new bookmarks are hidden away in the unsorted bookmarks folder. So without any instruction, they’re pretty hard to find. Now that we have this fun animation that shows you where your new bookmark went, I thought it would be good if you could actually see that bookmark when clicking on the bookmarks menu button. After thinking about a number of approaches we decided to move the list of recent bookmarks from a sub-menu and expose them directly in the main bookmarks menu.


    With the design done, this is currently waiting to be implemented.

    Another project that I’ve been focusing on is Firefox Reset. The one big unimplemented piece of this work, which began about three years ago, is making this feature discoverable when people need it. The main place we’d like to surface this option is when people try to reinstall the same version of Firefox that they are currently running. We often see people try to do this, expecting that it will fix various problems. The issue is that reinstalling doesn’t fix much at all. What most people are expecting to happen actually happens when you reset Firefox. So here we’d like to take two approaches. If the download page knows that you have the same version of Firefox that you are trying to download, it should offer you a reset button instead of the regular download button.


    The other approach is to have Firefox detect if it’s just had the same version installed and offer you the opportunity to reset Firefox.


    The nice thing about these approaches is that they work together. If you determine that something is wrong with Firefox and you want to fix it by reinstalling, you’ll see a reset button on the download page. If you use that, the reset process takes just a few seconds and you can be on your way. If you want to download and install a new copy you can, and you’ll have another opportunity to reset after Firefox has launched and you’ve verified whether the issue has been fixed. This presentation explains in more detail how these processes might work. This work isn’t final and there are a few dependencies to work out, but I’m hopeful these pieces can be completed soon.

    Mike Conley: DocShell in a Nutshell – Part 1: Original Intents

    I think in order to truly understand what the DocShell currently is, we have to find out where the idea of creating it came from. That means going way, way back to its inception, and figuring out what its original purpose was.

    So I’ve gone back, peered through various archived wiki pages, newsgroup and mailing list posts, and I think I’ve figured out that original purpose.1

    The original purpose can be, I believe, summed up in a single word: embedding.


    Back in the late ’90s, sometime after the Mozilla codebase was open-sourced, it became clear to some folks that the web was “going places”. It was the “bee’s knees”. It was the “cat’s pajamas”. As such, it was likely that more and more desktop applications were going to need to be able to access and render web content.

    The thing is, accessing and rendering web content is hard. Really hard. One does not simply write web browsing capabilities into their application from scratch hoping for the best. Heartbreak is in that direction.

    Instead, the idea was that pre-existing web engines could be embedded into other applications. For example, Steam, Valve’s game distribution platform, displays a ton of web content in its user interface. All of those Steam store pages? Those are web pages! They’re using an embedded web engine in order to display that stuff.2

    So making Gecko easily embeddable was, at the time, a real goal, and a real project.


    The problem was that embedding Gecko was painful. The top-level component that embedders needed to instantiate and communicate with was called “nsWebShell”, and it was pretty unwieldy. Lots of knowledge about the internal workings of Gecko leaked through the nsWebShell component, and its interface changed far too often.

    It was also inefficient – nsWebShell didn’t just represent the top-level “thing that loads web content”. Instances of nsWebShell were also used recursively for subdocuments within those documents – for example, (i)frames within a webpage. These nested nsWebShells formed a tree. That’s all well and good, except for the fact that there were things an nsWebShell loaded or did that only the top-level nsWebShell really needed to load or do. So there was definitely room for some performance improvement.

    In order to correct all of these issues, a plan was concocted to retire nsWebShell in favour of several new components and a slew of new interfaces. Two of those new components were nsDocShell and nsWebBrowser.


    nsWebBrowser would be the thing that embedders would drop into their applications – it would be the browser, and would do all of the loading / doing of things that only the top-level web browser needed to do.

    The interface for nsWebBrowser would be minimal, just exposing enough so that an embedder could drop one into their application with little fuss, point it at a URL, set up some listeners, and watch it dance.


    nsDocShell would be… well, everything else that nsWebBrowser wasn’t. So that dumping ground that was nsWebShell would get dumped into nsDocShell instead. However, a number of new, logically separated interfaces would be created for nsDocShell.

    Examples of those interfaces were:

    • nsIDocShell
    • nsIDocShellTreeItem
    • nsIDocShellTreeNode
    • nsIWebNavigation
    • nsIWebProgress
    • nsIBaseWindow
    • nsIScrollable
    • nsITextScroll
    • nsIContentViewerContainer
    • nsIInterfaceRequestor
    • nsIScriptGlobalObjectOwner
    • nsIRefreshURI

    So instead of a gigantic, ever-changing interface, you had lots of smaller interfaces, many of which could eventually be frozen over time (which is good for embedders).

    These interfaces also made it possible to shield embedders from various internals of the nsDocShell component that embedders shouldn’t have to worry about.

    Ok, but… what was it?

    But I still haven’t answered the question – what was the DocShell at this point? What was it supposed to do now that it had been created?

    This ancient wiki page spells it out nicely:

    This class is responsible for initiating the loading and viewing of a document.

    This document also does a good job of describing what a DocShell is and does.

    Basically, any time a document is to be viewed, a DocShell needs to be created to view it. We create the DocShell, and then we point that DocShell at the URL, and it does the job of kicking off communications via the network layer, and dealing with the content once it comes back.

    So it’s no wonder that it was (and still is!) a dumping ground – when it comes to loading and displaying content, nsDocShell is the central nexus point of communications for all components that make that stuff happen.

    I believe that was the original purpose of nsDocShell, anyhow.

    And why “shell”?

    This is a simple question that has been on my mind since I started this. What does the “shell” mean in nsDocShell?

    Y’know, I think it’s actually a fragment left over from the embedding work, and that it really has no meaning anymore. Originally, nsWebShell was the separation point between an embedder and the Gecko web engine – so I think I can understand what “shell” means in that context – it’s the touch-point between the embedder, and the embedee.

    I think nsDocShell was given the “shell” moniker because it took over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.

    In some ways, “shell” might make some sense because it is the separation between various documents (the root document, any sibling documents, and child documents)… but I think that’s a bit of a stretch.

    But now I’m racking my brain for a better name (even though a rename is certainly not worth it at this point), and I can’t think of one.

    What would you rename it, if you had the chance?

    What is nsDocShell doing now?

    I’m not sure what’s happened to nsDocShell over the years, and that’s the point of the next few posts in this series. I’m going to be going through the commits hitting nsDocShell from 1999 until the present day to see how nsDocShell has changed and evolved.

    Hold on to your butts.

    Further reading

    The above was gleaned from the following sources:

    1. I’m very much prepared to be wrong about any / all of this. I’m making assertions and drawing conclusions by reading and interpreting things that other people have written about DocShell – and if the telephone game is any indication, this indirect analysis can be lossy. If I have misinterpreted, misunderstood, or completely missed the point in any of the above, please don’t hesitate to comment, and I will correct it forthwith. 

    2. They happen to be using WebKit, the same web engine that powers Safari, and (until recently) Chromium. According to this, they’re using the Chromium Embedding Framework to display this web content. There are a number of applications that embed Gecko. Firefox is the primary consumer of Gecko. Thunderbird is another obvious one – when you display HTML email, it’s using the Gecko web engine in order to lay it out and display it. WINE uses Gecko to allow Windows-binaries to browse the web. Making your web engine embeddable, however, has a development cost, and over the years, making Gecko embeddable seems to have become less of a priority. Servo is a next-generation web browser engine from Mozilla Research that aims to be embeddable. 

    Mark SurmanMoFo Update (and Board Slides)

    A big priority for Mozilla in 2014 is growing our community: getting more people engaged in everything from bringing the web to mobile to teaching web literacy to millions of people around the world. At our June Mozilla Foundation board meeting, I provided an update on the MoFo team’s contribution to this effort during Q2 and on our plans for the next quarter. Here is a brief screencast that summarizes the material from that meeting.

    In addition to the screencast, I have posted the full board deck (40 slides) here. Much of the deck focuses on our progress towards the goal of 10k Webmaker contributors in 2014. If you want a quick overview of that piece of what we’re working on, here are some notes I wrote up to explain the Webmaker slides:

    • Our overall annual goal: grow Webmaker community to include 10k active contributors teaching web literacy by end of 2014.
    • The main focus of Q2 was to respin Webmaker as a platform for people who want to teach web literacy with Mozilla. Main things we achieved:
    • Over 250 partners secured and 100s of events created in advance for 2014 Maker Party (Q2 goal: Pre-launch Maker Party 2014 for partners and contributors)
    • A new version of Webmaker.org released in June oriented towards the needs of instructors who want to contribute to Mozilla (Q2 goal: re-launch webmaker.org with new UX)
    • Related result: over 3000 people signed up to teach w/ Mozilla this summer as part of Maker Party.
    • Developed and released comprehensive web literacy curriculum — as well as a new platform on Webmaker.org for people to publish and remix curriculum themselves (Q2 goal: Release web literacy ‘textbook’ at webmaker.org/explore)
    • In addition, we spun up a new joint MoCo / MoFo program in Q2 called the Mobile Opportunity Initiative.
    • This initiative will focus on local app and content creation in markets where FirefoxOS is launching, and will include development of easy app authoring tools as well as Webmaker training to go along with this. (Q2 goal: pilot Webmaker Mobile + local content program (includes FFos))
    • For Q3: the main goals are to a) run a successful Maker Party and b) grow the number of people we have contributing to Mozilla by teaching web literacy. Specific goal: Maker Party reach and impact builds on 2013. KPIs: 2400 events + 6500 contributors
    • We also want to use Q3 to grow Maker Party from a yearly campaign into a year-round program — or network of ‘clubs’ — for people teaching with Mozilla. Specific goal: add new features to support a year-round ‘teach the web’ program.
    • Finally, Q3 will include putting meat on the bones of the Mobile Opportunity Initiative, including prototypes of what Appmaker could offer to users. Specific goal: Announce digital inclusion initiative w/ partners from mobile industry. KPIs: 3 carrier partners and 3 philanthropic partners aligned *and* three ‘appmaker’ user value concepts tested in the field w/ at least 300 content creators

    The slides also talk about our joint efforts with MoCo to grow the number of Mozilla contributors overall to 20,000 people in 2014. In addition to Webmaker, Mozilla’s Open News, Science Lab, Open Internet Policy and MozFest initiatives are all a part of growing our contributor community. There is also a financial summary. We are currently $12M towards our $17M revenue goal for the year.

    For background and context, see Mozilla’s overall 2014 goals here and the quarterly goal tracker here. If you have questions or comments on any of this, please reach out to me directly or leave comments below.

    Filed under: mozilla, statusupdate, webmakers

    William LachanceMeasuring frames per second and animation smoothness with Eideticker

    [ For more information on the Eideticker software I'm referring to, see this entry ]

    Just wanted to write up a few notes on using Eideticker to measure animation smoothness, since this is a topic that comes up pretty often and I wind up explaining these things repeatedly. ;)

    When rendering web content, we want the screen to update something like 60 times per second (typical refresh rate of an LCD screen) when an animation or other change is occurring. When this isn’t happening, there is often a user perception of jank (a.k.a. things not working as they should). Generally we express how well we measure up to this ideal by counting the number of “frames per second” that we’re producing. If you’re reading this, you’re probably already familiar with the concept in outline. If you want to know more, you can check out the wikipedia article which goes into more detail.

    At an internal level, this concept matches up conceptually with what Gecko is doing. The graphics pipeline produces frames inside graphics memory, which is then sent to the LCD display (whether it be connected to a laptop or a mobile phone) to be viewed. By instrumenting the code, we can see how often this is happening, and whether it is occurring at the right frequency to reach 60 fps. My understanding is that we have at least some code which does exactly this, though I’m not 100% up to date on how accurate it is.

    But even assuming the best internal system monitoring, Eideticker might still be useful because:

    • It is more “objective”. This is valuable not only for our internal purposes to validate other automation (sometimes internal instrumentation can be off due to a bug or whatever), but also to “prove” to partners that our software has the performance characteristics that we claim.
    • The visual artifacts it leaves behind can be valuable for inspection and debugging. i.e. you can correlate videos with profiling information.

    Unfortunately, deriving this sort of information from a video capture is more complicated than you’d expect.

    What does frames per second even mean?

    Given a set of N frames captured from the device, the immediate solution when it comes to “frames per second” is to just compare frames against each other (e.g. by comparing the value of individual pixels) and then count the ones that are different as “unique frames”. Divide the total number of unique frames by the length of the capture and… voila? Frames per second? Not quite.
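    The naive calculation can be sketched like this (an illustrative helper, not Eideticker’s actual code; frames can be any comparable pixel buffers):

```python
def naive_fps(frames, capture_seconds):
    # Count a frame as "unique" when it differs from its predecessor,
    # then divide by the capture length. Frames can be any comparable
    # pixel buffers (e.g. bytes, or nested lists of pixel values).
    if not frames:
        return 0.0
    unique = 1  # the first frame always counts
    for prev, cur in zip(frames, frames[1:]):
        if cur != prev:
            unique += 1
    return unique / capture_seconds
```

    As described next, this number is misleading whenever the screen is legitimately static.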

    First off, there’s the inherent problem that sometimes the expected behaviour of a test is for the screen to be unchanging for a period of time. For example, at the very beginning of a capture (when we are waiting for the input event to be acknowledged) and at the end (when we are waiting for things to settle). Second, it’s also easy to imagine the display remaining static for a period of time in the middle of a capture (say in between gestures in a multi-part capture). In these cases, there will likely be no observable change on the screen and thus the number of frames counted will be artificially low, skewing the frames per second number down.

    Measurement problems

    Ok, so you might not consider that class of problem that big a deal. Maybe we could just not consider the frames at the beginning or end of the capture. And for pauses in the middle… as long as we get an absolute number at the end, we’re fine right? That’s at least enough to let us know that we’re getting better or worse, assuming that whatever we’re testing is behaving the same way between runs and we’re just trying to measure how many frames hit the screen.

    I might agree with you there, but there are further problems specific to measuring on-screen performance with a high-speed camera, as we are currently doing with FirefoxOS.

    An LCD updates gradually, and not all at once. Remnants of previous frames will remain on screen long past their interval. Take for example these five frames (sampled at 120fps) from a capture of a pan down in the FirefoxOS Contacts application (movie):


    Note how, if you look closely, these 5 frames are actually the intersection of *three* separate frames. One with “Adam Card” at the top, another with “Barbara Bloomquist” at the top, then another with “Barbara Bloomquist” even further up. Between each frame, artifacts of the previous one are clearly visible.

    Plausible sounding solutions:

    • Try to resolve the original images by distinguishing “new” content from ghosting artifacts. Sounds possible, but probably hard? I’ve tried a number of simplistic techniques (i.e. trying to find times when change is “peaking”), but nothing has really worked out very well.
    • Somehow reverse-engineer the interface between the graphics chipset and the LCD panel, and write some kind of custom hardware to “capture” the framebuffer as it is being sent from one to the other. Also sounds difficult.
    • Just forget about this problem altogether and only measure periods in the capture where the image has stayed static for a sustained stretch (say 4-5 frames and up), where we’re pretty sure it’s jank.

    Personally, the last solution appeals to me the most, although it has the obvious disadvantage of being a “homebrew” metric that no one has ever heard of before, which might make it difficult to use to prove that performance is adequate — the numbers come with a long-winded explanation instead of being something that people immediately understand.
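    That third option can be sketched as follows, assuming we already have per-frame difference values (a hypothetical helper with illustrative thresholds, not Eideticker’s actual code):

```python
def static_runs(frame_diffs, min_len=5):
    # Find runs of consecutive identical frames (difference == 0) at
    # least `min_len` frames long -- the "sustained static period"
    # heuristic. frame_diffs[i] is the pixel difference between frame i
    # and frame i+1. Returns (start_index, run_length) pairs.
    runs, start = [], None
    for i, d in enumerate(frame_diffs):
        if d == 0:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i - start))
            start = None
    if start is not None and len(frame_diffs) - start >= min_len:
        runs.append((start, len(frame_diffs) - start))
    return runs
```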

    Blake WintonFiguring out where things are in an image.

    People love heatmaps.

    They’re a great way to show how much various UI elements are used in relation to each other, and are much easier to read at a glance than a table of click-counts would be. They can also reveal hidden patterns of usage based on the locations of elements, let us know if we’re focusing our efforts on the correct elements, and tell us how effective our communication about new features is. Because they’re so useful, one of the things I am doing in my new role is setting up the framework to provide our UX team with automatically updating heatmaps for both Desktop and Android Firefox.

    Unfortunately, we can’t just wave our wands and have a heatmap magically appear. Creating them takes work, and one of the most tedious processes is figuring out where each element starts and stops. Even worse, we need to repeat the process for each platform we’re planning on displaying. This is one of the primary reasons we haven’t run a heatmap study since 2012.

    In order to not spend all my time generating the heatmaps, I had to reduce the effort involved in producing these visualizations.

    Being a programmer, my first inclination was to write a program to calculate them, and that sort of worked for the first version of the heatmap, but there were some difficulties. To collect locations for all the elements, we had to display all the elements.

    Firefox in the process of being customized

    Customize mode (as shown above) was an obvious choice since it shows everything you could click on almost by definition, but it led people to think that we weren’t showing which elements were being clicked the most, but instead which elements people customized the most. So that was out.

    Next we tried putting everything in the toolbar, or the menu, but those were a little too cluttered even without leaving room for labels, and too wide (or too tall, in the case of the menu).

    A shockingly busy toolbar

    Similarly, I couldn’t fit everything into the menu panel either. The only solution was to resort to some Photoshop-trickery to fit all the buttons in, but that ended up breaking the script I was using to locate the various elements in the UI.

    A surprisingly tall menu panel

    Since I couldn’t automatically figure out where everything was, I figured we might as well use a nicely-laid out, partially generated image, and calculate the positions (mostly-)manually.

    The current version of the heatmap (Note: This is not the real data.)

    I had foreseen the need for different positions for the widgets when the project started, and so I put the widget locations in their own file from the start. This meant that I could update them without changing the code, which made it a little nicer to see what’s changed between versions, but still required me to reload the whole page every time I changed a position or size, which would just have taken way too long. I needed something that could give me much more immediate feedback.
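    As an illustration of the idea, a separate positions file plus a small lookup might look like this (names and coordinates are hypothetical, not the real data):

```python
# Widget bounding boxes kept in their own data file, separate from the
# heatmap-rendering code. All values here are made up for illustration.
WIDGET_POSITIONS = {
    "back-button": {"x": 0, "y": 0, "width": 32, "height": 32},
    "urlbar": {"x": 64, "y": 0, "width": 400, "height": 32},
    "menu-button": {"x": 470, "y": 0, "width": 32, "height": 32},
}

def widget_at(x, y):
    # Map a click coordinate back to the widget it landed on, if any.
    for name, r in WIDGET_POSITIONS.items():
        if r["x"] <= x < r["x"] + r["width"] and r["y"] <= y < r["y"] + r["height"]:
            return name
    return None
```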

    Fortunately, I had recently finished watching a series of videos from Ian Johnson (@enjalot on twitter) where he used a tool he made called Tributary to do some rapid prototyping of data visualization code. It seemed like a good fit for the quick moving around of elements I was trying to do, and so I copied a bunch of the code and data in, and got to work moving things around.

    I did encounter a few problems: Tributary wasn’t functional in Firefox Nightly (but I could use Chrome as a workaround), and occasionally trying to move the cursor would change a value slider instead. Even with these glitches it only took me an hour or two to get from set-up to having all the numbers for the final result! And the best part is that since it’s all open source, you can take a look at the final result, or fork it yourself!

    Christian HeilmannYahoo login issue on mobile – don’t fix the line length of your emails

    Yesterday I got a link to an image on Flickr in a tweet. Splendid. I love Flickr. It has played a massive role in the mashup web; I love the people who work there, and it used to be a superb place to store and share photos without pestering people to sign up for something. Flickr has also been best-of-breed when it comes to “hackable” URLs: I could get different sizes of images and different parts of people’s pages simply by modifying the URL in a meaningful way. All in all a kick-ass product that I loved, adored, contributed to and gave to people as a present.

    Until I started using a mobile device.

    Well, I tapped on the link and got redirected to Chrome on my Nexus 5. Instead of seeing an image as I expected I got a message that I should please download the epic Flickr app. No thanks, I just want to see this picture, thank you very much. I refused to download the app and went to the “web version” instead.

    This one redirected me to the Yahoo login. I entered my user name and password and was asked “for security reasons” to enter an animated captcha. I am not kidding, here it is:

    animated captcha with bouncing letters over a letter storm or something

    I entered this and was asked to verify once more that I am really me and would love to see this picture, which was not private or anything that would warrant logging in to start with.

    I got the option to do an email verification or answer one of my security questions. Fine, let’s do the email verification.

    An email arrived and it looked like this:

    verification email with cut off text

    As you can see (and if not, I am telling you now) the text seems cut off and there is no code in the email. Touching the text of the mail allows me to scroll to the right and see the full stop after “account.” I thought at first the code was embedded as an image and Google had filtered it out, but there was no message of that sort.

    Well, that didn’t help. So I went back in the verification process and answered one of my questions instead. The photo wasn’t worth it.

    What happened?

    By mere chance I found the solution. You can double-tap the email in GMail for Android and it extends it to the full text. Then you can scroll around. For some reason the longest line gets displayed and the rest collapsed.

    The lesson from that: do not fix line widths in emails (in this case it seems to be a 550px layout table) if you are displaying important information.

    I am not sure if that is a bug or annoyance in GMail, but in any case, this is not a good user experience. I reported this to Yahoo and hopefully they’ll fix this in the login verification mail.

    Bogomil ShopovWhy webcompat.com should do better + a proposal

    I really like the webcompat project‘s idea: to let any user report bugs on any website and in any browser. Let’s take a step back: what are the three main problems with bug-reporting in general?

    Most of the users are not-so-tech savvy.

    It’s true. They know something is wrong, because the website is not working for them, but they don’t know why that is or how to report it. Imagine if they could use the tools they are familiar with to report bugs and give the developers the data they need without even realizing it. Cool, huh?

    “It’s working for me”

    Thanks to different browsers, standards and developers, a website that one user sees can be completely different for another user. Agreed?

    You can imagine the questions the user must answer: “On which page are you?”, “What are you doing exactly?”, “Did you select something wrong?”, “Did you write a number in your street address as requested?”, “Are you logged in?”, etc.

    There is another option too: the developer (or a QA) will say to the user “It’s working for me”, which tells the end user: you are doing something wrong, John or Jane; it’s not the website, it’s you.

    The end user doesn’t know how to fill-in bugs into bug tracking systems

    The majority of people who find a bug don’t know how to report it or why they should. Most companies or projects have a 1-2 page how-to attached to the bug reporting form, but the result is still questionable.


    The WebCompat way: Now

    Webcompat will give all users a brand new way to report bugs. It will also provide a community which will be able to easily find fixes for the bugs or, alternatively, contact the website owners and report them.

    There is still one problem with that. Users are still missing the tools they are familiar with for reporting bugs, and the developers (the community of fixers) are still missing the vital information they need to fix a bug.


    The WebCompat way: Upgraded

    We can fix that! As part of Usersnap’s mission to make this process easier, I can propose a solution, and having been a Mozillian for almost a decade, I believe it will help millions of users report bugs and make the web a better place – this is what Usersnap was created for.

    The ideal set of information to fix a bug from the developer’s perspective:


    • Obviously, a report of what exactly is not working
    • Information about the plug-ins installed in the browser
    • OS and browser version, at a minimum
    • JavaScript error log and the XMLHttpRequest log, in case there are problems there (we still don’t know that). The worst thing about client-side JavaScript errors is that they happen on the client side, which is a cruel place far away from the developer trying to fix this error.
    • It would also be great to know what the user did BEFORE the error (yes we can do that)

    Wouldn’t it be awesome if we could have this information attached to every bug report, collected automatically?
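    As a sketch, an automatically collected report might be shaped like this (field names are purely illustrative, not Usersnap’s actual API):

```python
def build_report(description, environment, js_errors, xhr_log, user_actions):
    # Bundle the fields listed above into one report dict. Every key
    # name here is a made-up illustration of the idea.
    return {
        "description": description,            # what exactly is not working
        "plugins": environment.get("plugins", []),
        "os": environment.get("os"),           # OS and browser version,
        "browser": environment.get("browser"), # at a minimum
        "js_errors": js_errors,                # client-side error log
        "xhr_log": xhr_log,                    # XMLHttpRequest log
        "user_actions": user_actions,          # what the user did BEFORE
    }
```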


    Push to GitHub

    And what is the best part of this? All data goes directly to your bug tracking solution – in this case GitHub – and everybody is happy.

    Mozilla WebDev CommunityWebdev Extravaganza July 2014 Notes

    Once a month, web developers across the Mozilla Project get together to talk about the things we’ve shipped, discuss the libraries we’re working on, introduce any new faces, and argue about pointless things. We call it the Webdev Extravaganza! It’s open to the public; you should come next month!

    There’s a wiki page that’s mostly useful for pre-meeting connection info and agenda-building, a recording of the meeting, and an etherpad which is a mix of semi-useful info and avant-garde commentary. This month, I’m trying out writing a post-meeting blog post that serves as a more serious log of what was shared during the meeting.

    Shipping Celebration

    The shipping celebration is for sharing anything that we have shipped in the past month, from incremental updates to our sites to brand new sites to library updates.

    Mobile Partners

    mobilepartners.mozilla.org is a Mezzanine-based site that allows phone manufacturers and operators to learn about Firefox OS and sign up as a partner. This month we shipped an update that, among other things, tightens up the site’s Salesforce integration, replaces HTML-based agreements with PDF-based ones displayed via PDF.js, and moves the site off of old Mozilla Labs hardware.


    input.mozilla.org got two major features shipped:

    1. All non-English feedback for Firefox OS is now being automatically sent to humans for translation. Having the feedback translated allows processes that work on English only, like Input’s sentiment analysis, to run on feedback from non-English users.
    2. An improved GET API for pulling data from Input, useful primarily for creating interesting dashboards without having to build them into Input itself.

    Open Source Citizenship

    Peep has an IRC channel

    Peep is a wrapper around pip that vets packages downloaded from PyPI via a hash in your requirements file. There’s now an IRC channel for discussing peep development: #peep on irc.mozilla.org.
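    For context, a peep requirements file is an ordinary pip requirements file with a hash comment above each package (the hash below is a made-up placeholder, not a real digest):

```
# sha256: nE2k0txMvDkf24p2PcZIhXh9N1bLXcUkaT1zr3sPTs4
requests==2.3.0
```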


    Spiderflunky is a static analysis tool for JavaScript that was modified from a tool for inspecting packaged apps called perfalator. DXR is planning to use it, and there’s interest around the project, so it has its own repository now.

    New Hires / Interns / Volunteers

    This month we have several new interns (including a bunch from the Open Source Lab at OSU):

    Name                       IRC Nick   Project
    Trevor Bramwell            bramwelt   crash-stats.mozilla.com
    Ian Kronquist              muricula   input.mozilla.org
    Dean Johnson               deanj      support.mozilla.org
    Marcell Vazquez-Chanlatte  marcell    dxr.mozilla.org
    Christian Weiss            cweiss     Web Components / Brick

    Bikeshed / Roundtable

    In the interests of time and comprehensibility, open discussions will be ignored.

    What does Webdev do?

    As part of an ongoing effort to improve webdev, there’s an etherpad for listing the things that webdev does, along with who owns them, what goals they contribute to, and how they can be improved. You are encouraged to take a look and add info or opinions by the end of the week.

    Interesting Python Stuff

    Erik Rose shared a backport of concurrent.futures from Python 3.2 and a feature from setuptools called entry points that is useful for, among other things, supporting a plugin architecture in your library.
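    For reference, a trivial use of the concurrent.futures API that the backport provides:

```python
from concurrent.futures import ThreadPoolExecutor

# Square some numbers on a small thread pool; map() preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda n: n * n, range(5)))
# results == [0, 1, 4, 9, 16]
```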

    Did you find this post useful? Let us know and we might do it again next month!

    Until then, STAY INCLUSIVE, KIDS

    Nicholas NethercoteMeasuring memory used by third-party code

    Firefox’s memory reporting infrastructure, which underlies about:memory, is great. And when it lacks coverage — causing the “heap-unclassified” number to get large — we can use DMD to identify where the unreported allocations are coming from. Using this information, we can extend existing memory reporters or write new ones to cover the missing heap blocks.

    But there is one exception: third-party code. Well… some libraries support custom allocators, which is great, because it lets us provide a counting allocator. And if we have a copy of the third-party code within Firefox, we can even use some pre-processor hacks to forcibly provide custom counting allocators for code that doesn’t support them.

    But some heap allocations are done by code we have no control over, like OpenGL drivers. For example, after opening a simple WebGL demo on my Linux box, I have over 50% “heap-unclassified”.

    208.11 MB (100.0%) -- explicit
    ├──107.76 MB (51.78%) ── heap-unclassified

    DMD’s output makes it clear that the OpenGL drivers are responsible. The following record is indicative.

    Unreported: 1 block in stack trace record 2 of 3,597
     15,486,976 bytes (15,482,896 requested / 4,080 slop)
     6.92% of the heap (20.75% cumulative); 10.56% of unreported (31.67% cumulative)
     Allocated at
     replace_malloc (/home/njn/moz/mi8/co64dmd/memory/replace/dmd/../../../../memory/replace/dmd/DMD.cpp:1245) 0x7bf895f1
     _swrast_CreateContext (??:?) 0x3c907f03
     ??? (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so) 0x3cd84fa8
     ??? (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so) 0x3cd9fa2c
     ??? (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so) 0x3cd8b996
     ??? (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so) 0x3ce1f790
     ??? (/usr/lib/x86_64-linux-gnu/dri/i965_dri.so) 0x3ce1f935
     glXGetDriverConfig (??:?) 0x3dce1827
     glXDestroyGLXPixmap (??:?) 0x3dcbc213
     glXCreateNewContext (??:?) 0x3dcbc48a
     mozilla::gl::GLContextGLX::CreateGLContext(mozilla::gfx::SurfaceCaps const&, mozilla::gl::GLContextGLX*, bool, _XDisplay*, unsigned long, __GLXFBConfigRec*, bool, gfxXlibSurface*) (/home/njn/moz/mi8/co64dmd/gfx/gl/../../../gfx/gl/GLContextProviderGLX.cpp:783) 0x753c99f4

    The bottom-most frame is for a function (CreateGLContext) within Firefox’s codebase, and then control passes to the OpenGL driver, which eventually does a heap allocation, which ends up in DMD’s replace_malloc function.

    The following DMD report is a similar case that shows up on Firefox OS.

    Unreported: 1 block in stack trace record 1 of 463
     1,454,080 bytes (1,454,080 requested / 0 slop)
     9.75% of the heap (9.75% cumulative); 21.20% of unreported (21.20% cumulative)
     Allocated at
     replace_calloc /Volumes/firefoxos/B2G/gecko/memory/replace/dmd/DMD.cpp:1264 (0xb6f90744 libdmd.so+0x5744)
     os_calloc (0xb25aba16 libgsl.so+0xda16) (no addr2line)
     rb_alloc_primitive_lists (0xb1646ebc libGLESv2_adreno.so+0x74ebc) (no addr2line)
     rb_context_create (0xb16446c6 libGLESv2_adreno.so+0x726c6) (no addr2line)
     gl2_context_create (0xb16216f6 libGLESv2_adreno.so+0x4f6f6) (no addr2line)
     eglCreateClientApiContext (0xb25d3048 libEGL_adreno.so+0x1a048) (no addr2line)
     qeglDrvAPI_eglCreateContext (0xb25c931c libEGL_adreno.so+0x1031c) (no addr2line)
     eglCreateContext (0xb25bfb58 libEGL_adreno.so+0x6b58) (no addr2line)
     eglCreateContext /Volumes/firefoxos/B2G/frameworks/native/opengl/libs/EGL/eglApi.cpp:527 (0xb423dda2 libEGL.so+0xeda2)
     mozilla::gl::GLLibraryEGL::fCreateContext(void*, void*, void*, int const*) /Volumes/firefoxos/B2G/gecko/gfx/gl/GLLibraryEGL.h:180 (discriminator 3) (0xb4e88f4c libxul.so+0x789f4c)

    We can’t traverse these allocations in the usual manner to measure them, because we have no idea about the layout of the relevant data structures. And we can’t provide a custom counting allocator to code outside of Firefox’s codebase.

    However, although we pass control to the driver, control eventually comes back to the heap allocator, and that is something that we do have some power to change. So I had an idea to toggle some kind of mode that records all the allocations that occur within a section of code, as the following code snippet demonstrates.

    SetHeapBlockTagForThread("webgl-contexts");
    context = glx.xCreateNewContext(display, cfg, LOCAL_GLX_RGBA_TYPE, glxContext, True);
    ClearHeapBlockTagForThread();

    The calls on either side of glx.xCreateNewContext tell the allocator that it should tag all allocations done within that call. And later on, the relevant memory reporter can ask the allocator how many of these allocations remain and how big they are. I’ve implemented a draft version of this, and it basically works, as the following about:memory output shows.

    216.97 MB (100.0%) -- explicit
    ├───78.50 MB (36.18%) ── webgl-contexts
    ├───32.37 MB (14.92%) ── heap-unclassified

    The implementation is fairly simple.

    • There’s a global hash table which records which live heap blocks have a tag associated with them. (Most heap blocks don’t have a tag, so this table stays small.)
    • When SetHeapBlockTagForThread is called, the given tag is stored in thread-local storage. When ClearHeapBlockTagForThread is called, the tag is cleared.
    • When an allocation happens, we (quickly) check if there’s a tag set for the current thread and if so, put a (pointer,tag) pair into the table. Otherwise, we do nothing extra.
    • When a deallocation happens, we check if the deallocated block is in the table, and remove it if so.
    • To find all the live heap blocks with a particular tag, we simply iterate over the table looking for tag matches. This can be used by a memory reporter.
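    The bookkeeping above can be sketched in pseudo-Python (the real draft lives in C++ inside the allocator; the names mirror the SetHeapBlockTagForThread / ClearHeapBlockTagForThread calls mentioned above):

```python
import threading

_tls = threading.local()  # holds the current thread's tag, if any
_tagged = {}              # live heap block address -> tag

def set_heap_block_tag_for_thread(tag):
    # Store the tag in thread-local storage.
    _tls.tag = tag

def clear_heap_block_tag_for_thread():
    _tls.tag = None

def on_alloc(addr):
    # Fast path: most threads have no tag set, so we usually do nothing.
    tag = getattr(_tls, "tag", None)
    if tag is not None:
        _tagged[addr] = tag

def on_free(addr):
    # Only tagged blocks are in the table, so this is usually a miss.
    _tagged.pop(addr, None)

def blocks_with_tag(tag):
    # Used by a memory reporter to find all live blocks with a tag.
    return [a for a, t in _tagged.items() if t == tag]
```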

    Unfortunately, the implementation isn’t suitable for landing in Firefox’s code, for several reasons.

    • It uses Mike Hommey’s replace_malloc infrastructure to wrap the default allocator (jemalloc). This works well — DMD does the same thing — but using it requires doing a special build and then setting some environment variables at start-up. This is ok for an occasional-use tool that’s only used by Firefox developers, but it’s important that about:memory works in vanilla builds without any additional effort.
    • Alternatively, I could modify jemalloc directly, but we’re hoping to one day move away from our old, heavily-modified version of jemalloc and start using an unmodified jemalloc3.
    • It may have a non-trivial performance hit. Although I haven’t measured performance yet — the above points are a bigger show-stopper at the moment — I’m worried about having to do a hash table lookup on every deallocation. Alternative implementations (store a marker in each block, or store tagged blocks in their own zone) are possible but present their own difficulties.
    • It can miss some allocations. When adding a tag for a particular section of code, you don’t want to mark every allocation that occurs while that section executes, because there could be multiple threads running and you don’t want to mark allocations from other threads. So it restricts the marking to a single thread, but if the section creates a new thread itself, any allocations done on that new thread will be missed. This might sound unlikely, but my implementation appears to miss some allocations and this is my best theory as to why.

    This issue of OpenGL drivers and other kinds of third-party code has been a long-term shortcoming with about:memory. For the first time I have a partial solution, though it still has major problems. I’d love to hear if anyone has additional ideas on how to make it better.

    Patrick ClokeNew Blog: Serving the Content

    In the first part of this blog post I talked about using Pelican to create a blog; this is a bit more about how I got it up and running.

    Getting a Domain

    The most exciting part! Getting a domain! I used gandi.net; it was recommended to me by Florian as "awesome, but a bit expensive". I liked that they actually explain exactly what I was getting by registering a domain through them. Nowhere else I looked was this explicit.

    Once you get your domain you’ll need to set up your CNAME record to forward to wherever you’re serving your content. I found it pretty interesting that gandi essentially gives you a DNS zone file to modify. I ended up making a few modifications:

    • Created a patrick subdomain (patrick.cloke.us)
    • Redirected the apex domain (cloke.us) to the patrick subdomain
    • Redirected the www subdomain to the patrick subdomain

    I also created a few email aliases which forward to the email accounts I already own.

    Serving the Content

    OK, we have a domain! We have content! How do we actually link them!? I used GitHub Pages, cause I’m cheap and don’t like to pay for things. The quick version:

    • Create a repository that is named <your account name>.github.io
    • Push whatever HTML content you want into the master branch
    • Voilà, it’s available at <your account name>.github.io

    Personally, I store my Pelican code in a separate source branch [1] and use ghp-import to actually publish my content. I’ve automated a lot of the tasks by extending the default fabfile.py that is generated with the quickstart. My workflow looks something like:

    komodo content/new-article.rst
    # <edit article>
    # <check it out in a browser using fab build/serve/regenerate>
    git add content/ && git commit -m "Add 'New Article!'."
    fab publish # Which runs "ghp-import -p -b master output" underneath!
    git push origin source

    One other thing you’ll need to do is add a CNAME file containing the domain of your host (and only the domain). I found the GitHub documentation on this extremely confusing, but it’s pretty simple:

    • Create a file called CNAME somewhere you have static files in Pelican (mine is at content/static/CNAME)
    • Add a few lines to your pelicanconf.py to have this file end up in the root (STATIC_PATHS and EXTRA_PATH_METADATA are Pelican's standard settings for this):

    # Set up static content and output locations.
    STATIC_PATHS = ['static']
    EXTRA_PATH_METADATA = {
        'static/CNAME': {'path': 'CNAME'},
    }

    It took 10 - 20 minutes for this to "kick in" on GitHub; until then I got a 404 GitHub page.

    Redirect Blogger

    This is the really fun part. How the hell do we redirect blogger links to actually go to the new location of each blog post? With some hackery, some luck, and some magic.

    I found some help in an article about switching to WordPress from Blogger and modified the template they had there. On the Blogger dashboard, choose "Template", scroll to the bottom and click "Revert to Classic Template". Then use something like the following template:

    <script type="text/javascript">
        // On the main/archive pages (wrapped in Blogger's conditional tags
        // in the full template), just send visitors to the new blog root.
        window.location.href = "http://patrick.cloke.us/";

        // On individual post pages, rebuild the new URL:
        // This is the full permalink.
        var permalink = "<$BlogItemPermalinkURL$>";
        // Split up each part by the slash.
        var parts = permalink.split("/");
        // Take the last part (the page) and strip off the .html.
        var slug = parts.slice(-1)[0].slice(0, -5);
        var article = "http://patrick.cloke.us/posts/<BlogDateHeader><$BlogDateHeaderDate$></BlogDateHeader>/" + slug;
        window.location.href = article;
        window.onload = function() {
            document.getElementById("linkrel").href = article;
            document.getElementById("simplelink").href = article;
        };
    </script>
    <link rel="canonical" href="http://patrick.cloke.us/" />
    <link id="linkrel" rel="canonical" />
    <h1><a href="http://patrick.cloke.us"><$BlogTitle$></a></h1>
    <a id="simplelink">

    Obviously you’ll need to change the URLs, but the key part is that we’re generating a URL based on the date and the full article name. The magic comes in generating the date. To get it in the format I wanted (YYYY/MM/DD) I modified the "Date Header Format" in "Settings" > "Language and formatting". This matches how I formatted my URLs in my pelicanconf.py. The slug that gets generated needs to match the slug you used in your template so the link will work. (I had some help in figuring out these template tags.)
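    The slug logic in that template is compact; mirrored in Python (with a hypothetical permalink and date) it works like this:

```python
# Mirror of the redirect template's JavaScript slug logic in Python.
# The permalink and date below are hypothetical examples.
permalink = "http://clokep.blogspot.com/2014/06/new-blog.html"

# Split up each part by the slash and keep the last component (the page).
page = permalink.split("/")[-1]

# Strip off the trailing ".html" to recover the slug.
slug = page[:-len(".html")]

# In the template, the date prefix comes from <$BlogDateHeaderDate$>,
# formatted as YYYY/MM/DD to match the Pelican URL scheme.
date = "2014/06/17"

article = "http://patrick.cloke.us/posts/" + date + "/" + slug
print(article)  # http://patrick.cloke.us/posts/2014/06/17/new-blog
```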

    I’d suggest you check the links to all your articles! A couple of the dates were messed up in mine (the day was off by one, causing the forwarded location to be broken).

    The last thing to do is to redirect the Atom/RSS feed (if anyone is using that). Go to "Settings" > "Other" > "Post Feed Redirect URL" and set it to your new Atom feed URL (wherever that might be).

    [1]Pro-tip: You can change the "default" branch of your repository in the settings page on GitHub.

    Patrick ClokeNew Blog

    If you’re reading this you might have noticed that I have a shiny new blog! I had been pretty disappointed in Blogger practically since I started using it. But it was free, I was a poor college student, etc. I finally managed to get myself a domain name and set up a more proper blog!

    I was between a few different pieces of blog software (namely WordPress, DotClear and Jekyll) and couldn’t really decide. I loved the idea of the simplicity of Jekyll, but it’s Ruby. (Who’s got time for that?) I wanted something I could read the code of and understand if necessary. (And it has been necessary!) My main requirements for blog software were:

    • Easy to use and set up.
    • Free.
    • Support for syntax highlighted code blocks. (This was the only "hard" one to come by.)
    • Support for RSS feeds.

    After a quick Google search for "jekyll python", I ended up on Hyde. It seemed alright, but no one seemed to use it. Farther down the page I came across a blog talking about moving to Pelican. I was hooked. (Note that I don’t necessarily agree with all the ideas in that post, it just introduced me to Pelican.)

    Set up Pelican

    (Since I dislike writing my own descriptions:)

    "Pelican is a static site generator, written in Python, that requires no database or server-side logic."

    Cool! Sounds real simple to use! And it was. Mostly.

    I pretty much followed the quickstart:

    mkdir -p blog/blog # The first blog is to store other repos, etc.
    cd blog/blog
    git init && git checkout -b source # Explained later on.
    brew install pandoc # If you're not on Mac you'll need to do something else.
    mkvirtualenv blog
    pip install pelican markdown Fabric ghp_import feedparser

    I’ll let you read the rest of the quickstart guide, but it was super quick to get up and running! (I also referenced another blog post I found that had some good information!) I, of course, had no content… but wait, I did: it was just in Blogger!

    Importing Content from Blogger

    Pelican does not directly support importing from Blogger (Edit: Seems someone just put up a `pull request`_ to support Blogger natively!), but it supports importing from an RSS feed. The first time I did this it missed a couple of articles (which I didn’t notice right away), so make sure you bump up the maximum number of results in the URL like so (this also outputs into the folder "content"):

    pelican-import --feed http://clokep.blogspot.com/feeds/posts/default\?alt\=rss\&max-results\=240 -o content

    No import is ever perfect and I had to clean up a bunch of stuff by hand including:

    • The slugs did not match the ones from Blogger (this is important later on!)
    • Some of the dates were strangely wrong
    • Some HTML formatting was included (in particular around <code>/<pre> blocks I had added).
    • Some formatting was messed up.
    • The (single) image I had on my blog had to be manually downloaded and added.
    • I had bolded things I really wanted to be headers. (This is my fault!)
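    The slug mismatch in the first bullet matters most, since the old Blogger URLs get redirected based on slugs. One way to keep them stable is to pin the slug explicitly in each imported file's reStructuredText metadata; a minimal header might look like this (hypothetical post):

```rst
New Blog
########

:date: 2014-06-17 12:00
:slug: new-blog
:tags: pelican, blogger
```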

    I probably spent a couple of hours cleaning all the reStructuredText content up, but now I feel that I have a portable set of all of my blog posts, which I’m pretty happy about!

    Customizing Pelican

    I tried a few different themes for Pelican, but eventually settled on using and modifying the default theme. Frankly, I haven’t tried it on many different systems, so hopefully it doesn’t totally break on small screen sizes or something. I’m no HTML expert, so I’d rather talk about the other stuff I modified. (Although, if you’re curious, the main elements I did modify were adding the sidebar on the left and the organization of the archives page.)

    Blogger has a concept of "labels", while Pelican has both "categories" and "tags". I hate this. What’s the difference? Anyway, I wanted to eradicate the concept of a "category" (and "authors", since I’m the only one on my blog!), so I added a few things to my pelicanconf.py:

    # Disable categories.
    DISPLAY_CATEGORIES_ON_MENU = False
    DISPLAY_CATEGORY_ON_ARTICLE = False
    CATEGORY_SAVE_AS = ''
    CATEGORIES_SAVE_AS = ''

    # Disable author pages.
    AUTHOR_SAVE_AS = ''
    AUTHORS_SAVE_AS = ''

    Note that DISPLAY_CATEGORY_ON_ARTICLE is actually a variable I added and used in the template to not show categories above the list of tags on each article.
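    For reference, the template side of that custom variable can be a plain Jinja2 conditional (a sketch; the markup is illustrative, not the exact theme):

```html
{% if DISPLAY_CATEGORY_ON_ARTICLE %}
<span class="category">Category: {{ article.category }}</span>
{% endif %}
```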

    This is getting pretty long so I’ll leave how I’m actually serving this content to my next article!

    Niko MatsakisImplied bounds

    I am on vacation for a few weeks. I wanted to take some time to jot down an idea that’s been bouncing around in my head. I plan to submit an RFC at some point on this topic, but not yet, so I thought I’d start out by writing a blog post. Also, my poor blog has been neglected for some time. Consider this a draft RFC. Some important details about references are omitted and will come in a follow-up blog post.

    The high-level summary of the idea is that we will take advantage of bounds declared in type declarations to avoid repetition in fn and impl declarations.

    Summary and motivation

    Recent RFCs have introduced the ability to declare bounds within type declarations. For example, a HashMap type might be defined as follows:

    struct HashMap<K:Hash,V> { ... }
    trait Hash : Eq { ... }

    These type declarations indicate that every hashmap is parameterized by a key type K and a value type V. Furthermore, K must be a hashable type. (The trait definition for Hash, meanwhile, indicates that every hashable type must also be equatable.)

    Currently, the intention with these bounds is that every time the user writes HashMap<SomeKey,SomeValue>, the compiler will run off and verify that, indeed, SomeKey implements the trait Hash. (Which in turn implies that SomeKey implements Eq.)

    This RFC introduces a slight twist to this idea. For the types of function parameters as well as the self types of impls, we will not verify their bounds immediately, but rather attach those bounds as [where clauses][where] on the fn. This shifts the responsibility for proving the bounds are satisfied onto the fn’s caller; in turn, it allows the fn to assume that the bounds are satisfied. The net result is that you don’t have to write as many duplicate bounds.

    As applied to type parameter bounds

    Let me give an example. Here is a generic function that inserts a key into a hashmap if there is no existing entry for the key:

    fn insert_if_not_already_present<K,V>(
        hashmap: &mut HashMap<K,V>,
        key: K,
        value: V)
    {
        if hashmap.contains_key(&key) { return; }
        hashmap.insert(key, value);
    }

    Today this function would not type-check because the type K has no bounds. Instead one must declare K:Hash. But this bound feels rather pointless – after all, the fact that the function takes a hashmap as argument implies that K:Hash. With the proposed change, however, the fn above is perfectly legal.

    Because impl self types are treated the same way, it will also be less repetitious to define methods on a type. Whereas before one would have to write:

    impl<K:Hash,V> HashMap<K,V> {

    it is now sufficient to leave off the Hash bound, since it will be inferred from the self-type:

    impl<K,V> HashMap<K,V> {

    As applied to lifetimes

    In fact, we already have a similar rule for lifetimes. Specifically, in some cases, we will infer a relationship between the lifetime parameters of a function. This is the reason that the following function is legal:

    struct Foo { field: uint }
    fn get_pointer<'a,'b>(x: &'a &'b Foo) -> &'a uint {
        &x.field
    }

    Here, the lifetime of (**x).field (when all dereferences are written in full) is most properly 'b, but we are returning a reference with lifetime 'a. The compiler permits this because there exists a parameter of type &'a &'b Foo – from this, the compiler infers that 'a <= 'b. The basis for this inference is a rule that you cannot have a reference that outlives its referent. This is very helpful for making some programs typecheck: this is particularly true with generic traits, as described in this blog post.

    Detailed design

    Well-formed types and the BOUNDS function

    We say that a type is well-formed if all of its bounds are met. We define a function BOUNDS(T) that maps from a type T to the set of bounds that must be satisfied for T to be called well-formed.

    For the scalar types like int or float, BOUNDS just returns the empty set:

    BOUNDS(int) = {}
    BOUNDS(uint) = {}
    BOUNDS(...) = {}

    For struct types like HashMap<SomeKey,SomeValue>, the function combines the bounds declared on the HashMap type with those declared on SomeKey and SomeValue. (The SUBST() function is used to substitute the actual type parameters T1 ... Tn for their formal counterparts.)

    BOUNDS(S<T1, ..., Tn>) = UNION(SUBST(T1 ... Tn, DECLARED-BOUNDS(S)),
                                   BOUNDS(T1), ..., BOUNDS(Tn))
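    For instance, instantiating this rule for the hashmap example above yields:

```text
BOUNDS(HashMap<SomeKey, SomeValue>)
    = UNION({SomeKey : Hash}, BOUNDS(SomeKey), BOUNDS(SomeValue))
```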

    Enum and object types are handled in precisely the same way as struct types.

    For vector types, the element type must be sized:

    BOUNDS([T, ..N]) = UNION({T : Sized}, BOUNDS(T))

    Well-formed references

    For references, the type must have a suitable lower bound:

    BOUNDS(&'a T) = UNION({'a <= LOWER-BOUND(T)}, BOUNDS(T))
    BOUNDS(&'a mut T) = UNION({'a <= LOWER-BOUND(T)}, BOUNDS(T))

    Note that I have not defined the LOWER-BOUND function. The proper definition of this function is important and I have been working on it, but I prefer to defer that subject to a post/RFC of its own. (Clarifying the lower-bound function, however, is the heart of #5723 along with a number of other recent bugs being filed on lifetimes.) Note that this definition subsumes the existing rule for references described in my prior blog post.

    Andrew TruongWhy do I even bother?

    Most of you know that I'm a SUMO (support.mozilla.org) contributor. I know that Mozilla is all about being open, and certainly open to feedback. However, why should I still bother? The past few times I've started a thread in the contributors forum, I've gotten only a basic response. The suggestions and feedback seemed to be received, but I don't think they've gone anywhere. No action was taken after I suggested items; all I received was a response that doesn't satisfy or rectify the issue at hand. It often seems that when something breaks, the "suggestion" from some time ago finally gets fixed. Why not take the feedback into account, see what can be done, and then work on it, instead of waiting for something to break and then pushing out a fix that resolves it?
    I know by now that teams are under pressure, that there are set goals for each quarter, and that they need to reach those goals. There doesn't seem to be any time for volunteers anymore; it's all about Mozilla. Why is it so hard nowadays to get something done?
    Sure, I'm young, my feedback doesn't matter, and that's alright ;-)

    I'm happy that there have been occasions where I do get awesome responses, but why do I still bother when the responses I'm hoping for never come?

    I'll show some examples below:

    The level of care seems to be gone. I know that Mozilla has its priorities, but what can a single person like me do?

    Alex ClarkPillow 2-5-0 is out!

    Pillow is the "friendly" PIL fork by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.

    Since Pillow 2.0 the Pillow Team has adopted a quarterly release cycle; as such, Pillow 2.5.0 is out! Here's what's new in this release:

    2.5.0 (2014-07-01)

    • Imagedraw rewrite [terseus, wiredfool]
    • Add support for multithreaded test execution [wiredfool]
    • Prevent shell injection #748 [mbrown1413, wiredfool]
    • Support for Resolution in BMP files #734 [gcq]
    • Fix error in setup.py for Python 3 [matthew-brett]
    • Pyroma fix and add Python 3.4 to setup metadata #742 [wirefool]
    • Top level flake8 fixes #741 [aclark]
    • Remove obsolete Animated Raster Graphics (ARG) support [hugovk]
    • Fix test_imagedraw failures #727 [cgohlke]
    • Fix AttributeError: class Image has no attribute 'DEBUG' #726 [cgohlke]
    • Fix msvc warning: 'inline' : macro redefinition #725 [cgohlke]
    • Cleanup #654 [dvska, hugovk, wiredfool]
    • 16-bit monochrome support for JPEG2000 [videan42]
    • Fixed ImagePalette.save [brightpisces]
    • Support JPEG qtables [csinchok]
    • Add binary morphology addon [dov, wiredfool]
    • Decompression bomb protection [hugovk]
    • Put images in a single directory [hugovk]
    • Support OpenJpeg 2.1 [al45tair]
    • Remove unistd.h #include for all platforms [wiredfool]
    • Use unittest for tests [hugovk]
    • ImageCms fixes [hugovk]
    • Added more ImageDraw tests [hugovk]
    • Added tests for Spider files [hugovk]
    • Use libtiff to write any compressed tiff files [wiredfool]
    • Support for pickling Image objects [hugovk]
    • Fixed resolution handling for EPS thumbnails [eliempje]
    • Fixed rendering of some binary EPS files (Issue #302) [eliempje]
    • Rename variables not to use built-in function names [hugovk]
    • Ignore junk JPEG markers [hugovk]
    • Change default interpolation for Image.thumbnail to Image.ANTIALIAS [hugovk]
    • Add tests and fixes for saving PDFs [hugovk]
    • Remove transparency resource after P->RGBA conversion [hugovk]
    • Clean up preprocessor cruft for Windows [CounterPillow]
    • Adjust Homebrew freetype detection logic [jacknagel]
    • Added Image.close, context manager support. [wiredfool]
    • Added support for 16 bit PGM files. [wiredfool]
    • Updated OleFileIO to version 0.30 from upstream [hugovk]
    • Added support for additional TIFF floating point format [Hijackal]
    • Have the tempfile use a suffix with a dot [wiredfool]
    • Fix variable name used for transparency manipulations [nijel]


    With every release, there are notable contributions I must acknowledge:

    • Thanks to Stephen Johnson for contributing http://pillow.readthedocs.org, we continue to rely on & extend this resource.
    • Thanks to Christopher Gohlke for producing Windows Egg, Exe, and Wheel distributions.
    • Thanks to Matthew Brett for producing OS X Wheels (for the first time ever!)
    • Thanks to Eric Soroos for his contributions and serving as "Pillow Man #2" (2nd in command).
    • Welcome to Hugo VK who has joined the Pillow Team & contributed significantly to this release.
    • Thanks to all the remaining unnamed contributors! We appreciate every commit.

    Enjoy Pillow 2.5.0 & please report issues here: https://github.com/python-imaging/Pillow/issues

    Jennifer BorissThe Internet Says Goodbye

    Humans are a community-centric species. We evolved to rely on community to survive. From the moment we stood on two legs, we have derived our identity, culture, and beliefs from the people […]

    Kim MoirThis week in Mozilla Releng - July 4, 2014

    This is a special double issue of this week in releng. I was so busy last week that I didn't get a chance to post it. Despite the fireworks for Canada Day and Independence Day, Mozilla release engineering managed to close some bugs.

    Major highlights:
    • Armen, although he now works on the A-team, made blobber uploads discoverable and blogged about it. Blobber is a server- and client-side set of tools that allows Releng's test infrastructure to upload files without requiring ssh keys to be deployed on the test machines.
    • Callek and Coop, who served on buildduty during the past two weeks, worked to address capacity issues with our test and build infrastructure. We hit a record of 88,000 jobs yesterday, which led to high pending counts.
    • Kim is trying to address the backlog of Android 2.3 test jobs by moving more test jobs from our in-house hardware to AWS, now that Geoff on the A-team has found a suitable image.
    • Rail switched jacuzzi EBS storage from magnetic to SSD. Jacuzzis are pools of similar build machines, and switching their EBS storage from magnetic to SSD in AWS will improve build times.
     Completed work (resolution is 'FIXED'):
    In progress work (unresolved and not assigned to nobody):

    Christie KoehlerIt’s the 4th of July and I’m Celebrating Independence from Facebook

    I just requested that Facebook permanently delete my account.

    This change is a long time coming. I’ve grown increasingly concerned about the power Facebook exercises to commodify and influence our social interactions. There’s nothing holding Facebook accountable in the exercise of this power. Aside from all of that, I get very little out of time spent on the site. Yes, it’s a way I can connect with some folks whom I’m not in the habit of calling, emailing or writing. There’s nothing stopping me from doing those things, however. I have the phone numbers, emails and addresses of the folks I generally care about keeping in touch with. I do wish more folks had their own blogs, though.

    Earlier in the week I posted a message on my timeline telling folks that in a few days I’d be deleting my account. I listed a few other ways to get in touch with me including twitter, my blog, and email. The other thing I did was look at the settings for every Facebook page I’m an admin on and ensure I wasn’t the only one (I wasn’t). I also downloaded a copy of my info.

    Today I logged in, ready to delete my account. First I couldn’t find a way to do so. I noticed a “deactivate my account” link under security settings. I figured this was the only way, so I tried it first.

    When you try to deactivate your account, Facebook presents you with a page that does everything it can to get you to keep your account active. It shows you pictures of your friends, says they will miss you and prompts you to message them. I found it particularly funny that one of the friends it showed me was Creepius the Bear (an identity created to demonstrate how creepy one can be on Facebook):

    Creepius will miss me after I’ve left Facebook.

    And then after this you must provide a reason you’re deactivating your account. For any reason you select, you’re given additional information that supposedly resolves the concern:

    Facebook wants to know why you’re deactivating your account.

    What caught my attention was the Email opt out option, which states:

    Note: Even after you deactivate, your friends can still invite you to events, tag you in photos, or ask you to join groups.

    Not what I wanted, so I started figuring out how to work around this. Unfriend everyone first? Sounds tedious. Then someone asks me in IRC, “why don’t you delete instead of deactivate?” I responded saying I didn’t know that was an option. So, I searched Facebook’s help for “deactivate my account” and found this help page: How do I permanently delete my account?

    I followed the link in that article and got this prompt:

    Deleting my Facebook account.

    Much nicer, right? No guilt-trips or attempts to invalidate my concerns. I clicked "Delete My Account", filled out my password and a captcha, and got the following confirmation:

    Confirmation that my account has been deactivated and will then be deleted.

    I also received confirmation via email.

    So, that’s it! Assuming I don’t log in to my account during the next 14 days, my account will be deleted. Ah, freedom!

    If you like the idea of doing this, but want a more gradual approach, check out de-facing, in which one person talks about their plan to leave Facebook one friend at a time.

    Vaibhav AgrawalGSoC 2014: Progress Report

    In the last month, a good deal of work has been done on the Mochitest Failure Investigator project. I described the algorithms implemented for bisecting the tests in my previous post. Since then, I have worked on adding a "sanity check" once the cause of failure is found, in which the tests are re-run omitting the cause of failure. The advantage of doing this is that if a test is failing because of more than one previous test, we can find all the tests causing the failure. I then worked on refactoring the patch to integrate it seamlessly with the existing code. Andrew and Joel gave me good suggestions to make the code more beautiful and readable, and it has been a challenging and great learning experience. The patch is now in the final review stages and is passing for all platforms on tbpl: https://tbpl.mozilla.org/?tree=Try&rev=07bbcb4a3e98. Apart from this, I worked on a small fun bug (925699) in which unwanted temporary files were not being cleaned up, eventually occupying a large amount of space on my machine.

    Since the proposed project is almost over and I still have a month and a half left, after discussing with my mentor I am planning to cover a few extra things:

    * support finding the cause of failure in case of a timeout/crash for a chunk

    * support for intermittent problems – this case is interesting, as we may be able to support AutoLand by using this.

    * validating new tests to reduce intermittent problems caused by their addition.

    Until next time!


    Jennie Rose HalperinOn working open in a closed world

    At Mozilla, we talk a lot about how working in the open can benefit our communities. As Mozillians, we come from a lot of different backgrounds and experience levels in terms of “openness,” and have blogged and blogged and blogged about this subject, trying to fight “community debt” and keep people active and involved using open approaches to collaboration. As David Boswell pointed out in a recent talk, a lot of this is due to the expanding nature of our communities; while he was able to reach out to one or two people when he wanted to get involved fifteen years ago, there are now hundreds of listservs and tools and thousands of people to engage with.

    At Ada Camp this weekend, I had a wonderful conversation with other feminists about hospitality and its absence in many communities. Working open is, for me, a form of hospitality. When we use phrases like “Designing for Participation,” we are actually inviting people into our work and then gifting it to them, asking them to share in our creativity, and using the power of the collective “hive” mind in order to create something beautiful, functional, and delightful. We should be continuing to embrace this gift economy, recognizing contributors in ways that they both want, and in perhaps less tangible ways.

    There’s a section of the book The Ethical Slut (pardon the title) that I’ve always loved. The authors propose that love and affection in our society are caught up in a mythical “starvation economy” and claim that many of us have been conditioned since childhood to “fight for whatever we get, often in cutthroat competition with our brothers and sisters.” They assert that people who believe in starvation economics are often possessive of their work, friends, and things, believing that anything they get has to come from “a small pool of not-enough” and has to be taken from someone else. Further, anything they have can only be taken from them rather than shared.

    I believe that creativity can be conceived of in a similar fashion. If there’s anything that working for Mozilla has taught me, it is that there are always enough (usually too many!) ideas to go around. Embracing creativity as a collaborative process is central to our ethos, and working “default open” should not just be about the final work, it should be also about the journey to get there. Inviting people to provide input into the story as well as the final product will not only make our events, projects, and products better, it will inspire a new kind of work and motivate our communities to find their impact because they have a say in the projects and products they love.

    While making project pages public, inviting volunteers to meetings and workweeks, and using public forums rather than personal emails are a start to working in the open, there is still so much more that we can be doing to ensure that a multitude of voices are included in our process. We can learn a lot from other open source communities, but I would posit that we can also be learning from activist communities, non-profits, corporate trainings, and others. We’ve already begun with our speaker series “Learning from other non-profits,” but I look forward to seeing how much more we can do. Breaking down the silos can help us empower and grow our communities in ways we didn’t think possible.

    As the community building team asserts,

    Mozilla has reached the limits of unplanned, organic community growth.

    For many people, one-on-one and personal interaction is the most important part of community, and until we create processes for creating and maintaining these connections as well as systems for mediating the inevitable conflicts that arise within communities working together toward a common goal, we have failed as advocates and community builders.

    To that end, I am working with my colleagues to bring process-based solutions into conversation and indeed into the structure of the organization. From Mozilla “guides” who will help contributors find their way in an increasingly confusing contributor landscape to training in non-violent communication and consensus, we want to provide our communities open solutions that make them want to continue contributing and creatively collaborating together. We can do other things as well, like running exciting meetings with innovative structures, providing fun tasks to volunteers, and keeping personal connections vivid and electric with possibility.

    On holidays, many Jews traditionally open the door and make a plate for any person who has no place to go. Reinterpreting that for our own creative processes, I would say that we should open the door and leave a place in our work for new people and new ideas because, as we have seen, there is enough. There is always enough.


    Laura HilligerUpdates from the edge

    Last week I was off work at a family event, and I was disappointed because I felt I wasn't getting the rest I needed from a week off of work. Apparently, social stress and work stress are different: this week I feel like I've been extremely productive and managed to make a bunch of things.

    First off, I was pleased to see that while I was away, work on the DRM and Net Neutrality trainings launching July 28th had started, and that the team creating that content is excited about the possibilities. There’s still loads to be done, so if those topics interest you and you want to help out, there are plenty of opportunities. (My brain was doing this all week. via Giphy)

    I collaborated with Doug to figure out the nuances of Webmaker Training and what it means for counting contribution towards the Mozilla project. I collected loads of data and wrote a couple of wiki pages that fully unpack the training that took place from May 12th to June 8th. This debrief page talks about what went right, what went wrong and some ideas for improvement. This page explores how training is different from other types of programs in terms of how it relates to contribution to the Mozilla project.

    Another thing I accomplished this week was taking a good deep dive into the new Learning Zone that MDN is putting together. I'm excited about this work because I think the MDN community and Webmaker community have a lot of overlap. We can help each other in learning social and technical skills around the web, and we can help each other #TeachTheWeb too. I'd like to create more bridges, and the Learning Zone work is a step in that direction. MDN is beginning to create articles and Makes that relate to the Web Literacy Map. They’re also starting to think about Thimble, Popcorn and X-Ray Goggles Makes that support active learning. They’re planning on making tutorials and other fun things for our community to remix or use to #TeachTheWeb.

    I've also just been added to an email chain about the new Connected Learning course coming out in September. I’m looking forward to exploring cMOOC challenges and spinning ideas about blended learning along with the team putting together this new experience, designed to create better ties between academic classrooms and online learning initiatives.

    For more info on what I do every week, you can always check out my Weeknotes. I'm all over the web, so get in touch!

    Byron Jones: bugzilla and performance, 2014

    bugzilla is a large legacy perl application initially developed at the birth of the world wide web. it has been refactored significantly in the 15 years since its release; however, some design decisions made at the start of the project carry significant technical debt in today’s modern web.

    while there have been a large number of micro-optimisations over the course of bugzilla’s life (generally as a result of performance profiling), there are limits to the benefits these sorts of optimisations can provide.

    this year has seen a lot of focus on improving bugzilla’s performance within mozilla, centred around the time it takes to display a bug to authenticated users.


    tl;dr bugzilla is faster


    the design of a modern large web application is generally centred around caches, between the business logic layer and the data sources, as well as between the users and the user interface. while bugzilla has been refactored to use objects, database abstraction, templates, etc, it had zero caching capabilities. this, coupled with completely stateless processing of each request, meant that every byte of html returned to the user was regenerated from scratch, starting with a new connection to the database.

    towards the end of 2013 i worked on implementing a memcached framework into bugzilla [bug 237498].

    retrofitting a caching mechanism into a large extendible framework proved to be a significant challenge. bugzilla provides the ability for developers to extend bugzilla’s functionality via extensions, including but not limited to adding new fields, user interface, or process. the extensions system conflicts with caching as it’s possible for an extension to conditionally alter an object in ways that would render it impossible to cache (e.g. add a new field to an object only if the current user is a member of a particular group).

    some compromises had to be made. instead of caching fully constructed objects, the cache sits between the object’s constructor and the database. we avoid a trip to the database, but still have to construct objects from that data (which allows extensions to modify the object during construction).

    code which updated the database directly instead of using bugzilla’s objects had to be found and rewritten to use the objects or updated to manually clear the cache entries. extra care had to be taken as returning stale data could silently result in data loss. to placate concerns that these errors would be impossible to detect, the caller of an object’s constructor must pass in a parameter to “opt-in” to caching.
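    as a rough sketch, the opt-in cache-aside pattern described above looks something like this (python pseudocode standing in for bugzilla’s perl; the names are illustrative, not bugzilla’s actual api):

```python
# Illustrative cache-aside sketch of the design described above.
# A real deployment talks to memcached; a dict stands in here.
cache = {}

def fetch_row_from_db(table, object_id):
    # stand-in for a real database query
    return {"id": object_id, "table": table}

def load_object(table, object_id, use_cache=False):
    """Construct an object, optionally consulting the cache.

    Callers must opt in with use_cache=True, mirroring the decision to
    make caching explicit so stale-data bugs are easier to audit.
    """
    key = f"{table}:{object_id}"
    if use_cache and key in cache:
        row = cache[key]
    else:
        row = fetch_row_from_db(table, object_id)
        if use_cache:
            cache[key] = row
    # objects are still constructed from the raw row, so extensions
    # can modify them during construction
    return dict(row)

def invalidate(table, object_id):
    # code that writes to the database directly must clear the entry
    cache.pop(f"{table}:{object_id}", None)
```

    the important property is that callers who never opt in keep the old behaviour, while opted-in paths must also remember to invalidate on direct database writes.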

    in 2014 i built upon the memcached framework to support most of bugzilla’s objects [bug 956233], with the “bugs” object being the only notable exception. memcached also caches bugzilla’s configuration parameters (classifications, products, components, groups, flags, …) [bug 987032]. although caching the “bugs” object would be beneficial given its central role in all things bugzilla, it is highly likely that enabling this by default would break bugzilla extensions and customisations, as a high proportion of them update the database directly instead of using bugzilla’s object layer. this would manifest as data which is silently stale, making undetectable data loss a high probability.

    memcached support will be released with bugzilla 5.0, but has been live on bugzilla.mozilla.org (bmo) since february 2014.


    while profiling tools such as nytprof have often been pointed at bugzilla, bmo’s high number of concurrent requests and usage patterns meant that, time and time again, performance optimisations performed worse than expected once deployed to production.

    we took the decision to deploy instrumentation code into bugzilla itself, reporting on each http request, database query, and template execution. as bugzilla is written in perl, support was absent for off-the-shelf instrumentation tools such as new relic, so we had to roll our own data collection and reporting system [bug 956230].

    the collector wraps specific Bugzilla::DB, Bugzilla::Memcached and Bugzilla::Template calls via subclassing, then reports the data to an elasticsearch cluster. currently all reporting solutions are ad-hoc and involve running scripts which identify the most costly database queries and templates.
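    the wrap-and-report approach is a standard one: intercept the call you care about, time it, and ship the timing to a store. a minimal sketch of the idea (python stand-in for the perl subclassing; the report fields are made up, and a real system would send them to elasticsearch rather than keep them in memory):

```python
import time

REPORTS = []  # stand-in for the elasticsearch cluster

def instrumented(kind):
    """Wrap a callable so each invocation reports its duration."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                REPORTS.append({
                    "kind": kind,
                    "name": fn.__name__,
                    "ms": (time.perf_counter() - start) * 1000,
                })
        return wrapper
    return decorator

@instrumented("db")
def run_query(sql):
    return []  # stand-in for a real database query
```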

    this data identified areas of bugzilla which require optimisation or caching. examples include the optimisation of a single query which shaved 200ms (~15%) off page load times for most users [bug 993894], caching of all queries which read an entire table [part of bug 987032], and caching of user groups [bug 993939] and settings [bug 993926].

    we continue to revisit the instrumentation reports in order to guide further improvements.

    stylesheet concatenation and minification

    improving the performance of a web application isn’t limited to focusing on server-side execution speed.

    due to bugzilla’s extensions we ended up in a situation where bugzilla was serving multiple small css files – on bmo we loaded 17 stylesheets as part of show_bug, compared with 5 for an installation of bugzilla without any extensions.

    similar to the issue encountered with memcached, extensions have complete control with regards to optionally loading stylesheets, which means any css concatenation and minification solution needed to be implemented at run-time.

    [bug 977969] does exactly that – the template passes an array of stylesheets to load to bugzilla’s global header, where a hash of the array is used to find a unified stylesheet. simple minification is performed which dropped the stylesheet size from 54kb to 43kb on show_bug on bmo.
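    the scheme can be sketched as: hash the ordered list of stylesheets to get a key, then build and cache the unified, minified file on first request. a toy version (python; the hash choice and the minification rules here are illustrative, and a real implementation must also invalidate when file contents change):

```python
import hashlib
import re

unified_cache = {}

def minify(css):
    """Very naive minification: drop comments and collapse whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)
    css = re.sub(r"\s+", " ", css)
    return css.replace("; ", ";").replace("{ ", "{").replace(" }", "}").strip()

def unified_stylesheet(sheets):
    """sheets: ordered list of (filename, css_text) pairs.

    The key is a hash of the filename list, mirroring the approach of
    hashing the array of stylesheets; real code must also bust the
    cache when a file's contents change.
    """
    key = hashlib.sha256("|".join(name for name, _ in sheets).encode()).hexdigest()
    if key not in unified_cache:
        unified_cache[key] = minify("\n".join(css for _, css in sheets))
    return key, unified_cache[key]
```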

    stylesheet concatenation and minification support will be released with bugzilla 5.0, and has been live on bugzilla.mozilla.org since may 2014.


    in order to address performance issues caused by bugzilla’s use of the myisam table type, in march our DBAs upgraded our database cluster to mysql version 5.6. this was the result of analysis by the DBAs into replication and stability issues around our myisam tables.

    as mysql 5.6 adds support for fulltext indexing to its innodb table type, bugzilla was able to switch away from myisam. this immediately fixed the database replication issues, and provided a noticeable performance boost to searches involving comments.

    looking forward

    the next large project is to update bugzilla so the bug object can use memcached on an unmodified installation without any non-default extensions enabled. for reasons previously covered it’s unlikely we’ll ship a version of bugzilla with this enabled by default, however this will allow sites to audit their own code (if any) and enable caching of the bug object if required.

    we will build on the css concatenation and minification work to provide the same benefits to javascript files.

    we periodically enable instrumentation and use it to identify the next set of targets for optimisation. this will continue for the foreseeable future.

    Filed under: bmo, bugzilla

    Mozilla WebDev Community: New Firefox Nightly firstrun page and community growth

    This summer I’m doing an internship at Mozilla, and so far that’s the most exciting time of my life. I work in the Localization Team in Mountain View, and one of my goals is to increase the number of users on localized Firefox Nightly builds.
    Several projects have been ramped up to reach this goal, and one of them is the Nightly firstrun page. The firstrun page is the web page you see the first time you run Nightly (makes sense, right?), but also each time we release a new Firefox.

    Why do Firefox Nightly builds matter?

    Firefox Nightly is the earliest Firefox branch, the one all Firefox developers push their work to. This build may be unstable (though I’ve been using it as my main browser for more than 3 years, and so far I haven't hit that many issues). Most of its users are technical users, and they would be really valuable to the Mozilla mission if we could find an easy way to get them more involved in the project. Here comes the firstrun!

    Nightly firstrun in English

    Why does the firstrun page matter?

    This page is one of the first things we show to our technical users. The page has been simplified on purpose, to give them a clear way to get involved. 3 core areas are displayed: coding, QA, and localization. For instance, users can find a way to provide feedback to the localization team of their language, which is really helpful for localizers to get early feedback. They can even join the team! We really want users to get more involved with their local community. That’s why we don’t just get the page translated, we localize it by adding a customizable block at the bottom of the page. This allows Mozilla communities to create and publish their own content (for instance to promote local events, an IRC channel…) so that users can meet them and get involved more quickly and easily. Fun fact: I was myself using Nightly and contributing to Firefox code before even knowing a French community existed!
    In order to always get relevant content, localizers can update the block as part of the general Web Localization process, so it can be done quickly and regularly, without having to ask anyone.

    The page has been live for a few days now, and you’ll see it if you create a new profile on Firefox Nightly, or next time the Firefox team bumps up the version number. You can take a look at the per-locale customization on the French page below, for instance. The page is already localized in a few locales, and we are planning to add more Firefox Nightly locales really soon. The next step for the page will be to analyze the traffic and determine what we can improve.

    Nightly firstrun in French

    You too, get more involved!

    If you want to contribute to these areas, you can download Firefox Nightly in your language. Right now this is the only link we have to get people to download localized Firefox Nightly builds, but we plan to get them more exposed really soon by adding Firefox Nightly to the new design of the channel page.

    Personal notes

    I learned a lot during this project: this was my first page for mozilla.org, and it was a fun challenge. Our Creative Team created the new design, I did the HTML/CSS code — and of course, made sure the page was l10n friendly — and the Mozilla.org dev team reviewed my work and gave me useful tips. There were a lot of things to take into account: performance, accessibility, localization, metrics, responsive design… but it was really worth it.
    Also, a huge thank you to Pascal Chevrel and Delphine Lebédel, my mentors, and to all the Mozillians for helping me learn new things!

    Christie Koehler: Some OpenID Providers

    While I don’t hear about it much these days, there are still some sites that I need OpenID to log in to. I had been using myOpenID from Janrain for this, but that service was retired. Unfortunately, so was my backup provider, ClaimID.

    So, I went shopping for new providers! Here’s what I found:

    Whatever OpenID provider you have, I highly suggest setting up delegation. OpenID delegation means you can use any website you control as your OpenID login. The delegate website is configured to use your chosen provider and you can switch anytime without having to update your login information on other sites.

    How do you set up delegation? It’s easy! You just have to add the following two lines to the head of the site you want to act as delegate:

    <link rel="openid.server" href="https://myopenidprovider.com/server" />
    <link rel="openid.delegate" href="https://myid.myopenidprovider.com/" />

    Replace “myopenidprovider.com/server” with your provider’s OpenID server endpoint, and “myid.myopenidprovider.com” with your identity URL at that provider (e.g., your user page on openid.stackexchange.com). Note that openid.delegate points at your identity at the provider, not at your own site; the page you add these lines to is the delegate. Make sure you have an account at the OpenID provider of your choice before doing this.

    If you have a self-hosted WordPress blog, you can use this plugin instead of editing your theme files.

    Thanks Aaron Parecki, Nicolas Ward, and Sumana Harihareswara ‏ for helping me compile this list. Know of an OpenID provider not already on the list above? Let me know in the comments!

    Asa Dotzler: Foxtrot is Here


    Foxtrot image via Flickr user Helena Perez García, and used under a CC license

    We’re excited to launch the first, small-scale, FLAME reference phone program, “Foxtrot” for Mozillians.

    We have about 200 devices available for members of our community (volunteers and employees) who are not part of the core OS dev teams, who will use the phones regularly, and who will participate in product surveys, focused testing efforts, and be available to help us gauge the quality of specific features and the overall product.

    To apply to the program, head over to our Foxtrot Application:


    The application takes about 10 to 15 minutes.

    200 phones means that not everyone can participate in this program, but not to worry: there will be other opportunities for receiving free Flames.

    Joel Maher: Alert Manager has more documentation and a roadmap

    I have been using alert manager for a few months to help me track performance regressions. It is time to take it to the next level and increase productivity with it.

    Yesterday I created a wiki page outlining the project. Today I filed a bug of bugs to outline my roadmap.

    Basically we have:
    * a bunch of little UI polish bugs
    * some optimizations
    * addition of reporting
    * more work flow for easier management and investigations

    In the near future we will work on making this work for high resolution alerts (i.e. each page that we load in talos collects numbers and we need to figure out how to track regressions on those instead of the highly summarized version of a collection).

    Thanks for looking at this, improving tools always allows for higher quality work.

    Joel Maher: More thoughts on Auto-land and try server

    Last week I wrote a post with some thoughts on AutoLand and Try Server. It drew some wonderful comments, and because of that I have continued to think in the same problem space a bit more.

    In chatting with Vaibhav1994 (who is really doing an awesome GSoC project this summer for Mozilla), we started brainstorming another way to resolve our intermittent orange problem.

    What if we rerun the test case that caused the job to go orange (yes, in the case of a crash, leak or shutdown timeout we would rerun the entire job), and if it was green we could deem the failure intermittent and ignore it!

    With some of the work being done in bug 1014125, we could achieve this outside of buildbot and the job could repeat itself inside the single job instance yielding a true green.

    One thought: for a failing test we might want to run it 5 times and only call it intermittent if it fails just 1 of those runs; anything failing more often than that should still be treated as a real failure.
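    As a sketch, that first thought is just a rerun-and-count rule (illustrative Python only; real jobs run under buildbot, and the callable standing in for a test run here is hypothetical):

```python
def classify_failure(run_test, retries=5, max_failures=1):
    """Rerun a failing test and classify the failure.

    run_test: callable returning True on green, False on orange.
    Returns "intermittent" if at most `max_failures` of the reruns
    fail, otherwise "real failure".
    """
    failures = sum(1 for _ in range(retries) if not run_test())
    return "intermittent" if failures <= max_failures else "real failure"
```

    The thresholds are the tunable part: 5 runs with at most 1 failure matches the rule suggested above.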

    A second thought: we would do this by default for autoland on try, but still show the intermittents on integration branches.

    I will eventually get to my thoughts on massive chunking, but for now, let's hear more pros and cons of this idea.

    Michael Verdi: Videoblogging 10 years later

    Yesterday I was on another episode of The Web Ahead. This time, talking about the videoblogging movement that I was involved in starting back in 2004. It was then that my friend Ryanne and I created Freevlog to teach people how to get video on the web (we later turned that into a book). Firefox was a really big thing for us back then and it was the beginning of my love of the Mozilla mission and eventually led to me working at Mozilla.

    Henrik Skupin: Firefox Automation report – week 19/20 2014

    In this post you can find an overview of the work that happened in the Firefox Automation team during week 19 and 20.


    When we noticed that our Mozmill-CI production instance was quickly filling up the /data partition, leaving nearly no space to actually run Jenkins, Henrik did a quick check and saw that the problem was the update jobs. Instead of producing log files of about 7MB in size, files of more than 100MB each were present. Inspecting those files revealed that the culprit was all the SPDY log entries. As a fix Henrik reduced the amount of logging information, so it is still helpful but no longer explodes.

    In the past days we have also seen a lot of JSBridge disconnects while running our Mozmill tests. Andrei investigated this problem, and it turned out that the reduced delay for add-on installations was the cause. Something is most likely interfering with Mozmill and our SOCKS server. Increasing the delay for that dialog fixed the problem for now.

    We have been using Bugsahoy for a long time now, but we never actually noticed that the Github implementation was somewhat broken when it comes to filtering for languages. To fix that, Henrik added all the necessary language mappings. After updating the Github labels for all of our projects, we saw a good spike of new contributors interested in working with us.

    Individual Updates

    For more granular updates of each individual team member please visit our weekly team etherpad for week 19 and week 20.

    Meeting Details

    If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 19 and week 20.

    Roberto A. Vitillo: Most popular GPUs on Nightly and Release

    We have a long-standing bug about Firefox not scrolling smoothly with Intel GPUs. It turns out that recently, thanks to a driver update, performance has drastically improved for some of our users. While trying to confirm this hypothesis I found some interesting data about Firefox users on Nightly 33 and Release 30 that might come in handy in deciding what hardware to run our benchmarks on in the future.

    First things first, let’s have a look at the popularity of the various GPU vendors:

    That doesn’t really come as a surprise considering how ubiquitous Intel chips are nowadays. This also means we should concentrate our optimization efforts on Intel GPUs, since that’s where we can have the biggest impact.

    But how well does Firefox perform with GPUs from the above-mentioned vendors? It turns out that one of our telemetry metrics, which measures the average time to render a frame during a tab animation, comes in pretty handy to answer this question:

    Ideally we would like to reach 60 frames per second for any vendor. Considering that practically every second user has an Intel GPU, the fact that our performance on those chips is not splendid weighs even more on our shoulders. Though, as one might suspect, Intel chips are usually not as performant as Nvidia’s or AMD’s. There is also a suspicious difference in performance between Nightly and Release; could this be related to Bug 1013262?

    The next question is what specific models do our users possess. In order to answer it, let’s have a look at the most popular GPUs that account for 25% of our user base on Nightly:


    and on Release:

    As you can see, there are only Intel GPUs in the top 25% of the population. The HD 4000 and 3000 are dominating on Nightly, while the distribution is much more spread out on Release, with older models, like the GMA 4500, being quite popular. Unsurprisingly though, the most popular chips are mobile ones.

    Let’s take now as reference the most popular chips on Release and see how their performance compares to Nightly:

    We can immediately spot that newer chips perform generally better than older ones, unsurprisingly. But more importantly, there is a marked performance difference between the same models on Nightly and Release which will require further investigation. Also noteworthy is that older desktop models outperform newer mobile ones.

    And finally we can try to answer the original question by correlating the tab animation performance with the various driver versions. In order to do that, let’s take the Intel HD 4000 and plot the performance for its most popular drivers on the release channel and compare it to the nightly one:

    We can see that there is a clear difference in performance between the older 9.X drivers and the newer 10.X, which answers our original question. Unfortunately, only about 25% of our users on the release channel have updated their driver to a recent 10.X version. Also, the difference between the older and newer drivers is more marked on the nightly channel than on the release one.
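    The underlying comparison is a simple aggregation: group the telemetry submissions by driver version and average the frame times. A toy version of that grouping (made-up numbers; the real data comes from Telemetry, not from a list literal):

```python
from collections import defaultdict

def mean_frame_time_by_driver(records):
    """records: iterable of (driver_version, frame_time_ms) pairs.

    Returns a dict mapping driver version to mean frame time, the
    quantity plotted per driver in the charts above.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for driver, ms in records:
        sums[driver][0] += ms
        sums[driver][1] += 1
    return {driver: total / n for driver, (total, n) in sums.items()}
```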

    We are currently working on an alerting system for Telemetry that will notify us when a relevant change appears in our metrics. This should allow us to catch regressions, like Bug 1032185, or improvements, like the one mentioned in this blog post, pre-emptively, without accidentally stumbling on them.

    Armen Zambrano: Tbpl's blobber uploads are now discoverable

    What is blobber? Blobber is a server- and client-side set of tools that allows Releng's test infrastructure to upload files without requiring ssh keys to be deployed on the machines.

    This is useful since it allows uploads of screenshots, crashdumps and any other file needed to debug what failed on a test job.

    Up until now, if you wanted your scripts to determine the files uploaded in a job, you would have to download the log and parse it to find the TinderboxPrint lines for Blobber uploads, e.g.
    15:21:18 INFO - (blobuploader) - INFO - TinderboxPrint: Uploaded 70485077-b08a-4530-8d4b-c85b0d6f9bc7.dmp to http://mozilla-releng-blobs.s3.amazonaws.com/blobs/mozilla-inbound/sha512/5778e0be8288fe8c91ab69dd9c2b4fbcc00d0ccad4d3a8bd78d3abe681af13c664bd7c57705822a5585655e96ebd999b0649d7b5049fee1bd75a410ae6ee55af
    Now you can find the set of uploaded files by looking at the uploaded_files.json manifest that we upload at the end of all uploads. This can be discovered by inspecting the buildjson files or by listening to the pulse events. The key used is called "blobber_manifest_url", e.g.
    "blobber_manifest_url": "http://mozilla-releng-blobs.s3.amazonaws.com/blobs/try/sha512/39e400b6b94ac838b4e271ed61a893426371990f1d0cc45a7a5312d495cfdb485a1866d7b8012266621b4ee4df0cf9aa7d0f6d0e947ff63785543d80962aaf9b",
    In the future, this feature will be useful when we start uploading structured logs. It will save us from downloading logs just to extract metadata about the jobs!
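    As an example, a consumer of the pulse events could pull the manifest URL out of a message and fetch the file list. A sketch (only the "blobber_manifest_url" key comes from the post; the manifest being a JSON list of filenames is my assumption, and the fetch callable is injected so the sketch stays testable offline):

```python
import json

def uploaded_files(pulse_message, fetch=None):
    """Return the blobber upload list for a job, or None if no uploads.

    pulse_message: dict decoded from the pulse/buildjson payload.
    fetch: callable taking a URL and returning the response body
           (e.g. lambda url: urllib.request.urlopen(url).read()).
    """
    url = pulse_message.get("blobber_manifest_url")
    if not url:
        return None
    return json.loads(fetch(url))
```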

    No, your uploads are not this ugly
    This work was completed in bug 986112. Thanks to aki, catlee, mtabara and rail for helping me get this out the door. You can read more about Blobber by visiting: "Blobber is live - upload ALL the things!" and "Blobber - local environment setup".

    Creative Commons License
    This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

    Karl Dubost: Mediaqueries need space to be successful

    When using a smartphone, the Montreal Gazette sends its users to one of its two mobile Websites: touch or mobile. Well, not all smartphone users: Opera, Firefox OS and Firefox for Android browsers were not redirected to the mobile ones. They stayed on the desktop content.

    When this happens, we usually test in advance, before contacting the site. The rule is: if we ask the company to send us the mobile site, how does it fail? What are the things which would have to be fixed? So when I tested the touch Web site on Firefox OS, I was surprised to see it failing in some fashion. It looked like the viewport was wrong, or something related to the screen size. Could it be mediaqueries?

    First, mobile smartphones are redirected to /mobile/; then, to access the /touch/ version, you have to be AppleWebKit and have Mobile in your User Agent string, but not be a BlackBerry 9780. Let me sing "Oh! Canada". Ah no, sorry, that was two days ago.

    if (navigator.userAgent.match(/AppleWebKit/i) && navigator.userAgent.match(/Mobile/i) && !(navigator.userAgent.match(/BlackBerry 9780/i))) {
       // remove '/mobile' from the start of the path and add '/touch'
       var path = "/touch" + location.pathname.substr(7, location.pathname.length);
       document.location = path + location.search;
    }

    It was surprising because in the touch version of the CSS there are styles for Mozilla, Opera and all. Still, there were some mistakes in the mediaqueries, such as:

    @media only screen and(-webkit-min-device-pixel-ratio:1.5),only screen and(min--moz-device-pixel-ratio:1.5),only screen and(min-resolution:240dpi){
            -moz-background-size:776px 18px;
            -o-background-size:776px 18px;
            -webkit-background-size:776px 18px;
            background-size:776px 18px;

    Did you spot it? Yeah, min--moz-device-pixel-ratio:1.5; someone at lunch felt that beer was long overdue to be able to swallow a poutine. No pun. I LOVE poutine. But that was not the issue. The issue was that in the console I had, 14 times, the message:

    Expected ',' in media list but found 'and('.

    Huh? What could be wrong? So I made a test with the different mediaqueries rules.

    p {background-color: #fff;}
    @media screen and (max-width:600px) and (min-width: 501px) {
        p {background-color: yellow;}
    }
    @media screen and(max-width:500px){
        p {background-color: pink;}
    }

    Your eagle eye has noticed the difference between and( and and (. (And good luck parsing this sentence.) I added a space between the and and the ( parenthesis. The initial testing showed that it was working in Safari (yeah!) and failing in Firefox (booh!). I was so happy that I opened a bug, thinking I had found a parsing bug in Firefox's CSS parser. Then further tests showed it is failing in Opera, in IE11, and in Chrome…

    Ah change of strategy! This is failing everywhere but Safari. Let me find the relevant bug at Apple.

    What happened, I guess, is that someone creating the Web site tested on his/her iPhone and it worked. Then later on, someone from management or business accessed it with a BlackBerry and it failed. That might be why they put in the JavaScript redirect.

    Three things to do:

    1. ask the Web developer of La Gazette to fix the CSS. (ONGOING)
    2. ask WebKit developers to fix CSS Parsing (ONGOING)
    3. If you do not have spaces after the and in mediaqueries, give it space now!
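    If a stylesheet has a pile of these, a one-line clean-up can help while waiting for the proper fix (a quick sketch, not a CSS parser; it assumes and( only ever appears inside media queries):

```python
import re

def fix_media_queries(css):
    """Insert the space CSS requires between 'and' and the parenthesis."""
    return re.sub(r"\band\(", "and (", css)
```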


    Robert O'Callahan: Implementing Scroll Animations Using Web Animations

    It's fashionable for apps to perform fancy animations during scrolling. Some examples:

    • Parallax scrolling
    • Sticky headers that scroll normally until they would scroll out of view and then stop moving
    • Panels that slide laterally as you scroll vertically
    • Elements that shrink as their available space decreases instead of scrolling out of view
    • Scrollable panels that resist scrolling as you get near the end
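    Each of these effects is, at heart, a function from scroll position to a visual property, which is what makes a declarative, off-main-thread approach plausible. Two of the listed effects reduced to plain functions (illustrative math only; the parameter names are mine, not any proposed API):

```python
def parallax_offset(scroll_y, factor=0.5):
    """Parallax: the background layer moves slower than the content."""
    return scroll_y * factor

def sticky_header_offset(scroll_y, header_top, stick_at):
    """Sticky header: scrolls normally until it would pass `stick_at`
    from the viewport top, then is translated back down to stay put."""
    return max(0, scroll_y - (header_top - stick_at))
```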

    Obviously we need to support these behaviors well on the Web. Also obviously, we don't want to create a CSS property for each of them. Normally we'd handle this diversity by exposing a DOM API which lets developers implement their desired behavior in arbitrary Javascript. That's tricky in this case because script normally runs on the HTML5 event loop which is shared with a lot of other page activities, but for smooth touch tracking these scrolling animation calculations need to be performed reliably at the screen refresh rate, typically 60Hz. Even for skilled developers, it's easy to have a bug where once in a while some page activity (e.g. an event handler working through some unexpected large data set) blows the 16ms budget to make touch dragging less than perfect, especially on low-end mobile devices.

    There are a few possible approaches to fixing this. One is to not provide any new API, hope that skilled developers can avoid blowing the latency budget, and carefully engineer the browser to minimize its overhead. We took this approach to implementing homescreen panning in FirefoxOS. This approach sounds fragile to me. We could make it less fragile by changing event dispatch policy during a touch-drag, e.g. to suppress the firing of "non-essential" event handlers such as setTimeouts, but that would add platform complexity and possibly create compatibility issues.

    Another approach would be to move scroll animation calculations to a Worker script, per an old high-level proposal from Google (which AFAIK they are not currently pursuing). This would be more robust than main-thread calculations. It would probably be a bit clumsy.

    Another suggestion is to leverage the Web's existing and proposed animation support. Basically we would allow an animation on an element to use another element's scroll position instead of time as the input to the animation function. Tab Atkins proposed this with declarative CSS syntax a while ago, though it now seems to make more sense as part of Web Animations. This approach is appealing because this animation data can be processed off the main thread, so these animations can happen at 60Hz regardless of what the main thread is doing. It's also very flexible; versions of all of the above examples can be implemented using it.

    One important question is how much of the problem space is covered by the Web Animations approach. There are two sub-issues:

    • What scrolling effects depend on more than just the scroll position, e.g. scroll velocity? (There certainly are some, such as headers that appear when you scroll down but disappear when you scroll up.)
    • For effects that depend on just the scroll position, which ones can't be expressed just by animating CSS transforms and/or opacity as a function of the scroll position?
    If people are aware of scrolling effects in either of those two categories, it would be very useful to hear about them.

    Benjamin Kerensa: Release Management Work Week


    Team discussing goals

    Last week in Portland, Oregon, we had our second release management team work week of the year focusing on our goals and work ahead in Q3 of 2014. I was really excited to meet the new manager of the team, our new intern and two other team members I had not yet met.

    It was quite awesome to have the face-to-face time with the team to knock out some discussions and work that required the kind of collaboration that a work week offers. One thing I liked working on the most was discussing the current success of the Early Feedback Community Release Manager role I have had on the team (I’m the only non-employee on the team currently) and discussing ideas for improving the pathways for future contributors in the team while also creating new opportunities and a new pathway for me to continue to grow.

    One thing unique about this work week is that we also took some time to participate in Open Source Bridge, a local conference that Mozilla happened to be sponsoring at The Eliot Center, and that Lukas Blakk from our team was speaking at. Lukas used her keynote talk to introduce the awesome project she is working on, the Ascend Project, which she will be piloting soon in Portland.


    Lukas Blakk Ascend Project Keynote at Open Source Bridge 2014

    While this was a great work week and I think we accomplished a lot, I hope future work weeks are either out of town or that I can block off other life obligations to spend more time on-site, as I did have to drop off a few times for things that came up or run off to the occasional meeting or Vidyo call.

    Thanks to Lawrence Mandel for being such an awesome leader of our team and seeing the value in operating open by default. Thanks to Lukas for being a great mentor and an awesome person to contribute alongside. Thanks to Sylvestre for bringing us French biscuits and fresh ideas. Thanks to Bhavana for being so friendly and always offering new ideas, and thanks to Pranav for working so hard on picking up where Willie left off and giving us a new tool that will help our releases continue to be even more awesome.


    Frédéric HarperMicrosoft nominated me as a Most Valuable Professional


    You read that right; Microsoft nominated me, and I’m now an MVP, a Most Valuable Professional, for Internet Explorer. For those of you who don’t know, it’s an important recognition in the Microsoft ecosystem: it’s given to professionals with expertise in a Microsoft technology, or a technology Microsoft uses, for their work in the community.

    So why did Microsoft give me recognition for Internet Explorer? After all, I’m working at Mozilla. Well, I see this as good news for everything I was doing while I was there (like Make Web Not War), and everything I’m doing right now: it’s another proof that Microsoft is more and more open. As I’ve always said, it’s not perfect, but it’s going in the right direction (MS Open Tech is a great example). In my case, this award is less about Internet Explorer itself than about the Web, and that lovely technology that is HTML5. They recognize the work I’m doing in that sense, whether with open source, with communities in Montreal, or with the talks I give about the Web. At first it may seem weird, but I think it makes a lot of sense for Microsoft to recognize someone like me, even if I work at a company making a competing product. Microsoft has been doing a better job since Internet Explorer 9, and the browser is getting better and better. You may not use it, but other people do, and by making a better browser that better respects the standards, they are helping the web move forward. It’s also a good thing for us developers, as we can more easily build great experiences for all users, no matter which browser they use. On that note, I was happy to see a bit more transparency about IE with the platform status website.

    So, I salute Microsoft’s openness in nominating someone like me (they even hired me in the past). By being part of the MVP program, I’m looking forward to seeing how I can continue to work on the Mozilla mission while helping Microsoft be more open with web technologies.

    Microsoft nominated me as a Most Valuable Professional is a post on Out of Comfort Zone from Frédéric Harper

    Michelle ThorneWebmaker Training in Uganda

    60 newly trained Webmaker Mentors. 12 amazing Webmaker Super Mentors. 200 students taught how to participate on the web. 1 epic weekend in Kampala!

    Mozillians from Uganda, Kenya and Rwanda gathered together for the first time to run a train-the-trainer event for East Africa. The goal was to teach the local community–a lovely mix of educators, techies and university students in Kampala–how to teach the web.

    The training in Uganda builds on Webmaker’s free online professional development. Our theory is that by blending online and in-person professional development, participants get the most out of the experience and better retain the skills they learn. Not to mention they stay connected to a local community as well as a global one.

    Training Agenda

    Together with the amazing event hosts, we crafted a modular training agenda.

    It covered 2 days of training and a half-day practice event. Participants had little to no experience teaching the web before the event. But after the training, they would go on to teach 200 secondary school students!

    The training helped the participants get ready for the practice event and to teach the web to the communities they care about. We covered these four main learning objectives:

    • understanding the value of the open web, making as learning, and participatory learning
    • using the Webmaker tools to teach web literacy
    • creating your own teaching kit and being a good facilitator
    • participating in the global & local Webmaker community

    Not to mention lots of fun games and interstitial activities. I learned, for example, how to play a Ugandan schoolyard game called “Ali Baba and the 40 Thieves.”

    Community Leaders

    For me, the most exciting part of this event was meeting and supporting the emerging community leaders.

    Some of the Webmaker Super Mentors were part of our first training a year ago in Athens. Others were quite experienced event organizers, mentors and facilitators who stepped up to the role of teaching others how to teach.

    The training facilitators had a beautiful blend of experiences, and each facilitator, in addition to each participant, got to level up their skills as part of the training.

    Lessons Learned

    Every event is a learning experience, no matter which role you have. I learned a lot by helping San James teach people to teach people how to teach the web.

    • Practice events are invaluable. The highlight of the training was bringing the 60 freshly trained Webmaker Mentors to a live event, where they taught 200 secondary school students and put their new skills to practice. They prepared their own agendas, rehearsed them, and then split into small groups to teach these students. Floating around, it was amazing to hear the mentors sharing the knowledge they just learned the day before. And from the smiles on everyone’s faces, you could tell it was a fun and memorable event.

    • Prepare low-fi / no-fi activities. We missed a big opportunity to test the amazing new “low-fi/no-fi” teaching kit, for when you want to teach the web without internet or computers. Given our connectivity issues, this would have been perfect.

    • Make time for participants to take immediate next steps. I was proud of how well we worked debriefs and reflections into the training agenda. However, it would have been better if participants had had time to make an action plan and even take the first step in it. For example, they could pledge to host a small Maker Party, log the event and draft an agenda.

    Thank you!

    This was one of the most inspiring and fun events I’ve been to with Mozilla.

    The hugest of thank yous to all the Super Mentors–from Uganda, Kenya and Rwanda–for making the event possible. A special thanks to San James and Lawrence for believing in this event for a long time. Your upcoming Mozilla Festival East Africa will be a success thanks to your wonderful team and the people you trained. This is only the beginning!