Adam Roach: Better Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- which has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we can learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests and that, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who didn't want their movements across the web tracked could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies build a huge personal profile of you and follow you around the web to use it, user tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users accounts for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care

Under the assumption that advertisers were actually willing to honor user choice, there has been a large effort to develop and standardize a way -- "Do Not Track" -- for users to indicate to ad networks that they don't want to be tracked. It's been implemented by all major browsers, and endorsed by the FTC.

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice but to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.

Firefox Tracking Protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting -- basically, a list of known bad actors. A recent study of web page load times determined that just turning it on improves page load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on called Ghostery that works similarly to Tracking Protection. Install it, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click "select all." Then uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost.)

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also allows you fine-grained control over what you block, and lets you easily whitelist sites, if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power-user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike Tracking Protection and Ghostery, Privacy Badger isn't driven by a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So the trade-off is this: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects that it can introduce, and go turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago, and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the EFF's website.

John O'Duinn: “Distributed” ER#3 now available!

Earlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 for my “Distributed” book-in-progress.

Early Release #3 (ER#3) adds two new chapters -- Ch.1 on remoties trends and Ch.2 on the real cost of an office -- plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. These chapters were the inspiration for recent presentations and blog posts here, here and here.

ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don’t, please let me know!) And yes, you’ll get an update when ER#4 comes out next month.

Please let me know what you think of the book so far. Your feedback helps shape/scope the book! Is there anything I should add/edit/change? Anything that worked for you, as a “remotie” or person on a distributed team, that you wish you had known when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. It's all great to see coming together. To make sure that your feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com), although feedback via twitter and linkedin also works. Thanks again to everyone for the encouragement, proof-reading help and feedback so far.

Now, it’s time to get back to typing. ER#4 is coming soon!


Robert O'Callahan: rr Replay Performance Improvements

I've been spending a lot of time using rr, as have some other Mozilla developers, and it occurred to me that a small investment in speeding up the debugging experience could pay off in improved productivity quite quickly. Until recently no-one had ever really done any work to speed up replay, so there was some low-hanging fruit.

During recording we avoid trapping from tracees to the rr process for common syscalls (read, clock_gettime and the like) with an optimization we call "syscall buffering". The basic idea is that the tracee performs the syscall "untraced", we use a seccomp-bpf predicate to detect that the syscall should not cause a ptrace trap, and when the syscall completes the tracee copies its results to a log buffer. During replay we do not use seccomp-bpf; we were using PTRACE_SYSEMU to generate a ptrace trap for every syscall and then emulating the results of all syscalls from the rr process. The obvious major performance improvement is to avoid generating ptrace traps for buffered syscalls during replay, just as we do during recording.

This was tricky to do while preserving our desired invariants that control flow is identical between recording and replay, and data values (in application memory and registers) are identical at all times. For example consider the recvmsg system call, which takes an in/out msg parameter. During recording syscall wrappers in the tracee would copy msg to the syscall log buffer, perform the system call, then copy the data from the log buffer back to msg. Hitherto, during replay we would trap on the system call and copy the saved buffer contents for that system call to the tracee buffer, whereupon the tracee syscall wrappers would copy the data out to msg. To avoid trapping to rr for a sequence of such syscalls we need to copy the entire syscall log buffer to the tracee before replaying them, but then the syscall wrapper for recvmsg would overwrite the saved output when it copies msg to the buffer! I solved this, and some other related problems, by introducing a few functions that behave differently during recording and replay while preserving control flow and making sure that register values only diverge temporarily and only in a few registers. For this recvmsg case I introduced a function memcpy_input_parameter which behaves like memcpy during recording but is a noop during replay: it reads a global is_replay flag and then does a conditional move to set the source address to the destination address during replay.
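A minimal C sketch of this idea may help. This is hypothetical code based on the description above, not rr's actual implementation: `is_replay` and `memcpy_input_parameter` are the names mentioned in the text, and everything else is assumed.

```c
#include <string.h>

/* Sketch of memcpy_input_parameter: behaves like memcpy during
 * recording, but is effectively a no-op during replay (it "copies"
 * dest over itself, leaving the saved syscall results in the buffer
 * untouched). Control flow is identical in both modes; rr selects the
 * source with a conditional move so that register values diverge only
 * briefly, and only in a few registers. */
static int is_replay = 0;  /* global flag set by the replayer */

static void memcpy_input_parameter(void *dest, const void *src, size_t n) {
    /* During replay, substitute dest for src; the self-copy is benign
     * here (the real code keeps control flow identical either way). */
    const void *effective_src = is_replay ? dest : src;
    memcpy(dest, effective_src, n);
}
```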

Another interesting problem is recapturing control of the tracee after it has run a set of buffered syscalls. We need to trigger some kind of ptrace trap after reaching a certain point in the syscall log buffer, without altering the control flow of the tracee. I handled this by generating a large array of stub functions (each only one byte, a RET instruction) and after processing the log buffer entry ending at offset O, we call stub function number O/8 (each log record is at least 8 bytes long). rr identifies the last log entry after which it wants to stop the tracee, and sets a breakpoint at the appropriate stub function.
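The offset-to-stub mapping can be sketched like this (hypothetical C, not rr's code; all names here are assumptions). Since every log record is at least 8 bytes, record end-offsets are distinct multiples of 8, so each offset O maps to a unique stub O/8:

```c
#include <stddef.h>

#define MIN_RECORD_SIZE 8

typedef void (*stub_fn)(void);

/* In the real scheme each stub is a single one-byte RET instruction;
 * calling one does not perturb the tracee's own control flow. */
static void stub(void) {}

/* The record ending at buffer offset end_offset maps to stub number
 * end_offset / 8. rr sets a breakpoint on the stub for the last record
 * it wants replayed, and regains control when the tracee calls it. */
static size_t stub_index_for_offset(size_t end_offset) {
    return end_offset / MIN_RECORD_SIZE;
}

/* Called by the tracee after processing the record ending at end_offset. */
static void notify_record_done(size_t end_offset, stub_fn *stubs) {
    stubs[stub_index_for_offset(end_offset)]();
}
```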

It took a few late nights and a couple of half-days of debugging but it works now and I landed it on master. (Though I expect there may be a few latent bugs to shake out.) The results are good:

This shows much improved replay overhead for Mochitest and Reftest, though not much improvement on Octane. Mochitest and Reftest are quite system-call intensive so our optimization gives big wins there. Mochitests spend a significant amount of time in the HTTP server, which is not recorded by rr, and therefore zero-overhead replay could actually run significantly faster than normal execution, so it's not surprising we're already getting close to parity there. Octane replay is dominated by SCHED context-switch events, each one of which we replay using relatively expensive trickery to context-switch at exactly the right moment.

For rr cognoscenti: as part of eliminating traps for replay of buffered syscalls, I also eliminated the traps for the ioctls that arm/disarm the deschedule-notification events. That was relatively easy (just replace those syscalls with noops during replay) and actually simplified code since we don't have to write those events to the trace and can wholly ignore them during replay.

There's definitely more that can be squeezed out of replay, and probably recording as well. E.g. currently we record a SCHED event every time we try to context-switch, even if we end up rescheduling the thread that was already running (which is common). We don't need to do that, and eliminating those events would reduce syscallbuf flushing and also the number of ptrace traps taken during replay. This should hugely benefit Octane. I'm trying to focus on easy rr improvements with big wins that are likely to pay off for Mozilla developers in the short term; it's difficult to know whether any given improvement is in that category, but I think SCHED elision during recording probably is. (We used to elide recorded SCHED events during replay, but that added significant complexity to reverse execution so I took it out.)

Chris AtLee: Firefox builds on the Taskcluster Index


You may have heard rumblings that FTP is going away...


Over the past few quarters we've been working to migrate our infrastructure off of the ageing "FTP" [1] system to Amazon S3.

We've maintained some backwards compatibility for the time being [2], so that current Firefox CI and release builds are still available via, or preferably, since we don't support the ftp protocol any more!

Our long term plan is to make the builds available via the Taskcluster Index, and stop uploading builds to

How do I find my builds???


This is a pretty big change, but we really think it will make it easier to find the builds you're looking for.

The Taskcluster Index allows us to attach multiple "routes" to a build job. Think of a route as a kind of hierarchical tag, or directory. Unlike regular directories, a build can be tagged with multiple routes, for example, according to the revision or buildid used.

A great tool for exploring the Taskcluster Index is the Indexed Artifact Browser.

Here are some recent examples of nightly Firefox builds:

The latest win64 nightly Firefox build is available via the gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt route.

This same build (as of this writing) is also available via its revision:


Or the date:


The artifact browser is simply an interface on top of the index API. Using this API, you can also fetch files directly using wget, curl, python requests, etc.: [3]

Similar routes exist for other platforms, for B2G and mobile, and for opt/debug variations. I encourage you to explore the gecko.v2 namespace, and see if it makes things easier for you to find what you're looking for! [4]

Can't find what you want in the index? Please let us know!

[1]A historical name referring back to the time when we used the FTP protocol to serve these files. Today, the files are available only via HTTP(S).
[2]In fact, all Firefox builds are currently uploaded to S3. We've just had to implement some compatibility layers to make S3 appear in many ways like the old FTP service.
[3]Yes, you need to know the version number...for now. We're considering stripping that from the filenames. If you have thoughts on this, please get in touch!
[4]Ignore the warning on the right about "Task not found" - that just means there are no tasks with that exact route; kind of like an empty directory.

Jan de Mooij: Math.random() and 32-bit precision

Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out that V8/Chrome used a pretty bad algorithm to generate random numbers and, as of this week, V8 uses a better algorithm.

The article also mentioned that the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, for the past few days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+, see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).
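For the curious, XorShift128+ is tiny. Here is a C sketch following Vigna's widely circulated reference version; the seed values below are arbitrary placeholders (the state must never be all zero), and this is not SpiderMonkey's actual code.

```c
#include <stdint.h>

/* XorShift128+ (Vigna's reference version). 128 bits of state; must be
 * seeded to something other than all zeroes. */
static uint64_t s[2] = { 0x8A5CD789635D2DFFULL, 0x121FD2155C472F96ULL };

static uint64_t xorshift128plus(void) {
    uint64_t s1 = s[0];
    const uint64_t s0 = s[1];
    const uint64_t result = s0 + s1;     /* output is the sum of the state */
    s[0] = s0;
    s1 ^= s1 << 23;                           /* a */
    s[1] = s1 ^ s0 ^ (s1 >> 18) ^ (s0 >> 5);  /* b, c */
    return result;
}
```

The whole generator is a handful of shifts and xors, which is why it is faster than the Java-derived algorithm it replaces.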

While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak.

Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides it by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate only 0.00005% of all numbers an ideal RNG can generate.
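A quick C sketch of the arithmetic (not WebKit's or V8's actual code) makes the difference between a 32-bit and a 53-bit conversion concrete:

```c
#include <stdint.h>

/* Dividing a random 32-bit integer by UINT_MAX + 1 (i.e. 2^32) yields
 * at most 2^32 (~4.2 billion) distinct doubles in [0, 1), even though
 * doubles can represent 2^53 evenly spaced values in that range. */
static double random_double_32(uint32_t bits) {
    return bits / 4294967296.0;                        /* 2^32 */
}

/* A full-precision conversion uses 53 of the 64 random bits. */
static double random_double_53(uint64_t bits) {
    return (bits >> 11) * (1.0 / 9007199254740992.0);  /* 2^53 */
}
```

With the 32-bit conversion, the smallest nonzero result is 2^-32, so the entire range below that (about 2 million representable doubles) can never be produced.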

I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.

Here's the output I got in Firefox (old algorithm) after generating 115 billion numbers:

And a Firefox build with XorShift128+:

In Chrome (before Math.random was fixed):

And in Safari:

These pics clearly show the difference in precision.


Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.

Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.

Support.Mozilla.Org: What’s up with SUMO – 27th November

Hello, SUMO Nation!

Have you had a good week so far? We hope you have! Here are a few pertinent updates from the world of SUMO for your reading pleasure.

Welcome, new contributors!

…at least that’s the only one we know of! So, if you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Scribe & Phoxuponyou – for their constant contributions on the support forum – cheers!
  • Costenslayer – for offering to help us with cloning our YT videos to AirMo – thanks!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 30th of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum


  • Firefox for Desktop
    • GTK+3 is required for Firefox on Linux as of Beta 43 (the full release comes on the 15th of December).
  • Firefox for iOS
    • Version 1.2 is out – please go ahead and test it on your devices!
    • Version 2.0 is going to happen in 2016 (we just don’t know when exactly… yet), and should add synchronization from iOS to Desktop and/or Android.
  • Firefox OS
    • Peace and quiet makes for the start of a good weekend!
Thank you for reading all the way to the end! We hope you join us on Monday (and beyond that day), and wish you a great, relaxing weekend. Take it easy and stay foxy!

Mozilla Fundraising: A/B Test: Three-page vs One-page donation flow

Here are the results of our first A/B test from our 2015 End of Year fundraising campaign. In our control -- the three-page flow -- credit card donations are processed (via Stripe) from within our user interface, in a …

Dustin J. Mitchell: Remote GPG Agent

Private keys should be held close -- the fewer copies of them, and the fewer people have access to them, the better. SSH agents, with agent forwarding, do a pretty good job of this. For quite a long time, I've had my SSH private key stored only on my laptop and desktop, with a short script to forward that agent into my remote screen sessions. This works great: while I'm connected and my key is loaded, I can connect to hosts and push to repositories with no further interaction. But once I disconnect, the screen sessions can no longer access the key.

Doing the same for GPG keys turns out to be a bit harder, not helped by the lack of documentation from GnuPG itself. In fact, as far as I can tell, it was impossible before GnuPG 2.1, and a great deal more difficult before OpenSSH 6.7.

I don't want exactly the same thing, anyway: I only need access to my GPG private keys once every few days (to sign a commit, for example). So I'd like to control exactly when I make the agent available.

The solution I have found involves this shell script, named remote-gpg:

#! /bin/bash

set -e

host=$1
if [ -z "$host" ]; then
    echo "Supply a hostname"
    exit 1
fi

# remove any existing agent socket (in theory `StreamLocalBindUnlink yes` does this,
# but in practice, not so much)
ssh $host rm -f ~/.gnupg/S.gpg-agent
ssh -t -R ~/.gnupg/S.gpg-agent:.gnupg/S.gpg-agent-extra $host \
    sh -c 'echo; echo "Perform remote GPG operations and hit enter"; \
        read; \
        rm -f ~/.gnupg/S.gpg-agent'

The critical bit of configuration was to add the following to .gnupg/gpg-agent.conf on my laptop and desktop:

extra-socket /home/dustin/.gnupg/S.gpg-agent-extra

and then tell the agent to reload its config:

gpg-connect-agent reloadagent /bye

The idea is this: the local GPG agent (on the laptop or desktop) publishes this "extra" socket specifically for forwarding to remote machines. The set of commands accepted over the socket is limited, although it does include access to the key material. The SSH command then forwards the socket (this functionality was added in OpenSSH 6.7) to the remote host, after first deleting any existing socket. That command displays a prompt, waits for the user to signal completion of the operation, then cleans up.

To use this, I just open a new terminal or local screen window and run remote-gpg euclid. If my key is not already loaded, I'm prompted to enter the passphrase. GPG even annotates the prompt to indicate that it's from a remote connection. Once I've finished with the private keys, I go back to the window and hit enter.

Air Mozilla: Participation Call, 26 Nov 2015

The Participation Call helps connect Mozillians who are thinking about how we can achieve crazy and ambitious goals by bringing new people into the project...

Air Mozilla: Reps weekly, 26 Nov 2015

This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Armen Zambrano: Mozhginfo/Pushlog client released

If you've ever spent time trying to query revision metadata from hg, you can now use a Python library we've released to do so.

In bug 1203621 [1], our community contributor @MikeLing has helped us release the module we had written for Mozilla CI tools.

You can find the pushlog_client package here [3] and the code here [4].

Thanks MikeLing!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Andy McKay: Add-ons at Mozlando

If you are going to Orlando for the Mozilla summit and want to talk add-ons, we want to talk to you. But if you look at the schedule, we haven't scheduled a whole pile of meetings ourselves, even though there are 430 meetings (by my quick count) scheduled overall.

In fact, we've got one meeting I'd like people who are interested in learning about add-ons to come to: the add-ons open house and demos. We'll be talking road map and 2016 planning, doing a few demos, and then getting a chat going. You should come.

If you want to talk with us on any add-ons subject at any other time, we'd love to talk, and I'm sure we can work around each other's schedules. You can find me online or wander into the Firefox home room and look for the add-ons sign (yes, I plan on making one). We can set up quick ad-hoc meetings on pretty much anything.

There's a reason for that: the productivity and happiness I've encountered at work weeks is inversely related to the number of meetings I have. At Portland I was triple-booked at one point. At Whistler I had few meetings, and most of the ones I did have were relaxed, outdoors and in the sun.

Whistler ended up being a much more positive experience for me and my team.

So here's the plan for my team:

  • meet and interact with the rest of our team
  • learn what people outside our group are doing
  • learn where Mozilla is going
  • don't feel under pressure to attend any meetings

We'll be hacking on code when we are hanging out, the criteria being:

  • nothing that is on a critical path
  • the hack must involve working with other people
  • don't feel under pressure to complete that code

That's about it, just let the team flow, don't hold it back with meetings.

Because let's face it, if you want to have a meeting on a subject, we can do that any time with video conferencing. I look forward to seeing you there.

Nick Cameron: Macro hygiene in all its guises and variations

Note, I'm not sure of the terminology for some of this stuff, so I might be making horrible mistakes, apologies.

Usually, when we talk about macro hygiene we mean the ability to not confuse identifiers with the same name but from different contexts. This is a big and interesting topic in its own right, and I'll discuss it in some depth later. Today I want to talk about other kinds of macro hygiene.

There is hygiene when naming items (I've heard this called "path hygiene", but I'm not sure if that is a standard term). For example,

mod a {
    fn f() {}

    pub macro foo() {
        f()
    }
}

fn g() {
    a::foo!();
}

The macro use will expand to f(), but there is no f in scope. Currently this will be a name resolution error. Ideally, we would remember the scope where the call to f came from and look up f in that scope.

I believe that switching our hygiene algorithm to scope sets and using the scope sets for name resolution solves this issue.

Privacy hygiene

In the above example, f is private to a, so even if we can name it from the expansion of foo, we still can't access it due to its visibility. Again, scope sets come to the rescue. The intuition is that we check privacy from the scope used to find f, not from its lexical position. There are a few more details than that, but nothing that will make sense before explaining the scope sets algorithm in detail.

Unsafety hygiene

The goal here is that when checking for unsafety, whether or not we are allowed to execute unsafe code depends on the context where the code is written, not where it is expanded. For example,

unsafe fn foo(x: i32) {}

macro m1($x: expr) {
    foo($x)
}

macro m2($x: expr) {
    $x
}

macro m3($x: expr) {
    unsafe {
        foo($x)
    }
}

macro m4($x: expr) {
    unsafe {
        $x
    }
}

fn main() {
    foo(42); // bad
    unsafe {
        foo(42);  // ok
    }
    m1(42); // bad
    m2(foo(42)); // bad
    m3(42); // ok
    m4(foo(42)); // bad
    unsafe {
        m1(42); // bad
        m2(foo(42)); // ok
        m3(42); // ok
        m4(foo(42)); // ok
    }
}

We could in theory use the same hygiene information as for the previous kinds. But when checking unsafety we are checking expressions, not identifiers, and we only record hygiene info for identifiers.

One solution would be to track hygiene for all tokens, not just identifiers. That might not be too much effort since groups of tokens passed together would have the same hygiene info. We would only be duplicating indices into a table, not more data than that. We would also have to track or be able to calculate the safety-status of scopes.

Alternatively, we could introduce a new kind of block into the token tree system - a block which can't be written by the user, only created by expansion or procedural macros. It would affect precedence but not scoping. Such a block is also the solution to having interpolated AST in the token stream - we just have tokens wrapped in the scope-less block. Such a block could be annotated with its safety-status. We would need to track unsafety during parsing/expansion to make this work. We have something similar to this in the HIR where we can push/pop unsafe blocks. I believe we want an absolute setting here rather than push/pop though, and we also don't want to introduce new scoping.

We could follow the current stability solution and annotate spans, but this is a bit of an abuse of spans, IMO.

I'm not super-happy with any of these solutions.

Stability hygiene

Finally, stability. We would like for macros in libraries with access to unstable code to be able to access unstable code when expanded. This is currently supported in Rust by having a bool on spans. We can probably continue to use this system or adapt either of the solutions proposed for unsafety hygiene.

It would be nice for macros to be marked as stable and unstable, I believe this is orthogonal to hygiene though.

Mozilla Addons BlogAdd-ons Update – Week of 2015/11/25

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 758 add-ons were reviewed:

  • 602 (79%) were reviewed in less than 5 days.
  • 32 (4%) were reviewed between 5 and 10 days.
  • 124 (16%) were reviewed after more than 10 days.

There are 281 listed add-ons awaiting review, and 189 unlisted add-ons awaiting review. I should note that this is an unusually large number of unlisted add-ons, which is due to a mass uploading by a developer with 100+ add-ons.

Review times for most add-ons have improved recently due to more volunteer activity. Add-ons that are admin-flagged or very complex are now getting much-needed attention, thanks to a new contractor reviewer. There’s still a fairly large review backlog to go through.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 43 Compatibility

This compatibility blog post is now public. The bulk compatibility validation should be run soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Changes in let and const in Firefox 44

Firefox 44 includes some breaking changes that you should all be aware of. Please read the post carefully and test your add-ons on Nightly or the newest Developer Edition.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to turn on enforcement by default in Firefox 43.


Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with content code running in a separate process from browser code.

This is the time to test your add-ons and make sure they continue working in Firefox. We’re holding regular office hours to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Web Extensions

If you read the post on the future of add-on development, you should know there are big changes coming. We’re investing heavily in the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. You can track progress of its development in

Air MozillaQuality Team (QA) Public Meeting, 25 Nov 2015

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air MozillaBugzilla Development Meeting, 25 Nov 2015

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Chris H-CHow Mozilla Pays Me

When I told people I was leaving BlackBerry and going to work for Mozilla, the first question was often “Who?”

(“The Firefox people, ${familyMember}” “Oh, well why didn’t you say so”)

More often the first question (and almost always the second question for ${familyMember}) was “How do they make their money?”

When I was working for BlackBerry, it seemed fairly obvious: BlackBerry made its money selling BlackBerry devices. (Though obvious, this was actually incorrect, as the firm made its money more through services and servers than devices. But that’s another story.)

With Mozilla, there’s no clear thing that people’s minds can latch onto. There’s no doodad being sold for dollarbucks, there’s no subscriber fee, there’s no “professional edition” upsell…

Well, today the Mozilla Foundation released its State of Mozilla report including financials for calendar 2014. This ought to clear things up, right? Well…

The most relevant part of this would be page 6 of the audited financial statement which shows that, roughly speaking, Mozilla makes its money thusly (top three listed):

  • $323M – Royalties
  • $4.2M – Contributions (from fundraising efforts)
  • $1M – Interest and Dividends (from investments)

Where this gets cloudy is that “Royalties” line. The Mozilla Foundation is only allowed to accrue certain kinds of income since it is a non-profit.

Which is why I’m not employed by the Foundation but by Mozilla Corporation, the wholly-owned subsidiary of the Mozilla Foundation. MoCo is a taxable entity responsible for software development and stuff. As such, it can earn and spend like any other privately-held concern. It sends dollars back up the chain via that “Royalties” line because it needs to pay to license wordmarks, trademarks, and other intellectual property from the Foundation. It isn’t the only contributor to that line, I think, as I expect sales of plushie Firefoxen and tickets to MozFest factor in somehow.

So, in conclusion, rest assured, ${concernedPerson}: Mozilla Foundation has plenty of money coming in to pay my…

Well, yes, I did just say I was employed by Mozilla Corporation. So?

What do you mean where does the Corporation get its money?

Fine, fine, I was just going to gloss over this part and sway you with those big numbers and how MoCo and MoFo sound pretty similar… but I guess you’re too cunning for that.

Mozilla Corporation is not a publicly-traded corporation, so there are no public documents I can point you to for answers to that question. However, there was a semi-public statement back in 2006 that confirmed that the Corporation was earning within an order of magnitude of $76M in search-related partnership revenue.

It’s been nine years since then. The Internet has changed a lot since the year Google bought YouTube and MySpace was the primary social network of note. And our way of experiencing it has changed from sitting at a desk to having it in our pockets. Firefox has been downloaded over 100 million times on Android and topped some of the iTunes App Store charts after being released twelve days ago for iOS. If this sort of partnership is still active, and is somewhat proportional to Firefox’s reach, then it might just be a different number than “within an order of magnitude of $76M.”

So, ${concernedPerson}, I’m afraid there just isn’t any more information I can give you. Mozilla does its business, and seems to be doing it well. As such, it collects revenue which it has to filter through various taxes and regulation authorities at various levels which are completely opaque even when they’re transparent. From that, I collect a paycheque.

At the very least, take heart from the Contributions line. That money comes from people who like that Mozilla does good things for the Internet. So as long as we’re doing good things (and we have no plans to stop), there is a deep and growing level of support that should keep me from asking for money.

Though, now that you mention it…


Air MozillaThe Joy of Coding - Episode 36

The Joy of Coding - Episode 36 mconley livehacks on real Firefox bugs while thinking aloud.

Jan de MooijMaking `this` a real binding in SpiderMonkey

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How this Works In JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() { "use strict"; return this; }
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() { return this; }
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    return () => this; // `this` is 123
}
f.call(123)(); // 123

And, of course, this can be used inside eval:

function f() {
    "use strict";
    return eval("this"); // 123
}
f.call(123); // 123

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.
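In plain JS terms, the old strategy behaved roughly like evaluating `this` once when the arrow function is cloned and storing the result with the closure; a simplified stand-in (ordinary JS, not the engine's actual bytecode):

```javascript
// Simplified stand-in for the old strategy: evaluate `this` at the
// moment the "arrow" is created and store the result with the closure.
function makeArrow() {
    "use strict";
    var capturedThis = this;                      // what JSOP_THIS produced at clone time
    return function () { return capturedThis; }; // plays the role of `() => this`
}

var arrow = makeArrow.call(123);
console.log(arrow()); // 123
```

This is observably the same as lexical capture for ordinary functions; the problem described below only appears once `this` can change after the arrow is created.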

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that this scheme is fundamentally incompatible with ES6 derived class constructors, as they initialize their this value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support, because arrow functions and eval frames used their own (copied) this value instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call the Base constructor, initializing our `this` value.
        super();

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}
We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot, they just reference the .this variable of the outer function. For instance, consider the function below:

function f() {
    return () => this.foo();
}

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior; this made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).


We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

Mozilla Addons BlogA New Firefox Add-ons Validator

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on addons.mozilla.org (AMO) are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, these alerts help developers pinpoint the bits of code that might make the browser buggy or slow, among other problems. It also helps detect insecure add-on code, keeping your browsing fast and safe.

Our current validator is a bit old, and because it’s written in Python with JavaScript dependencies, our old validator is difficult for add-on developers to install themselves. This means add-on developers often don’t know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders, and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, so you can install it right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not production-quality code yet, and there are rules we haven’t implemented, but we’re looking to finish it by the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub at We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

Matjaž HorvatMeet Jarek, splendid Pontoon contributor

Some three months ago, a new guy named jotes showed up in the #pontoon IRC channel. It quickly became obvious he was running a local instance of Pontoon and was ready to start contributing code. Fast forward to the present: he is one of the core Pontoon contributors. In this short period of time, he has implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we need a more detailed approach with team managers manually granting permission to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended me a documentary called Code Rush. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with power to change the world. During one of the sleepless nights I visited and after a few patches I knew Mozilla is my place. A place where I can learn something new with help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

Emily DunhamGiving Thanks to Rust Contributors

Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:


The stats page is interactive and available at The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) so that Rust’s organization-wide contributor stats will be automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

Tarek ZiadéShould I use PYTHONOPTIMIZE ?

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

    try:
        assert hasattr(SomeObject, 'some_attribute')
    except AssertionError:
        ...  # fall back to some default behavior

That didn't strike me as a good idea, because relying on assert is fragile: when Python is launched using the PYTHONOPTIMIZE flag, which you can activate with the eponymous environment variable or with -O or -OO, all assertions are stripped from the code.
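For a check that must survive those flags, an explicit test raising a real exception is safer than assert; a minimal sketch, with SomeObject as a stand-in class:

```python
class SomeObject:
    some_attribute = 1

# Unlike `assert`, this check is never stripped by PYTHONOPTIMIZE:
if not hasattr(SomeObject, 'some_attribute'):
    raise AttributeError("SomeObject is missing some_attribute")
```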

To my surprise, a lot of people dismiss -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0 (the default), 1 (-O), and 2 (-OO). With 0, nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files are using the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2:

  • everything 1 does
  • docstrings are stripped.
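These effects are easy to observe; running a small script like this with no flag, with -O, and with -OO shows the values change:

```python
import sys

# Reports how the interpreter was started; compare the output of
# `python script.py`, `python -O script.py`, and `python -OO script.py`.
print("sys.flags.optimize:", sys.flags.optimize)  # 0, 1, or 2
print("__debug__:", __debug__)                    # False under -O/-OO

if __debug__:
    # This branch only runs when assertions have not been stripped.
    assert 1 + 1 == 2
```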

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that changed is related to pdb: you could not do step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the pyo vs pyc thing should go away one day, according to PEP 488.

So where does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information, or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself in the future, adds some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values of PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO, and without either, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.
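One way to "raise an early error" is a small guard, sketched here with a hypothetical name, that you would call at import time:

```python
import sys

def require_assertions():
    """Fail fast if this code's assert-based checks have been stripped."""
    if sys.flags.optimize >= 1:
        raise RuntimeError(
            "this module relies on assert statements; "
            "run it without -O/-OO (PYTHONOPTIMIZE)"
        )

require_assertions()  # no-op in the default mode, raises under -O/-OO
```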

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

James LongA Simple Way to Route with Redux

This post took months to write. I wasn't working on it consistently, but every time I made progress something would happen that made me scratch everything. It started off as an explanation of how I integrated react-router 0.13 into my app. Now I'm going to talk about how redux-simple-router came to be and explain the philosophy behind it.

Redux embraces a single atom app state to represent all the state for your UI. This has many benefits, the biggest of which is that pieces of state are always consistent with each other. If we update the tree immutably, it's very easy to make atomic updates to the state and keep everything consistent (as opposed to mutating individual pieces of state over time).

Conceptually, the UI is derived from this app state. Everything needed to render the UI is contained in this state, and this is powerful because you can inspect/snapshot/replay the entire UI just by targeting the app state.

But it gets awkward when you want to work with other libraries, like react-router, that want to take part in state management. react-router is a powerful library for component-based routing; it inherently manages the routing state to provide the user with powerful APIs that handle everything gracefully.

So what do we do? We could use react-router and redux side-by-side, but then the app state object does not contain everything needed for the UI. Snapshotting, replaying, and all that is broken.

One option is to try to take control over all the router state and proxy everything back to react-router. This is what redux-router attempts to do, but it's very complicated and prone to bugs. react-router may put unserializable state in the tree, thus still breaking snapshotting and other useful features.

After integrating redux and react-router in my site, I extracted my solution to a new project: redux-simple-router. The goal is simple: let react-router do all the work. They have already developed very elegant APIs for implementing routing components, and you should just use them.

If you use the regular react-router APIs, how does it work? How does the app state object know anything about routing? Simple: we already have a serialized form of all the react-router state: the URL. All we have to do is store the URL in the app state and keep it in sync with react-router, and the app state has everything it needs to render the UI.

People think that the app state object has to have everything, but it doesn't. It just has to have the primary state; anything that can be deduced can live outside of redux.
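As a minimal sketch (illustrative names, not redux-simple-router's actual code), the only routing state the reducer needs to own is the serialized URL:

```javascript
// The app state owns just the URL; everything else about routing can be
// re-derived by react-router from this one string.
function routing(state = { path: '/' }, action) {
    if (action.type === 'UPDATE_PATH') {
        return Object.assign({}, state, { path: action.path });
    }
    return state;
}

let state = routing(undefined, {});
console.log(state.path); // '/'
state = routing(state, { type: 'UPDATE_PATH', path: '/foo' });
console.log(state.path); // '/foo'
```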

Above, the blue thing is serializable dumb app state, and the green things are unserializable programs that exist in memory. As long as you can recreate the green things above when loading up an app state, you're fine. And you can easily do this with react-router by just initializing it with the URL from the app state.

Since launching it, a bunch of people have already helped improve it in many ways, and a lot of people seem to be finding it useful. Thank you for providing feedback and contributing patches!

Just use react-router

The brilliant thing about just tracking the URL is that it takes almost no code at all. redux-simple-router is only 87 lines of code and it's easy to understand what's going on. You already have a lot of concepts to juggle (react, redux, react-router, etc); you shouldn't have to learn another large abstraction.

Everything you want to do can be done with react-router directly. A lot of people coming from redux-router seem to be surprised by this. Some people don't understand the following:

  • Routing components have all the information you need as properties. See the docs; the current location, params, and more are all there for you to use.
  • You can block route transitions with listenBefore.
  • You can inject code to run when a routing component is created with createElement, if you want to do stuff like automatically start loading data.

We should invest in the react-router community and figure out the right patterns for everybody using it, not just people using redux. We also get to use new react-router features immediately.

The only additional thing redux-simple-router provides is a way to change the URL with the updatePath action creator. The reason is that it's a very common use case to update the URL inside of an action creator; you might want to redirect the user to another page depending on the result of an async request, for example. You don't have access to the history object there.

You shouldn't really even be selecting the path state from the redux-simple-router state; try to only make top-level routing components actually depend on the URL.

So how does it work?

You can skip this section if you aren't interested in the nitty-gritty details. We use a pretty clever hack to simplify the syncing though, so I wanted to write about it!

You call syncReduxAndRouter with history and store objects and it will keep them in sync. It does this by listening to history changes with history.listen and state changes with store.subscribe and telling each other when something changes.

It's a little tricky because each listener needs to know when to "stop." If the app state changes, it needs to call history.pushState, but the history listener should see that it's up-to-date and not do anything. When it's the other way around, the history listener needs to call store.dispatch to update the path but the store listener should see that nothing has changed.

First, let's talk about history. How can we tell if anything has changed? We get the new location object so we just stringify it into a URL and then compare it with the URL in the app state. If it's the same, we do nothing. Pretty easy!
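The history-side check can be sketched like this (a simplified locationToUrl; assume the real code handles encoding details):

```javascript
// Serialize a history location object back into a URL string so it can
// be compared against the URL stored in the app state.
function locationToUrl(location) {
    return location.pathname + (location.search || '') + (location.hash || '');
}

const appStateUrl = '/foo?bar=1';
const newLocation = { pathname: '/foo', search: '?bar=1', hash: '' };

// Same URL as the app state: the history listener does nothing.
console.log(locationToUrl(newLocation) === appStateUrl); // true
```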

Detecting app state changes is a little harder. In previous versions, we were comparing the URL from state with the current location's URL. But this caused tons of problems. For example, if the user has installed a listenBefore hook, it will be invoked from the pushState call in the store subscriber (because the app state URL is different from the current URL). The user might dispatch actions in listenBefore and update other state though, and since we are subscribed to the whole store, our listener will run again. At this point the URL has not been updated yet so we will call pushState again, and the listenBefore hook will be called again, causing an infinite loop.

Even if we could somehow only trigger pushState calls when the URL app state changes, this is not semantically correct. Every single time the user tries to change the URL, we should always call pushState even if the URL is the same as the current one. This is how browsers work; think of clicking on a link to "/foo" even though "/foo" is the current URL: what happens?

In redux, reducers are pure so we cannot call pushState there. We could do it in a middleware (which is what redux-router does) but I really don't want to force people to install a middleware just for this. We could do it in the action creator, but that seems like the wrong time: reducers may respond to the UPDATE_PATH action and update some state, so we shouldn't rerender routing components until after reducing.

I came up with a clever hack: just use an id in the routing state and increment it whenever we want to trigger a pushState! This has drastically simplified everything, made it far more robust, and even better made testing really easy because we can just check that the changeId field is the right number.

We just have to keep track of the last changeId we've seen and compare it in the store subscriber. This means there's always a 1:1 relationship between updatePath action creator calls and pushState calls, no matter what. Try any transition logic you want; it should work!
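Here is the trick sketched with illustrative names, where the pushed array stands in for history.pushState:

```javascript
// The store subscriber pushes only when it sees a changeId it hasn't
// handled yet, so every updatePath call maps 1:1 to a pushState call,
// even when the URL itself doesn't change.
let lastChangeId = 0;
const pushed = [];

function storeSubscriber(routingState) {
    if (routingState.changeId > lastChangeId) {
        lastChangeId = routingState.changeId;
        pushed.push(routingState.path); // stand-in for history.pushState
    }
}

storeSubscriber({ changeId: 1, path: '/foo' });
storeSubscriber({ changeId: 1, path: '/foo' }); // already handled: no push
storeSubscriber({ changeId: 2, path: '/foo' }); // same URL, new id: push again
console.log(pushed.length); // 2
```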

It also simplifies how changes flow from the router to redux: the router calls the updatePath action creator with an avoidRouterUpdate flag, and all we have to do in the reducer is not increment changeId, so we won't call back into the router.

I think my favorite side effect of this technique is testing. Look at the tests and you'll see I can compare a bunch of changeIds to make sure that the right number of pushState calls are being made.

More Complex Examples of react-router

Originally I was going to walk through how I used react-router for complex use cases like server-side rendering. This post is already too long to go into details, and I don't have time to write another post, so I will leave you with a few points that will help you dig into the code to see how it works:

  • There's no problem making a component both a redux "connected" component and a route component. Here I'm exporting a connected Drafts page that will be installed in the router. That means the component can both select from state and be controlled by the router.
  • I perform data fetching by specifying a static populateStore function. On the client, the router will call this in createElement (seen here), and the backend can prepopulate the store by iterating over all route components and calling this method. The action creators are responsible for checking whether the data is already loaded and not re-fetching on the frontend if it's already there (example).
  • The server uses the lower-level match API (seen here) to get the current route. This gives us the flexibility to control everything. We store the current HTTP status in redux (like a 500) so that components can change it. For example, the Post component can set a 404 code if the post isn't found. The server then sends the page with the right HTTP status code.
  • This also means the top-level App component can inspect the status code to see if it should display a special 404 or 500 page.
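As a rough illustration of the status-code idea from the last two bullets, here is a hedged sketch. The action and reducer names (SET_STATUS, statusReducer) are made up for this example, and the express-style server code is abbreviated; only react-router's match API is taken from the post.

```javascript
// Hypothetical names for illustration: SET_STATUS / statusReducer are not
// from redux-simple-router; they sketch the "store the HTTP status in
// redux" idea.

function setStatus(status) {
  return { type: 'SET_STATUS', status };
}

// Defaults to 200; any component (e.g. a Post page that can't find its
// post) can dispatch setStatus(404) while rendering on the server.
function statusReducer(state = 200, action) {
  return action.type === 'SET_STATUS' ? action.status : state;
}

// Server side (express-style, abbreviated): react-router's lower-level
// match API picks the route, we render, then read the final status back
// out of the store before responding.
//
// match({ routes, location: req.url }, (err, redirect, renderProps) => {
//   const html = renderToString(/* routing components + store */);
//   res.status(store.getState().status).send(html);
// });
```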

I really like how the react-router 1.0 API turned out. The idea seems to be: use the low-level APIs on the server so that you can control everything, while the client can simply render a Router component to handle state automatically. The two environments are different enough that this works great.

That's It

It's my goal to research ideas and present them in a way that helps other people. In this case a cool project, redux-simple-router, came out of it. I hope this post explains the reasoning behind it, and that the above links help show more complicated examples of using it.

We are working on porting react-redux-universal-hot-example to redux-simple-router, so that will be another example of all kinds of uses. We're really close to finishing it, and you can follow along in this issue.

I'm also going to add more examples in the repo itself. But the goal is that you should be able to just read react-router's docs and do whatever they tell you to do.

Lastly, the folks working on redux-router have put in a lot of good work and I don't mean to diminish that. I think it's healthy for multiple approaches to exist and everyone can learn something from each one.

Nick CameronMacro plans, overview

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • changes to the hygiene algorithm, and to what hygiene is applied,
  • addressing modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros


I intend to tweak the system of traits, enums, etc. to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately: I believe function-like and attribute-like macros will take different arguments, and I think we also need versions for hygienic expansion and for expansion with DIY hygiene, where the latter must be supplied with some hygiene information so that the function can do its own hygiene. I'm not certain that is the right approach, though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us to store hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the ecosystem comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable, so a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros; I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way which they no longer do, causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. I don't think this is possible to prevent: in the most pathological case, the macro could check whether the current date is later than a given one and in that case panic. So, we are basically passing the buck on backwards compatibility with the language to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.


I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro), and more functionality is provided by the ecosystem via external crates.

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • resolving module and macro names; note that I expect these to return token trees, which gives a macro access to the whole program. I'm not sure this is a good idea since it breaks locality for macros
  • checking and setting feature gates
  • marking attributes and imports as used

The most important external libraries I would like to see would provide an AST-like abstraction, parsing, and tools for building and manipulating the AST. These already exist (syntex, aster), so I am confident we can have good solutions in this space, working towards crates which are provided on crates.io but are officially blessed (similar to the goals for other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, with a second step to produce AST. Since hygiene info will operate at the token level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See the section on modularisation below; the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.

I expect that functions expected to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between the macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.


Hygiene

I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.


Modularisation

The goal here is to treat macros the same way as other items, naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets of scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.


Syntactic changes

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros, see blog post.

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => { ... }

Perhaps we drop the => too.

We need to allow privacy annotations for macros; I'm not sure of the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable, so there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo the usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!. If you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try to help anyone who is affected). We will deprecate macro_rules! once macro! is stable, and track usage with the intention of removing macro_rules! at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!; this is a breaking change, which we will mitigate by continuing to support the old but deprecated versions of the macros. Hopefully, modularisation will support this (it needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only difference will be using macro! instead of macro_rules! and the new scoping/naming rules may lead to name clashes that didn't exist before (note this is not in itself a breaking change, it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.


There are two backwards compatibility hazards with hygiene, both affect only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types etc, by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

Air MozillaPrivacy for Normal People

Privacy for Normal People Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

Armen ZambranoWelcome F3real, xenny and MikeLing!

As described by jmaher, this week we started the first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you get to learn and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This would not be possible without her help.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen ZambranoMozilla CI tools meet up

In order to help the contributors of mozci's quarter of contribution, we have set up a mozci meet up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute, or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room:


Air MozillaMartes mozilleros, 24 Nov 2015

Martes mozilleros: a bi-weekly meeting to talk about the state of Mozilla, the community, and its projects.

Kim MoirUSENIX Release Engineering Summit 2015 recap

November 13th, I attended the USENIX Release Engineering Summit in Washington, DC. This summit was alongside the larger LISA conference at the same venue. Thanks to Dinah McNutt, Gareth Bowles, Chris Cooper, Dan Tehranian and John O'Duinn for organizing.

I gave two talks at the summit.  One was a long talk on how we have scaled our Android testing infrastructure on AWS, as well as a look back at how it evolved over the years.

Picture by Tim Norris - Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Scaling mobile testing on AWS: Emulators all the way down from Kim Moir

I gave a second lightning talk in the afternoon on the problems we face with our large distributed continuous integration, build and release pipeline, and how we are working to address the issues. The theme of this talk was that managing a large distributed system is like being the caretaker for the water system, or some days the sewer system, for a city. We are constantly looking for system leaks and implementing system monitoring. And we will probably have to replace it with something new while keeping the existing one running.

Picture by Korona Lacasse - Creative Commons 2.0 Attribution 2.0 Generic

In preparation for this talk, I did a lot of reading on complex systems design and designing for recovery from failure in distributed systems. In particular, I read Donella Meadows' book Thinking in Systems. (Cate Huston reviewed the book here). I also watched several talks by people who talked about the challenges they face managing their distributed systems, including the following:
I'd also like to thank all the members of Mozilla releng/ateam who reviewed my slides and provided feedback before I gave the presentations.
The attendees of the summit joined the same keynote as the LISA attendees. Jez Humble, well known for his Continuous Delivery and Lean Enterprise books, provided a keynote on Lean Configuration Management which I really enjoyed. (Older versions of the slides, from other conferences, are available here and here.)

In particular, I enjoyed his discussion of the cultural aspects of devops. I especially liked that he stated that "You should not have to have planned downtime or people working outside business hours to release". He also talked a bit about how many of the leaders who are looked up to as visionaries in the tech industry are known for not treating people very well, and that this is not a good example to set for others who believe this to be the key to their success. For instance, he said something like "what more could Steve Jobs have accomplished had he treated his employees less harshly?"

Another concept he discussed which I found interesting was that of the strangler application. When moving away from a large monolithic application, the goal is to split out the existing functionality into services until the original application is left with nothing. This is exactly what Mozilla releng is doing as we migrate from Buildbot to Taskcluster.

At the release engineering summit itself, Lukas Blakk from Pinterest gave a fantastic talk, Stop Releasing off Your Laptop—Implementing a Mobile App Release Management Process from Scratch in a Startup or Small Company. This included a grumpy cat picture to depict how Lukas thought the rest of the company felt when a more structured release process was implemented.

Lukas also included a timeline of the tasks she implemented in her first six months working at Pinterest. Very impressive to see the transition!

Another talk I enjoyed was Chaos Patterns - Architecting for Failure in Distributed Systems by Jos Boumans of Krux. (Similar slides from an earlier conference are here.) He talked about some high-profile distributed systems that failed and how chaos engineering can help illuminate these issues before they hit you in production.

For instance, it is impossible for Netflix to model their entire system outside of production, given that they consume around one third of nightly downstream bandwidth in the US.

Evan Willey and Dave Liebreich from Pivotal Cloud Foundry gave a talk entitled "Pivotal Cloud Foundry Release Engineering: Moving Integration Upstream Where It Belongs". I found this talk interesting because they talked about how they built Concourse, a CI system that is more scalable and natively builds pipelines. Travis and Jenkins are good for small projects, but they simply don't scale for large numbers of commits, platforms to test, or complicated pipelines. We followed a similar path that led us to develop Taskcluster.

There were many more great talks; hopefully more slides will be up soon!

Henrik SkupinSurvey about sharing information inside the Firefox Automation team

Within the Firefox Automation team we have been struggling a bit to share information about our work over the last couple of months. That mainly happened because I was alone and not able to blog more often than once a quarter. The same applies to our dev-automation mailing list, which mostly only received emails from Travis CI with testing results.

Given that the team has now grown to 4 people (besides me, that's Maja Frydrychowicz, Syd Polk, and David Burns), we want to be more open again and also try to get more people involved in our projects. To ensure that we do not use the wrong communication channels – depending on where most of our readers are – I have set up a little survey. It will only take you a minute to go through, but it will help us a lot to know more about the preferences of our automation geeks. So please take that little bit of time and help us.

The survey can be found here and is open until end of November 2015:

Thank you a lot!

Nick CameronMacros pt6 - more issues

I discovered another couple of issues with Rust macros (both affect the macro_rules flavour).

Nested macros and arguments

These don't work because of the way macros do substitution. When expanding a macro, the expander looks for token strings starting with $ to expand. If there is a variable which is not bound by the outer macro, then it is an error. E.g.,

macro_rules! foo {
    () => {
        macro_rules! bar {
            ($x: ident) => { $x }
        }
    }
}
When we try to expand foo!(), the expander errors out because it can't find a value for $x; it doesn't know that the inner macro_rules! bar is binding $x.

The proper solution here is to make macros aware of binding and lexical scoping etc. However, I'm not sure that is possible because macros are not parsed until after expansion. We might be able to fix this by just being less eager to report these errors. We wouldn't get proper lexical scoping, i.e., all macro variables would need to have different names, but at least the easy cases would work.

Matching expression fragments


macro_rules! foo {
    ( if $e:expr { $s:stmt } ) => {
        if $e {
            $s
        }
    }
}

fn main() {
    let x = 1;
    foo! {
        if 0 < x {
            let y = x;
        }
    }
}
This gives an error because it tries to parse x { as the start of a struct literal. We have a hack in the parser where in some contexts where we parse an expression, we explicitly forbid struct literals from appearing so that we can correctly parse a following block. This is not usually apparent, but in this case, where the macro expects an expr, what we'd like to have is 'an expression but not a struct literal'. However, exposing this level of detail about the parser implementation to macro authors (not even procedural macro authors!) feels bad. Not sure how to tackle this one.

Relatedly, it would be nice to be able to match other fragments of the AST, for example the interior of a block. Again, there is the issue of how much of the internals we wish to expose.

(HT @bmastenbrook for the second issue).

Chris FinkeReenact Now Available for Android

I’ve increased the audience for Reenact (an app for reenacting photos) by 100,000% by porting it from Firefox OS to Android.


It took me about ten evenings to go from “I don’t even know what language Android apps are written in” to submitting the .apk to the Google Play™ store. I’d like to thank Stack Overflow, the Android developer docs, and Android Studio’s autocomplete.

Reenact for Android, like Reenact for Firefox OS, is open-source; the complete source for both apps is available on GitHub. Also like the Firefox OS app, Reenact for Android is free and ad-free. Just think: if even just 10% of all 1 billion Android users install Reenact, I’d have $0!

In addition to making Reenact available on Android, I’ve launched, a home for the app. If you try out Reenact, send your photo to to get it included in the photo gallery on

You can install Reenact on Google Play or directly from Try it out and let me know how it works on your device!

Mozilla Security BlogImproving Revocation: OCSP Must-Staple and Short-lived Certificates

Last year, we laid out a long-range plan for improving revocation support for Firefox. As of this week, we’ve completed most of the major elements of that plan. After adding OneCRL earlier this year, we have recently added support for OCSP Must-Staple and short-lived certificates. Together, these changes give website owners several ways to achieve fast, secure certificate revocation.

In an ideal world, the browser would perform an online status check (such as OCSP) whenever it verifies a certificate, and reject the certificate if the check failed. However, these checks can be slow and unreliable. They time out about 15% of the time, and take about 350ms even when they succeed. Browsers generally soft-fail on revocation in an attempt to balance these concerns.

To get back to stronger revocation checking, we have added support for short-lived certificates and Must-Staple to let sites opt in to hard failures. As of Firefox 41, Firefox will not do “live” OCSP queries for sufficiently short-lived certs (with a lifetime shorter than the value set in “security.pki.cert_short_lifetime_in_days”). Instead, Firefox will just assume the certificate is valid. There is currently no default threshold set, so users need to configure it. We are collecting telemetry on certificate lifetimes, and expect to set the threshold somewhere around the maximum OCSP response lifetime specified in the baseline requirements.

OCSP Must-Staple makes use of the recently specified TLS Feature Extension. When a CA adds this extension to a certificate, it requires your browser to ensure a stapled OCSP response is present in the TLS handshake. If an OCSP response is not present, the connection will fail and Firefox will display a non-overridable error page. This feature will be included in Firefox 45, currently scheduled to be released in March 2016.

Mozilla Addons BlogTest your add-ons for Multi-process Firefox compatibility

You might have heard the news that future versions of Firefox will run the browser UI separately from web content. This is called Multi-process Firefox (also “Electrolysis” or “e10s”), and it is scheduled for release in the first quarter of 2016.

If your add-on code accesses web content directly, using an overlay extension, a bootstrapped extension, or low-level SDK APIs like window/utils or tabs/utils, then you will probably be affected.

To minimize the impact on users of your add-ons, we are urging you to test your add-ons for compatibility. You can find documentation on how to make them compatible here.

Starting Nov. 24, 2015, we are available to assist you every Tuesday in the #addons channel on irc.mozilla.org. Click here to see the schedule. Whether you need help testing or making your add-ons compatible, we’re here to help!

Emily DunhamPSA: Docker on Ubuntu

PSA: Docker on Ubuntu

$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io package.

$ sudo apt-get install docker.io
$ which docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)

Daniel Stenbergcopy as curl

Using curl to perform an operation a user just managed to do with his or her browser is one of the more common things people ask for help about.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

You get the site shown with Firefox’s network tools. Then, in the “Web Developer->Network” tool, when you see the HTTP traffic, you right-click on the specific request you want to repeat, and in the menu that appears you select “Copy as cURL”, like the screenshot below shows. The operation then copies a generated curl command line to your clipboard, and you can paste that into your favorite shell window. This feature is available by default in all Firefox installations.


From Chrome

When you pop up the More tools->Developer mode in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you’re interested in, you right-click with the mouse and select “Copy as cURL”, and it’ll generate a command line for you in your clipboard. Paste that in a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.


On Firefox, without using the devtools

If this is something you’d like to get done more often, you probably find using the developer tools a bit inconvenient and cumbersome to pop up just to get the command line copied. Then cliget is the perfect add-on for you as it gives you a new option in the right-click menu, so you can get a quick command line generated really quickly, like this example when I right-click an image in Firefox:


This Week In RustThis Week in Rust 106

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • nom 1.0 is released.
  • Freepass. The free password manager for power users.
  • Barcoders. A barcode encoding library for the Rust programming language.
  • fst. Fast implementation of ordered sets and maps using finite state machines.
  • Rusty Code. Advanced language support for the Rust language in Visual Studio Code.
  • Dybuk. Prettify the ugly Rustc messages (inspired by Elm).
  • Substudy. Use SRT subtitle files to study foreign languages.

Updates from Rust Core

99 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Alexander Bulaev
  • Ashkan Kiani
  • Devon Hollowood
  • Doug Goldstein
  • Jean Maillard
  • Joshua Holmer
  • Matthias Kauer
  • Ole Krüger
  • Ravi Shankar

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is nom, a library of fast zero-copy parser combinators, which has already been used to create safe, high-performance parsers for a number of formats, both binary and textual. nom just reached version 1.0, too, so congratulations on both the major version and the CotW status!

Thanks to Reddit user gbersac for the nom-ination! Submit your suggestions for next week!

Mark Finkle: An Engineer’s Guide to App Metrics

Building and shipping a successful product takes more than raw engineering. I have been posting a bit about using Telemetry to learn about how people interact with your application so you can optimize use cases. There are other types of data you should consider too. Being aware of these metrics can help provide a better focus for your work and, hopefully, have a bigger impact on the success of your product.

Active Users

This includes daily active users (DAUs) and monthly active users (MAUs). How many people are actively using the product within a time-span? At Mozilla, we’ve been using these for a long time. From what I’ve read, these metrics seem less important than some of the others, but they do provide a somewhat easy-to-measure indicator of activity.

These metrics don’t give a good indication of how much people use the product, though. I have seen a variation called DAU/MAU (daily divided by monthly), which gives something like retention or engagement. DAU/MAU rates of 50% are seen as very good.
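To make the ratio concrete, here is a minimal sketch of the DAU/MAU "stickiness" calculation, using made-up numbers:

```python
# Hypothetical counts, for illustration only.
dau = 120_000  # distinct users active today
mau = 300_000  # distinct users active over the last 30 days

# DAU/MAU "stickiness": what fraction of the monthly audience shows up daily.
stickiness = dau / mau
print(f"DAU/MAU: {stickiness:.0%}")  # DAU/MAU: 40%
```

By the guidance above, 40% is approaching the "very good" 50% mark.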


Engagement

This metric focuses on how much people really use the product, typically tracking the duration of session length or time spent using the application. The amount of time people spend in the product is an indication of stickiness. Engagement can also help increase retention. Mozilla collects data on session length now, but we need to start associating metrics like this with some of our experiments to see if certain features improve stickiness and keep people using the application.

We look for differences across various facets like locales and releases, and hopefully soon, across A/B experiments.

Retention / Churn

Based on what I’ve seen, this is the most important category of metrics. There are variations in how these metrics can be defined, but they cover the same goal: Keep users coming back to use your product. Again, looking across facets, like locales, can provide deeper insight.

  • Rolling Retention: % of new users who return in the next day, week, month
  • Fixed Retention: % of this week’s new users still engaged with the product over successive weeks.
  • Churn: % of users who leave divided by the total number of users

Most analysis tools, like iTunes Connect and Google Analytics, use Fixed Retention. Mozilla uses Fixed Retention with our internal tools.

I found some nominal guidance (grain of salt required):
1-week churn: 80% bad, 40% good, 20% phenomenal
1-week retention: 25% baseline, 45% good, 65% great
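As a worked example of the difference between fixed retention and churn, here is a short sketch using hypothetical cohort numbers:

```python
# Hypothetical week-0 cohort, for illustration only.
cohort_size = 1000                      # users who installed in week 0
active_by_week = [1000, 420, 310, 250]  # of that cohort, still active each week

# Fixed retention: share of the original cohort still engaged in week N.
fixed_retention = [active / cohort_size for active in active_by_week]
print(fixed_retention[1])  # 0.42 -- "good" by the 1-week guidance above

# Churn: share of users who left between two consecutive weeks.
week1_churn = (active_by_week[0] - active_by_week[1]) / active_by_week[0]
print(week1_churn)  # 0.58
```

Note the two views complement each other: 42% 1-week retention and 58% 1-week churn describe the same cohort.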

Cost per Install (CPI)

I have also seen this called Customer Acquisition Cost (CAC), but it’s basically the cost (mostly marketing or pay-to-play pre-installs) of getting a person to install a product. I have seen this in two forms: blended – where ‘installs’ are both organic and from campaigns, and paid – where ‘installs’ are only those that come from campaigns. It seems like paid CPI is the better metric.

Lower CPI is better and Mozilla has been using Adjust with various ad networks and marketing campaigns to figure out the right channel and the right messaging to get Firefox the most installs for the lowest cost.

Lifetime Value (LTV)

I’ve seen this defined as the total value of a customer over the life of that customer’s relationship with the company. It helps determine the long-term value of the customer and can help provide a target for reasonable CPI. It’s weird thinking of “customers” and “value” when talking about people who use Firefox, but we do spend money developing and marketing Firefox. We also get revenue, maybe indirectly, from those people.

LTV works hand-in-hand with churn, since the length of the relationship is inversely proportional to the churn. The longer we keep a person using Firefox, the higher the LTV. If CPI is higher than LTV, we are losing money on user acquisition efforts.
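A common back-of-the-envelope model (an assumption here, not something Mozilla necessarily uses) treats average customer lifetime as the inverse of churn, which makes the LTV vs CPI comparison easy to sketch:

```python
# Hypothetical per-user economics, for illustration only.
revenue_per_user_per_month = 0.50  # average revenue per user per month
monthly_churn = 0.10               # 10% of users leave each month

# If churn is roughly constant, expected lifetime ~= 1 / churn,
# so LTV ~= revenue per period / churn per period.
expected_lifetime_months = 1 / monthly_churn
ltv = revenue_per_user_per_month / monthly_churn

cpi = 2.00  # hypothetical cost per install
print(ltv, ltv > cpi)  # an LTV of 5.0 exceeds CPI, so acquisition spend pays off
```

Halving churn in this model doubles both the expected lifetime and the LTV, which is why retention is so often called the most important metric.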

Total Addressable Market (TAM)

We use this metric to describe the size of a potential opportunity. Obviously, the bigger the TAM, the better. For example, we feel the TAM (People with kids that use Android tablets) for Family Friendly Browsing is large enough to justify doing the work to ship the feature.

Net Promoter Score (NPS)

We have seen this come up in some surveys and user research. It’s supposed to show how satisfied your customers are with your product. This metric has its detractors, though. Many people consider it a poor measure, but it’s still used quite a lot.

NPS can be as low as -100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (higher than zero) is felt to be good, and an NPS of +50 is excellent.
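The arithmetic behind the score is simple: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A tiny sketch with made-up survey data:

```python
# Hypothetical 0-10 survey responses, for illustration only.
scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5]

promoters = sum(1 for s in scores if s >= 9)   # rated 9 or 10
detractors = sum(1 for s in scores if s <= 6)  # rated 0 through 6
nps = 100 * (promoters - detractors) / len(scores)
print(nps)  # 5 promoters, 3 detractors out of 10 -> 20.0
```

Respondents rating 7 or 8 are "passives": they count in the denominator but neither add to nor subtract from the score.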

Go Forth!

If you don’t track any of these metrics for your applications, you should. There are a lot of off-the-shelf tools to help get you started. Level-up your engineering game and make a bigger impact on the success of your application at the same time.

Cameron Kaiser: TenFourFoxBox: because it's time to think inside the (fox)box. (a/k/a: we dust off Mozilla Prism for a new generation)

As long as there have been web browsers, there have been people trying to get the web freed up from the browser that confines it, because, you know, the Web wants to be free, or some other similarly aspirational throwaway platitude. These could be robots, or screen scrapers, or aggregating services, or chromeless viewers, but no matter what these browserless browsers are doing, they all tend to specialize in a particular site for any number of reasons, usually revolving around business or convenience. This last type, the chromeless viewer, spawned the subcategory of "site specific browsers" that morphed into the "Rich Internet Application" and today infects our phones and tablets in the guise of the "lazy-*ss programmer mobile app."

Power Mac users have only had access to a few tools that could generate site-specific browsers. Until Adobe withdrew support, Adobe AIR could run on PowerPC 10.4+, but it was more for generally Internet-enabled apps and wasn't specifically focused at creating site-specific browsers, though it could, with a little work. Leopard users could use early betas of Fluid before that went Intel-only, and I know a few of you still do. Even Mozilla themselves got into the act with Mark Finkle's WebRunner, which became Mozilla Prism in 2007, languished after a few releases, got moved to Salsita and renamed WebRunner again in 2011, and cancelled there as well around the time of Firefox 5. However, WebRunner née Prism née WebRunner was never available for Power Macs; its required binary components were Intel-only, even though the Mozilla releases could run on 10.4, so that was about it for PowerPC. (Mozilla tried again shortly afterward with Chromeless, but this didn't get off the ground either, and was never intended as a Prism successor in any case. Speaking of, Google Chrome can do something similar, but Chrome was of course never released for Power Macs either because Alphagooglebet are meaniepants.)

There are unique advantages for TenFourFox users in having separate apps that only handle one site at a time. Lots of tabs require lots of garbage collection, the efficiency of which Mozilla has improved substantially, but it is still a big drain on old computers like ours, which are always under memory pressure. In addition, currently Firefox and TenFourFox must essentially cooperatively multitask between tabs because JavaScript infamously has run-to-completion semantics, which is why you get the "script too long" dialogue box if the watchdog portion of the browser detects something's pegging it. Since major portions of the browser itself are written in JavaScript, plus all those addons you tart it up with, the browser chrome must also cooperatively multitask with everything else, which is why it sometimes temporarily grinds to a halt. I've sunk an incredible amount of time over TenFourFox's existence into our just-in-time JavaScript compiler for PowerPC to reduce this overhead, but that only gets us so far, and the typical scripts on popular websites aren't getting any less complex. Mozilla intends to solve this problem (and others) with multi-process Firefox, also known as Electrolysis, but it won't work without significant effort on 10.4 and I have grave doubts about its ability to perform well on these older computers; for that reason, I've chosen not to support it.

However, generating standalone browser apps for your common sites helps to mitigate both these problems. While each instance of the standalone browser uses more memory than a browser tab, with only one site in it garbage collection is much easier to accomplish (and therefore faster), and the memory is instantly reclaimed when the standalone browser terminates. In fact, on G5 systems with more than 2GB of RAM, it helps you actually use that extra memory more effectively: while TenFourFox is a 32-bit application (being a hybrid of Carbon and Cocoa), you'd be running multiple instances of it, all of which have their own 32-bit address space which can be located in that extra RAM you've got on board. Also, separate browser instances become ... multiple processes. That means they preemptively multitask, like Electrolysis content processes would. They could even be scheduled on a different core on multiprocessor Power Macs. That improves their responsiveness substantially, to say nothing of the fact that the substantially reduced amount of browser chrome has dramatically less overhead. Now, standalone browsers also have disadvantages; they lack a lot of the features of a regular browser, including safety features, and they can be more difficult to navigate in because of the reduced interface. But for many sites those are acceptable tradeoffs.

So, without further ado, let's introduce TenFourFoxBox.

TenFourFoxBox is an application that generates site-specific browsers ("foxboxes") for you, running them in private instances of TenFourFox (a la XULRunner). This has been one of my secret internal projects since I got Amazon Music working properly with TenFourFox; I wanted to use it as a jukebox without dragging down the rest of the browser, and to help beef up the performance of my online coursework site, which has a rather heavy implementation and depends greatly on Google Docs and Box. And now you'll get to play with it as well.

Although TenFourFoxBox borrows some code from Prism/WebRunner, mostly the reduced browser chrome, in actual operation it functions somewhat differently. First, TenFourFoxBox isn't itself written in XUL; it's a "native" OS X application that just happens to generate XUL-based applications. Second, for webapps created with Prism (or its companion tool Refractor), it's Prism itself that actually does the running with its own embedded copy of the XUL framework, not Firefox. With TenFourFoxBox, however, foxboxes you create actually run using the copy of TenFourFox you have installed (and yes, the foxboxes will look for and run the correct version for your architecture), just as separate processes, with their own browser chrome and their own application support and cache directory independent of the main browser. The nice thing about that is when you upgrade TenFourFox, you upgrade the browser core in every foxbox on your system all at once, as well as your main browser, because TenFourFox is your main browser, amirite?

The implementation in TenFourFoxBox is also a little different with respect to how data is stored. Foxboxes are driven essentially as independent XULRunner apps, so they have their own storage separate from the browser. Prism allowed this space to be shared, but I don't think that was a good idea, so not only are all foxboxes independent, but by default they operate effectively in "private browsing" mode and clear out cookies and other site data when they quit. By default they also disable autocomplete, improving both privacy and a little bit of performance; you can, of course, change these settings, and override checks sites might do which could detect you're not actually in a regular browser. I also decided to keep a constant unchanging title (regardless of the website you're viewing) so that you can more easily identify it in Exposé.

So, let's see it in action. Here's Bing Maps, in full 1080p on the quad G5, looking for drone landing sites.

And here's what I originally wrote this for, Amazon Music, playing the more or less official album of International Space Year:

(Stupid Amazon. I already have Flood and Junta!)

So now it's time to get this ready for the masses, and what better way than to have you slavering lot mercilessly bang on it? The following bugs/deficiencies are known:

  • The application menu only has "Quit." This is actually Mozilla bug 1181977, and will be fixed in TenFourFox 38.5, after which all the foxboxes will "fix themselves."
  • Localization isn't supported yet, even if you have a localized TenFourFox; most things will still appear in English. It's certainly possible to do, just non-trivial because of TenFourFoxBox's dual nature (we have to localize both the OS X portion and the XUL code it generates, and then figure out how to juggle multi-lingual resources). I'm not likely to do anything with this until the rest of it is stable enough to freeze strings.
  • Although the browser core they run is shared, individual foxboxes have their own private copies of the foxbox support code and chrome which are independent. Thus, when a new TenFourFoxBox comes out, you will need to manually update each of your foxboxes. You can do this in place and overwrite them; it's just somewhat inconvenient.
  • There are probably browser features missing that you'd like. I'm willing to entertain reasonable requests.

Even the manual is delivered as a foxbox, which makes it easy to test on your system. Download it, try it and post your feedback in the comments. TenFourFox 38.4 or higher is required. This is a beta, so treat it accordingly, with the plan to release it for general consumption a week or so after 38.5 comes out.

Let's do a little inside-the-box thinking with an old idea for a new generation, shall we?

Benjamin Kerensa: Openly Thankful

So next week has a certain meaning for millions of Americans, relating to a story of Indians and Pilgrims gathering to have a meal together. While that story may be distorted from the historical truth, I do think the symbolic holiday we celebrate is important.

That said, I want to name some individuals I am thankful for….



Lukas Blakk

I’m thankful for Lukas for being an excellent mentor to me during her last two years at Mozilla. Lukas helped me learn skills and take advantage of opportunities that many Mozillians never get. I’m very grateful for her mentoring, teaching, and her passion to help others, especially those who have less opportunity.

Jeff Beatty

I’m especially thankful for Jeff. This year, out of the blue, he came to me and offered to have his university students support an open source project I launched, and this has helped us grow our l10n community. I’m also grateful for Jeff’s overall thoughtfulness and for being able to go to him over the last couple of years for advice and feedback.

Majken Connor

I’m thankful for Majken. She is always a very friendly person who is there to welcome people to the Mozilla community, and I also appreciate how outspoken she is. She is willing to share opinions and beliefs that add value to conversations and help us think outside the box. No matter how busy she is, she has been a constant in the Mozilla Project, always there to lend advice or listen.

Emma Irwin

I’m thankful for Emma. She does something much different than teaching us how to lead or build community, she teaches us how to participate better and build better participation into open source projects. I appreciate her efforts in teaching future generations the open web and being such a great advocate for participation.

Stormy Peters

I’m thankful for Stormy. She has always been a great leader and it’s been great to work with her on evangelism and event stuff at Mozilla. But even more important than all the work she did at Mozilla, I appreciate all the work she does with various open source nonprofits and the committees and boards she serves on or advises, which you do not hear about because she does it for the impact.


Jonathan Riddell

I’m thankful for Jonathan. He has done a lot for Ubuntu, Kubuntu, KDE and the great open source ecosystem over the years. Jonathan has been a devout open source advocate always standing for what is right and unafraid to share his opinion even if it meant disappointment from others.

Elizabeth Krumbach Joseph

I’m thankful for Elizabeth. She has been a good friend, mentor and listener for years now and does so much more than she gets credit for. Elizabeth is welcoming in the multiple open source projects she is involved in and if you contribute to any of those projects you know who she is because of the work she does.


Paolo Rotolo

I’m thankful for Paolo, our lead Android developer, who drives our Android development efforts and is a driving force in helping us move forward the vision behind Glucosio and help people around the world. I enjoy near-daily, if not multiple-times-a-day, conversations with him about the technical bits and the big picture.

The Core Team + Contributors

I’m very thankful for everyone on the core team and all of our contributors at Glucosio. Without all of you, we would not be what we are today, which is a growing open source project doing amazing work to bring positive change for people with diabetes.


Leslie Hawthorne

I’m thankful for Leslie. She is always very helpful for advice on all things open source and especially open source non-profits. I think she helps us all be better human beings. She really is a force of good and perhaps the best friend you can have in open source.

Jono Bacon

I’m thankful for Jono. While we often disagree on things, he always has very useful feedback and has an ocean of community management and leadership experience. I also appreciate Jono’s no bullshit approach to discussions. While it can be rough for some, the cut to the chase approach is sometimes a good thing.

Christie Koehler

I’m thankful for Christie. She has been a great listener over the years I have known her and has been very supportive of community at Mozilla and also inclusion & diversity efforts. Christie is a teacher but also an organizer and in addition to all the things I am thankful for that she did at Mozilla, I also appreciate her efforts locally with Stumptown Syndicate.

Air Mozilla: Webdev Beer and Tell: November 2015

Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Addons Blog: Signing API now available

Over the years, the site has had many APIs. These are used by Firefox and other clients to provide add-on listings, blocklists, and other features. But there hasn’t really been an API that developers can interact with. As part of ongoing improvements to the site, we’ve started focusing on producing APIs for add-on developers as well.

Our first one aims to make add-on signing a little easier for developers. This API enables you to upload an XPI and get back the signed add-on if it passes all the validation checks.

To use this API, log in to the site and go to Tools > Manage API Keys. Then agree to the terms and fetch an API key and secret to use in subsequent API calls.


Once you’ve done that, generate authorization tokens and use the documented API to sign your add-on.
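The authorization token is a JWT signed with your API secret. As a rough sketch (assuming HS256 and the `iss`/`jti`/`iat`/`exp` claim names; check the API documentation for the exact requirements), such a token can be built with nothing but the Python standard library:

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_jwt(api_key: str, api_secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "iss": api_key,            # issuer: your API key
        "jti": str(uuid.uuid4()),  # unique token id, prevents replay
        "iat": now,                # issued-at timestamp
        "exp": now + 60,           # keep tokens short-lived
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# Hypothetical credentials, for illustration only.
token = make_jwt("user:12345:67", "not-a-real-secret")
print(token.count("."))  # a JWT always has three dot-separated parts, so 2
```

The resulting string is what goes after `JWT ` in the `Authorization` header.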

The documented examples use curl to interact with the API. For example:

curl "<signing API endpoint>" -XPUT --form 'upload=@build/my-addon.xpi' -H 'Authorization: JWT your-jwt-token'

This is just the first of the APIs that we hope to add to the site and a path that we hope will lead to increased functionality throughout the add-ons ecosystem. This feature is under development, so we are keen to hear feedback or any issues.

John O'Duinn: The real cost of an office

Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons

The shift from “building your own datacenter” to “using the cloud” revolutionized how companies viewed internal infrastructure, and significantly reduced the barrier to starting your own fast-growth, global-scale company. Suddenly, you could have instant, reliable, global-scale infrastructure.

(Personally, I dislike the term “cloud” but it’s the easiest vendor-neutral term I know for describing essential infrastructure running on rent-by-the-hour Amazon AWS, Google GCE, Microsoft Azure and others…)

Like any new major change, “the cloud” went through an uphill acceptance curve with resistance from established nay-sayers. Meanwhile, smaller companies with no practical alternatives jumped in with both feet and found that “the cloud” worked just fine. And scaled better. And was cheaper to run. And was faster to setup, so the opportunity-cost was significantly reduced.

Today, of course, “using the cloud” for your infrastructure has crossed the chasm. It is the default. Today, if you were starting a new company, and went looking for funding to build your own custom datacenter, you’d need to explain why you were not “using the cloud”. Deciding to have your own physical data center involves one-time-setup costs as well as ongoing recurring operational costs. Similarly, deciding to have a physical office involves one-time-setup costs as well as ongoing recurring operational costs.

Rethinking infrastructure from the fixed costs of servers and datacenters to rented by the hour “in the cloud” is an industry game changer. Similarly, rethinking the other expensive part of a company’s infrastructure — the physical office — is an industry game changer.

Just like physical datacenters, deciding to setup an office is an expensive decision which complicates, not liberates, the ongoing day-to-day life of your company.

The reality of having an office

It is easy to skip past the “Do we really need an office?” question – and plunge into the mechanics, without first thinking through some company-threatening questions.

What city, and which neighborhood in the city, is the best location for your company office? Sometimes the answer is “near to where the CEO lives”, or “near the offices of our lead VCs”. However, this should include answers to questions like “where will we find most of the talent (people) we plan to hire?” and “where will most of our customers be?”.

What size should your office be? This requires thinking through your hiring plans — not just for today, but also for the duration of the lease — typically 3–5–10 years. The consequences of this decision may be even longer, given how some people do not like relocating! When starting a company, it is very tricky to accurately predict the answers to these questions for multiple years into the future.

Business plans change. Technologies change. Market needs and finances change. Product scope changes. Companies pivot. Brick-and-mortar buildings (usually) stay where they are.

If you convince yourself that your company does need a physical office, setting up and running an office is “non-trivial”. You quickly get distracted by the expensive logistics and operational mechanics of a physical building – instead of keeping focus on people and the shipping product.

You need to negotiate, sign and pay leases. Debate offices-with-doors vs open-plan — and if open-plan, do you want library-quiet, or bull-pen with cross-chatter and music? Negotiate seating arrangements — including the who-gets-a-window-view debate. Construct the actual office-space, bathrooms and kitchens. Pick, buy and install desks, chairs, ping-pong tables and fridges. Set up wifi, security doorbadge systems, printers, phones. Hire staff who are focused on running the physical office, not focused on your product. The list goes on and on. All of these take time, money and most importantly focus. This distracts humans away from the entire point of the company — hiring humans to create and ship product to earn money. And the distraction does not end once the office is built — maintaining and running a physical office takes ongoing time, money and focus.

After your office is up and running, you discover the impact this new office has on hiring. You pay to relocate people who would be great additions to your company but do not live near your new office. You are disappointed by good people turning down job offers because of the location. You have debates about “hiring the best person for the job” vs “hiring the best person for the job who is willing to relocate”. You have to limit hiring because you don’t have a spare desk available. You need to sublease part of your new office space because growth plans changed when revenue didn’t go as well as hoped – and now you have unused idle office space costing you money every month.

The benefits of no office

You dedicate more time, money and focus on the people, and the shipping product — simply by avoiding the financial costs, lead-time-delays and focus-distractions of setting up a physical office.

Phrased another way: Distributed teams let you focus the company time and money where it is most important — on the people and the product. After all, it doesn’t matter how fancy your office is unless you have a product that people want to use.

Having no office lets you sidestep a few potentially serious and distracting ongoing problems:

You don’t need to worry about signing a lease for a space that is too small (or too large) for the planned growth of the company. You avoid adding a large recurring cost (a lease) to the company books, which impacts your company’s financial burn rate.

You don’t need to worry whether the location of the office helps or hinders future hiring plans. You don’t need to worry about good people turning down your job offers simply because of the office location. You can hire from a significantly larger pool of candidates, so you can hire better and faster than all-in-one-location competitors.

Even larger companies like Aetna, with established offices, have been encouraging work-from-home since 2005 – because they can hire more people and also because of the money savings from real estate. Last I heard, Aetna was saving $78 million a year by having people work from home. Each year. No wonder Dell and others are now doing the same.

You sidestep human distractions about office layout.

You don’t need to worry about business continuity if the office is closed for a while.

Sidestepping all these distractions helps you (and everyone else in the company) focus attention and money on the people and the product you are building and shipping. This is a competitive advantage over all-in-one-office companies. Important stuff to keep in mind when you ask yourself “Do we really need an office?”

(Versions of this post are on and also in the latest early release of my “Distributed” book.)

(Photo credit: Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons)

Daniel Pocock: Databases of Muslims and homosexuals?

One US presidential candidate has said a lot recently, but the comments about making a database of Muslims may qualify as the most extreme.

Of course, if he really wanted to, somebody with this mindset could find all the Muslims anyway. A quick and easy solution would involve tracing all the mobile phone signals around mosques on a Friday. Mr would-be President could compel Facebook and other social networks to disclose lists of users who identify as Muslim.

Databases are a dangerous side-effect of gay marriage

In 2014 there was significant discussion about Brendan Eich's donation to the campaign against gay marriage.

One fact that never ranked very highly in the debate at the time is that not all gay people actually support gay marriage. Even where these marriages are permitted, not everybody who can marry now is choosing to do so.

The reasons for this are varied, but one key point that has often been missed is that there are two routes to marriage equality: one involves permitting gay couples to visit the register office and fill in a form just as other couples do. The other route to equality is to remove all the legal artifacts around marriage altogether.

When the government does issue a marriage certificate, it is not long before other organizations start asking for confirmation of the marriage. Everybody from banks to letting agents and Facebook wants to know about it. Many companies outsource that data into cloud CRM systems such as Salesforce. Before you know it, there are numerous databases that somebody could mine to make a list of confirmed homosexuals.

Of course, if everybody in the world was going to live happily ever after none of this would be a problem. But the reality is different.

While discrimination, whether against Muslims or homosexuals, is prohibited and can even lead to criminal sanctions in some countries, this attitude is not shared globally. Once gay people have their marriage status documented in a frequent flyer or hotel loyalty program, or in the public part of their Facebook profile, there are various countries where they are going to be at much higher risk of prosecution or persecution. The equality to marry in the US or UK may mean they have less equality when choosing travel destinations.

Those places are not as obscure as you might think: even in Australia, regarded as a civilized and laid-back western democracy, the state of Tasmania fought tooth-and-nail to retain the criminalization of virtually all homosexual conduct until 1997 when the combined actions of the federal government and high court compelled the state to reform. Despite the changes, people with some of the most offensive attitudes are able to achieve and retain a position of significant authority. The same Australian senator who infamously linked gay marriage with bestiality has successfully used his position to set up a Senate inquiry as a platform for conspiracy theories linking Halal certification with terrorism.

There are many ways a database can fall into the wrong hands

Ironically, one of the most valuable lessons about the risk of registering Muslims and homosexuals was an injustice against the very same Tea Party supporters a certain presidential candidate is trying to woo. In 2013, it was revealed that IRS employees had started applying a different process to discriminate against groups with "Tea Party" in their name.

It is not hard to imagine other types of rogue or misinformed behavior by people in positions of authority when they are presented with information that they don't actually need about somebody's religion or sexuality.

Beyond this type of rogue behavior by individual officials and departments, there is also the more sinister proposition that somebody truly unpleasant is elected into power and can immediately use things like a Muslim database, surveillance data or the marriage database for a program of systematic discrimination. France had a close shave with this scenario in the 2002 presidential election, when Jean-Marie Le Pen, who has at least six convictions for racism or inciting racial hatred, made it to the final round in a two-candidate run-off with Jacques Chirac.

The best data security

The best way to be safe, wherever you go, both now and in the future, is not to have data about yourself in any database. When filling out forms, think need-to-know. If some company doesn't really need your personal mobile number, your date of birth, your religion or your marriage status, don't give it to them.

Support.Mozilla.OrgWhat’s up with SUMO – 20th November

Hello, SUMO Nation!

Good to see you reading these words again. Thank you for dropping by and being willing to learn more about the most recent goings-on at SUMO.

Welcome, new contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the last week

  • SynergSINE – for his proactive attitude and conversation with the Ivory Coast Mozillians who are interested in participating in SUMO!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Last SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 23rd of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum


  • for Android
    • Nothing new to report.
  • for Desktop
    • All quiet on the desktop front.
  • for iOS
    • We keep getting more users!
  • for Firefox OS
    • Guess what… no big news here, either ;-)
All this quiet is good – time to recharge our Moz-batteries and get ready for a busy end-of-year season! We wish you a great weekend and hope to see you around on Monday. Take it easy!

Joel MaherIntroducing the contributors for the MozCI Project

As I previously announced who will be working on Pulse Guardian, the Web Platform Tests Results Explorer, and the WebDriver Infrastructure projects, I would like to introduce the contributors for the 4th project this quarter, Mozilla CI Tools – Polish and Packaging:

* MikeLing (:mikeling on IRC) –

What interests you in this specific project?

As its documentation describes, Mozilla CI Tools is designed to allow interacting with the various components which compose Mozilla’s Continuous Integration. So I think getting involved in it can help me learn more about how Treeherder and Mozci work, and give me a better understanding of the A-Team.

What do you plan to get out of this after 8 weeks?

I'll keep trying my best to contribute! I hope I can push this project forward with Armen, Alice and other contributors in the future :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I’m a guy who likes to keep challenging himself and trying new stuff.

* Stefan (:F3real on IRC) –

What interests you in this specific project?

I thought it would be a good starting project and would help me learn new things.

What do you plan to get out of this after 8 weeks?

Expand my knowledge and meet new people.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I play guitar, but I don’t think that’s really interesting.

* Vaibhav Tulsyan (:xenny on IRC) –

What interests you in this specific project?

Continuous Integration, in general, is interesting for me.

What do you plan to get out of this after 8 weeks?

I want to learn how to work efficiently in a team in spite of working remotely, learn how to explore a new code base and some new things about Python, git, hg and Mozilla. Apart from learning, I want to be useful to the community in some way. I hope to contribute to Mozilla for a long term, and I hope that this helps me build a solid foundation.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

One of my hobbies is to create algorithmic problems from real-world situations. I like to think a lot about the purpose of existence, how people think about things/events and what affects their thinking. I like teaching and gaining satisfaction from others’ understanding.


Please join me in welcoming all the contributors to this project and the previously mentioned ones as they have committed to work on a larger project with their free time!

Joel MaherIntroducing a contributor for the WebDriver Infrastructure project

As I previously announced who will be working on Pulse Guardian and the Web Platform Tests Results Explorer, let me introduce who will be working on Web Platform Tests – WebDriver Infrastructure:

* Ravi Shankar (:waffles on IRC) –

What interests you in this specific project?

There are several. Though I love coding, I’m usually more inclined to Python & Rust (so a “Python project” is what excited me at first). Then there's my recently-developed interest in networking code (ever since my work on a network-related issue in Servo), and finally, I’m very curious about how we’re establishing the Python-JS communication and emulating user inputs.

What do you plan to get out of this after 8 weeks?

Over the past few months of my (fractional) contributions to Mozilla, I’ve always learned something useful whenever I finish working on a bug/issue. Since this is a somewhat “giant” implementation that requires more time and commitment, I think I’ll learn a great deal of stuff in relatively little time (which is what excites me).

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Well, I juggle, or I (try to) reproduce some random music in my flute (actually, a Bansuri – Indian flute) when I’m away from my keyboard.


We look forward to working with Ravi over the next 8 weeks.  Please say hi in irc when you see :waffles in channel :)

Joel MaherIntroducing 2 contributors for the Web Platform Tests project

As I previously announced who will be working on Pulse Guardian, let me introduce who will be working on Web Platform Tests – Results Explorer:

* Kalpesh Krishna (:martianwars on irc) –

What interests you in this specific project?

I have been contributing to Mozilla for a couple of months now and was keen on taking up a project on a slightly larger scale. This particular project was recommended to me by Manish Goregaokar. I had worked on a few issues in Servo prior to this, and all of them involved Web Platform Tests in some form. That was the initial motivation. I find this project really interesting, as it gives me a chance to help build an interface that will simplify browser comparison so much! This project seems to involve more planning than execution, and that’s another reason I’m so excited! Besides, I think this would be a good chance to try out some statistics / data visualization ideas I have, though they might be a bit irrelevant to the goal.

What do you plan to get out of this after 8 weeks?

I plan to learn as much as I can, make some great friends, and most importantly make a real sizeable contribution to open source :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I love to stargaze. Constellations and Messier objects fascinate me. Given a chance, I would love to let my imagination run wild and draw my own set of constellations! I have an unusual ambition in life. Though a student of Electrical Engineering, I have always wanted to own a chocolate factory (too much Roald Dahl as a child) and have done some research regarding the same. Fingers crossed! I also love to collect Rubik’s Cube-style puzzles. I make it a point to increase my collection by 3-4 puzzles every semester and learn how to solve them. I’m not fast at any of them, but love solving them!

* Daniel Deutsch

What interests you in this specific project?

I am really interested in getting involved in Web Standards. Also, I am excited to be involved in a project that is bigger than itself: something that spans the Internet and makes it better for everyone (web authors and users).

What do you plan to get out of this after 8 weeks?

As primarily a Rails developer, I am hoping to expand my skill-set. Specifically, I am looking forward to writing some Python and learning more about JavaScript. Also, I am excited to dig deeper into automated testing. Lastly, I think Mozilla does a lot of great work and am excited to help in the effort to drive the web forward with open source contribution.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I live in Brooklyn, NY and have terrible taste in music. I like writing long emails, running, and Vim.


We look forward to working with these 2 great hackers over the next 8 weeks.

Joel MaherIntroducing a contributor for the Pulse Guardian project

3 weeks ago we announced the new Quarter of Contribution; today I would like to introduce the participants. Personally, I really enjoy meeting new contributors and learning about them. It is exciting to see interest in all 4 projects. Let me introduce who will be working on Pulse Guardian – Core Hacker:

Mike Yao

What interests you in this specific project?

Python, infrastructure

What do you plan to get out of this after 8 weeks?

Continue to contribute to Mozilla

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Cooking/food lover; I was a chef a long time ago. Free software/open source and Linux changed my mind and career.


I do recall one other eager contributor who might join in late, once exams are completed. In the meantime, enjoy learning a bit about Mike Yao (who was introduced to Mozilla by Mike Ling, a participant in our first ever Quarter of Contribution).

Mozilla FundraisingOur plan for fundraising A/B testing in 2015

Our end of year (EOY) fundraising campaign is getting started today, so I wanted to write a note about our A/B testing plan and the preparation work that has gone into this so far. Although right now our donation form … Continue reading

Daniel StenbergThis post was not bought

At times I post blog articles that push the view counter up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get simpler offers of adding random ads or “text only information” to specific individual pages on my sites, which some SEO person out there figured could potentially attract an audience searching for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud so possibly for a good cause, but still…

This is by no means a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get, and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions revealed are my own – even if, of course, you may not agree with me, and I may make mistakes and be completely wrong at times, or even many times. You can rest assured that I made those mistakes on my own and was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing the privacy intrusions and optimizing the content: what is downloaded from my sites is what your browser needs to render the pages, not heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary, and I work on several projects that are dear to me, so of course I will show bias toward some subjects. I don’t claim to have an objective view on things, and I don’t even try to have one. When I write posts here, they come colored by my background and by what I am.

Justin DolskeFoxkeh Dance is back!

That’s right! Everyone’s favorite dancing mascot is back, baby!

Back in 2008, Alex Polvi (of Firefox crop circle fame), departed Mozilla to found his own startup. In one of the most epic farewell emails of all time, he created Foxkeh Dance, a Mozilla flavor of the Internet-classic Hampster Dance site.

Alas, domains expire, and for the last 5 years the domain has been the home of a squatter hoping to interest you in the usual assortment of spam. But a few weeks ago, I randomly checked the site and discovered it was available for registration! So I grabbed the domain, and set about restoring it.

The ever-amazing Internet Archive has a cached version of the original 7-year-old site from August 24th, 2008… mostly. It has the HTML, but not the images or background music. Luckily, a couple of contemporaneous Mozilla community sites included copies of the animated images, and from those I was able to restore what I believe are the original versions. (Update: it seems the archive is now using these newly-restored images to fill in its incomplete cache. Curious.) While the original embedded “hamster.mp3” file is lost, I remember it as being a straight copy of the one from the Hampster Dance site, and that’s easily available. Of course, the original site used plugins to play sound, so I’ve updated it to use a modern HTML5 <audio> replacement.

And now Foxkeh Dance is back!

For those unfamiliar, Foxkeh is Mozilla Japan’s cartoon mascot. Recently it’s been the unofficial mascot of the new Tracking Protection feature in Firefox (butt flames and all). I hope we’ll see more of the ‘lil guy in the future!

You may now resume dancing.

Monica ChewDownload files more safely with Firefox 31

Did you know that the estimated cost of malware is hundreds of billions of dollars per year? Even without data loss or identity theft, the time and annoyance spent dealing with infected machines is a significant cost.

Firefox 31 offers improved malware detection. Firefox has integrated Google’s Safe Browsing API for detecting phishing and malware sites since Firefox 2. In 2012 Google expanded their malware detection to include downloaded files and made it available to other browsers. I am happy to report that improved malware detection has landed in Firefox 31, and will have expanded coverage in Firefox 32.

In preliminary testing, this feature cuts the amount of undetected malware by half. That’s a significant user benefit.

What happens when you download malware? Firefox checks URLs associated with the download against a local Safe Browsing blocklist. If the binary is signed, Firefox checks the verified signature against a local allowlist of known good publishers. If no match is found, Firefox 32 and later query the Safe Browsing service with download metadata (NB: this happens only on Windows, because the signature verification APIs used to suppress remote lookups are only available on Windows). If malware is detected, the Download Manager blocks access to the downloaded file and removes it from disk, displaying an error in the Downloads Panel.
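The decision flow just described can be sketched in a few lines. This is an illustrative model only, not Firefox's actual implementation, and every name in it (local_blocklist, trusted_publishers, remote_lookup) is invented for the example:

```python
# Hypothetical sketch of the download-reputation decision flow; the
# parameter names are invented for illustration, not Firefox internals.

def check_download(url, signer=None, platform="windows",
                   local_blocklist=frozenset(), trusted_publishers=frozenset(),
                   remote_lookup=lambda metadata: "unknown"):
    """Return 'block', 'allow', or 'unknown' for a downloaded file."""
    # 1. URLs associated with the download are checked against a local list.
    if url in local_blocklist:
        return "block"
    # 2. A verified signature from a known-good publisher allows the file.
    if signer is not None and signer in trusted_publishers:
        return "allow"
    # 3. Otherwise, download metadata is sent to the remote service --
    #    but only on Windows, where signature verification is available
    #    to suppress unnecessary remote lookups.
    if platform == "windows":
        return remote_lookup({"url": url, "signer": signer})
    return "unknown"
```

Note how the local checks come first: the remote lookup only happens when neither the blocklist nor the publisher allowlist can decide.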

How can I turn this feature off? This feature respects the existing Safe Browsing preference for malware detection, so if you’ve already turned that off, there’s nothing further to do. Below is a screenshot of the new, beautiful in-content preferences (Preferences > Security) with all Safe Browsing integration turned off. I strongly recommend against turning off malware detection, but if you decide to do so, keep in mind that phishing detection also relies on Safe Browsing.

Many thanks to Gian-Carlo Pascutto and Paolo Amadini for reviews, and the Google Safe Browsing team for helping keep Firefox users safe and secure!

Monica ChewMaking decisions with limited data

It is challenging but possible to make decisions with limited data. For example, take the rollout saga of public key pinning.

The first implementation of public key pinning included enforcing pinning on addons.mozilla.org (AMO). In retrospect, this was a bad decision, because it broke the Addons Panel and generated pinning warnings 86% of the time. As it turns out, the pinset was missing some Verisign certificates used by the site, and the pinning enforcement included subdomains. Having more data lets us avoid bad decisions.

To enable safer rollouts, we implemented a test mode for pinning. In test mode, pinning violations are counted but not enforced. With sufficient telemetry, it is possible to measure how badly sites would break without actually breaking the site.
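A rough sketch of the test-mode idea, with all names invented for illustration: pin violations are tallied in either mode (that is the telemetry signal), but only an enforcing, production-mode pin actually blocks the connection.

```python
# Illustrative-only model of "test mode" pinning; not Firefox code.

violation_counts = {}  # per-host tally, standing in for telemetry

def verify_pin(host, cert_key_hash, pinset, enforce=False):
    """Return True if the connection may proceed."""
    if cert_key_hash in pinset.get(host, set()):
        return True  # key matches a pinned key: always fine
    # Pin violation: always count it, in both modes...
    violation_counts[host] = violation_counts.get(host, 0) + 1
    # ...but only reject the connection in enforcing (production) mode.
    return not enforce
```

Promoting a domain from test mode to production mode then amounts to flipping `enforce` once the counted violation rate looks low enough.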

Due to privacy restrictions in telemetry, we do not collect per-organization pinning violations except for Mozilla sites that are operationally critical to Firefox. This means that it is not possible to distinguish pinning violations for Google domains from Twitter domains, for example. I do not believe that collecting the aggregated number of pinning violations for sites on the Alexa top 10 list constitutes a privacy violation, but I look forward to the day when technologies such as RAPPOR make it easier to collect actionable data in a privacy-preserving way.

Fortunately for us, Chrome has already implemented pinning on many high-traffic sites. This is fantastic news, because it means we can import Chrome’s pin list in test mode with relatively high assurance that the pin list won’t break Firefox, since it is already in production in Chrome.

Given sufficient test mode telemetry, we can decide whether to enforce pins instead of just counting violations. If the pinning violation rate is sufficiently low, it is probably safe to promote the pinned domain from test mode to production mode.

Because the current implementation of pinning in Firefox relies on built-in static pinsets, and we are unable to count violations per-pinset, it is important to track changes to the pinset file in the dashboard. Fortunately, HighStock supports event markers, which somewhat alleviates this problem, and David Keeler also contributed some tooltip code to roughly associate dates with Mercurial revisions. Armed with the timeseries of pinning violation rates, plus event markers for dates when we promoted organizations to production mode (or when high-traffic organizations like Dropbox were added in test mode due to a new import from Chromium), we can see whether pinning is working or not.

Telemetry is useful for forensics, but in our case, it is not useful for catching problems as they occur. This limitation is due to several difficulties, which I hope will be overcome by more generalized, comprehensive SSL error-reporting and HPKP:
  • Because pinsets are static and built-in, there is sometimes a 24-hour lag between making a change to a pinset and reaching the next Nightly build.
  • Telemetry information is only sent back once per day, so we are looking at a 2-day delay between making a change and receiving any data back at all.
  • Telemetry dashboards (as accessible from telemetry.js) need about a day to aggregate, which adds another day.
  • Update uptake rates are slow. The median time to update Nightly is around 3 days, getting to 80% takes 10 days or longer.
Due to these latency issues, pinning violation rates take at least a week to stabilize. Thankfully, telemetry is on by default in all pre-release channels as of Firefox 31, which gives us a lot more confidence that the pinning violation rates are representative.
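Adding up the delays listed above gives a back-of-envelope figure consistent with that week-long stabilization time. The per-step numbers come straight from the list; the sum is only illustrative:

```python
# Back-of-envelope end-to-end latency, in days, using the figures above.
build_lag = 1        # static pinset change -> next Nightly build
telemetry_send = 1   # telemetry pings are only sent once per day
aggregation = 1      # dashboard aggregation adds about another day
median_update = 3    # median time for Nightly users to pick up the build

total = build_lag + telemetry_send + aggregation + median_update
print(total)  # 6 days before even half of Nightly reflects a change
```

And since reaching 80% uptake takes 10 days or longer, the "at least a week" estimate is, if anything, optimistic.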

Despite all the caveats and limitations, using these simple tools we were able to successfully roll out pinning to pretty much all the sites that we’ve attempted (including AMO, our unlucky canary) as of Firefox 34, and we look forward to expanding coverage.

Thanks for reading, and don’t forget to update your Nightly if you love Mozilla! :)

Air MozillaNovember Privacy Lab - Tracking Protection

November Privacy Lab - Tracking Protection For November's Privacy Lab, Mozilla and Disconnect will provide an overview of and invite feedback on Firefox's newly launched Tracking Protection feature - tracking blocking...

Mozilla Addons BlogAdd-on Compatibility for Firefox 43

Firefox 43 will be released on December 15th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 43 for Developers, so you should also give it a look.




  • This is the first version of Firefox that will enforce signing. Unsigned add-ons won’t install and will be disabled by default. There’s a preference that turns signing enforcement off (xpinstall.signatures.required in about:config), but the current plan is to drop the preference in Firefox 44.

Please let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 43, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 42.

Chris CooperClarification about our “Build and Release Intern - Toronto” position

We’ve had lots of interest already in our advertised internship position, and that’s great. However, many of the applications I’ve looked at won’t pan out because the applicants overlooked a key line in the posting:

*Only local candidates will be considered for this role.*

That’s right, we’re only able to accept interns who are legally able to work in Canada.

The main reason behind this is that all of our potential mentors are in Toronto, and having an engaged, local mentor is one of the crucial determinants of a successful internship. In the past, it was possible for Mozilla to sponsor foreign students to come to Canada for internships, but recent changes to visa and international student programs have made the bureaucratic process (and concomitant costs) a nightmare to manage. Many applicants simply aren’t eligible any more under the new rules, either.

I’m not particularly happy about this, but it’s the reality of our intern hiring landscape. Some of our best former interns have come from abroad, and I’ve already seen some impressive resumes this year from international students. Hopefully one of the non-Toronto-based positions will still appeal to them.

Air MozillaLondon Web Components Meetup – 20151119

London Web Components Meetup – 20151119. The London Web Components Meetup hosts talks about 'An e-commerce journey to using Web Components'.

Air MozillaCarto Meetup Paris #4

Carto Meetup Paris #4. The 4th Carto Paris meetup will feature 3 presentations (mind mapping, geographic mapping, and social network mapping).

Monica ChewTracking Protection Officially Supported in Firefox 42

Mozilla officially started supporting Tracking Protection in Private Browsing mode with Firefox 42, which launched a couple of weeks ago. Congratulations to everyone who worked on the launch! The onboarding looks awesome and the unified UI is a nice touch, although I have to admit a preference for the original, engineer-designed marketing aesthetics pictured below.

Even outside of Private Browsing mode, you can still take advantage of Tracking Protection by going to about:config and turning on privacy.trackingprotection.enabled. This behavior has been supported for over a year since Firefox 34, so it's great to see Mozilla making this more usable by turning it on in Private Browsing mode.

I hope that Mozilla continues to use its products to challenge the notion that we owe our eyeballs, our computing resources and our entire browsing history to the ad industry, with no questions asked.

Air MozillaWeb QA Weekly Meeting, 19 Nov 2015

Web QA Weekly Meeting This is our weekly gathering of Mozilla'a Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Christian HeilmannDevfest Asia – JSConf Asia closing keynote and Microsoft Meetup

I am currently on a trip in Singapore, Thailand and Sydney for the next 8 days and today I presented at JSConf Asia and a meetup in the Microsoft offices in Singapore.
thumbs up audience

JSConf Asia closing keynote

The closing keynote of the first day of JSConf Asia covered my worries that we are going slightly overboard in our affection for JavaScript, using it to solve every issue. It seems we have forgotten just how versatile a language it is, and that how we use it depends very much on the environment we run it in. The slides are on SlideShare.

I also recorded a screencast of the keynote and published it on YouTube.

Microsoft Meetup

As the audience at the meetup was more mixed, and I was deadly tired, I thought it was a good plan to create a presentation covering how we can learn JavaScript these days. It explains JavaScript's use cases, points to resources on how to easily run a Node and Express server, and talks about Visual Studio Code and how to clean up old and outdated code. The learning JS meetup slides are also on SlideShare.

The screencast is on YouTube.

Air MozillaReps weekly, 19 Nov 2015

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Chris AtLeeMozFest 2015

I had the privilege of attending MozFest last week. Overall it was a really great experience. I met lots of really wonderful people, and learned about so many really interesting and inspiring projects.

My biggest takeaway from MozFest was how important it is to provide good APIs and data for your systems. You can't predict how somebody else will be able to make use of your data to create something new and wonderful. But if you're not making your data available in a convenient way, nobody can make use of it at all!

It was a really good reminder for me. We generate a lot of data in Release Engineering, but it's not always exposed in a way that's convenient for other people to make use of.

The rest of this post is a summary of various sessions I attended.


Friday night started with a Science Fair. Lots of really interesting stuff here. Some of the projects that stood out for me were:

  • naturebytes - a DIY wildlife camera based on the raspberry pi, with an added bonus of aiding conservation efforts.
  • histropedia - really cool visualizations of time lines, based on data in Wikipedia and Wikidata. This was the first time I'd heard of Wikidata, and the possibilities were very exciting to me! More on this later, as I attended a whole session on Wikidata.
  • Several projects related to the Internet-of-Things (IOT)


On Saturday, the festival started with some keynotes. Mark Surman spoke about how MozFest was a bit chaotic, but this was by design. In a similar way that the web is an open platform that you can use for building your own ideas, MozFest should be an open platform where you can meet, brainstorm, and work on your ideas. This means it can seem a bit disorganized, but that's a good thing :) You get what you want out of it.

I attended several good sessions on Saturday as well:

  • Ending online tracking. We discussed various methods currently used to track users, such as cookies and fingerprinting, and what can be done to combat these. I learned, or re-learned, about a few interesting Firefox extensions as a result:

    • Privacy Badger. Similar to Firefox's tracking protection, except it doesn't rely on a central blacklist. Instead, it tries to automatically identify third-party domains that are setting cookies, etc. across multiple websites. Once identified, these third-party domains are blocked.
    • HTTPS Everywhere. Makes it easier to use HTTPS by default everywhere.
  • Intro to D3JS. d3js is a JS data visualization library. It's quite powerful, but something I learned is that you're expected to do quite a bit of work up-front to make sure it's showing you the things you want. It's not great as a data exploration library, where you're not sure exactly what the data means, and want to look at it from different points of view. The nvd3 library may be more suitable for first time users.

  • 6 kitchen cases for IOT We discussed the proposed IOT design manifesto briefly, and then split up into small groups to try and design a product, using the principles outlined in the manifesto. Our group was tasked with designing some product that would help connect hospitals with amateur chefs in their local area, to provide meals for patients at the hospital. We ended up designing a "smart cutting board" with a built in display, that would show you your recipes as you prepared them, but also collect data on the frequency of your meal preparations, and what types of foods you were preparing.

    Going through the exercise of evaluating the product with each of the design principles was fun. You could be pretty evil going into this and try and collect all customer data :)


  • How to fight an internet shutdown - we role played how we would react if the internet was suddenly shut down during some political protests. What kind of communications would be effective? What kind of preparation can you have done ahead of time for such an event?

    This session was run by Deji from accessnow. It was really eye opening to see how internet shutdowns happen fairly regularly around the world.

  • Data is beautiful: introduction to Wikidata. Wikidata is like Wikipedia, but for data. An open database of...stuff. Anybody can edit and query the database. One of the really interesting features of Wikidata is that localization is kind of built-in as part of the design. Each item in the database is assigned an id (prefixed by "Q"). E.g. Q42 is Douglas Adams. The description for each item is simply a table of locale -> localized description. There's no inherent bias towards English, or any other language. The beauty of this is that you can reference the same piece of data from multiple languages, only having to focus on localizing the various descriptions. You can imagine different translations of the same Wikipedia page right now being slightly inconsistent due to each one having to be updated separately. If they could instead reference the data in Wikidata, then there's only one place to update the data, and all the other places that reference that data would automatically benefit from it.

    The query language is quite powerful as well. A simple demonstration was "list all the works of art in the same room in the Louvre as the Mona Lisa."

    It really got me thinking about how powerful open data is. How can we in Release Engineering publish our data so others can build new, interesting and useful tools on top of it?

  • Local web. Various options for purely local webs / networks were discussed. There are some interesting mesh network options available; Commotion was demoed. These kinds of distributions give you file exchange, messaging, email, etc. on a local network that's not necessarily connected to the internet.
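Returning to the Wikidata session for a moment: a query like the Louvre demonstration ("works of art in the same room as the Mona Lisa") might look like the following sketch, which only builds the SPARQL string. The item and property IDs are assumptions from memory (Q12418 for the Mona Lisa, P276 for "location") and should be verified against wikidata.org.

```python
# Sketch of building a Wikidata SPARQL query; IDs are from memory and
# should be double-checked on wikidata.org before relying on this.

def same_location_query(item_qid):
    """Build a SPARQL query for items sharing a location with item_qid."""
    return f"""
    SELECT ?item ?itemLabel WHERE {{
      wd:{item_qid} wdt:P276 ?room .  # the anchor item's location
      ?item wdt:P276 ?room .          # anything else in the same location
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    """

# The resulting string can be sent to the public query endpoint at
# https://query.wikidata.org/sparql (not done here, to keep the sketch offline).
query = same_location_query("Q12418")
```

The label service line is what gives you human-readable names in any language, which is exactly the built-in localization the session highlighted.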

Mozilla Release Management TeamFirefox 43 beta3 to beta4

In this beta, many changes to improve GTK3 support for GNU/Linux users. Besides that, some changes to increase the stability of Fennec.

  • 31 changesets
  • 39 files changed
  • 743 insertions
  • 283 deletions



List of changesets:

Wes KocherBacked out changeset 9fbc92fa9e4b (Bug 1221855) because I backed out the other half a=backout - 51f0f13e7985
Alessio PlacitelliBug 1213780 - Fix Telemetry reporting repeated hang annotations for Chrome hangs. r=aklotz a=lizzard - 2fd90a7f326e
Alessio PlacitelliBug 1211411 - Limit the number of thread hang stats reported to Telemetry. r=vladan a=lizzard - 59e6a978773f
Alessio PlacitelliBug 1215540 - Fix Telemetry reporting repeated hang annotations for Thread hangs. r=aklotz a=lizzard - 12762fdf5ab6
Alessio PlacitelliBug 1219751 - Change the depth limit of the thread hangs stack to use the 99th percentile. r=gfritzsche a=lizzard - 6cabb1a43af6
Chris H-CBug 1198196 - rework EVENTLOOP_UI_LAG_EXP_MS to record all lag. r=vladan a=lizzard - ffc3382d3829
Chris H-CBug 1223800 - Accept BHR reports from 50% of beta clients. Up from 1%. r=vladan a=lizzard - aaa5100e2085
Vladan DjericBug 1223800: Fix broken build -- Telemetry on aurora & beta doesn't know about the bug_numbers field yet. a=broken - ec4b13420b71
Bill McCloskeyBug 1218552 - Fix GTK drag-and-drop coordinate scaling on HiDPI displays. r=karlt a=lizzard - 581b3e8f954f
Karl TomlinsonBug 1221855 - test Web Audio memory reporting r=erahm, a=test-only - 93d92b8c2b6c
Karl TomlinsonBug 1221855 - null-check mInputBuffer in SizeOfExcludingThis(). r=padenot, a=lizzard - 294b55e22276
Karl TomlinsonBug 1218552 - make GdkPointToDevicePixels() public to fix build. a=lizzard - 844ff2b4f267
L. David BaronBug 1222783 - Make nsHTMLFramesetFrame::Reflow set firstTime based on what firstTime means. r=roc approval-mozilla-beta=lizzard - 7947f1e4ca76
Jeff GilbertBug 1209612 - Only QueryString with null if supported. r=jmuizelaar, a=lizzard - d720ce07c464
James WillcoxBug 1221228 - Work around busted OpenSL causing hangs/reboots on Android. r=padenot, a=sylvestre - 4af91393a8f8
Andreas PehrsonBug 1103188 - Keep track of capture stop only in gUM stream listener. r=jib a=lizzard - 1540124e58cd
Andreas PehrsonBug 1103188 - Keep track of stopped tracks in gUM stream listener. r=jib a=lizzard - e3fad0bd414e
Jan-Ivar BruaroeyBug 1210852 - do SelectSettings of device capabilities on media thread. r=jib a=lizzard - 1ffe42de58bd
Andreas PehrsonBug 1070216 - Properly manage lifetime of allocated CaptureDevices. r=jib a=lizzard - 98d9576c7d13
Andreas PehrsonBug 1103188 - Always call MediaManager::NotifyFinished/NotifyRemoved on main thread. r=jib a=lizzard - 5ca6857c26e5
Jan HorakBug 1216582 - [gtk3] Scrollbar buttons not drawn correctly. r=karlt a=lizzard - 807e612c17ef
Bas SchoutenBug 1216349: Upload the old valid region as well if our texture host changed. r=nical a=lizzard - 94c40ce2d93b
Andrew ComminosBug 1218008 - Fix progress bar rendering on the Ambiance GTK3 theme. r=karlt a=lizzard - 51585d9e70eb
Jean-Yves AvenardBug 1220033 - Don't use fuzz arithmetic for calculating internal buffered ranges. r=gerald, a=lizzard - f6fa2e5fb632
Karl TomlinsonBug 726483 - remove unnecessary DispatchResized() parameters. r=roc, a=lizzard - 6ceeb10435a8
Karl TomlinsonBug 726483 - avoid DispatchResized() during size-allocate. r=roc, a=lizzard - c134a04010a0
Karl TomlinsonBug 726483 - keep an extra reference to the window. r=roc, a=lizzard - bc7eea62ab83
Robert LongsonBug 1222812 - add a null check in case there is no old style. r=dholbert a=lizzard - d35d09b0b24f
Nathan FroydBug 1217047 - try harder in IsContractIDRegistered to return a reasonable answer; r=bsmedberg,f=yury a=lizzard - c66289e84c50
Karl TomlinsonBug 726483 pass newly allocated runnable to NS_DispatchToCurrentThread() r=roc a=bustage - e4802c73f705
Byron Campen [:bwc]Bug 1218326: Prevent datachannel operations on closed PeerConnections. r=jesup a=lizzard - d8f0412f38f7

Karl DubostApp Shell and Service workers

Google published a very interesting article 📰 about service workers: Instant Loading Web Apps With An Application Shell Architecture. It promotes using service workers for caching the main UI of the appsite, so it gets out of the way when it's time to load the content.

Progressive Web Apps (PWAs) describe how a web app can progressively change with use and user consent to give the user a more native-app-like experience with offline support, push notifications and being installable to the home-screen. They can gain substantial performance benefits thanks to intelligent service worker caching of your UI shell for repeat visits.

They define what they mean:

When we talk about an app’s shell, we mean the minimal HTML, CSS and JavaScript powering the user interface. This should load fast, be cached and once loaded, dynamic content can populate your view.

Indeed this seems an interesting option when the appsite is being used multiple times a day. It becomes less so for a site that you visit only every once in a while: the cache will have been destroyed by the browsing of other sites. Maybe what is missing is the possibility for users to somehow keep a local version of the full UI, the same way that when you install an app on your computer 💻, you are not necessarily forced to update to the next version, but can still access content with it.
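For concreteness, here is a minimal sketch of the kind of shell caching the article describes. The cache name and asset list are made up for illustration; a real appsite would list its own UI files:

```javascript
// Minimal app-shell caching sketch. CACHE_NAME and SHELL_ASSETS are
// hypothetical placeholders, not from any real site.
const CACHE_NAME = 'app-shell-v1';
const SHELL_ASSETS = ['/', '/index.html', '/shell.css', '/shell.js'];

// Guarded so the constants can also be inspected outside a worker context.
if (typeof self !== 'undefined' && 'addEventListener' in self) {
  // On install, pre-cache every shell asset.
  self.addEventListener('install', event => {
    event.waitUntil(
      caches.open(CACHE_NAME).then(cache => cache.addAll(SHELL_ASSETS))
    );
  });

  // On fetch, serve from the cache first, falling back to the network.
  self.addEventListener('fetch', event => {
    event.respondWith(
      caches.match(event.request).then(hit => hit || fetch(event.request))
    );
  });
}
```

Note that this is exactly the trade-off discussed above: the shell loads instantly from the cache on repeat visits, but only for as long as the browser keeps the cache around.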

Progressive Enhancement

They mentioned progressive enhancement:

While service worker isn’t currently supported by all browsers, the application shell architecture uses progressive enhancement to ensure everyone can access the content. For example, take our sample project.

but this is progressive enhancement only in the context of whether service workers are supported or not. It is not progressive enhancement in the context of a Web site where you separate the notions of UI and content.

I guess it helped me to understand what bugs me in systems like our own webcompat website we are creating. When we deactivate JavaScript, there is no way the site keeps working. We currently load the shell 🐚 through a "normal HTTP" request and then the content through XHR. No JavaScript, no XHR, no content. Sad panda 🐼. I can see how service workers 👷 could provide a better way of thinking 🤔 about it, but if and only if content comes first.

Content First

As a user, when I'm given a URI to something, I want to get access to the content first. That is the thing I'm interested in. The UI and features come second. The content is the first-class citizen. I don't mind at all having non-styled text as long as I can read it right away; if the UI 🏗 starts appearing afterwards (at least on first load), it's all jolly good 🎉. And given that service workers will come to help with caching the UI elements, when going to the second URI there should not be any delay anymore.

To think more about it.

PS: no emojis have been abused during the writing of this blog post.


Emma IrwinThe Journey Continues – Mozlando is Coming!

It’s the most wonderful time of the year!  The ‘Mozilla Coincidental Work Week’ brings everyone at Mozilla together in the same city, at the same time, for the opportunity of collaboration – this time in Orlando, Florida (Dec 7 – 11)!

‘Mozlando’ is the next stop on our Participation Cohort’s journey –  a perfect environment for goal-setting focused on building high impact participation opportunities with product teams.  Truly – a  beautiful opportunity to invest in, and with each other.


Over 100 volunteers will have the opportunity to work directly with teams, helping design and strengthen goals which in many (if not most) cases include Participation.  For those invited by the Participation Team, we will, of course, be dedicating ourselves to that focus.


We have three distinct volunteer groups attending Orlando:

  1. Those invited by the Participation Team.
  2. Those invited by another functional area, but who are also part of the Participation Cohort.
  3. Those who were invited by another functional area, but currently have no Participation connection.
    1. Subset: those in this group who may, informally, have Participation goals in their work.

We will reach out with offers for 1:1 coaching for all in groups 1 & 2.  And for the subset of the 3rd group, will reserve blocks of time for those interested in Participation.

The coaching is even more important this time around, recognizing that connecting volunteers with the project goals is a critical step to bringing sustained strategic advantage to Mozilla. We are asking our cohort to research and consider the following:

  1. What are my participation goals for 2016?
  2. What are the goals in 2016 of the product team I will be working with?
  3. How do these align with my own goals for 2016?  What adjustments do I need to make?  What questions do I need to ask?
  4. How can I share what I learn, and bring others in who want to contribute to the same area of the project?

CC by-nc-sa 2.0 by Christos Bacharakis

At the heart of everything of course is people, why we’re here, why we care,  where we envision we can go individually, and with each other.  I’m looking forward to all of it!

For those who think of Orlando as ‘Disney’ and for those who think of Orlando as ‘Space’, I give you an image for everyone: ‘Mickey Mouse on Mercury’ CC by 2.0 Nasa Goddard Space Flight Center

Feature Image Credit:  Nasa on The Commons


Chris CooperWelcome back, Mihai!

Mr. Kotter. This is *not* Mihai.

I’ve been remiss in (re)introducing our latest hire in release engineering here at Mozilla.

Mihai Tabara is a two-time former intern who joins us again, now in a full-time capacity, after a stint as a release engineer at Hortonworks. He’s in Toronto this week with some other members of our team to sprint on various aspects of release promotion.

After a long hiring drought for releng, it’s great to be able to welcome someone new to the team, and even better to be able to welcome someone back. Welcome, Mihai!

Air MozillaThe Joy of Coding - Episode 35

The Joy of Coding - Episode 35 mconley livehacks on real Firefox bugs while thinking aloud.

Daniel PocockImproving DruCall and JSCommunicator user interface

DruCall is one of the easiest ways to get up and running with WebRTC voice and video calling on your own web site or blog. It is based on 100% open source and 100% open standards - no binary browser plugins and no lock-in to a specific service provider or vendor.

On Debian or Ubuntu, just running a command such as

# apt-get install -t jessie-backports drupal7-mod-drucall

will install Drupal, Apache, MySQL, JSCommunicator, JsSIP and all the other JavaScript library packages and module dependencies for DruCall itself.

The user interface

Most of my experience is in server-side development, including things like the powerful SIP over WebSocket implementation in the reSIProcate SIP proxy repro.

In creating DruCall, I have simply concentrated on those areas related to configuring and bringing up the WebSocket connection and creating the authentication tokens for the call.

Those things provide a firm foundation for the module, but it would be nice to improve the way it is presented and optimize the integration with other Drupal features. This is where the projects (both DruCall and JSCommunicator) would really benefit from feedback and contributions from people who know Drupal and web design in much more detail.

Benefits for collaboration

If anybody wants to collaborate on either or both of these projects, I'd be happy to offer access to a pre-configured SIP WebSocket server in my lab for more convenient testing. The DruCall source code is a hosted project and the JSCommunicator source code is on Github.

When you get to the stage where you want to run your own SIP WebSocket server as well then free community support can also be provided through the repro-user mailing list. The free, online RTC Quick Start Guide gives a very comprehensive overview of everything you need to do to run your own WebRTC SIP infrastructure.

Soledad PenadesOn Loop 2015

I was invited to join a panel about Open Source and Music at Loop, a slightly unusual (for my “standards”) event. It wasn’t a conference per se, although there were talks. Most of the sessions were panels and workshops; there were very few “individual” talk tracks. Lots of demos, unusual hardware to play with in the hall, a relaxed atmosphere, and very little commercialism—really cool!

Before I agreed to join them, I spoke to Juanpe Bolivar, the host of my panel, and made sure he was aware of why I didn’t actually want to join a panel, because I had been in a few so far and they were always horrendous due to the power dynamics in place. I explained all my concerns to him, and suggested tons of ideas to make things better, and he listened and put them in practice! So that was really good, and made me feel good about the event. It also helped that I knew some people who work for Ableton or who were connected to them, so I trusted them. Also they mentioned the code of conduct early on and they mentioned it during the opening event as well—with the room full of people.

Their organisation for booking travel and accommodation was super great as well: they helped me be time-efficient by booking the most convenient flights, and the hotel they reserved was very good! Which I super greatly appreciated after having been travelling so much recently… the last thing you want is being placed in a crappy hotel!

When I was waiting for the flight to Berlin I noticed that James Holden was on the same flight too, because he was joining Loop as well. James Holden! OK maybe you don’t know him, but he’s a quite popular DJ who’s also an author/producer and instigator of various experimental acts, and also really chill and loves to share how he does things and what his process is. So he was there in front of me eating a croissant, and of course I would NOT tell him anything because eating a croissant is one of life’s sacred moments. You don’t want to interrupt anyone when they’re eating a croissant. It just breaks the magic and everything gets awkward, with pieces of pastry going all over. No, just don’t do it.

So I didn’t say anything.

But when we landed I got welcomed by a representative from Loop. She was very nice and told me we should wait until we met James and his colleague Camilo Tirado and then we would head out to get a taxi to our hotels. So I had a chance to actually speak to James! I said “hi” like a shy child, and then I told him I had seen him play at a long-closed club in London, many years ago, and he said something like “Ahh yeah when we were young!”, and asked me what I did! So yeah, exactly the down to earth person I expected. Camilo was also super nice, and I got to talk to him later on about musical composition, how you play Indian music, universities and schools, etc.

This was just a bit of what was to happen during the event: you would be listening to some artist talk about their process and then it was just very natural to come and talk to them afterwards, and they would also ask what you were doing. The whole event was set up with the goal of getting artists to make connections and not work alone, as the event premise is that making music has turned into a very solitary act nowadays and we spend so much time in front of our computer screens in contrast to playing with other artists, etc. It was a bit funny for me as my “main job” is not as a musician, but I’m “enabling” people’s music creations on the web and also making my own music from time to time, so it was interesting to see that they were really accepting of my ‘hybrid’ situation, and very excited about the notion of me enabling other people on the web, whereas generally people in tech are way more condescending and exclusivist (“oh, you’re not a real developer!”, etc).

As you can see I was semi unconsciously trying to extrapolate this to our “industry”; my brain was making comparisons all the time. I noticed little things like:

  • the drinks at the bar were not free, and no one batted an eyelid; they just paid for them and also there were zero incidents with drunkards harassing me. Correlation? Causation?
  • there was no t-shirt for the event, the only event themed t-shirts were worn by event people
  • picking the swag bag was optional, they didn’t give it to you automatically. And the bag essentially just had a leaflet with the program and a notebook.

The panel

After the opening, Juanpe took us to dinner at a nice Vietnamese place so we could get to know each other’s backgrounds a little bit more before the panel happened. I hadn’t met my co-panelists before, and I was a bit scared that they would be “more open source than ye” kind of people, as they essentially worked on Linux Audio stuff, but they were excellent people and really easy to get along with. Soon my concerns evaporated.

For reference, they were:

  • Gianfranco Ceccolini, he works on a programmable pedal device called the MOD: it’s a device which has an embedded computer running Linux, and you can download effects and install them on it. Their business model consists in providing precompiled binaries, which makes the device convenient for musicians who just want to get music done.
  • Marije Baalman, she is an artist and also a developer for STEIM, a company that builds custom instruments and stuff for artists. They use SuperCollider and Linux, and she was also involved in the Linux Audio conference.
  • Paul Davis, he’s the lead developer of Ardour which is a very popular open source audio workstation (think Garage Band, but free), and also JACK which enables you to pipe and control audio in your system (without|with very low) latency (like CoreAudio etc, but again, free, and multiplatform). He also happens to be the 2nd employee Amazon ever hired so he’s been in the tech industry for a while too! 😉

We were asked to prepare a 5 minute self-introduction for the panel, here is mine. People seemed to like my succinctness and quick slide-changing game! Also I brought them many tracking memories, judging by the tweets, so I’m happy about that :-)

I think the panel went quite well, especially compared to previous panels I’d been on before! We all had the chance to talk and no one super monopolised the time. The only minor nit was that we had to share handheld microphones, so it was hard to get ‘impromptu’ interventions; it felt a bit mechanical. Although you could consider that a good thing too, because it meant that no one would talk over you when you had the microphone… so my thoughts are a bit ambivalent in this sense!

Of course we ran out of time and didn’t have a chance to discuss many of the ideas that our moderator proposed beforehand (and which we augmented with other ideas), but I think we covered a bunch of interesting topics such as why we’ll start seeing more audio stuff happening on the web soon, why distribution and convenience is how open source can win users’ hearts rather than just open source per se, what was our setup for making music (answering “just ViM” made me giggle a bit), and then some more “boring” questions such as “which license you use”.

Someone in the audience asked how to get started in open source if you are not a developer, and Paul suggested to “write docs”, but I sent the ball back to “the developers” and said that they need to provide an entry point, a placeholder, so people at least know *where to put* the docs, and also that it’s not just about developing or writing docs; making a screencast or a tutorial about a thing you found on the internet and like a lot can help in making it better. Or translating existing docs can make a positive impact as well—not everyone has the privilege of knowing more than one language.

People also asked where they should start if they worked on closed source software; should they open it up, or…? I said if nothing else, make the file format description open, so users are not doomed if the makers stop supporting the software. Similar questions arose regarding music making software “in the cloud”; my position is that if you can’t export it, just don’t use it. You don’t want your music to disappear when a startup goes out of business or gets acquired.

After the panel and during the rest of the event, random people would come and tell me they loved it and found the discussion very interesting and loved my optimism and enthusiasm because I had given them lots of ideas (!). I obviously have a skewed vision of myself and thought I had been a bit harsh and pessimist about the state of audio in Linux, and about the JACK daemon not playing nice with the rest of things in my system when I was running Linux, so I apologised to Paul, but he said that it had been great and someone had to “tell it like it was”, and despite of that, I had been really uplifting! (!)

Well then…!

Music on the web

I also spoke to a number of people who were interested in various aspects of Web + Audio + MIDI.

Someone from a hardware making company said they are su-per-in-te-res-ted in Web MIDI support coming to Firefox, and would maybe even want to contribute code, but the last time they looked into the bug, the WebIDL part wasn’t done yet, so they didn’t know where they could contribute with their knowledge (they know how to deal with audio code, but not browser code). The importance of placeholders, again.

Audio software writers were concerned about performance and how to extract the maximum ‘audio juice’ out of a browser running Web Audio code. So Audio Worklets (the latest working name for the concept of “better than ScriptProcessorNodes” custom processors, née AudioWorkers) will come in handy here—same for WebAssembly and SIMD. Yay tying everything together!

When asked about this during the panel (“oh but Web Audio can’t run the same thing that I can on my native app!”) I suggested everyone look back to 2005 when Google Docs was starting: people were joking and asking “but WHO would want to run Office in the browser?! HA! HA! HA!”… and now pretty much everyone is moving to “the cloud”. So I hinted that perhaps we were heading towards a hybrid future, where most people would just choose the web option because it was easy to access (no installs!) and convenient, and if they could export their data (using an open format), then they could continue working locally with their favourite native app.

In contrast, music writers were intrigued and excited about the notion of putting things online and having your audience interact with it. I didn’t hear them complain much about performance, and there were also lots of talks on limitations fostering creativity rather than blocking it. As developers we so often are blinded by the desire to perfect our tools that we never actually get to do anything with the tools!

Demos, takeaways and ideas

As I said, there were lots of demos and interesting takeaways. I liked this from the opening keynote: “Successes point to the past, failures point to the future” (as in, a success is something you’ve done, and that’s it, but failing is something that didn’t work out, and gave you information on how to proceed in the future). And also: “if all your experiments are a success, perhaps you’re not trying the right things”.

Here are some vines from panels I attended:

The first one is Jono Brandel demonstrating Patatap, which is running in the browser and the animations are “powered by” tween.js!

Then there’s Leafcutter John demonstrating how his light-based custom instrument works, by flashing two lights at an array of light sensors:

The final keynote was the only commercially-minded content in the event, and it was the first time I saw Ableton Live on a big screen. They showed an exclusive preview of the new version of Ableton Live (though they didn’t specifically say “do not tweet”, it was subtly implied). It had new sampling abilities, timestretching, better flow…

The most interesting thing to me was when they showed a new protocol they had devised to sync various music apps playing live… which is one of the ideas I was exploring a while back with Firefox OS phones and p2p communications in The Disconnected Ensemble.

I normally don’t come back from a conference this excited, but here’s just a number of things I heard about during the event and want to look at:

  • SuperCollider
  • Sonic PI
  • Gibber and Gibberish
  • using the webcam with motion detection in place of a ‘grid of light sensors’ (like Leafcutter John… but with less soldering)
  • the Leap Motion as an instrument (inspired by what Rebecca Fiebrink showed with her AI-learning based instrument builder)
  • exposing experiments into the window global object so they can be scripted/augmented with bookmarklets or WebExtensions
  • also providing an interactive demo / playground page for some of my open source libraries
  • and clowncore! (or perhaps I don’t want to look into that, actually).

Conclusion: if you are interested in music making, and can attend next year, do so!

flattr this!

QMOFirefox 43.0 Beta 7 Testday, November 27th

Hi, mozillian! Friday, November 27th, we will host a new Testday for Firefox 43.0 Beta 7, I bet you did not see this coming 😀 We will have fun testing Hello (I encourage you to engage our moderators for 1 on 1 calls) and Migrations (give me my data back ugly browsers!). If you want to find out more information visit this etherpad.

You don’t need testing experience to take part in the testday so feel free to join the #qa IRC channel and the moderators will help if you have any questions.

This message will disappear in 3, 2, 1…well, just kiddin :) See you next Friday!