Mozilla Addons Blog: December 2015 Featured Add-ons

Pick of the Month: Fox Web Security

by Oleksandr
Fox Web Security is designed to automatically block known dangerous websites and unwanted content that is not suitable for children.

“This add-on is extremely fast and effective! You can say goodbye to porno sites, scams and viruses—now my web is absolutely safe.”

Featured: YouTube™ Flash-HTML5

by A Ulmer
YouTube™ Flash-HTML5 lets you play YouTube videos in either the Flash or the HTML5 player.

Featured: AdBlock for YouTube™

by AdblockLite
AdBlock for YouTube™ removes all ads from YouTube.

Featured: 1-Click YouTube Video Download

by The 1-Click YouTube Video Download Team
1-Click YouTube Video Download lets you download YouTube videos with a single click.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it in for the board’s consideration. We welcome you to submit your own add-on!

Karl Dubost: CSS prefixes and gzip compression

I was discussing with Mike how some Web properties are targeting only WebKit/Blink browsers (for their mobile sites) to the point that they do not add the standard properties for certain CSS features. We see that a lot in Japan, for example, but not only there.

We often see code like this:

    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;

which is easily fixed by adding the standard properties as well:

    min-height: 50px;
    display: -webkit-box;
    -webkit-box-align: center;
    -webkit-box-pack: center;
    padding-bottom: 3px;
    display: flex;
    align-items: center;
    justify-content: center;

It would also make the Web site more future-proof.

gzip Compression and CSS

Adding the standard properties costs a few extra bytes in the CSS. Mike wondered whether gzip's pattern-based compression would absorb most of that cost when the standard property sits right next to its prefixed version:

#foo {
-webkit-box-shadow: 1px 1px 1px red;
box-shadow: 1px 1px 1px red;
}

Pattern of compression for a CSS file

It seems to be working. Building on Mike's idea, I wondered whether the order was significant, so I tested by adding additional properties and changing their order:


#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
}


#foo {
background-color: #fff;
-webkit-box-shadow: 1px 1px 1px red;
box-shadow:1px 1px 1px red;
}


#foo {
-webkit-box-shadow: 1px 1px 1px red;
background-color: #fff;
box-shadow:1px 1px 1px red;
}

and then ran the same tests as Mike did.

Pattern of compression for a CSS file

Clearly the order matters: keeping the prefixed and standard properties next to each other helps gzip find text patterns to compress.

  • raw: 70 compressed:  98 gzip -c mike.prefix.css | wc -c
  • raw: 98 compressed: 100 gzip -c mike.both.css | wc -c
  • raw: 98 compressed: 106 gzip -c mike.both-order.css | wc -c

Flexbox and Gradients Drawbacks

For things like -webkit- flexbox and gradients, this doesn't help very much because the syntaxes are very different (see the first piece of code in this post), but for properties where the standard version is just the prefixed one minus the prefix, the order matters. It would be interesting to test this on real, long CSS files and not just a couple of properties.
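To get a feel for this on real stylesheets, a small script can compare raw and gzipped sizes directly. Here is a minimal TypeScript/Node sketch (the file names are placeholders; it assumes one file with only the prefixed properties and one with the standard properties added next to them):

    import { gzipSync } from "zlib";
    import { readFileSync } from "fs";

    // Placeholder file names: one stylesheet with only -webkit- properties,
    // one with the standard properties added right after the prefixed ones.
    const files = ["site.prefix.css", "site.both.css"];

    for (const name of files) {
      const css = readFileSync(name);       // raw bytes
      const compressed = gzipSync(css);     // gzip with default settings
      console.log(`${name}: raw ${css.length} bytes, gzipped ${compressed.length} bytes`);
    }

Running something like this over a full production stylesheet would show whether adjacent standard properties really do compress down to only a handful of extra bytes.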


Mozilla Addons Blog: De-coupling Reviews from Signing Unlisted Add-ons

tl;dr – By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews.

Over the past few days, there have been discussions around the first step of the add-on signing process, which involves a programmatic review of submissions by a piece of code known as the “validator”. The validator can trigger a manual review of submissions for a variety of reasons and halt the signing process, which can delay the release of an add-on because of the signing requirement that will be enforced in Firefox 43 and later versions.

There has been debate over whether the validator is useful at all, since it is possible for a malicious player to write code that bypasses it. We agree the validator has limitations; the reality is we can only detect what we know about, and there’s an awful lot we don’t know about. But the validator is only one component of a review process that we hope will make it easier for developers to ship add-ons, and safer for people to use them. It is not meant to be a catch-all malware detection utility; rather, it is meant to help developers get add-ons into the hands of Firefox users more expediently.

With that in mind, we are going to remove validation as a gating mechanism for unlisted add-ons. We want to make it easier for developers to ship unlisted add-ons, and will perform reviews independently of any signing process. By the end of this week (December 4th), we plan to completely automate the signing of unlisted add-ons and remove the trigger for manual reviews. This date is contingent on how quickly we can make the technical, procedural, and policy changes required to support this. The add-ons signing API, introduced earlier this month, will allow for a completely automated signing process, and will be used as part of this solution.

We’ll continue to require developers to adhere to the Firefox Add-ons policies outlined on MDN, and would ask that they ensure their add-ons conform to those policies prior to submitting them for signing. Developers should also be familiar with the Add-ons Reviewer Guide, which outlines some of the more popular reasons an add-on would fail a review and be subject to blocklisting.

I want to thank everyone for their input and insights over the last week. We want to make sure the experience with Firefox is as painless as possible for Add-on developers and users, and our goals have never included “make life harder”, even if it sometimes seems that way. Please continue to speak out, and feel free to reach out to me or other team members directly.

I’ll post a more concrete overview of the next steps as they’re available, and progress will be tracked in bug 1229197. Thanks in advance for your patience.


Chris AtLee: MozLando Survival Guide

MozLando is coming!

I thought I would share a few tips I've learned over the years of how to make the most of these company gatherings. These summits or workweeks are always full of awesomeness, but they can also be confusing and overwhelming.

#1 Seek out people

It's great to have a (short!) list of people you'd like to see in person. Maybe somebody you've only met on IRC, Vidyo, or Bugzilla?

Having a list of people you want to say "thank you" in person to is a great way to approach this. Who doesn't like to hear a sincere "thank you" from someone they work with?

#2 Take advantage of increased bandwidth

I don't know about you, but I can find it pretty challenging at times to get my ideas across in IRC or on an etherpad. It's so much easier in person, with a pad of paper or whiteboard in front of you. You can share ideas with people, and have a latency/lag-free conversation! No more fighting AV issues!

#3 Don't burn yourself out

A week of full days of meetings, code sprints, and blue sky dreaming can be really draining. Don't feel bad if you need to take a breather. Go for a walk or a jog. Take a nap. Read a book. You'll come back refreshed, and ready to engage again.

That's it!

I look forward to seeing you all next week!

Air Mozilla: Webdev Extravaganza: December 2015

Once a month, web developers from across the Mozilla community get together (in person and virtually) to share the cool stuff we've been working on.

Chris H-C: To-Order Telemetry Dashboards: dashboard-generator

Say you’ve been glued to my posts about Firefox Telemetry. You became intrigued by the questions you could answer and ask using actual data from actual users, and considered writing your own website using the single-API telemetry-wrapper.

However, you aren’t a web developer. You don’t like JavaScript. Or you’re busy. Or you don’t like reading READMEs on GitHub.

This is where dashboard-generator can step in to help out. Simply visit the website and build your own dashboard to your exacting specifications:


Choose your channel, version, and metric. “-Latest-” will ensure that the generated dashboard will always use the latest version in the selected channel when you reload that page. Otherwise, you might find yourself always looking at GC_MS values from beta 39.

If you are only interested in clients reporting from a particular application or operating system, or with a certain E10s setting, make your choices in Filters.

If you want a histogram like the “Histogram Dashboard”, make sure you select Histogram, then choose whether you want the ends of the histogram trimmed, whether (and how sensibly) you want to compare clients across particular settings, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

If you want an evolution plot like the “Evolution Dashboard”, select Evolution. From there, choose whether to use the build date or submission date of samples, how many versions back from the selected one you would like to graph the values over, and whether to sanitize the results so you only use data that is valid and has a lot of samples.

Your choices made, click “Add to Dashboard”. Then choose again! And again!

Make a mistake? Don’t worry, you can remove rows using the ‘-‘ buttons.

Not sure what it’ll look like when you’re done? Hit ‘Generate Dashboard’ and you’ll get a preview in CodePen showing what it will look like and giving you an opportunity to fiddle with the HTML, CSS, and JS.


When you see something you like in the CodePen, hit ‘Save’ and it’ll give you a URL you can use to collaborate with others, and an option to ‘Export’ the whole site for when you want to self-host.

If you find any bugs or have any requests, please file an issue ticket here. I’ll be using it to write an E10s dashboard in the near term, and hope you’ll use it, too!


Mozilla Fundraising: Mozilla’s New Donation Form Features

We’ve been redoing our donation form for this end-of-year campaign, and have a couple of major changes. We talked about this in a previous post. Stripe: Our first, and probably biggest, change is using Stripe to accept non-PayPal donations. This … Continue reading

Jan de Mooij: Testing Math.random(): Crushing the browser

(For tl;dr, see the Conclusion.)

A few days ago, I wrote about Math.random() implementations in Safari and (older versions of) Chrome using only 32 bits of precision. As I mentioned in that blog post, I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+. V8 has been using the same algorithm since last week. (Update Dec 1: WebKit is now also using XorShift128+!)

The most extensive RNG test is TestU01. It's a bit of a pain to run: to test a custom RNG, you have to compile the library and then link it to a test program. I did this initially for the SpiderMonkey shell but after that I thought it'd be more interesting to use Emscripten to compile TestU01 to asm.js so we can easily run it in different browsers.

Today I tried this and even though I had never used Emscripten before, I had it running in the browser in less than an hour. Because the tests can take a long time, it runs in a web worker. You can try it for yourself here.
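The worker setup itself is tiny. As a rough sketch (the worker script name and message format here are made up for illustration):

    // main.ts: run the long TestU01 batteries off the main thread so the page stays responsive.
    const worker = new Worker("testu01-worker.js");   // hypothetical worker script wrapping the asm.js build

    worker.onmessage = (event: MessageEvent) => {
      // e.g. progress updates or the final pass/fail summary posted back by the worker
      console.log("battery result:", event.data);
    };

    worker.postMessage({ battery: "SmallCrush" });     // hypothetical message shape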

I also wanted to test window.crypto.getRandomValues() but unfortunately it's not available in workers.

Disclaimer: browsers implement Math functions like Math.sin differently and this can affect their precision. I don't know if TestU01 uses these functions and whether it affects the results below, but it's possible. Furthermore, some test failures are intermittent so results can vary between runs.


TestU01 has three batteries of tests: SmallCrush, Crush, and BigCrush. SmallCrush runs only a few tests and is very fast. Crush and especially BigCrush have a lot more tests so they are much slower.


SmallCrush

Running SmallCrush takes about 15-30 seconds. It runs 10 tests with 15 statistics (results). Here are the number of failures I got:

  • Firefox Nightly: 1 failure (BirthdaySpacings)
  • Firefox with XorShift128+: 0 failures
  • Chrome 48: 11 failures
  • Safari 9: 1 failure (RandomWalk1 H)
  • Internet Explorer 11: 1 failure (BirthdaySpacings)
  • Edge 20: 1 failure (BirthdaySpacings)

Chrome/V8 failing 11 out of 15 is not too surprising. Again, the V8 team fixed this last week and the new RNG should pass SmallCrush.


Crush

The Crush battery of tests is much more time consuming. On my MacBook Pro, it finishes in less than an hour in Firefox but in Chrome and Safari it can take at least 2 hours. It runs 96 tests with 144 statistics. Here are the results I got:

  • Firefox Nightly: 12 failures
  • Firefox with XorShift128+: 0 failures
  • Chrome 48: 108 failures
  • Safari 9: 33 failures
  • Internet Explorer 11: 14 failures

XorShift128+ passes Crush, as expected. V8's previous RNG fails most of these tests and Safari/WebKit isn't doing too great either.


BigCrush

BigCrush didn't finish in the browser because it requires more than 512 MB of memory. To fix that I probably need to recompile the asm.js code with a different TOTAL_MEMORY value or with ALLOW_MEMORY_GROWTH=1.

Furthermore, running BigCrush would likely take at least 3 hours in Firefox and more than 6-8 hours in Safari, Chrome, and IE, so I didn't bother.

The XorShift128+ algorithm being implemented in Firefox and Chrome should pass BigCrush (for Firefox, I verified this in the SpiderMonkey shell).

About IE and Edge

I noticed Firefox (without XorShift128+) and Internet Explorer 11 get very similar test failures. When running SmallCrush, they both fail the same BirthdaySpacings test. Here's the list of Crush failures they have in common:

  • 11 BirthdaySpacings, t = 2
  • 12 BirthdaySpacings, t = 3
  • 13 BirthdaySpacings, t = 4
  • 14 BirthdaySpacings, t = 7
  • 15 BirthdaySpacings, t = 7
  • 16 BirthdaySpacings, t = 8
  • 17 BirthdaySpacings, t = 8
  • 19 ClosePairs mNP2S, t = 3
  • 20 ClosePairs mNP2S, t = 7
  • 38 Permutation, r = 15
  • 40 CollisionPermut, r = 15
  • 54 WeightDistrib, r = 24
  • 75 Fourier3, r = 20

This suggests the RNG in IE may be very similar to the one we used in Firefox (imported from Java decades ago). Maybe Microsoft imported the same algorithm from somewhere? If anyone on the Chakra team is reading this and can tell us more, it would be much appreciated :)

IE 11 fails 2 more tests that pass in Firefox. Some failures are intermittent and I'd have to rerun the tests to see if these failures are systematic.

Based on the SmallCrush results I got with Edge 20, I think it uses the same algorithm as IE 11 (not too surprising). Unfortunately the Windows VM I downloaded to test Edge shut down for some reason when it was running Crush so I gave up and don't have full results for it.


Conclusion

I used Emscripten to port TestU01 to the browser. Results confirm most browsers currently don't use very strong RNGs for Math.random(). Both Firefox and Chrome are implementing XorShift128+, which has no systematic failures on any of these tests.

Furthermore, these results indicate IE and Edge may use the same algorithm as the one we used in Firefox.

The Servo Blog: This Week In Servo 43

In the last two weeks, we landed 165 PRs in the Servo organization’s repositories.

The huge news from the last two weeks is that after some really serious efforts from across the team and community to handle the libc changes required, we’ve upgraded Rust compiler versions! This change is more exciting than usual because it switches us from our custom Rust compiler and onto the nightlies produced by the Rust team. The following upgrade was really quick!

Now that we have separate support for making try builds, we have added dzbarsky, ecoal95, KiChjang, ajeffrey, and waffles. Please nominate your local friendly contributor today!

Notable additions

  • notriddle made GitHub look better
  • ms2ger ran rustfmt and began cleaning up our code
  • bholley landed type system magic for the layout wrapper
  • frewsxcv implemented a compile time url parsing macro
  • dzbarsky implemented currentColor for Canvas
  • pcwalton improved ipc error reporting
  • simonsapin removed string-cache’s plugin usage
  • mbrubeck fixed hit testing for iframe content
  • jgraham and crzytrickster did lots of webdriver work
  • evilpie implemented the document.domain getter
  • waffles improved the feedback when trying to open a missing file
  • mfeckie added “last modified” information to our “good first PR” aggregator, Servo Starters
  • frewsxcv landed compile-time URL parsing
  • kichjang provided MIME types for file:// URLs
  • pcwalton split the engine into multiple sandboxed processes

New Contributors


Screencast of this post being submitted to Hacker News:



At the meeting two weeks ago we discussed intermittent test failures, using a mailing list vs. Discourse, the libcpocalypse, and our E-Easy issues. There was no meeting last week.

Air Mozilla: Mozilla Weekly Project Meeting, 30 Nov 2015

The Monday Project Meeting.

Kartikaya Gupta: Asynchronous scrolling in Firefox

In the Firefox family of products, we've had asynchronous scrolling (aka async pan/zoom, aka APZ, aka compositor-thread scrolling) in Firefox OS and Firefox for Android for a while - even though they had different implementations, with different behaviors. We are now in the process of taking the Firefox OS implementation and bringing it to all our other platforms - including desktop and Android. After much hard work by many people, including but not limited to :botond, :dvander, :mattwoodrow, :mstange, :rbarker, :roc, :snorp, and :tn, we finally have APZ enabled on the nightly channel for both desktop and Android. We're working hard on fixing outstanding bugs and getting the quality up before we let it ride the trains out to DevEdition, Beta, and the release channel.

If you want to try it on desktop, note that APZ requires e10s to be enabled, and is currently only enabled for mousewheel/trackpad scrolling. We do have plans to implement it for other input types as well, although that may not happen in the initial release.

Although getting the basic machinery working took some effort, we're now mostly done with that and are facing a different but equally challenging aspect of this change - the fallout on web content. Modern web pages have access to many different APIs via JS and CSS, and implement many interesting scroll-linked effects, often triggered by the scroll event or driven by a loop on the main thread. With APZ, these approaches don't work quite so well because inherently the user-visible scrolling is async from the main thread where JS runs, and we generally avoid blocking the compositor on main-thread JS. This can result in jank or jitter for some of these effects, even though the main page scrolling itself remains smooth. I picked a few of the simpler scroll effects to discuss in a bit more detail below - not a comprehensive list by any means, but hopefully enough to help you get a feel for some of the nuances here.

Smooth scrolling

Smooth scrolling - that is, animating the scroll to a particular scroll offset - is something that is fairly common on web pages. Many pages do this using a JS loop to animate the scroll position. Without taking advantage of APZ, this will still work, but can result in less-than-optimal smoothness and framerate, because the main thread can be busy with doing other things.

Since Firefox 36, we've had support for the scroll-behavior CSS property which allows content to achieve the same effect without the JS loop. Our implementation for scroll-behavior without APZ enabled still runs on the main thread, though, and so can still end up being janky if the main thread is busy. With APZ enabled, the scroll-behavior implementation triggers the scroll animation on the compositor thread, so it should be smooth regardless of load on the main thread. Polyfills for scroll-behavior or old-school implementations in JS will remain synchronous, so for best performance we recommend switching to the CSS property where possible. That way as APZ rolls out to release, you'll get the benefits automatically.
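As a rough sketch (not code from this post), preferring the declarative path and only falling back to a main-thread animation when it is unavailable could look like this:

    // Scroll the window to a vertical offset, preferring native smooth scrolling,
    // which APZ can animate on the compositor thread.
    function smoothScrollTo(top: number): void {
      if ("scrollBehavior" in document.documentElement.style) {
        window.scrollTo({ top, behavior: "smooth" });
        return;
      }
      // Fallback: a main-thread animation loop, which can jank if the page is busy.
      const start = window.pageYOffset;
      const startTime = performance.now();
      const duration = 300; // ms, arbitrary
      function step(now: number): void {
        const t = Math.min((now - startTime) / duration, 1);
        window.scrollTo(0, start + (top - start) * t);
        if (t < 1) {
          requestAnimationFrame(step);
        }
      }
      requestAnimationFrame(step);
    }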

Here is a simple example page that has a spinloop to block the main thread for 500ms at a time. Without APZ, clicking on the buttons results in a very janky/abrupt scroll, but with APZ it should be smooth.


Sticky elements

Another common paradigm seen on the web is "sticky" elements - they scroll with the page for a bit, and then turn into position:fixed elements after a point. Again, this is usually implemented with JS listening for scroll events and updating the styles on the elements based on the scroll offset. With APZ, scroll events are going to be delayed relative to what the user is seeing, since the scroll events arrive on the main thread while scrolling is happening on the compositor thread. This will result in glitches as the user scrolls.

Our recommended approach here is to use position:sticky when possible, which we have supported since Firefox 32, and which we have support for in the compositor. This CSS property allows the element to scroll normally but take on the behavior of position:fixed beyond a threshold, even with APZ enabled. This isn't supported across all browsers yet, but there are a number of polyfills available - see the resources tab on the Can I Use position:sticky page for some options.
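A feature-detection sketch (the selector, class name, and threshold below are made up for illustration) might look like this:

    const header = document.querySelector(".site-header") as HTMLElement; // hypothetical element

    if (CSS.supports("position", "sticky") || CSS.supports("position", "-webkit-sticky")) {
      // Let the compositor handle it; no scroll listener needed.
      header.style.position = "sticky";
      header.style.top = "0";
    } else {
      // Fallback: scroll events, which lag behind the visible position when APZ is enabled.
      window.addEventListener("scroll", () => {
        header.classList.toggle("stuck", window.scrollY > 200);
      });
    }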

Again, here is a simple example page that has a spinloop to block the main thread for 500ms at a time. With APZ, the JS version will be laggy but the position:sticky version should always remain in the right place.


Parallax

Oh boy. There are a lot of different ways to do this, but almost all of them rely on listening to scroll events and updating element styles based on that. For the same reasons as described in the previous section, implementations of parallax scrolling that are based on scroll events are going to lag behind the user's actual scroll position. Until recently, we didn't have a solution for this problem.

However, a few days ago :mattwoodrow landed compositor support for asynchronous scroll adjustments of 3D transforms, which allows a pure CSS parallax implementation to work smoothly with APZ. Keith Clark has a good writeup on how to do this, so I'm just going to point you there. All of his demo pages should scroll smoothly in Nightly with APZ enabled.

Unfortunately, it looks like this CSS-based approach may not work well across all browsers, so please make sure to test carefully if you want to try it out. Also, if you have suggestions on other methods of implementing parallax so that it doesn't rely on a responsive main thread, please let us know. For example, :mstange created one which we should be able to support in the compositor without too much difficulty.

Other features

I know that there are other interesting scroll-linked effects that people are doing or want to do on the web, and we'd really like to support them with asynchronous scrolling. The Blink team has a bunch of different proposals for browser APIs that can help with these sorts of things, including things like CompositorWorker and scroll customization. For more information and to join the discussion on these, please see the public-houdini mailing list. We'd love to get your feedback!

(Thanks to :botond and :mstange for reading a draft of this post and providing feedback.)

Gijs Kruitbosch: Did it land?

I wrote a thing to check if your patch landed/stuck. It’s on github because that’s what people seem to do these days. That means you can use it here:

Did it land?

The “point” of this mini-project is to be able to easily determine whether bug X made today’s nightly, or if bug Y landed in beta 5. Sometimes non-graph changelogs, such as the ones most easily accessible on hgweb, can be misleading (i.e. beta 5 was tagged after you landed, but on a revision from before you landed…). Plus, it’s boring to look up revisions manually in a bug, then look them up on hgweb, and then try to determine if revision A is in the ancestry tree for revision B. So I automated it.

Note that the tool doesn’t:

  • deal cleverly with backouts. It’ll give you revision hashes from the bug, but if it notices comments that seem to indicate something got backed out, it will be cautious about saying “yes, this landed”. If you know that you bounced once but the last revision(s) is/are definitely “enough” to have the fixes be considered “landed”, then you can just switch to looking up a revision instead of a bug, copy-paste the last hash, and try that one. With a bit of work it could probably expose the internal data about which commits landed before a nightly in the UI – the data is there!
  • use hg to extract the bug metadata. It’s dumb and just asks for a bug’s comments from bugzilla. Pull requests or other help about how to do this “properly” welcome.
  • deal cleverly with branching. If you select aurora/beta, it will look for commits that landed on aurora/beta, not for commits that landed on “earlier” trees and made their way down to aurora/beta with the regular train. This is not super hard to fix, I think, but I haven’t gotten around to it, and I don’t think it will be a very common case.
  • have a particularly nice UI. Feel free to send me pull requests to make it look better.

Andreas Tolfsen: WebDriver update from TPAC 2015

I came back from TPAC (the W3C’s Technical Plenary/Advisory Committee meeting week) earlier this month, where I attended the Browser Testing and Tools Working Group’s meetings on WebDriver.

Unlike previous meetings, this was the first time we had reasonably up-to-date specification text to discuss. That clearly paid off, because we were able to make some defining decisions on long-standing, controversial topics. It shows how important it is for assigned action items to be completed before a specification meeting, and to have someone with dedicated time to work on the spec.


Element visibility

The WG decided to punt the element visibility, or “displayedness”, concept to level 2 of the specification and in the meantime push for better visibility primitives in the platform. I’ve previously outlined in detail the reasons why it’s not just a bad idea—but impossible—for WebDriver to specify this concept. Instead we will provide a non-normative description of Selenium’s visibility atom in an appendix to give some level of consistency for implementors.

Fortunately Selenium’s visibility approximation atom can be implemented entirely in content JavaScript, which means it can be provided both in client bindings and as extension commands.

This does not mean we are giving up on visibility. There is general agreement in the WG that it is a desirable feature, but since it’s impossible to define naked eye visibility using existing platform APIs we call upon other WGs to help outline this. Visibility of elements in viewport is not a primitive that naturally fits within the scope of WebDriver.

Our decision has implications for element interactability, which is used to determine if you can interact with an element. This previously relied on the element visibility algorithm, but as an alternative to the tree traversal visibility algorithm we dismissed, we are experimenting with a somewhat naïve hit-testing alternative that takes the centre coordinates of the portion of the element inside the viewport and calls elementsAtPoint, ignoring elements that are opaque.
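For illustration only (this is not the normative algorithm, just the general shape of the idea expressed with standard DOM APIs):

    // Rough sketch of the "in-view centre point" hit-testing idea.
    function inViewCentre(el: Element): { x: number; y: number } | null {
      const rect = el.getBoundingClientRect();
      // Clip the element's rectangle to the viewport.
      const left = Math.max(rect.left, 0);
      const top = Math.max(rect.top, 0);
      const right = Math.min(rect.right, window.innerWidth);
      const bottom = Math.min(rect.bottom, window.innerHeight);
      if (right <= left || bottom <= top) {
        return null; // no part of the element is inside the viewport
      }
      return { x: (left + right) / 2, y: (top + bottom) / 2 };
    }

    function isInteractable(el: Element): boolean {
      const centre = inViewCentre(el);
      if (!centre) {
        return false;
      }
      // elementsFromPoint returns the elements under that point, topmost first.
      const hits = document.elementsFromPoint(centre.x, centre.y);
      return hits.includes(el);
    }

The real proposal additionally deals with how opaque elements at that point are treated, which this sketch glosses over.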

Attributes and properties

We had previously decided to make two separate commands for getting attributes and properties. This was controversial because it deviates from the behaviour of Selenium’s getAttribute, that conflates the DOM concepts of attributes and properties.

Because the WG decided to stick with David Burns’s proposal on special-casing boolean attributes, the good news is that the Selenium behaviour can be emulated using WebDriver primitives.

In practice this means that when Get Element Attribute is called for an element that carries a boolean attribute, this will return a string "true", rather than the DOM attribute value which would normally be an empty string. We return a string so that dynamically typed programming languages can evaluate this into something truthful, and because there is a belief in the WG that an empty string return value for e.g. <input disabled>, would be confusing to users.

Because we don’t know which attributes are boolean attributes from the DOM’s point of view, it’s not the cleanest approach since it means we must maintain a hard-coded list in WebDriver. It will also arguably cause problems for custom elements, because it is not given that they mirror the default attribute values.
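As a rough sketch of how the Selenium-style behaviour can be emulated on top of these primitives (the boolean-attribute list here is truncated and illustrative):

    // A few of the HTML boolean attributes; the real list is much longer.
    const BOOLEAN_ATTRIBUTES = new Set(["disabled", "checked", "selected", "readonly", "required"]);

    function getElementAttribute(el: Element, name: string): string | null {
      if (BOOLEAN_ATTRIBUTES.has(name.toLowerCase())) {
        // <input disabled> has an empty string as its DOM attribute value,
        // but the WebDriver behaviour described above returns "true" when
        // the attribute is present (and null when it is absent).
        return el.hasAttribute(name) ? "true" : null;
      }
      return el.getAttribute(name);
    }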

Test suite

One of the requirements for moving to REC is writing a decent test suite. WebDriver is in the fortunate position that it’s an evolution of existing implementations, each with their own body of tests, many of which we can probably re-purpose. One of the challenges with the existing tests is that the harness does not easily allow for testing the lower level details of the protocol.

So far I have been able to make a start with merging Microsoft’s pending pull requests. Not all the tests merged match what the specification mandates any longer, but we decided to do this before any substantial harness work is done, to eliminate the need for Microsoft to maintain their own fork of Web Platform Tests.


Microsoft and Mozilla are both working on implementations, so there is a pressing need for a test suite that reflects the realities of the specification. Vital chapters, such as Element Retrieval and Interactions, are either undefined or in such a poor state that they should be considered unimplementable.

Despite these reservations, I’d say the WebDriver spec is in a better state than ever before. At TPAC we also had meetings about possible future extensions, including permissions and how WebDriver might help facilitate testing of WebBluetooth as well as other platform APIs.

The WG is concurrently pushing for WebDriver to be used in Web Platform Tests to automate the “non-automatable” test cases that require human interaction or privileged access. In fact, there’s an ongoing Quarter of Contribution project sponsored by Mozilla to work on facilitating WebDriver in a sort of “meta-circular” fashion, directly from testharness.js tests.

But more on that later. (-:

Air Mozilla: At your service! Practical uses of Service Workers (in Spanish)

Service Workers represent one of the newest and most revolutionary concepts of the Web. From the Firefox OS team, we try to unravel the...

This Week In Rust: This Week in Rust 107

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • Diesel. A safe, extensible ORM and Query Builder for Rust.
  • Chomp. Fast parser combinator library for Rust.
  • libkeccak-tiny. A tiny implementation of SHA-3, SHAKE, Keccak, and sha3sum in Rust.
  • Waitout. Simple interface for tracking and awaiting the completion of multiple asynchronous tasks.

Updates from Rust Core

69 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • androm3da
  • ebadf
  • Ivan Stankovic
  • Jack Fransham
  • Jeffrey Seyfried
  • Josh Austin
  • Kevin Yeh
  • Matthias Bussonnier
  • Philipp Matthias Schäfer
  • xd1le

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is Chrono, a crate that offers very handy timezone-aware Duration and Date/Time types.

Thanks to Ygg01 for the suggestion. Submit your suggestions for next week!

Emma Irwin: Revisiting the Word ‘Recognition’ in #FOSS and the Dream of Open Credentials

I think a lot about ways we can better surface Participation as real-world offering for professional and personal development.

And this tweet from Laura  triggered all kinds of thinking.

Most thinking was reminiscent at first. 

Working on open projects teaches relevant skills, helps establish mentorship relationships and surfaces hidden strengths and talents. It’s my own story.

And then reflective..

The reason we’ve struggled to make participation a universally recognized opportunity for credential building is our confusion over the term ‘recognition’. In Open Source we use this term for several similar, yet entirely different, meanings:

* Gratitude (“hey thanks for that !”)

* You’re making progress (“great work, keep going! “)

* Appreciation (“we value you”)

* You completed or finished something (congratulations you did it!)

In my opinion, many experiments with badges for FOSS participation have actually compounded the problem: if I am issued a badge I didn’t request (and I have many of these), or don’t value (I have many of these too), we’re using the process as a prod and not as a genuine acknowledgement of accomplishment. That’s OK – gamification is OK – but it’s not credential building in the real-world sense; we need to separate these two ‘use cases’ to move forward with open credentials.

And I kept thinking…

The Drupal community already does a good job of helping people surface real-world credentials. Member profiles expose contribution and community leadership, while business profiles demonstrate (and advertise) their commitment through project sponsorship and contribution. Drupal also has a fantastic series of project ladders, which I’ve always thought would be a great way to experiment with badges, designing connected learning experiences through participation. Drupal ladders definitely inspired my own work around a ‘Participation Standard’, and I wonder how projects can work together a bit more on defining a standard for ‘Distributed Recognition’, even between projects like Mozilla, Drupal and Fedora.

And the relentless thinking continued…

I then posed the question in our Discourse, asking what ‘Open Credentials’ could look like for Participation at Mozilla. There are some great responses so far, including solutions like Makerbase, and a reminder of how hard it currently is to be ‘seen’ in the Mozilla community, and thus how important this topic actually is.









And the thinking will continue, hopefully as a growing group ….

What I do know is that we have to stop using the word recognition as a catch-all, and that there is a huge opportunity to build Open Credentials through Participation; the leadership framework might be a way to test what that looks like.

If you have opinions, I would love to have you join our discussion thread!

Image by jingleslenobel, CC BY-NC-ND 2.0


Robert O'Callahan: Even More rr Replay Performance Improvements!

While writing my last blog post I realized I should try to eliminate no-op reschedule events from rr traces. The patch turned out to be very easy, and the results are impressive:

Now replay is faster than recording in all the benchmarks, and for Mochitest is about as fast as normal execution. (As discussed in my previous post, this is probably because the replay excludes some code that runs during normal execution: the test harness and the HTTP server.) Hopefully this turns into real productivity gains for rr users.

Adam Roach: Better Living through Tracking Protection

There's been a bit of a hullabaloo in the press recently about blocking of ads in web browsers. Very little of the conversation is new, but the most recent round of discussion has been somewhat louder and more excited, in part because of Apple's recent decision to allow web content blockers on the iPhone and iPad.

In this latest round of salvos, the online ad industry has taken a pretty brutal beating, and key players appear to be rethinking long-entrenched strategies. Even the Interactive Advertising Bureau -- who has referred to ad blocking as "robbery" and "an extortionist scheme" -- has gone on record to admit that Internet ads got so bad that users basically had no choice but to start blocking them.

So maybe things will get better in the coming months and years, as online advertisers learn to moderate their behavior. Past behavior shows a spotty track record in this area, though, and change will come slowly. In the meanwhile, there are some pretty good tools that can help you take back control of your web experience.

How We Got Here

While we probably all remember the nadir of online advertising -- banners exhorting users to "punch the monkey to win $50", epilepsy-inducing ads for online gambling, and out-of-control popup ads for X10 cameras -- the truth is that most ad networks have already pulled back from the most obvious abuses of users' eyeballs. It would appear that annoying users into spending money isn't a winning strategy.

Unfortunately, the move away from hyperkinetic ads to more subtle ones was not a retreat as much as a carefully calculated refinement. Ads nowadays are served by colossal ad networks with tendrils on every site -- and they're accompanied by pretty sophisticated code designed to track you around the web.

The thought process that went into this is: if we can track you enough, we learn a lot about who you are and what your interests are. This is driven by the premise that people will be less annoyed by ads that actually fit their interests; and, at the same time, such ads are far more likely to convert into a sale.

Matching relevant ads to users was a reasonable goal. It should have been a win-win for both advertisers and consumers, as long as two key conditions were met: (1) the resulting system didn't otherwise ruin the web browsing experience, and (2) users who don't want to have their personal movements across the web could tell advertisers not to track them, and have those requests honored.

Neither is true.

Tracking Goes off the Rails

Just like advertisers went overboard with animated ads, pop-ups, pop-unders, noise-makers, interstitials, and all the other overtly offensive behavior, they've gone overboard with tracking.

You hear stories of overreach all the time: just last night, I had a friend recount how she got an email (via Gmail) from a friend that mentioned front-loaders, and had to suffer through weeks of banner ads for construction equipment on unrelated sites. The phenomenon is so bad and so well-known, even The Onion is making fun of it.

Beyond the "creepy" factor of having ad agencies building a huge personal profile for you and following you around the web to use it, user tracking code itself has become so bloated as to ruin the entire web experience.

In fact, on popular sites such as CNN, code to track users accounts for somewhere on the order of three times as much memory usage as the actual page content: a recent demo of the Firefox memory tracking tool found that 30 MB of the 40 MB used to render a news article on CNN was consumed by code whose sole purpose was user tracking.

This drags your browsing experience to a crawl.

Ad Networks Know Who Doesn't Want to be Tracked, But Don't Care.

Under the assumption that advertisers were actually willing to honor user choice, there has been a large effort to develop and standardize a way for users to indicate to ad networks that they didn't want to be tracked. It's been implemented by all major browsers, and endorsed by the FTC.

For this system to work, though, advertisers need to play ball: they need to honor user requests not to be tracked. As it turns out, advertisers aren't actually interested in honoring users' wishes; as before, they see a tiny sliver of utility in abusing web users with the misguided notion that this somehow translates into profits. Attempts to legislate conformance were made several years ago, but these never really got very far.

So what can you do? The balance of power seems so far out of whack that consumers have little choice than to sit back and take it.

You could, of course, run one of any number of ad blockers -- Adblock Plus is quite popular -- but this is a somewhat nuclear option. You're throwing out the slim selection of good players with the bad ones; and, let's face it, someone's gotta provide money to keep the lights on at your favorite website.

Even worse, many ad blockers employ techniques that consume as much (or more) memory and as much (or more) time as the trackers they're blocking -- and Adblock Plus is one of the worst offenders. They'll stop you from seeing the ads, but at the expense of slowing down everything you do on the web.

What you can do

When people ask me how to fix this, I recommend a set of three tools to make their browsing experience better: Firefox Tracking Protection, Ghostery, and (optionally) Privacy Badger. (While I'm focusing on Firefox here, it's worth noting that both Ghostery and Privacy Badger are also available for Chrome.)

1. Turn on Tracking Protection

Firefox Tracking Protection is automatically activated in recent versions of Firefox whenever you enter "Private Browsing" mode, but you can also manually turn it on to run all the time. If you go to the URL bar and type in "about:config", you'll get into the advanced configuration settings for Firefox (you may have to agree to be careful before it lets you in). Search for a setting called "privacy.trackingprotection.enabled", and then double-click next to it where it says "false" to change it to "true." Once you do that, Tracking Protection will stay on regardless of whether you're in private browsing mode.

Firefox tracking protection uses a curated list of sites that are known to track you and known to ignore the "Do Not Track" setting. Basically, it's a list of known bad actors. And a study of web page load times determined that just turning it on improves page load times by a median of 44%.

2. Install and Configure Ghostery

There's also an add-on that works similarly to Tracking Protection, called Ghostery. Install it, and then go into its configuration (type "about:addons" into your URL bar, and select the "Preferences" button next to Ghostery). Now, scroll down to "blocking options," near the bottom of the page. Under the "Trackers" tab, click on "select all." Then, uncheck the "widgets" category. (Widgets can be used to track you, but they also frequently provide useful functions for a web page: they're a mixed bag, but I find that their utility outweighs their cost.)

Ghostery also uses a curated list, but it's far more aggressive in what it considers to be tracking. It also allows you fine-grained control over what you block, and lets you easily whitelist sites, if you find that they're not working quite right with all the potential trackers removed.

Poke around at the other options in there, too. It's really a power user's tracker blocker.

3. Optionally, Install Privacy Badger

Unlike tracking protection and Ghostery, Privacy Badger isn't a curated list of known trackers. Instead, it's a tool that watches what webpages do. When it sees behavior that could be used to track users across multiple sites, it blocks that behavior from ever happening again. So, instead of knowing ahead of time what to block, it learns what to block. In other words, it picks up where the other two tools leave off.

This sounds really good on paper, and does work pretty well in practice. I ran with Privacy Badger turned on for about a month, with mostly good results. Unfortunately, its "learning" can be a bit aggressive, and I found that it broke sites far more frequently than Ghostery. So the trade-off here: if you run Privacy Badger, you'll have much better protection against tracking, but you'll also have to be alert to the kinds of defects that it can introduce, and go turn it off when it interferes with what you're trying to do. Personally, I turned it off a few months ago, and haven't bothered to reactivate it yet; but I'll be checking back periodically to see if they've tuned their algorithms (and their yellow-list) to be more user-friendly.

If you're interested in giving it a spin, you can download Privacy Badger from the website.

Andy McKay: Documentation debt

There's lots of talk about technical debt, but documentation debt is just as real and similar. Every line of documentation written needs maintaining and keeping up to date... and the chances are that over time it will slowly become more and more outdated and useless.

This does harm when the documentation actively misleads people, causing them to make wrong decisions and costing them time. You've probably all seen a person come onto a mailing list or group chat wondering why something doesn't work and getting frustrated. Followed by the answer "oh that documentation is out of date".

That frustration is real and can be harmful to your project.

  • Avoid documenting stuff that doesn't need to be documented, especially if it is documented elsewhere. For example: if your project is on GitHub and follows standard practices, you shouldn't really need to document that commit process.

  • Avoid the trap of "it might be useful to someone". It just might; however, taking that to its extreme means you can't distinguish what to document. In code terms this is similar to the "let's make this an Adapter/Factory/Engine/Class of boggling complexity because in the future someone might want to..." problem.

  • Review your documentation and be merciless about cutting things.

  • Apply a review process to your documentation. Wikis are fine for collaboration and spontaneity, but they might not be suitable for your project's documentation. One example is to use source control tools to store your documentation and apply a similar review process.

  • Finally, if a document contains critical information, putting it at the top of the page is pointless if the page runs for more than one screen length. For example, consider this page vs. this page; both are deprecated.

Just as you spend time reviewing technical debt, I recommend reviewing and cleaning documentation debt too.

John O'Duinn: “Distributed” ER#3 now available!

Earlier this week, just before the US Thanksgiving holidays, we shipped Early Release #3 of my “Distributed” book-in-progress.

Early Release #3 (ER#3) adds two new chapters – Ch. 1 (remoties trends) and Ch. 2 (the real cost of an office) – plus many tweaks/fixes to the previous chapters. There are now a total of 9 chapters available (1, 2, 4, 6, 7, 8, 10, 13, 15), arranged into three sections. (These chapters were the inspiration for recent presentations and blog posts here, here and here.)

ER#3 comes one month after ER#2. You can buy ER#3 by clicking here, or by clicking on the thumbnail of the book cover. Anyone who already has ER#1 or ER#2 should get prompted with a free update to ER#3. (If you don’t, please let me know!) And yes, you’ll get an update when ER#4 comes out next month.

Please let me know what you think of the book so far. Your feedback helps shape/scope the book! Is there anything I should add/edit/change? Anything you found worked for you, as a “remotie” or person in a distributed team, which you wish you knew when you were starting? If you were going to set up a distributed team today, what would you like to know before you started?

Thank you to everyone who’s already sent me feedback/opinions/corrections – all changes that are making the book better. I’m merging changes/fixes as fast as I can – some days are fixup days, some days are new writing days. All great to see coming together. To make sure that any feedback doesn’t get lost or caught in spam filters, it’s best to email a special email address (feedback at oduinn dot com) although feedback via twitter and linkedin works also. Thanks again to everyone for their encouragement, proof-reading help and feedback so far.

Now, it’s time to get back to typing. ER#4 is coming soon!


Robert O'Callahan: rr Replay Performance Improvements

I've been spending a lot of time using rr, as have some other Mozilla developers, and it occurred to me a small investment in speeding up the debugging experience could pay off in improved productivity quite quickly. Until recently no-one had ever really done any work to speed up replay, so there was some low-hanging fruit.

During recording we avoid trapping from tracees to the rr process for common syscalls (read, clock_gettime and the like) with an optimization we call "syscall buffering". The basic idea is that the tracee performs the syscall "untraced", we use a seccomp-bpf predicate to detect that the syscall should not cause a ptrace trap, and when the syscall completes the tracee copies its results to a log buffer. During replay we do not use seccomp-bpf; we were using PTRACE_SYSEMU to generate a ptrace trap for every syscall and then emulating the results of all syscalls from the rr process. The obvious major performance improvement is to avoid generating ptrace traps for buffered syscalls during replay, just as we do during recording.

This was tricky to do while preserving our desired invariants that control flow is identical between recording and replay, and data values (in application memory and registers) are identical at all times. For example consider the recvmsg system call, which takes an in/out msg parameter. During recording syscall wrappers in the tracee would copy msg to the syscall log buffer, perform the system call, then copy the data from the log buffer back to msg. Hitherto, during replay we would trap on the system call and copy the saved buffer contents for that system call to the tracee buffer, whereupon the tracee syscall wrappers would copy the data out to msg. To avoid trapping to rr for a sequence of such syscalls we need to copy the entire syscall log buffer to the tracee before replaying them, but then the syscall wrapper for recvmsg would overwrite the saved output when it copies msg to the buffer! I solved this, and some other related problems, by introducing a few functions that behave differently during recording and replay while preserving control flow and making sure that register values only diverge temporarily and only in a few registers. For this recvmsg case I introduced a function memcpy_input_parameter which behaves like memcpy during recording but is a noop during replay: it reads a global is_replay flag and then does a conditional move to set the source address to the destination address during replay.

Another interesting problem is recapturing control of the tracee after it has run a set of buffered syscalls. We need to trigger some kind of ptrace trap after reaching a certain point in the syscall log buffer, without altering the control flow of the tracee. I handled this by generating a large array of stub functions (each only one byte, a RET instruction) and after processing the log buffer entry ending at offset O, we call stub function number O/8 (each log record is at least 8 bytes long). rr identifies the last log entry after which it wants to stop the tracee, and sets a breakpoint at the appropriate stub function.

It took a few late nights and a couple of half-days of debugging but it works now and I landed it on master. (Though I expect there may be a few latent bugs to shake out.) The results are good:

This shows much improved replay overhead for Mochitest and Reftest, though not much improvement on Octane. Mochitest and Reftest are quite system-call intensive so our optimization gives big wins there. Mochitests spend a significant amount of time in the HTTP server, which is not recorded by rr, and therefore zero-overhead replay could actually run significantly faster than normal execution, so it's not surprising we're already getting close to parity there. Octane replay is dominated by SCHED context-switch events, each one of which we replay using relatively expensive trickery to context-switch at exactly the right moment.

For rr cognoscenti: as part of eliminating traps for replay of buffered syscalls, I also eliminated the traps for the ioctls that arm/disarm the deschedule-notification events. That was relatively easy (just replace those syscalls with noops during replay) and actually simplified code since we don't have to write those events to the trace and can wholly ignore them during replay.

There's definitely more that can be squeezed out of replay, and probably recording as well. E.g. currently we record a SCHED event every time we try to context-switch, even if we end up rescheduling the thread that was already running (which is common). We don't need to do that, and eliminating those events would reduce syscallbuf flushing and also the number of ptrace traps taken during replay. This should hugely benefit Octane. I'm trying to focus on easy rr improvements with big wins that are likely to pay off for Mozilla developers in the short term; it's difficult to know whether any given improvement is in that category, but I think SCHED elision during recording probably is. (We used to elide recorded SCHED events during replay, but that added significant complexity to reverse execution so I took it out.)

Chris AtLee: Firefox builds on the Taskcluster Index


You may have heard rumblings that FTP is going away...


Over the past few quarters we've been working to migrate our infrastructure off of the ageing "FTP" [1] system to Amazon S3.

We've maintained some backwards compatibility for the time being [2], so that current Firefox CI and release builds are still available via, or preferably, since we don't support the ftp protocol any more!

Our long term plan is to make the builds available via the Taskcluster Index, and stop uploading builds to

How do I find my builds???


This is a pretty big change, but we really think it will make it easier to find the builds you're looking for.

The Taskcluster Index allows us to attach multiple "routes" to a build job. Think of a route as a kind of hierarchical tag, or directory. Unlike regular directories, a build can be tagged with multiple routes, for example, according to the revision or buildid used.

A great tool for exploring the Taskcluster Index is the Indexed Artifact Browser

Here are some recent examples of nightly Firefox builds:

The latest win64 nightly Firefox build is available via the
gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt route

This same build (as of this writing) is also available via its revision:


Or the date:


The artifact browser is simply an interface on top of the index API. Using this API, you can also fetch files directly using wget, curl, python requests, etc.: [3]
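For example, a small script along these lines can resolve a route to one of its artifacts; the index endpoint layout and the artifact name below are assumptions for illustration, so double-check them against the current Taskcluster documentation:

    // Sketch: resolve an index route and download an artifact from it.
    const route = "gecko.v2.mozilla-central.nightly.latest.firefox.win64-opt";
    const artifact = "public/build/firefox-45.0a1.en-US.win64.zip"; // hypothetical artifact name

    async function fetchLatestNightly(): Promise<void> {
      // Assumed endpoint shape; verify against the Taskcluster index docs.
      const url = `https://index.taskcluster.net/v1/task/${route}/artifacts/${artifact}`;
      const response = await fetch(url); // the index service redirects to the artifact itself
      console.log(response.status, response.url);
    }

    fetchLatestNightly();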

Similar routes exist for other platforms, for B2G and mobile, and for opt/debug variations. I encourage you to explore the gecko.v2 namespace, and see if it makes things easier for you to find what you're looking for! [4]

Can't find what you want in the index? Please let us know!

[1]A historical name referring back to the time when we used the FTP protocol to serve these files. Today, the files are available only via HTTP(S).
[2]In fact, all Firefox builds right now are uploaded to S3; we've just had to implement some compatibility layers to make S3 appear in many ways like the old FTP service.
[3]Yes, you need to know the version number...for now. We're considering stripping that from the filenames; if you have thoughts on this, please get in touch!
[4]Ignore the warning on the right about "Task not found" - that just means there are no tasks with that exact route; kind of like an empty directory.

Jan de Mooij: Math.random() and 32-bit precision

Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out that V8/Chrome used a pretty bad algorithm to generate random numbers and, as of this week, V8 uses a better algorithm.

The article also mentioned the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, the past days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+, see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).

While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak. (Update Dec 1: WebKit is now also using XorShift128+, so this doesn't apply to newer Safari/WebKit versions.)

Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides that by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate 0.00005% of all numbers an ideal RNG can generate.
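
To make that concrete, here's a small illustrative snippet of my own (not the testcase from this post): every value such an RNG can produce is an exact multiple of 2^-32.

    // Simulate an RNG with only 32 bits of precision: take a 32-bit
    // unsigned integer and divide it by UINT_MAX + 1 (2^32).
    function random32() {
        // Math.random() is just a stand-in source of bits for this demo.
        var n = Math.floor(Math.random() * 0x100000000); // 32-bit uint
        return n / 0x100000000;
    }

    // Every result is an exact multiple of 2^-32, so scaling by 2^32
    // always yields an integer...
    console.log(Number.isInteger(random32() * 0x100000000)); // true

    // ...whereas an RNG with full double precision almost never does.
    console.log(Number.isInteger(Math.random() * 0x100000000)); // usually false with a good RNG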

I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.

Here's the output I got in Firefox (old algorithm) after generating 115 billion numbers:

And a Firefox build with XorShift128+:

In Chrome (before Math.random was fixed):

And in Safari:

These pics clearly show the difference in precision.


Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.

Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.

Support.Mozilla.OrgWhat’s up with SUMO – 27th November

Hello, SUMO Nation!

Have you had a good week so far? We hope you have! Here are a few pertinent updates from the world of SUMO for your reading pleasure.

Welcome, new contributors!

…at least that’s the only one we know of! So, if you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Scribe & Phoxuponyou – for their constant contributions on the support forum – cheers!
  • Costenslayer – for offering to help us with cloning our YT videos to AirMo – thanks!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 30th of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum


  • Firefox for Desktop
    • GTK+3 is required for Firefox on Linux as of Beta 43 (the full release comes on the 15th of December).
  • Firefox for iOS
    • Version 1.2 is out – please go ahead and test it on your devices!
    • Version 2.0 is going to happen in 2016 (we just don’t know when exactly… yet), and should add synchronization from iOS to Desktop and/or Android.
  • Firefox OS
    • Peace and quiet makes for the start of a good weekend!
Thank you for reading all the way to the end! We hope you join us on Monday (and beyond that day), and wish you a great, relaxing weekend. Take it easy and stay foxy!

Mozilla FundraisingA/B Test: Three-page vs One-page donation flow

Here are the results of our first A/B test from our 2015 End of Year fundraising campaign. Three page flow (our control) In our control (above) credit card donations are processed (via Stripe) from within our user interface, in a … Continue reading

Dustin J. MitchellRemote GPG Agent

Private keys should be held close -- the fewer copies of them, and the fewer people have access to them, the better. SSH agents, with agent forwarding, do a pretty good job of this. For quite a long time, I've had my SSH private key stored only on my laptop and desktop, with a short script to forward that agent into my remote screen sessions. This works great: while I'm connected and my key is loaded, I can connect to hosts and push to repositories with no further interaction. But once I disconnect, the screen sessions can no longer access the key.

Doing the same for GPG keys turns out to be a bit harder, not helped by the lack of documentation from GnuPG itself. In fact, as far as I can tell, it was impossible before GnuPG 2.1, and a great deal more difficult before OpenSSH 6.7.

I don't want exactly the same thing, anyway: I only need access to my GPG private keys once every few days (to sign a commit, for example). So I'd like to control exactly when I make the agent available.

The solution I have found involves this shell script, named remote-gpg:

#! /bin/bash

set -e

host=$1
if [ -z "$host" ]; then
    echo "Supply a hostname"
    exit 1
fi

# remove any existing agent socket (in theory `StreamLocalBindUnlink yes` does this,
# but in practice, not so much)
ssh $host rm -f ~/.gnupg/S.gpg-agent
ssh -t -R ~/.gnupg/S.gpg-agent:.gnupg/S.gpg-agent-extra $host \
    sh -c 'echo; echo "Perform remote GPG operations and hit enter"; \
        read; \
        rm -f ~/.gnupg/S.gpg-agent'

The critical bit of configuration was to add the following to .gnupg/gpg-agent.conf on my laptop and desktop:

extra-socket /home/dustin/.gnupg/S.gpg-agent-extra

and then kill the agent to reload the config:

gpg-connect-agent reloadagent /bye

The idea is this: the local GPG agent (on the laptop or desktop) publishes this "extra" socket specifically for forwarding to remote machines. The set of commands accepted over the socket is limited, although it does include access to the key material. The SSH command then forwards the socket (this functionality was added in OpenSSH 6.7) to the remote host, after first deleting any existing socket. That command displays a prompt, waits for the user to signal completion of the operation, then cleans up.

To use this, I just open a new terminal or local screen window and run remote-gpg euclid. If my key is not already loaded, I'm prompted to enter the passphrase. GPG even annotates the prompt to indicate that it's from a remote connection. Once I've finished with the private keys, I go back to the window and hit enter.

Air MozillaParticipation Call, 26 Nov 2015

Participation Call The Participation Call helps connect Mozillians who are thinking about how we can achieve crazy and ambitious goals by bringing new people into the project...

Air MozillaReps weekly, 26 Nov 2015

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Armen ZambranoMozhginfo/Pushlog client released

If you've ever spent time trying to query metadata from hg with regards to revisions, you can now use a Python library we've released to do so.

In bug 1203621 [1], our community contributor @MikeLing has helped us release the module we had written for Mozilla CI tools.

You can find the pushlog_client package here [3] and the code here [4].

Thanks MikeLing!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Andy McKayAdd-ons at Mozlando

If you are going to Orlando for the Mozilla summit and want to talk add-ons, we want to talk to you. But if you look at the schedule, we haven't scheduled a whole pile of meetings ourselves, even though there are 430 meetings (by my quick count) scheduled overall.

In fact, we've got one meeting I'd like people who are interested in learning about add-ons to come to: the add-ons open house and demos. We'll be talking road map, 2016 planning, a few demos, and then getting a chat going. You should come.

If you want to talk with us about any add-ons subject at any other time, we'd love to talk, and I'm sure we can work around each other's schedules. You can find me online or wander into the Firefox home room and look for the add-ons sign (yes, I plan on making one). We can set up quick ad-hoc meetings on pretty much anything.

There's a reason for that: the productivity and happiness I've encountered at work weeks is inversely related to the number of meetings I have. At Portland I was triple-booked at one point. At Whistler I had few meetings, and most of the ones I did have were relaxed, outdoors and in the sun.

Whistler ended up being a much more positive experience for me and my team.

So here's the plan for my team:

  • meet and interact with the rest of our team
  • learn what people outside our group are doing
  • learn where Mozilla is going
  • don't feel under pressure to attend any meetings

We'll be hacking on code when we are hanging out, the criteria being:

  • nothing that is on a critical path
  • the hack must involve working with other people
  • don't feel under pressure to complete that code

That's about it, just let the team flow, don't hold it back with meetings.

Because let's face it, if you want to have a meeting on a subject, we can do that any time with video conferencing. I look forward to seeing you there.

Nick CameronMacro hygiene in all its guises and variations

Note, I'm not sure of the terminology for some of this stuff, so I might be making horrible mistakes, apologies.

Usually, when we talk about macro hygiene we mean the ability to not confuse identifiers with the same name but from different contexts. This is a big and interesting topic in its own right and I'll discuss it in some depth later. Today I want to talk about other kinds of macro hygiene.

There is hygiene when naming items (I've heard this called "path hygiene", but I'm not sure if that is a standard term). For example,

mod a {
    fn f() {}

    pub macro foo() {
        f();
    }
}

fn main() {
    a::foo!();
}

The macro use will expand to f(), but there is no f in scope. Currently this will be a name resolution error. Ideally, we would remember the scope where the call to f came from and look up f in that scope.

I believe that switching our hygiene algorithm to scope sets and using the scope sets for name resolution solves this issue.

Privacy hygiene

In the above example, f is private to a, so even if we can name it from the expansion of foo, we still can't access it due to its visibility. Again, scope sets comes to the rescue. The intuition is that we check privacy from the scope used to find f, not from its lexical position. There are a few more details than that, but nothing that will make sense before explaining the scope sets algorithm in detail.

Unsafety hygiene

The goal here is that when checking for unsafety, whether or not we are allowed to execute unsafe code depends on the context where the code is written, not where it is expanded. For example,

unsafe fn foo(x: i32) {}

macro m1($x: expr) {
    foo($x)
}

macro m2($x: expr) {
    $x
}

macro m3($x: expr) {
    unsafe {
        foo($x)
    }
}

macro m4($x: expr) {
    unsafe {
        $x
    }
}

fn main() {
    foo(42); // bad
    unsafe {
        foo(42);  // ok
    }
    m1(42); // bad
    m2(foo(42)); // bad
    m3(42); // ok
    m4(foo(42)); // bad
    unsafe {
        m1(42); // bad
        m2(foo(42)); // ok
        m3(42); // ok
        m4(foo(42)); // ok
    }
}

We could in theory use the same hygiene information as for the previous kinds. But when checking unsafety we are checking expressions, not identifiers, and we only record hygiene info for identifiers.

One solution would be to track hygiene for all tokens, not just identifiers. That might not be too much effort since groups of tokens passed together would have the same hygiene info. We would only be duplicating indices into a table, not more data than that. We would also have to track or be able to calculate the safety-status of scopes.

Alternatively, we could introduce a new kind of block into the token tree system - a block which can't be written by the user, only created by expansion or procedural macros. It would affect precedence but not scoping. Such a block is also the solution to having interpolated AST in the token stream - we just have tokens wrapped in the scope-less block. Such a block could be annotated with its safety-status. We would need to track unsafety during parsing/expansion to make this work. We have something similar to this in the HIR where we can push/pop unsafe blocks. I believe we want an absolute setting here rather than push/pop though, and we also don't want to introduce new scoping.

We could follow the current stability solution and annotate spans, but this is a bit of an abuse of spans, IMO.

I'm not super-happy with any of these solutions.

Stability hygiene

Finally, stability. We would like for macros in libraries with access to unstable code to be able to access unstable code when expanded. This is currently supported in Rust by having a bool on spans. We can probably continue to use this system or adapt either of the solutions proposed for unsafety hygiene.

It would be nice for macros to be marked as stable and unstable, I believe this is orthogonal to hygiene though.

Mozilla Addons BlogAdd-ons Update – Week of 2015/11/25

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 758 add-ons were reviewed:

  • 602 (79%) were reviewed in less than 5 days.
  • 32 (4%) were reviewed between 5 and 10 days.
  • 124 (16%) were reviewed after more than 10 days.

There are 281 listed add-ons awaiting review, and 189 unlisted add-ons awaiting review. I should note that this is an unusually large number of unlisted add-ons, which is due to a mass uploading by a developer with 100+ add-ons.

Review times for most add-ons have improved recently  due to more volunteer activity. Add-ons that are admin-flagged or very complex are now getting much needed attention, thanks to a new contractor reviewer. There’s still a fairly large review backlog to go through.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 43 Compatibility

This compatibility blog post is now public. The bulk compatibility validation should be run soon.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Changes in let and const in Firefox 44

Firefox 44 includes some breaking changes that you should all be aware of. Please read the post carefully and test your add-ons on Nightly or the newest Developer Edition.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to turn on enforcement by default in Firefox 43.


Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will run on multiple processes now, running content code in a different process than browser code.

This is the time to test your add-ons and make sure they continue working in Firefox. We’re holding regular office hours to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Web Extensions

If you read the post on the future of add-on development, you should know there are big changes coming. We’re investing heavily on the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. You can track progress of its development in

Air MozillaQuality Team (QA) Public Meeting, 25 Nov 2015

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air MozillaBugzilla Development Meeting, 25 Nov 2015

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

Chris H-CHow Mozilla Pays Me

When I told people I was leaving BlackBerry and going to work for Mozilla, the first question was often “Who?”

(“The Firefox people, ${familyMember}” “Oh, well why didn’t you say so”)

More often the first question (and almost always the second question for ${familyMember}) was “How do they make their money?”

When I was working for BlackBerry, it seemed fairly obvious: BlackBerry made its money selling BlackBerry devices. (Though obvious, this was actually incorrect, as the firm made its money more through services and servers than devices. But that’s another story.)

With Mozilla, there’s no clear thing that people’s minds can latch onto. There’s no doodad being sold for dollarbucks, there’s no subscriber fee, there’s no “professional edition” upsell…

Well, today the Mozilla Foundation released its State of Mozilla report including financials for calendar 2014. This ought to clear things up, right? Well…

The most relevant part of this would be page 6 of the audited financial statement which shows that, roughly speaking, Mozilla makes its money thusly (top three listed):

  • $323M – Royalties
  • $4.2M – Contributions (from fundraising efforts)
  • $1M – Interest and Dividends (from investments)

Where this gets cloudy is that “Royalties” line. The Mozilla Foundation is only allowed to accrue certain kinds of income since it is a non-profit.

Which is why I’m not employed by the Foundation but by Mozilla Corporation, the wholly-owned subsidiary of the Mozilla Foundation. MoCo is a taxable entity responsible for software development and stuff. As such, it can earn and spend like any other privately-held concern. It sends dollars back up the chain via that “Royalties” line because it needs to pay to license wordmarks, trademarks, and other intellectual property from the Foundation. It isn’t the only contributor to that line, I think, as I expect sales of plushie Firefoxen and tickets to MozFest factor in somehow.

So, in conclusion, rest assured, ${concernedPerson}: Mozilla Foundation has plenty of money coming in to pay my…

Well, yes, I did just say I was employed by Mozilla Corporation. So?

What do you mean where does the Corporation get its money?

Fine, fine, I was just going to gloss over this part and sway you with those big numbers and how MoCo and MoFo sound pretty similar… but I guess you’re too cunning for that.

Mozilla Corporation is not a publicly-traded corporation, so there are no public documents I can point you to for answers to that question. However, there was a semi-public statement back in 2006 that confirmed that the Corporation was earning within an order of magnitude of $76M in search-related partnership revenue.

It’s been nine years since then. The Internet has changed a lot since the year Google bought YouTube and MySpace was the primary social network of note. And our way of experiencing it has changed from sitting at a desk to having it in our pockets. Firefox has been downloaded over 100 million times on Android and topped some of the iTunes App Store charts after being released twelve days ago for iOS. If this sort of partnership is still active, and is somewhat proportional to Firefox’s reach, then it might just be a different number than “within an order of magnitude of $76M.”

So, ${concernedPerson}, I’m afraid there just isn’t any more information I can give you. Mozilla does its business, and seems to be doing it well. As such, it collects revenue which it has to filter through various taxes and regulation authorities at various levels which are completely opaque even when they’re transparent. From that, I collect a paycheque.

At the very least, take heart from the Contributions line. That money comes from people who like that Mozilla does good things for the Internet. So as long as we’re doing good things (and we have no plans to stop), there is a deep and growing level of support that should keep me from asking for money.

Though, now that you mention it


Air MozillaThe Joy of Coding - Episode 36

The Joy of Coding - Episode 36 mconley livehacks on real Firefox bugs while thinking aloud.

Jan de MooijMaking `this` a real binding in SpiderMonkey

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How this Works In JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() { "use strict"; return this; }
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() { return this; }
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    () => this; // `this` is 123
}
f.call(123);

And, of course, this can be used inside eval:

function f() {
    "use strict";
    eval("this"); // 123
}
f.call(123);

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that it's fundamentally incompatible with ES6 derived class constructors, as they initialize their 'this' value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support because arrow functions and eval frames used their own (copied) this value, instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call Base constructor, initialize our `this` value.
        super();

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}

We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot, they just reference the .this variable of the outer function. For instance, consider the function below:

function f() {
    return () => this.foo();
}

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior, this made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).


We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

Mozilla Addons BlogA New Firefox Add-ons Validator

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on (AMO) are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, the alerts help developers pinpoint the bits of code that might make the browser buggy or slow, among other problems. It also helps detect insecure add-on code, which keeps your browsing fast and safe.

Our current validator is a bit old, and because it’s written in Python with JavaScript dependencies, our old validator is difficult for add-on developers to install themselves. This means add-on developers often don’t know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, which you can install right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not production-quality code yet and there are rules that we haven’t implemented yet, but we’re looking to finish it by the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub at We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

Matjaž HorvatMeet Jarek, splendid Pontoon contributor

Some three months ago, a new guy named jotes showed up in #pontoon IRC channel. It quickly became obvious he’s running a local instance of Pontoon and is ready to start contributing code. Fast forward to the present, he is one of the core Pontoon contributors. In this short period of time, he implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we need a more detailed approach with team managers manually granting permission to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended me a documentary called Code Rush. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with power to change the world. During one of the sleepless nights I visited and after a few patches I knew Mozilla is my place. A place where I can learn something new with help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

Emily DunhamGiving Thanks to Rust Contributors

Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:


The stats page is interactive and available at The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) and so that the Rust’s organization-wide contributor stats will be automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

Tarek ZiadéShould I use PYTHONOPTIMIZE ?

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

try:
    assert hasattr(SomeObject, 'some_attribute')
except AssertionError:
    # handle the missing attribute here
    pass
Relying on assert there didn't strike me as a good idea, because when Python is launched using the PYTHONOPTIMIZE flag (which you can activate with the eponymous environment variable or with -O or -OO), all assertions are stripped from the code.

To my surprise, a lot of people are dismissing -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0, 1 (-O) or 2 (-OO). 0 is the default, nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files are using the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2:

  • everything 1 does
  • docstrings are stripped.
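
For example, a tiny throwaway script like the hypothetical one below (the file and its contents are mine, not from the code review in question) makes these differences easy to observe by running it with python, python -O and python -OO:

    # sketch.py - illustrates what -O and -OO change
    import sys

    print("optimize level:", sys.flags.optimize)   # 0 by default, 1 under -O, 2 under -OO
    print("__debug__ =", __debug__)                # False under -O and -OO

    def helper():
        """This docstring disappears under -OO."""
        # This assert is stripped entirely under -O and -OO.
        assert 1 + 1 == 2, "never rely on this for production checks"
        return helper.__doc__

    print("docstring:", helper())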

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that has changed is related to pdb: you could not run some step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the pyo vs pyc thing should go away one day, according to PEP 488

So what does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself in the future, adds some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values of PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO, and without either, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

James LongA Simple Way to Route with Redux

This post took months to write. I wasn't working on it consistently, but every time I made progress something would happen that made me scratch everything. It started off as an explanation of how I integrated react-router 0.13 into my app. Now I'm going to talk about how redux-simple-router came to be and explain the philosophy behind it.

Redux embraces a single atom app state to represent all the state for your UI. This has many benefits, the biggest of which is that pieces of state are always consistent with each other. If we update the tree immutably, it's very easy to make atomic updates to the state and keep everything consistent (as opposed to mutating individual pieces of state over time).

Conceptually, the UI is derived from this app state. Everything needed to render the UI is contained in this state, and this is powerful because you can inspect/snapshot/replay the entire UI just by targeting the app state.

But it gets awkward when you want to work with other libraries like react-router that want to take part in state management. react-router is a powerful library for component-based routing; it inherently manages the routing state to provide the user with powerful APIs that handle everything gracefully.

So what do we do? We could use react-router and redux side-by-side, but then the app state object does not contain everything needed for the UI. Snapshotting, replaying, and all that is broken.

One option is to try to take control over all the router state and proxy everything back to react-router. This is what redux-router attempts to do, but it's very complicated and prone to bugs. react-router may put unserializable state in the tree, thus still breaking snapshotting and other useful features.

After integrating redux and react-router in my site, I extracted my solution to a new project: redux-simple-router. The goal is simple: let react-router do all the work. They have already developed very elegant APIs for implementing routing components, and you should just use them.

If you use the regular react-router APIs, how does it work? How does the app state object know anything about routing? Simple: we already have a serialized form of all the react-router state: the URL. All we have to do is store the URL in the app state and keep it in sync with react-router, and the app state has everything it needs to render the UI.

People think that the app state object has to have everything, but it doesn't. It just has to have the primary state; anything that can be deduced can live outside of redux.

Above, the blue thing is serializable dumb app state, and the green things are unserializable programs that exist in memory. As long as you can recreate the green things above when loading up an app state, you're fine. And you can easily do this with react-router by just initializing it with the URL from the app state.

Since launching it, a bunch of people have already helped improve it in many ways, and a lot of people seem to be finding it useful. Thank you for providing feedback and contributing patches!

Just use react-router

The brilliant thing about just tracking the URL is that it takes almost no code at all. redux-simple-router is only 87 lines of code and it's easy to understand what's going on. You already have a lot of concepts to juggle (react, redux, react-router, etc); you shouldn't have to learn another large abstraction.

Everything you want to do can be done with react-router directly. A lot of people coming from redux-router seem to be surprised by this. Some people don't understand the following:

  • Routing components have all the information you need as properties. See the docs; the current location, params, and more are all there for you to use.
  • You can block route transitions with listenBefore.
  • You can inject code to run when a routing component is created with createElement, if you want to do stuff like automatically start loading data.

We should invest in the react-router community and figure out the right patterns for everybody using it, not just people using redux. We also get to use new react-router features immediately.

The only additional thing redux-simple-router provides is a way to change the URL with the updatePath action creator. The reason is that it's a very common use case to update the URL inside of an action creator; you might want to redirect the user to another page depending on the result of an async request, for example. You don't have access to the history object there.

You shouldn't really even be selecting the path state from the redux-simple-router state; try to only make top-level routing components actually depend on the URL.

So how does it work?

You can skip this section if you aren't interested in the nitty-gritty details. We use a pretty clever hack to simplify the syncing though, so I wanted to write about it!

You call syncReduxAndRouter with history and store objects and it will keep them in sync. It does this by listening to history changes with history.listen and state changes with store.subscribe and telling each other when something changes.

It's a little tricky because each listener needs to know when to "stop." If the app state changes, it needs to call history.pushState, but the history listener should see that it's up-to-date and not do anything. When it's the other way around, the history listener needs to call store.dispatch to update the path but the store listener should see that nothing has changed.

First, let's talk about history. How can we tell if anything has changed? We get the new location object so we just stringify it into a URL and then compare it with the URL in the app state. If it's the same, we do nothing. Pretty easy!

Detecting app state changes is a little harder. In previous versions, we were comparing the URL from state with the current location's URL. But this caused tons of problems. For example, if the user has installed a listenBefore hook, it will be invoked from the pushState call in the store subscriber (because the app state URL is different from the current URL). The user might dispatch actions in listenBefore and update other state though, and since we are subscribed to the whole store, our listener will run again. At this point the URL has not been updated yet so we will call pushState again, and the listenBefore hook will be called again, causing an infinite loop.

Even if we could somehow only trigger pushState calls when the URL app state changes, this is not semantically correct. Every single time the user tries to change the URL, we should always call pushState even if the URL is the same as the current one. This is how browsers work; think of clicking on a link to "/foo" even though "/foo" is the current URL: what happens?

In redux, reducers are pure so we cannot call pushState there. We could do it in a middleware (which is what redux-router does) but I really don't want to force people to install a middleware just for this. We could do it in the action creator, but that seems like the wrong time: reducers may respond to the UPDATE_PATH action and update some state, so we shouldn't rerender routing components until after reducing.

I came up with a clever hack: just use an id in the routing state and increment it whenever we want to trigger a pushState! This has drastically simplified everything, made it far more robust, and, even better, made testing really easy because we can just check that the changeId field is the right number.

We just have to keep track of the last changeId we've seen and compare it in the store subscriber. This means there's always a 1:1 relationship between updatePath action creator calls and pushState calls no matter what. Try any transition logic you want; it should work!

It also simplifies how changes from the router to redux work: the router side calls the updatePath action creator with an avoidRouterUpdate flag, and all we have to do in the reducer is not increment changeId, so we won't call back into the router.
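
Here's a rough sketch of the whole loop, my own simplification for illustration rather than the library's actual source (it assumes a rackt-history-style history object, and helper names like routingReducer and lastChangeId are mine):

    // Router -> store: only dispatch when the new location differs from app state.
    history.listen(function (location) {
        var url = location.pathname + location.search + location.hash;
        if (url !== store.getState().routing.path) {
            // avoidRouterUpdate tells the reducer not to bump changeId.
            store.dispatch(updatePath(url, { avoidRouterUpdate: true }));
        }
    });

    // Store -> router: only call pushState when changeId was bumped on purpose.
    var lastChangeId = 0;
    store.subscribe(function () {
        var routing = store.getState().routing;
        if (routing.changeId > lastChangeId) {
            lastChangeId = routing.changeId;
            history.pushState(null, routing.path);
        }
    });

    // Reducer: a router-originated update keeps changeId where it is.
    function routingReducer(state, action) {
        state = state || { path: '/', changeId: 0 };
        if (action.type === 'UPDATE_PATH') {
            return {
                path: action.path,
                changeId: action.avoidRouterUpdate ? state.changeId : state.changeId + 1
            };
        }
        return state;
    }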

I think my favorite side effect of this technique is testing. Look at the tests and you'll see I can compare a bunch of changeIds to make sure that the right number of pushState calls are being made.

More Complex Examples of react-router

Originally I was going to walk through how I used react-router for complex use cases like server-side rendering. This post is already too long to go into details, and I don't have time to write another post, so I will leave you with a few points that will help you dig into the code to see how it works:

  • There's no problem making a component both a redux "connected" component and a route component. Here I'm exporting a connected Drafts page that will be installed in the router. That means the component can both select from state and be controlled by the router.
  • I perform data fetching by specifying a static populateStore function. On the client, the router will call this in createElement, seen here, and the backend can prepopulate the store by iterating over all route components and calling this method. The action creators are responsible for checking if the data is already loaded and not re-fetching on the frontend if it's already there (example).
  • The server uses the lower-level match API seen here to get the current route. This gives us flexibility to control everything. We store the current HTML status in redux (like a 500) so that components can change it. For example, the Post component can set a 404 code if the post isn't found. The server sends the page with the right HTML status code.
  • This also means the top-level App component can inspect the status code to see if it should display a special 404 or 500 page.

I really like how the react-router 1.0 API turned out. The idea seems to be use low-level APIs on the server so that you can control everything, but the client can simply render a Router component to automatically handle state. The two environments are different enough that this works great.

That's It

It's my goal to research ideas and present them in a way that helps other people. In this case a cool project, redux-simple-router, came out of it. I hope this post explains the reasoning behind it, and that the above links help show more complicated examples of using it.

We are working on porting react-redux-universal-hot-example to redux-simple-router, so that will be another example of all kinds of uses. We're really close to finishing it, and you can follow along in this issue.

I'm also going to add more examples in the repo itself. But the goal is that you should be able to just read react-router's docs and do whatever it tells you to do.

Lastly, the folks working on redux-router have put in a lot of good work and I don't mean to diminish that. I think it's healthy for multiple approaches to exist and everyone can learn something from each one.

Nick CameronMacro plans, overview

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • change the hygiene algorithm, and what hygiene is applied to,
  • address modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros


I intend to tweak the system of traits and enums, etc. to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately, because I believe function-like vs attribute-like macros will take different arguments; furthermore, I think we need versions for hygienic expansion and expansion with DIY-hygiene, and the latter case must be supplied with some hygiene information in order for the function to do its own hygiene. I'm not certain that is the right approach though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us storing hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the eco-system comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable. So, a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros. I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way, which they no longer do causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. However, I don't think this is possible to prevent. In the most pathological case, the macro could check if the current date is later than a given one and in that case panic. So, we are basically passing the buck about backwards compatibility with the language to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.


I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro) and more functionality is provided by the ecosystem via

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • name resolution of module and macro names - note that I expect these to return token trees, which gives a macro access to the whole program, I'm not sure this is a good idea since it breaks locality for macros
  • check and set feature gates
  • mark attributes and imports as used

The most important external libraries I would like to see would be to provide an AST-like abstraction, parsing, and tools for building and manipulating AST. These already exist (syntex, ASTer), so I am confident we can have good solutions in this space, working towards crates which are provided on, but are officially blessed (similar to the goals of other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, and there is then a second step to produce AST. Since hygiene info will operate at the tokens level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See section on modularisation below, the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.

I expect that functions expected to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between the macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.


I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.


The goal here is to treat macros the same way as other items, naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets of scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.


Nice to haves

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros (see my separate blog post on macro issues).

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => { ... }

Perhaps we drop the => too.

We need to allow privacy annotations for macros; I'm not sure of the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable and there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo the usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!; if you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try to help anyone who is affected). We will deprecate macro_rules! once macro! is stable, and we will track usage with the intention of removing macro_rules! at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!. This will be a breaking change, which we will mitigate by continuing to support the old but deprecated versions of the macros; hopefully, modularisation will support this (it needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only differences will be using macro! instead of macro_rules!, and that the new scoping/naming rules may lead to name clashes that didn't exist before (note that this is not in itself a breaking change; it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.
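
As an illustration of the intended migration (hypothetical, since macro! does not exist yet; the macro itself is made up):

// Today: a declarative macro written with macro_rules!.
macro_rules! double {
    ($e:expr) => { $e * 2 };
}

// Under the plan: the same body, unchanged apart from the keyword, but now
// subject to the new naming/scoping and hygiene rules.
macro! double {
    ($e:expr) => { $e * 2 };
}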


There are two backwards compatibility hazards with hygiene, both of which affect only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types, etc., by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

Air MozillaPrivacy for Normal People

Privacy for Normal People Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

Armen ZambranoWelcome F3real, xenny and MikeLing!

As described by jmaher, this week we started the first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you get to learn and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This would not be possible without her help.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen ZambranoMozilla CI tools meet up

In order to help the contributors of mozci's quarter of contribution, we have set up a mozci meet-up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute, or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room:

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaMartes mozilleros, 24 Nov 2015

Martes mozilleros – A bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

Kim MoirUSENIX Release Engineering Summit 2015 recap

On November 13th, I attended the USENIX Release Engineering Summit in Washington, DC. This summit was alongside the larger LISA conference at the same venue. Thanks to Dinah McNutt, Gareth Bowles, Chris Cooper, Dan Tehranian and John O'Duinn for organizing.

I gave two talks at the summit.  One was a long talk on how we have scaled our Android testing infrastructure on AWS, as well as a look back at how it evolved over the years.

Picture by Tim Norris - Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Scaling mobile testing on AWS: Emulators all the way down from Kim Moir

I gave a second lightning talk in the afternoon on the problems we face with our large distributed continuous integration, build and release pipeline, and how we are working to address the issues. The theme of this talk was that managing a large distributed system is like being the caretaker for the water, or some days the sewer, system for a city. We are constantly looking for system leaks and implementing system monitoring, and we will probably have to replace parts of it with something new while keeping the existing system running.

Picture by Korona Lacasse - Creative Commons 2.0 Attribution 2.0 Generic

In preparation for this talk, I did a lot of reading on complex systems design and designing for recovery from failure in distributed systems. In particular, I read Donella Meadows' book Thinking in Systems. (Cate Huston reviewed the book here.) I also watched several talks by people about the challenges they face managing their distributed systems.
I'd also like to thank all the members of Mozilla releng/ateam who reviewed my slides and provided feedback before I gave the presentations.
The attendees of the summit joined the same keynote as the LISA attendees. Jez Humble, well known for his Continuous Delivery and Lean Enterprise books, provided a keynote on Lean Configuration Management which I really enjoyed. (Older versions of the slides, from other conferences, are available here and here.)

In particular, I enjoyed his discussion of the cultural aspects of devops. I especially liked that he stated that “You should not have to have planned downtime or people working outside business hours to release”. He also talked a bit about how many of the leaders who are looked up to as visionaries in the tech industry are known for not treating people very well, and that this is not a good example to set for others who believe this to be the key to their success. For instance, he said something like “what more could Steve Jobs have accomplished had he treated his employees less harshly”.

Another concept he discussed which I found interesting was that of the strangler application. When moving from a large monolithic application, the goal is to split out the existing functionality into services until the original application is left with nothing. This is exactly what Mozilla releng is doing as we migrate from Buildbot to Taskcluster.

At the release engineering summit itself, Lukas Blakk from Pinterest gave a fantastic talk, Stop Releasing off Your Laptop—Implementing a Mobile App Release Management Process from Scratch in a Startup or Small Company. It included a grumpy cat picture to depict how Lukas thought the rest of the company felt when a more structured release process was implemented.

Lukas also included a timeline of the tasks she implemented in her first six months working at Pinterest. Very impressive to see the transition!

Another talk I enjoyed was Chaos Patterns - Architecting for Failure in Distributed Systems by Jos Boumans of Krux. (Similar slides from an earlier conference are here.) He talked about some high-profile distributed systems that failed and how chaos engineering can help illuminate these issues before they hit you in production.

For instance, it is impossible for Netflix to model their entire system outside of production, given that they account for around one third of nightly downstream bandwidth in the US.

Evan Willey and Dave Liebreich from Pivotal Cloud Foundry gave a talk entitled “Pivotal Cloud Foundry Release Engineering: Moving Integration Upstream Where It Belongs”. I found this talk interesting because they talked about how they built Concourse, a CI system that is more scalable and natively builds pipelines. Travis and Jenkins are good for small projects but they simply don't scale for large numbers of commits, platforms to test, or complicated pipelines. We followed a similar path that led us to develop Taskcluster.

There were many more great talks; hopefully more slides will be up soon!

Henrik SkupinSurvey about sharing information inside the Firefox Automation team

Within the Firefox Automation team we have been struggling a bit with sharing information about our work over the last couple of months. That mainly happened because I was on my own and not able to blog more often than once a quarter. The same applies to our dev-automation mailing list, which mostly only received emails from Travis CI with testing results.

Given that the team has now grown to 4 people (beside me that's Maja Frydrychowicz, Syd Polk, and David Burns), we want to be more open again and also try to get more people involved in our projects. To ensure that we do not make use of the wrong communication channels – depending on where most of our readers are – I have set up a little survey. It will only take you a minute to go through, but it will help us a lot to know more about the preferences of our automation geeks. So please take that little time and help us.

The survey can be found here and is open until the end of November 2015.

Thank you a lot!

Nick CameronMacros pt6 - more issues

I discovered another couple of issues with Rust macros (both affect the macro_rules flavour).

Nested macros and arguments

These don't work because of the way macros do substitution. When expanding a macro, the expander looks for token strings starting with $ to expand. If there is a variable which is not bound by the outer macro, then it is an error. E.g.,

macro_rules! foo {
    () => {
        macro_rules! bar {
            ($x: ident) => { $x }
        }
    }
}

When we try to expand foo!(), the expander errors out because it can't find a value for $x; it doesn't know that macro_rules! bar is binding $x.

The proper solution here is to make macros aware of binding and lexical scoping etc. However, I'm not sure that is possible because macros are not parsed until after expansion. We might be able to fix this by just being less eager to report these errors. We wouldn't get proper lexical scoping, i.e., all macro variables would need to have different names, but at least the easy cases would work.

Matching expression fragments


macro_rules! foo {
    ( if $e:expr { $s:stmt } ) => {
        if $e { $s }
    }
}

fn main() {
    let x = 1;
    foo! {
        if 0 < x { println!("hello") }
    }
}

This gives an error because it tries to parse x { as the start of a struct literal. We have a hack in the parser where, in some contexts in which we parse an expression, we explicitly forbid struct literals from appearing so that we can correctly parse a following block. This is not usually apparent, but in this case, where the macro expects an expr, what we'd like to have is 'an expression but not a struct literal'. However, exposing this level of detail about the parser implementation to macro authors (not even procedural macro authors!) feels bad. I'm not sure how to tackle this one.

Relatedly, it would be nice to be able to match other fragments of the AST, for example the interior of a block. Again, there is the issue of how much of the internals we wish to expose.

(HT @bmastenbrook for the second issue).

Chris FinkeReenact Now Available for Android

I’ve increased the audience for Reenact (an app for reenacting photos) by 100,000% by porting it from Firefox OS to Android.


It took me about ten evenings to go from “I don’t even know what language Android apps are written in” to submitting the .apk to the Google Play™ store. I’d like to thank Stack Overflow, the Android developer docs, and Android Studio’s autocomplete.

Reenact for Android, like Reenact for Firefox OS, is open-source; the complete source for both apps is available on GitHub. Also like the Firefox OS app, Reenact for Android is free and ad-free. Just think: if even just 10% of all 1 billion Android users install Reenact, I’d have $0!

In addition to making Reenact available on Android, I’ve launched a website to serve as a home for the app. If you try out Reenact, send in your photo to get it included in the photo gallery on the site.

You can install Reenact from Google Play or download the .apk directly. Try it out and let me know how it works on your device!

Mozilla Security BlogImproving Revocation: OCSP Must-Staple and Short-lived Certificates

Last year, we laid out a long-range plan for improving revocation support for Firefox. As of this week, we’ve completed most of the major elements of that plan. After adding OneCRL earlier this year, we have recently added support for OCSP Must-Staple and short-lived certificates. Together, these changes give website owners several ways to achieve fast, secure certificate revocation.

In an ideal world, the browser would perform an online status check (such as OCSP) whenever it verifies a certificate, and reject the certificate if the check failed. However, these checks can be slow and unreliable. They time out about 15% of the time, and take about 350ms even when they succeed. Browsers generally soft-fail on revocation in an attempt to balance these concerns.

To get back to stronger revocation checking, we have added support for short-lived certificates and Must-Staple to let sites opt in to hard failures. As of Firefox 41, Firefox will not do “live” OCSP queries for sufficiently short-lived certs (with a lifetime shorter than the value set in “security.pki.cert_short_lifetime_in_days”). Instead, Firefox will just assume the certificate is valid. There is currently no default threshold set, so users need to configure it. We are collecting telemetry on certificate lifetimes, and expect to set the threshold somewhere around the maximum OCSP response lifetime specified in the baseline requirements.
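
For example (illustrative numbers only): with the pref set to 10, a certificate with a 7-day validity period would be accepted without a live OCSP query, while a 90-day certificate would still trigger one.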

OCSP Must-Staple makes use of the recently specified TLS Feature Extension. When a CA adds this extension to a certificate, it requires your browser to ensure a stapled OCSP response is present in the TLS handshake. If an OCSP response is not present, the connection will fail and Firefox will display a non-overridable error page. This feature will be included in Firefox 45, currently scheduled to be released in March 2016.

Mozilla Addons BlogTest your add-ons for Multi-process Firefox compatibility

You might have heard the news that future versions of Firefox will run the browser UI separately from web content. This is called Multi-process Firefox (also “Electrolysis” or “e10s”), and it is scheduled for release in the first quarter of 2016.

If your add-on code accesses web content directly, using an overlay extension, a bootstrapped extension, or low-level SDK APIs like window/utils or tabs/utils, then you will probably be affected.

To minimize the impact on users of your add-ons, we are urging you to test your add-ons for compatibility. You can find documentation on how to make them compatible here.

Starting Nov. 24, 2015, we are available to assist you every Tuesday in the #addons channel on irc.mozilla.org. Click here to see the schedule. Whether you need help testing or making your add-ons compatible, we’re here to help!

Emily DunhamPSA: Docker on Ubuntu

PSA: Docker on Ubuntu

$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io system package.

$ sudo apt-get install docker.io
$ which docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)

Daniel Stenbergcopy as curl

Using curl to perform an operation a user just managed to do with his or her browser is one of the more common requests and areas people ask for help about.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

You load the site with Firefox’s network tools open (“Web Developer -> Network”); when you see the HTTP traffic, you right-click on the specific request you want to repeat and, in the menu that appears, select “Copy as cURL”. The operation then generates a curl command line to your clipboard and you can then paste that into your favorite shell window. This feature is available by default in all Firefox installations.


From Chrome

When you open More tools->Developer tools in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you’re interested in, you right-click with the mouse and select “Copy as cURL”, and it’ll generate a command line for you in your clipboard. Paste that in a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.


On Firefox, without using the devtools

If this is something you’d like to do more often, you probably find the developer tools a bit inconvenient and cumbersome to pop up just to get a command line copied. Then cliget is the perfect add-on for you, as it gives you a new option in the right-click menu, so you can get a command line generated really quickly – for example, when I right-click an image in Firefox.


This Week In RustThis Week in Rust 106

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • nom 1.0 is released.
  • Freepass. The free password manager for power users.
  • Barcoders. A barcode encoding library for the Rust programming language.
  • fst. Fast implementation of ordered sets and maps using finite state machines.
  • Rusty Code. Advanced language support for the Rust language in Visual Studio Code.
  • Dybuk. Prettify the ugly Rustc messages (inspired by Elm).
  • Substudy. Use SRT subtitle files to study foreign languages.

Updates from Rust Core

99 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Alexander Bulaev
  • Ashkan Kiani
  • Devon Hollowood
  • Doug Goldstein
  • Jean Maillard
  • Joshua Holmer
  • Matthias Kauer
  • Ole Krüger
  • Ravi Shankar

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is nom, a library of fast zero-copy parser combinators, which has already been used to create safe, high-performance parsers for a number of formats both binary and textual. nom just reached version 1.0, too, so congratulations for both the major version and the CotW status!

Thanks to Reddit user gbersac for the nom-ination! Submit your suggestions for next week!

Mark FinkleAn Engineer’s Guide to App Metrics

Building and shipping a successful product takes more than raw engineering. I have been posting a bit about using Telemetry to learn about how people interact with your application so you can optimize use cases. There are other types of data you should consider too. Being aware of these metrics can help provide a better focus for your work and, hopefully, have a bigger impact on the success of your product.

Active Users

This includes daily active users (DAUs) and monthly active users (MAUs). How many people are actively using the product within a time-span? At Mozilla, we’ve been using these for a long time. From what I’ve read, these metrics seem less important when compared to some of the other metrics, but they do provide a somewhat easy-to-measure indicator of activity.

These metrics don’t give a good indication of how much people use the product though. I have seen a variation called DAU/MAU (daily divided by monthly), which gives something like retention or engagement. DAU/MAU rates of 50% are seen as very good.
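
For example (made-up numbers): with 300,000 daily active users and 1,000,000 monthly active users, DAU/MAU is 0.3, meaning the average monthly user shows up on roughly 30% of the days in a month.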


Engagement

This metric focuses on how much people really use the product, typically tracking the duration of session length or time spent using the application. The amount of time people spend in the product is an indication of stickiness. Engagement can also help increase retention. Mozilla collects data on session length now, but we need to start associating metrics like this with some of our experiments to see if certain features improve stickiness and keep people using the application.

We look for differences across various facets like locales and releases, and hopefully soon, across A/B experiments.

Retention / Churn

Based on what I’ve seen, this is the most important category of metrics. There are variations in how these metrics can be defined, but they cover the same goal: Keep users coming back to use your product. Again, looking across facets, like locales, can provide deeper insight.

Rolling Retention: % of new users return in the next day, week, month
Fixed Retention: % of this week’s new users still engaged with the product over successive weeks.
Churn: % of users who leave divided by the number of total users
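
As a worked example (made-up numbers): if 10,000 people used the product last week and only 6,000 of them are still active this week, 1-week retention is 60% and 1-week churn is 40%.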

Most analysis tools, like iTunes Connect and Google Analytics, use Fixed Retention. Mozilla uses Fixed Retention with our internal tools.

I found some nominal guidance (grain of salt required):
1-week churn: 80% bad, 40% good, 20% phenomenal
1-week retention: 25% baseline, 45% good, 65% great

Cost per Install (CPI)

I have also seen this called Customer Acquisition Cost (CAC), but it’s basically the cost (mostly marketing or pay-to-play pre-installs) of getting a person to install a product. I have seen this in two forms: blended – where ‘installs’ are both organic and from campaigns, and paid – where ‘installs’ are only those that come from campaigns. It seems like paid CPI is the better metric.

Lower CPI is better and Mozilla has been using Adjust with various ad networks and marketing campaigns to figure out the right channel and the right messaging to get Firefox the most installs for the lowest cost.

Lifetime Value (LTV)

I’ve seen this defined as the total value of a customer over the life of that customer’s relationship with the company. It helps determine the long-term value of the customer and can help provide a target for reasonable CPI. It’s weird thinking of “customers” and “value” when talking about people who use Firefox, but we do spend money developing and marketing Firefox. We also get revenue, maybe indirectly, from those people.

LTV works hand-in-hand with churn, since the length of the relationship is inversely proportional to the churn. The longer we keep a person using Firefox, the higher the LTV. If CPI is higher than LTV, we are losing money on user acquisition efforts.
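
A quick illustration (made-up numbers): if a paid campaign's CPI is $2.50 and the LTV of an acquired user is $4.00, acquisition is profitable; if the LTV were only $2.00, every paid install would lose money.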

Total Addressable Market (TAM)

We use this metric to describe the size of a potential opportunity. Obviously, the bigger the TAM, the better. For example, we feel the TAM (People with kids that use Android tablets) for Family Friendly Browsing is large enough to justify doing the work to ship the feature.

Net Promoter Score (NPS)

We have seen this come up in some surveys and user research. It’s supposed to show how satisfied your customers are with your product. This metric has its detractors though. Many people consider it a poor value, but it’s still used quite a lot.

NPS can be as low as -100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (higher than zero) is felt to be good, and an NPS of +50 is excellent.
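
For reference, NPS is calculated as the percentage of promoters minus the percentage of detractors. A worked example (made-up numbers): a survey with 50% promoters, 30% passives and 20% detractors gives an NPS of 50 - 20 = +30.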

Go Forth!

If you don’t track any of these metrics for your applications, you should. There are a lot of off-the-shelf tools to help get you started. Level-up your engineering game and make a bigger impact on the success of your application at the same time.

Cameron KaiserTenFourFoxBox: because it's time to think inside the (fox)box. (a/k/a: we dust off Mozilla Prism for a new generation)

As long as there have been web browsers, there have been people trying to get the web freed up from the browser that confines it, because, you know, the Web wants to be free, or some other similarly aspirational throwaway platitude. These could be robots, or screen scrapers, or aggregating services, or chromeless viewers, but no matter what these browserless browsers are doing, they all tend to specialize in a particular site for any number of reasons usually circulating around business or convenience. This last type, the chromeless viewer, spawned the subcategory of "site specific browsers" that morphed into the "Rich Internet Application" and today infects our phones and tablets in the guise of the "lazy-*ss programmer mobile app."

Power Mac users have only had access to a few tools that could generate site-specific browsers. Until Adobe withdrew support, Adobe AIR could run on PowerPC 10.4+, but it was more for generally Internet-enabled apps and wasn't specifically focused at creating site-specific browsers, though it could, with a little work. Leopard users could use early betas of Fluid before that went Intel-only, and I know a few of you still do. Even Mozilla themselves got into the act with Mark Finkle's WebRunner, which became Mozilla Prism in 2007, languished after a few releases, got moved to Salsita and renamed WebRunner again in 2011, and cancelled there as well around the time of Firefox 5. However, WebRunner née Prism née WebRunner was never available for Power Macs; its required binary components were Intel-only, even though the Mozilla releases could run on 10.4, so that was about it for PowerPC. (Mozilla tried again shortly afterward with Chromeless, but this didn't get off the ground either, and was never intended as a Prism successor in any case. Speaking of, Google Chrome can do something similar, but Chrome was of course never released for Power Macs either because Alphagooglebet are meaniepants.)

There are unique advantages as TenFourFox users to having separate apps that only handle one site at a time. Lots of tabs requires lots of garbage collection, the efficiency of which Mozilla has improved substantially, but is still a big drain on old computers like ours which are always under memory pressure. In addition, currently Firefox and TenFourFox must essentially cooperatively multitask between tabs because JavaScript infamously has run-to-completion semantics, which is why you get the "script too long" dialogue box if the watchdog portion of the browser detects something's pegging it. Since major portions of the browser itself are written in JavaScript, plus all those addons you tart it up with, the browser chrome must also cooperatively multitask with everything else which is why sometimes it temporarily grinds to a halt. I've sunk an incredible amount of time over TenFourFox's existence into our just-in-time JavaScript compiler for PowerPC to reduce this overhead, but that only gets us so far, and the typical scripts on popular websites aren't getting any less complex. Mozilla intends to solve this problem (and others) with multi-process Firefox, also known as Electrolysis, but it won't work without significant effort on 10.4 and I have grave doubts about its ability to perform well on these older computers; for that reason, I've chosen not to support it.

However, generating standalone browser apps for your common sites helps to mitigate both these problems. While each instance of the standalone browser uses more memory than a browser tab, with only one site in it garbage collection is much easier to accomplish (and therefore faster), and the memory is instantly reclaimed when the standalone browser terminates. In fact, on G5 systems with more than 2GB of RAM, it helps you actually use that extra memory more effectively: while TenFourFox is a 32-bit application (being a hybrid of Carbon and Cocoa), you'd be running multiple instances of it, all of which have their own 32-bit address space which can be located in that extra RAM you've got on board. Also, separate browser instances become ... multiple processes. That means they preemptively multitask, like Electrolysis content processes would. They could even be scheduled on a different core on multiprocessor Power Macs. That improves their responsiveness substantially, to say nothing of the fact that the substantially reduced amount of browser chrome has dramatically less overhead. Now, standalone browsers also have disadvantages; they lack a lot of the features of a regular browser, including safety features, and they can be more difficult to navigate in because of the reduced interface. But for many sites those are acceptable tradeoffs.

So, without further ado, let's introduce TenFourFoxBox.

TenFourFoxBox is an application that generates site-specific browsers ("foxboxes") for you, running them in private instances of TenFourFox (a la XULRunner). This has been one of my secret internal projects since I got Amazon Music working properly with TenFourFox, so I wanted to use it as a jukebox without dragging down the rest of the browser, and to help beef up the performance of my online coursework site which has a rather heavy implementation and depends greatly on Google Docs and Box. And now you'll get to play with it as well.

Although TenFourFoxBox borrows some code from Prism/WebRunner, mostly the reduced browser chrome, in actual operation it functions somewhat differently. First, TenFourFoxBox isn't itself written in XUL; it's a "native" OS X application that just happens to generate XUL-based applications. Second, for webapps created with Prism (or its companion tool Refractor), it's Prism itself that actually does the running with its own embedded copy of the XUL framework, not Firefox. With TenFourFoxBox, however, foxboxes you create actually run using the copy of TenFourFox you have installed (and yes, the foxboxes will look for and run the correct version for your architecture), just as separate processes, with their own browser chrome and their own application support and cache directory independent of the main browser. The nice thing about that is when you upgrade TenFourFox, you upgrade the browser core in every foxbox on your system all at once, as well as your main browser, because TenFourFox is your main browser, amirite?

The implementation in TenFourFoxBox is also a little different with respect to how data is stored. Foxboxes are driven essentially as independent XULRunner apps, so they have their own storage separate from the browser. Prism allowed this space to be shared, but I don't think that was a good idea, so not only are all foxboxes independent, but by default they operate effectively in "private browsing" mode and clear out cookies and other site data when they quit. By default they also disable autocomplete, improving both privacy and a little bit of performance; you can, of course, change these settings, and override checks sites might do which could detect you're not actually in a regular browser. I also decided to keep a constant unchanging title (regardless of the website you're viewing) so that you can more easily identify it in Exposé.

So, let's see it in action. Here's Bing Maps, in full 1080p on the quad G5, looking for drone landing sites.

And here's what I originally wrote this for, Amazon Music, playing the more or less official album of International Space Year:

(Stupid Amazon. I already have Flood and Junta!)

So now it's time to get this ready for the masses, and what better way than to have you slavering lot mercilessly bang on it? The following bugs/deficiencies are known:

  • The application menu only has "Quit." This is actually Mozilla bug 1181977, and will be fixed in TenFourFox 38.5, after which all the foxboxes will "fix themselves."
  • Localization isn't supported yet, even if you have a localized TenFourFox; most things will still appear in English. It's certainly possible to do, just non-trivial because of TenFourFoxBox's dual nature (we have to localize both the OS X portion and the XUL code it generates, and then figure out how to juggle multi-lingual resources). I'm not likely to do anything with this until the rest of it is stable enough to freeze strings.
  • Although the browser core they run is shared, individual foxboxes have their own private copies of the foxbox support code and chrome which are independent. Thus, when a new TenFourFoxBox comes out, you will need to manually update each of your foxboxes. You can do this in place and overwrite them; it's just somewhat inconvenient.
  • There are probably browser features missing that you'd like. I'm willing to entertain reasonable requests.

Even the manual is delivered as a foxbox, which makes it easy to test on your system. Download it, try it and post your comments in the comments. TenFourFox 38.4 or higher is required. This is a beta, so treat it accordingly, with the plan to release it for general consumption a week or so after 38.5 comes out.

Let's do a little inside-the-box thinking with an old idea for a new generation, shall we?

Benjamin KerensaOpenly Thankful

So next week has a certain meaning for millions of Americans that we relate to a story of Indians and pilgrims gathering to have a meal together. While that story may be distorted from the historical truth, I do think the symbolic holiday we celebrate is important.

That said, I want to name some individuals I am thankful for….



Lukas Blakk

I’m thankful for Lukas for being an excellent mentor to me during her last two years at Mozilla. Lukas helped me learn skills and gave me opportunities that many Mozillians would not otherwise have. I’m very grateful for her mentoring, teaching, and her passion to help others, especially those who have less opportunity.

Jeff Beatty

I’m especially thankful for Jeff. This year, out of the blue, he came to me and offered to have his university students support an open source project I launched, and this has helped us grow our l10n community. I’m also grateful for Jeff’s overall thoughtfulness and for being able to go to him over the last couple of years for advice and feedback.

Majken Connor

I’m thankful for Majken. She is always a very friendly person who is there to welcome people to the Mozilla community, but I also appreciate how outspoken she is. She is willing to share opinions and beliefs that add value to conversations and help us think outside the box. No matter how busy she is, she has been a constant in the Mozilla Project, always there to lend advice or listen.

Emma Irwin

I’m thankful for Emma. She does something quite different from teaching us how to lead or build community: she teaches us how to participate better and build better participation into open source projects. I appreciate her efforts in teaching future generations the open web and being such a great advocate for participation.

Stormy Peters

I’m thankful for Stormy. She has always been a great leader and it’s been great to work with her on evangelism and event stuff at Mozilla. But even more important than all the work she did at Mozilla, I appreciate all the work she does with various open source nonprofits and the committees and boards she serves on or advises, work you do not hear about because she does it for the impact.


Jonathan Riddell

I’m thankful for Jonathan. He has done a lot for Ubuntu, Kubuntu, KDE and the greater open source ecosystem over the years. Jonathan has been a devoted open source advocate, always standing up for what is right and unafraid to share his opinion even if it meant disappointing others.

Elizabeth Krumbach Joseph

I’m thankful for Elizabeth. She has been a good friend, mentor and listener for years now and does so much more than she gets credit for. Elizabeth is welcoming in the multiple open source projects she is involved in, and if you contribute to any of those projects you know who she is because of the work she does.


Paolo Rotolo

I’m thankful for Paolo, our lead Android developer, who drives our Android development efforts and is a driving force in moving forward the vision behind Glucosio and helping people around the world. I enjoy near-daily, if not multiple-times-a-day, conversations with him about the technical bits and the big picture.

The Core Team + Contributors

I’m very thankful for everyone on the core team and all of our contributors at Glucosio. Without all of you, we would not be what we are today, which is a growing open source project doing amazing work to bring positive change to Diabetes.


Leslie Hawthorne

I’m thankful for Leslie. She is always very helpful for advice on all things open source and especially open source non-profits. I think she helps us all be better human beings. She really is a force of good and perhaps the best friend you can have in open source.

Jono Bacon

I’m thankful for Jono. While we often disagree on things, he always has very useful feedback and has an ocean of community management and leadership experience. I also appreciate Jono’s no bullshit approach to discussions. While it can be rough for some, the cut to the chase approach is sometimes a good thing.

Christie Koehler

I’m thankful for Christie. She has been a great listener over the years I have known her and has been very supportive of community at Mozilla and also of inclusion & diversity efforts. Christie is a teacher but also an organizer, and in addition to all the things she did at Mozilla that I am thankful for, I also appreciate her efforts locally with Stumptown Syndicate.

Air MozillaWebdev Beer and Tell: November 2015

Webdev Beer and Tell: November 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Addons BlogSigning API now available

Over the years, addons.mozilla.org (AMO) has had many APIs. These are used by Firefox and other clients to provide add-on listings, blocklists, and other features. But there hasn’t really been an API that developers can interact with. As part of ongoing improvements to the site, we’ve started focusing on producing APIs for add-on developers as well.

Our first one aims to make add-on signing a little easier for developers. This API enables you to upload an XPI and get back the signed add-on if it passes all the validation checks.

To use this API, log in to addons.mozilla.org and go to Tools > Manage API Keys. Then agree to the terms and fetch an API key and secret to use in subsequent API calls.


Once you’ve done that, generate authorization tokens and use the documented API to sign your add-on.

The documented examples use curl to interact with the API. For example:

curl "https://addons.mozilla.org/api/v3/addons/<add-on-id>/versions/<version>/" -XPUT --form 'upload=@build/my-addon.xpi' -H 'Authorization: JWT your-jwt-token'

This is just the first of the APIs that we hope to add to the site and a path that we hope will lead to increased functionality throughout the add-ons ecosystem. This feature is under development, so we are keen to hear feedback or any issues.

John O'DuinnThe real cost of an office

Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons

The shift from “building your own datacenter” to “using the cloud” revolutionized how companies viewed internal infrastructure, and significantly reduced the barrier to starting your own fast-growth, global-scale company. Suddenly, you could have instant, reliable, global-scale infrastructure.

(Personally, I dislike the term “cloud” but it’s the easiest vendor-neutral term I know for describing essential infrastructure running on rent-by-the-hour Amazon AWS, Google GCE, Microsoft Azure and others…)

Like any new major change, “the cloud” went through an uphill acceptance curve with resistance from established nay-sayers. Meanwhile, smaller companies with no practical alternatives jumped in with both feet and found that “the cloud” worked just fine. And scaled better. And was cheaper to run. And was faster to setup, so the opportunity-cost was significantly reduced.

Today, of course, “using the cloud” for your infrastructure has crossed the chasm. It is the default. Today, if you were starting a new company, and went looking for funding to build your own custom datacenter, you’d need to explain why you were not “using the cloud”. Deciding to have your own physical data center involves one-time-setup costs as well as ongoing recurring operational costs. Similarly, deciding to have a physical office involves one-time-setup costs as well as ongoing recurring operational costs.

Rethinking infrastructure from the fixed costs of servers and datacenters to rented by the hour “in the cloud” is an industry game changer. Similarly, rethinking the other expensive part of a company’s infrastructure — the physical office — is an industry game changer.

Just like physical datacenters, deciding to setup an office is an expensive decision which complicates, not liberates, the ongoing day-to-day life of your company.

The reality of having an office

It is easy to skip past the “Do we really need an office?” question – and plunge into the mechanics, without first thinking through some company-threatening questions.

What city, and which neighborhood in the city, is the best location for your company office? Sometimes the answer is “near to where the CEO lives”, or “near the offices of our lead VCs”. However, this should include answers to questions like “where will we find most of the talent (people) we plan to hire?” and “where will most of our customers be?”.

What size should your office be? This requires thinking through your hiring plans — not just for today, but also for the duration of the lease — typically 3–5–10 years. The consequences of this decision may be even longer, given how some people do not like relocating! When starting a company, it is very tricky to accurately predict the answers to these questions for multiple years into the future.

Business plans change. Technologies change. Market needs and finances change. Product scope changes. Companies pivot. Brick-and-mortar buildings (usually) stay where they are.

If you convince yourself that your company does need a physical office, setting up and running an office is “non-trivial”. You quickly get distracted by the expensive logistics and operational mechanics of a physical building – instead of keeping focus on people and the shipping product.

You need to negotiate, sign and pay leases. Debate offices-with-doors vs open-plan — and if open-plan, do you want library-quiet, or bull-pen with cross-chatter and music? Negotiate seating arrangements — including the who-gets-a-window-view debate. Construct the actual office-space, bathrooms and kitchens. Pick, buy and install desks, chairs, ping-pong tables and fridges. Set up wifi, security doorbadge systems, printers, phones. Hire staff who are focused on running the physical office, not focused on your product. The list goes on and on. All of these take time, money and most importantly focus. This distracts humans away from the entire point of the company — hiring humans to create and ship product to earn money. And the distraction does not end once the office is built — maintaining and running a physical office takes ongoing time, money and focus.

After your office is up-and-running, you discover the impact this new office has on hiring. You pay to relocate people who would be great additions to your company, but do not live near your new office. You are disappointed by good people turning down job offers because of the location. You have debates about “hiring the best person for the job” vs “hiring the best person for the job who is willing to relocate”. You have to limit hiring because you don’t have a spare desk available. You need to sublease a part of your new office space because growth plans changed when revenue didn’t come in as well as hoped – and now you have unused, idle office space costing you money every month.

The benefits of no office

You dedicate more time, money and focus on the people, and the shipping product — simply by avoiding the financial costs, lead-time-delays and focus-distractions of setting up a physical office.

Phrased another way: Distributed teams let you focus the company time and money where it is most important — on the people and the product. After all, it doesn’t matter how fancy your office is unless you have a product that people want to use.

Having no office lets you sidestep a few potentially serious and distracting ongoing problems:

You don’t need to worry about signing a lease for a space that is too small (or too large) for the planned growth of the company. You avoid adding a large recurring cost (a lease) to the company books, which impacts your company’s financial burn rate.

You don’t need to worry if the location of the office helps or hinders future hiring plans. You don’t need to worry about good people turning down your job offers simply because of the office location. You can hire from a significantly larger pool of candidates, so you can hire better and faster than all-in-one-location competitors.

Even larger companies like Aetna, with established offices, have been encouraging work-from-home since 2005 – because they can hire more people and also because of the money savings from real estate. Last I heard, Aetna was saving $78 million a year by having people work from home. Each year. No wonder Dell and others are now doing the same.

You sidestep human distractions about office layout.

You don’t need to worry about business continuity if the office is closed for a while.

Sidestepping all these distractions helps you (and everyone else in the company) focus attention and money on the people and the product you are building and shipping. This is a competitive advantage over all-in-one-office companies. Important stuff to keep in mind when you ask yourself “Do we really need an office?”

(Versions of this post are on my blog and also in the latest early release of my “Distributed” book.)

(Photo credit: Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons)

Daniel PocockDatabases of Muslims and homosexuals?

One US presidential candidate has said a lot recently, but the comments about making a database of Muslims may qualify as the most extreme.

Of course, if he really wanted to, somebody with this mindset could find all the Muslims anyway. A quick and easy solution would involve tracing all the mobile phone signals around mosques on a Friday. Mr would-be President could compel Facebook and other social networks to disclose lists of users who identify as Muslim.

Databases are a dangerous side-effect of gay marriage

In 2014 there was significant discussion about Brendan Eich's donation to the campaign against gay marriage.

One fact that never ranked very highly in the debate at the time is that not all gay people actually support gay marriage. Even where these marriages are permitted, not everybody who can marry now is choosing to do so.

The reasons for this are varied, but one key point that has often been missed is that there are two routes to marriage equality: one involves permitting gay couples to visit the register office and fill in a form just as other couples do. The other route to equality is to remove all the legal artifacts around marriage altogether.

When the government does issue a marriage certificate, it is not long before other organizations start asking for confirmation of the marriage. Everybody from banks to letting agents and Facebook wants to know about it. Many companies outsource that data into cloud CRM systems such as Salesforce. Before you know it, there are numerous databases that somebody could mine to make a list of confirmed homosexuals.

Of course, if everybody in the world was going to live happily ever after none of this would be a problem. But the reality is different.

While discrimination – whether against Muslims or homosexuals – is prohibited and can even lead to criminal sanctions in some countries, this attitude is not shared globally. Once gay people have their marriage status documented in the frequent flyer or hotel loyalty program, or in the public part of their Facebook profile, there are various countries where they are going to be at much higher risk of prosecution/persecution. The equality to marry in the US or UK may mean they have less equality when choosing travel destinations.

Those places are not as obscure as you might think: even in Australia, regarded as a civilized and laid-back western democracy, the state of Tasmania fought tooth-and-nail to retain the criminalization of virtually all homosexual conduct until 1997 when the combined actions of the federal government and high court compelled the state to reform. Despite the changes, people with some of the most offensive attitudes are able to achieve and retain a position of significant authority. The same Australian senator who infamously linked gay marriage with bestiality has successfully used his position to set up a Senate inquiry as a platform for conspiracy theories linking Halal certification with terrorism.

There are many ways a database can fall into the wrong hands

Ironically, one of the most valuable lessons about the risk of registering Muslims and homosexuals was an injustice against the very same tea-party supporters a certain presidential candidate is trying to woo. In 2013, it was revealed IRS employees had started applying a different process to discriminate against groups with Tea party in their name.

It is not hard to imagine other types of rogue or misinformed behavior by people in positions of authority when they are presented with information that they don't actually need about somebody's religion or sexuality.

Beyond this type of rogue behavior by individual officials and departments, there is also the more sinister proposition that somebody truly unpleasant is elected into power and can immediately use things like a Muslim database, surveillance data or the marriage database for a program of systematic discrimination. France had a close shave with this scenario in the 2002 presidential election, when Jean-Marie Le Pen, who has at least six convictions for racism or inciting racial hatred, made it to the final round in a two-candidate run-off with Jacques Chirac.

The best data security

The best way to be safe – wherever you go, both now and in the future – is not to have data about yourself in any database. When filling out forms, think need-to-know. If some company doesn't really need your personal mobile number, your date of birth, your religion or your marriage status, don't give it to them.

Support.Mozilla.OrgWhat’s up with SUMO – 20th November

Hello, SUMO Nation!

Good to see you reading these words again. Thank you for dropping by and being willing to learn more about the most recent goings-on at SUMO.

Welcome, new contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the last week

  • SynergSINE – for his proactive attitude and conversation with the Ivory Coast Mozillians who are interested in participating in SUMO!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Last SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 23rd of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum


  • for Android
    • Nothing new to report.
  • for Desktop
    • All quiet on the desktop front.
  •  for iOS
    • We keep getting more users!
  • Firefox OS
    • Guess what… no big news here, either ;-)
All this quiet is good – time to recharge our Moz-batteries and get ready for a busy end-of-year season! We wish you a great weekend and hope to see you around on Monday. Take it easy!

Joel MaherIntroducing the contributors for the MozCI Project

Having previously announced who will be working on Pulse Guardian, the Web Platform Tests Results Explorer, and the Web Driver Infrastructure projects, I would like to introduce the contributors for the 4th project this quarter, Mozilla CI Tools – Polish and Packaging:

* MikeLing (:mikeling on IRC) –

What interests you in this specific project?

As its documentation describes, Mozilla CI Tools is designed to allow interacting with the various components which compose Mozilla’s Continuous Integration. So I think getting involved with it can help me learn more about how Treeherder and mozci work and give me a better understanding of the A-team.

What do you plan to get out of this after 8 weeks?

Keep trying my best to contribute! Hope I can push this project forward with Armen, Alice and other contributors in the future :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I’m a guy who would like to keep challenging myself and try new stuff.

* Stefan (:F3real on IRC) –

What interests you in this specific project?

I thought it would be good starting project and help me learn new things.

What do you plan to get out of this after 8 weeks?

Expand my knowledge and meet new people.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I play guitar but I don’t think that’s really interesting.

* Vaibhav Tulsyan (:xenny on IRC) –

What interests you in this specific project?

Continuous Integration, in general, is interesting for me.

What do you plan to get out of this after 8 weeks?

I want to learn how to work efficiently in a team in spite of working remotely, learn how to explore a new code base and some new things about Python, git, hg and Mozilla. Apart from learning, I want to be useful to the community in some way. I hope to contribute to Mozilla for a long term, and I hope that this helps me build a solid foundation.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

One of my hobbies is to create algorithmic problems from real-world situations. I like to think a lot about the purpose of existence, how people think about things/events and what affects their thinking. I like teaching and gaining satisfaction from others’ understanding.


Please join me in welcoming all the contributors to this project, as well as the previously mentioned ones, as they have committed to working on a larger project in their free time!

Joel MaherIntroducing a contributor for the WebDriver Infrastructure project

As I previously announced who will be working on Pulse Guardian and the Web Platform Tests Results Explorer, let me introduce who will be working on Web Platform Tests – WebDriver Infrastructure:

* Ravi Shankar (:waffles on IRC) –

What interests you in this specific project?

There are several. Though I love coding, I’m usually more inclined toward Python & Rust (so, a “Python project” is what excited me at first). Then there’s my recently developed interest in networking code (ever since my work on a network-related issue in Servo), and finally, I’m very curious about how we establish the Python-JS communication and emulate user input.

What do you plan to get out of this after 8 weeks?

Over the past few months of my (fractional) contributions to Mozilla, I’ve always learned something useful whenever I finished working on a bug/issue. Since this is a somewhat “giant” implementation that requires more time and commitment, I think I’ll learn a great deal of stuff in relatively little time (which is what excites me).

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Well, I juggle, or I (try to) reproduce some random music on my flute (actually, a Bansuri – an Indian flute) when I’m away from my keyboard.


We look forward to working with Ravi over the next 8 weeks. Please say hi on IRC when you see :waffles in the channel :)

Joel MaherIntroducing 2 contributors for the Web Platform Tests project

As I previously announced who will be working on Pulse Guardian, let me introduce who will be working on Web Platform Tests – Results Explorer:

* Kalpesh Krishna (:martianwars on irc) –

What interests you in this specific project?

I have been contributing to Mozilla for a couple of months now and was keen on taking up a project on a slightly larger scale. This particular project was recommended to me by Manish Goregaokar. I had worked on a few issues in Servo prior to this, and all of them involved Web Platform Tests in some form. That was the initial motivation. I find this project really interesting as it gives me a chance to help build an interface that will simplify browser comparison so much! This project seems to involve more planning than execution, and that’s another reason I’m so excited! Besides, I think this would be a good chance to try out some statistics / data visualization ideas I have, though they might be a bit irrelevant to the goal.

What do you plan to get out of this after 8 weeks?

I plan to learn as much as I can, make some great friends, and most importantly make a real sizeable contribution to open source :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I love to star gaze. Constellations and Messier objects fascinate me. Given a chance, I would love to let my imagination run wild and draw my own set of constellations! I have an unusual ambition in life. Though a student of Electrical Engineering, I have always wanted to own a chocolate factory (too much Roald Dahl as a child) and have done some research regarding the same. Fingers crossed! I also love to collect Rubik’s Cube-style puzzles. I make it a point to increase my collection by 3-4 puzzles every semester and learn how to solve them. I’m not fast at any of them, but I love solving them!

* Daniel Deutsch

What interests you in this specific project?

I am really interested in getting involved in Web Standards. Also, I am excited to be involved in a project that is bigger than myself, something that spans the Internet and makes it better for everyone (web authors and users).

What do you plan to get out of this after 8 weeks?

As primarily a Rails developer, I am hoping to expand my skill-set. Specifically, I am looking forward to writing some Python and learning more about JavaScript. Also, I am excited to dig deeper into automated testing. Lastly, I think Mozilla does a lot of great work and am excited to help in the effort to drive the web forward with open source contribution.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I live in Brooklyn, NY and have terrible taste in music. I like writing long emails, running, and Vim.


We look forward to working with these 2 great hackers over the next 8 weeks.

Joel MaherIntroducing a contributor for the Pulse Guardian project

3 weeks ago we announced the new Quarter of Contribution; today I would like to introduce the participants. Personally, I really enjoy meeting new contributors and learning about them. It is exciting to see interest in all 4 projects. Let me introduce who will be working on Pulse Guardian – Core Hacker:

Mike Yao

What interests you in this specific project?

Python, infrastructure

What do you plan to get out of this after 8 weeks?

Continue to contribute to Mozilla

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Cooking/food lover; I was a chef a long time ago. Free software/open source and Linux changed my mind and my career.


I do recall one other eager contributor who might join in late once exams are completed. Meanwhile, enjoy learning a bit about Mike Yao (who was introduced to Mozilla by Mike Ling, who did our first ever Quarter of Contribution).

Mozilla FundraisingOur plan for fundraising A/B testing in 2015

Our end of year (EOY) fundraising campaign is getting started today, so I wanted to write a note about our A/B testing plan and the preparation work that has gone into this so far. Although right now our donation form … Continue reading

Daniel StenbergThis post was not bought

At times I post blog articles that push the view counter up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get the simpler offers of adding random ads or “text only information” on specific individual pages on my sites that some SEO person out there figured out could potentially attract an audience searching for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud, so possibly for a good cause, but still…

This is by no means a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get, and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions revealed are my own – even if, of course, you may not agree with me, and I may make mistakes and be completely wrong at times or even many times. You can rest assured that I made those mistakes on my own and I was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing the privacy intrusions and optimizing the contents: the stuff downloaded from my sites is what your browser needs to render the page. Not heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary and I work on several projects that are dear to me so of course I will show bias to some subjects. I don’t claim to have an objective view on things and I don’t even try to have that. When I write posts here, they come colored by my background and by what I am.

Justin DolskeFoxkeh Dance is back!

That’s right! Everyone’s favorite dancing mascot is back, baby!

Back in 2008, Alex Polvi (of Firefox crop circle fame) departed Mozilla to found his own startup. In one of the most epic farewell emails of all time, he created Foxkeh Dance, a Mozilla flavor of the Internet-classic Hampster Dance site.

Alas, domains expire, and for the last 5 years the domain has been the home of a domain squatter hoping to interest you in the usual assortment of spam. But a few weeks ago, I randomly checked the site and discovered it was available for registration! So I grabbed the domain and set about restoring it.

The ever-amazing Internet Archive has a cached version of the original 7-year-old site from August 24th, 2008… Mostly. It has the HTML, but not the images or background music. Luckily, a couple of contemporaneous Mozilla community sites included copies of the animated images, and from those I was able to restore what I believe are the original versions. (Update: it seems the Internet Archive is now using these newly-restored images to fill in their incomplete cache. Curious.) While the original embedded “hamster.mp3” file is lost, I remember it being a straight copy of the Hampster Dance site, and that’s easily available. Of course, the original site used plugins to play sound, so I’ve updated it to use a modern HTML5 <audio> replacement.

And now Foxkeh Dance is back!

For those unfamiliar, Foxkeh is Mozilla Japan’s cartoon mascot. Recently it’s been the unofficial mascot of the new Tracking Protection feature in Firefox (butt flames and all). I hope we’ll see more of the ‘lil guy in the future!

You may now resume dancing.

Monica ChewDownload files more safely with Firefox 31

Did you know that the estimated cost of malware is hundreds of billions of dollars per year? Even without data loss or identity theft, the time and annoyance spent dealing with infected machines is a significant cost.

Firefox 31 offers improved malware detection. Firefox has integrated Google’s Safe Browsing API for detecting phishing and malware sites since Firefox 2. In 2012 Google expanded their malware detection to include downloaded files and made it available to other browsers. I am happy to report that improved malware detection has landed in Firefox 31, and will have expanded coverage in Firefox 32.

In preliminary testing, this feature cuts the amount of undetected malware by half. That’s a significant user benefit.

What happens when you download malware? Firefox checks URLs associated with the download against a local Safe Browsing blocklist. If the binary is signed, Firefox checks the verified signature against a local allowlist of known good publishers. If no match is found, Firefox 32 and later queries the Safe Browsing service with download metadata (NB: this happens only on Windows, because signature verification APIs to suppress remote lookups are only available on Windows). In case malware is detected, the Download Manager will block access to the downloaded file and remove it from disk, displaying an error in the Downloads Panel.
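
To make that flow concrete, here is a minimal Python sketch of the steps described above. Everything in it (the function names, the download fields, the stubbed remote lookup) is hypothetical and only illustrates the logic; it is not Firefox’s actual implementation.

# Hypothetical sketch of the download-protection flow described above.
# None of these names correspond to real Firefox code.

def remote_safebrowsing_lookup(metadata):
    # Stub for the remote application-reputation query (Firefox 32+,
    # Windows only). A real client would send the download metadata
    # and parse the service's verdict.
    return "UNKNOWN"

def should_block_download(download, local_blocklist, publisher_allowlist,
                          is_windows):
    """Return True if the download should be blocked as malware."""
    # 1. Check the URLs associated with the download against the local
    #    Safe Browsing blocklist.
    if any(url in local_blocklist for url in download["urls"]):
        return True

    # 2. If the binary is signed, check the verified signature against
    #    the local allowlist of known good publishers.
    if download.get("verified_publisher") in publisher_allowlist:
        return False

    # 3. No local match: Firefox 32+ queries the remote Safe Browsing
    #    service with download metadata (Windows only).
    if is_windows:
        return remote_safebrowsing_lookup(download["metadata"]) == "MALWARE"

    return False

# Example: a signed download from a known good publisher is allowed.
dl = {"urls": ["https://example.com/setup.exe"],
      "verified_publisher": "Example Corp",
      "metadata": {}}
print(should_block_download(dl, set(), {"Example Corp"}, is_windows=True))  # False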

How can I turn this feature off? This feature respects the existing Safe Browsing preference for malware detection, so if you’ve already turned that off, there’s nothing further to do. Below is a screenshot of the new, beautiful in-content preferences (Preferences > Security) with all Safe Browsing integration turned off. I strongly recommend against turning off malware detection, but if you decide to do so, keep in mind that phishing detection also relies on Safe Browsing.

Many thanks to Gian-Carlo Pascutto and Paolo Amadini for reviews, and the Google Safe Browsing team for helping keep Firefox users safe and secure!

Monica ChewMaking decisions with limited data

It is challenging but possible to make decisions with limited data. For example, take the rollout saga of public key pinning.

The first implementation of public key pinning included enforcing pinning on addons.mozilla.org (AMO). In retrospect, this was a bad decision because it broke the Addons Panel and generated pinning warnings 86% of the time. As it turns out, the pinset was missing some Verisign certificates used by the site, and the pinning enforcement on AMO included subdomains. Having more data lets us avoid bad decisions.

To enable safer rollouts, we implemented a test mode for pinning. In test mode, pinning violations are counted but not enforced. With sufficient telemetry, it is possible to measure how badly sites would break without actually breaking the site.
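
As a rough illustration of the difference between the two modes (with entirely hypothetical names; this is not the actual Gecko pinning code), a pin check in test mode records the violation but still lets the connection through, while production mode rejects it:

from collections import Counter

telemetry = Counter()

def check_pins(pins, mode, cert_chain_key_hashes):
    """Return True if the TLS connection may proceed."""
    violated = not any(h in pins for h in cert_chain_key_hashes)
    if not violated:
        telemetry["pinning_ok"] += 1
        return True

    telemetry["pinning_violation"] += 1
    if mode == "production":
        return False   # enforce: reject the connection
    return True        # test mode: count the violation, don't break the site

# A chain whose key hashes don't match the pinset:
print(check_pins({"hashA", "hashB"}, "test", {"hashC"}))        # True (counted only)
print(check_pins({"hashA", "hashB"}, "production", {"hashC"}))  # False (enforced)
print(dict(telemetry))  # {'pinning_violation': 2}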

Due to privacy restrictions in telemetry, we do not collect per-organization pinning violations except for Mozilla sites that are operationally critical to Firefox. This means that it is not possible to distinguish pinning violations for Google domains from Twitter domains, for example. I do not believe that collecting the aggregated number of pinning violations for sites on the Alexa top 10 list constitutes a privacy violation, but I look forward to the day when technologies such as RAPPOR make it easier to collect actionable data in a privacy-preserving way.

Fortunately for us, Chrome has already implemented pinning on many high-traffic sites. This is fantastic news, because it means we can import Chrome’s pin list in test mode with relatively high assurance that the pin list won’t break Firefox, since it is already in production in Chrome.

Given sufficient test mode telemetry, we can decide whether to enforce pins instead of just counting violations. If the pinning violation rate is sufficiently low, it is probably safe to promote the pinned domain from test mode to production mode.
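
A back-of-the-envelope version of that promotion decision might look like the sketch below; the threshold is an arbitrary illustrative value, not a number from our rollout:

def safe_to_promote(violations, total_checks, max_rate=0.001):
    """Promote a pinset from test mode to production only if the observed
    violation rate stays below max_rate (an illustrative threshold)."""
    if total_checks == 0:
        return False  # not enough data yet; stay in test mode
    return violations / total_checks < max_rate

# e.g. 40 violations out of 100000 pin checks is a 0.04% violation rate
print(safe_to_promote(40, 100000))  # True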

Because the current implementation of pinning in Firefox relies on built-in static pinsets and we are unable to count violations per-pinset, it is important to track changes to the pinset file in the dashboard. Fortunately, HighStock supports event markers, which somewhat alleviates this problem, and David Keeler also contributed some tooltip code to roughly associate dates with Mercurial revisions. Armed with the timeseries of pinning violation rates and event markers for the dates when we promoted organizations to production mode (or when high-traffic organizations like Dropbox were added in test mode due to a new import from Chromium), we can see whether pinning is working or not.

Telemetry is useful for forensics, but in our case, it is not useful for catching problems as they occur. This limitation is due to several difficulties, which I hope will be overcome by more generalized, comprehensive SSL error-reporting and HPKP:
  • Because pinsets are static and built-in, there is sometimes a 24-hour lag between making a change to a pinset and reaching the next Nightly build.
  • Telemetry information is only sent back once per day, so we are looking at a 2-day delay between making a change and receiving any data back at all.
  • Telemetry dashboards (as accessible from telemetry.js and telemetry.mozilla.org) need about a day to aggregate, which adds another day.
  • Update uptake rates are slow. The median time to update Nightly is around 3 days, getting to 80% takes 10 days or longer.
Due to these latency issues, pinning violation rates take at least a week to stabilize. Thankfully, telemetry is on by default in all pre-release channels as of Firefox 31, which gives us a lot more confidence that the pinning violation rates are representative.
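
Adding up the delays listed above shows why a week is roughly the floor; the small sketch below simply restates the figures from the list:

# Back-of-the-envelope sum of the latencies listed above, in days.
pinset_to_nightly = 1      # pinset change -> next Nightly build
telemetry_submission = 1   # pings are sent back roughly once per day
dashboard_aggregation = 1  # dashboards need about a day to aggregate
median_update_uptake = 3   # median time for Nightly users to update

total = (pinset_to_nightly + telemetry_submission
         + dashboard_aggregation + median_update_uptake)
print(total)  # roughly 6 days before violation rates even begin to settle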

Despite all the caveats and limitations, using these simple tools we were able to successfully roll out pinning to pretty much all of the sites that we’ve attempted (including AMO, our unlucky canary) as of Firefox 34, and we look forward to expanding coverage.

Thanks for reading, and don’t forget to update your Nightly if you love Mozilla! :)