Jan de Mooij: Making `this` a real binding in SpiderMonkey

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How `this` Works in JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() { "use strict"; return this; }
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() { return this; }
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    () => this; // `this` is 123
}
f.call(123);

And, of course, this can be used inside eval:

function f() {
    "use strict";
    eval("this"); // 123
}
f.call(123);

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that this scheme was fundamentally incompatible with ES6 derived class constructors, because they initialize their 'this' value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support: arrow functions and eval frames used their own (copied) this value instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call Base constructor, initialize our `this` value.
        super();

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}

We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot, they just reference the .this variable of the outer function. For instance, consider the function below:

function f() {
    return () => this.foo();
}

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...
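To illustrate the concern (a sketch, not actual engine code): the arrow function below must keep seeing f's this, even though it sits inside a with-block whose scope object could shadow ordinary names.

function f() {
    var scope = { foo: 1 };
    with (scope) {
        // The arrow reads f's internal .this binding; the with-scope
        // object must not be consulted for that lookup.
        return () => this;
    }
}
f.call({ answer: 42 })(); // still the object passed to f, not `scope`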

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior, this made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).


We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

Mozilla Addons Blog: A New Firefox Add-ons Validator

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on addons.mozilla.org (AMO) are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, these alerts help developers pinpoint the bits of code that might make the browser buggy or slow, among other problems. The validator also helps detect insecure add-on code, which keeps browsing fast and safe.

Our current validator is a bit old, and because it's written in Python with JavaScript dependencies, it's difficult for add-on developers to install themselves. This means add-on developers often don't know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, which you can install right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not production-quality code yet, and some rules remain unimplemented, but we're looking to finish it by the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub at github.com/mozilla/addons-validator. We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

Matjaž Horvat: Meet Jarek, splendid Pontoon contributor

Some three months ago, a new guy named jotes showed up in the #pontoon IRC channel. It quickly became obvious he was running a local instance of Pontoon and was ready to start contributing code. Fast forward to the present, and he is one of the core Pontoon contributors. In this short period of time, he implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we need a more detailed approach with team managers manually granting permission to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended me a documentary called Code Rush. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with power to change the world. During one of the sleepless nights I visited whatcanidoformozilla.org and after a few patches I knew Mozilla is my place. A place where I can learn something new with help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

Tarek Ziadé: Should I use PYTHONOPTIMIZE?

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

try:
    assert hasattr(SomeObject, 'some_attribute')
except AssertionError:
    ...  # handle the missing attribute

Relying on assert like this didn't strike me as a good idea, because when Python is launched using the PYTHONOPTIMIZE flag, which you can activate with the eponymous environment variable or with -O or -OO, all assertions are stripped from the code.

To my surprise, a lot of people dismiss -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0, 1 (-O) or 2 (-OO). 0 is the default; nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files are using the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2:

  • everything 1 does
  • docstrings are stripped.
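
To see the effects for yourself, here is a small self-contained sketch; run it with python, python -O and python -OO and compare the output:

import sys

def double(x):
    """Return twice x."""
    assert isinstance(x, int), "x must be an int"       # stripped under -O and -OO
    return x * 2

print("sys.flags.optimize:", sys.flags.optimize)        # 0, 1 or 2
print("__debug__:", __debug__)                           # False under -O and -OO
print("docstring kept:", double.__doc__ is not None)     # False under -OO
try:
    double("oops")
    print("assert was stripped, bad input slipped through")
except AssertionError:
    print("assert caught the bad input")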

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that has changed is related to pdb: you could not run some step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the pyo vs pyc thing should go away one day, according to PEP 488.

So where does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information, or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself will, in the future, add some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values of PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO and without either, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

Nick Cameron: Macro plans, overview

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • change the hygiene algorithm, and what hygiene is applied to,
  • address modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros


I intend to tweak the system of traits and enums, etc. to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately, because I believe function-like vs attribute-like macros will take different arguments; furthermore, I think we need versions for hygienic expansion and expansion with DIY hygiene, and the latter case must be supplied with some hygiene information in order for the function to do its own hygiene. I'm not certain that is the right approach, though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us storing hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the eco-system comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable. So, a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros. I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way, which they no longer do causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. However, I don't think this is possible to prevent. In the most pathological case, the macro could check if the current date is later than a given one and in that case panic. So, we are basically passing the buck about backwards compatibility with the language to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.


Libraries

I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro) and more functionality is provided by the ecosystem via crates.io.

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • name resolution of module and macro names - note that I expect these to return token trees, which gives a macro access to the whole program, I'm not sure this is a good idea since it breaks locality for macros
  • check and set feature gates
  • mark attributes and imports as used

The most important external libraries I would like to see would be to provide an AST-like abstraction, parsing, and tools for building and manipulating AST. These already exist (syntex, ASTer), so I am confident we can have good solutions in this space, working towards crates which are provided on crates.io, but are officially blessed (similar to the goals of other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, and there is then a second step to produce AST. Since hygiene info will operate at the tokens level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See section on modularisation below, the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.
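
Put together, a sketch of how the proposed naming would read (the surface syntax is still being designed and could change; macro bodies are elided with ...):

// in crate `foo`, module `bar`
macro! baz { ... }

// in a crate that depends on `foo`
use foo::bar::baz!;

fn main() {
    baz!(...);            // or, without the import: ::foo::bar::baz!(...)
}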

I expect that functions expected to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between the macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.


Hygiene

I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.


Modularisation

The goal here is to treat macros the same way as other items, naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets of scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.


Syntactic changes

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros, see blog post.

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => { ... }

Perhaps we drop the => too.

We need to allow privacy annotations for macros, not sure the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable, there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!, if you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try and help anyone who is affected). We will deprecate macro_rules! once macro! is stable. We will track usage with the intention of removing macro_rules at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!, this will be a breaking change, we will mitigate by continuing to support the old but deprecated versions of the macros. Hopefully, modularisation will support this (needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only difference will be using macro! instead of macro_rules! and the new scoping/naming rules may lead to name clashes that didn't exist before (note this is not in itself a breaking change, it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.


There are two backwards compatibility hazards with hygiene, both affect only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types etc, by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

Air Mozilla: Privacy for Normal People

Privacy for Normal People: Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

Armen Zambrano: Welcome F3real, xenny and MikeLing!

As described by jmaher, this week we started the first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you get to learn and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This would not be possible without her help.

This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen Zambrano: Mozilla CI tools meet up

In order to help the contributors of mozci's quarter of contribution, we have set up a Mozci meet up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room: https://v.mozilla.com/flex.html?roomdirect.html&key=GC1ftgyxhW2y

This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air Mozilla: Martes mozilleros, 24 Nov 2015

Martes mozilleros: Bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

Kim Moir: USENIX Release Engineering Summit 2015 recap

On November 13th, I attended the USENIX Release Engineering Summit in Washington, DC. This summit was alongside the larger LISA conference at the same venue. Thanks to Dinah McNutt, Gareth Bowles, Chris Cooper, Dan Tehranian and John O'Duinn for organizing.

I gave two talks at the summit.  One was a long talk on how we have scaled our Android testing infrastructure on AWS, as well as a look back at how it evolved over the years.

Picture by Tim Norris - Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

Scaling mobile testing on AWS: Emulators all the way down from Kim Moir

I gave a second lightning talk in the afternoon on the problems we face with our large distributed continuous integration, build and release pipeline, and how we are working to address the issues. The theme of this talk was that managing a large distributed system is like being the caretaker for the water, or some days, the sewer system for a city. We are constantly looking for system leaks and implementing system monitoring. And we will probably have to replace parts of it with something new while keeping the existing system running.

Picture by Korona Lacasse - Creative Commons 2.0 Attribution 2.0 Generic https://www.flickr.com/photos/korona4reel/14107877324/sizes/l

In preparation for this talk, I did a lot of reading on complex systems design and designing for recovery from failure in distributed systems. In particular, I read Donella Meadows' book Thinking in Systems. (Cate Huston reviewed the book here). I also watched several talks by people who spoke about the challenges they face managing their distributed systems, including the following:
I'd also like to thank all the members of Mozilla releng/ateam who reviewed my slides and provided feedback before I gave the presentations.
I'd also like to thank all the members of Mozilla releng/ateam who reviewed my slides and provided feedback before I gave the presentations.

The attendees of the summit attended the same keynote as the LISA attendees. Jez Humble, well known for his Continuous Delivery and Lean Enterprise books, provided a keynote on Lean Configuration Management which I really enjoyed. (Older versions of the slides, from another conference, are available here and here.)

In particular, I enjoyed his discussion of the cultural aspects of devops. I especially liked that he stated that "You should not have to have planned downtime or people working outside business hours to release". He also talked a bit about how many of the leaders that are looked up to as visionaries in the tech industry are known for not treating people very well, and that this is not a good example to set for others who believe it to be the key to their success. For instance, he said something like "what more could Steve Jobs have accomplished had he treated his employees less harshly".

Another concept he discussed which I found interesting was that of the strangler application. When moving from a large monolithic application, the goal is to split out the existing functionality into services until the original application is left with nothing. Exactly what Mozilla releng is doing as we migrate from Buildbot to Taskcluster.

At the release engineering summit itself, Lukas Blakk from Pinterest gave a fantastic talk, Stop Releasing off Your Laptop—Implementing a Mobile App Release Management Process from Scratch in a Startup or Small Company. This included a grumpy cat picture to depict how Lukas thought the rest of the company felt when a more structured release process was implemented.

Lukas also included a timeline of the tasks she implemented in her first six months working at Pinterest. Very impressive to see the transition!

Another talk I enjoyed was Chaos Patterns - Architecting for Failure in Distributed Systems by Jos Boumans of Krux. (Similar slides from an earlier conference are here). He talked about some high-profile distributed systems that failed and how chaos engineering can help illuminate these issues before they hit you in production.

For instance, it is impossible for Netflix to model their entire system outside of production, given that they consume around one third of nightly downstream bandwidth in the US.

Evan Willey and Dave Liebreich from Pivotal Cloud Foundry gave a talk entitled "Pivotal Cloud Foundry Release Engineering: Moving Integration Upstream Where It Belongs". I found this talk interesting because they talked about how they built Concourse, a CI system that is more scalable and natively builds pipelines. Travis and Jenkins are good for small projects but they simply don't scale for large numbers of commits, platforms to test or complicated pipelines. We followed a similar path that led us to develop Taskcluster.

There were many more great talks; hopefully more slides will be up soon!

Henrik Skupin: Survey about sharing information inside the Firefox Automation team

Within the Firefox Automation team we have been struggling a bit to share information about our work over the last couple of months. That mainly happened because I was alone and not able to blog more often than once a quarter. The same applies to our dev-automation mailing list, which mostly only received emails from Travis CI with testing results.

Given that the team has now been increased to 4 people (beside me this is Maja Frydrychowicz, Syd Polk, and David Burns), we want to be more open again and also try to get more people involved in our projects. To ensure that we do not make use of the wrong communication channels – depending on where most of our readers are – I have set up a little survey. It will only take you a minute to go through, but it will help us a lot to know more about the preferences of our automation geeks. So please take that little time and help us.

The survey can be found here and is open until end of November 2015:


Thank you a lot!

Nick Cameron: Macros pt6 - more issues

I discovered another couple of issues with Rust macros (both affect the macro_rules flavour).

Nested macros and arguments

These don't work because of the way macros do substitution. When expanding a macro, the expander looks for token strings starting with $ to expand. If there is a variable which is not bound by the outer macro, then it is an error. E.g.,

macro_rules! foo {
    () => {
        macro_rules! bar {
            ($x: ident) => { $x }
        }
    }
}

When we try to expand foo!(), the expander errors out because it can't find a value for $x; it doesn't know that the inner macro_rules! bar is binding $x.

The proper solution here is to make macros aware of binding and lexical scoping etc. However, I'm not sure that is possible because macros are not parsed until after expansion. We might be able to fix this by just being less eager to report these errors. We wouldn't get proper lexical scoping, i.e., all macro variables would need to have different names, but at least the easy cases would work.

Matching expression fragments


macro_rules! foo {
    ( if $e:expr { $s:stmt } ) => {
        if $e {
            $s
        }
    }
}

fn main() {
    let x = 1;
    foo! {
        if 0 < x {
            x // some statement
        }
    }
}

This gives an error because it tries to parse x { as the start of a struct literal. We have a hack in the parser where in some contexts where we parse an expression, we explicitly forbid struct literals from appearing so that we can correctly parse a following block. This is not usually apparent, but in this case, where the macro expects an expr, what we'd like to have is 'an expression but not a struct literal'. However, exposing this level of detail about the parser implementation to macro authors (not even procedural macro authors!) feels bad. Not sure how to tackle this one.

Relatedly, it would be nice to be able to match other fragments of the AST, for example the interior of a block. Again, there is the issue of how much of the internals we wish to expose.

(HT @bmastenbrook for the second issue).

Chris Finke: Reenact Now Available for Android

I’ve increased the audience for Reenact (an app for reenacting photos) by 100,000% by porting it from Firefox OS to Android.


It took me about ten evenings to go from "I don't even know what language Android apps are written in" to submitting the .apk to the Google Play™ store. I'd like to thank Stack Overflow, the Android developer docs, and Android Studio's autocomplete.

Reenact for Android, like Reenact for Firefox OS, is open-source; the complete source for both apps is available on GitHub. Also like the Firefox OS app, Reenact for Android is free and ad-free. Just think: if even just 10% of all 1 billion Android users install Reenact, I’d have $0!

In addition to making Reenact available on Android, I’ve launched Reenact.me, a home for the app. If you try out Reenact, send your photo to gallery@reenact.me to get it included in the photo gallery on Reenact.me.

You can install Reenact on Google Play or directly from Reenact.me. Try it out and let me know how it works on your device!

Mozilla Security Blog: Improving Revocation: OCSP Must-Staple and Short-lived Certificates

Last year, we laid out a long-range plan for improving revocation support for Firefox. As of this week, we’ve completed most of the major elements of that plan. After adding OneCRL earlier this year, we have recently added support for OCSP Must-Staple and short-lived certificates. Together, these changes enable website owners several ways to achieve fast, secure certificate revocation.

In an ideal world, the browser would perform an online status check (such as OCSP) whenever it verifies a certificate, and reject the certificate if the check failed. However, these checks can be slow and unreliable. They time out about 15% of the time, and take about 350ms even when they succeed. Browsers generally soft-fail on revocation in an attempt to balance these concerns.

To get back to stronger revocation checking, we have added support for short-lived certificates and Must-Staple to let sites opt in to hard failures. As of Firefox 41, Firefox will not do "live" OCSP queries for sufficiently short-lived certs (with a lifetime shorter than the value set in "security.pki.cert_short_lifetime_in_days"). Instead, Firefox will just assume the certificate is valid. There is currently no default threshold set, so users need to configure it. We are collecting telemetry on certificate lifetimes, and expect to set the threshold somewhere around the maximum OCSP response lifetime specified in the baseline requirements.
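
For example, to treat certificates with a lifetime shorter than ten days as short-lived (the value here is only an illustration, not a recommendation), a user could set the pref through about:config or add this line to a user.js file:

user_pref("security.pki.cert_short_lifetime_in_days", 10);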

OCSP Must-Staple makes use of the recently specified TLS Feature Extension. When a CA adds this extension to a certificate, it requires your browser to ensure a stapled OCSP response is present in the TLS handshake. If an OCSP response is not present, the connection will fail and Firefox will display a non-overridable error page. This feature will be included in Firefox 45, currently scheduled to be released in March 2016.

Mozilla Addons Blog: Test your add-ons for Multi-process Firefox compatibility

You might have heard the news that future versions of Firefox will run the browser UI separately from web content. This is called Multi-process Firefox (also “Electrolysis” or “e10s”), and it is scheduled for release in the first quarter of 2016.

If your add-on code accesses web content directly, using an overlay extension, a bootstrapped extension, or low-level SDK APIs like window/utils or tabs/utils, then you will probably be affected.
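
As an illustrative sketch (not code from any particular add-on), low-level SDK code like the following reaches from chrome code straight into web content, which is the kind of pattern that will probably be affected:

var utils = require("sdk/window/utils");
var browserWindow = utils.getMostRecentBrowserWindow();
// Reaching into the content document synchronously from chrome code like
// this is what needs rethinking once content runs in its own process.
var doc = browserWindow.gBrowser.selectedBrowser.contentDocument;
console.log(doc.title);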

To minimize the impact on users of your add-ons, we are urging you to test your add-ons for compatibility. You can find documentation on how to make them compatible here.

Starting Nov. 24, 2015, we are available to assist you every Tuesday in the #addons channel at irc.mozilla.org. Click here to see the schedule. Whether you need help testing or making your add-ons compatible, we’re here to help!

Emily Dunham: PSA: Docker on Ubuntu


$ sudo apt-get install docker
$ which docker
$ docker
The program 'docker' is currently not installed. You can install it by typing:
apt-get install docker
$ apt-get install docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

Oh, you wanted to run a docker container? The docker package in Ubuntu is some window manager dock thingy. The docker binary that runs containers comes from the docker.io system package.

$ sudo apt-get install docker.io
$ which docker

Also, if it can’t connect to its socket:

FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial
unix /var/run/docker.sock: permission denied. Are you trying to connect to a
TLS-enabled daemon without TLS?

you need to make sure you’re in the right group:

sudo usermod -aG docker <username>; newgrp docker

(thanks, stackoverflow!)

Daniel Stenberg: copy as curl

Using curl to perform an operation a user just managed to do with his or her browser is one of the more common requests and areas people ask for help about.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

You get the site shown with Firefox’s network tools.  You then right-click on the specific request you want to repeat in the “Web Developer->Network” tool when you see the HTTP traffic, and in the menu that appears you select “Copy as cURL”. Like this screenshot below shows. The operation then generates a curl command line to your clipboard and you can then paste that into your favorite shell window. This feature is available by default in all Firefox installations.


From Chrome

When you pop up More tools->Developer tools in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you're interested in, you right-click with the mouse and select "Copy as cURL", and it'll generate a command line for you in your clipboard. Paste that in a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.
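
In either browser, the generated command line looks roughly like this (the URL and headers below are invented for illustration; the real ones are copied from the request you clicked):

curl 'https://example.com/api/items' -H 'Accept: application/json' -H 'Referer: https://example.com/' -H 'User-Agent: Mozilla/5.0 ...' --compressed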


On Firefox, without using the devtools

If this is something you'd like to get done more often, you probably find using the developer tools a bit inconvenient and cumbersome to pop up just to get the command line copied. Then cliget is the perfect add-on for you, as it gives you a new option in the right-click menu so you can get a command line generated really quickly, like in this example when I right-click an image in Firefox:


This Week In Rust: This Week in Rust 106

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • nom 1.0 is released.
  • Freepass. The free password manager for power users.
  • Barcoders. A barcode encoding library for the Rust programming language.
  • fst. Fast implementation of ordered sets and maps using finite state machines.
  • Rusty Code. Advanced language support for the Rust language in Visual Studio Code.
  • Dybuk. Prettify the ugly Rustc messages (inspired by Elm).
  • Substudy. Use SRT subtitle files to study foreign languages.

Updates from Rust Core

99 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • Alexander Bulaev
  • Ashkan Kiani
  • Devon Hollowood
  • Doug Goldstein
  • Jean Maillard
  • Joshua Holmer
  • Matthias Kauer
  • Ole Krüger
  • Ravi Shankar

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is nom, a library of fast zero-copy parser combinators, which has already been used to create safe, high-performance parsers for a number of formats both binary and textual. nom just reached version 1.0, too, so congratulations for both the major version and the CotW status!

Thanks to Reddit user gbersac for the nom-ination! Submit your suggestions for next week!

Mark Finkle: An Engineer's Guide to App Metrics

Building and shipping a successful product takes more than raw engineering. I have been posting a bit about using Telemetry to learn about how people interact with your application so you can optimize use cases. There are other types of data you should consider too. Being aware of these metrics can help provide a better focus for your work and, hopefully, have a bigger impact on the success of your product.

Active Users

This includes daily active users (DAUs) and monthly active users (MAUs). How many people are actively using the product within a time-span? At Mozilla, we’ve been using these for a long time. From what I’ve read, these metrics seem less important when compared to some of the other metrics, but they do provide a somewhat easy to measure indicator of activity.

These metrics don't give a good indication of how much people use the product though. I have seen a variation called DAU/MAU (daily divided by monthly), which gives something like retention or engagement. DAU/MAU rates of 50% are seen as very good.
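
For example (numbers invented purely for illustration), a product with 500,000 daily actives out of 1,000,000 monthly actives would have:

\[ \frac{\mathrm{DAU}}{\mathrm{MAU}} = \frac{500{,}000}{1{,}000{,}000} = 0.5 = 50\% \]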


Engagement

This metric focuses on how much people really use the product, typically tracking the duration of session length or time spent using the application. The amount of time people spend in the product is an indication of stickiness. Engagement can also help increase retention. Mozilla collects data on session length now, but we need to start associating metrics like this with some of our experiments to see if certain features improve stickiness and keep people using the application.

We look for differences across various facets like locales and releases, and hopefully soon, across A/B experiments.

Retention / Churn

Based on what I’ve seen, this is the most important category of metrics. There are variations in how these metrics can be defined, but they cover the same goal: Keep users coming back to use your product. Again, looking across facets, like locales, can provide deeper insight.

Rolling Retention: % of new users return in the next day, week, month
Fixed Retention: % of this week’s new users still engaged with the product over successive weeks.
Churn: % of users who leave divided by the number of total users
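
Written out as formulas (my phrasing of the definitions above, with a made-up example):

\[ \text{churn} = \frac{\text{users who left during the period}}{\text{total users at the start of the period}} \]

\[ \text{fixed retention after } w \text{ weeks} = \frac{\text{week-0 new users still engaged in week } w}{\text{week-0 new users}} \]

So if 1,000 people install this week and 450 of them are still engaged next week, 1-week fixed retention is 45%, right at the "good" level in the guidance below.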

Most analysis tools, like iTunes Connect and Google Analytics, use Fixed Retention. Mozilla uses Fixed Retention with our internal tools.

I found some nominal guidance (grain of salt required):
1-week churn: 80% bad, 40% good, 20% phenomenal
1-week retention: 25% baseline, 45% good, 65% great

Cost per Install (CPI)

I have also seen this called Customer Acquisition Cost (CAC), but it’s basically the cost (mostly marketing or pay-to-play pre-installs) of getting a person to install a product. I have seen this in two forms: blended – where ‘installs’ are both organic and from campaigns, and paid – where ‘installs’ are only those that come from campaigns. It seems like paid CPI is the better metric.

Lower CPI is better and Mozilla has been using Adjust with various ad networks and marketing campaigns to figure out the right channel and the right messaging to get Firefox the most installs for the lowest cost.

Lifetime Value (LTV)

I’ve seen this defined as the total value of a customer over the life of that customer’s relationship with the company. It helps determine the long-term value of the customer and can help provide a target for reasonable CPI. It’s weird thinking of “customers” and “value” when talking about people who use Firefox, but we do spend money developing and marketing Firefox. We also get revenue, maybe indirectly, from those people.

LTV works hand-in-hand with churn, since the length of the relationship is inversely proportional to the churn. The longer we keep a person using Firefox, the higher the LTV. If CPI is higher than LTV, we are losing money on user acquisition efforts.
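
A common back-of-the-envelope way to express that relationship (an approximation I am adding here, not a Mozilla formula): since the average length of the relationship is roughly the inverse of the churn rate,

\[ \text{LTV} \approx \frac{\text{average revenue per user per period}}{\text{churn rate per period}}, \qquad \text{acquisition pays off when } \text{LTV} > \text{CPI} \]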

Total Addressable Market (TAM)

We use this metric to describe the size of a potential opportunity. Obviously, the bigger the TAM, the better. For example, we feel the TAM (People with kids that use Android tablets) for Family Friendly Browsing is large enough to justify doing the work to ship the feature.

Net Promoter Score (NPS)

We have seen this come up in some surveys and user research. It's supposed to show how satisfied your customers are with your product. This metric has its detractors though. Many people consider it a poor value, but it's still used quite a lot.

NPS can be as low as -100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (higher than zero) is felt to be good, and an NPS of +50 is excellent.

Go Forth!

If you don’t track any of these metrics for your applications, you should. There are a lot of off-the-shelf tools to help get you started. Level-up your engineering game and make a bigger impact on the success of your application at the same time.

Cameron Kaiser: TenFourFoxBox: because it's time to think inside the (fox)box. (a/k/a: we dust off Mozilla Prism for a new generation)

As long as there have been web browsers, there have been people trying to get the web freed up from the browser that confines it, because, you know, the Web wants to be free, or some other similarly aspirational throwaway platitude. These could be robots, or screen scrapers, or aggregating services, or chromeless viewers, but no matter what these browserless browsers are doing, they all tend to specialize in a particular site for any number of reasons usually circulating around business or convenience. This last type, the chromeless viewer, spawned the subcategory of "site specific browsers" that morphed into the "Rich Internet Application" and today infects our phones and tablets in the guise of the "lazy-*ss programmer mobile app."

Power Mac users have only had access to a few tools that could generate site-specific browsers. Until Adobe withdrew support, Adobe AIR could run on PowerPC 10.4+, but it was more for generally Internet-enabled apps and wasn't specifically focused at creating site-specific browsers, though it could, with a little work. Leopard users could use early betas of Fluid before that went Intel-only, and I know a few of you still do. Even Mozilla themselves got into the act with Mark Finkle's WebRunner, which became Mozilla Prism in 2007, languished after a few releases, got moved to Salsita and renamed WebRunner again in 2011, and cancelled there as well around the time of Firefox 5. However, WebRunner née Prism née WebRunner was never available for Power Macs; its required binary components were Intel-only, even though the Mozilla releases could run on 10.4, so that was about it for PowerPC. (Mozilla tried again shortly afterward with Chromeless, but this didn't get off the ground either, and was never intended as a Prism successor in any case. Speaking of, Google Chrome can do something similar, but Chrome was of course never released for Power Macs either because Alphagooglebet are meaniepants.)

There are unique advantages as TenFourFox users to having separate apps that only handle one site at a time. Lots of tabs requires lots of garbage collection, the efficiency of which Mozilla has improved substantially, but is still a big drain on old computers like ours which are always under memory pressure. In addition, currently Firefox and TenFourFox must essentially cooperatively multitask between tabs because JavaScript infamously has run-to-completion semantics, which is why you get the "script too long" dialogue box if the watchdog portion of the browser detects something's pegging it. Since major portions of the browser itself are written in JavaScript, plus all those addons you tart it up with, the browser chrome must also cooperatively multitask with everything else which is why sometimes it temporarily grinds to a halt. I've sunk an incredible amount of time over TenFourFox's existence into our just-in-time JavaScript compiler for PowerPC to reduce this overhead, but that only gets us so far, and the typical scripts on popular websites aren't getting any less complex. Mozilla intends to solve this problem (and others) with multi-process Firefox, also known as Electrolysis, but it won't work without significant effort on 10.4 and I have grave doubts about its ability to perform well on these older computers; for that reason, I've chosen not to support it.

However, generating standalone browser apps for your common sites helps to mitigate both these problems. While each instance of the standalone browser uses more memory than a browser tab, with only one site in it garbage collection is much easier to accomplish (and therefore faster), and the memory is instantly reclaimed when the standalone browser terminates. In fact, on G5 systems with more than 2GB of RAM, it helps you actually use that extra memory more effectively: while TenFourFox is a 32-bit application (being a hybrid of Carbon and Cocoa), you'd be running multiple instances of it, all of which have their own 32-bit address space which can be located in that extra RAM you've got on board. Also, separate browser instances become ... multiple processes. That means they preemptively multitask, like Electrolysis content processes would. They could even be scheduled on a different core on multiprocessor Power Macs. That improves their responsiveness substantially, to say nothing of the fact that the substantially reduced amount of browser chrome has dramatically less overhead. Now, standalone browsers also have disadvantages; they lack a lot of the features of a regular browser, including safety features, and they can be more difficult to navigate in because of the reduced interface. But for many sites those are acceptable tradeoffs.

So, without further ado, let's introduce TenFourFoxBox.

TenFourFoxBox is an application that generates site-specific browsers ("foxboxes") for you, running them in private instances of TenFourFox (a la XULRunner). This has been one of my secret internal projects since I got Amazon Music working properly with TenFourFox, so I wanted to use it as a jukebox without dragging down the rest of the browser, and to help beef up the performance of my online coursework site which has a rather heavy implementation and depends greatly on Google Docs and Box. And now you'll get to play with it as well.

Although TenFourFoxBox borrows some code from Prism/WebRunner, mostly the reduced browser chrome, in actual operation it functions somewhat differently. First, TenFourFoxBox isn't itself written in XUL; it's a "native" OS X application that just happens to generate XUL-based applications. Second, for webapps created with Prism (or its companion tool Refractor), it's Prism itself that actually does the running with its own embedded copy of the XUL framework, not Firefox. With TenFourFoxBox, however, foxboxes you create actually run using the copy of TenFourFox you have installed (and yes, the foxboxes will look for and run the correct version for your architecture), just as separate processes, with their own browser chrome and their own application support and cache directory independent of the main browser. The nice thing about that is when you upgrade TenFourFox, you upgrade the browser core in every foxbox on your system all at once, as well as your main browser, because TenFourFox is your main browser, amirite?

The implementation in TenFourFoxBox is also a little different with respect to how data is stored. Foxboxes are driven essentially as independent XULRunner apps, so they have their own storage separate from the browser. Prism allowed this space to be shared, but I don't think that was a good idea, so not only are all foxboxes independent, but by default they operate effectively in "private browsing" mode and clear out cookies and other site data when they quit. By default they also disable autocomplete, improving both privacy and a little bit of performance; you can, of course, change these settings, and override checks sites might do which could detect you're not actually in a regular browser. I also decided to keep a constant unchanging title (regardless of the website you're viewing) so that you can more easily identify it in Exposé.

So, let's see it in action. Here's Bing Maps, in full 1080p on the quad G5, looking for drone landing sites.

And here's what I originally wrote this for, Amazon Music, playing the more or less official album of International Space Year:

(Stupid Amazon. I already have Flood and Junta!)

So now it's time to get this ready for the masses, and what better way than to have you slavering lot mercilessly bang on it? The following bugs/deficiencies are known:

  • The application menu only has "Quit." This is actually Mozilla bug 1181977, and will be fixed in TenFourFox 38.5, after which all the foxboxes will "fix themselves."
  • Localization isn't supported yet, even if you have a localized TenFourFox; most things will still appear in English. It's certainly possible to do, just non-trivial because of TenFourFoxBox's dual nature (we have to localize both the OS X portion and the XUL code it generates, and then figure out how to juggle multi-lingual resources). I'm not likely to do anything with this until the rest of it is stable enough to freeze strings.
  • Although the browser core they run is shared, individual foxboxes have their own private copies of the foxbox support code and chrome which are independent. Thus, when a new TenFourFoxBox comes out, you will need to manually update each of your foxboxes. You can do this in place and overwrite them; it's just somewhat inconvenient.
  • There are probably browser features missing that you'd like. I'm willing to entertain reasonable requests.

Even the manual is delivered as a foxbox, which makes it easy to test on your system. Download it, try it, and post your feedback in the comments. TenFourFox 38.4 or higher is required. This is a beta, so treat it accordingly, with the plan to release it for general consumption a week or so after 38.5 comes out.

Let's do a little inside-the-box thinking with an old idea for a new generation, shall we?

Benjamin KerensaOpenly Thankful

So next week has a certain meaning for millions of Americans that we relate to a story of Indians and Pilgrims gathering to have a meal together. While that story may be distorted from the historical truth, I do think the symbolic holiday we celebrate is important.

That said, I want to name some individuals I am thankful for….



Lukas Blakk

I’m thankful for Lukas for being an excellent mentor to me during her last two years at Mozilla. Lukas helped me learn skills and gave me opportunities that many Mozillians would not otherwise have. I’m very grateful for her mentoring, teaching, and passion for helping others, especially those who have less opportunity.

Jeff Beatty

I’m especially thankful for Jeff. This year, out of the blue, he came to me and offered to have his university students support an open source project I launched, and this has helped us grow our l10n community. I’m also grateful for Jeff’s overall thoughtfulness and for being able to go to him over the last couple of years for advice and feedback.

Majken Connor

I’m thankful for Majken. She is always a very friendly person who is there to welcome people to the Mozilla Community, but I also appreciate how outspoken she is. She is willing to share opinions and beliefs that add value to conversations and help us think outside the box. No matter how busy she is, she has been a constant in the Mozilla Project, always there to lend advice or listen.

Emma Irwin

I’m thankful for Emma. She does something much different than teaching us how to lead or build community: she teaches us how to participate better and how to build better participation into open source projects. I appreciate her efforts in teaching future generations about the open web and being such a great advocate for participation.

Stormy Peters

I’m thankful for Stormy. She has always been a great leader, and it’s been great to work with her on evangelism and event stuff at Mozilla. But even more important than all the work she did at Mozilla, I appreciate all the work she does with various open source nonprofits and with the committees and boards she serves on or advises, work you do not hear about because she does it for the impact.


Jonathan Riddell

I’m thankful for Jonathan. He has done a lot for Ubuntu, Kubuntu, KDE and the greater open source ecosystem over the years. Jonathan has been a devoted open source advocate, always standing up for what is right and unafraid to share his opinion even if it meant disappointing others.

Elizabeth Krumbach Joseph

I’m thankful for Elizabeth. She has been a good friend, mentor and listener for years now, and does so much more than she gets credit for. Elizabeth is welcoming in the multiple open source projects she is involved in, and if you contribute to any of those projects you know who she is because of the work she does.


Paolo Rotolo

I’m thankful for Paolo, our lead Android developer, who leads our Android development efforts and is a driving force in moving forward the vision behind Glucosio and helping people around the world. I enjoy near-daily, if not multiple-times-a-day, conversations with him about the technical bits and the big picture.

The Core Team + Contributors

I’m very thankful for everyone on the core team and all of our contributors at Glucosio. Without all of you, we would not be what we are today: a growing open source project doing amazing work to bring positive change to people with diabetes.


Leslie Hawthorne

I’m thankful for Leslie. She is always very helpful with advice on all things open source, and especially open source non-profits. I think she helps us all be better human beings. She really is a force for good and perhaps the best friend you can have in open source.

Jono Bacon

I’m thankful for Jono. While we often disagree on things, he always has very useful feedback and has an ocean of community management and leadership experience. I also appreciate Jono’s no-bullshit approach to discussions. While it can be rough for some, the cut-to-the-chase approach is sometimes a good thing.

Christie Koehler

I’m thankful for Christie. She has been a great listener over the years I have known her and has been very supportive of community at Mozilla, as well as inclusion & diversity efforts. Christie is a teacher but also an organizer, and in addition to all the things I am thankful for that she did at Mozilla, I also appreciate her efforts locally with Stumptown Syndicate.

Air MozillaWebdev Beer and Tell: November 2015

Webdev Beer and Tell: November 2015 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Mozilla Addons BlogSigning API now available

Over the years, addons.mozilla.org has had many APIs. These are used by Firefox and other clients to provide add-on listings, blocklists, and other features. But there hasn’t really been an API that developers can interact with. As part of ongoing improvements to the site, we’ve started focusing on producing APIs for add-on developers as well.

Our first one aims to make add-on signing a little easier for developers. This API enables you to upload an XPI and get back the signed add-on if it passes all the validation checks.

To use this API, log in to addons.mozilla.org and go to Tools > Manage API Keys. Then agree to the terms and fetch an API key and secret to use in subsequent API calls.


Once you’ve done that, generate authorization tokens and use the documented API to sign your add-on.
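To give a rough idea of what generating such an authorization token might look like, here is a minimal Node.js sketch using the third-party jsonwebtoken package; the key, secret and expiry values below are placeholders, not official documentation, so consult the API docs for the exact claims expected:

// Sketch only: issue a short-lived JWT from your AMO API key and secret.
// The "jsonwebtoken" package and the placeholder values are assumptions.
var jwt = require('jsonwebtoken');

var apiKey = 'user:12345:67';        // placeholder: your JWT issuer from the Manage API Keys page
var apiSecret = 'your-api-secret';   // placeholder: the matching secret

var issuedAt = Math.floor(Date.now() / 1000);
var token = jwt.sign({
  iss: apiKey,                       // who is making the request
  jti: String(Math.random()),        // unique id so the token cannot be replayed
  iat: issuedAt,                     // issued-at time, in seconds
  exp: issuedAt + 60                 // keep the token short-lived
}, apiSecret, { algorithm: 'HS256' });

console.log('Authorization: JWT ' + token);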

The documented examples use curl to interact with the API. For example:

curl https://addons.mozilla.org/api/v3/addons/my-addon/versions/1.0/ \
    -XPUT --form 'upload=@build/my-addon.xpi' \
    -H 'Authorization: JWT your-jwt-token'

This is just the first of the APIs that we hope to add to the site and a path that we hope will lead to increased functionality throughout the add-ons ecosystem. This feature is under development, so we are keen to hear feedback or any issues.

John O'DuinnThe real cost of an office

Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons

The shift from “building your own datacenter” to “using the cloud” revolutionized how companies viewed internal infrastructure, and significantly reduced the barrier to starting your own fast-growth, global-scale company. Suddenly, you could have instant, reliable, global-scale infrastructure.

(Personally, I dislike the term “cloud” but it’s the easiest vendor-neutral term I know for describing essential infrastructure running on rent-by-the-hour Amazon AWS, Google GCE, Microsoft Azure and others…)

Like any new major change, “the cloud” went through an uphill acceptance curve with resistance from established nay-sayers. Meanwhile, smaller companies with no practical alternatives jumped in with both feet and found that “the cloud” worked just fine. And scaled better. And was cheaper to run. And was faster to setup, so the opportunity-cost was significantly reduced.

Today, of course, “using the cloud” for your infrastructure has crossed the chasm. It is the default. Today, if you were starting a new company, and went looking for funding to build your own custom datacenter, you’d need to explain why you were not “using the cloud”. Deciding to have your own physical data center involves one-time-setup costs as well as ongoing recurring operational costs. Similarly, deciding to have a physical office involves one-time-setup costs as well as ongoing recurring operational costs.

Rethinking infrastructure from the fixed costs of servers and datacenters to rented by the hour “in the cloud” is an industry game changer. Similarly, rethinking the other expensive part of a company’s infrastructure — the physical office — is an industry game changer.

Just like physical datacenters, deciding to setup an office is an expensive decision which complicates, not liberates, the ongoing day-to-day life of your company.

The reality of having an office

It is easy to skip past the “Do we really need an office?” question – and plunge into the mechanics, without first thinking through some company-threatening questions.

What city, and which neighborhood in the city, is the best location for your company office? Sometimes the answer is “near to where the CEO lives”, or “near the offices of our lead VCs”. However, this should include answers to questions like “where will we find most of the talent (people) we plan to hire?” and “where will most of our customers be?”.

What size should your office be? This requires thinking through your hiring plans — not just for today, but also for the duration of the lease — typically 3–5–10 years. The consequences of this decision may be even longer, given how some people do not like relocating! When starting a company, it is very tricky to accurately predict the answers to these questions for multiple years into the future.

Business plans change. Technologies change. Market needs and finances change. Product scope changes. Companies pivot. Brick-and-mortar buildings (usually) stay where they are.

If you convince yourself that your company does need a physical office, setting up and running an office is “non-trivial”. You quickly get distracted by the expensive logistics and operational mechanics of a physical building – instead of keeping focus on people and the shipping product.

You need to negotiate, sign and pay leases. Debate offices-with-doors vs open-plan — and if open-plan, do you want library-quiet, or bull-pen with cross-chatter and music? Negotiate seating arrangements — including the who-gets-a-window-view debate. Construct the actual office-space, bathrooms and kitchens. Pick, buy and install desks, chairs, ping-pong tables and fridges. Set up wifi, security doorbadge systems, printers, phones. Hire staff who are focused on running the physical office, not focused on your product. The list goes on and on. All of these take time, money and most importantly focus. This distracts humans away from the entire point of the company — hiring humans to create and ship product to earn money. And the distraction does not end once the office is built — maintaining and running a physical office takes ongoing time, money and focus.

After your office is up-and-running, you discover the impact this new office has on hiring. You pay to relocate people who would be great additions to your company, but do not live near your new office. You are disappointed by good people turning down job offers because of the location. You have debates about “hiring the best person for the job” vs “hiring the best person for the job who is willing to relocate”. You have to limit hiring because you don’t have a spare desk available. You need to sublease a part of your new office space because growth plans changed when revenue didn’t grow as well as hoped – and now you have unused, idle office space costing you money every month.

The benefits of no office

You dedicate more time, money and focus on the people, and the shipping product — simply by avoiding the financial costs, lead-time-delays and focus-distractions of setting up a physical office.

Phrased another way: Distributed teams let you focus the company time and money where it is most important — on the people and the product. After all, it doesn’t matter how fancy your office is unless you have a product that people want to use.

Having no office lets you sidestep a few potentially serious and distracting ongoing problems:

You don’t need to worry about signing a lease for a space that is too small (or too large) for the planned growth of the company. You avoid adding a large recurring cost (a lease) to the company books, which impacts your company’s financial burn rate.

You don’t need to worry if the location of the office helps or hinders future hiring plans. You don’t need to worry about good people turning down your job offers simply because of the office location. You can hire from a significantly larger pool of candidates, so you can hire better and faster than all-in-one-location competitors. For more on this, see .

Even larger companies like Aetna, with established offices, have been encouraging work-from-home since 2005 – because they can hire more people and also because of the money savings on real estate. Last I heard, Aetna was saving $78 million a year by having people work from home. Each year. No wonder Dell and others are now doing the same.

You sidestep human distractions about office layout.

You don’t need to worry about business continuity if the office is closed for a while.

Sidestepping all these distractions helps you (and everyone else in the company) focus attention and money on the people and the product you are building and shipping. This is a competitive advantage over all-in-one-office companies. Important stuff to keep in mind when you ask yourself “Do we really need an office?”

(Versions of this post are on medium.com and also in the latest early release of my “Distributed” book.)

(Photo credit: Woodwards building Vancouver demolition 2 by Tannoy | CC BY-SA 3.0 via Wikimedia Commons)

Daniel PocockDatabases of Muslims and homosexuals?

One US presidential candidate has said a lot recently, but the comments about making a database of Muslims may qualify as the most extreme.

Of course, if he really wanted to, somebody with this mindset could find all the Muslims anyway. A quick and easy solution would involve tracing all the mobile phone signals around mosques on a Friday. Mr would-be President could compel Facebook and other social networks to disclose lists of users who identify as Muslim.

Databases are a dangerous side-effect of gay marriage

In 2014 there was significant discussion about Brendan Eich's donation to the campaign against gay marriage.

One fact that never ranked very highly in the debate at the time is that not all gay people actually support gay marriage. Even where these marriages are permitted, not everybody who can marry now is choosing to do so.

The reasons for this are varied, but one key point that has often been missed is that there are two routes to marriage equality: one involves permitting gay couples to visit the register office and fill in a form just as other couples do. The other route to equality is to remove all the legal artifacts around marriage altogether.

When the government does issue a marriage certificate, it is not long before other organizations start asking for confirmation of the marriage. Everybody from banks to letting agents and Facebook wants to know about it. Many companies outsource that data into cloud CRM systems such as Salesforce. Before you know it, there are numerous databases that somebody could mine to make a list of confirmed homosexuals.

Of course, if everybody in the world was going to live happily ever after none of this would be a problem. But the reality is different.

While discrimination, whether against Muslims or homosexuals, is prohibited and can even lead to criminal sanctions in some countries, this attitude is not shared globally. Once gay people have their marriage status documented in the frequent flyer or hotel loyalty program, or in the public part of their Facebook profile, there are various countries where they are going to be at much higher risk of prosecution or persecution. The equality to marry in the US or UK may mean they have less equality when choosing travel destinations.

Those places are not as obscure as you might think: even in Australia, regarded as a civilized and laid-back western democracy, the state of Tasmania fought tooth-and-nail to retain the criminalization of virtually all homosexual conduct until 1997 when the combined actions of the federal government and high court compelled the state to reform. Despite the changes, people with some of the most offensive attitudes are able to achieve and retain a position of significant authority. The same Australian senator who infamously linked gay marriage with bestiality has successfully used his position to set up a Senate inquiry as a platform for conspiracy theories linking Halal certification with terrorism.

There are many ways a database can fall into the wrong hands

Ironically, one of the most valuable lessons about the risk of registering Muslims and homosexuals was an injustice against the very same tea-party supporters a certain presidential candidate is trying to woo. In 2013, it was revealed that IRS employees had started applying a different process to discriminate against groups with “Tea Party” in their name.

It is not hard to imagine other types of rogue or misinformed behavior by people in positions of authority when they are presented with information that they don't actually need about somebody's religion or sexuality.

Beyond this type of rogue behavior by individual officials and departments, there is also the more sinister proposition that somebody truly unpleasant is elected into power and can immediately use things like a Muslim database, surveillance data or the marriage database for a program of systematic discrimination. France had a close shave with this scenario in the 2002 presidential election, when Jean-Marie Le Pen, who has at least six convictions for racism or inciting racial hatred, made it to the final round in a two-candidate run-off with Jacques Chirac.

The best data security

The best way to be safe, wherever you go, both now and in the future, is not to have data about yourself in any database. When filling out forms, think need-to-know. If some company doesn't really need your personal mobile number, your date of birth, your religion or your marriage status, don't give it to them.

Support.Mozilla.OrgWhat’s up with SUMO – 20th November

Hello, SUMO Nation!

Good to see you reading these words again. Thank you for dropping by and being willing to learn more about the most recent goings-on at SUMO.

Welcome, new contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the last week

  • SynergSINE – for his proactive attitude and conversation with the Ivory Coast Mozillians who are interested in participating in SUMO!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Last SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 23rd of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).



Support Forum


  • Firefox for Android
    • Nothing new to report.
  • Firefox for Desktop
    • All quiet on the desktop front.
  • Firefox for iOS
    • We keep getting more users!
  • Firefox OS
    • Guess what… no big news here, either ;-)
All this quiet is good – time to recharge our Moz-batteries and get ready for a busy end-of-year season! We wish you a great weekend and hope to see you around on Monday. Take it easy!

Joel MaherIntroducing the contributors for the MozCI Project

As I previously announced who will be working on Pulse Guardian, the Web Platform Tests Results Explorer, and the  Web Driver Infrastructure projects, I would like to introduce the contributors for the 4th project this quarter, Mozilla CI Tools – Polish and Packaging:

* MikeLing (:mikeling on IRC) –

What interests you in this specific project?

As its documentation describes, Mozilla CI Tools is designed to allow interacting with the various components which compose Mozilla’s Continuous Integration. So, I think getting involved in it can help me learn more about how Treeherder and Mozci work and give me a better understanding of the A-team.

What do you plan to get out of this after 8 weeks?

I’ll keep trying my best to contribute! I hope I can push this project forward with Armen, Alice and other contributors in the future :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I’m a guy who likes to keep challenging myself and trying new stuff.

* Stefan (:F3real on IRC) –

What interests you in this specific project?

I thought it would be a good starting project and help me learn new things.

What do you plan to get out of this after 8 weeks?

Expand my knowledge and meet new people.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I play guitar but I don’t think that’s really interesting.

* Vaibhav Tulsyan (:xenny on IRC) –

What interests you in this specific project?

Continuous Integration, in general, is interesting for me.

What do you plan to get out of this after 8 weeks?

I want to learn how to work efficiently in a team in spite of working remotely, learn how to explore a new code base and some new things about Python, git, hg and Mozilla. Apart from learning, I want to be useful to the community in some way. I hope to contribute to Mozilla for a long term, and I hope that this helps me build a solid foundation.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

One of my hobbies is to create algorithmic problems from real-world situations. I like to think a lot about the purpose of existence, how people think about things/events and what affects their thinking. I like teaching and gaining satisfaction from others’ understanding.


Please join me in welcoming all the contributors to this project and the previously mentioned ones as they have committed to work on a larger project with their free time!

Joel MaherIntroducing a contributor for the WebDriver Infrastructure project

As I previously announced who will be working on Pulse Guardian and the Web Platform Tests Results Explorer, let me introduce who will be working on Web Platform Tests – WebDriver Infrastructure:

* Ravi Shankar (:waffles on IRC) –

What interests you in this specific project?

There are several. Though I love coding, I’m usually more inclined to Python & Rust (so, a “Python project” is what excited me at first). Then there’s my recently-developed interest in networking code (ever since my work on a network-related issue in Servo), and finally, I’m very curious about how we’re establishing the Python-JS communication and emulating user inputs.

What do you plan to get out of this after 8 weeks?

Over the past few months of my (fractional) contributions to Mozilla, I’ve always learned something useful whenever I finish working on a bug/issue. Since this is a somewhat “giant” implementation that requires more time and commitment, I think I’ll learn a great deal of stuff in relatively less time (which is what excites me).

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Well, I juggle, or I (try to) reproduce some random music on my flute (actually, a Bansuri – an Indian flute) when I’m away from my keyboard.


We look forward to working with Ravi over the next 8 weeks.  Please say hi in irc when you see :waffles in channel :)

Joel MaherIntroducing 2 contributors for the Web Platform Tests project

As I previously announced who will be working on Pulse Guardian, let me introduce who will be working on Web Platform Tests – Results Explorer:

* Kalpesh Krishna (:martianwars on irc) –

What interests you in this specific project?

I have been contributing to Mozilla for a couple of months now and was keen on taking up a project on a slightly larger scale. This particular project was recommended to me by Manish Goregaokar. I had worked on a few issues in Servo prior to this and all involved Web Platform Tests in some form. That was the initial motivation. I find this project really interesting as it gives me a chance to help build an interface that will simplify browser comparison so much! This project seems to involve more planning than execution, and that’s another reason that I’m so excited! Besides, I think this would be a good chance to try out some statistics / data visualization ideas I have, though they might be a bit irrelevant to the goal.

What do you plan to get out of this after 8 weeks?

I plan to learn as much as I can, make some great friends, and most importantly make a real sizeable contribution to open source :)

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I love to star gaze. Constellations and Messier objects fascinate me. Given a chance, I would love to let my imagination run wild and draw my own set of constellations! I have an unusual ambition in life. Though a student of Electrical Engineering, I have always wanted to own a chocolate factory (too much Roald Dahl as a child) and have done some research regarding the same. Fingers crossed! I also love to collect Rubiks Cube style puzzles. I make it a point to increase my collection by 3-4 puzzles every semester and learn how to solve them. I’m not fast at any of them, but love solving them!

* Daniel Deutsch

What interests you in this specific project?

I am really interested in getting involved in Web Standards. Also, I am excited to be involved in a project that is bigger than itself–something that spans the Internet and makes it better for everyone (web authors and users).

What do you plan to get out of this after 8 weeks?

As primarily a Rails developer, I am hoping to expand my skill-set. Specifically, I am looking forward to writing some Python and learning more about JavaScript. Also, I am excited to dig deeper into automated testing. Lastly, I think Mozilla does a lot of great work and am excited to help in the effort to drive the web forward with open source contribution.

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

I live in Brooklyn, NY and have terrible taste in music. I like writing long emails, running, and Vim.


We look forward to working with these 2 great hackers over the next 8 weeks.

Joel MaherIntroducing a contributor for the Pulse Guardian project

3 weeks ago we announced the new Quarter of Contribution; today I would like to introduce the participants. Personally, I really enjoy meeting new contributors and learning about them. It is exciting to see interest in all 4 projects. Let me introduce who will be working on Pulse Guardian – Core Hacker:

Mike Yao

What interests you in this specific project?

Python, infrastructure

What do you plan to get out of this after 8 weeks?

Continue to contribute to Mozilla

Are there any interesting facts/hobbies that you would like to share so others can enjoy reading about you?

Cooking/food lover; I was a chef a long time ago. Free software/Open source and Linux changed my mind and career.


I do recall one other eager contributor who might join in late when exams are completed. Meanwhile, enjoy learning a bit about Mike Yao (who was introduced to Mozilla by Mike Ling, who did our first ever Quarter of Contribution).

Mozilla FundraisingOur plan for fundraising A/B testing in 2015

Our end of year (EOY) fundraising campaign is getting started today, so I wanted to write a note about our A/B testing plan and the preparation work that has gone into this so far. Although right now our donation form … Continue reading

Daniel StenbergThis post was not bought

At times I post blog articles that get the view counter to go up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get the more simple offers of adding random ads or “text only information” on specific individual pages on my sites that some SEO person out there figured out could potentially attract audience that search for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud so possibly for a good cause, but still…

This is by no means a "big" blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions revealed are my own – even if of course you may not agree with me and I may make mistakes and be completely wrong at times or even many times. You can rest assured that I made those mistakes on my own and I was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytic scripts, minimizing the privacy intrusions and optimizing the contents: the stuff downloaded from my sites is what your browser needs to render the page. Not heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary and I work on several projects that are dear to me so of course I will show bias to some subjects. I don’t claim to have an objective view on things and I don’t even try to have that. When I write posts here, they come colored by my background and by what I am.

Justin DolskeFoxkeh Dance is back!

That’s right! Everyone’s favorite dancing mascot is back, baby!

Back in 2008, Alex Polvi (of Firefox crop circle fame) departed Mozilla to found his own startup. In one of the most epic farewell emails of all time, he created Foxkeh Dance, a Mozilla flavor of the Internet-classic Hampster Dance site.

Alas, domains expire, and for the last 5 years foxkehdance.com has been the home of a domain squatter hoping to interest you in the usual assortment of spam. But a few weeks ago, I randomly  checked the site, and discovered it was available for registration! So I grabbed the domain, and set about restoring it.

The ever-amazing Archive.org has a cached version of the original 7-year-old site from August 24th 2008… Mostly. It has the HTML, but not the images or background music. Luckily a couple of contemporaneous Mozilla community sites included copies of the animated images, and from that I was able to restore what I believe are original versions. (Update: it seems archive.org is now using these newly-restored images to fill in their incomplete cache. Curious.) While the original embedded “hamster.mp3” file is lost, I remember it as being a straight copy of the Hampster Dance site, and that’s easily available. Of course, the original site used plugins to play sound, so I’ve updated it to use a modern HTML5 <audio> replacement.
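For the curious, plugin-free looping playback can be as small as a few lines; this is only a sketch of what such a replacement might look like, not the actual foxkehdance.com source:

// Hypothetical sketch: loop the background music with the HTML5 audio API.
var music = new Audio('hamster.mp3'); // the track name mentioned above
music.loop = true;                    // keep dancing
music.play();                         // note: some browsers require a user gesture first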

And now Foxkehdance.com is back!

For those unfamiliar, Foxkeh is Mozilla Japan’s cartoon mascot. Recently it’s been the unofficial mascot of the new Tracking Protection feature in Firefox (butt flames and all). I hope we’ll see more of the ‘lil guy in the future!

You may now resume dancing.

Monica ChewDownload files more safely with Firefox 31

Did you know that the estimated cost of malware is hundreds of billions of dollars per year? Even without data loss or identity theft, the time and annoyance spent dealing with infected machines is a significant cost.

Firefox 31 offers improved malware detection. Firefox has integrated Google’s Safe Browsing API for detecting phishing and malware sites since Firefox 2. In 2012 Google expanded their malware detection to include downloaded files and made it available to other browsers. I am happy to report that improved malware detection has landed in Firefox 31, and will have expanded coverage in Firefox 32.

In preliminary testing, this feature cuts the amount of undetected malware by half. That’s a significant user benefit.

What happens when you download malware? Firefox checks URLs associated with the download against a local Safe Browsing blocklist. If the binary is signed, Firefox checks the verified signature against a local allowlist of known good publishers. If no match is found, Firefox 32 and later queries the Safe Browsing service with download metadata (NB: this happens only on Windows, because signature verification APIs to suppress remote lookups are only available on Windows). In case malware is detected, the Download Manager will block access to the downloaded file and remove it from disk, displaying an error in the Downloads Panel.
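As a rough mental model only (hypothetical pseudocode, not Firefox's actual implementation, and every helper name below is invented), the decision flow described above looks something like this:

// Hypothetical sketch of the flow described above; all helpers are invented names.
async function checkDownload(download) {
  // 1. Check the download's URLs against the local Safe Browsing blocklist.
  if (await matchesLocalBlocklist(download.urls)) {
    return blockAndRemove(download);
  }
  // 2. If the binary is signed, check the verified signature against a
  //    local allowlist of known good publishers.
  if (download.signature && await matchesPublisherAllowlist(download.signature)) {
    return allow(download);
  }
  // 3. Otherwise (Firefox 32+, Windows only), ask the remote Safe Browsing
  //    service about the download metadata.
  var verdict = await queryRemoteSafeBrowsing(download.metadata);
  return verdict === 'malware' ? blockAndRemove(download) : allow(download);
}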

How can I turn this feature off? This feature respects the existing Safe Browsing preference for malware detection, so if you’ve already turned that off, there’s nothing further to do. Below is a screenshot of the new, beautiful in-content preferences (Preferences > Security) with all Safe Browsing integration turned off. I strongly recommend against turning off malware detection, but if you decide to do so, keep in mind that phishing detection also relies on Safe Browsing.

Many thanks to Gian-Carlo Pascutto and Paolo Amadini for reviews, and the Google Safe Browsing team for helping keep Firefox users safe and secure!

Monica ChewMaking decisions with limited data

It is challenging but possible to make decisions with limited data. For example, take the rollout saga of public key pinning.

The first implementation of public key pinning included enforcing pinning on addons.mozilla.org. In retrospect, this was a bad decision because it broke the Addons Panel and generated pinning warnings 86% of the time. As it turns out, the pinset was missing some Verisign certificates used by services.addons.mozilla.org, and the pinning enforcement on addons.mozilla.org included subdomains. Having more data lets us avoid bad decisions.

To enable safer rollouts, we implemented a test mode for pinning. In test mode, pinning violations are counted but not enforced. With sufficient telemetry, it is possible to measure how badly sites would break without actually breaking the site.

Due to privacy restrictions in telemetry, we do not collect per-organization pinning violations except for Mozilla sites that are operationally critical to Firefox. This means that it is not possible to distinguish pinning violations for Google domains from Twitter domains, for example. I do not believe that collecting the aggregated number of pinning violations for sites on the Alexa top 10 list constitutes a privacy violation, but I look forward to the day when technologies such as RAPPOR make it easier to collect actionable data in a privacy-preserving way.
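To make the RAPPOR idea concrete, here is a toy sketch of the underlying randomized-response trick (my own illustration, not Mozilla's or Google's implementation): each client randomizes its answer before reporting, so no single report is trustworthy on its own, yet the aggregate rate can still be estimated.

// Toy randomized response: report the truth half the time, a coin flip otherwise.
function randomizedReport(sawPinningViolation) {
  if (Math.random() < 0.5) {
    return sawPinningViolation;   // truthful report
  }
  return Math.random() < 0.5;     // random noise
}

// With truth probability 0.5, the observed rate r relates to the true rate t by
// r = 0.5 * t + 0.25, so the aggregate can be de-noised:
function estimateTrueRate(observedRate) {
  return (observedRate - 0.25) / 0.5;
}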

Fortunately for us, Chrome has already implemented pinning on many high-traffic sites. This is fantastic news, because it means we can import Chrome’s pin list in test mode with relatively high assurance that the pin list won’t break Firefox, since it is already in production in Chrome.

Given sufficient test mode telemetry, we can decide whether to enforce pins instead of just counting violations. If the pinning violation rate is sufficiently low, it is probably safe to promote the pinned domain from test mode to production mode.

Because the current implementation of pinning in Firefox relies on built-in static pinsets and we are unable to count violations per-pinset, it is important to track changes to the pinset file in the dashboard. Fortunately HighStock supports event markers, which somewhat alleviates this problem, and David Keeler also contributed some tooltip code to roughly associate dates with Mercurial revisions. Armed with the timeseries of pinning violation rates and event markers for the dates when we promoted organizations to production mode (or when high-traffic organizations like Dropbox were added in test mode due to a new import from Chromium), we can see whether pinning is working or not.

Telemetry is useful for forensics, but in our case, it is not useful for catching problems as they occur. This limitation is due to several difficulties, which I hope will be overcome by more generalized, comprehensive SSL error-reporting and HPKP:
  • Because pinsets are static and built-in, there is sometimes a 24-hour lag between making a change to a pinset and reaching the next Nightly build.
  • Telemetry information is only sent back once per day, so we are looking at a 2-day delay between making a change and receiving any data back at all.
  • Telemetry dashboards (as accessible from telemetry.js and telemetry.mozilla.org) need about a day to aggregate, which adds another day.
  • Update uptake rates are slow. The median time to update Nightly is around 3 days, getting to 80% takes 10 days or longer.
Due to these latency issues, pinning violation rates take at least a week to stabilize. Thankfully, telemetry is on by default in all pre-release channels as of Firefox 31, which gives us a lot more confidence that the pinning violation rates are representative.

Despite all the caveats and limitations, using these simple tools we were able to successfully roll out pinning to pretty much all sites that we’ve attempted (including AMO, our unlucky canary) as of Firefox 34, and we look forward to expanding coverage.

Thanks for reading, and don’t forget to update your Nightly if you love Mozilla! :)

Air MozillaNovember Privacy Lab - Tracking Protection

November Privacy Lab - Tracking Protection For November's Privacy Lab, Mozilla and Disconnect will provide an overview of and invite feedback on Firefox's newly launched Tracking Protection feature - tracking blocking...

Mozilla Addons BlogAdd-on Compatibility for Firefox 43

Firefox 43 will be released on December 15th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 43 for Developers, so you should also give it a look.




  • This is the first version of Firefox that will enforce signing. Unsigned add-ons won’t install and will be disabled by default. There’s a preference that turns signing enforcement off (xpinstall.signatures.required in about:config), but the current plan is to drop the preference in Firefox 44.

Please let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 43, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 42.

Chris CooperClarification about our “Build and Release Intern - Toronto” position

We’ve had lots of interest already in our advertised internship position, and that’s great. However, many of the applications I’ve looked at won’t pan out because they overlooked a key line in the posting:

*Only local candidates will be considered for this role.*

That’s right, we’re only able to accept interns who are legally able to work in Canada.

The main reason behind this is that all of our potential mentors are in Toronto, and having an engaged, local mentor is one of the crucial determinants of a successful internship. In the past, it was possible for Mozilla to sponsor foreign students to come to Canada for internships, but recent changes to visa and international student programs have made the bureaucratic process (and concomitant costs) a nightmare to manage. Many applicants simply aren’t eligible any more under the new rules either.

I’m not particularly happy about this, but it’s the reality of our intern hiring landscape. Some of our best former interns have come from abroad, and I’ve already seen some impressive resumes this year from international students. Hopefully one of the non-Toronto-based positions will still appeal to them.

Air MozillaLondon Web Components Meetup – 20151119

London Web Components Meetup –  20151119 The London Web Component Meetup hosts talks about 'An e-commerce journey to using Web Components'.

Air MozillaCarto Meetup Paris #4

Carto Meetup Paris #4 4 ème meetup Carto Paris, il y aura 3 présentations (Mind-mapping, carto géographique et carto de réseaux sociaux).

Monica ChewTracking Protection Officially Supported in Firefox 42

Mozilla officially started supporting Tracking Protection in Private Browsing mode with Firefox 42, which launched a couple of weeks ago. Congratulations to everyone who worked on the launch! The onboarding looks awesome and the unified UI is a nice touch, although I have to admit a preference for the original, engineer-designed marketing aesthetics pictured below.

Even outside of Private Browsing mode, you can still take advantage of Tracking Protection by going to about:config and turning on privacy.trackingprotection.enabled. This behavior has been supported for over a year since Firefox 34, so it's great to see Mozilla making this more usable by turning it on in Private Browsing mode.

I hope that Mozilla continues to use its products to challenge the notion that we owe our eyeballs, our computing resources and our entire browsing history to the ad industry, with no questions asked.

Air MozillaWeb QA Weekly Meeting, 19 Nov 2015

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Christian HeilmannDevfest Asia – JSConf Asia closing keynote and Microsoft Meetup

I am currently on a trip in Singapore, Thailand and Sydney for the next 8 days and today I presented at JSConf Asia and a meetup in the Microsoft offices in Singapore.

JSConf Asia closing keynote

The closing keynote of the first day of JSConf Asia covered my worries that we are going slightly overboard in our affection for JavaScript, expecting it to solve every issue. It seems we have forgotten just how versatile a language it is, and that how we use it depends very much on the environment we run it in. The slides are on SlideShare.

I also recorded a screencast of the keynote and published it on YouTube.

Microsoft Meetup

As the audience at the meetup was more mixed, and I was deadly tired, I thought it was a good plan to create a presentation that covers how we can learn JavaScript these days. It explains its use cases, points to resources for easily running a Node and Express server, and talks about Visual Studio Code and how to clean up old and outdated code. The learning JS meetup slides are also on Slideshare.

The screencast is on YouTube.

Air MozillaReps weekly, 19 Nov 2015

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

Chris AtLeeMozFest 2015

I had the privilege of attending MozFest last week. Overall it was a really great experience. I met lots of really wonderful people, and learned about so many really interesting and inspiring projects.

My biggest takeaway from MozFest was how important it is to provide good APIs and data for your systems. You can't predict how somebody else will be able to make use of your data to create something new and wonderful. But if you're not making your data available in a convenient way, nobody can make use of it at all!

It was a really good reminder for me. We generate a lot of data in Release Engineering, but it's not always exposed in a way that's convenient for other people to make use of.

The rest of this post is a summary of various sessions I attended.


Friday night started with a Science Fair. Lots of really interesting stuff here. Some of the projects that stood out for me were:

  • naturebytes - a DIY wildlife camera based on the raspberry pi, with an added bonus of aiding conservation efforts.
  • histropedia - really cool visualizations of time lines, based on data in Wikipedia and Wikidata. This was the first time I'd heard of Wikidata, and the possibilities were very exciting to me! More on this later, as I attended a whole session on Wikidata.
  • Several projects related to the Internet-of-Things (IOT)


On Saturday, the festival started with some keynotes. Surman spoke about how MozFest was a bit chaotic, but this was by design. In a similar way that the web is an open platform that you can use as a platform for building your own ideas, MozFest should be an open platform so you can meet, brainstorm, and work on your ideas. This means it can seem a bit disorganized, but that's a good thing :) You get what you want out of it.

I attended several good sessions on Saturday as well:

  • Ending online tracking. We discussed various methods currently used to track users, such as cookies and fingerprinting, and what can be done to combat these. I learned, or re-learned, about a few interesting Firefox extensions as a result:

    • privacybadger. Similar to Firefox's tracking protection, except it doesn't rely on a central blacklist. Instead, it tries to automatically identify third party domains that are setting cookies, etc. across multiple websites. Once identified, these third party domains are blocked.
    • https everywhere. Makes it easier to use HTTPS by default everywhere.
  • Intro to D3JS. d3js is a JS data visualization library. It's quite powerful, but something I learned is that you're expected to do quite a bit of work up-front to make sure it's showing you the things you want. It's not great as a data exploration library, where you're not sure exactly what the data means, and want to look at it from different points of view. The nvd3 library may be more suitable for first time users.

  • 6 kitchen cases for IOT We discussed the proposed IOT design manifesto briefly, and then split up into small groups to try and design a product, using the principles outlined in the manifesto. Our group was tasked with designing some product that would help connect hospitals with amateur chefs in their local area, to provide meals for patients at the hospital. We ended up designing a "smart cutting board" with a built in display, that would show you your recipes as you prepared them, but also collect data on the frequency of your meal preparations, and what types of foods you were preparing.

    Going through the exercise of evaluating the product with each of the design principles was fun. You could be pretty evil going into this and try and collect all customer data :)


  • How to fight an internet shutdown - we role played how we would react if the internet was suddenly shut down during some political protests. What kind of communications would be effective? What kind of preparation can you have done ahead of time for such an event?

    This session was run by Deji from accessnow. It was really eye opening to see how internet shutdowns happen fairly regularly around the world.

  • Data is beautiful: Introduction to Wikidata. Wikidata is like Wikipedia, but for data. An open database of...stuff. Anybody can edit and query the database. One of the really interesting features of Wikidata is that localization is kind of built-in as part of the design. Each item in the database is assigned an id (prefixed by "Q"). E.g. Q42 is Douglas Adams. The description for each item is simply a table of locale -> localized description. There's no inherent bias towards English, or any other language. The beauty of this is that you can reference the same piece of data from multiple languages, only having to focus on localizing the various descriptions. You can imagine different translations of the same Wikipedia page right now being slightly inconsistent due to each one having to be updated separately. If they could instead reference the data in Wikidata, then there's only one place to update the data, and all the other places that reference that data would automatically benefit from it.

    The query language is quite powerful as well. A simple demonstration was "list all the works of art in the same room in the Louvre as the Mona Lisa."

    It really got me thinking about how powerful open data is. How can we in Release Engineering publish our data so others can build new, interesting and useful tools on top of it?

  • Local web. Various options for purely local web / networks were discussed. There are some interesting mesh network options available; commotion was demo'ed. These kinds of distributions give you file exchange, messaging, email, etc. on a local network that's not necessarily connected to the internet.

Mozilla Release Management TeamFirefox 43 beta3 to beta4

In this beta, many changes to improve the support of GTK+ 3 for GNU/Linux users. Besides that, some changes to increase the stability of Fennec.

  • 31 changesets
  • 39 files changed
  • 743 insertions
  • 283 deletions



List of changesets:

Wes KocherBacked out changeset 9fbc92fa9e4b (Bug 1221855) because I backed out the other half a=backout - 51f0f13e7985
Alessio PlacitelliBug 1213780 - Fix Telemetry reporting repeated hang annotations for Chrome hangs. r=aklotz a=lizzard - 2fd90a7f326e
Alessio PlacitelliBug 1211411 - Limit the number of thread hang stats reported to Telemetry. r=vladan a=lizzard - 59e6a978773f
Alessio PlacitelliBug 1215540 - Fix Telemetry reporting repeated hang annotations for Thread hangs. r=aklotz a=lizzard - 12762fdf5ab6
Alessio PlacitelliBug 1219751 - Change the the depth limit of the thread hangs stack to use the 99th percentile. r=gfritzsche a=lizzard - 6cabb1a43af6
Chris H-CBug 1198196 - rework EVENTLOOP_UI_LAG_EXP_MS to record all lag. r=vladan a=lizzard - ffc3382d3829
Chris H-CBug 1223800 - Accept BHR reports from 50% of beta clients. Up from 1%. r=vladan a=lizzard - aaa5100e2085
Vladan DjericBug 1223800: Fix broken build -- Telemetry on aurora & beta doesn't know about the bug_numbers field yet. a=broken - ec4b13420b71
Bill McCloskeyBug 1218552 - Fix GTK drag-and-drop coordinate scaling on HiDPI displays. r=karlt a=lizzard - 581b3e8f954f
Karl TomlinsonBug 1221855 - test Web Audio memory reporting r=erahm, a=test-only - 93d92b8c2b6c
Karl TomlinsonBug 1221855 - null-check mInputBuffer in SizeOfExcludingThis(). r=padenot, a=lizzard - 294b55e22276
Karl TomlinsonBug 1218552 - make GdkPointToDevicePixels() public to fix build. a=lizzard - 844ff2b4f267
L. David BaronBug 1222783 - Make nsHTMLFramesetFrame::Reflow set firstTime based on what firstTime means. r=roc approval-mozilla-beta=lizzard - 7947f1e4ca76
Jeff GilbertBug 1209612 - Only QueryString with null if supported. r=jmuizelaar, a=lizzard - d720ce07c464
James WillcoxBug 1221228 - Work around busted OpenSL causing hangs/reboots on Android. r=padenot, a=sylvestre - 4af91393a8f8
Andreas PehrsonBug 1103188 - Keep track of capture stop only in gUM stream listener. r=jib a=lizzard - 1540124e58cd
Andreas PehrsonBug 1103188 - Keep track of stopped tracks in gUM stream listener. r=jib a=lizzard - e3fad0bd414e
Jan-Ivar BruaroeyBug 1210852 - do SelectSettings of device capabilities on media thread. r=jib a=lizzard - 1ffe42de58bd
Andreas PehrsonBug 1070216 - Properly manage lifetime of allocated CaptureDevices. r=jib a=lizzard - 98d9576c7d13
Andreas PehrsonBug 1103188 - Always call MediaManager::NotifyFinished/NotifyRemoved on main thread. r=jib a=lizzard - 5ca6857c26e5
Jan HorakBug 1216582 - [gtk3] Scrollbar buttons not drawn correctly. r=karlt a=lizzard - 807e612c17ef
Bas SchoutenBug 1216349: Upload the old valid region as well if our texture host changed. r=nical a=lizzard - 94c40ce2d93b
Andrew ComminosBug 1218008 - Fix progress bar rendering on the Ambiance GTK3 theme. r=karlt a=lizzard - 51585d9e70eb
Jean-Yves AvenardBug 1220033 - Don't use fuzz arithmetic for calculating internal buffered ranges. r=gerald, a=lizzard - f6fa2e5fb632
Karl TomlinsonBug 726483 - remove unnecessary DispatchResized() parameters. r=roc, a=lizzard - 6ceeb10435a8
Karl TomlinsonBug 726483 - avoid DispatchResized() during size-allocate. r=roc, a=lizzard - c134a04010a0
Karl TomlinsonBug 726483 - keep an extra reference to the window. r=roc, a=lizzard - bc7eea62ab83
Robert LongsonBug 1222812 - add a null check in case there is no old style. r=dholbert a=lizzard - d35d09b0b24f
Nathan FroydBug 1217047 - try harder in IsContractIDRegistered to return a reasonable answer; r=bsmedberg,f=yury a=lizzard - c66289e84c50
Karl TomlinsonBug 726483 pass newly allocated runnable to NS_DispatchToCurrentThread() r=roc a=bustage - e4802c73f705
Byron Campen [:bwc]Bug 1218326: Prevent datachannel operations on closed PeerConnections. r=jesup a=lizzard - d8f0412f38f7

Karl DubostApp Shell and Service workers

Google published a very interesting article 📰 about service workers : Instant Loading Web Apps With An Application Shell Architecture. It promotes using service workers for caching the main UI of the appsite, so it gets out of the way when it's time to load the content.

Progressive Web Apps (PWAs) describe how a web app can progressively change with use and user consent to give the user a more native-app-like experience with offline support, push notifications and being installable to the home-screen. They can gain substantial performance benefits thanks to intelligent service worker caching of your UI shell for repeat visits.

They define what they mean:

When we talk about an app’s shell, we mean the minimal HTML, CSS and JavaScript powering the user interface. This should load fast, be cached and once loaded, dynamic content can populate your view.

Indeed this seems an interesting option when the appsite is being used multiple times a day. It becomes less so for a site that you visit only every once in a while. The cache will have been destroyed by the browsing of other sites. Maybe what is missing is the possibility for users to somehow keep a local version of the full UI, the same way that when you install an app on your computer 💻, you are not necessarily forced to update to the next version, but can still access content with it.
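For readers who have not played with the technique, a minimal service worker along these lines might look like the sketch below (my own illustration with hypothetical shell file paths, not the code from Google's article): the shell is cached at install time and served cache-first, so only the content needs the network on repeat visits.

// Minimal app-shell caching sketch; file paths are hypothetical.
var SHELL_CACHE = 'app-shell-v1';
var SHELL_FILES = ['/', '/css/shell.css', '/js/app.js'];

self.addEventListener('install', function (event) {
  // Cache the UI shell once, when the service worker is installed.
  event.waitUntil(
    caches.open(SHELL_CACHE).then(function (cache) {
      return cache.addAll(SHELL_FILES);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Serve from the cache first, fall back to the network for everything else.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});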

Progressive Enhancement

They mentionned progressive enhancement:

While service worker isn’t currently supported by all browsers, the application content shell architecture uses progressive enhancement to ensure everyone can access the content. For example, take our sample project.

but this is progressive enhancement in the context of the support or not of service workers. It is not in the context of a Web site where you separate the notion of UI and content.

I guess it helped me to understand what bugs me in systems like our own webcompat website that we are creating. When we deactivate JavaScript, there is no way the site keeps working. We currently load the shell 🐚 through a "normal HTTP" request and then the content through XHR. No JavaScript, no XHR, no content. Sad panda 🐼. I can see how service workers 👷 could provide a better way of thinking 🤔 about it, but if and only if content comes first.

Content First

As a user, when I'm being given a URI to something, I want to get access to the content first. That is the thing I'm interested in. The UI and features come second. The content is the first-class citizen. I don't mind at all having non-styled text as long as I can read it right away, and then if the UI 🏗 starts appearing afterwards (at least on first load), it's all jolly good 🎉. And given the fact that service workers will come to help with caching the UI elements, when going to the second URI there should not be any delay anymore.

To think more about it.

PS: no emojis have been abused during the writing of this blog post.


Emma IrwinThe Journey Continues – Mozlando is Coming!

It’s the most wonderful time of the year!  The ‘Mozilla Coincidental Work Week’ brings everyone at Mozilla together in the same city, at the same time, for the opportunity of collaboration – this time in Orlando, Florida (Dec 7 – 11)!

‘Mozlando’ is the next stop on our Participation Cohort’s journey – a perfect environment for goal-setting focused on building high-impact participation opportunities with product teams. Truly – a beautiful opportunity to invest in, and with, each other.


Over 100 volunteers will have the opportunity to work directly with teams, helping design and strengthen goals which in many (if not most) cases include Participation. For those invited by the Participation Team, we will, of course, be dedicating ourselves to that focus.


We have three distinct volunteer groups attending Orlando:

  1. Those invited by the Participation Team.
  2. Those invited by another functional area, but who are also part of the Participation Cohort.
  3. Those who were invited by another functional area, but currently have no Participation connection.
    1. Subset: those in this group who may, informally, have Participation goals in their work.

We will reach out with offers of 1:1 coaching for everyone in groups 1 & 2. And for the subset of the 3rd group, we will reserve blocks of time for those interested in Participation.

The coaching is even more important this time around, recognizing that connecting volunteers with project goals is a critical step toward bringing sustained strategic advantage to Mozilla. We are asking our cohort to research and consider the following:

  1. What are my participation goals for 2016?
  2. What are the goals in 2016 of the product team I will be working with?
  3. How do these align with my own goals for 2016?  What adjustments do I need to make?  What questions do I need to ask?
  4. How can I share what I learn, and bring others in who want to contribute to the same area of the project?

CC by-nc-sa 2.0 by Christos Bacharakis

At the heart of everything, of course, are people: why we’re here, why we care, where we envision we can go individually and with each other. I’m looking forward to all of it!

For those who think of Orlando as ‘Disney’ and for those who think of Orlando as ‘Space’, I give you an image for everyone: ‘Mickey Mouse on Mercury’, CC by 2.0, NASA Goddard Space Flight Center.

Feature Image Credit:  Nasa on The Commons


Chris CooperWelcome back, Mihai!

Mr. Kotter. This is *not* Mihai.

I’ve been remiss in (re)introducing our latest hire in release engineering here at Mozilla.

Mihai Tabara is a two-time former intern who joins us again, now in a full-time capacity, after a stint as a release engineer at Hortonworks. He’s in Toronto this week with some other members of our team to sprint on various aspects of release promotion.

After a long hiring drought for releng, it’s great to be able to welcome someone new to the team, and even better to be able to welcome someone back. Welcome, Mihai!

Air MozillaThe Joy of Coding - Episode 35

The Joy of Coding - Episode 35 mconley livehacks on real Firefox bugs while thinking aloud.

Daniel PocockImproving DruCall and JSCommunicator user interface

DruCall is one of the easiest ways to get up and running with WebRTC voice and video calling on your own web site or blog. It is based on 100% open source and 100% open standards - no binary browser plugins and no lock-in to a specific service provider or vendor.

On Debian or Ubuntu, just running a command such as

# apt-get install -t jessie-backports drupal7-mod-drucall

will install Drupal, Apache, MySQL, JSCommunicator, JsSIP and all the other JavaScript library packages and module dependencies for DruCall itself.

The user interface

Most of my experience is in server-side development, including things like the powerful SIP over WebSocket implementation in the reSIProcate SIP proxy repro.

In creating DruCall, I have simply concentrated on those areas related to configuring and bringing up the WebSocket connection and creating the authentication tokens for the call.

Those things provide a firm foundation for the module, but it would be nice to improve the way it is presented and optimize the integration with other Drupal features. This is where the projects (both DruCall and JSCommunicator) would really benefit from feedback and contributions from people who know Drupal and web design in much more detail.

Benefits for collaboration

If anybody wants to collaborate on either or both of these projects, I'd be happy to offer access to a pre-configured SIP WebSocket server in my lab for more convenient testing. The DruCall source code is a Drupal.org hosted project and the JSCommunicator source code is on Github.

When you get to the stage where you want to run your own SIP WebSocket server as well then free community support can also be provided through the repro-user mailing list. The free, online RTC Quick Start Guide gives a very comprehensive overview of everything you need to do to run your own WebRTC SIP infrastructure.

Soledad PenadesOn Loop 2015

I was invited to join a panel about Open Source and Music at Loop, a slightly unusual (for my “standards”) event. It wasn’t a conference per se, although there were talks. Most of the sessions were panels and workshops; there were very few “individual” talk tracks. Lots of demos, unusual hardware to play with in the hall, a relaxed atmosphere, and very little commercialism—really cool!

Before I agreed to join them, I spoke to Juanpe Bolivar, the host of my panel, and made sure he was aware of why I didn’t actually want to join a panel, because I had been in a few so far and they were always horrendous due to the power dynamics in place. I explained all my concerns to him, and suggested tons of ideas to make things better, and he listened and put them in practice! So that was really good, and made me feel good about the event. It also helped that I knew some people who work for Ableton or who were connected to them, so I trusted them. Also they mentioned the code of conduct early on and they mentioned it during the opening event as well—with the room full of people.

Their organisation for booking travel and accommodation was super great as well: they helped me be time efficient by booking the most convenient flights, and the hotel they reserved was very good! Which I super greatly appreciated after having been travelling so much recently… the last thing you want is being placed in a crappy hotel!

When I was waiting for the flight to Berlin I noticed that James Holden was on the same flight too, because he was joining Loop as well. James Holden! OK maybe you don’t know him, but he’s a quite popular DJ who’s also an author/producer and instigator of various experimental acts, and also really chill and loves to share how he does things and what his process is. So he was there in front of me eating a croissant, and of course I would NOT tell him anything because eating a croissant is one of life’s sacred moments. You don’t want to interrupt anyone when they’re eating a croissant. It just breaks the magic and everything gets awkward, with pieces of pastry going all over. No, just don’t do it.

So I didn’t say anything.

But when we landed I got welcomed by a representative from Loop. She was very nice and told me we should wait until we met James and his colleague Camilo Tirado and then we would head out to get a taxi to our hotels. So I had a chance to actually speak to James! I said “hi” like a shy child, and then I told him I had seen him play at a long-closed club in London, many years ago, and he said something like “Ahh yeah when we were young!”, and asked me what I did! So yeah, exactly the down to earth person I expected. Camilo was also super nice, and I got to talk to him later on about musical composition, how you play Indian music, universities and schools, etc.

This was just a bit of what was to happen during the event: you would be listening to some artist talk about their process and then it was just very natural to come and talk to them afterwards, and they would also ask what you were doing. The whole event was set up with the goal of getting artists to make connections and not work alone, as the event premise is that making music has turned into a very solitary act nowadays and we spend so much time in front of our computer screens in contrast to playing with other artists, etc. It was a bit funny for me as my “main job” is not as a musician, but I’m “enabling” people’s music creations on the web and also making my own music from time to time, so it was interesting to see that they were really accepting of my ‘hybrid’ situation, and very excited about the notion of me enabling other people on the web, whereas generally people in tech are way more condescending and exclusivist (“oh, you’re not a real developer!”, etc).

As you can see I was semi unconsciously trying to extrapolate this to our “industry”; my brain was making comparisons all the time. I noticed little things like:

  • the drinks at the bar were not free, and no one batted an eyelid; they just paid for them and also there were zero incidents with drunkards harassing me. Correlation? Causation?
  • there was no t-shirt for the event, the only event themed t-shirts were worn by event people
  • picking the swag bag was optional, they didn’t give it to you automatically. And the bag essentially just had a leaflet with the program and a notebook.

The panel

After the opening, Juanpe brought us to dinner to a nice Vietnamese place so we could get to know each other’s background a little bit more before the panel happened. I hadn’t met my co-panelists before, and I was a bit scared that they would be “more open source than ye” kind of people, as they essentially worked in Linux Audio stuff, but they were excellent people and really easy to get along with. Soon my concerns evaporated.

For reference, they were:

  • Gianfranco Ceccolini, he works on a programmable pedal device called the MOD: it’s a device which has an embedded computer running Linux, and you can download effects and install them on it. Their business model consists in providing precompiled binaries, so they are convenient for musicians who just want to get music done
  • Marije Baalman, she is an artist and also developer for STEIM, a company that builds custom instruments and stuff for artists. They use Super Collider and Linux, and she was also involved in the Linux Audio conference.
  • Paul Davis, he’s the lead developer of Ardour which is a very popular open source audio workstation (think Garage Band, but free), and also JACK which enables you to pipe and control audio in your system with no (or very low) latency (like CoreAudio etc, but again, free, and multiplatform). He also happens to be the 2nd employee Amazon ever hired so he’s been in the tech industry for a while too! 😉

We were asked to prepare a 5 minute self-introduction for the panel, here is mine. People seemed to like my succinctness and quick slide-changing game! Also I brought them many tracking memories, judging by the tweets, so I’m happy about that :-)

I think the panel went quite well, especially compared to previous panels I’d been on before! We all had the chance to talk and no one super monopolised the time. The only minor nit was that we had to share hand microphones, so it was hard to get ‘impromptu’ interventions; it felt a bit mechanical. Although you could consider it a good thing too, because it meant that no one would talk over you when you had the microphone… so my thoughts are a bit ambivalent in this sense!

Of course we ran out of time and didn’t have a chance to discuss many of the ideas that our moderator proposed beforehand (and which we augmented with other ideas), but I think we covered a bunch of interesting topics such as why we’ll start seeing more audio stuff happening on the web soon, why distribution and convenience is how open source can win users’ hearts rather than just open source per se, what was our setup for making music (answering “just ViM” made me giggle a bit), and then some more “boring” questions such as “which license you use”.

Someone in the audience asked how to get started in open source if you are not a developer, and Paul suggested to “write docs”, but I sent the ball back to “the developers” and said that they need to provide an entry point, a placeholder, so people at least know *where to put* the docs, and also that it’s not just about developing or writing docs; making a screencast or a tutorial about a thing you found on the internet and like a lot can help in making it better. Or translating existing docs can make a positive impact as well—not everyone has the privilege of knowing more than one language.

People also asked what they should do if they worked on closed source software; should they open it up or…? And where should they start? I said if nothing else, make the file format description open, so users are not doomed if the makers stop supporting the software. Similar questions arose regarding music making software “in the cloud”; my position is: if you can’t export it, just don’t use it. You don’t want your music to disappear when a startup goes out of business or gets acquired.

After the panel and during the rest of the event, random people would come and tell me they loved it and found the discussion very interesting and loved my optimism and enthusiasm because I had given them lots of ideas (!). I obviously have a skewed vision of myself and thought I had been a bit harsh and pessimistic about the state of audio in Linux, and about the JACK daemon not playing nice with the rest of things in my system when I was running Linux, so I apologised to Paul, but he said that it had been great and someone had to “tell it like it was”, and despite that, I had been really uplifting! (!)

Well then…!

Music on the web

I also spoke to a number of people who were interested in various aspects of Web + Audio + MIDI.

Someone from a hardware making company said they are su-per-in-te-res-ted in Web MIDI support coming to Firefox, and would maybe even want to contribute code, but the last time they looked into the bug, the WebIDL part wasn’t done yet, so they didn’t know where they could contribute with their knowledge (they know how to deal with audio code, but not browser code). The importance of placeholders, again.

Audio software writers were concerned about performance and how to extract the maximum ‘audio juice’ out of a browser running Web Audio code. So Audio Worklets (the latest working name for the concept of “better than ScriptProcessorNodes” custom processors, née AudioWorkers) will come in handy here—same for WebAssembly and SIMD. Yay tying everything together!

When asked about this during the panel (“oh but Web Audio can’t run the same thing that I can on my native app!”) I suggested everyone look back to 2005 when Google Docs was starting: people were joking and asking “but WHO would want to run Office in the browser?! HA! HA! HA!”… and now pretty much everyone is moving to “the cloud”. So I hinted that perhaps we were going towards a hybrid future, where most people would just choose the web option because it was easy to access (no installs!) and convenient, and if they could export their data (using an open format), then they could continue working locally with their favourite native app.

In contrast, music writers were intrigued and excited about the notion of putting things online and having your audience interact with it. I didn’t hear them complain much about performance, and there were also lots of talks on limitations fostering creativity rather than blocking it. As developers we so often are blinded by the desire to perfect our tools that we never actually get to do anything with the tools!

Demos, takeaways and ideas

As I said there were lots of demos and interesting takeaways. I liked this from the opening keynote: “Successes point to the past, failures point to the future” (as in, a success is something you’ve done, and that’s it, but failing is something that didn’t work out, and gave you information on how to proceed in the future). And also: “if all your experiments are a success, perhaps you’re not trying the right things”.

Here are some vines from panels I attended:

The first one is Jono Brandel demonstrating Patatap, which is running in the browser and the animations are “powered by” tween.js!

Then there’s Leafcutter John demonstrating how his light-based custom instrument works, by flashing two lights at an array of light sensors:

The final keynote was the only commercially-strong content in the event, and it was the first time I saw Ableton Live on a big screen. They showed a preview of the new version of Ableton Live as an exclusive (though they didn’t specifically say “do not tweet”, it was subtly implied). It had newer sampling abilities, timestretching, better flow…

The most interesting thing to me was when they showed a new protocol they had devised to sync various music apps playing live… which is one of the ideas I was exploring a while back with Firefox OS phones and p2p communications in The Disconnected Ensemble.

I normally don’t come back from a conference this excited, but here are just a few of the things I heard about during the event that I want to look into:

  • SuperCollider
  • Sonic PI
  • Gibber and Gibberish
  • using the webcam with motion detection in place of a ‘grid of light sensors’ (like Leafcutter John… but with less soldering)
  • the Leap Motion as an instrument (inspired by what Rebecca Fiebrink  showed with her AI-learning based instrument builder)
  • exposing experiments into the window global object so they can be scripted/augmented with bookmarklets or WebExtensions
  • also providing an interactive demo / playground page for some of my open source libraries
  • and clowncore! (or perhaps I don’t want to look into that, actually).

Conclusion: if you are interested in music making, and can attend next year, do so!


QMOFirefox 43.0 Beta 7 Testday, November 27th

Hi, mozillians! On Friday, November 27th, we will host a new Testday for Firefox 43.0 Beta 7; I bet you did not see this coming 😀 We will have fun testing Hello (I encourage you to engage our moderators for 1 on 1 calls) and Migrations (give me my data back, ugly browsers!). If you want to find out more information, visit this etherpad.

You don’t need testing experience to take part in the testday so feel free to join the #qa IRC channel and the moderators will help if you have any questions.

This message will disappear in 3, 2, 1…well, just kiddin :) See you next Friday!

David LawrenceHappy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1221998] Make sure localconfig is generated deterministically
  • [1219767] Explicitly load extensions at Bugzilla->login
  • [1221423] unable to impersonate users when 2fa is enabled
  • [1181637] Update Req Opening Process (Cost Center list on 2015-07-08)
  • [1223495] String handling bug in form.creative on BMO
  • [1223669] don’t scroll to the top of the page when clicking on the resolution buttons
  • [1209625] MozReview API Keys should use a more specific error message
  • [1223590] Unable to login to bugzilla via login to github (fresh bugzilla account)
  • [1224620] Update VP list in Recruiting Product
  • [1187429] Remove the WebOps Request Form
  • [1225249] module subtitles should be the same size as other text on the page (13px)

discuss these changes on mozilla.tools.bmo.

Chris H-CSingle-API Firefox Telemetry Plots: telemetry-wrapper.js

Say you’ve read my earlier post on Firefox Telemetry and want to make your own plots like https://telemetry.mozilla.org does. Well, like almost everything Mozilla does, that website is open source, so we can just copy what it does… except it’s a little convoluted. It needs to support a range of requirements and configurations, after all.

But your needs are just yours, so wouldn’t it be nice if there were something a little more direct?

Enter telemetry-wrapper.js. Simply include it and its dependencies in your HTML and then call it like so:

TelemetryWrapper.go(params, element);

Then ‘element’ will suddenly (after a few seconds to collate and render the data) contain one or more plots corresponding to ‘params’.

Want to see GC_MS compared by e10sEnabled setting on Nightly 44?

TelemetryWrapper.go({
  channel: "nightly",
  version: "44",
  metric: "GC_MS",
  compare: "e10sEnabled",
}, document.body);

(e10s is looking like a win on this metric, though part of the sample population self-selected so we can’t be sure. Await blog posts from other quarters on A/B tests we’re conducting.)

How about the top three plugins activated in Firefox 42?

TelemetryWrapper.go({
  channel: "release",
  version: "42",
  keyLimit: 3,
}, document.body);

(no one is surprised that the top one is Flash and that most people only use it the once. But googletalk has an odd shape to it, being activated exactly twice by clients more often than any other frequency…)

The technical details are in the README. Use and reuse it to ask and answer questions about Firefox Telemetry data!


Air MozillaSam Piggott – The Dojo Story - Swift LDN - 20151116

Sam Piggott – The Dojo Story - Swift LDN - 20151116 Sam Piggott presents his experience in releasing The Dojo app on a tight schedule. This talk has been given as part of the Swift LDN...

Air MozillaRadek Pietruszewski – Statically-typed Swifty APIs - Swift LDN - 20151116

Radek Pietruszewski – Statically-typed Swifty APIs - Swift LDN - 20151116 Radek describes his experience and process of building a statically typed API in Swift. This talk has been given as part of the Swift LDN...

Air MozillaEmily Toop – WKWebView made me do it! – Swift LDN – 20151116

Emily Toop – WKWebView made me do it! – Swift LDN – 20151116 Emily Toop speaks about the hacks and tricks the Fennec iOS team went through to produce Firefox for iOS using WKWebView. This talk has been...

Christian HeilmannMeetup in London: why is Windows not your platform of choice

This Thursday, my colleagues Mike Harsh and Keith Rowe (@krow) from Microsoft’s Windows and Devices Group invite you to the Square Pig in London for some drinks and a chat. These two program managers are leading efforts to make Windows-based machines a better place for web development.

I’ve put up a small web site with the info of the meetup and there’s also a Lanyrd page. Many thanks also to London Webstandards for banging the drum.

Whilst I am not affiliated with this group and I can’t be there as I am on my way to JSConf Asia to present, I’d love to see a lot of people go and talk to them. This is a genuine offer to improve what Windows has for web developers and I already gave them quite a bit of feedback on the matter (I am a Mac user…).

I’ve been worried about our Mac fixation as web developers for a while. We preach about supporting all platforms as “it is the web” but a lot of our tooling and best practices are very Mac/Command Line centric.

I know this is a bit last minute, but as it is with London pubs, you have to spring a grand to get the room for the evening, so please show up and at least make sure this expense ends up as food and drinks inside people Microsoft can learn from.

Emily DunhamInstalling Rust without root


I just got a good question from a friend on IRC: “Should I ask my university’s administration to install Rust on our shared servers?” The answer is “you don’t have to”.

Pick one of the two following sets of directions. I’d recommend using Multirust, because it automatically checks the packages it downloads and lets you switch between Rust versions trivially.


Andy McKayA year in cycling

When I started cycling into the office this year, I said I would probably end in November or December when it got too cold and wet.

Last night was really cold and windy, and I had so many near misses with cars, pedestrians and signs. The worst part was, as usual, the Second Narrows bridge, which was really, really windy. A bus hit a puddle on the bridge and the wall of water went over the barricade and hit me. Coming off the bridge, the ramp is covered in wet, slippery, composting leaves, and then you are trying to avoid that unlit pedestrian, in the dark, on skinny road tires, with frozen fingers, in driving rain....

So I'm done.

I cycled from Deep Cove to the office several days a week, for about 9 months of the year. Not too shabby. The real difference this year was training for the Fondo, which meant that in the summer, when I cycled five days a week to the office (that's 170 km a week), I wasn't that tired but wanted to do more and started detouring over the Lions Gate bridge. I hit over 300 km one week.

According to Strava I cycled 3,842 km over 1712h this year. That's not too shabby. Next year I'm aiming for two Fondos. But for now it's time to go to the gym more and do more curling and snowboarding.

Nick Cameronconcat_idents! and macros in ident position

In the next few blog posts I want to cover some areas for improvement for Rust macros. I'll drill down into one topic at a time. I'll propose solutions, but not at RFC levels of detail. I hope to get lots of early feedback this way, rather than doing all the design myself and having a 'big-bang' RFC. A lot of the macro issues are inter-related, so some of the details in early areas won't get fleshed out until later.

I'm going to start with concatenating idents in macros and using macros as idents. This is a fairly small thing, but it is an irritating one for many macro authors. The problem space has been explored already; see the references section at the end for links to issues and a previous RFC.


It is often desired to merge two (or more) identifiers to make a new one in a macro. For example, a macro might take an identifier and create two functions - e.g., foo and foo_mut from foo. This is related to the issue of creating new identifiers in macros.

Currently we support this by using the concat_idents macro. There are two main problems with it - it is not very smart with hygiene, and it is not very useful since macros can't be used in ident position; so although it can synthesise idents, these can only be used as expressions. E.g., although we could access variables called foo and foo_mut given foo, we can't generate functions with these names or even call functions with those names.

The hygiene issue is that concat_idents creates a fresh ident. Essentially, the ident is only in the scope of the macro which created it. Generally, you want to inherit the hygiene context from one of the source idents.

Proposed solution

  • deprecate concat_idents! - without proper hygiene support, it is not suitable for stable use,

  • allow macros in ident position (see below for details),

  • provide libraries for manipulating hygiene contexts in procedural macros (see later blog posts),

  • provide a library of procedural macros which can concatenate identifiers with various different kinds of hygiene results (see below for details).

Macros in ident position

The concept is pretty simple - anywhere we accept an identifier in current Rust, we would accept a macro. This includes (but is not limited to): variable declarations, paths, field expressions, method and function calls, item names (functions, structs, etc.), field names, type names, type variables (declaration and use), and so forth. I propose that this facility is only available for 'new' macros, partly for technical reasons (see below, hygiene and types) and partly as a carrot to move people to the new macro system.

Examples, if foo is a macro: x.foo!() (field access), x.foo!(bar)() (method call), fn foo!(baz)(x: T) { ... } (function declaration).

If a macro occurs in expression or pattern position, then there is the question of whether it should be parsed as a macro in expression position or a macro in ident position as a path which is an expression (similarly with patterns). I think we can always choose the least specific option (i.e., expression rather than ident position) because we parse the macro body after expansion, so if it is an ident, the parser will wrap it to make an expression (if we assumed it was an ident, but it was a non-ident expression, we would be stuck). This only works for procedural macros if their output is tokens rather than AST (see later blog posts).

I believe parsing should not be too affected - after parsing an ident, we simply need to check for an !. Any AST node that contains an Ident must instead contain a new enum which can either be an Ident or a macro in ident position.

I don't believe there are issues with expansion or hygiene, expansion should work the same as for macros in other positions and hygiene should "just work". One might imagine that since idents are the target of the hygiene algorithm, creating new ones with macros would be difficult. But I don't think this is the case - the macro must expand to an ident only (or further macros that expand to an ident), and the hygiene context for that ident will be due to the macro which creates it and the expansion itself (see note on sets of scopes algorithm below). We might need to adjust the hygiene context of the ident in the macro where it is created, but that is a separate issue, see below. Note that in nearly all cases, users of a macro that produces an ident will need to pass some context to the macro to make the produced ident accessible.

There are some undecided questions where macros supply idents in items, types, and other places where we don't currently support hygiene. I propose to only support macros in ident position with new macros, which should be hygienic where current macros are not. This should make things easier. Exactly how macros in ident position interact with item hygiene is an open question until we nail down exactly how item hygiene should work.

Note: sets of scopes

I have been thinking of changing our hygiene algorithm to the sets of scopes algorithm. I won't go into the details here, but I think it will help with a lot of issues. It should mostly be simpler than the mtwt algorithm, but one area where it will add complexity is with use-site scopes. These are added to the sets of scopes in order to handle recursive macros, but when a macro contributes to a new binding (I believe this will mean macros in pattern position and macros in ident position where the ident introduces a binding), then we must be careful not to add use-site scopes. This point needs more consideration, but I think it will be OK - we just have to be careful about these scopes.

Drawbacks and alternatives

I think macros in ident position look ugly - having double sets of parentheses (in function calls) or parentheses where there wouldn't normally be is confusing to read. It also makes code harder to read in general, since names are an essential way we link parts of code together. Having names be macro generated makes code harder to make sense of. It's also confusing for tools.

One alternative would be to only allow macros in ident position inside macro definitions - this should address most use cases, without making general Rust code harder to read (macros are already harder to read, so there is less of an impact). I think I favour this alternative, although I am very keen to get others' opinions on it.

Another alternative would be to come up with some new syntax especially for use in ident position - this might be less ugly and give more of a hint of the generated name. However, since we must be able to pass arguments to macros, I'm not optimistic that this is possible. Furthermore, it is more syntax and thus a bigger language which is confusing in its own way.

Library macros for concatenation

This section will be a bit hand-wavey for now, we need to decide on the fundamental APIs for allowing procedural macros to interact with the hygiene system before we can settle the details.

I expect we want a fundamental create_ident! macro (or create_ident function) which takes a string name and a syntax context (probably some token which carries its hygiene information, more on exactly how this works later). E.g., create_ident!(foo, "bar") would create an ident with the name bar and a hygiene context taken from foo.

We would also have fresh_ident! which would create an ident from a string name with a fresh hygiene context (similar to gensym'ing) and new_ident! which does the same thing but with an empty hygiene context (i.e., it gets only the context due to the expansion of the macro where it is created). The difference between the two is that two idents created with fresh_ident! would have different contexts, but two created with new_ident! would have the same contexts.

We then provide convenience macros which take a list of idents and/or an ident (for its hygiene context) and a list of things which produce strings, and produce a new ident with either a hygiene context taken from the first ident, or a separately specified object, or a fresh context. Obviously, we need to make this a bit more concrete.


struct Foo {
    a: A,
    b: B,
}

macro! def_getters {
    ($f: ident, $t: ident) => {
        fn concat_idents_with!($f, "get_", $f)(&self) -> &$t {
            &self.$f
        }
        fn concat_idents_with!($f, "get_", $f, "_mut")(&mut self) -> &mut $t {
            &mut self.$f
        }
    }
}

impl Foo {
    def_getters!(a, A);
    def_getters!(b, B);
}

fn main() {
    let f = Foo { ... };
    println!("{}", f.create_ident!(f, "get_a")());
}

Where concat_idents_with!($f, "get_", $f, "_mut") expands to an ident with name get_$f_mut and hygiene context taken from $f. Note that in this case concat_idents_with! is used in a binding context, so the hygiene context (under a set of scopes model) should not include a use-site scope.

The use of create_ident in main is a bit silly, it's just for demonstration purposes: f.create_ident!(f, "get_a")() has exactly the same meaning as writing f.get_a().


References

concat_idents tracking issue

macros in ident position issue

macros in ident position RFC

Pascal ChevrelFollow-up to my current migration to Atom Editor

After my recent blog post announcing that I was transitioning from Sublime Text to Atom, I got a couple of nice surprises from the community that fix some of the annoyances I have with Atom or that will just make it better.

How to fix the keyboard shortcut to comment out a single line on a French keyboard

Just click on the 'Edit/Open your Keymap' menu item and put these lines at the bottom:

'.platform-linux atom-text-editor':
  'ctrl-:': 'editor:toggle-line-comments'

That will make the shortcut work along with the keyboard localization package installed for French (and Belgian French too). If you are on Windows, use the selector .platform-win32 (I don't know what the MacOS one is).

How to have a basic project mode like in Sublime text and be able to switch projects

Install the Project Manager package (thanks to Giorgio Maone, Mozilla add-on dev, for the tip); it's roughly equivalent in functionality to Sublime's built-in project manager and it seems good enough to me. One caveat is that switching from one project to another is a bit slow.

Support for .lang Mozilla syntax files in Atom

This is a nice gift from my colleague Francesco Lodolo: he made a syntax highlighter for the DotLang localization text format we use for mozilla.org and other sites we need fast translations for. This will be useful to me, but also to Mozilla localizers who may want to use Atom to edit their translations. Here is the package:

DotLang language support in Atom

And a screenshot to give you an idea of the end result.


New Version of Atom 1.2 stable and 1.3beta released

This is always nice to have; I like new stuff :) Here is their blog post about it: Atom 1.2

Codeintel equivalent in Atom for PHP?

There is a series of packages called php-integrator-* (base, autocomplete plus, tooltips, annotations...) that supposedly provide the equivalent of Codeintel's services, but after indexing a single project for an hour (bringing my computer to its knees), I couldn't make it work. On a couple of occasions I saw a nice tooltip for an indexed method, but I don't know how I triggered it and I don't get any autocompletion of classes while typing. I guess it's still pretty much alpha stuff, but hopefully it will work some day. Also I suspect it tried to index all of my dependencies in the vendor directory… I only need my own code to be indexed, not the external dependencies or a whole framework. The good news, I guess, is that something is being developed and I might get that feature one day.

Atoum integration in Atom?

Another nice surprise from the community: it looks like Julien Bianchi, one of the Atoum developers, is working on a package to get Atoum into Atom following my request on Twitter:



Many many thanks to him, I am always amazed at how nice the people in the Atoum project are with their users :)

UPDATE: here is the Atoum plugin and a video demoing it

Conclusion of the day

My transition is going well and progressing quickly; today I coded exclusively in Atom. I found some small bugs and needed to get my bearings in the new environment, but it's not a revolution compared to Sublime and so far I have felt rather productive. Most of the problems I have really are in the realm of polishing and finding where an option is set up or what a new shortcut is. That said, the experience is satisfying and I probably didn't get more headaches today than I had when I switched from Geany to Sublime a couple of years ago. So far, so good :)


Nick ThomasThe latest on firefox/releases/latest

The primary way to download Firefox is at www.mozilla.org, but Mozilla’s Release Engineering team has also maintained directories like

firefox/releases/latest/
to provide a stable location for scripted downloads. There are similar links for betas and extended support releases for organisations. Read on to learn how these directories have changed, and how you can continue to download the latest releases.

Until recently these directories were implemented using a symlink to the current version, for example firefox/releases/42.0/. The storage backend has now changed to Amazon S3 and this is no longer possible. To implement the same functionality we’d need a duplicate set of keys, which incurs more maintenance overhead. And we already have a mechanism for delivering files independent of the current shipped version – our download redirector Bouncer. For example, here’s the latest release for Windows 32bit, U.S. English:


Modifying the product, os, and/or lang allows other combinations. This is described in the README.txt files for beta, release, and esr, as well as the Thunderbird equivalents release and beta.

Please adapt your scripts to use download.mozilla.org links. We hope it will help you simplify at the same time, as scraping to determine the current version is no longer necessary.
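If your script builds the link itself, it only needs those three parameters. Here is a minimal sketch (the parameter values shown, such as firefox-latest, are assumptions for illustration; check the README.txt files for the supported combinations):

// Build a download.mozilla.org (Bouncer) link from product, os and lang.
function bouncerUrl(product, os, lang) {
  const params = new URLSearchParams({ product, os, lang });
  return `https://download.mozilla.org/?${params}`;
}

// For example, a latest-release, Windows 32-bit, U.S. English download:
console.log(bouncerUrl('firefox-latest', 'win', 'en-US'));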

PS. We’ve also removed some latest- directories which were old and crufty, eg firefox/releases/latest-3.6.

The Servo BlogThis Week In Servo 42

In the last week, we landed 106 PRs in the Servo organization’s repositories.

We have a new homepage, http://www.servo.org! Many thanks to nerith and lucywyman!

Prabhjyot Sodhi did some initial analysis of Servo’s E-Easy bugs to show how long it is before they are closed - for a large subset of them, it was within 24 hours!

Notable additions

  • SimonSapin added a ./mach run --android to help ease the new Android developer experience
  • rillian fixed our placeholder image
  • larsberg fixed the Gonk intermittent failures - still lots more intermittents to go, sadly!
  • ajeffrey made DOMString opaque, enabling future optimization and representation tricks
  • Manishearth enabled delegation, which allows reviewers to let contributors review-carry their PRs without having to wait for a reviewer
  • paul added the mozbrowsericonchangeevent API
  • nox made JS-managed objects weakref-able
  • David Raifaizen limited the image formats that Servo will accept
  • Michael Howell fixed a layout regression affecting sites declaring viewports
  • Greg Guthe implemented a subset of the NetworkTiming DOM API.
  • waffles made network requests cancellable.
  • Emilio Cobos Álvarez implemented WebIDL sequence type return values.

New Contributors


At last week’s meeting, we discussed our GitHub subproject handling, the new servo.org webpage, multi-repo workflows, and the revival of our Rust-tracking-bug.

Daniel PocockQuick start using Blender for video editing

Updated 2015-11-16 for WebM

Although it is mostly known for animation, Blender includes a non-linear video editing system that is available in all the current stable versions of Debian, Ubuntu and Fedora.

Here are some screenshots showing how to start editing a video of a talk from a conference.

In this case, there are two input files:

  • A video file from a DSLR camera, including an audio stream from a microphone on the camera
  • A separate audio file with sound captured by a lapel microphone attached to the speaker's smartphone. This is a much better quality sound and we would like this to replace the sound included in the video file.

Open Blender and choose the video editing mode

Launch Blender and choose the video sequence editor from the pull down menu at the top of the window:

Now you should see all the video sequence editor controls:

Setup the properties for your project

Click the context menu under the strip editor panel and change the panel to a Properties panel:

The video file we are playing with is 720p, so it seems reasonable to use 720p for the output too. Change that here:

The input file is 25fps so we need to use exactly the same frame rate for the output, otherwise you will either observe the video going at the wrong speed or there will be a conversion that is CPU intensive and degrades the quality. Also check that the resolution_percentage setting under the picture dimensions is 100%:

Now specify output to PNG files. Later we will combine them into a WebM file with a script. Specify the directory where the files will be placed and use the # placeholder to specify the number of digits to use to embed the frame number in the filename:

Now your basic rendering properties are set. When you want to generate the output file, come back to this panel and use the Animation button at the top.

Editing the video

Use the context menu to change the properties panel back to the strip view panel:

Add the video file:

and then right click the video strip (the lower strip) to highlight it and then add a transform strip:

Audio waveform

Right click the audio strip to highlight it and then go to the properties on the right hand side and click to show the waveform:

Rendering length

By default, Blender assumes you want to render 250 frames of output. Looking in the properties to the right of the audio or video strip you can see the actual number of frames. Put that value in the box at the bottom of the window where it says 250:

Enable AV-sync

Also at the bottom of the window is a control to enable AV-sync. If your audio and video are not in sync when you preview, you need to set this AV-sync option and also make sure you set the frame rate correctly in the properties:

Add the other sound strip

Now add the other sound file that was recorded using the lapel microphone:

Enable the waveform display for that sound strip too, this will allow you to align the sound strips precisely:

You will need to listen to the strips to make an estimate of the time difference. Use this estimate to set the "start frame" in the properties for your audio strip, it will be a negative value if the audio strip starts before the video. You can then zoom the strip panel to show about 3 to 5 seconds of sound and try to align the peaks. An easy way to do this is to look for applause at the end of the audio strips, the applause generates a large peak that is easily visible.

Once you have synced the audio, you can play the track and you should not be able to hear any echo. You can then silence the audio track from the camera by right clicking it, look in the properties to the right and change volume to 0.

Make any transforms you require

For example, to zoom in on the speaker, right click the transform strip (3rd from the bottom) and then in the panel on the right, click to enable "Uniform Scale" and then set the scale factor as required:

Render the video output to PNG

Click the context menu under the Curves panel and choose Properties again.

Click the Animation button to generate a sequence of PNG files for each frame.

Render the audio output

On the Properties panel, click the Audio button near the top. Choose a filename for the generated audio file.

Look on the bottom left-hand side of the window for the audio file settings, change it to the ogg container and Vorbis codec:

Ensure the filename has a .ogg extension

Now look at the top right-hand corner of the window for the Mixdown button. Click it and wait for Blender to generate the audio file.

Combine the PNG files and audio file into a WebM video file

You will need to have a few command line tools installed for manipulating the files from scripts. Install them using the package manager, for example, on a Debian or Ubuntu system:

# apt-get install mjpegtools vpx-tools mkvtoolnix

Now create a script like the following:

#!/bin/bash -e

# Set this to match the project properties
FRAME_RATE=25

# Set this to the rate you desire:
TARGET_BITRATE=1000

# Adjust these paths for your own project (placeholder values):
PNG_DIR=/path/to/rendered/png
AUDIO_FILE=/path/to/mixdown.ogg
YUV_FILE=/tmp/video.yuv
WEBM_FILE=/path/to/output.webm

NUM_FRAMES=`find ${PNG_DIR} -type f | wc -l`

png2yuv -I p -f $FRAME_RATE -b 1 -n $NUM_FRAMES \
    -j ${PNG_DIR}/%08d.png > ${YUV_FILE}

vpxenc --good --cpu-used=0 --auto-alt-ref=1 \
   --lag-in-frames=16 --end-usage=vbr --passes=2 \
   --threads=2 --target-bitrate=${TARGET_BITRATE} \
   -o ${WEBM_FILE}-noaudio ${YUV_FILE}

rm ${YUV_FILE}

mkvmerge -o ${WEBM_FILE} -w ${WEBM_FILE}-noaudio ${AUDIO_FILE}

rm ${WEBM_FILE}-noaudio

Next steps

There are plenty of more comprehensive tutorials, including some videos on Youtube, explaining how to do more advanced things like fading in and out or zooming and panning dynamically at different points in the video.

If the lighting is not good (faces too dark, for example), you can right click the video strip, go to the properties panel on the right hand side and click Modifiers, Add Strip Modifier and then select "Color Balance". Use the Lift, Gamma and Gain sliders to adjust the shadows, midtones and highlights respectively.

Tim TaubertMore Privacy, Less Latency - Improved Handshakes in TLS Version 1.3

As of this writing, TLS v1.3 (draft-11) has not been finalized and the proposals presented here might change. I will do my best to keep this post up to date.

TLS must be fast. Adoption will greatly benefit from speeding up the initial handshake that authenticates and secures the connection. You want to get the protocol out of the way and start delivering data to visitors as soon as possible. This is crucial if we want the web to succeed at deprecating non-secure HTTP.

Let’s start by looking at full handshakes as standardized in TLS v1.2, and then continue to abbreviated handshakes that decrease connection times for resumed sessions. Once we understand the current protocol we can proceed to proposals made in the latest TLS v1.3 draft to achieve full 1-RTT and even 0-RTT handshakes.

It helps if you already have a rough idea of how TLS and Diffie-Hellman work as I can’t go into every detail. The focus of this post is on comparing current and future handshakes and I might omit a few technicalities to get basic ideas across more easily.

Full TLS 1.2 Handshake (static RSA)

Static RSA is a straightforward key exchange method, available since SSLv2. After sharing basic protocol information via the ClientHello and ServerHello messages the server sends its certificate to the client. ServerHelloDone signals that for now there will be no further messages until the client responds.

The client then encrypts the so-called premaster secret with the server’s public key found in the certificate and wraps it in a ClientKeyExchange message. ChangeCipherSpec signals that from now on messages will be encrypted. Finished, the first message to be encrypted and the client’s last message of the handshake, contains a MAC of all handshake messages exchanged thus far to prove that both parties saw the same messages, without interference from a MITM.

The server decrypts the premaster secret found in the ClientKeyExchange message using its certificate’s private key, and derives the master secret and communication keys. It then too signals a switch to encrypted communication and completes the handshake. It takes two round-trips to establish a connection.

Authentication: With static RSA key exchanges, the connection is authenticated by encrypting the premaster secret with the server certificate’s public key. Only the server in possession of the private key can decrypt, correctly derive the master secret, and send an encrypted Finished message with the right MAC.

The simplicity of static RSA has a serious drawback: it does not offer forward secrecy. If a passive adversary records all traffic to a server then every recorded TLS session can be broken later by obtaining the certificate’s private key.

This key exchange method will be removed in TLS v1.3.

Full TLS 1.2 Handshake (ephemeral DH)

A full handshake using (Elliptic Curve) Diffie-Hellman to exchange ephemeral keys is very similar to the flow of static RSA. The main difference is that after sending the certificate the server will also send a ServerKeyExchange message. This message contains either the parameters of a DH group or of an elliptic curve, paired with an ephemeral public key computed by the server.

The client too computes an ephemeral public key compatible with the given parameters and sends it to the server. Knowing their private keys and the other party’s public key both sides should now share the same premaster secret and can derive a shared master secret.

Authentication: With (EC)DH key exchanges it’s still the certificate that must be signed by a CA listed in the client’s trust store. To authenticate the connection the server will sign the parameters contained in ServerKeyExchange with the certificate’s private key. The client verifies the signature with the certificate’s public key and only then proceeds with the handshake.

Abbreviated Handshakes in TLS 1.2

Since SSLv2 clients have been able to use session identifiers as a way to resume previously established TLS/SSL sessions. Session resumption is important because a full handshake can take time: it has a high latency as it needs two round-trips and might involve expensive computation to exchange keys, or sign and verify certificates.

Session IDs, assigned by the server, are unique identifiers under which both parties store the master secret and other details of the connection they established. The client may include this ID in the ClientHello message of the next handshake to short-circuit the negotiation and reuse previous connection parameters.

If the server is willing and able to resume the session it responds with a ServerHello message including the Session ID given by the client. This handshake is effectively 1-RTT as the client can send application data immediately after the Finished message.

Sites with lots of visitors will have to manage and secure big session caches, or risk pushing out saved sessions too quickly. A setup involving multiple load-balanced servers will need to securely synchronize session caches across machines. The forward secrecy of a connection is bounded by how long session information is retained on servers.

Session tickets, created by the server and stored by the client, are blobs containing all necessary information about a connection, encrypted by a key only known to the server. If the client presents this ticket with the ClientHello message, and proves that it knows the master secret stored in the ticket, the session will be resumed.

A server willing and able to decrypt the given ticket responds with a ServerHello message including an empty SessionTicket extension, otherwise the extension would be omitted completely. As with session IDs, the client will start sending application data immediately after the Finished message to achieve 1-RTT.

To not affect the forward secrecy provided by (EC)DHE suites, session ticket keys should be rotated periodically; otherwise, stealing the ticket key would allow recovering recorded sessions later. In a setup with multiple load-balanced servers the main challenge here is to securely generate, rotate, and synchronize keys across machines.
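As an illustration of what rotation can look like, here is a generic Node.js sketch (not tied to any particular deployment; the file paths, port, and rotation interval are placeholder assumptions):

// Rotate the session ticket keys periodically so that a stolen key cannot
// be used to decrypt recorded sessions from long ago.
const tls = require('tls');
const fs = require('fs');
const crypto = require('crypto');

const server = tls.createServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ticketKeys: crypto.randomBytes(48), // initial 48 bytes of ticket key material
});

setInterval(() => {
  // In a load-balanced setup this fresh key would have to be generated in one
  // place and synchronized to every machine.
  server.setTicketKeys(crypto.randomBytes(48));
}, 12 * 60 * 60 * 1000); // e.g. every 12 hours

server.listen(8443);

Note that a naive swap like this invalidates every outstanding ticket at once; deployments typically keep the previous key around for decryption while issuing new tickets under the fresh one.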

Authentication: Both session resumption mechanisms retain the client’s and server’s authentication states as established in the session’s initial handshake. Neither the server nor the client have to send and verify certificates a second time, and thus can reduce connection times significantly, especially when dealing with RSA certificates.

Full Handshakes in TLS 1.3

The first good news about handshakes in TLS v1.3 is that static RSA key exchanges are no longer supported. Great! That means we can start with full handshakes using forward-secure Diffie-Hellman.

Another important change is the removal of the ChangeCipherSpec protocol (yes, it’s actually a protocol, not a message). With TLS v1.3 every message sent after ServerHello is encrypted with the so-called ephemeral secret to lock out passive adversaries very early in the game. EncryptedExtensions carries Hello extension data that must be encrypted because it’s not needed to set up secure communication.

Probably the most important change with regard to 1-RTT is the removal of the ServerKeyExchange and ClientKeyExchange messages. The DH parameters and public keys are now sent in special KeyShare extensions, a new type of extension to be included in the ServerHello and ClientHello messages. Moving this data into Hello extensions keeps the handshake compatible with TLS v1.2 as it doesn’t change the order of messages.

The client sends a list of KeyShare values, a value consisting of a named (EC)DH group and an ephemeral public key. If the server accepts it must respond with one of the proposed groups and its own public key. If the server does not support any of the given key shares the client may try again with a different configuration or abort.

Authentication: The Diffie-Hellman parameters themselves aren’t signed anymore; authentication will be a tad more explicit in TLS v1.3. The server sends a CertificateVerify message that contains a hash of all handshake messages exchanged so far, signed with the certificate’s private key. The client then simply verifies the signature with the certificate’s public key.

Session Resumption in TLS 1.3 (PSK)

Session resumption via identifiers and tickets is obsolete in TLS v1.3. Both methods are replaced by a pre-shared key (PSK) mode. A PSK is established on a previous connection after the handshake is completed, and can then be presented by the client on the next visit.

The client sends one or more PSK identities as opaque blobs of data. They can be database lookup keys (similar to Session IDs), or self-encrypted and self-authenticated values (similar to Session Tickets). If the server accepts one of the given PSK identities it replies with the one it selected. The KeyShare extension is sent to allow servers to ignore PSKs and fall back to a full handshake.

Forward secrecy can be maintained by limiting the lifetime of PSK identities sensibly. Clients and servers may also choose an (EC)DHE cipher suite for PSK handshakes to provide forward secrecy for every connection, not just the whole session.

Authentication: As in TLS v1.2, the client’s and server’s authentication states are retained and both parties don’t need to exchange and verify certificates again. A regular PSK handshake initiating a new session, instead of resuming, omits certificates completely.

Session resumption still allows significantly faster handshakes when using RSA certificates and can prevent user-facing client authentication dialogs on subsequent connections. However, the fact that it requires a single round-trip just like a full handshake might make it less appealing, especially if you have an ECDSA or EdDSA certificate and do not require client authentication.

Zero-RTT Handshakes in TLS 1.3

The latest draft of the specification contains a proposal to let clients encrypt application data and include it in their first flights. On a previous connection, after the handshake completes, the server would send a ServerConfiguration message that the client can use for 0-RTT handshakes on subsequent connections. The configuration includes a configuration identifier, the server’s semi-static (EC)DH parameters, an expiration date, and other details.

With the very first TLS record the client sends its ClientHello and, changing the order of messages, directly appends application data (e.g. GET / HTTP/1.1). Everything after the ClientHello will be encrypted with the static secret, derived from the client’s ephemeral KeyShare and the semi-static DH parameters given in the server’s configuration.

The server, if able and willing to decrypt, responds with its default set of messages and immediately appends the contents of the requested resource. That’s the same round-trip time as for an unencrypted HTTP request. All communication following the ServerHello will again be encrypted with the ephemeral secret, derived from the client’s and server’s ephemeral key shares. After exchanging Finished messages the server will be re-authenticated, and traffic encrypted with keys derived from the master secret.
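To make the two secrets concrete, here is a minimal sketch using Node.js' crypto module. It only illustrates which key shares combine into which secret; the actual TLS 1.3 key schedule, labels, and wire format are ignored, and the server keys are stand-ins.

// Conceptual sketch: the client's ephemeral key combines with the server's
// semi-static share (static secret, protects the first flight) and with the
// server's ephemeral share (ephemeral secret, protects everything after
// ServerHello).
const crypto = require("crypto");

const clientEphemeral = crypto.createECDH("prime256v1");
clientEphemeral.generateKeys();

const serverSemiStatic = crypto.createECDH("prime256v1"); // from ServerConfiguration
const serverEphemeral = crypto.createECDH("prime256v1"); // from this ServerHello
const semiStaticPub = serverSemiStatic.generateKeys();
const ephemeralPub = serverEphemeral.generateKeys();

const staticSecret = clientEphemeral.computeSecret(semiStaticPub);
const ephemeralSecret = clientEphemeral.computeSecret(ephemeralPub);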

Security of 0-RTT Handshakes

At first glance, 0-RTT mode seems similar to session resumption or PSK, and you might wonder why one wouldn’t merge these mechanisms. The differences, however, are subtle but important, and the security properties of 0-RTT handshakes are weaker than those of other kinds of TLS data:

1. To protect against replay attacks the server must incorporate a server random into the master secret. That is unfortunately not possible before the first round-trip and so the poor server can’t easily tell whether it’s a valid request or an attacker replaying a recorded conversation. Replay protection will be in place again after the ServerHello message is sent.

2. The semi-static DH share given in the server configuration, used to derive the static secret and encrypt first flight data, defies forward secrecy. We need at least one round-trip to establish the ephemeral secret. As configurations are shared between clients, and recovering the server’s DH share becomes more attractive, expiration dates should be limited sensibly. The maximum allowed validity is 7 days.

3. If the server’s DH share is compromised a MITM can tamper with the 0-RTT data sent by the client, without being detected. This does not extend to the full session as the client can retrospectively authenticate the server via the remaining handshake messages.

Defending against Replay Attacks

Thwarting replay attacks without input from the server is fundamentally very expensive. It’s important to understand that this is a generic problem, not an issue with TLS in particular, so alas one can’t just borrow another protocol’s 0-RTT model and put that into TLS.

It is possible to have servers keep a list of every ClientRandom they have received within a given time window. Upon receiving a ClientHello, the server checks its list and rejects replays if necessary. This list must be globally and temporally consistent: TLS’ reliable delivery guarantee opens attack vectors if an attacker can force a server to lose its state, and the same goes for multiple servers in loosely-synchronized data centers.
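For a single server, such a strike list could be sketched like this. The helper is hypothetical, and the hard part, keeping the list consistent across servers and data centers, is exactly what this sketch ignores.

// Remember every ClientRandom seen within the acceptance window and reject
// any 0-RTT data whose ClientRandom shows up a second time.
const seen = new Map(); // clientRandom (hex) -> timestamp (ms)
const WINDOW_MS = 10 * 1000;

function accept0RTT(clientRandom, now = Date.now()) {
  for (const [random, ts] of seen) {
    if (now - ts > WINDOW_MS) seen.delete(random); // drop expired entries
  }
  const key = clientRandom.toString("hex");
  if (seen.has(key)) return false; // likely a replay: reject the early data
  seen.set(key, now);
  return true;
}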

Maintaining a consistent global state is possible, but only in some limited circumstances, namely for very sophisticated operators or situations where there is a single server with good state management. We will need something better.

Removing Anti-Replay Guarantee

A possible solution might be a TLS stack API to let applications designate certain data as replay-safe, for example GET / HTTP/1.1 assuming that GET requests against a given resource are idempotent.

let c = new TLSConnection(...);
c.setReplayable0RTTData("GET / ...");

Applications can, before opening the connection, specify replayable 0-RTT data to send on the first flight. If the server ignores the given 0-RTT data, the TLS stack automatically replays it after the first round-trip.

Removing Reliable Delivery Guarantee

Another way of achieving the same outcome would be a TLS stack API that again lets applications designate certain data as replay-safe, but does not automatically replay if the server ignores it. The application can decide to do this manually if necessary.

let c = new TLSConnection(...);
c.setUnreliable0RTTData("GET / ...");

if (c.delivered0RTTData()) {
  // Things are cool.
} else {
  // Try to figure out whether to replay or not.
}

Both of these APIs are early proposals and the final version of the specification might look very different from what we can see above. Though, as 0-RTT handshakes are a charter goal, the working group will very likely find a way to make them work.

Summing up

TLS v1.3 will bring major improvements to handshakes; exactly how will be finalized in the coming months (or years?). They will be more private by default, as all information not needed to set up a secure channel will be encrypted as early as possible. Clients will need only a single round-trip to establish secure and authenticated connections to servers they have never spoken to before.

Static RSA mode will no longer be available, forward secrecy will be the default. The two session resumption standards, session identifiers and session tickets, are merged into a single PSK mode which will allow streamlining implementations.

The proposed 0-RTT mode is promising, for custom application communication based on TLS but also for browsers, where a GET / HTTP/1.1 request to your favorite news page could deliver content blazingly fast as if no TLS was involved. The security aspects of zero round-trip handshakes will become more clear as the draft progresses.

QMOFirefox 43 Beta 3 Testday Results

Hey everyone! :)

On November 13th, we held Firefox 43.0 Beta 3 Testday and it was another notable event – we had quite a number of participants this time too 😀 

Many thanks go out to kenkon, Moin Shaikh, Spandana, Iryna Thompson, PreethiDhinesh and to the people from our Bangladesh Community: Hossain Al Ikram, Nazir Ahmed Sabbir, Raihan Ali, Saheda Reza Antora, Rezaul huque Nayeem, Khalid Syfullah Zaman, Asiful Kabir, Sajedul Islam, Fariha Afrin, Mohammad Maruf Islam, Forhad Hossain, Akib Naved, Md. Tarikul Islam Oashi, Tanjil Haque, Md. Almas Hossain, Amlan Biswas, Mahfuza Humayra Mohona, Nazmus Shakib Robin, Md. Badiuzzaman Pranto, Tahsan Chowdhury Akash, Md. Ehsanul Hassan, Shaily Roy and Shahadat for getting involved.

Also a big thank you to all our active moderators.


  • no new issues logged for Add-ons Signing, Search Suggestions or Unified Autocomplete features, but some potential ones were mentioned in the etherpad; therefore, please feel free to add the requested details in the etherpad or, even better, join us on #qa IRC channel and let’s figure them out 😉
  • 6 bugs were verified: 1168552, 1025778, 1192606, 1128472, 1137009 and 1024343.    

Keep an eye on QMO for upcoming events! 😉


Daniel StenbergThe most popular curl download – by a malware

During October 2015 the curl web site sent out 1127 gigabytes of data. This was the first time we crossed the terabyte limit within a single month.

Looking at the stats a little closer, I noticed that in July 2015 a particular single package started to get very popular. The exact URL was


Curious. In October it alone was downloaded more than 300,000 times, accounting for over 70% of the site’s bandwidth. Why?

The downloads came from what appear to be many different locations. They didn’t send any HTTP referer headers and they used different User-Agent headers. I couldn’t really see a search bot gone haywire or a malicious robot stuck in a crazy mode.

After I shared some of this data over in our IRC channel (#curl on freenode), Björn Stenberg stumbled over this AVG slide set, describing how a particular malware works when it infects a computer. Downloading that particular file is thus a step in its procedure to create a trojan that will run on the host system; see slide 11 for the curl details. The slides also mention that an updated version of the malware comes bundled with the curl library already, which I guess means the hits we still see on the curl site come from older versions of the malware still being run.

Of course, we can’t be completely sure this is the source of the increased downloads of this particular file, but it seems highly likely.

I renamed the file just now to see what happens.

Evil use of good code

We can of course not prevent evil uses of our code. We provide source code and we even host some binaries of curl and libcurl and both good and bad actors are able to take advantage of our offers.

This rename won’t prevent a dedicated hacker, but hopefully it can prevent a few new victims from getting this malware running on their machines.

Update: the Hacker News discussion about this post.

This Week In RustThis Week in Rust 105

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • rustfmt is now part of Rust Nursery.
  • Leaf. Open Machine Intelligence Framework.
  • lrs. An experimental, linux-only standard library.

Updates from Rust Core

95 pull requests were merged in the last week.

See the triage digest and subteam reports for more details.

Notable changes

New Contributors

  • corentih
  • Danilo Bargen
  • Eric Findlay
  • Erik Davidson
  • Kohei Hasegawa
  • Sebastian Hahn

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is Hyper, which offers a Rust HTTP(S) implementation for both clients and servers.

Thanks to DanielKeep for this week's suggestion. Submit your suggestions for next week!