Mozilla Addons Blog: Become a Featured Themes Collection Curator

The Featured Themes collection is a great place to start if you’re looking for nice lightweight themes to personalize your Firefox. From kittens to foxes, winter snowscapes to sunny beaches, it is a continually rotating collection of high-quality themes in a variety of colors, images, and moods.

Currently, volunteer theme reviewers are invited to help curate this collection, but we’d like to open it to more community participation. We invite theme creators with a keen eye for design to apply to become featured themes curators. Over a six-month period, volunteer curators will join a small group of theme reviewers and staff members to add 3 – 5 themes each week to the collection and to remove any themes that have been featured for longer than two weeks.

To learn more about becoming a featured themes curator and what it entails please take a look at the wiki. If you would like to apply to become a curator, please email cneiman [at] mozilla [dot] com with a link to your AMO profile, a brief statement about why you would make a strong curator, and a link to a collection of at least five themes that you feel are feature-worthy.

Carsten Book: Sheriffing @ Mozilla – Sheriffing a Community Task!

I was recently asked whether volunteers can help with sheriffing!
And the answer is very simple: of course you can, and you are very welcome!
As in every part of Mozilla, volunteers are very important. Our team is a mix of full-time employees and volunteers.
What is needed to join as a Community Sheriff?
I think there are basically 3 things you need to participate as a Community Sheriff:
-> Communication skills and teamwork – sheriffing means a lot of communication: communication with the other sheriff teams, with developers, and with teams like Taskcluster and Release Engineering.
-> Background knowledge of how Bugzilla works (commenting in bugs, resolving bugs, setting flags, etc.)
-> The ability to see context & relationships between failures (like the relation of a set of failures to a checkin) to identify the changeset that caused the regression.
All our tools are publicly accessible and you don't need any specific access rights to get started.
Our main tool is Treeherder, and the first task a Community Sheriff could do is to watch Treeherder and star failures.
We have described this task here.
That would help us a lot!
If you are curious what a day in sheriffing looks like, then maybe this can help 🙂
Please let us know if you are interested in becoming a sheriff! You can find us in the #sheriffs channel!

Christian Heilmann: Important talks: Sacha Judd’s “How the tech sector could move in One Direction”

I just watched a very important talk from last year’s Beyond Tellerrand conference in Berlin. Sacha Judd (@szechuan) delivered her talk “How the tech sector could move in One Direction” at that conference, and at Webstock in New Zealand a few days ago. It is a great example of how a talk can be insightful, exciting and challenge your biases at the same time.

You can watch the video, read the transcript and get the slides.

I’ve had this talk on my “to watch” list for a long time and the reason is simple: I couldn’t give a toss about One Direction. I was – like many others – of the impression that boy bands like them are the spawn of commercial satan (well, Simon Cowell, to a large degree) and everything that is wrong with music as an industry and media spectacle.

And that’s the great thing about this talk: it challenged my biases and showed me that by dismissing something as not for me, I also discard a lot of opportunity.

This isn’t a talk about One Direction. It is a talk about how excitement for a certain topic gets people to be creative, communicate and do things together. That their tastes and hysteria aren’t ours and can be off-putting isn’t important. What is important is that people are driven to create. And it is important to analyse the results and find ways to nurture this excitement. It is important to channel it into ways for these fans to turn the skills they learned into a professional career.

This is an extension of something various people (including me) have kept talking about for quite a while. It is not about technical excellence. It is about the drive to create and learn. Our market changes constantly. This is not our parents’ 1950s generation, where you got a job for life and died soon after retirement, having honed and used one skill for your whole lifetime. We need to roll with the punches and changes in our markets. We need to prepare to be more human, as the more technical we are, the easier we are to replace with machines.

When Mark Surman of Mozilla compared the early days of the web to his past in the punk subculture, creating fanzines by hand, it resonated with me, as this is what I did, too.

When someone talked about One Direction fan pages on Tumblr, it didn’t speak to me at all. And that’s a mistake. The web has moved from a technical subculture flourishing under an overly inflated money gamble (ecommerce, VC culture) to being a given. Young people don’t find the web. They are always connected and happy to try and discard new technology like they would fashion items.

But young people care about things, too. And they find ways to tinker with them. When a fan of One Direction gets taught by friends how to change CSS to make their Tumblr look different or use browser extensions to add functionality to the products they use to create content we have a magical opportunity.

Our job as people in the know is to ensure that the companies running creation tools don’t leave these users in the lurch when the VC overlords tell them to pivot. Our job is to make sure that they can become more than products to sell on to advertisers. Our job is to keep an open mind and see how people use the media we helped create. Our job is to go there and show opportunities, not to only advertise on hackernews. Our job is to harvest these creative movements and turn them into the next generation of carers of the web.

I want to thank Sacha for this talk. There is a lot of great information in there and I don’t want to give it all away. Just watch it.

Cameron Kaiser: Farewell to SHA-1 and hello to TenFourFox FPR1

The long-in-the-tooth hashing algorithm SHA-1 has finally been broken by its first collision attack (and the broken SHA-1 apparently briefly broke the WebKit repository, too). SHA-1 TLS certificates have been considered dangerous for some time, which is why the Web has largely moved on to SHA-256 certificates and why I updated Classilla to support them.

Now that it is within the realm of practical plausibility to actually falsify site certificates signed with SHA-1 and deploy them in the real world, it's time to cut that monkey loose. Mozilla is embarking on a plan of increasing deprecation, but Google is dumping SHA-1 certificates entirely in Chrome 56, in which they will become untrusted. Ordinarily we would follow Mozilla's lead here with TenFourFox, but 45ESR doesn't have the more gradated current SHA-1 handling they're presently using.

So, we're going to go with Chrome's approach and shut the whole sucker down. The end of 45ESR (and the end of TenFourFox source parity) will occur on June 13 with the release of Firefox 54 (52.2 ESR); the last scheduled official 45ESR is 45.9, on April 18. For that first feature parity release of TenFourFox in June after 45.9, security.pki.sha1_enforcement_level will be changed from the default 0 (permit) to 1 (deny) and all SHA-1 certificates will become untrusted. You can change it back if you have a server you must connect to that uses a SHA-1 certificate to secure it, but this will not be the default, and it will not be a supported configuration.

You can try this today. In fact, please do switch over now and see what breaks; just go into about:config, find security.pki.sha1_enforcement_level and set it to 1. The good news is that Mozilla's metrics indicate few public-facing sites should be affected by this any more. In fact, I can't easily find a server to test this with; I've been running with this setting switched over for the past day or so with nothing gone wrong. If you hit this on a site, you might want to let them know as well, because it won't be just TenFourFox that will refuse to connect to it very soon. TBH, it would have to cause problems on a major, major site for me not to implement this change because of the urgency of the issue, but I want to give our users enough time to poke around and make sure they won't suddenly be broken with that release.

That release, by the way, will be the end of TenFourFox 45 and the beginning of the FPR (Feature Parity Release) series, starting with FPR1. Internally the browser will still be a fork of Gecko 45 and addons that ask its version will get 45.10 for FPR1 and 45.11 for FPR2 and so on, but the FPR release number will now appear in the user agent string as well as the 45.x version number, and the About box and site branding will now reference the current FPR number instead. The FPR will not necessarily be bumped every release: if it's a security update only and there are no new major features, it will get an SPR bump instead (Security Parity Release). FPR1 (SPR2) would be equivalent, then, to an internal version of 45.10.2.

Why drop the 45 branding? Frankly, because we won't really be 45.* Like Classilla, where I backported later changes into its Mozilla 1.3.1 core, I already have a list of things to add to the TenFourFox 45 core that will improve JavaScript ES6 compatibility and enable additional HTML5 features, which will make the browser more advanced. Features post-52ESR will be harder to backport as Mozilla moves more towards Rust code descended from Servo and away from traditional C++ and XPCOM, but there is a lot we can still make work, and unlike Classilla we won't be already six years behind when we get started.

The other thing we need to start thinking about is addons. There is no XUL, only WebExtensions, according to Mountain View, and moreover we'll be an entire ESR behind even if Mozilla ends up keeping legacy XUL addons around. While I don't really want to be in the business of maintaining these addons, we may want to archive a set of popular ones so that we don't need to depend on AMO. Bluhell Firewall, uBlock, Download YouTube Videos as MP4 and OverbiteFF are already on my list, and we will continue to host and support the QuickTime Enabler. What other ones of general appeal would people like added to our repository? Are there any popular themes or Personas we should keep copies of? (No promises; the final decision is mine.)

The big picture is still very bright overall. We'll have had almost seven years of source parity by June, and I think I can be justifiably proud of that. Even the very last Quad G5 to roll off the assembly line will have had almost eleven years of current Firefox support and we should still have several more years of practical utility in TenFourFox yet. So, goodbye SHA-1, but hello FPR1. The future beckons.

(* The real reason: FPR and SPR are absolutely hilarious PowerPC assembly language puns that I could backronym. I haven't figured out what I can use for GPR yet.)

James Long: Why I'm Frequently Absent from Open Source

The goal of free open source development is empowerment: everyone can not only use code for free but also contribute to and influence it. This model allows people to teach and learn from each other, improves businesses by sharing work on similar ideas, and has given some people the chance to break out and become well-known leaders.

Unfortunately, in reality open source development is rife with problems and is ultimately unsustainable. Somebody has to pay the cost of maintaining a project. Actually writing code is only a small part of running a project, so it's never as simple as mutually benefitting from sharing code. Somebody, or a group of people, needs to step up and spend lots of hours every week to make sure code actually gets pushed through the pipeline and issues are triaged.

Even if that problem was solved, human beings are complex and full of conflict when we try to work together, and open source has little structure for dealing with these problems like a normal workplace would.

For businesses, an active open source project may not make sense. I've seen projects go open source just for the "warm fuzzies" that unfortunately embody that model, only to drown in all the work that goes into powering open source. When most of the time is spent trying to help contributors, call it for what it is: charity. Which is great! But it may be more efficient to focus on the project internally.

I love open source! I think it's a net win for society, even if there are big problems we still need to solve. I wouldn't be close to where I am today without it, and it felt amazing to release prettier and watch its success (even if I'm dealing with the above problems now).

I think we need to change how we talk about it and change our expectations. Because right now open source is full of guilt. Tell people that it's OK to neglect a project for a while. It's OK to let PRs pile up for a few weeks, and then close some that you don't have time to look into. It's OK to tell contributors that nobody has time to help them. The best solution is to find other people to help, but if that doesn't happen, all of the above are totally OK to do.

Now that I got that off my chest, I wanted to explain why I am frequently absent from my open source projects. I haven't even looked at prettier's issues for days, and I've let other projects die because of lack of time.

The reason is simple: family.

Sarah and I have been married for 6 years as of today (with 2 kids, Evy and Georgia). I write a lot about technology on my blog, but I don't write about the thing that I spend more time on and that is way more important than tech: my family. I thought our 6-year anniversary was a good time to acknowledge that!

For a while I put myself in a lucky position of not maintaining any open source projects. But recently I released prettier and it's been interesting to try to balance work on it with my family (and not to mention change of jobs). While it's an important project to me, most of the time I intentionally ignore it to spend time with my family at nights and weekends. (Thanks to Christopher for spending so much time on prettier and making my time away less guilty).

I'll be honest, it's a struggle sometimes. Being married and raising kids is a lot of work by itself, and balancing free time is complicated. There's no question in my mind though that, by comparison, tech has little meaning in the greater context of life.

Happy 6th anniversary babe!

Mozilla Addons Blog: Improving Themes in Firefox

Last year, we started thinking about how to improve themes in Firefox. We asked theme designers and developers for their thoughts too, and received over 250 responses. After sorting through the responses and a few months of deliberation and experimentation, we have a plan, and would like to share what we’re going to do and why.

Let’s start with the “why.”

Currently, Firefox supports two types of themes: Complete Themes and Lightweight Themes (aka LWTs, formerly “Personas”, and now just “Themes” on AMO).

Lightweight Themes are very popular. There are currently over 400,000 such themes on addons.mozilla.org (AMO). They’re extremely simple to create; anyone can create one by just uploading an image to AMO. I made my first LWT eight years ago, and it still works fine today without any changes. However, LWTs are very limited in what they can do. They can lightly modify the default Firefox appearance with a background image and set a couple of colors, but nothing else. The survey confirmed that there’s a lot of interest in giving LWTs more creative control over how the browser looks: more backgrounds, colors, and icons on more UI elements.

Complete Themes, the second type of theme, completely replace the default Firefox appearance. Authors must provide all the needed CSS, images, and icons for the entire browser (from scratch!), which makes Complete Themes flexible but difficult to create and maintain. Authors have to understand Firefox’s complex and ever-changing UI internals, and there’s little documentation beyond Firefox’s source code. This leads to a serious compatibility burden, as authors need to invest time to keep their theme working with each Firefox release – only 60 of the 500 Complete Themes on AMO are compatible with current Firefox releases. Further, we’re not able to directly fix these issues without limiting our ability to improve Firefox. And since only 0.089% of Firefox users use a Complete Theme (less than 4% of the usage of LWTs), we’re going to focus on improving theming in other ways.

Firefox extensions (as opposed to themes) can also run into these problems, as there is no JavaScript API to control the appearance of the browser. Legacy add-ons wanting to do this had to directly alter the UI’s internal DOM and CSS. As with Complete Themes, this can give an extension great power, but it comes with a serious price in complexity and compatibility. WebExtensions are unable to access any UI internals, and so are particularly in need of a solution.

What we’re doing

We want to fix Firefox so theming is better for everyone.

The next generation of Firefox themes will blend LWTs’ ease of authoring with the additional capabilities we see most often used in Complete Themes. At its core is a JSON manifest, mapping defined property names to the underlying UI elements. Theme developers will be able to control a variety of styles on these properties (such as colors, icons, and background images), and Firefox will ensure the manifests are supported in a stable and well-documented way across future releases and UI updates. These themes will be layered on top of the default Firefox appearance, so you can create a trivial theme that just changes one property, or a complex theme that changes all of them.
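To make the idea concrete, a minimal manifest in this model might look something like the following. The property names here are hypothetical, modeled on Chrome’s theme manifest format rather than on any final Firefox schema:

```json
{
  "manifest_version": 2,
  "name": "Sunset",
  "version": "1.0",
  "theme": {
    "images": {
      "theme_frame": "images/sunset.png"
    },
    "colors": {
      "frame": [216, 92, 49],
      "tab_text": [255, 255, 255]
    }
  }
}
```

A theme that changes only one property would simply omit the rest; anything not specified falls back to the default Firefox appearance.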

The goal is to provide capabilities that enable people to make themes they love and use. We think it’s possible to make themes in Firefox powerful without the previous pitfalls. To get started, we’ll initially support the same properties as Chrome, to make the thousands of Chrome themes more easily available in Firefox. Then we’ll expand the set of properties in our API, so that Firefox themes will be able to do more. We expect to continue adding to this theming framework over time, with your feedback helping to guide what’s needed.

We also recognize that a property manifest won’t have all the capabilities of direct CSS manipulation, especially during the time we’re expanding coverage to cover the most common use cases. So, for theme authors who need additional capabilities (and are willing to bear the burden of supporting them), we’ll provide an “experimental” section of the manifest to continue to allow direct CSS manipulation of the Firefox UI. This is similar in spirit to WebExtension Experiments, and usage will also be restricted to pre-release versions of Firefox.

A WebExtensions API

Finally, we’re adding a WebExtensions API for theming. It will have the same capabilities as the JSON manifest’s properties, except via a JavaScript API, so add-ons can make dynamic changes at runtime. This will enable add-ons to do things like adjusting colors based on the time of day (e.g. Flux), or matching your theme with the weather outside.

Questions and Feedback

For more information on our current work, see our Engineering Plan. We’ve just started some of the foundational work, but would welcome your input as we move towards building the first version that will be usable by theme authors. Our goal is to have this in place before Firefox 57 ships in November, which will end support for Complete Themes. We currently expect early testing to begin in Nightly over the next few months.

Please address your questions and feedback to the Dev-Addons list at

We’re also keeping a list of Frequently Asked Questions; check it out to see if it answers your question.

Will Kahn-Greene: Who uses my stuff?


I work on a lot of different things. Some are applications, some are libraries, some I started, some other people started, etc. I have way more stuff to do than I could possibly get done, so I try to spend my time on things "that matter".

For Open Source software that doesn't have an established community, this is difficult.

This post is a wandering stream of consciousness covering my journey figuring out who uses Bleach.


Karl Dubost: [worklog] Edition 056 - The wind is crazy

webcompat life

This is the reason behind the magical value used by developers for coping with the iOS change between iOS 8 and iOS 9. It's awesome in some ways, because it is not really documented, if it is indeed the reason. Web developers are using a value from the C++ code not intended for it to overcome a change in a platform, which in itself creates Web compatibility issues on Firefox. I will write a bit more about it once I really understand what is happening.

webcompat issues

  • webc-4728 landofknown on Firefox iOS receives the desktop version.
  • webc-4744. No music. The second song never comes.
  • webc-4746. Another issue with viewport and large content. Chrome resizes the content, but Firefox respects the initial value.
  • webc-4782. Interaction between column layout and float.
  • webc-4804. London Transport with issues. dev


Air Mozilla: Internet Trivia Night (HTTPS: Humans + Tech Trivia Party)

On February 23rd, we're hosting an Internet Trivia Night (HTTPS: Humans + Tech Trivia Party) at WeWork Civic Center in San Francisco. We also want...

Mozilla Security Blog: The end of SHA-1 on the Public Web

Our deprecation plan for the SHA-1 algorithm in the public Web, first announced in 2015, is drawing to a close. Today a team of researchers from CWI Amsterdam and Google revealed the first practical collision for SHA-1, affirming the insecurity of the algorithm and reinforcing our judgment that it must be retired from security use on the Web.

As announced last fall, we’ve been disabling SHA-1 for increasing numbers of Firefox users since the release of Firefox 51 using a gradual phase-in technique. Tomorrow, this deprecation policy will reach all Firefox users. It is enabled by default in Firefox 52.

Phasing out SHA-1 in Firefox will affect people accessing websites that have not yet migrated to SHA-2 certificates, well under 0.1% of Web traffic. In parallel to phasing out insecure cryptography from Firefox, we will continue our outreach efforts to help website operators use modern and secure HTTPS.

Users should always make sure to update to the latest version of Firefox for the most-recent security updates and features by going to

Questions about Mozilla policies related to SHA-1 based certificates should be directed to the forum.

Mozilla Cloud Services Blog: Managing Push Subscriptions

This article is part of a continuing series about using and working with WebPush and Mozilla’s WebPush service. This article is not meant to be a general guide; instead it offers suggestions and insight into how best to use the service. Some knowledge of JavaScript, Python, or other technologies is presumed.

Sending out push notifications to Customers is a great way to ensure that they’re up to date with information that is of importance to them. It’s also important that your growing business uses this service efficiently, so you don’t wind up wasting money creating and sending out messages that no one will ever read. In addition, not honoring response codes could lead to your server being blocked or severely limited in how many Subscription Updates it may be able to send.

Mozilla’s WebPush server will let you know if a Customer has unsubscribed, but it’s important that you notice and act on these. I have created a simple demonstration program that can help you understand what you will need to consider when creating a Push Subscription service.


First off, let’s refresh a few terms we’ll use in this article:

App – The javascript web application that receives the decoded Push Message.

Customer – The end recipient of the Push Message.

Push Message – The content of the Subscription Update to be provided to the App by the User Agent and potentially viewed by the Customer.

Push Server – The service located at the endpoint URL which handles delivery of the Subscription Update to the User Agent.

Subscription – A Customer request to be updated. A Subscription contains an endpoint URL, and a set of keys that are to be used to encode the Push Message.

Subscription Provider – The subscription provider sends out Push Messages to User Agents to be read by Customers.

Subscription Update – The message sent by the Subscription Provider to the Push Server.

User Agent – The Customer’s browser, specifically, code internal to the browser which processes and routes Push Messages to your App.
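Concretely, the Subscription that the User Agent hands to your App (via PushSubscription.toJSON()) is a small JSON object; the values below are placeholders, not real keys:

```json
{
  "endpoint": "https://updates.push.services.mozilla.com/wpush/v1/gAAAA...",
  "keys": {
    "p256dh": "BASE64URL-ENCODED-P256-PUBLIC-KEY",
    "auth": "BASE64URL-ENCODED-AUTH-SECRET"
  }
}
```

The Subscription Provider sends each Subscription Update to the endpoint URL, and uses the keys to encrypt the Push Message so that only the Customer’s User Agent can decrypt it.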

Sending Subscription Updates

Sending a Push Message is a fairly simple operation, and thanks to modern libraries, easily done. It’s important to pay attention to the returned result. When a Subscription Provider sends a Subscription Update, the Push Service returns a response code. In most cases, the response code will be either 201 (Message Created) or 202 (Message Accepted). There are subtle differences between the two, but those differences are not important right now.

What is important is to know that the Push Server will return an HTTP error code along with a body that has extra information about what may have happened.

A possible 404 return message body might look like:
{
  "errno": 102,
  "message": "Request did not validate invalid token",
  "code": 404,
  "more_info": "",
  "error": "Not Found"
}

In this case, there was a problem with the URL. More than likely it was corrupted at some point. In any case, the URL is now invalid and should not be tried again. The Customer’s record can be safely removed from storage.

This is also true for 410 return codes. These are subscribers who no longer wish to receive your updates. A Customer may unsubscribe for any number of reasons, and you should respect that choice. You can always ask the Customer to resubscribe later.
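The drop-on-error logic above can be sketched in Python. This is illustrative only: the storage dict and customer id stand in for your real datastore, and the actual HTTP send (e.g. via a library such as pywebpush) is omitted.

```python
# Status codes after which the subscription should be dropped:
# 404 (invalid/corrupted endpoint) and 410 (Customer unsubscribed).
DROP_CODES = {404, 410}

def should_drop(status_code):
    """Return True if this subscription should be removed from storage."""
    return status_code in DROP_CODES

def handle_push_response(status_code, customer_id, storage):
    """Act on the Push Server's response for one Subscription Update."""
    if status_code in (201, 202):
        return "delivered"              # Message Created / Message Accepted
    if should_drop(status_code):
        storage.pop(customer_id, None)  # drop the Customer's record
        return "dropped"
    return "retry-later"                # e.g. 429/5xx: back off and retry

# Example: a 410 means the Customer unsubscribed, so the record goes away.
storage = {"HMV192av": {"endpoint": "https://example.invalid/push"}}
print(handle_push_response(410, "HMV192av", storage))  # dropped
print("HMV192av" in storage)                           # False
```

The key point is that 404 and 410 are terminal: retrying those endpoints only wastes messages and risks having your server throttled.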

The Demo App

As an example, I’ve created a very simple demonstration project that uses Python3. This project does require Python3 to take advantage of native async programming to speed up delivery and message handling. Follow the steps in the README file to get it started. You can then navigate to http://localhost:8200 to see the page.

The test page (located in the /page directory) is very simple and only starts the process of subscribing once the Customer has requested it. Clicking the one button on the page will automatically create a Subscription Request and offer some script snippets you can use to send messages to the App.

To see what happens when a user unsubscribes, disable the permissions using the page information pip:
An example of using the page information pip to unregister from WebPush messages

If you try to send a Subscription Update to that customer again, you will receive an error and should drop the subscription. An example error from pusher may look like:

Failed to send to HMV192av: No such subscription
For more info, see:
Dropping no longer valid user: HMV192av

In this case, the subscription for user HMV192av has been removed, and the record was dropped from the user database.

It’s important to only ask your Customers to subscribe once they understand what they’re subscribing to. A Customer who is asked to subscribe to WebPush notifications without being given a clear indication of what they’re being offered may click the “Always Block Notifications” option.
An example of a user blocking WebPush message subscription

When a user blocks notifications from your site, you may never get a chance to ask them again.

Following these simple guidelines will ensure that both you and your Customers are happy using the WebPush service.

Air Mozilla: IoT Meet-Up

We are holding a meet-up with speakers and audience in two locations – the London and Berlin Mozilla offices.

Dave Townsend: More new Firefox/Toolkit peers

A few things are happening which means there are a bunch of new Firefox/Toolkit peers to announce.

First, since I’m now the owner of both Firefox and Toolkit, I’ve decided it doesn’t make much sense to have separate lists of peers. Either I trust people to review or I don’t, so I’m merging the lists of peers for these modules. Practically, since Toolkit already included all of the Firefox peers, this just means that the following folks are now Firefox peers in addition to already being Toolkit peers:

  • Nathan Froyd
  • Axel Hecht
  • Mark Mentovai
  • Ted Mielczarek
  • Brian Nicholson
  • Neil Rashbrook
  • Gregory Szorc
  • David Teller

Second, we’re updating the list of suggested reviewers in Bugzilla for Toolkit. In doing so I found that we have a number of reviewers who weren’t already listed as peers, so these folks are now considered full Firefox/Toolkit peers:

  • Kit Cambridge
  • Tantek Çelik
  • Mike Hommey
  • Matt Howell
  • Mike Kaply
  • François Marier
  • Nicholas Nethercote
  • Gian-Carlo Pascutto
  • Olli Pettay
  • J Ryan Stinnett
  • Andrew Sutherland
  • Gabriele Svelto
  • Jan Varga
  • Jonathan Watt

All of these folks have been doing reviews for some time now so this is largely just book-keeping but welcome to the fold anyway!

Chris Lord: Machine Learning Speech Recognition

Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.

Project DeepSpeech

So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.
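The WER mentioned above is simply the word-level edit distance between the recognised text and the reference transcript, divided by the number of reference words. A minimal sketch (not the project's actual evaluation code):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

# One substitution ("sat" -> "sit") plus one deletion ("the"): 2 edits / 6 words.
print(wer("the cat sat on the mat", "the cat sit on mat"))  # 0.3333333333333333
```

A 10% WER target means roughly one word in ten is substituted, inserted, or deleted relative to the reference.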

You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.

The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.

Getting Involved

We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need one or more powerful GPUs with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.

One of the fairly intractable problems about machine-learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area where you can very easily help. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.

Let’s say you don’t have those resources (and very few do); what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as, unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirements of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days’ worth of training.
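To make the quantisation experiment concrete, here's a sketch of naive 8-bit affine (min/max) quantisation in plain Python. This only illustrates the general idea of trading precision for size; it is not the project's actual quantisation code, which lives inside TensorFlow:

```python
def quantise(weights):
    """Map float weights onto 8-bit integers using an affine (min/max) scheme."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # avoid division by zero for constant weights
    return [round((w - lo) / scale) for w in weights], scale, lo

def dequantise(q, scale, lo):
    """Recover approximate float weights from the 8-bit representation."""
    return [v * scale + lo for v in q]

weights = [-0.5, 0.0, 0.25, 1.0]
q, scale, lo = quantise(weights)
restored = dequantise(q, scale, lo)
# Each restored weight lies within half a quantisation step of the original,
# which is the kind of "consistent before and after" property worth testing.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

The model shrinks by roughly 4x (8-bit ints instead of 32-bit floats) at the cost of a bounded per-weight error, which is why checking output consistency on a tiny dataset like LDC93S1 is still a meaningful test.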

Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment: one written in Python that takes advantage of TensorFlow Serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.

And Finally

Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.

Air MozillaReps Weekly Meeting Feb. 23, 2017

Reps Weekly Meeting Feb. 23, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Chris H-CData Science is Hard: Anomalies and What to Do About Them

:mconley‘s been looking at tab spinners to try and mitigate their impact on user experience. That’s when he noticed something weird that happened last October on Firefox Developer Edition:


It’s a spike a full five orders of magnitude larger than submission volumes for a single build have ever been.

At first I thought it was users getting stuck on an old version. But then :frank noticed that the “by submission date” of that same graph didn’t tally with that hypothesis:


Submissions from Aurora (what the Firefox Developer Edition branch is called internally) 51 tailed off when Aurora 52 was released, in exactly the way we’ve come to expect. Aurora 52 had a jump mid-December when we switched to using “main” pings instead of “saved-session” pings to run our aggregates, but otherwise everything’s fine heading into the end of the year.

But then there’s Aurora 51 rising from the dead in late December. Some sort of weird re-adoption update problem? But where are all those users coming from? Or are they actually users? These graphs only plot ping volumes.

(Quick refresher: we get anonymous usage data from Firefox users via “pings”: packets of data that are sent at irregular intervals. A single user can send many pings per day, though more than 25 in a day is pretty darn chatty.)

At this point I filed a bug. It appeared as though, somehow, we were getting new users running Aurora 51 build 20161014.

:mhowell popped the build onto a Windows machine and confirmed that it was updating fine for him. Anyone running that build ought not to be running it for long as they’d update within a couple of hours.

At this point we’d squeezed as much information as the aggregated data could give us, so I wandered off closer to the source to get deeper answers.

First I double-checked that what we were seeing in aggregate was what the data actually showed. Our main_summary dataset confirmed what we were seeing was not some weird artefact… but it also showed that there was no increase in client numbers:


A quick flip of the query and I learned that a single “client” was sending tens of thousands of pings each and every day from a long-dead non-release build of Firefox Developer Edition.
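The shape of that query is simple enough to sketch. Assuming (hypothetically) the pings are available as a list of (client_id, date) records, finding the chattiest client-day looks roughly like this; the real analysis ran against the main_summary dataset, not Python lists:

```python
from collections import Counter

def chattiest_client(pings):
    """Count pings per (client_id, date) pair and return the largest bucket."""
    counts = Counter((client_id, date) for client_id, date in pings)
    (client_id, date), n = counts.most_common(1)[0]
    return client_id, date, n

# Hypothetical ping records: most clients send a handful of pings a day,
# but one client floods the pipeline.
pings = [("a", "2017-02-17"), ("b", "2017-02-17")] + \
        [("chatty", "2017-02-17")] * 1000
client_id, date, n = chattiest_client(pings)
# → ("chatty", "2017-02-17", 1000)
```

Anything sitting orders of magnitude above the "25 pings a day is chatty" rule of thumb stands out immediately in output like this.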

A “client” in this case is identified by “client_id”, a unique identifier that lives in a Firefox profile. Generally we take a single “client” to roughly equal a single “user”, but this isn’t always the case. Sometimes a single user may have multiple profiles (one at work, one at home, for instance). Sometimes multiple users may have the same profile (an enterprise may provision a specific Firefox profile to every terminal).

It seemed likely we were in the second case: one profile, many Firefox installs.

But could we be sure? What could we learn about the “client” sending us this unexpectedly-large amount of data?

So I took a look.

First, a sense of scale:

This single client began sending a few pings around November 15, 2016. This makes sense, as Aurora 51 was still the latest version at that time. Things didn’t ramp up until December when we started seeing over ten thousand pings per day. After a lull during Christmas it settled into what appeared to be some light growth with a large peak on Feb 17 reaching one hundred thousand pings on just that day.

This is kinda weird. If we assume some rule-of-thumb of say, two pings per user per day, then we’re talking fifty thousand users running this ancient version of Aurora. What are they doing with it?

Well, we deliberately don’t record too much information about what our users do with their browsers. We don’t know what URLs are being visited, what credentials they’re using, or whether they prefer one hundred duck-sized horses or one horse-sized duck.

But we do know how long the browser session lasted (from Firefox start to Firefox shutdown), so let’s take a look at that:

Woah. Over half of the sessions reported by the pings were exactly 215 seconds long. Three minutes and 35 seconds.

It gets weirder. It turns out that these Aurora 51 builds are all running on the same Operating System (Windows XP, about which I’ve blogged before), all have the same addon installed (Random Agent Spoofer, though about 10% also have Alexa Traffic Rank), none have Aurora 51 set to be their default browser, none have application updates enabled, and they come from 418 different geographical locations according to the IP address of the submitters (top 10 locations include 5 in the US, 2 in France, 2 in Britain, and one in Germany).

This is where I would like to report the flash of insight that had me jumping out of the bath shouting Eureka.

But I don’t have one.

Everyone mentioned here and some others besides have thrown their heads at this mystery and can’t come up with anything suitably satisfying. Is it a Windows XP VM that is distributed to help developers test their websites? Is it an embedded browser in some popular piece of software with broad geographic appeal? Is someone just spoofing us by setting their client ids the same? If so, how did they spoof their session lengths?

To me the two-minute-and-thirty-five-second length of sessions just screams that this is some sort of automated process. I’m worried that Firefox might have been packaged into some sort of botnet-type-thingy that has gone out and infected thousands of hosts and is using our robust network stack to… to do what?

And then there’s the problem of what to do about it.

On one hand, this is data from Firefox. It arrived properly-formed, and no one’s trying to attack us with it, so we have no need to stop it entering our data pipeline for processing.

On the other hand, this data is making the Aurora tab spinner graph look wonky for :mconley, and might cause other mischief down the line.

It leads us to question whether we care about data that’s been sent to us by automated processes… and whether we could identify such data if we didn’t.

For now we’re going to block this particular client_id’s data from entering the aggregate dataset. The aggregate dataset is used to display interesting stuff about Firefox users. Human users. So we’re okay with blocking it.

But all Firefox users submit data that might be useful to us, so what we’re not going to do is block this problematic client’s data from entering the pipeline. We’ll continue to collect and collate it in the hopes that it can reveal to us some way to improve Firefox or data collection in the future.

And that’s sadly where we’re at with this: an unsolved mystery, some unanswered questions about the value of automated data, and an unsatisfied sense of curiosity.


Mozilla Reps CommunityRep of the Month – January 2017

Please join us in congratulating Petras, Rep of the Month for January 2017!

Petras is an IT specialist and entrepreneur who started building up a company and developing his project-management skills at an extremely young age. Nowadays he has a track record of working with, or even heading, a number of world-class IT projects and companies.


During the last few months he organized and participated in many events, giving talks to thousands of people on various topics, including privacy and advocacy and the EU Copyright Reform. He adds his PM knowledge to the program by building up proposals for different projects. He has managed to mobilize our community of volunteers, showing great leadership and communication skills.

Congrats Petras! Join us in congratulating him on Discourse.

About:CommunityOpen and agile?

Facilitating openness and including volunteers in agile sprints

This post is based on a discussion at the Mozilla All-Hands event in December 2016. Participants in that discussion included Janet Swisher, Chris Mills, Noah Y, Alex Gibson, Jeremie Patonnier, Elio Qoshi, Sebastian Zartner, Eric Shepherd, Julien G, Xie Wensheng, and others.

For most of 2016, the Mozilla marketing organization has been moving towards working agile — much of our work now happens within “sprints” lasting around 2.5 weeks, resulting in better prioritization and flexibility, among other things. (For those familiar with agile software development, agile marketing applies similar principles and workflows to marketing. Read the Agile Marketing Manifesto for more background.)

However, one aspect of our work that has suffered in this new paradigm is openness and inclusion of our volunteer community in marketing activities. While the agile workflow encourages accountability for full-time team members, it leaves very little space for part-time or occasional participation by volunteers. We are Mozilla, and transparency, openness, and participation are part of our core values. This disconnect is a problem.

This post explores the issues we’ve found, and shares some potential solutions that we could work towards implementing.

The problem

Since moving to an agile, “durable team” model, the amount of interaction we have had with our volunteer community has decreased. This is written from the perspective of the Mozilla Developer Network (MDN) team, but we are sure that other teams have experienced similar negative trends.

The main reasons for this decrease are:

  • The tools used to track work are not open, so volunteers cannot access them.
  • The agile process in general does not lend itself well to volunteer discovery or contribution: volunteers’ work is deprioritized, they feel ignored, and they find it harder to participate.
  • There is very little information provided for volunteers who want to participate in a project managed in an agile fashion.

The challenge we face is how to make the agile process more transparent, so that volunteers can understand what is happening and how it happens; more open so that volunteers can contribute in meaningful ways; and more welcoming so that volunteers feel they are part of a team and are appreciated and recognized for their efforts.


Let’s look at the solutions that were discussed. These details are fairly brief, but they act as good starting points for further investigation.

Openness of tools

We need to review the tools we are using, and make sure that they are open for volunteers to get information and updates. For example:

  • Slack is hard in terms of giving people the right permissions.
  • Taiga is closed off.

We need to work out better tools/workflows for volunteer contributions.


An explainer doc covering the agile process would be useful, including:

  • How the process works in general, and an explanation of key terms like “durable team”, “functional team”, “cross-functional”, etc.
  • What the different teams are and what they are responsible for.
  • How to find more information on what you can do for each team at the current time and in the future.
  • How you can contact each team and get updates on their work.

There is some information that already exists — an explanation of MDN’s agile process, and a glossary of agile terminology.

When questions are asked, it would be good to make sure that we not only answer them on a public list, but also document the answers to make them easy to find for others.

When decisions are made, they need to be clearly communicated on the public list for the agile team it relates to.

Including volunteers

To participate in an agile team, volunteers need:

  • Access to tools and information, so they can do work and communicate with agile teams.
  • Access to lists of “low hanging fruit” tasks related to the team’s work that they can pick up and work on.
  • A chance for inclusion in the conversation, whether synchronously or asynchronously.

To achieve this, we need to:

  1. Review our tools, and make sure they can be made open to volunteers.
  2. Make information available, as described in the previous section.
  3. Make lists of tasks that volunteers can do in each agile team easily available. To achieve this, one suggestion is to share Epics with volunteers (don’t overwhelm them upfront with too much detail), but make lists of tasks available on the epic pages, along with who to contact about each task.
  4. Make it clear how you can communicate with agile teams — a clear mailing list and IRC/slack channel would be the minimum.
  5. Keep an open dialog going with our community:
    • Open up planning meetings to allow volunteers to take part in the planning.
    • Say what we are working on next before the sprint starts and what work volunteers can do.
    • Say how things are going during the sprint, and what help we need at each stage.
    • Report on how things went at the end of the sprint.
  6. Make meetings (standups, mid-sprint review, post mortem, etc.) more useful.
    • Include an IRC channel running at the same time, for further communication and to provide notes that people can read afterwards.
    • Allow people to talk about other subjects in meetings, to make them more human.
    • Sometimes conversation can be monotonous (e.g. “I’m still working on the same task” for 5 days running). If this is the case, use the time more usefully.
    • Use the meetings as an opportunity to update volunteer asks; if we find we need more help than we initially thought, do a shout out to the community.
    • Consider office hours for agile teams (whether in Vidyo, IRC, etc.), so team members and volunteers alike can drop in and talk.
    • Record video meetings so others can benefit from them asynchronously.

We ought to fit communication with the community/volunteers into each agile meeting, as a recurring agenda item.

Further points

  1. Make sure there is always someone present to help volunteers with questions they have, all the way through a sprint. Keep communication public; don’t let conversations happen in private, then risk the team member going on holiday or something.
  2. We should relax a little on the agile principles/processes — use them where they are useful, but don’t stick to the letter of the law for the sake of it, when it isn’t helpful. As an example, the MDN durable team has gone from everyone having a standup every day, to two standups per week for the writers, and two standups per week for the developers. We found anything more to be too high-frequency, and a waste of time.
  3. One situation that might make using open tools difficult is if any of the work is sensitive, e.g., discusses security issues. We could get around this by dealing with such issues inside private bugs on Bugzilla.

Next steps

We want to get feedback on this, and hear about other people’s pain points and solutions or insights for working open inside agile teams. If you have any other thoughts about how the process could be improved, please do share!

After that, we want to distill this information down into a set of documentation to help everyone in marketing (Mozilla or otherwise) to be open while working in an agile fashion.

Andrew HalberstadtA Course of Action for Replacing Try Syntax

I've previously blogged about why I believe try syntax is an antiquated development process that should be replaced with something more modern and flexible. What follows is a series of ideas that I'm trying to convert into a concrete plan of action to bring this about. This is not an Intent to Implement or anything like that, but my hope is that this outline is detailed enough that it could be used as a solid starting point by someone with enough time and motivation to work on it.

This plan of action will operate on the rather large assumption that all tasks are scheduled with taskcluster (either natively or over BuildbotBridge). I also want to be clear that I'm not talking about removing try syntax completely. I simply think it should be parsed client side, before any changes get pushed to try.

Brief Overview of How Try Syntax Currently Works in Taskcluster

In order to understand where we're going, I think it's important to be aware of where we're coming from. This is a high level explanation of how a try syntax string currently gets turned into running tasks:

  1. A developer pushes a commit to try with a 'try:' token somewhere in the commit message.
  2. The pushlog mercurial extension picks this up on the server, and publishes a JSON stream.
  3. After getting triggered via a pulse message, the mozilla-taskcluster module queries this URL and pulls in the relevant push.
  4. Then, mozilla-taskcluster grabs the last commit in the push, extracts the try syntax from the description, and uses it to create a templating variable.
  5. This template variable is substituted into the decision task's configuration, and ultimately ends up getting passed into mach taskgraph decision with the --message parameter.
  6. The decision task kicks off the taskgraph generation process. When it comes time to optimize, the try syntax is finally passed into the TryOptionSyntax parser, which filters out tasks that don't match any of the try options.
  7. The optimized task graph is then submitted to taskcluster, and the relevant tasks start running on try.

An Improved Data Transport

A key thing to realize is that the decision task runs from within a mozilla-central clone. In other words, the try syntax string starts in version control, gets published to a webserver, gets downloaded by a node module, and gets substituted into a task configuration, only to be passed into a process that had full access to the original version control all along. Steps 2-5 in the previous section could be replaced with:

  • The decision task extracts the try syntax from the appropriate commit message.

If we stopped there, this change wouldn't be worth making. It might make some code a bit cleaner, but would hardly make things faster or more efficient since mozilla-taskcluster would still need to query the pushlog either way. But this method has another, more important benefit: it gives the decision task access to the entire commit instead of limiting it to whatever the pushlog extension decides to publish.

That means there would be no particular reason we'd need to store try syntax strings in the commit message at all. We could instead stuff it into the commit as arbitrary metadata using the commit's extra field. To get this working, we could use the push-to-try extension to stuff the try syntax into the extra field like this. Then the decision task could extract that syntax out of the commit metadata like this:

$ hg log -r $GECKO_HEAD_REV -T "{extras}"
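The `{extras}` template renders the commit's extra fields as `key=value` pairs. A decision task could then pull the try configuration out of that output with something like the sketch below; the `try_task_config` key name is my own invention for illustration, as is the exact one-pair-per-line output shape:

```python
def parse_extras(output: str) -> dict:
    """Parse `key=value` lines (as printed by hg's {extras} template) into a dict."""
    extras = {}
    for line in output.splitlines():
        if "=" in line:
            # Split on the first '=' only; values may contain spaces or dashes.
            key, _, value = line.partition("=")
            extras[key] = value
    return extras

# Hypothetical output of the hg command above.
output = "branch=default\ntry_task_config=-b o -p linux64 -u mochitest-1"
extras = parse_extras(output)
syntax = extras.get("try_task_config", "")
```

From there the decision task has the full try request in hand without any round trip through the pushlog or mozilla-taskcluster.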

An Improved Data Format

Again, these changes mostly amount to a refactoring and wouldn't be worth making just for the sake of it. But once we are using arbitrary commit metadata to pass information to the decision task, there's no reason for us to limit ourselves to a single line syntax string. We could use data structures of arbitrary complexity.

One possibility (which I'll run with for the rest of the post), is simply to use a list of taskcluster task labels as the data format. This has several advantages:

  • It's unambiguous (what is passed in, is what will be scheduled)
  • It's an easy target for tools to generate
  • It provides flexibility in how we could potentially interact with try (via said tools)

The last two points are pretty big: have you ever attempted to write a tool that converts inputs into try syntax? It's very hard, and involves lots of hard-coding in the tool and memorization on the part of the users.

What we've done to this point is transform the data transport from a human-friendly format to a machine-friendly format on top of which human-friendly tools can be built. Probably the first tool that will need to be built will be a legacy try syntax specifier, for those of us who enjoy writing out try syntax strings. But that's not very interesting. There are probably a hundred different ways we could dream of specifying tasks, but because my imagination is limited, I'll just talk about one potential idea.

Fuzzy Finding Tasks

I've recently discovered and become a huge fan of fuzzyfinder. The fuzzyfinder project consists of two parts:

  1. A binary called fzf
  2. A vast multitude of shell and editor integrations that utilize fzf

The integrations allow you to quickly find things like file paths, processes and shell history (both in the terminal and within an editor) with an intelligent approximate-matching algorithm at blazing speeds. While the integrations are insanely useful, it's the binary itself that would come in handy here.

The fzf binary is actually quite simple. It receives a list of strings through stdin, allows the user to select one or more of them using the fuzzy-finding algorithm and a text-based GUI, then prints all selected strings to stdout. The input is completely arbitrary; for example, I could fuzzy-select running processes with:

$ ps -ef | fzf

Or lines in a file:

$ cat foo.txt | fzf

Or the numbers 1-100:

$ seq 100 | fzf

You get the idea. The other day I was thinking, what if we could pipe a list of every single task, expanded over both chunks and platforms, into fzf? How useful would that be? Luckily, a list of all taskcluster tasks can be generated with a mach command, so it was easy to test this out:

$ mach taskgraph tasks -p artifacts/parameters.yml -q > all-tasks.json
$ cat all-tasks.json | fzf -m

The parameters.yml file can be downloaded from any decision task on Treeherder. I piped the output into a file because the mach taskgraph command takes a bit of time to complete; it's not a penalty we'd want to incur on subsequent runs unless necessary. The -m tells fzf to allow multi-selection.

The results were wonderful. But rather than try to describe how awesome the new (potential) try choosing experience was, I created a demo. In this scenario, pretend I want to select all linux32 opt mochitest-chrome tasks:

(Video demo of selecting the tasks with fzf.)

Now instead of printing the task labels to stdout, imagine if this theoretical try chooser stuffed that output into the commit's metadata. This is the last piece of the puzzle in what I believe is a comprehensive outline of a viable replacement for try syntax.

No Plan Survives Breakfast

As Mike Conley is fond of saying, no plan survives breakfast. I'm sure my outline here is full of holes that will need to be patched, but I think (hope) that at least the overall direction is solid. While I'd love to work on this, I won't have the time or mandate to do so until later in the year. With this post I hope to accomplish three things:

  1. Serve as a brain dump so when (or if) I do get back to it, I'll remember everything
  2. Motivate others to push in this direction in the meantime (or better yet, implement the whole thing!)
  3. Provide an excuse to plug fuzzyfinder. It's been months and using it still makes me giddy. Seriously, give it a try, you'll be glad you did!

Let me know if you have any feedback, and especially if you have any other crazy ideas for selecting tasks on try!

Jorge VillalobosShutting down this blog

I created this site almost 8 years ago, even before joining Mozilla, as an outlet for my add-on development projects. My path has taken quite a few turns since then, so I don’t have many add-ons to speak of, but I’ve ended up being partly in charge of some very fundamental decisions for the future of add-ons, so I think I’ve done okay.

Lately I haven’t been good about keeping this blog and website up to date, and I don’t see much need in having them around. The domain name is now hilariously outdated, and most content here is either outdated or will be very soon. Thanks a lot, me! Also, at this point the Add-ons Blog has become my main blog, and I don’t have much to share beyond that and the occasional tweet.

Anyway, this site had a good run, and I’m happy and optimistic about the future that is coming to replace it.

See ya!

Bas SchoutenWhy don't you like my facts?

Disclaimer: The opinions I will be expressing will be solely my own, they will in no way be related to Mozilla or the work I do there.

This is the second part of a series of posts I will be doing on society's current issues. You can find my previous post here: When silence just isn't an option

So, I've heard a lot of people say we now live in a 'post-fact' society. I've found this a very interesting topic recently: information is so important when making decisions that if 'facts' are no longer important, it seems like that might be a bit of a problem. I will attempt to share my thoughts here on what the deal is with facts, what they are, and how we can go (back) to being a 'factual' society.

It should be pretty simple to establish facts, right?

Let me start with a little about what makes a fact. Whenever you talk about something like this, anyone can come up with quasi-philosophical arguments about how there are no facts. Maybe they're right: our eyes can easily be tricked, maybe the moon isn't there when I'm not looking at it, maybe the sun does have an army of invisible tiny creatures orbiting it, and maybe gravity will suddenly disappear tomorrow. But that is not how we generally live our lives. In our day-to-day lives we take the things for which we have directly observable evidence and say they're facts. We do this for observations (the wall is white, the book is made of paper) and predictions (when I press the light switch the light will turn on, when I heat up this chicken it will be cooked). Those types of deductions based on our senses, and inductions based on past experiences, are essential to being able to live our lives, and I really don't want to get into those types of 'fact or no fact' arguments.

Now, next to facts, we have beliefs. Often a belief is something we have been told many times during our lives, something that we would like to be true, something that makes us feel justified or better about ourselves, or something we were told by someone we believe in, and we can experience beliefs with various degrees of certainty. As humans we act upon our beliefs as much as we act upon facts. You might believe in a deity, you might believe everyone in the world should be kind to one another, or you might be an existential nihilist who believes there's no point to life either way. The point is that we all have a large collection of beliefs, and the vast majority of us are naturally uncomfortable changing them. To avoid having to change them, we subject apparent evidence in favor of those beliefs to far less scrutiny before we admit it into our lives, and we reject apparent evidence against those beliefs much more easily than we would reject evidence confirming them.

But the line between the two can become somewhat blurry. Essentially we have a spectrum: anything we can say with near certainty is a fact; in the large range between the things we are 'certain' are true and the things we are 'certain' are not, we have beliefs that we do or do not ascribe to; and finally there are things we are certain are not true. When an individual holds a belief with an extremely high degree of certainty, to them that belief is as much a fact as anything else most of us might agree is a fact. It would take a lot of observable evidence to dissuade them from that idea. If you've lived your life believing the universe revolves around the Earth, the notion of that not being true would be extremely hard to accept. Superficial observation does seem to confirm it, and after all it does seem to be the most likely explanation. It will take a lot of convincing for you to understand that the Earth, in fact, rotates around the Sun.

Let's stick with that example for a second. In the modern world, for some of us, the Earth orbiting the Sun is a fact: we have looked through a telescope, traced the motions of the sun, the stars and the planets in our sky, and observed directly that the universe revolving around the Earth cannot explain our observations. We have subsequently continued to make observations which show the only possible explanation (without descending into absurdity) to be the Earth revolving around the Sun. The people who have made these observations can then produce a set of predictions and descriptions of observations, and as experts, through the system of education, share these with as many people as possible. If we're in a position where people consider us credible, they will acknowledge our observations and predictions; they won't try to reproduce them themselves, but in the absence of a credible source to the contrary they will believe us. As such, most of the western world holds a strong belief in heliocentrism, strong enough for most to accept this belief as fact.

You end up trusting individuals or institutions to tell you what is or isn't fact. That is essentially a good thing, since none of us would have time in our lives to observe all the evidence for all the things we are told are facts and that are relevant to our modern-day lives. Belief in its various forms therefore plays an extremely important and often positive role in our lives. But there is always a danger in believing in ideas, people or institutions. When rapper B.o.B. sent out a number of tweets supposedly proving the Earth was flat [1], the vast majority of people disagreed. They pointed out that although some of his own predictions were right, his observations were lacking. However, some people saw their own beliefs confirmed in his observations and rushed to join him; there is actually a society that collects evidence for a flat Earth [2], partially grounded in geocentric religious beliefs.

So what's the point? The point is that in the case of any fact for which we do not take the time to observe the evidence ourselves (of which there will always be a lot), whether we accept it as a fact or not depends on the strength of our belief in the source that has collected the evidence for that fact.

Now how do we get to the facts?

Scientists and engineers probably have an above-average ability and tendency to look at claims from different angles and seek unbiased evidence for those claims. But it's important to remember that even for them it turns out to be extremely difficult to tune out beliefs and biases, and there exists a long history in science of research focused on confirming views for which there really wasn't much evidence in the first place. Often, as with ordinary human beliefs, that path involved taking at face value the evidence we liked, and making extensive attempts to debunk the evidence we didn't.

This continues to the present day, and we all do it! After all, how many liberals have really investigated the claims about the crowd size at the inauguration of Donald Trump? Didn't most of them see the picture on the news and think, 'Well, there's all we need to know!', even though that picture would not have been particularly hard to manipulate? (SPOILER: the news gave a reasonably realistic idea of the crowd size difference, but don't take my word for it! [3][4]) How many really went to look at research about crime in Sweden, rather than read the article that said 'nah, it isn't true' and think, 'Ah, I'm sure CNN did their homework'? (SPOILER: they did [5].) We can argue that once you've done this research, certain media outlets turn out to be more reliable than others, but most of us don't actually do that research. Our environment has told us certain media outlets are reliable, and we take what they say at face value. That isn't so different from what people do when they put their faith in media outlets that really do manipulate facts on a large scale, so it would be silly to think you can just point people at the media outlet you believe to be factual and expect them to accept its information as such.

So how do we convince people the Earth orbits the Sun?

Here is where I believe the internet is actually helping us, and should make the job of autocratic dictators and media outlets with strong political agendas a lot harder. Unlike as recently as two decades ago, everyone with an internet connection now has direct access to a wealth of open-source information: images, tweets and many other types of data, straight from their sources. A single source might be unreliable, but by combining a wide array of sources, from Facebook, Instagram and Twitter to public satellite imagery, several parties (one of the better known being Bellingcat [6]) have shown that it's possible to collect information in such a way that stories from media outlets and governments can be corroborated (or debunked!) independently, citing sources that anyone can verify directly from their articles. This wealth of data has made it far easier for anyone to verify the reliability of the media outlets they consume.

In the end, the issue of how much someone can change your beliefs is all about trust. If one thing is almost certainly true, it's that trust in traditional media outlets is at a historic low point [7]. A source of optimism, however, is that research indicates that citations, expert opinions and the like do matter. They increase the amount of trust people place in a source of information, and, perhaps more importantly, once people discover that a source of information gets its facts wrong, their trust in that source decreases [8]. I believe there's a lesson in there for those of us, be it journalists or scientists, attempting to convey information to the masses, or even those of us simply trying to convey a 'fact' over dinner. Don't simply tell people they're wrong; don't simply tell people you have facts that are better than theirs and try to get away with one questionable image to back up your story.

When you want to convince people of your views, dig deep, look for a myriad of different sources that show evidence for your point of view (the 'fact' you are trying to convey, if you will), and never forget to really look for evidence to the contrary. In the worst case you'll discover that you were wrong and have to give up a belief, which, although uncomfortable, is worthwhile. In the best case you will be much better prepared when someone confronts you with conflicting ideas. Whenever possible, communicate with sources that are accessible and understandable to whomever you are trying to reach with your story. Never argue from authority; information has no authorities, only experts. But most importantly, never be deceptive when trying to convince someone of your point of view. There is no way you will instantly change someone's beliefs, but a consistent, steady stream of truthful information can erode them.

Nicolaus Copernicus' ideas went against everything people believed in, and the governments and the church of the day did everything in their power to give people 'alternative facts'. Copernicus made a wealth of observations and predictions, consistently showing the superiority of his ideas [9]. Galileo, who later championed the heliocentric model, would die under house arrest, and it would take until 1758 for the Catholic Church to allow books describing the heliocentric model. Governments and other institutions holding beliefs that hold back the flow of facts is nothing new in this world, but in the end no amount of misinformation can stand the test of time when confronted with real facts.

And now we have the internet.

I realize social media provides an interesting perspective on the spread of information; I will treat it in a separate article.


Mozilla Open Design BlogMaking a Conscious Choice

Rare as it is to update a brand identity in the open and invite everyone to participate, it’s even more uncommon for an organization to share a core part of its marketing strategy. In the corporate world, information about the target audience for a brand, for instance, is often a closely held secret. Here, Mozilla’s head of audience insights, Venetia Tay, pulls back the curtain to reveal the global audience we’ve identified for Mozilla. Following our open approach, she invites you to play a part in our audience research as well.


“I am not required to make the world a better place.”  -Reality Bites 1994

“I can’t keep quiet.”  -MILCK 2017


The 1990s and 2000s were framed by a clear sense of ambivalence, self-interested technological growth, wanton materialism and projected idealism. We’ve entered a new world – a world with a tangible surge in people who approach their affiliations and consumer decisions carefully and with a great deal of thought; a new world of people who care. Whether the focus is environmental issues, civil rights, or sustainable sourcing, individuals’ commitment to clear values and beliefs is driving their behavior, and they are holding brands, companies and organizations to the same set of standards.


From recent boycott campaigns such as #DeleteUber, #BoycottBudweiser and #NoNetflix to the success of Small Business Saturday, Everlane, Seventh Generation, Mary Kay and Ben & Jerry’s, we’re witnessing a trend in human behavior that aligns day-to-day decision making with consumers’ moral compasses. This group of consumers is willing to invest research time, manage inconveniences and pay a premium to support the organizations that align with their personal values, and even to reject brands whose values do not align. In a 2015 Nielsen global survey, 66% of respondents said they’re willing to pay more for products and services from companies committed to positive social and environmental impact. They’re making a conscious choice to put their money, their influence and their actions where their heart is. As a result of technology, these Conscious Choosers can not only vote with their purchases but also hold brands responsible on social media – this public megaphone increases consumers’ power and forces companies to be more socially accountable.


At Mozilla, we recognize that this cohort of Conscious Choosers are critical participants in our movement for Internet health. We believe that the Internet is a global public resource and we’re committed to keeping it open and accessible to all.  We need help in this work, the way World Wildlife Fund needed help 30 years ago; we need the public to understand that the Internet is at risk and worth fighting for.




This idea of the “Conscious Chooser” consumer started with a hypothesis. We believed there was a cohort of people who exhibit signs of “everyday activism” in their offline lives. They vote, recycle, read labels, are involved in their communities, and can see a connection between their individual actions and the overall “health” of their community. However, we also believed that these same people may not yet be applying those values and behaviors to their online lives.



Our initial research in Germany, Brazil and the US reinforced our hypotheses. The notion of “conscious choosing” emerged as a common behavior in 20-30% of the Internet users we surveyed. We found that Conscious Choosers are hungry for information and details, and are willing to invest time to make decisions that align with their values. They believe in the power of the collective and that their influence can go beyond their own personal buying power to impact others’ decisions and the world around them. We also met people who struggle to connect their offline lives with their online lives but are open and eager to find ways to do so. As we suspected, Conscious Choosers see the Internet as a platform for their connections and causes, but find it hard to translate their values into practice online because of the Internet’s intangible nature.


An internal research study shows that amongst respondents who are familiar with the work Mozilla does to support the Internet – including web literacy, policy and advocacy programs, and technology innovations – 69% are engaged users and supporters of our Firefox browser (compared to only 33% of respondents who have no knowledge of our activities). In other words, the values and behaviors of a public-interest “parent” company are a differentiating factor for the product brand. Based on this insight, we also believe that Conscious Choosers are a growth market for our Internet Health movement, as well as for products from Mozilla, like our Firefox browser.



We recognize that our consumers exist in an imperfect environment—where there is asymmetry of information, financial and social constraints, and inconsistencies in human behavior, both offline and online. We are curious to explore why, to understand the reasons for those inconsistencies and to define their thresholds. We also recognize that “personal values” can vary from country to country and culture to culture, as seen in research from the World Values Survey. We need to explore further to understand which values are important in different countries and cultures.

At the heart of it, we believe that Mozilla’s target consumer audience is community-minded and driven by personal values to take action, and that these values influence the products and services they use, the purchases they make and the organizations they support. And we believe that this group is not unique to Mozilla, but represents a growing segment of consumers, users, and advocates that other organizations will be interested in learning more about, especially as savvy consumers increasingly expect brands to take a political stance.

In the upcoming months, we will be embarking on further research – through quantitative studies, ethnographic studies and participatory sessions across the globe – first in the US, Germany and Taiwan, and then expanding to other geographies over time. Staying true to our open-source roots, we will make our process, insights and findings open and available to all.


Please help!

We want to invite individuals, communities and organizations to participate through a variety of activities. For example, you can support us by being a fixer or a guide in the field, recommending people we should talk to, letting us interview you and run participatory exercises with you, or attending an interactive workshop to add to the wealth of knowledge. To help us get off the ground, we would love to connect with you, so send us an email if:

  • You are a researcher and would like to get involved
  • You are an advocate for Mozilla and want to support our discovery process
  • You are actively making choices and supporting causes, but technology might not be your thing
  • You are skeptical but would like to see a non-profit, independent corporation succeed
  • You are rooting for Mozilla because you think a healthy, widely used web matters


If you’d like to engage, have questions about our research approach, or would like to receive updates, please email


Watch this space to learn more about where all of the resources will be made available.


The post Making a Conscious Choice appeared first on Mozilla Open Design.

The Mozilla BlogSnoozeTabs and Pulse: New Experiments Coming to Firefox Test Pilot

Since launching the new Firefox Test Pilot program in May 2016, we’ve debuted several experiments with the goal of finding browser features that users love and incorporating them into future versions of Firefox. Today, we’re continuing our efforts toward creating a more modern and better performing Firefox with two new Test Pilot experiments.


We’ve all been in that situation where your friend sends you a link to an interesting and lengthy article, but you’re too busy at the moment to take the time to read it. With SnoozeTabs, you can dismiss this article’s tab and set a time for when you want the tab to reappear. SnoozeTabs helps reduce clutter on your screen and in your bookmarks so you can focus on what matters right now.

Test Pilot SnoozeTabs

Snoozing a tab is simple. Just click the snooze icon in the top right corner and select from the dropdown menu when you’d like to be reminded. From here, you can also manage your snoozed tabs.

Test Pilot SnoozeTab #2

When the snooze ends, the old tab will reappear with the snooze icon and alert you that the page is back. Sweet!


Pulse is a way for you to instantly send Firefox engineers your opinion on which sites work well in Firefox and which sites don’t. Just click on the pulse button in the bookmark bar and rate a site’s performance with one through five stars. By telling us how Firefox performed on a wide variety of sites, you will help us understand how Firefox is performing in general and also help our engineers understand where to focus their efforts to improve Firefox browser performance.

Test Pilot Pulse

Click on the new Pulse icon in the URL bar and you’ll be prompted to rate the website and answer a few questions – and help us make Firefox faster while you browse.

How to get started:

These Test Pilot experiments are available in English only. To activate Test Pilot and help us build the future of Firefox, visit

If you’ve experimented with Test Pilot features before, you know that you might run into some bugs or lose some of the polish in Firefox, but you can easily enable or disable features at any time.

Your feedback on these and the other Test Pilot experiments will help us determine what ultimately ends up in Firefox, so let us know what you think!

Air MozillaThe Joy of Coding - Episode 92

The Joy of Coding - Episode 92 mconley livehacks on real Firefox bugs while thinking aloud.

Air MozillaInclusive Design: The Intersection of Product and Behavior (Panel Discussion)

Inclusive Design: The Intersection of Product and Behavior (Panel Discussion) Mozilla builds products and platforms directly for developers, communities and publishers worldwide. How can we create and sustain experiences that are open, accessible and participatory?...

fantasaiWashing Dishes Clean

An explanation of what it means to wash dishes clean, and how to do it.

JavaScript at MozillaECMAScript 2016+ in Firefox

After the release of ECMAScript 2015, a.k.a. ES6, the ECMAScript Language Specification is evolving rapidly: it’s getting many new features that will help in developing web applications, with a new release planned every year.

Last week, Firefox Nightly 54 reached 100% on the Kangax ECMAScript 2016+ compatibility table that currently covers ECMAScript 2016 and the ECMAScript 2017 draft.

Here are some highlights for those spec updates.

Kangax ECMAScript 2016+ compatibility table

ECMAScript 2016

ECMAScript 2016 is the latest stable edition of the ECMAScript specification. ECMAScript 2016 introduces two new features, the Exponentiation Operator and Array.prototype.includes, and also contains various minor changes.

New Features

Exponentiation Operator

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

The exponentiation operator (**) allows infix notation of exponentiation.
It’s a shorter and simpler replacement for Math.pow. The operator is right-associative.

// without Exponentiation Operator
console.log(Math.pow(2, 3));             // 8
console.log(Math.pow(2, Math.pow(3, 2))); // 512

// with Exponentiation Operator
console.log(2 ** 3);                     // 8
console.log(2 ** 3 ** 2);                // 512

Array.prototype.includes

Status: Available from Firefox 43.

Array.prototype.includes is an intuitive way to check the existence of an element in an array, replacing the array.indexOf(element) !== -1 idiom.

let supportedTypes = ["text/html", "text/plain", "image/jpeg"];

// without Array.prototype.includes.
console.log(supportedTypes.indexOf("text/html") !== -1); // true
console.log(supportedTypes.indexOf("image/png") !== -1); // false

// with Array.prototype.includes.
console.log(supportedTypes.includes("text/html")); // true
console.log(supportedTypes.includes("image/png")); // false

Miscellaneous Changes

Generators can’t be constructed

Status: Available from Firefox 43.

Calling a generator with new now throws.

function* g() {
}

new g(); // throws
Iterator for yield* can catch throw()

Status: Available from Firefox 27.

When Generator.prototype.throw is called on a generator while it’s executing yield*, the operand of the yield* can catch the exception and return to normal completion.

function* inner() {
  try {
    yield 10;
    yield 11;
  } catch (e) {
    yield 20;
  }
}

function* outer() {
  yield* inner();
}

let g = outer();
console.log(;     // 10
console.log(g.throw().value); // 20, instead of throwing
Function with non-simple parameters can’t have “use strict”

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

When a function has non-simple parameters (destructuring parameters, default parameters, or rest parameters), the function can’t have the "use strict" directive in its body.

function assertEq(a, b, message="") {
  "use strict"; // error

  // ...
}

However, functions with non-simple parameters can appear in code that’s already strict mode.

"use strict";
function assertEq(a, b, message="") {
  // ...
Nested RestElement in Destructuring

Status: Available from Firefox 47.

Now a rest pattern in destructuring can be an arbitrary pattern, and also be nested.

let [a, ...[b, ...c]] = [1, 2, 3, 4, 5];

console.log(a); // 1
console.log(b); // 2
console.log(c); // [3, 4, 5]

let [x, y, ...{length}] = "abcdef";

console.log(x);      // "a"
console.log(y);      // "b"
console.log(length); // 4
Remove [[Enumerate]]

Status: Removed in Firefox 47.

The enumerate trap of the Proxy handler has been removed.
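The practical effect can be sketched as follows (my own minimal example, not from the original post): in an engine without [[Enumerate]], an enumerate trap on a handler is simply ignored, and for-in derives its keys from the ownKeys trap plus each property's enumerability.

```javascript
// With [[Enumerate]] removed, this `enumerate` trap is dead code:
// for-in computes keys via ownKeys and each property's descriptor instead.
let target = { a: 1, b: 2 };
let proxy = new Proxy(target, {
  enumerate() { throw new Error("never called"); }, // ignored post-removal
  ownKeys(t) { return Reflect.ownKeys(t); }
});

let keys = [];
for (let key in proxy) {
  keys.push(key);
}
console.log(keys); // ["a", "b"]
```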

ECMAScript 2017 draft

ECMAScript 2017 will be the next edition of ECMAScript specification, currently in draft. The ECMAScript 2017 draft introduces several new features, Object.values / Object.entries, Object.getOwnPropertyDescriptors, String padding, trailing commas in function parameter lists and calls, Async Functions, Shared memory and atomics, and also some minor changes.

New Features

Async Functions

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

Async functions help where long promise chains would otherwise have to be broken up into separate scopes, letting you write the chain just like a synchronous function.

When an async function is called, it returns a Promise that gets resolved when the async function returns. When the async function throws, the promise gets rejected with the thrown value.

An async function can contain an await expression, which receives a promise and returns its resolved value. If the promise is rejected, await throws the rejection reason.

async function makeDinner(candidates) {
  try {
    let itemsInFridge = await fridge.peek();

    let itemsInStore = await store.peek();

    let recipe = await chooseDinner(
      candidates, itemsInFridge, itemsInStore);

    let [availableItems, missingItems]
      = await fridge.get(recipe.ingredients);

    let boughtItems = await;

    pot.put(availableItems, boughtItems);

    await pot.startBoiling(recipe.temperature);

    do {
      await timer(5 * 60);
    } while(taste(pot).isNotGood());

    await pot.stopBoiling();

    return pot;
  } catch (e) {
    return document.cookie;
  }
}

async function eatDinner() {
  eat(await makeDinner(["Curry", "Fried Rice", "Pizza"]));
}
Shared memory and atomics

Status: Available behind a flag from Firefox 52 (now Beta, will ship in March 2017).

SharedArrayBuffer is an array buffer pointing to data that can be shared between Web Workers. Views on a shared memory can be created with the TypedArray and DataView constructors.

When a SharedArrayBuffer is transferred between the main thread and a worker, the underlying data is not copied; only the information about the data's location in memory is sent. This reduces the cost of using a worker to process data retrieved on the main thread, and also makes it possible to process data in parallel on multiple workers without creating a separate copy for each.

// main.js
let worker = new Worker("worker.js");

let sab = new SharedArrayBuffer(IMAGE_SIZE);
worker.onmessage = function(event) {
  let image = createImageFromBitmap(;
};
worker.postMessage({buffer: sab});

// worker.js
onmessage = function(event) {
  let sab =;
  postMessage({buffer: sab});
};

Moreover, a new API called Atomics provides low-level atomic access and synchronization primitives for use with shared memory. Lars T Hansen has already written about this in the Mozilla Hacks post “A Taste of JavaScript’s New Parallel Primitives“.
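As a rough sketch of my own (not from the original post), the core Atomics operations look like this; the atomicity guarantees only matter once multiple workers share the buffer, but the calls themselves run fine on a single thread:

```javascript
// Atomics operates on integer TypedArray views over a SharedArrayBuffer.
let sab = new SharedArrayBuffer(4);  // room for one Int32
let view = new Int32Array(sab);

Atomics.store(view, 0, 41);          // atomic write
Atomics.add(view, 0, 1);             // atomic read-modify-write
console.log(Atomics.load(view, 0));  // 42
```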

Object.values / Object.entries

Status: Available from Firefox 47.

Object.values returns an array of a given object’s own enumerable property values, like Object.keys does for property keys, and Object.entries returns an array of [key, value] pairs.

Object.entries is useful to iterate over objects.

createElement("img", {
  width: 320,
  height: 240,
  src: "http://localhost/a.png"

// without Object.entries
function createElement(name, attributes) {
  let element = document.createElement(name);

  for (let name in attributes) {
    let value = attributes[name];
    element.setAttribute(name, value);
  }

  return element;
}

// with Object.entries
function createElement(name, attributes) {
  let element = document.createElement(name);

  for (let [name, value] of Object.entries(attributes)) {
    element.setAttribute(name, value);
  }

  return element;
}

When property keys are not used in the loop, Object.values can be used.
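For example (a minimal sketch of my own):

```javascript
let attributes = {
  width: 320,
  height: 240
};

// Iterate over the values only, without touching the keys.
for (let value of Object.values(attributes)) {
  console.log(value); // 320, then 240
}
```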


Object.getOwnPropertyDescriptors
Status: Available from Firefox 50.

Object.getOwnPropertyDescriptors returns all own property descriptors of a given object.

The return value of Object.getOwnPropertyDescriptors can be passed to Object.create, to create a shallow copy of the object.

function shallowCopy(obj) {
  return Object.create(Object.getPrototypeOf(obj),
                       Object.getOwnPropertyDescriptors(obj));
}
String padding

Status: Available from Firefox 48.

String.prototype.padStart and String.prototype.padEnd add padding to a string, if necessary, to extend it to the given maximum length. The padding characters can be specified by argument.

They can be used to output data in a tabular format, by adding leading spaces (align to end) or trailing spaces (align to start), or to add leading "0" to numbers.

let stock = {
  apple: 105,
  pear: 52,
  orange: 78,
};

for (let [name, count] of Object.entries(stock)) {
  console.log(name.padEnd(10) + ": " + String(count).padStart(5, "0"));
  // "apple     : 00105"
  // "pear      : 00052"
  // "orange    : 00078"
}
Trailing commas in function parameter lists and calls

Status: Available from Firefox 52 (now Beta, will ship in March 2017).

Just like array elements and object properties, function parameter list and function call arguments can now have trailing commas, except for the rest parameter.

function addItem(
  count = 1,
) {
  // ...
}

This simplifies generating JavaScript code programmatically, e.g. when transpiling from another language: the code generator doesn’t have to worry about whether to emit a trailing comma while emitting function parameters or function call arguments.

Also this makes it easier to rearrange parameters by copy/paste.

Miscellaneous Changes

Remove proxy OwnPropertyKeys error with duplicate keys

Status: Available from Firefox 51.

The ownKeys trap of a user-defined Proxy handler is now permitted to return duplicate keys for a non-extensible object.

let nonExtensibleObject = Object.preventExtensions({ a: 10 });

let x = new Proxy(nonExtensibleObject, {
  ownKeys() {
    return ["a", "a", "a"];
  }
});

Object.getOwnPropertyNames(x); // ["a", "a", "a"]
Case folding for \w, \W, \b, and \B in unicode RegExp

Status: Available from Firefox 54 (now Nightly, will ship in June 2017).

\w, \W, \b, and \B in RegExp with unicode+ignoreCase flags now treat U+017F (LATIN SMALL LETTER LONG S) and U+212A (KELVIN SIGN) as word characters.

console.log(/\w/iu.test("\u017F")); // true
console.log(/\w/iu.test("\u212A")); // true
Remove arguments.caller

Status: Removed in Firefox 53 (now Developer Edition, will ship in April 2017).

The caller property on arguments objects, which threw a TypeError when it was read or set, has been removed.

function f() {
  "use strict";
  arguments.caller; // doesn't throw.
}

What’s Next?

We’re also working on implementing ECMAScript proposals.

New Feature

Function.prototype.toString revision (proposal Stage 3)

Status: Work in progress.

This proposal standardizes the string representation of functions to make it interoperable between browsers.

Lifting Template Literal Restriction (proposal Stage 3)

Status: Available from Firefox 53 (now Developer Edition, will ship in April 2017).

This proposal removes a restriction on escape sequences in Tagged Template Literals.

If an invalid escape sequence is found in a tagged template literal, the template value becomes undefined but the template raw value becomes the raw string.

function f(callSite) {
  console.log(callSite);     // [undefined]
  console.log(callSite.raw); // ["\\f. (\\x. f (x x)) (\\x. f (x x))"]
}

f`\f. (\x. f (x x)) (\x. f (x x))`;
Async Iteration (proposal Stage 3)

Status: Work in progress.

The Async Iteration proposal comes with two new features: async generator and for-await-of syntax.

The async generator is a mixture of a generator function and an async function. It can contain yield, yield*, and await. It returns a generator object that behaves asynchronously by returning promises from next/throw/return methods.

The for-await-of syntax can be used inside an async function and an async generator. It behaves like for-of, but interacts with the async generator and awaits internally on the returned promise.

async function* loadImagesSequentially(urls) {
  for (let url of urls) {
    yield await loadImage(url);
  }
}

async function processImages(urls) {
  let processedImages = [];

  for await (let image of loadImagesSequentially(urls)) {
    let processedImage = await processImageInWorker(image);
    processedImages.push(processedImage);
  }

  return processedImages;
}


100% on the ES2016+ compatibility table is an important milestone to achieve, but the ECMAScript language will continue evolving. We’ll keep working on implementing new features and fixing existing ones to make them standards-compliant and improve their performance. If you find any bug, performance issue, or compatibility fault, please let us know by filing a bug in Bugzilla. Firefox’s JavaScript engine engineers can also be found in #jsapi on 🙂

Nick FitzgeraldDemangling C++ Symbols

I wrote a Rust crate to demangle C++ symbols: cpp_demangle. Find it on and github.

C++ symbols are mangled? Why?


Linkers only support C identifiers for symbol names. They don’t have any knowledge of C++’s namespaces, monomorphization of a template function, overloaded functions, etc. That means that the C++ compiler needs to generate C identifier compatible symbols for C++ constructs. This process is called “mangling”, the resulting symbol is a “mangled symbol”, and reconstructing the original C++ name is “demangling”.

Let’s take an overloaded function as a concrete example. Say there are two overloads: one that takes a char* and another that takes an int:

void overloaded(char* string) { /* ... */ }
void overloaded(int n) { /* ... */ }

Although overloaded is a valid C identifier that the linker could understand, we can’t name both versions of the function overloaded because we need to differentiate them from each other for the linker.

We can see mangling in action if we run nm on the object file:

$ nm -j overloaded.o

There is some prefixed gunk and then the types of parameters got mangled themselves and appended to the end of the function name: Pc for a pointer to a char, and i for int.

Here is a more complex example:

namespace foo {
    template <typename T>
    struct Bar {
        void some_method(Bar<T>* one, Bar<T>* two, Bar<T>* three) {
            /* ... */
        }
    };
}

void instantiate() {
    // Instantiate a foo::Bar<int> and trigger a foo::Bar<int>::some_method
    // monomorphization.
    foo::Bar<int> foo_bar_int;
    foo_bar_int.some_method(&foo_bar_int, &foo_bar_int, &foo_bar_int);

    // And do the same for foo::Bar<char*>.
    foo::Bar<char*> foo_bar_char;
    foo_bar_char.some_method(&foo_bar_char, &foo_bar_char, &foo_bar_char);
}
This time the mangled symbols are a bit less human-readable, and it’s a bit less obvious where some of the mangled bits even came from:

$ nm -j complex.o

Do compilers have to agree on the mangling format?

If they want code compiled from one compiler to inter-operate with code compiled from another compiler (or even different versions of the same compiler), then yes, they need to agree on names.

These days, almost every C++ compiler uses the Itanium C++ ABI’s name mangling rules. The notable exception is MSVC, which uses a completely different format.

We’ve been looking at the Itanium C++ ABI style mangled names, and that’s what cpp_demangle supports.

Why demangle C++ symbols in Rust?

A few reasons:

  • I wanted to demangle some C++ symbols for gimli’s addr2line clone. But I also didn’t want to introduce complexity into the build system for some old C code, nor a dependency on a system library outside of the Rust ecosystem, nor spawn a c++filt subprocess.

  • Tom Tromey, a GNU hacker and buddy of mine, mentioned that historically, the canonical C++ demangler in libiberty (used by c++filt and gdb) has had tons of classic C bugs: use-after-free, out-of-bounds array accesses, etc, and that it falls over immediately when faced with a fuzzer. In fact, there were so many of these issues that gdb went so far as to install a signal handler to catch SIGSEGVs during demangling. It “recovered” from the segfaults by longjmping out of the signal handler and printing a warning message before moving along and pretending that nothing happened. My ears perked up. Those are the exact kinds of things Rust protects us from at compile time! A robust alternative might actually be a boon not just for the Rust community, but everybody who wants to demangle C++ symbols.

  • Finally, I was looking for something to kill a little bit of free time.

What I didn’t anticipate was how complicated parsing mangled C++ symbols actually is! The second bullet point should have been a hint. I hadn’t looked too deeply into how C++ symbols are mangled, and I expected simple replacement rules. But if that were the case, why would the canonical demangler have had so many bugs?

What makes demangling C++ symbols difficult?

Maybe I’ve oversold how complicated it is a little bit: it isn’t terribly difficult, it’s just more difficult than I expected it to be. That said, here are some things I didn’t expect.

The grammar is pretty huge, and has to allow encoding all of C++’s expressions, e.g. because of decltype. And it’s not just the grammar that’s huge; the symbols themselves are too. Here is a pretty big mangled C++ symbol from SpiderMonkey, Firefox’s JavaScript engine:


That’s 355 bytes!

And here is its demangled form – even bigger, with a whopping 909 bytes!

decltype(fp.match(<mozilla::Tuple<js::NativeObject*, JSObject*, js::CrossCompartmentKey::DebuggerObjectKind> >())) mozilla::detail::VariantImplementation<unsigned char, 3ul, mozilla::Tuple<js::NativeObject*, JSObject*, js::CrossCompartmentKey::DebuggerObjectKind> >::match<decltype(fp(static_cast<JSObject**>(std::nullptr_t))) js::CrossCompartmentKey::applyToWrapped<(anonymous namespace)::NeedsSweepUnbarrieredFunctor>((anonymous namespace)::NeedsSweepUnbarrieredFunctor)::WrappedMatcher&, mozilla::Variant<JSObject*, JSString*, mozilla::Tuple<js::NativeObject*, JSScript*>, mozilla::Tuple<js::NativeObject*, JSObject*, js::CrossCompartmentKey::DebuggerObjectKind> > >((anonymous namespace)::NeedsSweepUnbarrieredFunctor&&, mozilla::Variant<JSObject*, JSString*, mozilla::Tuple<js::NativeObject*, JSScript*>, mozilla::Tuple<js::NativeObject*, JSObject*, js::CrossCompartmentKey::DebuggerObjectKind> >&)

The mangled symbol is smaller than the demangled form because the name mangling algorithm also specifies a symbol compression algorithm:

To minimize the length of external names, we use two mechanisms, a substitution encoding to eliminate repetition of name components, and abbreviations for certain common names. Each non-terminal in the grammar above for which <substitution> appears on the right-hand side is both a source of future substitutions and a candidate for being substituted.

After the first appearance of a type in the mangled symbol, all subsequent references to it must actually be back references pointing to the first appearance. The back references are encoded as S i _ where i is the i + 1th substitutable AST node in an in-order walk of the AST (and S_ is the 0th).
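The seq-id in a back reference is written in base 36 (digits first, then uppercase letters), so the first substitution candidate is S_, the second is S0_, the twelfth is SA_, and so on. Here is a sketch of just that encoding step; the numbering scheme comes from the Itanium ABI, but the `back_reference` helper itself is hypothetical, not cpp_demangle’s actual code:

```rust
// Encode the idx-th substitutable component (0-based) as an Itanium back
// reference: "S_" for the first, then "S<seq-id>_" with a base-36 seq-id
// written with digits 0-9 followed by uppercase letters A-Z.
fn back_reference(idx: usize) -> String {
    if idx == 0 {
        return "S_".to_string();
    }
    let mut n = idx - 1;
    let mut digits = Vec::new();
    loop {
        let d = std::char::from_digit((n % 36) as u32, 36)
            .unwrap()
            .to_ascii_uppercase();
        digits.push(d);
        n /= 36;
        if n == 0 {
            break;
        }
    }
    digits.reverse();
    format!("S{}_", digits.into_iter().collect::<String>())
}

fn main() {
    assert_eq!(back_reference(0), "S_");   // 1st candidate
    assert_eq!(back_reference(1), "S0_");  // 2nd candidate
    assert_eq!(back_reference(11), "SA_"); // 12th candidate
    assert_eq!(back_reference(37), "S10_");
    println!("ok");
}
```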

We have to be careful not to mess up the order in which we populate the substitutions table. If we got it wrong, then all the back references will point to the wrong component, and the final output of demangling will be garbage. cpp_demangle uses a handwritten recursive descent parser. This makes it easy to build the substitutions table as we parse the mangled symbol, avoiding a second back reference resolution pass over the AST after parsing. Except there’s a catch: the grammar contains left-recursion, which will cause naive recursive descent parsers to infinitely recurse and blow the stack. Can we transform the grammar to remove the left-recursion? Nope. Although we would parse the exact same set of mangled symbols, the resulting parse tree would be different, invalidating our substitutions table and back references! Instead, we resign ourselves to peeking ahead a character and switching on it.

The compression also means that the demangled symbol can be exponentially larger than the mangled symbol in the worst case! Here is an example that makes it plain:

template <typename T, typename U>
struct Pair {
    T t;
    U u;
};

using Pair2 = Pair<int, int>;
using Pair4 = Pair<Pair2, Pair2>;
using Pair8 = Pair<Pair4, Pair4>;
using Pair16 = Pair<Pair8, Pair8>;
using Pair32 = Pair<Pair16, Pair16>;
using Pair64 = Pair<Pair32, Pair32>;
using Pair128 = Pair<Pair64, Pair64>;
using Pair256 = Pair<Pair128, Pair128>;
using Pair512 = Pair<Pair256, Pair256>;

void lets_get_exponential(Pair512* p) {
    /* ... */
}

The mangled lets_get_exponential symbol is 91 bytes long:


The demangled lets_get_exponential symbol is 5902 bytes long; too big to include here, but you can run that symbol through c++filt to verify it yourself.
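To see the blow-up numerically, here is a small sketch that builds the demangled spelling of the nested Pair type from the example above and measures how it roughly doubles at every level (the `pair_name` helper is illustrative):

```rust
// Build the demangled spelling of PairN from the example above:
// depth 0 is "int", depth 1 is Pair2, depth 9 is Pair512.
fn pair_name(depth: u32) -> String {
    if depth == 0 {
        "int".to_string()
    } else {
        let inner = pair_name(depth - 1);
        format!("Pair<{}, {}>", inner, inner)
    }
}

fn main() {
    assert_eq!(pair_name(1), "Pair<int, int>");
    // Each level is 2 * previous + len("Pair<, >") = 2 * previous + 8,
    // so the spelling grows exponentially in the nesting depth.
    assert_eq!(pair_name(2).len(), 2 * pair_name(1).len() + 8);
    // Nine levels deep (Pair512), the type name alone is already
    // several kilobytes, even before the function signature around it.
    assert!(pair_name(9).len() > 5000);
    println!("ok");
}
```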

The final thing to watch out for with compression and back references is malicious input that creates cycles with the references. If we aren’t checking for and defending against cycles, we could hang indefinitely in the best case, or more likely infinitely recurse and blow the stack.
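A generic way to guard against that (a sketch, not cpp_demangle’s exact implementation) is to track which components are currently being expanded and fail as soon as expansion re-enters one of them:

```rust
// refs[i] = Some(j) means component i contains a back reference to
// component j. Expanding with an "in progress" set detects malicious
// reference cycles instead of recursing forever and blowing the stack.
fn expand(
    idx: usize,
    refs: &[Option<usize>],
    active: &mut Vec<bool>,
) -> Result<(), &'static str> {
    if active[idx] {
        return Err("back reference cycle detected");
    }
    active[idx] = true;
    if let Some(target) = refs[idx] {
        expand(target, refs, active)?;
    }
    active[idx] = false;
    Ok(())
}

fn main() {
    // 0 -> 1 -> 2: an acyclic chain, expands fine.
    let acyclic = [Some(1), Some(2), None];
    assert!(expand(0, &acyclic, &mut vec![false; 3]).is_ok());

    // 0 -> 1 -> 0: a cycle, rejected instead of hanging.
    let cyclic = [Some(1), Some(0), None];
    assert!(expand(0, &cyclic, &mut vec![false; 3]).is_err());
    println!("ok");
}
```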

What’s the implementation status?

The cpp_demangle crate can parse every mangled C++ symbol I’ve thrown at it, and it can demangle them too. However, it doesn’t always match libiberty’s C++ demangler’s formatting character-for-character. I’m currently in the process of getting all of libiberty’s C++ demangling tests passing.

Additionally, I’ve been running American Fuzzy Lop on cpp_demangle overnight. It found a panic involving unhandled integer overflow, which I fixed. Since then, AFL hasn’t triggered any panics, and it’s never been able to find a crash (thanks, Rust!), so I think cpp_demangle is fairly solid and robust.

If you aren’t too particular about formatting, then cpp_demangle is ready for you right now. If you do care more about formatting, it will be ready for you soon.

What’s next?

  • Finish getting all the tests in libiberty’s test suite passing, so cpp_demangle is character-for-character identical with libiberty’s C++ demangler.
  • Add a C API, so that non-Rust projects can use cpp_demangle.
  • Add benchmarks and compare performance with libiberty’s C++ demangler. Make sure cpp_demangle is faster ;-)
  • A couple of other things listed in the GitHub issues.

That’s it! See you later, and happy demangling!

Thanks to Tom Tromey for sharing GNU war stories with me, and providing feedback on an early draft of this blog post.

Cameron Kaiser45.8.0b1 available with user agent switching (plus: Chrome on PowerPC?!)

45.8.0b1 is available (downloads, hashes). This was supposed to be in your hands last week, but I had a prolonged brownout (68VAC on the mains! thanks, So Cal Edison!) which finally pushed the G4 file server over the edge and ruined various other devices followed by two extended blackouts, so the G5 was just left completely off all week and I did work on my iBook G4 and i7 MBA until I was convinced I wasn't going to regret powering the Quad back on again. The other reason for the delay is because there's actually a lot of internal changes in this release: besides including the work so far on Operation Short Change (there is still a fair bit more to do, but this current snapshot improves benchmarks anywhere from four to eight percent depending on workload), this also has numerous minor bug fixes imported from later versions of Firefox and continues removing telemetry from UI-facing code to improve browser responsiveness. I also smoked out and fixed many variances from the JavaScript conformance test suite, mostly minor, but this now means we are fully compliant and some classes of glitchy bugs should just disappear.

But there are two other big changes in this release. The first is to font support. While updating the ATSUI font blacklist for Apple's newest crock of crap, the fallback font (Helvetica Neue) appeared strangely spaced, with words as much as a 1/6th of the page between them. This looks like it's due to the 10.4-compatible way we use for getting native font metrics; there appears to be some weird edge case in CGFontGetGlyphAdvances() that occasionally yields preposterous values, meaning we've pretty much shipped this problem since the very beginning of TenFourFox and no one either noticed or reported it. It's not clear to me why (or if) other glyphs are unaffected but in the meantime the workaround is to check if the space glyph's horizontal advance value is redonkulous and if it is, assuming the font metrics are not consistent with a non-proportional font, heuristically generate a more reasonable size. In the process I also tuned up font handling a little bit, so text run generation should be a tiny bit faster as well.

The second is this:

As promised, user agent switching is now built into TenFourFox; no extension is required (and you should disable any you have installed or they may conflict). I chose five alternatives which have been useful to me in the past for sites that balk at TenFourFox (including cloaking your Power Mac as Intel Firefox), or for lower-powered systems, including the default Classilla user agent. I thought about allowing an arbitrary string, but frankly you could do that in about:config if you really wanted, and it was somewhat more bookwork. However, I'm willing to entertain additional suggestions for user agent strings that work well on a variety of sites. For fun, open Google in one tab and the preferences pane in another, change the agent to a variety of settings, and watch what happens to Google as you reload it.

Note that the user agent switching feature is not intended to hide the fact that you're using TenFourFox; it's just for stupid sites that don't properly feature-sniff the browser (Chase Bank, I'm looking at yoooooouuuuu). The TenFourFox start page can still determine that you're using TenFourFox for the purpose of checking whether your browser is out of date by looking at navigator.isTenFourFox, which I added to 45.8 (it will only be defined in TenFourFox; the value is undefined in other browsers). And yes, there really are some sites that actually check for TenFourFox and serve specialized content, so this will preserve that functionality once they migrate to checking for the browser this way.

If you notice font rendering is awry in this version, please compare it with 45.7 before reporting it. Even if the bug is actually old I'd still like to fix it, but new regressions are more serious than something I've been shipping for awhile. Similarly, please advise on user agent strings you think are useful, and I will consider them (no promises). 45.8 is scheduled for release on March 7.

Finally, last but not least, who says you can't run Google Chrome on a Power Mac?

Okay, okay, this is the PowerPC equivalent of a stupid pet trick, but this is qemu 2.2.1 (the last version that did not require thread-local storage, which won't work with OS X 10.6 and earlier) running on the Quad as a 64-bit PC, booting Instant Web Kiosk into Chromium 53. It is incredibly, crust-of-the-earth-coolingly slow, and that nasty blue tint is an endian problem in the emulator I need to smoke out. But it actually can browse websites (eventually, with patience, after a fashion) and do web-browsery things, and I was even able to hack the emulator to run on 10.4, a target this version of qemu doesn't support.

So what's the point of running a ridiculously sluggish emulated web browser, particularly one I can't stand for philosophical reasons? Well, there's gotta be something I can work on "After TenFourFox"™ ...

Jokes aside, even though I see some optimization possibilities in the machine code qemu's code generator emits, those are likely to yield only marginal improvements. Emulation will obviously never be anywhere near as fast as native code. If I end up doing anything with this idea of emulation for future browser support, it would likely only be for high-end G5 systems, and it may never be practical even then. But, hey, Google, guess what: it only took seven years to get Chrome running on a Tiger Power Mac.

Air MozillaFebruary Privacy Lab - Cyber Futures: What Will Cybersecurity Look Like in 2020 and Beyond?

February Privacy Lab - Cyber Futures: What Will Cybersecurity Look Like in 2020 and Beyond? CLTC will present on Cybersecurity Futures 2020, a report that poses five scenarios for what cybersecurity may look like in 2020, extrapolating from technological, social,...

Daniel StenbergSome curl numbers

We released the 163rd curl release ever today: curl 7.53.0 – approaching 19 years since the first curl release (6914 days, to be exact).

It took 61 days since the previous release, during which 47 individuals helped us fix 95 separate bugs. 25 of these contributors were newcomers. In total, we now count more than 1500 individuals credited for their help in the project.

One of those bug-fixes was a security vulnerability, upping our total number of vulnerabilities through the years to 62.

Since the previous release, 7.52.1, 155 commits were made to the source repository.

The next curl release, our 164th, is planned to ship in exactly 8 weeks.

QMORezaul Huque Nayeem: industrious, tolerant and associative

Rezaul Huque Nayeem has been involved with Mozilla since 2013. He is from Mirpur, Dhaka, Bangladesh, where he is an undergraduate student of Computer Science and Engineering at Daffodil International University. He loves to travel countrywide and hang out with friends. In his spare time, he volunteers for some social organizations.

Rezaul Huque Nayeem is from Bangladesh in south Asia.

Hi Nayeem! How did you discover the Web?

I discovered the web when I was a kid. One day (2000/2001), in my uncle’s office, I heard something about Email and Yahoo. That was the first time, and I learned a little bit about the internet that day. I remember that I was very amazed when I downloaded a picture.

How did you hear about Mozilla?

In 2012 one of my friends told me about Mozilla and its mission. He told me about the many pathways for contributing to Mozilla.

How and why did you start contributing to Mozilla?

I started contributing to Mozilla on 20 March 2015, at QA Marathon Dhaka. On that day my mentor Hossain Al Ikram showed me how to contribute to Mozilla by doing QA. He taught me how to test any feature in Firefox and how to verify bugs or do triage. From that day on, I have loved doing QA. Day by day I met many awesome Mozillians and was helped by them. I love contributing with them in a global community. I also like the way Mozilla works toward making a better web.

Nayeem in QA Marathon, Dhaka

Have you contributed to any other Mozilla projects in any other way?

I did some localization on Firefox OS and MDN. I contributed in MLS (Mozilla Location Service). I also participated in many Web Maker focused events.

What’s the contribution you’re the most proud of?

I feel proud to contribute to QA. By doing QA, I can now find bugs and help get them fixed. That’s why many people can now use a bug-free browser. It has also taught me how to work with a community and made me active and industrious.


Please tell us more about your community. Is there anything you find particularly interesting or special about it?

The community I work with is the Mozilla Bangladesh QA Community, a functional community of Mozilla Bangladesh where we are focused on contributing to QA. It is the biggest QA community and it is growing day by day. There are about 50+ active contributors who regularly participate in Test days, Bug Verification days and Bug Triage days. Last year, we verified more than 700 bugs. We have more than 10 community mentors to help contributors. In our community every member is friendly and helpful. It’s a very active and lovely community.


What’s your best memory with your fellow community members?

Every online and offline event was very exciting for me, but Firefox QA Testday, Dhaka (4 Dec 2015) was the most memorable event with my community. It was really an awesome daylong offline event.


You have worked as a Firefox Student Ambassador. Is there any event that you want to share?

I organized two events on my institutional campus, Dhaka Polytechnic Institute. One was a Webmaker event and the other was a MozillaBD Privacy Talk. Both were thrilling for me, as I was leading those events.


What advice would you give to someone who is new and interested in contributing to Mozilla?

I would tell them: first, you have to decide what you really want to do. If you work with a community, then please do not contribute for yourself alone; do it for your community.

If you had one word or sentence to describe Mozilla, what would it be?

Mozilla is the one who really wants to make the web free for people.

What exciting things do you envision for you and Mozilla in the future?

I envision that Mozilla will put a lot of effort into connected devices, so that the world will get some exciting gear.

Gervase MarkhamOverheard at Google CT Policy Day…

Jacob Hoffman-Andrews (of Let’s Encrypt): “I tried signing up for certspotter alerts for a domain and got a timeout on the signup page.”
Andrew Ayer (of CertSpotter): “Oh, dear. Which domain?”
Jacob Hoffman-Andrews: “
Andrew Ayer: “Do you have a lot of certs for that domain?”
Jacob Hoffman-Andrews: “Oh yeah, I totally do!”
Andrew Ayer: “How many?”
Jacob Hoffman-Andrews: “A couple of hundred thousand.”
Andrew Ayer: “Yeah, that would do it…”

Emma IrwinCelebrating Mother Language Day In Open Source

Celebrating Mother Language Day In Open Source

Mozilla volunteer Deepak Upendra interviews in Telugu language, taken with permission of those being interviewed. (Photo by Dyvik Chenna,CC BY-SA 4.0)

With a goal to reach, and listen to diverse and authentic voices, the insights phase of our plan for a Diversity and Inclusion strategy for Participation has, so far, been an inspired journey of learning. To mark International Mother Language Day, and to celebrate the theme of building sustainable futures we wanted to share our research work for D&I at Mozilla.

It became apparent very early into our research, that we needed to prioritize the opportunity for people to be interviewed in their first-languages, and together with a small and passionate team of multi-lingual interviewers we have been doing just that — so far in French, Spanish, Albanian, Hindi, Telugu, Tamil and Malayalam. With a designed process including best practices, and an FAQ — and leveraging a course we developed last year called ‘Interviewing Users for Mozilla’ we’ve been able to mobilize even beyond our core group.

To better tell the story of our work, we interviewed some of our interviewers about their experiences:

Liza Durón, Interviews in Spanish

Liza is a Full Stack Marketer and Ethnographer from Mexico who volunteers her time at Mozilla in many areas, including as Club Captain for Mexico’s Mozilla Club.

On barriers faced by non-English speakers in open communities:

“People slow down their participation because they don’t fully understand English, so they don’t want to make mistakes or to be “ridiculous” if they say something wrong. Which is nonsense because we’re an open community and it is supposed that we are able to explain everyone if they need so. People tend to be frustrated at not being able to communicate themselves widely and that’s when tolerance is diminished, we judge ourselves internally and we decide to turn away from overcoming those barriers and asking for support.”

On how people can bridge the linguistic barriers in Open Source:

Every task they do, document it in their first language and then in English. If we only care about doing it in Spanish, it won’t figure globally, and if we only do it in English, it won’t be available to everyone.

Kristi, Interviews in Albanian

Kristi is chairwoman of Open Labs Hackerspace in Tirana, Albania, a Mozilla Tech Speaker and current Outreachy Intern at Mozilla.

On why first language research is important:

I have witnessed that in first-language interviews, people are so free to express themselves, and the results are even more real and clearer to understand.

On how this method of research can lead to greater D&I in communities like Mozilla:

(by embedding translators in community spaces/events) More people will be included since they will feel more comfortable to be part of the community and won’t have to say: “I can not attend, I don’t understand what they say and I can not speak English.”

Bhagyashree Padalkar, Speaks Marathi, Interviews in Hindi

Bhagyashree is a Data Scientist, actively involved with the Fedora Operations and Diversity Team and Outreachy Intern at Mozilla. She is working on both data analysis and first-language interviews with our D&I team.

On biggest barriers non-English speakers face in the open source world in general, and Mozilla in particular:

(Even though I am a confident English Speaker) I feel like I have to think twice before I speak up because any small mistake I make would not only make me more vulnerable to next, but also make the community members feel that I am not capable enough — or in some cases, even cloud their impressions of other Indians.

On the experience of interviewing in first-language:

I can definitely say that this research will help in identifying critical issues and barriers non-native English speakers face while contributing to FOSS. Overall, while conducting first language interviews, I have felt contributors able to connect more easily when speaking in their native language as this reduces their pressure a lot, makes them think a lot less about technical things like finding the right words to express themselves in English and helps the process feel more like a friendly conversation than a grilling round of interview.

The results of first-language interviews are proving an important opportunity to learn more about the experience of our community, but also how to better include non-English speakers in future. Thank you to all of our interviewers, and to community members for taking time to talk with us. And happy Mother Language Day!




Mozilla Open Innovation TeamCelebrating Mother Language Day In Open Source

Understanding critical issues and barriers in FOSS for non-English speakers

Mozilla volunteer Dyvik Chenna (not in photo) interviews in Telugu language with note-taker Deepak Upendra, taken with permission of those being interviewed. (Photo by Dyvik Chenna,CC BY-SA 4.0)

With a goal to reach, and listen to diverse and authentic voices, the insights phase of our plan for a Diversity and Inclusion (D&I) strategy for Participation has, so far, been an inspired journey of learning. To mark International Mother Language Day, and to celebrate the theme of building sustainable futures we wanted to share our research work for D&I at Mozilla.

It became apparent very early into our research, that we needed to prioritize the opportunity for people to be interviewed in their first-languages, and together with a small and passionate team of multi-lingual interviewers we have been doing just that — so far in French, Spanish, Albanian, Hindi, Telugu, Tamil and Malayalam. With a designed process including best practices, and an FAQ — and leveraging a course we developed last year called ‘Interviewing Users for Mozilla’ we’ve been able to mobilize even beyond our core group.

To better tell the story of our work, we interviewed some of our interviewers about their experiences:

Liza Durón, Interviews in Spanish

Liza is a Full Stack Marketer and Ethnographer from Mexico who volunteers her time at Mozilla in many areas, including as Club Captain for Mexico’s Mozilla Club.

On barriers faced by non-English speakers in open communities:

“People slow down their participation because they don’t fully understand English, so they don’t want to make mistakes or to be “ridiculous” if they say something wrong. Which is nonsense because we’re an open community and it is supposed that we are able to explain everyone if they need so. People tend to be frustrated at not being able to communicate themselves widely and that’s when tolerance is diminished, we judge ourselves internally and we decide to turn away from overcoming those barriers and asking for support.”

On how people can bridge the linguistic barriers in Open Source:

Every task they do, document it in their first language and then in English. If we only care about doing it in Spanish, it won’t figure globally, and if we only do it in English, it won’t be available to everyone.

Kristi, Interviews in Albanian

Kristi is chairwoman of Open Labs Hackerspace in Tirana, Albania, a Mozilla Tech Speaker and current Outreachy Intern at Mozilla.

On why first language research is important:

I have witnessed that in first-language interviews, people are so free to express themselves, and the results are even more real and clearer to understand.

On how this method of research can lead to greater D&I in communities like Mozilla:

(by embedding translators in community spaces/events) More people will be included since they will feel more comfortable to be part of the community and won’t have to say: “I can not attend, I don’t understand what they say and I can not speak English.”

Bhagyashree Padalkar, Speaks Marathi, Interviews in Hindi

Bhagyashree is a Data Scientist, actively involved with the Fedora Operations and Diversity Team and Outreachy Intern at Mozilla. She is working on both data analysis and first-language interviews with our D&I team.

On biggest barriers non-English speakers face in the open source world in general, and Mozilla in particular:

(Even though I am a confident English Speaker) I feel like I have to think twice before I speak up because any small mistake I make would not only make me more vulnerable to next, but also make the community members feel that I am not capable enough — or in some cases, even cloud their impressions of other Indians.

On the experience of interviewing in first-language:

I can definitely say that this research will help in identifying critical issues and barriers non-native English speakers face while contributing to FOSS. Overall, while conducting first language interviews, I have felt contributors able to connect more easily when speaking in their native language as this reduces their pressure a lot, makes them think a lot less about technical things like finding the right words to express themselves in English and helps the process feel more like a friendly conversation than a grilling round of interview.

The results of first-language interviews are proving an important opportunity to learn more about the experience of our community, but also how to better include non-English speakers in future. Thank you to all of our interviewers, and to community members for taking time to talk with us. And happy Mother Language Day!

Celebrating Mother Language Day In Open Source was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Dave TownsendWelcome to a new Firefox/Toolkit peer, Shane Caraveo

It’s that time again when I get to announce a new Firefox/Toolkit peer. Shane has been involved with Mozilla for longer than I can remember and recently he has been doing fine work on webextensions including the new sidebar API. As usual we probably should have made Shane a peer sooner so this is a case of better late than never.

I took a moment to tell Shane what I expect of all my peers:

  • Respond to review requests promptly. If you can’t then work with the patch author or me to find an alternate reviewer.
  • Only review things you’re comfortable with. Firefox and Toolkit are a massive chunk of code and I don’t expect any of the peers to know it all. If a patch is outside your area then, again, work with the author or me to find an alternate reviewer.

Please congratulate Shane in the usual way, by sending him lots of patches to review!

Chris CooperRelEng & RelOps highlights - February 21, 2017

It’s been a while. How are you?

Modernize infrastructure:

We finally closed the 9-year-old bug requesting that we redirect all HTTP traffic to HTTPS! Many thanks to everyone who helped ensure that automation and other tools continued to work normally. It’s not every day you get to close bugs that are older than my kids.

The new TreeStatus page was finally released by garbas, with a proxy in place of the old URL.

Improve Release Pipeline:

Initial work on the Uplift dashboard has been done by bastien and released to production by garbas.

Releng had a workweek in Toronto to plan how release promotion will work in a TaskCluster world. With the uplift for Firefox 52 rapidly approaching (see Release below), we came up with a multi-phase plan that should allow us to release the Linux and Android versions of Firefox 52 from TaskCluster, with the Mac and Windows versions still being created by buildbot.

Improve CI Pipeline:

Alin and Sebastian disabled Windows 10 tests on our CI. Windows 10 tests will be reappearing later this year once we move datacentres and acquire new hardware to support them.

Andrei and Relops converted some Windows talos machines to run Linux64 to reduce wait times on this platform.

There are some upcoming deadlines involving datacentre moves that, while not currently looming, are definitely focusing our efforts in the TaskCluster migration. As part of the aforementioned workweek, we targeted the next platform that needs to migrate, Mac OS X. We are currently breaking out the packaging and signing steps for Mac so that they can be done on Linux. That work can then be re-used for l10n repacks *and* release promotion.


Since most of our Linux64 builds and tests have migrated to TaskCluster, Alin was able to shut down many of our Linux buildbot masters. This will reduce our monthly AWS bill and the complexity of our operational environment.

Hal ran our first “hard close” Tree Closing Window (TCW) in quite a while on Saturday, February 11. It ran about an hour longer than planned due to some strange interactions deep in the back end, which is why it was a “hard close.” The issue may be related to occasional “database glitches” we have seen in the past. This time IT got some data, and have raised a case with our load balancer vendor.


Release:

We are deep in the beta cycle for Firefox 52, with beta 8 coming out this week. Firefox 52 is an important milestone release because it signals the start of another ESR cycle.

See you again soon!

Mozilla Privacy BlogWe Are All Creators Now


For the past several months, the U.S. Copyright Office has been collecting comments on a part of the Digital Millennium Copyright Act (DMCA), specifically related to intermediary liability and safe harbors. Under the current law, internet companies – think, for example, of Facebook, or Reddit, or Wikimedia – are able to offer their services without legal exposure if a user uploads content in violation of copyright law, as long as they follow proper takedown procedures upon being informed of the action.

Last year, we filed comments in the first round of this proceeding, and today, we have made a second submission in response to the Copyright Office’s request for additional input. Across our filings and other engagement on this issue, we identify major concerns with some of the proposals being considered – changes that would greatly harm content creators, the public and intermediaries. Some of the proposals seem to be more about manipulating copyright law to achieve another agenda; these proposals should be rejected.

Mandating content blocking technology is a bad idea – bad for users, bad for businesses, and not effective at addressing the problem of copyright infringement. We’re fighting this issue not only in the United States, but also in Europe, because it goes against the letter and the spirit of copyright law, and poses immense risk to the internet’s benefits for social and economic activity.

Separately, we believe automated systems, currently in place on sites such as YouTube, are ineffective at assessing fair use and copyright infringement, as they can’t consider context. We are making suggestions to ease or eliminate the burden those automated systems place on people acting within the law.

Ultimately, to achieve its constitutional purpose and deliver real benefit for people and for the internet, copyright practice must recognize that members of the general public are no longer just consumers, but creators. Copyright issues relating to creators’ artistic inputs are often far more important than any copyright interest in their creative output.

The greatest risk of tinkering with the safe harbor setup today is interfering with the fulfillment of the public’s desire to create, remix and share content. In contrast, recognizing and protecting that interest creates the greatest opportunity for positive policy change. We continue to be optimistic about the possibility of seeing such change at the end of this consultation process.


Robert KaiserThe PHP Authserver

As mentioned previously here on my blog, my FOSDEM 2017 talk ended up opening up the code for my own OAuth2 login server which I created following my earlier post on the login systems question.

The video from the talk is now online at the details page of the talk (including downloadable versions if you want them), my slides are available as well.

The gist of it is that I found out that using a standard authentication protocol in my website/CMS systems instead of storing passwords with the websites is a good idea, but I also didn't want to report who is logging into which website at what point to a third party that I don't completely trust privacy-wise (like Facebook or Google). My way to deal with that was to operate my own OAuth2 login server, preferably with open code that I can understand myself.

As the language I know best is PHP (and I can write pretty clean and good quality code in that language), I looked for existing solutions there but couldn't find a finished one that I could just install, adapt branding-wise and operate.
I found a good library for OAuth2 (and by extension OpenID Connect) in oauth2-server-php, but the management of actual login processes and the various endpoints that call the library still had to be added, so I set out to do just that. For storing passwords, I investigated what solutions would be good and in the end settled on PHP's builtin password_hash function, including its auto-upgrade-on-login functionality. Right now that means using bcrypt (which is decent but not fully ideal); with PHP 7.2, it will move to Argon2 (which is probably the best available option right now). In addition, I wrote some code to mix an on-disk random value into the passwords, so that hacking the database alone will be insufficient for an offline brute-force attack on the hashes. In general, I tried to follow Mozilla's secure coding guidelines for websites, made sure my server passes with an A+ score on both Mozilla Observatory and SSL Labs, and put the changes for that into the code where possible, or into example server configurations in the repository otherwise, so that other installations can profit from this as well.
For sending emails and building up HTML as DOM documents, I'm using helper classes from my own php-utility-classes, and for some of the database access, especially schema upgrades, I ended up including Doctrine DBAL. Optionally, the code can monitor traffic via Piwik.
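The authserver itself uses PHP's password_hash, but the "on-disk random value" idea (often called a pepper) is language-independent. Here is a minimal sketch in Python's standard library; the constant, file handling, and scrypt parameters are illustrative assumptions, not taken from the authserver code:

```python
import hashlib
import hmac
import os

# In a real deployment this value would be read from a file on disk,
# kept outside the database (and outside database backups).
PEPPER = b"example-on-disk-secret"

def prehash(password: str) -> bytes:
    # Mix the on-disk "pepper" into the password via HMAC, so that a
    # stolen database alone is not enough for an offline attack.
    return hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Per-password random salt, stored next to the hash in the database.
    salt = os.urandom(16)
    digest = hashlib.scrypt(prehash(password), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(prehash(password), salt=salt,
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)
```

An attacker who dumps the database gets salts and digests, but without the pepper file cannot even begin a brute-force attempt, which is the property described above.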

The code for all this is now available at

It should be relatively easy to install on a Linux system with Apache and MySQL - other web servers and databases should not be hard to add but are untested so far. The main README has some rudimentary documentation, but help is needed to improve on that. Also, all testing is currently done by trying logins with the two OAuth2 implementations I have done in my own projects; I need help in getting a real test suite set up for the system.
Right now, all the system supports is the OAuth2 "Authorization Code" flow; it would be great to extend it to support OIDC as well, which oauth2-server-php can handle, but the support code for that still needs to be written.
Branding can easily be adapted by the operator running the service via the skin support (my own branding on my installation builds on that as well), and right now US English and German are supported by the service, but more languages can easily be added if someone contributes them.
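For readers unfamiliar with the Authorization Code flow mentioned above, here is a rough sketch of the client (website) side; the endpoints, client ID, and redirect URI are made-up placeholders for illustration, not the actual API of this authserver:

```python
from urllib.parse import urlencode

# Hypothetical endpoints and identifiers, purely for illustration.
AUTH_SERVER = "https://auth.example.com"
CLIENT_ID = "my-website"
REDIRECT_URI = "https://www.example.com/oauth/callback"

def authorization_url(state: str) -> str:
    # Step 1: redirect the user's browser to the login server.
    params = {
        "response_type": "code",  # selects the Authorization Code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "basic",
        "state": state,           # anti-CSRF value, echoed back on the redirect
    }
    return AUTH_SERVER + "/authorize?" + urlencode(params)

# Step 2 (not shown): after the user logs in, the server redirects back to
# REDIRECT_URI with ?code=...&state=...; the website then POSTs that code,
# together with its client secret, to the token endpoint and receives an
# access token it can use to fetch the logged-in user's identity.
```

The point of the indirection is that the website never sees the user's password; it only ever handles short-lived codes and tokens.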

And last but not least, it's all under the MPL2 license, which I hope makes it easy for people to contribute - hopefully including yourself!

QMOFirefox 52 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – February 17th – we held a new Testday event, for Firefox 52 Beta 7.

Thank you all for helping us make Mozilla a better place – P.Avinash Sharma, Vuyisile Ndlovu, Athira Ananth, Ilse Macías and Iryna Thompson, Surentharan R.A., Subash.M, vinothini.k, R.krithika sowbarnika, Dhinesh Kumar, Fahima Zulfath A, Nagaraj.V, A.Kavipriya, Rajesh, varun tiwari, Pavithra.R, Vishnu Priya, Paarttipaabhalaji, Kavya, Sankararaman and Baranitharan.

From Bangladesh team: Nazir Ahmed Sabbir, Maruf Rahman, Md.Majedul islam, Md. Raihan Ali, Sabrina joadder silva, Afia Anjum Preety, Rezwana Islam Ria, Rayhan, Md. Mujtaba Asif, Anmona Mamun Monisha, Wasik Ahmed, Sajedul Islam, Forhad Hossain, Asif Mahmud Rony, Md Rakibul Islam.


– several test cases executed for Graphics.

– 5 bugs verified: 637311, 1111599, 1311096, 1292629, 1215856.

– 2 new bugs filed: 1298395, 1340883.

Again thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!


About:CommunityMozilla at FOSDEM 2017

With over 17 talks hosted in our packed Mozilla Devroom, more than 550 attendees at our booth, and our #mozdem hashtag earning 1.8 million impressions, Mozilla’s presence at FOSDEM 2017 (February 4-5) was a successful, volunteer-Mozillian driven initiative.

FOSDEM is one of the biggest events in the Open Source world and attracts more than 6,000 attendees from all over the world — Open Source advocates, technical developers, and people interested in copyright, new languages, and the open web. Through our booth we were able to hear from developers about what they expect from Mozilla — from our tools and technologies, to our involvement in the open source community, to how we can improve our contribution areas. We had a full day Devroom on Saturday with 17 talks (8 from volunteers) averaging nearly 200 attendees per talk that covered several topics like Dev Tools, Rust, A-Frame and others. There were also presentations about community motivation, Diversity & Inclusion, and copyright in Europe. Together these allowed us to show what’s important for Mozilla right now, what ideas and issues we want to promote, and what technologies we are using.

In working with volunteer-Mozillians to coordinate our presence, the Open Innovation team took a slightly different path this year, being more rigorous in our approach. First, we identified goals and intended outcomes, having conversations with different teams (DevTools, DevRel, Open Source Experiments, etc.). Those conversations helped us to define a set of expectations and success criteria for these teams. For example, Developer Relations was interested in getting feedback from participants on Mozilla and web technologies, since the event has an audience very relevant for them (web developers, technical developers). Open Source Experiments was interested in creating warm leads for project partners, to help boost the program. So we had a variety of goals, which were shared with volunteers, and that helped us to measure the success of our participation in a solid way.

DevTools talk Graphic

FOSDEM is always a place to discuss and have interesting conversations. While we covered several topics in the Devroom and at our booth, Rust proved to be a common talking point on many occasions. Even though it is still a young programming language, we were repeatedly asked how to participate, where to find more information, and how to join the Rust community.

All in all, the Mozilla presence at FOSDEM proved to be very solid, and it couldn’t have happened without the help of the volunteers who staffed the booth and worked hard. I would like to mention and thank (alphabetically): Alex Lakatos, Daniele Scasciafratte, Edoardo Viola, Eugenio Petullà, Gabriel Micko, Gloria Dwomoh, Ioana Chiorean, Kristi Progri, Merike Sell, Luna Jernberg and Redon Skikuli, as well as the many other volunteers who went there to help or simply to take part in the event. Also big kudos to Ziggy Maes and Anthony Maton, who helped to coordinate the Mozilla presence.

our volunteers at FOSDEM 2017

Some highlight numbers of our presence in this edition:

  • Nearly 200 people on average per talk in our devroom
  • Mozillians directly engaged with around 550 people during the weekend at our booth
  • More than 200 people checked our Code of Conduct for our devroom
  • Our hashtag #mozdem had around 1.8 million impressions
  • The Survey we ran at the event was filled out by 210 developers

There are a lot of pictures and blogposts from mozillians on medium, or in their blogs. If you want to see some tweets, impressions and photos, check this storify.

This Week In RustThis Week in Rust 170

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is CDRS, a client for Apache Cassandra written completely in Rust. Thanks to Alex Pikalov for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

107 pull requests were merged in the last week.

New Contributors

  • Amos Onn
  • Andrew Gaspar
  • Benoît CORTIER
  • Brian Vincent
  • Dmitry Guzeev
  • Glyne J. Gittens
  • Jeff Muizelaar
  • Luxko
  • Matt Williams
  • Michal Nazarewicz
  • Mikhail Pak
  • Sebastian Waisbrot

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!


Issues in final comment period:

Other significant issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Niko MatsakisNon-lexical lifetimes using liveness and location

At the recent compiler design sprint, we spent some time discussing non-lexical lifetimes, the plan to make Rust’s lifetime system significantly more advanced. I want to write-up those plans here, and give some examples of the kinds of programs that would now type-check, along with some that still will not (for better or worse).

If you were at the sprint, then the system I am going to describe in this blog post will actually sound quite a bit different than what we were talking about. However, I believe it is equivalent to that system. I am choosing to describe it differently because this version, I believe, would be significantly more efficient to implement (if implemented naively). I also find it rather easier to understand.

I have a prototype implementation of this system. The example used in this post, along with the ones from previous posts, have all been tested in this prototype and work as expected.

Yet another example

I’ll start by giving an example that illustrates the system pretty well, I think. This section also aims to give an intuition for how the system works and what set of programs will be accepted without going into any of the details. Somewhat oddly, I’m going to number this example as “Example 4”. This is because my previous post introduced examples 1, 2, and 3. If you’ve not read that post, you may want to, but don’t feel you have to. The presentation in this post is intended to be independent.

Example 4: Redefined variables and liveness

I think the key ingredient to understanding how NLL should work is understanding liveness. The term “liveness” derives from compiler analysis, but it’s fairly intuitive. We say that a variable is live if the current value that it holds may be used later. This is very important to Example 4:

let mut foo, bar;
let p = &foo;
// `p` is live here: its value may be used on the next line.
if condition {
    // `p` is live here: its value will be used on the next line.
    print(*p);
    // `p` is DEAD here: its value will not be used.
    p = &bar;
    // `p` is live here: its value will be used later.
}
// `p` is live here: its value may be used on the next line.
print(*p);
// `p` is DEAD here: its value will not be used.

Here you see a variable p that is assigned in the beginning of the program, and then maybe re-assigned during the if. The key point is that p becomes dead (not live) in the span before it is reassigned. This is true even though the variable p will be used again, because the value that is in p will not be used.
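This per-point liveness is computable with a standard backward dataflow pass. Here is a minimal Python sketch over Example 4; the point names follow the Block/Index notation used later in the post, and the use/def sets are read off the statements above:

```python
# Each point of Example 4's CFG: (variables used, variables defined, successors).
CFG = {
    "A/0": ((), ("p",), ("A/1",)),    # p = &foo
    "A/1": ((), (), ("B/0", "C/0")),  # if condition
    "B/0": (("p",), (), ("B/1",)),    # print(*p)
    "B/1": ((), (), ("B/2",)),        # ...
    "B/2": ((), ("p",), ("B/3",)),    # p = &bar
    "B/3": ((), (), ("B/4",)),        # ...
    "B/4": ((), (), ("C/0",)),        # goto C
    "C/0": (("p",), (), ("C/1",)),    # print(*p)
    "C/1": ((), (), ()),              # return
}

def liveness(cfg):
    # Backward dataflow, iterated to a fixed point:
    #   live_in(P) = use(P) ∪ (live_in(successors of P) \ def(P))
    live_in = {pt: set() for pt in cfg}
    changed = True
    while changed:
        changed = False
        for pt, (uses, defs, succs) in cfg.items():
            live_out = set().union(*(live_in[s] for s in succs)) if succs else set()
            new = set(uses) | (live_out - set(defs))
            if new != live_in[pt]:
                live_in[pt] = new
                changed = True
    return live_in

live = liveness(CFG)
# p comes out live on entry to exactly A/1, B/0, B/3, B/4, and C/0:
# dead before the reassignment at B/2, exactly as discussed above.
```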

So how does liveness relate to non-lexical lifetimes? The key rule is this: Whenever a variable is live, all references that it may contain are live. This is actually a finer-grained notion than just the liveness of a variable, as we will see. For example, the first assignment to p is &foo – we want foo to be borrowed everywhere that this assignment may later be accessed. This includes both print() calls, but excludes the period after p = &bar. Even though the variable p is live there, it now holds a different reference:

let foo, bar;
let p = &foo;
// `foo` is borrowed here, but `bar` is not
if condition {
    print(*p);
    // neither `foo` nor `bar` are borrowed here
    p = &bar;   // assignment 1
    // `foo` is not borrowed here, but `bar` is
}
// both `foo` and `bar` are borrowed here
print(*p);
// neither `foo` nor `bar` are borrowed here,
// as `p` is dead

Our analysis will begin with the liveness of a variable (the coarser-grained notion I introduced first). However, it will use reachability to refine that notion of liveness to obtain the liveness of individual values.

Control-flow graphs and point notation

Recall that in NLL-land, all reasoning about lifetimes and borrowing will take place in the context of MIR, in which programs are represented as a control-flow graph. This is what Example 4 looks like as a control-flow graph:

// let mut foo: i32;
// let mut bar: i32;
// let p: &i32;

A
[ p = &foo     ]
[ if condition ] ----\ (true)
       |             |
       |     B       v
       |     [ print(*p)     ]
       |     [ ...           ]
       |     [ p = &bar      ]
       |     [ ...           ]
       |     [ goto C        ]
       |             |
C      v
[ print(*p)    ]
[ return       ]

As a reminder, I will use a notation like Block/Index to refer to a specific point (statement) in the control-flow graph. So A/0 and B/2 refer to p = &foo and p = &bar, respectively. Note that there is also a point for the goto/return terminators of each block (i.e., A/1, B/4, and C/1).

Using this notation, we can say that we want foo to be borrowed during the points A/1, B/0, and C/0. We want bar to be borrowed during the points B/3, B/4, and C/0.

Defining the NLL analysis

Now that we have our two examples, let’s work on defining how the NLL analysis will work.

Step 0: What is a lifetime?

The lifetime of a reference is defined in our system to be a region of the control-flow graph. We will represent such regions as a set of points.

A note on terminology: For the remainder of this post, I will often use the term region in place of “lifetime”. Mostly this is because it’s the standard academic term and it’s often the one I fall back to when thinking more formally about the system, but it also feels like a good way to differentiate the lifetime of the reference (the region where it is in use) with the lifetime of the referent (the span of time before the underlying resource is freed).

Step 1: Instantiate erased regions

The plan for adopting NLL is to do type-checking in two phases. The first phase, which is performed on the HIR, I would call type elaboration. This is basically the “traditional type-system” phase. It infers the types of all variables and other things, figures out where autoref goes, and so forth; the result of this is the MIR.

The key change from today is that I want to do all of this type elaboration using erased regions. That is, until we build the MIR, we won’t have any regions at all. We’ll just keep a placeholder (which I’ll write as 'erased). So if you have something like &i32, the elaborated, internal form would just be &'erased i32. This is quite different from today, where the elaborated form includes a specific region. (However, this erased form is precisely what we want for generating code, and indeed MIR today goes through a “region erasure” step; this step would be unnecessary in the new plan, since MIR as produced by type check would always have fully erased regions.)

Once we have built MIR, then, the idea is roughly to go and replace all of these erased regions with inference variables. This means we’ll have region inference variables in the types of all local variables; it also means that for each borrow expression like &foo, we’ll have a region representing the lifetime of the resulting reference. I’ll write the expression together with this region like so: &'0 foo.

Here is what the CFG for Example 4 looks like with regions instantiated. You can see I used the variable '0 to represent the region in the type of p, and '1 and '2 for the regions of the two borrows:

// let mut foo: i32;
// let mut bar: i32;
// let p: &'0 i32;

A
[ p = &'1 foo  ]
[ if condition ] ----\ (true)
       |             |
       |     B       v
       |     [ print(*p)     ]
       |     [ ...           ]
       |     [ p = &'2 bar   ]
       |     [ ...           ]
       |     [ goto C        ]
       |             |
C      v
[ print(*p)    ]
[ return       ]

Step 2: Introduce region constraints

Now that we have our region variables, we have to introduce constraints. These constraints will come in two kinds:

  • liveness constraints; and,
  • subtyping constraints.

Let’s look at each in turn.

Liveness constraints.

The basic rule is this: if a variable is live on entry to a point P, then all regions in its type must include P.

Let’s continue with Example 4. There, we have just one variable, p. Its type has one region ('0) and it is live on entry to A/1, B/0, B/3, B/4, and C/0. So we wind up with a constraint like this:

{A/1, B/0, B/3, B/4, C/0} <= '0

We also include a rule that for each borrow expression like &'1 foo, '1 must include the point of borrow. This gives rise to two further constraints in Example 4:

{A/0} <= '1
{B/2} <= '2

Location-aware subtyping constraints

The next thing we do is to go through the MIR and establish the normal subtyping constraints. However, we are going to do this with a slight twist, which is that we are going to take the current location into account. That is, instead of writing T1 <: T2 (T1 is required to be a subtype of T2) we will write (T1 <: T2) @ P (T1 is required to be a subtype of T2 at the point P). This in turn will translate to region constraints like (R2 <= R1) @ P.

Continuing with Example 4, there are a number of places where subtyping constraints arise. For example, at point A/0, we have p = &'1 foo. Here, the type of &'1 foo is &'1 i32, and the type of p is &'0 i32, so we have a (location-aware) subtyping constraint:

(&'1 i32 <: &'0 i32) @ A/1

which in turn implies

('0 <= '1) @ A/1 // Note the order is reversed.

Note that the point here is A/1, not A/0. This is because A/1 is the first point in the CFG where this constraint must hold on entry.

The meaning of a region constraint like ('0 <= '1) @ P is that, starting from the point P, the region '1 must include all points that are reachable without leaving the region '0. The implementation basically does a depth-first search starting from P; the search stops if we exit the region '0. Otherwise, for each point we find, we add it to '1.

Jumping back to example 4, we wind up with two constraints in total. Combining those with the liveness constraint, we get this:

('0 <= '1) @ A/1
('0 <= '2) @ B/3
{A/1, B/0, B/3, B/4, C/0} <= '0
{A/0} <= '1
{B/2} <= '2

We can now try to find the smallest values for '0, '1, and '2 that will make this true. The result is:

'0 = {A/1, B/0, B/3, B/4, C/0}
'1 = {A/0, A/1, B/0, C/0}
'2 = {B/2, B/3, B/4, C/0}

These results are exactly what we wanted. The variable foo is borrowed for the region '1, which does not include B/3 and B/4. This is true even though '0 includes those points; this is because you cannot reach B/3 and B/4 from A/1 without going through B/1, and '0 does not include B/1 (because p is not live at B/1). Similarly, bar is borrowed for the region '2, which begins at B/2 and extends to C/0 (and need not include earlier points, which are not reachable).
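To make the propagation step concrete, here is a toy Python sketch (my own illustration, not the prototype's code) that solves exactly the constraints above: the liveness constraints seed each region, and each location-aware constraint is applied by a depth-first search that stops on leaving the subregion:

```python
# Successor edges of Example 4's control-flow graph.
SUCC = {
    "A/0": ["A/1"], "A/1": ["B/0", "C/0"],
    "B/0": ["B/1"], "B/1": ["B/2"], "B/2": ["B/3"],
    "B/3": ["B/4"], "B/4": ["C/0"], "C/0": ["C/1"], "C/1": [],
}

# Liveness constraints seed each region with the points it must contain.
regions = {
    "'0": {"A/1", "B/0", "B/3", "B/4", "C/0"},
    "'1": {"A/0"},
    "'2": {"B/2"},
}

def constrain(sub, sup, start):
    """Apply (sub <= sup) @ start: add to `sup` every point reachable
    from `start` without leaving `sub` (a simple depth-first search)."""
    stack, seen = [start], set()
    while stack:
        pt = stack.pop()
        if pt in seen or pt not in regions[sub]:
            continue  # stop walking as soon as we step outside `sub`
        seen.add(pt)
        regions[sup].add(pt)
        stack.extend(SUCC[pt])

constrain("'0", "'1", "A/1")  # ('0 <= '1) @ A/1
constrain("'0", "'2", "B/3")  # ('0 <= '2) @ B/3
# '1 ends up as {A/0, A/1, B/0, C/0}; '2 as {B/3, B/4, C/0} plus the
# borrow point B/2 that its liveness constraint {B/2} <= '2 demands.
```

A general solver would iterate such constraints to a fixed point, since growing one region can enlarge the reachable set of another; for this example a single pass suffices.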

You may wonder why we do not have to include all points in '0 in '1. Intuitively, the reasoning here is based on liveness: '1 must ultimately include all points where the reference may be accessed. In this case, the subregion constraint arises because we are copying a reference (with region '1) into a variable (let’s call it x) whose type includes the region '0, so we need reads of '0 to also be counted as reads of '1, but, crucially, only those reads that may observe this write. Because of the liveness constraints we saw earlier, if x will later be read, then x must be live along the path from this copy to that read (by the definition of liveness, essentially). Therefore, because the variable is live, '0 will include that entire path. Hence, by including the points in '0 that are reachable from the copy (without leaving '0), we include all potential reads of interest.


This post presents a system for computing non-lexical lifetimes. It assumes that all regions are erased when MIR is created. It uses only simple compiler concepts, notably liveness, but extends the subtyping relation to take into account where the subtyping must hold. This allows it to disregard unreachable portions of the control-flow.

I feel pretty good about this iteration. Among other things, it seems so simple I can’t believe it took me this long to come up with it. This either means that it is the right thing or I am making some grave error. If it’s the latter, people will hopefully point it out to me. =) It also seems to be efficiently implementable.

I want to emphasize that this system is the result of a lot of iteration with a lot of people, including (but not limited to) Cameron Zwarich, Ariel Ben-Yehuda, Felix Klock, Ralf Jung, and James Miller.

It’s interesting to compare this with various earlier attempts:

  • Our earliest thoughts assumed continuous regions (e.g., RFC 396). The idea was that the region for a reference ought to correspond to some continuous bit of control-flow, rather than having “holes” in the middle.
    • The example in this post shows the limitation of this, however. Note that the region for the variable p includes B/0 and B/4 but excludes B/1.
    • This is why we lean on liveness requirements instead, so as to ensure that the region contains all paths from where a reference is created to where it is eventually dereferenced.
  • An alternative solution might be to consider continuous regions but apply an SSA or SSI transform.
    • This allows the example in this post to type, but it falls down on more advanced examples, such as vec-push-ref (hat tip, Cameron Zwarich). In particular, it’s possible for subregion relations to arise without a variable being redefined.
    • You can go farther, and give variables a distinct type at each point in the program, as in Ericson2314’s stateful MIR for Rust. But even then you must contend with invariance or you have the same sort of problems.
    • Exploring this led to the development of the “localized” subregion relationship constraint (r1 <= r2) @ P, which I had in mind in my original series but which we elaborated more fully at the rustc design sprint.
    • The change in this post versus what we said at the sprint is that I am using one type per variable instead of one type per variable per statement; I am also explicitly using the results of an earlier liveness analysis to construct the constraints, whereas in the sprint we incorporated the liveness into the region inference itself (by reasoning about which values were live across each individual statement and thus creating many more inequalities).

There are some things I’ve left out of this post. Hopefully I will get to them in future posts, but they all seem like relatively minor twists on this base model.

  • I’d like to talk about how to incorporate lifetime parameters on fns (I think we can do that in a fairly simple way by modeling them as regions in an expanded control-flow graph, as illustrated by this example in my prototype).
  • There are various options for modeling the “deferred borrows” needed to accept vec.push(vec.len()).
  • We might consider a finer-grained notion of liveness that operates not on variables but rather on the “fragments” (paths) that we use when doing move-checking. This would help to make let (p, q) = (&foo, &bar) and let pair = (&foo, &bar) entirely equivalent (in the system as I described it, they are not, because whenever pair is live, both foo and bar would be borrowed, even if only pair.0 is ever used). But even if we do this there will still be cases where storing pointers into an aggregate (e.g., a struct) can lose precision versus using variables on the stack, so I’m not sure it’s worth worrying about.

Comments? Let’s use this old internals thread.

Firefox NightlyFosdem 2017 Nightly slides and video


FOSDEM is a two-day event organised by volunteers to promote the widespread use of free and open source software.

Every year in February, Mozillians from all over the world go to Belgium to attend Fosdem, the biggest Free and Open Source Software event in Europe with over 5,000 developers and Free Software advocates attending.

Mozilla has its own Developer Room and a booth, and many of our projects were presented. A significant part of the Firefox Release Management team attended the event, and we had the opportunity to present the Firefox Nightly Reboot project in our Developer Room on Saturday to a crowd of Mozillians and visitors interested in Mozilla and the future of Firefox.

Here are the slides of my presentation and this is the video recording of my talk:

With Mozilla volunteers (thanks guys!), we also heavily promoted the use of Nightly on the Mozilla booth over the two days of the event.

Mozilla booth Fosdem 2017

We had some interesting Nightly-specific feedback such as:

  • Many visitors thought that the Firefox Dev Edition was actually Nightly (promoted to developers, dark theme, updates daily).
  • Some users mentioned that they prefer to use the Dev Edition or Beta over Nightly, not because of a concern about stability, but because they find the update window that pops up if you don’t update daily to be annoying.
  • People were very positive about Firefox and wanted to help Mozilla but said they lacked time to get involved. So they were happy to know that just using Firefox Nightly with telemetry activated and sending crash reports is already a great way to help us.

In a nutshell, this event was really great, we probably spoke to a hundred developers about Nightly and it was almost as popular on the booth as Rust (people really love Rust!).

Do you want to talk about Nightly yourself?

Of course my slides can be used as a basis for your own presentations to promote the use of Nightly to power users and our core community, whether at open source events in your region or at events organized by Mozilla Clubs!

The slides use reveal.js as a presentation framework and only need a browser to be displayed. You can download the tar.gz/zip archive of the slides or pull them from github with this command:
git clone -b nightly_fosdem_2017

Anjana VakilNotes from KatsConf2

Hello from Dublin! Yesterday I had the privilege of attending KatsConf2, a functional programming conference put on by the fun-loving, welcoming, and crazy-well-organized @FunctionalKats. It was a whirlwind of really exciting talks from some of the best speakers around. Here’s a glimpse into what I learned.

I took a bunch of notes during the talks, in case you’re hungering for more details. But @jessitron took amazing graphical notes that I’ve linked to in the talks below, so just go read those!

And for the complete experience, check out this storify of the #KatsConf2 tweets, put together by Vicky Twomey-Lee, who led a great ally skills workshop the evening before the conference:


Hopefully this gives you an idea of what was said and which brain-exploding things you should go look up now! Personally it opened up a bunch of cans of worms for me - definitely a lot of the material went over my head, but I have a ton of stuff to go find out more (i.e. the first thing) about.

Disclaimer: The (unedited!!!) notes below represent my initial impressions of the content of these talks, jotted down as I listened. They may or may not be totally accurate, or precisely/adequately represent what the speakers said or think, and the code examples are almost certainly mistake-ridden. Read at your own risk!

The origin story of FunctionalKats

FunctionalKatas => FunctionalKats => (as of today) FunctionalKubs

  • Meetups in Dublin & other locations
  • Katas for solving programming problems in different functional languages
  • Talks about FP and related topics
  • Welcome to all, including beginners

The Perfect Language

Bodil Stokke @bodil

What would the perfect programming language look like?

“MS Excel!” “Nobody wants to say ‘JavaScript’ as a joke?” “Lisp!” “I know there are Clojurians in the audience, they’re suspiciously silent…”

There’s no such thing as the perfect language; languages are about compromise.

What the perfect language actually is is a personal thing.

I get paid to make whatever products I feel like, to make life better for programmers. So I thought: I should design the perfect language.

What do I want in a language?

It should be hard to make mistakes

On that note let’s talk about JavaScript. It was designed to be easy to get into, and not to place too many restrictions on what you can do. But this means it’s easy to make mistakes & get unexpected results (cf. crazy stuff that happens when you add different things in JS). By restricting the types of inputs/outputs (see TypeScript), we can throw errors for incorrect input types - error messages may look like the compiler yelling at you, but really they’re saving you a bunch of work later on by telling you up front.
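A few of the classic surprises (standard JavaScript semantics, easy to verify in any console):

```javascript
// JavaScript's implicit coercion makes "adding different things" legal
// but surprising - exactly the class of mistake a type checker catches.
console.log(1 + "2");     // "12"  (number coerced to string)
console.log("5" - 1);     // 4     (string coerced to number for -)
console.log(true + true); // 2     (booleans coerce to numbers)
console.log([] + []);     // ""    (arrays coerce to empty strings)
```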

Let’s look at PureScript

Category theory! Semiring: something like addition/multiplication that has commutativity (a+b == b+a). Semigroup: …?

There should be no ambiguity

1 + 2 * 3


(+ 1 (* 2 3))

Pony: 1 + (2 * 3) – have to use parentheses to make precedence explicit

It shouldn’t make you think

Joe made a language at Ericsson in the late 80’s called “Erlang”. This is a gif of Joe from the Erlang movie. He’s my favorite movie star.

Immutability: In Erlang, values and variable bindings never change. At all.

This takes away some cognitive overhead (because we don’t have to think about what value a variable has at the moment)

Erlang tends to essentially fold over state: the old state is an input to the function and the new state is an output.

The “abstraction ceiling”

This term has to do with being able to express abstractions in your language.

Those of you who don’t know C: you don’t know what you’re missing, and I urge you not to find out. If garbage collection is a thing you don’t have to worry about in your language, that’s fantastic.

Elm doesn’t really let you abstract over the fact that e.g. map over array, list, set is somehow the same type of operation. So you have to provide 3 different variants of a function that can be mapped over any of the 3 types of collections. This is a bit awkward, but Elm programmers tend not to mind, because there’s a tradeoff: the fact that you can’t do this makes the type system simple so that Elm programmers get succinct, helpful error messages from the compiler.

I was learning Rust recently and I wanted to be able to express this abstraction. If you have a Collection trait, you can express that you take in a Collection and return a Collection. But you can’t specify that the output Collection has to be the same type as the incoming one. Rust doesn’t yet have the ability to express this, but they’re trying to add it.

We can do this in Haskell, because we have functors. And that’s the last time I’m going to use a term from category theory, I promise.

On the other hand, in a language like Lisp you can use its metaprogramming capabilities to raise the abstraction ceiling in other ways.


I have a colleague and when I suggested using OCaml as an implementation language for our utopian language, she rejected it because it was 50% slower than C.

In slower languages like Python or Ruby you tend to have performance-critical code written in a lower-level language like C.

But my feeling is that in theory, we should be able to take a language like Haskell and build a smarter compiler that can be more efficient.

But the problem is that we’re designing languages that are built on the lambda calculus and so on, but the machines they’re implemented on are not built on that idea, but rather on the Von Neumann architecture. The computer has to do a lot of contortions to take the beautiful lambda calculus idea and convert it into something that can run on an architecture designed from very different principles. This obviously complicates writing a performant and high-level language.

Rust wanted to provide a language as high-level as possible, but with zero-cost abstractions. So instead of garbage collection, Rust has a type-system-assisted kind of clean up. This is easier to deal with than the C version.

If you want persistent data structures a la Erlang or Clojure, they can be pretty efficient, but simple mutation is always going to be more efficient; we couldn’t do PDSs natively.

Suppose you have a language that’s low-level enough to have zero-cost abstractions, but you can plug in something like garbage collection, currying, perhaps extend the type system, so that you can write high-level programs using that functionality, even though it’s not actually part of the language, just a library. I have no idea how to do this but it would be really cool.

Summing up

You need to think about:

  • Ergonomics
  • Abstraction
  • Efficiency
  • Tooling (often forgotten at first, but very important!)
  • Community (Code sharing, Documentation, Education, Marketing)

Your language has to be open source. You can make a proprietary language, and you can make it succeed if you throw enough money at it, but even the successful historical examples of that were eventually open-sourced, which enabled their continued use. I could give a whole other talk about open source.

Functional programming & static typing for server-side web

Oskar Wickström @owickstrom

FP has been influencing JavaScript a lot in the last few years. You have ES6 functional features, libraries like Underscore, Ramda, etc., products like React with FP/FRP at their core, and JS as a compile target for functional languages.

But the focus is still client-side JS.

Single page applications: using the browser to write apps more like you wrote desktop apps before. Not the same model as perhaps the web browser was intended for at the beginning.

Lots of frameworks to choose from: Angular, Ember, Meteor, React, et al. Without JS on the client, you get nothing.

There’s been talk recently of “isomorphic” applications: one framework which runs exactly the same way on the server and the client. The term is sort of stolen & not used in the same way as in category theory.

Static typing would be really useful for middleware, which is a common abstraction but very easy to mess up if dynamically typed. In Clojure if you mess up the middleware you get the Stack Trace of Doom.

Let’s use extensible records in PureScript - shout out to Edwin’s talk related to this. That inspired me to implement this in PureScript, which started this project called Hyper which is what I’m working on right now in my free time.


  • Safe HTTP middleware architecture
  • Make effects of middleware explicit
  • No magic


  • Track middleware effects in type system
  • leverage extensible records in PureScript
  • Provide a common API for middleware
  • Write middleware that can work on multiple backends


  • Conn: sort of like in Elixir, instead of passing a request and returning a response, pass them all together as a single unit
  • Middleware: a function that takes a connection c and returns another connection type c’ inside another type m
  • Indexed monads: similar to a state monad, but with two additional parameters: the type of the state before this action, and the type after. We can use this to prohibit effectful operations which aren’t correct.
  • Response state transitions: Hyper uses phantom types to track the state of response, guaranteeing correctness in response side effects

Functional Program Derivation


(Amazing, hand-drawn, animal-filled) Slides

Program derivation:

Problem => Specification => Derivation => Program

Someone said this is like “refactoring in reverse”

Generalization: introduce parameters instead of constant values

Induction: prove something for a base case and a first step, and you’ve proven it for all numbers

Induction hypothesis: when proving the result for step n, you may assume it already holds for step n-1.

With these elements, we have a program! We just make an if/else: e.g. for sum(n), if n == 0: return 0; else return sum(n-1) + n

It all comes down to writing the right specification: which is where we need to step away from the keyboard and think.

Induction is the basis of recursion.

We can use induction to create a specification for sorting lists from which we can derive the QuickSort algorithm.

But we get 2 sorting algorithms for the price of 1: if we place a restriction that we can only do one recursive call, we can tweak the specification to derive InsertionSort, thus proving that Insertion Sort is a special case of Quick Sort.

I stole this from a PhD dissertation (“Functional Program Derivation”). This is all based on program derivation work by Dijkstra.


  • Programming == Math. Practicing some basic math is going to help you write code, even if you won’t be doing these kinds of exercises in your day-to-day work
  • Calculations provide insight
  • Delay choices where possible. Say “let’s assume a solution to this part of the problem” and then go back and solve it later.

I’m writing a whole book on this, if you’re interested in giving feedback on chapter drafts let me know! mail at felienne dot com


  • Is there a link between the specification and the complexity of the program? Yes, the specification has implications for implementation. The choices you make within the specification (e.g. caching values, splitting computation) affect the efficiency of the program.
  • What about proof assistants? Those are nice if you’re writing a dissertation or whatnot, but if you’re at the stage where you’re practicing this, the exercise is being precise, so I recommend doing this on paper. The second your fingers touch the keyboard, you can outsource your preciseness to the computer.
  • Once you’ve got your specification, how do you ensure that your program meets it? One of the things you could do is write the spec in something like FsCheck, or you could convert the specification into tests. Testing and specification really enrich each other. Writing tests as a way to test your specification is also a good way to go. You should also have some cases for which you know, or have an intuition of, the behavior. But none of this is supposed to go in a machine; it’s supposed to be on paper.

The cake and eating it: or writing expressive high-level programs that generate fast low-level code at runtime

Nada Amin @nadamin

Distinguish stages of computation

  • Program generator: basic types (Int, String, T) are executed at code generation time
  • Rep(Int), Rep(String), Rep(T) are left as variables in the generated code and executed at program run time(?)

Shonan Challenge for Generative Programming - part of the generative programming for HPC literature: you want to generate code that is specialized to a particular matrix

  • Demo of generating code to solve this challenge

Generative Programming Patterns

  • Deep linguistic reuse
  • Turning interpreters into compilers
    • You can think of staging as something which generates code, and an interpreter as something which takes code and additional input and produces a result.
    • Putting them together we get something that takes code and symbolic input, and in the interpret stage generates code which takes actual input, which in the execution stage produces a result
    • This idea dates back to 1971, Futamura’s Partial Evaluation
  • Generating efficient low-level code
    • e.g. for specialized parsers
    • We can take an efficient HTTP parser from 2000+ lines to 200, with parser combinators
    • But while this is great for performance, it leaves big security holes
    • So we can use independent tools to verify the generated code after the fact

Sometimes generating code is not the right solution to your problem

More info on the particular framework I’m using: Generative Programming for ‘Abstraction without Regret’

Rug: an External DSL for Coding Code Transformations (with Scala Parser-Combinators)

Jessica Kerr @jessitron, Atomist

The last talk was about abstraction without (performance) regret. This talk is about abstraction without the regret of making your code harder to read.

Elm is a particularly good language to modify automatically, because it’s got some boilerplate, but I love that boilerplate! No polymorphism, no type classes - I know exactly what that code is going to do! Reading it is great, but writing it can be a bit of a headache.

As a programmer I want to spend my time thinking about what the users need and what my program is supposed to do. I don’t want to spend my time going “Oh no, i forgot to put that thing there”.

Here’s a simple Elm program that prints “Hello world”. The goal is to write a program that modifies this existing Elm code and changes the greeting that we print.

We’re going to do this with Scala. The goal is to generate readable code that I can later go ahead and change. It’s more like a templating engine, but instead of starting with a templating file it starts from a cromulent Scala program.

Our goal is to parse an Elm file into a parse tree, which gives us the meaningful bits of that file.

The “parser” in parser combinators is actually a combination of lexer and parser.

Reuse is dangerous, dependencies are dangerous, because they create coupling. (Controlled, automated) Cut & Paste is a safer solution.

at which point @jessitron does some crazy fast live coding to write an Elm parser in Scala

Rug is the super-cool open-source project I get to work on as my day job now! It’s a framework for creating code rewriters

Repo for this talk

In conclusion: any time my job feels easy, I think “OMG I’m doing it wrong”. But I don’t want to introduce abstraction into my code, because someone else is going to have difficulty reading that. I want to be able to abstract without sacrificing code readability. I can make my job faster and harder by automating it.

Relational Programming

Will Byrd

There are many programming paradigms that don’t get enough attention. The one I want to talk about today is Relational Programming. It’s somewhat representative of Logic Programming, like Prolog. I want to show you what can happen when you commit fully to the paradigm, and see where that leads us.

Functional Programming is a special case of Relational Programming, as we’re going to see in a minute.

What is functional programming about? There’s a hint in the name. It’s about functions, the idea that representing computation in the form of mathematical functions could be useful. Because you can compose functions, you don’t have to reason about mutable state, etc. - there are advantages to modeling computation as mathematical functions.

In relational programming, instead of representing computation as functions we represent it as relations. You can think of a relation in many ways: in terms of relational databases, or in terms of tuples where we want to reason over sets or collections of tuples, or in terms of algebra - like high school algebra - where we have variables representing unknown quantities and we have to figure out their values. We’ll see that we can get FP as a special case - there’s a different set of tradeoffs - but we’ll see that when you commit fully to this paradigm you can get some very surprising behavior.

Let’s start in our functional world, we’re going to write a little program in Scheme or Racket, a little program to manipulate lists. We’ll just do something simple like append or concatenate. Let’s define append in Scheme:

(define append
  (lambda (l s)
    (if (null? l)
        s
        (cons (car l) (append (cdr l) s)))))

We’re going to use a relational programming language called miniKanren, which is basically an extension that has been implemented for lots of host languages, and which allows us to put in variables representing unknown values and ask miniKanren to fill those values in.

So I’m going to define appendo. (By convention we define our names ending in -o, it’s kind of a long story, happy to explain offline.)

Writes a bunch of Kanren that we don’t really understand

Now I can do:

> (run 1 (q) (appendo '(a b c) '(d e) q))
((a b c d e))

So far, not very interesting, if this is all it does then it’s no better than append. But where it gets interesting is that I can run it backwards to find an input:

> (run 1 (X) (appendo '(a b c) X '(a b c d e)))
((d e))

Or I can ask it to find N possible inputs:

> (run 2 (X Y) (appendo X Y '(a b c d e)))
((a b c d) (e))
((a b c d e) ())

Or all possible inputs:

> (run* (X Y) (appendo X Y '(a b c d e)))
((a b c d) (e))
((a b c d e) ())
...and the rest of the six possible splits...

What happens if I do this?

> (run* (X Y Z) (appendo X Y Z))

It will run forever. This is sort of like a database query, except where the tables are infinite.

One program we could write is an interpreter, an evaluator. We’re going to take an eval that’s written in MiniKanren, which is called evalo and takes two arguments: the expression to be evaluated, and the value of that expression.

> (run 1 (q) (evalo '(lambda (x) x) q))
((closure x x ()))
> (run 1 (q) (evalo '(list 'a) q))
((a))

A professor wrote a Valentine’s Day post, “99 ways to say ‘I love you’ in Racket”, to teach people Racket by showing 99 different Racket expressions that evaluate to the list `(I love you)`.

> (run 99 (q) (evalo q '(I love you)))
...99 ways...

What about quines: a quine is a program that evaluates to itself. How could we find or generate a quine?

> (run 1 (q) (evalo q q))

And twines: two different programs p and q where p evaluates to q and q evaluates to p.

> (run 1 (p q) (=/= p q) (evalo p q) (evalo q p))
...two expressions that basically quote/unquote themselves...

What would happen if we run Scheme’s append in our evaluator?

> (run 1 (q)
    (evalo
      `(letrec ((append
                  (lambda (l s)
                    (if (null? l)
                        s
                        (cons (car l) (append (cdr l) s))))))
         (append '(a b c) '(d e)))
      q))
((a b c d e))

But we can put the variable also inside the definition of append:

> (run 1 (q)
    (evalo
      `(letrec ((append
                  (lambda (l s)
                    (if (null? l)
                        s
                        (cons (car l) (append (cdr l) ,q))))))
         (append '(a b c) '(d e)))
      '(a b c d e)))

Now we’re starting to synthesize programs, based on specifications. When I gave this talk at PolyConf a couple of years ago Jessitron trolled me about how long it took to run this, since then we’ve gotten quite a bit faster.

This is a tool called Barliman that I (and Greg Rosenblatt) have been working on, and it’s basically a frontend, a dumb GUI to the interpreter we were just playing with. It’s just a prototype. We can see a partially specified definition - a Scheme function that’s partially defined, with metavariables that are fill-in-the-blanks for some Scheme expressions that we don’t know what they are yet. Barliman’s going to guess what the definition is going to be.

(define ,A
    (lambda ,B

Now we give Barliman a bunch of examples. Like (append '() '()) gives '(). It guesses what the missing expressions were based on those examples. The more test cases we give it, the better approximation of the program it guesses. With 3 examples, we can get it to correctly guess the definition of append.

Yes, you are going to lose your jobs. Well, some people are going to lose their jobs. This is actually something that concerns me, because this tool is going to get a lot better.

If you want to see the full dog & pony show, watch the ClojureConj talk I gave with Greg.

Writing the tests is indeed the harder part. But if you’re already doing TDD or property-based testing, you’re already writing the tests, why don’t you just let the computer figure out the code for you based on those tests?

Some people say this is too hard, the search space is too big. But that’s what they said about Go, and it turns out that if you use the right techniques plus a lot of computational power, Go isn’t as hard as we thought. I think in about 10-15 years program synthesis won’t be as hard as we think now. We’ll have much more powerful IDEs, much more powerful synthesis tools. It could even tell you as you’re writing your code whether it’s inconsistent with your tests.

What this will do for jobs, I don’t know. I don’t know, maybe it won’t pan out, but I can no longer tell you that this definitely won’t work. I think we’re at the point now where a lot of the academic researchers are looking at a bunch of different parts of synthesis, and no one’s really combining them, but when they do, there will be huge breakthroughs. I don’t know what it’s going to do, but it’s going to do something.

Working hard to keep things lazy

Raichoo @raichoo

Without laziness, we waste a lot of space, because when we have recursion we have to keep allocating memory for each evaluated thing. Laziness allows us to get around that.

What is laziness, from a theoretical standpoint?

The first thing we want to talk about is different ways to evaluate expressions.

> f x y = x + y
> f (1 + 1) (2 + 2)  

How do we evaluate this?

=> (1 + 1) + (2 + 2)
=> 2 + 4
=> 6  

This evaluation order was normal order

Church-Rosser Theorem: the order of evaluation doesn’t matter for the result; if evaluation terminates, a lambda expression will ultimately evaluate to the same normal form.

But! We have things like non-termination, and termination can only be determined after the fact.

Here’s a way we can think of types: Let’s think of a Boolean as something which has three possible values: True, False, and “bottom”, which represents not-yet-determined, a computation that hasn’t ended yet. True and False are more defined than bottom (e.g. _|_ <= True). Partial ordering.

Monotone functions: if we have a function that takes a Bool and returns a Bool, and x and y are bools where x <= y, then f x <= f y. We can now show that f _|_ = True and f x = False doesn’t work out, because it would have the consequence that True => False, which doesn’t work - that’s a good thing, because if it did, we would have solved the halting problem. What’s nice here is that if we write a function and evaluate it in normal order, in the lazy way, then this naturally works out.

Laziness is basically non-strictness (this normal order thing I’ve been talking about the whole time), and sharing.

Laziness lets us reuse code and use combinators. This is something I miss from Haskell when I use any other language.

Honorable mention: Purely Functional Data Structures by Chris Okasaki. When you have Persistent Data Structures, you need laziness to have this whole amortization argument going on. This book introduces its own dialect of ML (lazy ML).

How do we do laziness in Haskell (in GHC)? GHC compiles Haskell down to an intermediate language called STG, where the lazy evaluation machinery (thunks and their updates) is made explicit.

Total Functional Programming

Edwin Brady @edwinbrady

Idris is a pure functional language with dependent types. It’s a “total” language, which means you have program totality: a program either terminates with a result, or provably keeps producing new results.

Goals are:

  • Encourage type-driven development
  • Reduce the cost of writing correct software - giving you more tools to know up front that the program will do the correct thing.

People on the internet say, you can’t do X, you can’t do Y in a total language. I’m going to do X and Y in a total language.

Types become plans for a program. Define the type up front, and use it to guide writing the program.

You define the program interactively. The compiler should be less like a teacher, and more like a lab assistant. You say “let’s work on this” and it says “yes! let me help you”.

As you go, you need to refine the type and the program as necessary.

Test-driven development has “red, green, refactor”. We have “type, define, refine”.

If you care about types, you should also care about totality. You don’t have a type that completely describes your program unless your program is total.

Given f : T: if program f is total, we know that it will always give a result of type T. If it’s partial, we only know that if it gives a result, it will be type T, but it might crash, run forever, etc. and not give a result.

The difference between total and partial functions in this world: if it’s total, we can think of it as a Theorem.

Idris can tell us whether or not it thinks a program is total (though we can’t be sure, because we haven’t solved the halting problem “yet”, as a student once wrote in an assignment). If I write a program that type checks but Idris thinks it’s possibly not total, then I’ve probably done the wrong thing. So in my Idris code I can tell it that some function I’m defining should be total.

I can also tell Idris that if I can prove something that’s impossible, then I can basically deduce anything, e.g. an alt-fact about arithmetic. We have the absurd keyword.

We have Streams, where a Stream is sort of like a list without nil, so potentially infinite. As far as the runtime is concerned, this means it is lazy, even though Idris is otherwise strict.

Idris uses IO like Haskell to write interactive programs. IO is a description of actions that we expect the program to make(?). If you want to write interactive programs that loop, this stops it being total. But we can solve this by describing looping programs as a stream of IO actions. We know that the potentially-infinite loops are only going to get evaluated when we have a bit more information about what the program is going to do.

Turns out, you can use this to write servers, which run forever and accept responses, which are total. (So the people on the internet are wrong).

Check out David Turner’s paper “Elementary Strong Functional Programming”, where he argues that totality is more important than Turing-completeness, so if you have to give up one you should give up the latter.

Book coming out: Type-Driven Development with Idris

Gervase MarkhamTechnology Is More Like Magic Than Like Science

So said Peter Kreeft, commenting on three very deep sentences from C.S. Lewis on the problems and solutions of the human condition.

Suppose you are asked to classify four things –

  • religion,
  • science,
  • magic, and
  • technology.

– and put them into two categories. Most people would choose “religion and magic” and “science and technology”. Read Justin Taylor’s short article to see why the deeper commonalities are between “religion and science” and “magic and technology”.

Chris CooperBeing productive when distributed teams get together, take 2

Every year, hundreds of release engineers swim upstream because they’re built that way.

Last week, we (Mozilla release engineering) had a workweek in Toronto to jumpstart progress on the TaskCluster (TC) migration. After the success of our previous workweek for release promotion, we were anxious to try the same format once again and see if we could realize any improvements.

Prior preparation prevents panic

We followed all of the recommendations in the Logistics section of Jordan’s post to great success.

Keeping developers fed & watered is an integral part of any workweek. If you ever want to burn a lot of karma, try building consensus between 10+ hungry software developers about where to eat tonight, and then finding a venue that will accommodate you all. Never again; plan that shit in advance. Another upshot of advance planning is that you can also often go to nicer places that cost the same or less. Someone on your team is a (closet) foodie, or is at least a local. If it’s not you, ask that person to help you with the planning.

What stage are you at?

The workweek in Vancouver benefitted from two things:

  1. A week of planning at the All-Hands in Orlando the month before; and,
  2. Rail flying out to Vancouver a week early to organize much of the work to be done.

For this workweek, it turned out we were still at the planning stage, but that’s totally fine! Never underestimate the power of getting people on the same page. Yes, we did do *some* hacking during the week. Frankly, I think it’s easier to do the hacking bit remotely, but nothing beats a bunch of engineers in a room in front of a whiteboard for planning purposes. As a very distributed team, we rarely have that luxury.

Go with it

…which brings me to my final observation. Because we are a very distributed team, opportunities to collaborate in person are infrequent at best. When you do manage to get a bunch of people together in the same room, you really do need to go with discussions and digressions as they develop.

This is not to say that you shouldn’t facilitate those discussions, timeboxing them as necessary. If I have one nit to pick with Jordan’s post it’s that the “Operations” role would be better described as a facilitator. As a people manager for many years now, this is second-nature to me, but having someone who understands the problem space enough to know “when to say when” and keep people on track is key to getting the most out of your time together.

By and large, everything worked out well in Toronto. It feels like we have a really solid format for workweeks going forward.

Air MozillaWebdev Beer and Tell: February 2017

Webdev Beer and Tell: February 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Christian HeilmannMy closing keynote of the Tweakers DevSummit – slides and resources

Yesterday I gave the closing keynote of the Tweakers Developer Summit in Utrecht, The Netherlands. The conference topic was “Webdevelopment – Coding the Universe” and the organisers asked me to give a talk about Machine Learning and what it means for developers in the nearer future. So I took out my crystal ball 🔮 and whipped up the following talk:

Here are the resources covered in the talk:

Yes, this was a lot – maybe too much – for one talk, but the feedback I got was all very positive, so I am hoping for the video to come out soon.

Karl Dubost[worklog] Edition 055 - Tiles on a wall

webcompat life

With all these new issues, it's sometimes difficult to follow everything. The team has grown. We are more active on more fronts, and the unwritten rules we had when we were 3 or 4 persons no longer work. As we grow, we need a bit more process, and slightly more dedicated areas for each of us. It will help us avoid work conflicts and make progress smoothly, defining responsibilities and empowering people. This week, tiles were put up in my house over two days. After the first day, the tiles were there on the wall without the grout in between. I started to regret the choice of tiles. Too big, too obvious. But the day after, the grout was put in place between the tiles and everything made sense. Everything was elegant and how we had envisioned it.

webcompat issues dev


Niko MatsakisProject idea: datalog output from rustc

I want to have a tool that would enable us to answer all kinds of queries about the structure of Rust code that exists in the wild. This should cover everything from syntactic queries like “How often do people write let x = if { ... } else { match foo { ... } }?” to semantic queries like “How often do people call unsafe functions in another module?” I have some ideas about how to build such a tool, but (I suspect) not enough time to pursue them. I’m looking for people who might be interested in working on it!

The basic idea is to build on Datalog. Datalog, if you’re not familiar with it, is a very simple scheme for relating facts and then performing analyses on them. It has a bunch of high-performance implementations, notably souffle, which is also available on GitHub. (Sadly, it generates C++ code, but maybe we’ll fix that another day.)

Let me work through a simple example of how I see this working. Perhaps we would like to answer the question: How often do people write tests in a separate file (foo/ versus an inline module (mod test { ... })?

We would (to start) have some hacked up version of rustc that serializes the HIR in Datalog form. This can include as much information as we would like. To start, we can stick to the syntactic structures. So perhaps we would encode the module tree via a series of facts like so:

// links a module with the id `id` to its parent `parent_id`
ModuleParent(id, parent_id).
ModuleName(id, name).

// specifies the file where a given `id` is located
File(id, filename).

So for a module structure like:

// foo/
mod test;

// foo/
fn test() { }

we might generate the following facts:

// module with id 0 has name "" and is in foo/
ModuleName(0, "").
File(0, "foo/").

// module with id 1 is in foo/,
// and its parent is module with id 0.
ModuleName(1, "test").
ModuleParent(1, 0).
File(1, "foo/").

Then we can write a query to find all the modules named test which are in a different file from their parent module:

// module T is a test module in a separate file if...
TestModuleInSeparateFile(T) :-
    // ...the name of module T is test, and...
    ModuleName(T, "test"),
    // is in the file T_File... 
    File(T, T_File),
    // has a parent module P, and...
    ModuleParent(T, P),
    // ...the parent module P is in the file P_File... 
    File(P, P_File),
    // ...and file of the parent is not the same as the file of the child.
    T_File != P_File.
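Since the Datalog syntax above is admittedly hand-waved, here is one way to sanity-check the rule in plain Python, with the join written out by hand. The module ids follow the example above, but the file names are hypothetical stand-ins (the actual paths would differ), chosen so that the child module really does live in a different file than its parent:

```python
# Facts as dictionaries keyed by module id. The file names below are
# hypothetical stand-ins so that module 1 is in a different file
# than its parent module 0.
module_name = {0: "", 1: "test"}
module_parent = {1: 0}                                  # ModuleParent(1, 0)
file_of = {0: "foo/mod_file", 1: "foo/test_file"}       # File(id, filename)

def test_modules_in_separate_file():
    """Evaluate TestModuleInSeparateFile(T) by joining the facts."""
    result = set()
    for t, name in module_name.items():
        if name != "test":               # ModuleName(T, "test")
            continue
        p = module_parent.get(t)         # ModuleParent(T, P)
        if p is None:
            continue
        if file_of[t] != file_of[p]:     # T_File != P_File
            result.add(t)
    return result

print(test_modules_in_separate_file())
```

A real Datalog engine like souffle evaluates exactly this kind of join, just declaratively and at scale.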

Anyway, I’m waving my hands here, and probably getting datalog syntax all wrong, but you get the idea!

Obviously my encoding here is highly specific for my particular query. But eventually we can start to encode all kinds of information this way. For example, we could encode the types of every expression, and what definition each path resolved to. Then we can use this to answer all kinds of interesting queries. For example, some things I would like to use this for right now (or in the recent past):

So, you interested? If so, contact me – either privmsg over IRC (nmatsakis) or over on the internals threads I created.

Mozilla Addons BlogThe Road to Firefox 57 – Compatibility Milestones

Back in November, we laid out our plans for add-ons in 2017. Notably, we defined Firefox 57 as the first release where only WebExtensions will be supported. In parallel, the deployment of Multiprocess Firefox (also known as e10s) continues, with many users already benefiting from the performance and stability gains. There is a lot going on and we want you to know what to expect, so here is an update on the upcoming compatibility milestones.

We’ve been working on setting out a simple path forward, minimizing the compatibility hurdles along the way, so you can focus on migrating your add-ons to WebExtensions.

Legacy add-ons

By legacy add-ons, we’re referring to:

Language packs, dictionaries, OpenSearch providers, lightweight themes, and add-ons that only support Thunderbird or SeaMonkey aren’t considered legacy.

Firefox 53, April 18th release

  • Firefox will run in multiprocess mode by default for all users, with some exceptions. If your add-on has the multiprocessCompatible flag set to false, Firefox will run in single process mode if the add-on is enabled.
  • Add-ons that are reported and confirmed as incompatible with Multiprocess Firefox (and don’t have the flag set to false) will be marked as incompatible and disabled in Firefox.
  • Add-ons will only be able to load binaries using the Native Messaging API.
  • No new legacy add-ons will be accepted on addons.mozilla.org (AMO). Updates to existing legacy add-ons will still be accepted.

Firefox 54-56

  • Legacy add-ons that work with Multiprocess Firefox in 53 may still run into compatibility issues due to follow-up work:
    • Multiple content processes will launch in Firefox 55, replacing the single content process currently used.
    • Sandboxing will launch in Firefox 54. Additional security restrictions will prevent certain forms of file access from content processes.

Firefox 57, November 14th release

  • Firefox will only run WebExtensions.
  • AMO will continue to support listing and updating legacy add-ons after the release of 57 to ease the transition. The exact cut-off time for this support hasn’t been determined yet.
  • Multiprocess compatibility shims will be removed from Firefox. This doesn’t affect WebExtensions, but it’s one of the reasons we went with this timeline.

For all milestones, keep in mind that Firefox is released using a “train” model, where Beta, Developer Edition, and Nightly correspond to the future 3 releases. You should expect users of pre-release versions to be impacted by these changes earlier than their final release dates. The Release Calendar lists future release dates per channel.

We are committed to this timeline, and will work hard to make it happen. We urge all developers to look into WebExtensions and port their add-ons as soon as possible. If you think your add-on can’t be ported due to missing APIs, here’s how you can let us know.

Christopher Arnold
A long time ago I remember reading Stephen Pinker discussing the evolution of language.  I had read Beowulf, Chaucer and Shakespeare, so I was quite interested in these linguistic adaptations over time.  Language shifts rapidly through the ages, to the  point that even English of 500 years ago sounds foreign to us now.  His thesis in the piece was about how language is going to shift toward the Chinese pronunciation of it.  Essentially, the majority of speakers will determine the rules of the language’s direction.  There are more Chinese in the world than native English speakers, so as they adopt and adapt the language, more of us will speak like the greater factions of our language’s custodians.  The future speakers of English, will determine its course.  By force of "majority rules", language will go in the direction of its greatest use, which will be the Pangea of the global populace seeking common linguistic currency with others of foreign tongues.  Just as the US dollar is an “exchange currency” standard at present between foreign economies, English is the shortest path between any two ESL speakers, no matter which background.

Subsequently, I heard these concepts reiterated in a Scientific American podcast.  The concept there being that English, when spoken by those who learned it as a second language, is easier for other speakers to understand than native-spoken English.  British, Indian, Irish, Aussie, New Zealand and American English are relics in a shift, very fast, away from all of them.  As much as we appreciate each, they are all toast.  Corners will be cut, idiomatic usage will be lost, as the fastest path to information conveyance determines that path that language takes in its evolution.  English will continue to be a mutt language flavored by those who adopt and co-opt it.  Ultimately meaning that no matter what the original language was, the common use of it will be the rules of the future.  So we can say goodbye to grammar as native speakers know it.  There is a greater shift happening than our traditions.  And we must brace as this evolution takes us with it to a linguistic future determined by others.

I’m a person who has greatly appreciated idiomatic and aphoristic usage of English.  So I’m one of those, now old codgers, who cringes at the gradual degradation of language.  But I’m listening to an evolution in process, a shift toward a language of broader and greater utility.  So the cringes I feel, are reactions to the time-saving adaptations of our language as it becomes something greater than it has been in the past.  Brits likely thought/felt the same as their linguistic empire expanded.  Now is just a slightly stranger shift.

This evening I was in the kitchen, and I decided to ask Amazon Alexa to play some Led Zeppelin.  This was a band from the 1970’s era during which I grew up.  I knew their entire corpus very well.  So when I started hearing one of my favorite songs, I knew this was not what I had asked for.  It was a good rendering for sure, but it was not Robert Plant singing.  Puzzled, I asked Alexa who was playing.  She responded “Lez Zeppelin”.  This was a new band to me.  A very good cover band, I admit.
But why hadn’t Alexa responded to my initial request?  Was it because Atlantic Records hadn’t licensed Led Zeppelin’s actual catalog for Amazon Prime subscribers?

Two things struck me.  First, we aren’t going to be tailoring our English to Chinese ESL common speech patterns as Mr. Pinker predicted.  We’re probably also going to be shifting our speech patterns to what Alexa, Siri, Cortana and Google Home can actually understand.  They are the new ESL vector that we hadn't anticipated a decade ago.  It is their use of English that will become conventional, as English is already the de facto language of computing, and therefore our language is now the slave to code.

What this means for that band (that used to be called Zeppelin) is that such entity will no longer be discoverable.  In the future, if people say “Led Zeppelin” to Alexa, she’ll respond with Lez Zeppelin (the rights-available version of the band formerly known as "Led Zeppelin").  Give humanity 100 years or so, and the idea of a band called Led Zeppelin will seem strange to folk.  Five generations removed, nobody will care who the original author was.  The "rights" holder will be irrelevant.  The only thing that will matter in 100 years is what the bot suggests.

Our language isn't ours.  It is the path to the convenient.  In bot speak, names are approximate and rights (ignoring the stalwart protectors) are meaningless.  Our concepts of trademarks, rights ownership, etc. are going to be steam-rolled by other factors, other "agents" acting at the user's behest.  The language and the needs of the spontaneous are immediate!

Air MozillaReps Weekly Meeting Feb. 16, 2017

Reps Weekly Meeting Feb. 16, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Emma IrwinThank you Guillermo Movia


I first got to know Guillermo during our time together on the Mozilla Reps council, which was actually his second turn at community leadership; the first was as a founding council member. Since then, I’ve come to appreciate and rely on his intuition, experience, and skill in navigating the complexities of community management, as a peer in the community and a colleague at Mozilla over the past two years.

Before I go any further I would like to thank Guillermo, on behalf of many, for politely responding to terrible mispronunciations of his name over the years, including (but not limited to) ‘G-glermo, geejermo, Glermo, Juremo, Glermo, Gillermo and various versions of Guilllllllllmo’.

Although I am excited to see Guillermo off to new adventures, I, and many others in the Mozilla community, wanted to mark his 12 years with Mozilla by honoring and celebrating the journey so far. Thankfully, he took some time to meet with me last week for an interview…

In the Beginning…

As many who do not speak English as a first language might understand, Guillermo remembers spending his early days on IRC and mailing lists trying to understand ways to get involved in his first language – Spanish.  It was this experience and eventual collaboration with other Spanish speaking community leaders Ruben and Francisco that led to the formation of the Hispano Community.

Emerging Leader, Emerging Community

Guillermo’s love of the open web radiates through all aspects of his life and history, including his Bachelor’s thesis, whose cover you might notice resembles a browser…

During this same time of dedicated study, Guillermo began to both participate in and organize Mozilla events in Argentina.  One of his most memorable moments of empowerment was when Asa Dotzler from Mozilla, who was visiting his country, declared to Guillermo and the emerging community group ‘you are the Argentina Community’, with subsequent emails of support from both Asa and Mary Colvig that ultimately led to a new community evolution.

Building Participation

Guillermo joined Mozilla as staff during the Firefox OS era, at first part-time while he also worked at Nobox organizing events and activities for the De Todos Para Todos campaign.  He started full-time not long afterwards, stepping into the role of community manager for LATAM.  His work with Participation included developing regional leadership strategies and a coaching framework.

Proudest Moments

I asked Guillermo to reflect on what his proudest moments have been so far, and here’s what he said:

  1. Participating in the creation of the Mozilla Hispano community.
  2. Being part of the Firefox OS launch and localization teams.
  3. Organizing community members, training, and translating.
  4. Being part of the original Mozilla Reps council.

A Central Theme

In all that Guillermo shared, there was such a strong theme of empowering people – of building frameworks and opportunities that help people reach into their potential as emerging community leaders and mobilizers.  I think as a community we have been fortunate recipients of his talents in this area.

And that theme continues in his wishes for Mozilla’s future: an organization where community members continue to innovate and have impact on Mozilla’s mission.

Thank you Guillermo!

Goodbye – not Goodbye!  Once a Mozillian, always a Mozillian – see you soon.

Please share your #mozlove memories, photos and gratitude for #gmovia on Twitter and other social media!





Air MozillaMozilla Curriculum Workshop, February 2017 - Privacy & Security

Mozilla Curriculum Workshop, February 2017 - Privacy & Security Join us for a discussion and prototyping session about privacy and security curriculum.

Tarek ZiadéMolotov, simple load testing

I don't know why, but I am a bit obsessed with load testing tools. I've tried dozens, and I've built or been involved in the creation of over ten of them in the past 15 years. I am talking about load testing HTTP services with a simple HTTP client.

Three years ago I built Loads at Mozilla, which is still being used to load test our services - and it's still evolving. It was based on a few fundamental principles:

  1. A load test is an integration test that's executed many times in parallel against a server.
  2. Ideally, load tests should be built with vanilla Python and a simple HTTP client. There's no mandatory reason we have to rely on a Load Test class or things like this - the lighter the load test framework is, the better.
  3. You should be able to run a load test from your laptop without having to deploy a complicated stack, like a load testing server and clients, etc. Because when you start building a load test against an API, step #1 is to start with small loads from one box - not going nuclear from AWS on day 1.
  4. A massively distributed load test should not be driven from your laptop. Your load test is one brick; orchestrating a distributed load test is a problem that should be solved entirely by separate software that runs in the cloud on its own.

Since Loads was built, two major things happened in our little technical world:

  • Docker is everywhere
  • Python 3.5 & asyncio, yay!

Python 3.5+ & asyncio just means that unlike my previous attempts at building a tool that would generate as many concurrent requests as possible, I don't have to worry anymore about key principle #2: we can do async code now in vanilla Python, and I don't have to force ad-hoc async frameworks on people.

Docker means that for running a distributed test, a load test that runs from one box can be embedded inside a Docker image, and then a tool can orchestrate a distributed test that runs and manages Docker images in the cloud.

That's what we've built with Loads: "give me a Docker image of something that performs a small load test against a server, and I shall run it on hundreds of boxes." This Docker-based design was a very elegant evolution of Loads, thanks to Ben Bangert, who had the idea. Asking people to embed their load test inside a Docker image also means they can use whatever tool they want, as long as it performs HTTP calls against the server under stress and, optionally, sends some info via statsd.

But offering a helpful, standard tool for building the load test script that gets embedded in Docker is still something we want to do. And frankly, 90% of load tests happen from a single box; going nuclear is not that common.

Introducing Molotov

Molotov is a new tool I've been working on for the past few months - it's based on asyncio, aiohttp and tries to be as light as possible.

Molotov scripts are coroutines to perform HTTP calls, and spawning a lot of them in a few processes can generate a fair amount of load from a single box.
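To illustrate the principle, here is a stdlib-only sketch of the coroutine-spawning idea (not Molotov's actual API): each coroutine replays a scenario in a loop, and spawning many of them concurrently generates load from a single process.

```python
import asyncio
import time

async def scenario(session):
    # Stand-in for an HTTP call made with an async client such as aiohttp.
    await asyncio.sleep(0.001)
    return 200

async def worker(results, deadline):
    # Keep replaying the scenario until the deadline passes.
    while time.monotonic() < deadline:
        results.append(await scenario(None))

async def run(workers=50, duration=0.1):
    # Spawn `workers` concurrent coroutines for `duration` seconds.
    results = []
    deadline = time.monotonic() + duration
    await asyncio.gather(*(worker(results, deadline) for _ in range(workers)))
    return results

if __name__ == "__main__":
    hits = asyncio.run(run())
    print(f"{len(hits)} simulated requests")
```

With a real async HTTP client in place of the sleep, a few processes running a pool like this can saturate a service from one machine.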

Thanks to Richard, Chris, Matthew and others among my Mozilla QA teammates, I got great feedback while creating the tool, and I think it's almost ready to be used by more folks. It still needs to mature and the docs need to improve, but the design is settled and it already works well.

I've pushed a release to PyPI and plan to publish a first stable release this month, once the test coverage looks better and the docs are polished.

But I think it's ready for a bit of community feedback, which is why I am blogging about it today. If you want to try it or help build it, here are a few links:

Try it with the console mode (-c), try to see if it fits your brain and let us know what you think.

Christian HeilmannScriptConf in Linz, Austria – if you want all the good with none of the drama.

Last month I was very lucky to be invited to give the opening keynote of a brand new conference that can utterly go places: ScriptConf in Linz, Austria.

Well deserved sticker placement

What I liked most about the event was an utter lack of drama. The organisation for us presenters was just enough to be relaxed and allowing us to concentrate on our jobs rather than juggling ticket bookings. The diversity of people and subjects on stage was admirable. The catering and the location did the job and there was not much waste left over.

I said it before that a great conference stands and falls with the passion of the organisers. And the people behind ScriptConf were endearingly scared and amazed by their own success. There were no outrageous demands, no problems that came up in the last moment, and above all there was a refreshing feeling of excitement and a massive drive to prove themselves as a new conference in a country where JavaScript conferences aren’t a dime a dozen.

ScriptConf grew out of 5 different meetups in Austria. It had about 500 extremely well behaved and excited attendees. The line-up of the conference was diverse in terms of topics and people and it was a great “value for money” show.

As a presenter you got spoiled. The hotel was 5 minutes walk away from the event and 15 minutes from the main train station. We had a dinner the day before and a tour of a local ars electronica center before the event. It is important to point out that the schedule was slightly different: the event started at noon and ended at “whenever” (we went for “Leberkäse” at 3am, I seem to recall). Talks were 40 minutes and there were short breaks in between each two talks. As the opening keynote presenter I loved this. It is tough to give a rousing talk at 8am whilst people file slowly into the building and you’ve still got wet hair from the shower. You also have a massive lull in the afternoon when you get tired. It is a totally different thing to start well-rested at noon with an audience who had enough time to arrive and settle in.

Presenters were from all around the world, from companies like Slack, NPM, Ghost, Google and serverless.

The presentations:

Here’s a quick roundup of who spoke on what:

  • I was the opening keynote, talking about how JavaScript is not a single thing but a full development environment now and what that means for the community. I pointed out the importance of understanding different ways to use JavaScript and how they yield different “best practices”. I also did a call to arms to stop senseless arguing and following principles like “build more in shorter time” and “move fast and break things” as they don’t help us as a market. I pointed out how my employer works with its engineers as an example how you can innovate but also have a social life. It was also an invitation to take part in open source and bring more human, understanding communication to our pull requests.
  • Raquel Vélez of NPM told the history of NPM and explained in detail how they built the web site and the NPM search
  • Nik Graf of Serverless covered the serverless architecture of AWS Lambda
  • Hannah Wolfe of Ghost showed how they took their Kickstarter-funded, Node.js-based open blogging system from nothing to a ten-person company and their 1.0 release, explaining the decisions they made and the mistakes along the way. She also announced their open journalism fund, “Ghost for journalism”
  • Felix Rieseberg of Slack is an ex-Microsoft engineer, and his talk was stunning. His slides about building apps with Electron are here and the demo code is on GitHub. His presentation was a live demo of using Electron to build a clone of Visual Studio Code by embedding Monaco into an Electron web view. He coded it all live using Visual Studio Code, doing a great job of explaining the benefits of the console in the editor and the debugging capabilities. I don’t like live code, but this was enjoyable and easy to follow. He also did an amazing job explaining that Electron is not there to embed a web site into an app frame, but to let you access native functionality from JavaScript. He also had lots of great insight into how Slack was built using Electron. A great video to look forward to.
  • Franziska Hinkelmann of the Google V8 team gave a very detailed talk about Performance Debugging of V8, explaining what the errors shown in the Chrome Profiler mean. It was an incredibly deep-tech talk but insightful. Franziska made sure to point out that optimising your code for the performance tricks of one JavaScript engine is not a good idea and gave ChakraCore several shout-outs.
  • Mathieu Henri from Microsoft Oslo and JS1K fame rounded up the conference with a mind-bending live code presentation creating animations and sound with JavaScript and Canvas. He clearly got the most applause. His live coding session was a call to arms to play with technology and not care about the code quality too much but dare to be artsy. He also very much pointed out that in his day job writing TypeScript for Microsoft, this is not his mode of operation. He blogged about his session and released the code here.

This was an exemplary conference, showing how it should be done and reminded me very much of the great old conferences like Fronteers, @media and the first JSConf. The organisers are humble, very much engaged and will do more great work given the chance. I am looking forward to re-live the event watching the videos and can safely recommend each of the presenters for any other conference. There was a great flow and lots of helping each other out on stage and behind the scenes. It was a blast.

Air MozillaThe Joy of Coding - Episode 91

The Joy of Coding - Episode 91 mconley livehacks on real Firefox bugs while thinking aloud.

Tim TaubertThe Future of Session Resumption

A while ago I wrote about the state of server-side session resumption implementations in popular web servers using OpenSSL. Neither Apache, nor Nginx, nor HAProxy purged stale entries from the session cache or rotated session tickets automatically, potentially harming the forward secrecy of resumed TLS sessions.

Enabling session resumption is an important tool for speeding up HTTPS websites, especially in a pre-HTTP/2 world where a client may have to open concurrent connections to the same host to quickly render a page. Subresource requests would ideally resume the session that for example a GET / HTTP/1.1 request started.

Let’s take a look at what has changed in over two years, and whether configuring session resumption securely has gotten any easier. With the TLS 1.3 spec about to be finalized I will show what the future holds and how these issues were addressed by the WG.

Did web servers react?

No, not as far as I’m aware. None of the three web servers mentioned above has taken steps to make it easier to properly configure session resumption. But to be fair, OpenSSL didn’t add any new APIs or options to help them either.

All popular TLS 1.2 web servers still don’t evict cache entries when they expire, keeping them around until a client tries to resume — for performance or ease of implementation. They generate a session ticket key at startup and will never automatically rotate it so that admins have to manually reload server configs and provide new keys.

The Caddy web server

I want to seize the chance and positively highlight the Caddy web server, a relative newcomer with the advantage of not having any historical baggage, that enables and configures HTTPS by default, including automatically acquiring and renewing certificates.

Version 0.8.3 introduced automatic session ticket key rotation, thereby making session tickets mostly forward secure by replacing the key every ~10 hours. Session cache entries though aren’t evicted until access just like with the other web servers.
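The rotation idea itself is simple and can be sketched roughly like this (a hypothetical stand-alone illustration, not Caddy's actual code): keep a short ring of keys, seal new tickets with the newest one, still accept tickets sealed with a recent key, and let old keys fall off the ring so old tickets become undecryptable.

```python
import os
import time
from collections import deque

class TicketKeyRing:
    """Rotate session ticket keys on a fixed interval, keeping a few
    recent keys so tickets issued just before a rotation still resume."""

    def __init__(self, lifetime=10 * 3600, keep=2):
        self.lifetime = lifetime            # seconds between rotations
        self.keys = deque(maxlen=keep)      # newest key first
        self.rotated_at = 0.0

    def current(self, now=None):
        # Return the key used to seal new tickets, rotating if stale.
        now = time.monotonic() if now is None else now
        if not self.keys or now - self.rotated_at >= self.lifetime:
            self.keys.appendleft(os.urandom(32))
            self.rotated_at = now
        return self.keys[0]

    def accepts(self, key):
        # A ticket is resumable only if its key is still in the ring.
        return key in self.keys
```

With `keep=2` and a ~10 hour lifetime, a ticket stays usable for at most roughly a day, which bounds how far back a stolen key can decrypt.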

But even for “traditional” web servers all is not lost. The TLS working group has known about the shortcomings of session resumption for a while and addresses those with the next version of TLS.

1-RTT handshakes by default

One of the many great things about TLS 1.3 handshakes is that most connections should take only a single round-trip to establish. The client sends one or more KeyShareEntry values with the ClientHello, and the server responds with a single KeyShareEntry for a key exchange with ephemeral keys.

If the client sends no or only unsupported groups, the server will send a HelloRetryRequest message with a NamedGroup selected from the ones supported by the client. The connection will fall back to two round-trips.

That means you’re automatically covered if you enable session resumption only to reduce network latency, a normal handshake is as fast as 1-RTT resumption in TLS 1.2. If you’re worried about computational overhead from certificate authentication and key exchange, that still might be a good reason to abbreviate handshakes.

Pre-shared keys in TLS 1.3

Session IDs and session tickets are obsolete since TLS 1.3. They’ve been replaced by a more generic PSK mechanism that allows resuming a session with a previously established shared secret key.

Instead of an ID or a ticket, the client will send an opaque blob it received from the server after a successful handshake in a prior session. That blob might either be an ID pointing to an entry in the server’s session cache, or a session ticket encrypted with a key known only to the server.

enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

struct {
   PskKeyExchangeMode ke_modes<1..255>;
} PskKeyExchangeModes;

Two PSK key exchange modes are defined, psk_ke and psk_dhe_ke. The first signals a key exchange using a previously shared key, it derives a new master secret from only the PSK and nonces. This basically is as (in)secure as session resumption in TLS 1.2 if the server never rotates keys or discards cache entries long after they expired.

The second psk_dhe_ke mode additionally incorporates a key agreed upon using ephemeral Diffie-Hellman, thereby making it forward secure. By mixing a shared (EC)DHE key into the derived master secret, an attacker can no longer pull an entry out of the cache, or steal ticket keys, to recover the plaintext of past resumed sessions.
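The effect of mixing in a DHE secret can be seen in a toy sketch. This deliberately simplifies the real TLS 1.3 key schedule down to two HKDF-Extract steps and omits the labels and Derive-Secret stages; the byte strings are placeholders.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract from RFC 5869 is just HMAC(salt, ikm).
    return hmac.new(salt, ikm, hashlib.sha256).digest()

psk = b"resumption-psk-from-ticket-or-cache"
dhe = b"fresh-ephemeral-(ec)dhe-shared-secret"

# psk_ke: secrets depend on the PSK alone. Anyone who later obtains the
# ticket key or cache entry can recompute everything derived from it.
early_secret = hkdf_extract(b"\x00" * 32, psk)

# psk_dhe_ke: an ephemeral DHE output is mixed in, so recovering the
# PSK after the fact is no longer enough to rebuild the traffic keys.
handshake_secret = hkdf_extract(early_secret, dhe)
```

Because the ephemeral DHE value is discarded after the handshake, only the second derivation chain stays out of reach of an attacker holding the PSK.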

Note that 0-RTT data cannot be protected by the DHE secret, the early traffic secret is established without any input from the server and thus derived from the PSK only.

TLS 1.2 is surely here to stay

In theory, there should be no valid reason for a web client to be able to complete a TLS 1.3 handshake but not support psk_dhe_ke, as ephemeral Diffie-Hellman key exchanges are mandatory. An internal application talking TLS between peers would likely be a legitimate case for not supporting DHE.

But even with TLS 1.3 it might make sense to properly configure session ticket key rotation and cache turnover, in case the odd client supports only psk_ke. And it makes sense especially for TLS 1.2, which will probably be around for longer than we wish and imagine.

Tim TaubertThe Future of Session Resumption

A while ago I wrote about the state of server-side session resumption implementations in popular web servers using OpenSSL. Neither Apache, nor Nginx or HAproxy purged stale entries from the session cache or rotated session tickets automatically, potentially harming forward secrecy of resumed TLS session.

Enabling session resumption is an important tool for speeding up HTTPS websites, especially in a pre-HTTP/2 world where a client may have to open concurrent connections to the same host to quickly render a page. Subresource requests would ideally resume the session that for example a GET / HTTP/1.1 request started.

Let’s take a look at what has changed in over two years, and whether configuring session resumption securely has gotten any easier. With the TLS 1.3 spec about to be finalized I will show what the future holds and how these issues were addressed by the WG.

Did web servers react?

No, not as far as I’m aware. None of the three web servers mentioned above has taken steps to make it easier to properly configure session resumption. But to be fair, OpenSSL didn’t add any new APIs or options to help them either.

All popular TLS 1.2 web servers still don’t evict cache entries when they expire, keeping them around until a client tries to resume — for performance or ease of implementation. They generate a session ticket key at startup and will never automatically rotate it so that admins have to manually reload server configs and provide new keys.

The Caddy web server

I want to seize the chance and positively highlight the Caddy web server, a relative newcomer with the advantage of not having any historical baggage, that enables and configures HTTPS by default, including automatically acquiring and renewing certificates.

Version 0.8.3 introduced automatic session ticket key rotation, thereby making session tickets mostly forward secure by replacing the key every ~10 hours. Session cache entries though aren’t evicted until access just like with the other web servers.

But even for “traditional” web servers all is not lost. The TLS working group has known about the shortcomings of session resumption for a while and addresses those with the next version of TLS.

1-RTT handshakes by default

One of the many great things about TLS 1.3 handshakes is that most connections should take only a single round-trip to establish. The client sends one or more KeyShareEntry values with the ClientHello, and the server responds with a single KeyShareEntry for a key exchange with ephemeral keys.

If the client sends no or only unsupported groups, the server will send a HelloRetryRequest message with a NamedGroup selected from the ones supported by the client. The connection will fall back to two round-trips.

That means you’re automatically covered if you enable session resumption only to reduce network latency, a normal handshake is as fast as 1-RTT resumption in TLS 1.2. If you’re worried about computational overhead from certificate authentication and key exchange, that still might be a good reason to abbreviate handshakes.

Pre-shared keys in TLS 1.3

Session IDs and session tickets are obsolete since TLS 1.3. They’ve been replaced by a more generic PSK mechanism that allows resuming a session with a previously established shared secret key.

Instead of an ID or a ticket, the client will send an opaque blob it received from the server after a successful handshake in a prior session. That blob might either be an ID pointing to an entry in the server’s session cache, or a session ticket encrypted with a key known only to the server.

enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

struct {
   PskKeyExchangeMode ke_modes<1..255>;
} PskKeyExchangeModes;

Two PSK key exchange modes are defined, psk_ke and psk_dhe_ke. The first signals a key exchange using a previously shared key; it derives a new master secret from only the PSK and nonces. This is basically as (in)secure as session resumption in TLS 1.2 if the server never rotates keys or discards cache entries long after they expire.

The second psk_dhe_ke mode additionally incorporates a key agreed upon using ephemeral Diffie-Hellman, thereby making it forward secure. By mixing a shared (EC)DHE key into the derived master secret, an attacker can no longer pull an entry out of the cache, or steal ticket keys, to recover the plaintext of past resumed sessions.

Note that 0-RTT data cannot be protected by the DHE secret, the early traffic secret is established without any input from the server and thus derived from the PSK only.
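The structure behind these two modes can be sketched with plain HKDF primitives. This is a deliberately simplified rendering of the TLS 1.3 key schedule (the real Derive-Secret uses HKDF-Expand-Label and transcript hashes, both omitted here), meant only to show where the PSK and the (EC)DHE share enter:

```python
import hashlib
import hmac

HASH_LEN = hashlib.sha256().digest_size

def hkdf_extract(salt, ikm):
    # HKDF-Extract (RFC 5869): HMAC(salt, ikm)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def derive_secret(secret, label):
    # Simplified stand-in for TLS 1.3's Derive-Secret; the real version
    # uses HKDF-Expand-Label and hashes the handshake transcript.
    return hmac.new(secret, b"tls13 " + label, hashlib.sha256).digest()

def key_schedule(psk, dhe=None):
    zeros = b"\x00" * HASH_LEN
    # Early secret: PSK only -- 0-RTT keys can never depend on the DHE share.
    early_secret = hkdf_extract(zeros, psk or zeros)
    early_traffic_secret = derive_secret(early_secret, b"c e traffic")
    # Handshake secret: additionally absorbs (EC)DHE when psk_dhe_ke is used.
    handshake_secret = hkdf_extract(
        derive_secret(early_secret, b"derived"),
        dhe if dhe is not None else zeros,  # psk_ke mixes in nothing new
    )
    return early_traffic_secret, handshake_secret
```

With `dhe=None` (psk_ke) everything derives from the PSK alone, so an attacker who later obtains the PSK can recompute all secrets; with psk_dhe_ke the extra ephemeral input means stolen ticket keys or cache entries no longer suffice.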

TLS 1.2 is surely here to stay

In theory, there should be no valid reason for a web client to be able to complete a TLS 1.3 handshake but not support psk_dhe_ke, as ephemeral Diffie-Hellman key exchanges are mandatory. An internal application talking TLS between peers would likely be a legitimate case for not supporting DHE.

But even with TLS 1.3 it might make sense to properly configure session ticket key rotation and cache turnover, in case the odd client supports only psk_ke. It certainly still makes sense for TLS 1.2, which will be around for probably longer than we wish and imagine.

Daniel StenbergNew screen and new fuses

I got myself a new 27″ 4K screen for my work setup, a Dell P2715Q, and replaced one of my old trusty twenty-four inch friends with it.

I now work with the “Thinkpad 13” on the left as my video conference machine (it does nothing else and it runs Windows!); the two middle screens are a 24″ and the new 27″, connected to my primary dev machine, while the rightmost thing is my laptop for when I need to move.

Did everything run smoothly? Heck no.

When I first plugged in the 4K screen without modifying anything else in the setup, it was immediately obvious that I really needed to upgrade my graphics card: it didn’t have muscle enough to drive the screen at 4K, so the screen would instead upscale a 1920×1200 image in a slightly blurry fashion. I couldn’t have that!

New graphics card

So when I was out and about later that day I more or less accidentally passed a Webhallen store, and I got myself a new card. I wanted to play it easy so I stayed with an AMD processor and went with an ASUS Dual-Rx460-O2G. The key feature I wanted was the ability to drive one 4K screen and one at 1920×1200; unfortunately that meant giving up on the cards with only passive cooling, and I instead had to pick what sounds like a gaming card. (I hate shopping for graphics cards.) Since I was about to do surgery on the machine anyway, I checked and noticed that I could add more memory to the motherboard, so I bought 16 more GB for a total of 32 GB.

Blow some fuses

Later that night, when the house was quiet and dark I shut down my machine, inserted the new card, the new memory DIMMs and powered it back up again.

At least that was the plan. When I fired it back up, there was a click, the lamps around me all went dark, and the machine didn’t light up at all. The fuse was blown! Man, wasn’t that totally unexpected?

I did some further research on what exactly caused the fuse to blow, and blew a few more in the process: even after I restored the former card and removed the new memory DIMMs, it still blew the fuse. Puzzled and slightly disappointed, I went to bed when I had no more spare fuses.

I hate leaving the machine dead in parts on the floor with an uncertain future, but what could I do?

A new PSU

Tuesday morning I went to get myself a PSU replacement (Plexgear PS-600 Bronze), and once I had that installed no more fuses blew and I could start the machine again!

I put the new memory back in, and I could get into the BIOS config with both screens working on the new card (it detected 32 GB of RAM just fine). But as soon as I tried to boot Linux, the boot process halted after just 3–4 seconds and seemingly froze. Hm. I tested a few different kernels, safe mode, etc., but they all acted like that. Weird!

BIOS update

A little googling on the messages that appeared just before the freeze gave me the idea to check whether an update for my BIOS was available. After all, I had never upgraded it, and it had been a while since I got my motherboard (more than 4 years).

I found a much newer BIOS image on the ASUS support site, put it on a FAT-formatted USB drive, and upgraded.

Now it booted. Of course the error messages I had googled for were still present, and I suppose they had been there before too; I just hadn’t paid any attention to them when everything was working dandy!

Displayport vs HDMI

I had the mistaken idea that I should use DisplayPort to get 4K working, but it just wouldn’t cooperate. DP + DVI only showed up on one screen, and I even went as far as trying to download an Ubuntu Linux driver package for the Radeon RX460 that I found, but of course it failed miserably since my Debian Unstable runs a totally different kernel and whatnot.

In a slightly desperate move (I had now wasted quite a few hours on this and my machine still wasn’t working), I put back the old graphics card (with DVI + HDMI), only to note that it no longer worked like it used to: the DVI output didn’t find the correct resolution anymore. Presumably the BIOS upgrade or something shook the balance?

Back on the new card I booted with DVI + HDMI, leaving DP entirely, and now suddenly both screens worked!


Once I had logged in, I could configure the 4K screen to show at its full 3840×2160 resolution glory. I was back.

Now I only had to start fiddling with getting the two screens to somehow co-exist next to each other, which is a challenge in its own right. The large difference in DPI makes it hard to have one config that works across both screens. I usually have terminals on both screens – which font size should I use? And I put browser windows on both screens…

So far I’ve settled on increasing the font DPI in KDE, and I use two different terminal profiles depending on which screen the terminal goes on. Seems to work okayish. Some text on the 4K screen is still terribly small, so I guess it’s good that I still have good eyesight!

24 + 27

So is it comfortable to combine a 24″ with a 27″? Sure, the size difference really isn’t that notable. The 27″ one is really just a few centimeters taller, and the difference in width isn’t an inconvenience. The photo below shows how similar they look, size-wise:

Mozilla Open Design BlogRoads not taken

Here, Michael Johnson (MJ), founder of johnson banks, and Tim Murray (TM), Mozilla creative director, have a long-distance conversation about the Mozilla open design process while looking in the rear-view mirror.

TM: We’ve come a long way from our meet-in-the-middle in Boston last August, when my colleague Mary Ellen Muckerman and I first saw a dozen or so brand identity design concepts that had emerged from the studio at johnson banks.

MJ: If I recall, we didn’t have the wall space to put them all up, just one big table – but by the end of the day, we’d gathered around seven broad approaches that had promise. I went back to London and we gave them a good scrubbing to put on public show.


It’s easy to see, in retrospect, certain clear design themes starting to emerge from these earliest concepts. Firstly, the idea of directly picking up on Mozilla’s name in ‘The Eye’ (and in a less overt way in the ‘Flik Flak’). ‘The Eye’ also hinted at the dinosaur-slash-Godzilla iconography that had represented Mozilla at one time. We also see at this stage the earliest, and most minimal version of the ‘Protocol’ idea.

TM: You explored several routes related to code, and ‘Protocol’ was the cleverest. Mary Ellen and I were both drawn to ‘The Eye’ for its suggestion that Mozilla would be opinionated and bold. It had a brutal power, but we were also a bit worried that it was too reminiscent of the Eye of Sauron or Monsters, Inc.

MJ: Logos can be a bit of a Rorschach test, can’t they? Within a few weeks, we’d come to a mutual conclusion as to which of these ideas needed to be left by the wayside, for various reasons. The ‘Open button’, whilst enjoyed by many, seemed to restrict Mozilla too closely to just one area of work. Early presentation favourites such as ‘The Impossible M’ turned out to be just a little too close to other things out in the ether, as did Flik Flak – the value, in a way, of sharing ideas publicly and doing an impromptu IP check with the world. ‘Wireframe World’ was to come back, in a different form, in the second round.

TM: This phase was when our brand advisory group, a regular gathering of Mozillians representing different parts of our organization, really came into play. We had developed a list of criteria by which to review the designs – Global Reach, Technical Beauty, Breakthrough Design, Scalability, and Longevity – and the group ranked each of the options. It’s funny to look back on this now, but ‘Protocol’ in its original form received some of the lowest scores.

MJ: One of my sharpest memories of this round of designs, once they became public, was how many online commentators critiqued the work for being ‘too trendy’ or said ‘this would work for a gallery, not Mozilla’. This was clearly going to be an issue because, rightly or wrongly, it seemed to me that the tech and coding community had set the bar lower than we had expected in terms of design.

TM: A bit harsh there, Michael? It was the tech and coding community that had the most affinity for ‘Protocol’ in the beginning. If it wasn’t for them, we might have let it go early on.

MJ: Ok, that’s a fair point. Well, we also started to glimpse what was to become another recurring theme: despite johnson banks having been at the vanguard of broadening out brands into complete and wide-ranging identity systems, we were going to have to get used to a TL;DR way of seeing/reading that simply judged a route by its logo alone.

TM: Right! And no matter how many times we said that these were early explorations we received feedback that they were too “unpolished.” Meanwhile, others felt that they were too polished, suggesting that this was the final design. We were whipsawed by feedback.


MJ: Whilst to some the second round seemed like a huge jump from the first, to us it was a logical development. All our attempts to develop The Eye had floundered, in truth – but as we did that work, a new way to hint at the name and its heritage had appeared. It was initially nicknamed ‘Chomper’, but was then swiftly renamed ‘Dino 2.0’. We see, of course, the second iteration of Protocol, this time with slab serifs. And two new approaches – Flame and Burst.



TM: I kind of fell in love with ‘Burst’ and ‘Dino 2.0’ in this round. I loved the idea behind ‘Flame’ — that it represented both our eternal quest to keep the Internet alive and healthy and the warmth of community that’s so important to Mozilla — but not this particular execution. To be fair, we’d asked you to crank it out in a matter of days.

MJ: Well, yes that’s true. With ‘Flame’ we then suffered from too close a comparison to all the other flame logos out there in the ether, including Tinder. Whoops! ‘Burst’ was, and still is, a very intriguing thought – that we see Mozilla’s work through 5 key nodes and questions, and see the ‘M’ shape that appears as the link between statistical points.


TM: Web developers really rebelled against the moiré of Burst, didn’t they? We now had four really distinct directions that we could put into testing to see how people outside of Mozilla (and North America) might feel. The testing targeted both existing and desired audiences, the latter being ‘Conscious Choosers’, a group of people who make decisions based on their personal values.

MJ: We also put these four in front of a design audience in Nashville at the Brand New conference, and of course commentary lit up the Open Design blog. The results in person and on the blogs were pretty conclusive – Protocol and Dino 2.0 were the clear favourites. But one set of results was a curveball: the overall ‘feel’ of Burst was enjoyed by a key group of desired (and not current) supporters.


TM: This was a tricky time. ‘Dino’ had many supporters, but just about as many vocal critics. As soon as one person remarked “It looks like a stapler,” the entire audience of blog readers piled on, or so it seemed. At our request, you put the icon through a series of operations that resulted in the loss of his lower jaw. Ouch. Even then, we had to ask, as beloved as this dino character was for its historical reference to the old Mozilla, would it mean anything to newcomers?


MJ: Yes, this was an odd period. For many, it seemed that the joyful and playful side of the idea was just too much ‘fun’ and didn’t chime with the desired gravitas of a famous Internet not-for-profit wanting to amplify its voice, be heard and be taken seriously. Looking back, Dino died a slow and painful death.

TM: Months later, rogue stickers of the ‘Dino’ favicon are still turning up, showing just the open jaws without the Mozilla word mark. A sticker seemed to be the perfect vehicle for this design idea. And meanwhile, ‘Protocol’ kept chugging along like a tortoise at the back of the race, always there, never dropping out.

MJ: We entered the autumn period in a slightly odd place. Dino had been effectively trolled out of the game. Burst had, against the odds, researched well, but more for the way it looked than what it was trying to say. And Protocol, the idea that had sat in the background whilst all around it hogged the spotlight, was still there, waiting for its moment. It had always researched well, especially with the coding community, and was a nice reminder of Mozilla’s pioneering spirit with its nod to the http protocol.

TM: To me though, the ‘Protocol’ logo was a bit of a one-trick pony, an inside joke for the coding community that didn’t have as much to offer to the non-tech world. You and I came to the same conclusion at the same time, Michael. We needed to push ‘Protocol’ further, maybe even bring over some of the joy and color of routes such as ‘Burst’ and the fun that ‘Dino’ represented.


MJ: Our early reboot work wasn’t encouraging. Simply putting the two routes together, such as this, looked exactly like what it was: two routes mashed together. A boardroom compromise to please no-one. But, gradually, some interesting work started to emerge from different parts of the studio.

We began to think more about how Mozilla’s key messages could really flow from the Moz://a ‘start’, and started to explore how to use intentionally attention-grabbing copy-lines next to the mark, without word spaces, as though written into a URL.


We also experimented with the Mozilla wordmark used more like a ‘selected’ item in a design or editing programme – where that which is selected reverses out from that around it.


From an image-led perspective, we also began to push the Mozilla mark much further, exploring whether it could be shattered, deconstructed or glitched, thereby becoming a much more expressive idea.


In parallel, an image-led idea had developed, which placed the Mozilla name at the beginning of an imaginary toolbar, and there followed imagery culled directly from the Internet, in an almost haphazard way.


TM: This slide, #19 in the exploratory deck, was the one that really caught our attention. Without losing the ‘moz://a’ idea that coders loved, it added imagery that suggested freedom and exploration which would appeal to a variety of audiences. It was eye-catching. And the thought that this imagery could be ever-changing made me excited by the possibilities for extending our brand as a representation of the wonder of the Internet.



MJ: When we sat back and took stock of this frenetic period of exploration, we realised that we could bring several of these ideas together, and this composite stage was shared with the wider team.

What you see from this, admittedly slightly crude ‘reboot’ of the idea, are the roots of the final route – a dynamic toolkit of words and images, deliberately using bright, neon, early-internet hex colours and crazy collaged imagery. This was really starting to feel like a viable way forward to us.

To be fair, we’d kept Mozilla slightly in the dark for several weeks by this stage. We had retreated to our design ‘bunker’, and weren’t going to come out until we had a reboot of the Protocol idea that we thought was worthy. That took a bit of explaining to the team at Mozilla towers.

TM: That’s a very comic-book image you have of our humble digs, Michael. But you’re right, our almost daily email threads went silent for a few weeks, and we tap-danced a bit back at the office when people asked when we’d see something. It was make or break time from all perspectives. As soon as we saw the new work arrive, we knew we had something great to work with.


MJ: The good news was that, yes, the Mozilla team were on board with the direction of travel. But certain amendments began almost immediately: we’d realised that the stand-out of the logo was much improved if it reversed out of a block, and we started to explore softening the edges of the type-forms. And whilst the crazy image collages were broadly enjoyed, it was clear that we were going to need some clear rules on how they would be curated and used.


TM: We had the most disagreement over the palette. We found the initial neon palette to be breathtaking and a fun recollection of the early days of the Internet. On the other hand, it didn’t seem practical for use in the user interface for our web properties, especially when these colors were paired together. And our executive team found the neons to be quite polarizing.

MJ: We were asked to explore a more ‘pastel’ palette, which to our eyes lacked some of the ‘oomph’ of the neons. We’d also started to debate the black bounding box, and whether or not it should crop the type on the outer edges. We went back and forth on these details for a while.

TM: To us, it helped to know that the softer colors picked up directly from the highlight bar in Firefox and other browsers. We liked how distinctive the cropped bounding box looked and grew used to it quickly, but ultimately felt that it created too much tension around the logo.


MJ: And as we continued to refine the idea for launch, we also debated the precise typeforms of the logo. In the first stage of the Protocol ‘reboot’ we’d proposed a free slab-serif font called Arvo as the lead font, but as we used it more, we realised many key characters would have to be redrawn.

That started a new search – for a type foundry that could both develop a slab-serif font for Mozilla, and at the same time work on the final letterforms of the logo so there would be harmony between the two. We started discussions with Arvo’s creators, and also with Fontsmith in London, who had helped on some of the crafting of the interim logos and also had some viable typefaces.

TM: Meanwhile a new associate creative director, Yuliya Gorlovetsky, had joined our Mozilla team, and she had distinct ideas about typeface, wordmark, and logo based on her extensive typography experience.


MJ: Finally, after some fairly robust discussions, Typotheque was chosen to do this work, and above and below you can see the journey the final mark went on, and the editing process down to the final logo, testing different ways to tackle the key characters, especially the ‘m’, ‘z’ and ‘a’.

This work, in turn, and after many debates about letterspacing, led to the final logo and its set of first applications.


TM: So here we are. Looking back at this makes it seem a fait accompli somehow, even though we faced setbacks and dead-ends along the way. You always got us over the roadblocks though, Michael, even when you disagreed profoundly with some of our decisions.

MJ: Ha! Well, our job was and is to make sure you stay true to your original intent with this brand, which was to be much bolder, more provocative and to reach new audiences. I’ll admit that I sometimes feared that corporate forces or the online hordes were wearing down the distinctive elements of any of the systems we were proposing.

But, even though the iterative process was occasionally tough, I think it’s worked out for the best and it’s gratifying that an idea that was there from the very first design presentation slowly but surely became the final route.

TM: It’s right for Mozilla. Thanks for being on this journey with us, Michael. Where shall we go next?


The post Roads not taken appeared first on Mozilla Open Design.

Eitan IsaacsonDark Windows for Dark Firefox

I recently set the Compact Dark theme as my default in Firefox. Since we don’t have Linux client-side window decorations yet (when is that happening??), it looks kind of bad in GNOME. The window decorator shows up as a light band in a sea of darkness:

Firefox with light window decorator

It just looks bad. You know? I looked for an addon that would change the decorator to the dark-themed one, but I couldn’t find any. I ended up adapting the gtk-dark-theme Atom addon to a Firefox one. It was pretty easy. I did it over a remarkable infant sleep session on a Saturday morning. Here is the result:
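For context, the trick such addons use (including, I believe, the Atom gtk-dark-theme package this was adapted from) is to set the `_GTK_THEME_VARIANT` X11 property on the window, which GTK-based window managers read when choosing decorations. A sketch of the same idea from outside the browser, using the standard xprop tool (the window ID is a placeholder you’d get from e.g. `xwininfo`):

```python
import subprocess

def dark_variant_cmd(window_id):
    """Build the xprop command that asks GTK-aware window managers
    (e.g. GNOME's Mutter) to draw a window's decorations dark."""
    return [
        "xprop",
        "-f", "_GTK_THEME_VARIANT", "8u",   # property format: UTF-8 string
        "-set", "_GTK_THEME_VARIANT", "dark",
        "-id", window_id,                    # target X11 window ID
    ]

def apply_dark_decorations(window_id):
    # Requires a running X session and the xprop utility installed.
    subprocess.run(dark_variant_cmd(window_id), check=True)
```

Running `apply_dark_decorations("0x3c00004")` (with a real window ID) should switch that window’s titlebar to the dark variant immediately.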

Firefox with dark window decorator

You can grab the yet-to-be-reviewed addon here.

Mozilla VR BlogNew features in A-Frame Inspector v0.5.0

New features in A-Frame Inspector v0.5.0

This is a small summary of some new features in the latest A-Frame Inspector version that may otherwise go unnoticed.

Image assets dialog

v0.5.0 introduces an asset management window for importing textures into your scene without having to manually type URLs.

The updated texture widget includes the following elements:

  • Preview thumbnail: opens the image assets dialog.
  • Input box: hover the mouse over it to see the complete URL of the asset.
  • Open in a new tab: opens a new tab with the full-sized texture.
  • Clear: clears the value of the attribute.


Once the image assets dialog is open you’ll see the list of images currently used in your project, with the previous selection for the widget, if any, highlighted.
You can click on any image in this gallery to set the value of the map attribute you’re editing.


If you want to add new images to this list, click on LOAD TEXTURE and you’ll see several options for including a new image in your project:


Here you can add new image assets to your scene by:

  • Entering a URL.
  • Opening an uploadcare dialog that lets you upload files from your computer, Google Drive, Dropbox and other sources (this currently uploads the images to our uploadcare account, so please be kind :) – we’re working on letting you define your own API key so you can use your own account).
  • Dragging and dropping from your computer. This uploads to uploadcare too.
  • Choosing one from the curated list of images we’ve included in the assets-sample repo.

Once you’ve added your image you’ll see a thumbnail with some information about the image and the name this texture will have in your project (the asset ID, which can be referenced as #name).


After editing the name if needed, click on LOAD TEXTURE and your texture will be added to the list of assets available in your project, bringing you back to the list of textures you saw when you opened the dialog.

Now just click on the newly created texture to set the new value for the attribute you were editing.


New features in the scenegraph

Toggle visibility

  • Toggle panels: New shortcuts:
    • 1: Toggle scenegraph panel
    • 2: Toggle components panel
    • TAB: Toggle both panels
  • Toggle entity visibility: each element in the scene can now be shown or hidden by pressing the eye icon in the scenegraph.


Broader scenegraph filtering

In the previous version of the inspector we could filter by the tag name of an entity or by its ID. In the new version the filter also takes into account the names of the components each entity has, and the values of those components’ attributes.


For example, if we type red, it will return the entities whose name contains red, but also all of those with a red color in their material component. We could also filter by geometry, or directly by sphere, and so on.
We’ve added the shortcut CTRL or CMD + f to focus the filter input for faster filtering, and ESC to clear the filter.
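The matching behaviour described above can be sketched as a simple predicate over an entity’s tag name, ID, component names, and component attribute values. This is an illustrative Python sketch of the idea, not the Inspector’s actual (JavaScript) code:

```python
def matches_filter(entity, query):
    """Return True if the query matches the entity's tag name, id,
    a component name, or a component attribute value."""
    q = query.lower()
    if q in entity.get("tagName", "").lower():
        return True
    if q in entity.get("id", "").lower():
        return True
    for name, attrs in entity.get("components", {}).items():
        if q in name.lower():
            return True
        for value in attrs.values():
            if q in str(value).lower():
                return True
    return False
```

With an entity carrying `material: {color: red}` and `geometry: {primitive: sphere}`, the queries `red`, `geometry` and `sphere` all match, mirroring the example in the text.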

Cut, copy and paste

Thanks to @vershwal it’s now possible to cut, copy and paste entities using the expected shortcuts:

  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Copy selected entity
  • CTRL or CMD + v: Paste the latest copied or cut entity

New shortcuts

The list of new shortcuts introduced in this version:

  • 1: Toggle scenegraph panel
  • 2: Toggle components panel
  • TAB: Toggle both scenegraph and components panel
  • CTRL or CMD + x: Cut selected entity
  • CTRL or CMD + c: Copy selected entity
  • CTRL or CMD + v: Paste a new entity
  • CTRL or CMD + f: Focus scenegraph filter

Remember that you can press h to show the list of all the available shortcuts.

Emma IrwinThank you Brian King!

Brian King was one of the first people I met at Mozilla. He is someone whose opinion, ideas, trust, support and friendship have meant a lot to me – and I know countless others would similarly describe Brian as someone who made collaborating, working and gathering together a highlight of their Mozilla experiences and personal success.

Brian has been a part of the Mozilla community for nearly 18 years – and even though we are thrilled for his new adventures, we really wanted to find a megaphone to say thank you… Here are some highlights from my interview with him last week.

Finding Mozilla

Brian came to Mozilla all those years ago as a developer. He worked for a company that developed software promoting minority languages, including Basque, Catalan, Frisian, Irish and Welsh. As many did back in the day, he met people in newsgroups and on IRC and slowly became immersed in the community, regularly attending developer meetups. Community, from the very beginning, was the reason Brian grew more deeply involved and connected to Mozilla’s mission.

Shining Brightly

Early contributions were code – becoming involved with the HTML Editor, then part of the Mozilla Suite. He got a job at ActiveState in Vancouver and worked on the Komodo IDE for dynamic languages. Skipping forward, he became more and more invested in add-on contribution and review – even co-authoring the book “Creating Applications with Mozilla” – which I did not know! Very cool. During this time he describes himself as having been “very fortunate” to be able to make a living working in the Mozilla and Web ecosystem while running a consultancy writing Firefox add-ons and other software.

Dear Community – “You had me at Hello”




Something Brian shared with me was that being part of the community essentially sustained his connection with Mozilla during times when he was too busy to contribute – and I think many other Mozillians feel the same way: it’s never goodbye, only see you soon. On Brian’s next adventure, I think we can take comfort that the open door of community will sustain our connection for years to come.

As Staff

Brian came on as Mozilla staff in 2012 as the European Community Manager, finding success in that role and in overseeing the evolution of the Mozilla Reps program. He was instrumental in building Firefox OS launch teams all around the world. Most recently he has been sharpening that skillset of empowering individuals, teams and communities with support for various programs, regional support, and the Activate campaign.

Proud Moments

With a long string of accomplishments at Mozilla, I asked Brian what his proudest moments were. Some of those he listed were:

  1. Being an AMO editor for a few years, reviewing thousands of add-ons
  2. Building community in the Balkans
  3. Building out the Mozilla Reps program, and being a founding council member
  4. Helping drive Mozilla’s success at FOSDEM
  5. Building Firefox OS launch teams

But he emphasized that, in all of these, the opportunity to bring new people into the community and to nurture and help individuals and groups reach their goals provided an enormous sense of accomplishment and fulfillment.

He didn’t mention it, but I also found this photo of Brian on TV in Transylvania, Romania that looks pretty cool.

Look North!

To wrap up, I asked Brian what he most wanted to see for Mozilla in the next 5 years, leaning on what he has known for years as part of, and leading, community:

“My hope is that Mozilla finds its North Star for the next 5–10 years, doubles down on recent momentum, and as part of that bakes community participation into all parts of the organization. It must be a must-have, and not a nice-to-have.”

Thank you Brian King!

You can give your thanks to Brian with #mozlove #brianking – share gratitude, laughs and stories.





Patrick WaltonPathfinder, a fast GPU-based font rasterizer in Rust

Ever since some initial discussions with Raph Levien (author of font-rs) at RustConf last September, I’ve been thinking about ways to improve vector graphics rendering using modern graphics hardware, specifically for fonts. These ideas began to come together in December, and over the past couple of months I’ve been working on actually putting them into a real, usable library. They’ve proved promising, and now I have some results to show.

Today I’m pleased to announce Pathfinder, a Rust library for OpenType font rendering. The goal is nothing less than to be the fastest vector graphics renderer in existence, and the results so far are extremely encouraging. Not only is it very fast according to the traditional metric of raw rasterization performance, it’s practical, featuring very low setup time (end-to-end time superior to the best CPU rasterizers), best-in-class rasterization performance even at small glyph sizes, minimal memory consumption (both on CPU and GPU), compatibility with existing font formats, portability to most graphics hardware manufactured in the past five years (DirectX 10 level), and security/safety.


To illustrate what it means to be both practical and fast, consider these two graphs:

Rasterization performance

Setup performance

(Click each graph for a larger version.)

The first graph compares Pathfinder with other rasterization algorithms with all vectors already prepared for rendering (and uploaded to the GPU, in the case of the GPU algorithms). The second graph shows the total time taken to prepare and rasterize a glyph at a typical size, measured from the point right after loading the OTF file into memory to the completion of rasterization. Lower numbers are better. All times were measured on a Haswell Intel Iris Pro (mid-2015 MacBook Pro).

From these graphs, we can see two major problems with existing GPU-based approaches:

  1. Many algorithms aren’t that fast, especially at small sizes. Algorithms aren’t fast just because they run on the GPU! In general, we want rendering on the GPU to be faster than rendering on the CPU; that’s often easier said than done, because modern CPUs are surprisingly speedy. (Note that, even if the GPU is somewhat slower at a task than the CPU, it may be a win for CPU-bound apps to offload some work; however, this makes the use of the algorithm highly situational.) It’s much better to have an algorithm that actually beats the CPU.

  2. Long setup times can easily eliminate the speedup of algorithms in practice. This is known as the “end-to-end” time, and real-world applications must carefully pay attention to it. One of the most common use cases for a font rasterizer is to open a font file, rasterize a character set from it (Latin-1, say) at one pixel size for later use, and throw away the file. With Web fonts now commonplace, this use case becomes even more important, because Web fonts are frequently rasterized once and then thrown away as the user navigates to a new page. Long setup times, whether the result of tessellation or more exotic approaches, are real problems for these scenarios, since what the user cares about is the document appearing quickly. Faster rasterization doesn’t help if it regresses that metric.

(Of the two problems mentioned above, the second is often totally ignored in the copious literature on GPU-based vector rasterization. I’d like to see researchers start to pay attention to it. In most scenarios, we don’t have the luxury of inventing our own GPU-friendly vector format. We’re not going to get the world to move away from OpenType and SVG.)

Vector drawing basics

In order to understand the details of the algorithm, some basic knowledge of vector graphics is necessary. Feel free to skip this section if you’re already familiar with Bézier curves and fill rules.

OpenType fonts are defined in terms of resolution-independent Bézier curves. TrueType outlines contain lines and quadratic Béziers only, while OpenType outlines can contain lines, quadratic Béziers, and cubic Béziers. (Right now, Pathfinder only supports quadratic Béziers, but extending the algorithm to support cubic Béziers should be straightforward.)
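To make the curve representation concrete, here is a small illustrative sketch in Rust. The `Point` type, the uniform sampling, and the segment count are assumptions made for the example and not Pathfinder's API; real rasterizers choose the number of line segments adaptively from the curvature:

```rust
/// A 2D point. (Illustrative type; Pathfinder's real API differs.)
#[derive(Clone, Copy, Debug)]
struct Point { x: f32, y: f32 }

/// Evaluate a quadratic Bézier at parameter t in [0, 1] using the
/// standard form B(t) = (1-t)²·P0 + 2(1-t)t·P1 + t²·P2.
fn quad_bezier(p0: Point, p1: Point, p2: Point, t: f32) -> Point {
    let u = 1.0 - t;
    Point {
        x: u * u * p0.x + 2.0 * u * t * p1.x + t * t * p2.x,
        y: u * u * p0.y + 2.0 * u * t * p1.y + t * t * p2.y,
    }
}

/// Flatten a quadratic Bézier into `n` line segments by uniform
/// sampling of the parameter range.
fn flatten(p0: Point, p1: Point, p2: Point, n: usize) -> Vec<(Point, Point)> {
    (0..n).map(|i| {
        let t0 = i as f32 / n as f32;
        let t1 = (i + 1) as f32 / n as f32;
        (quad_bezier(p0, p1, p2, t0), quad_bezier(p0, p1, p2, t1))
    }).collect()
}
```

The flattening step mirrors what the rest of the post describes: curves are reduced to short line segments before any coverage area is computed.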

In order to fill vector paths, we need a fill rule. A fill rule is essentially a test that determines, for every point, whether that point is inside or outside the curve (and therefore whether it should be filled in). OpenType’s fill rule is the winding rule, which can be expressed as follows:

  1. Pick a point that we want to determine the color of. Call it P.

  2. Choose any point outside the curve. (This is easy to determine since any point outside the bounding box of the curve is trivially outside the curve.) Call it Q.

  3. Let the winding number be 0.

  4. Trace a straight line from Q to P. Every time we cross a curve going clockwise, add 1 to the winding number. Every time we cross a curve going counterclockwise, subtract 1 from the winding number.

  5. The point is inside the curve (and so should be filled) if and only if the winding number is not zero.
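For a path already flattened to line segments, the winding rule above can be sketched in a few lines of Rust. This is a simplified illustration, not Pathfinder's actual code: a horizontal ray stands in for the arbitrary Q-to-P line, and the vertical direction of each crossed edge supplies the +1/-1:

```rust
/// Winding number of point `p` with respect to a path approximated by
/// line segments (curves would be flattened to lines first). Casts a
/// horizontal ray from `p` to the right; each edge crossing the ray
/// adds +1 or -1 depending on the edge's direction.
fn winding_number(p: (f32, f32), edges: &[((f32, f32), (f32, f32))]) -> i32 {
    let mut winding = 0;
    for &((x0, y0), (x1, y1)) in edges {
        // Half-open tests avoid double-counting shared endpoints.
        let crosses_down = y0 <= p.1 && y1 > p.1;
        let crosses_up = y1 <= p.1 && y0 > p.1;
        if crosses_down || crosses_up {
            // Where the edge intersects the ray's horizontal line.
            let t = (p.1 - y0) / (y1 - y0);
            let x = x0 + t * (x1 - x0);
            if x > p.0 {
                winding += if crosses_down { 1 } else { -1 };
            }
        }
    }
    winding
}
```

A point is filled exactly when the returned winding number is nonzero.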

How it works, conceptually

The basic algorithm that Pathfinder uses is the by-now-standard trapezoidal pixel coverage algorithm pioneered by Raph Levien’s libart (to the best of my knowledge). Variations of it are used in FreeType, stb_truetype version 2.0 and up, and font-rs. These implementations differ as to whether they use sparse or dense representations for the coverage buffer. Following font-rs, and unlike FreeType and stb_truetype, Pathfinder uses a dense representation for coverage. As a result, Raph’s description of the algorithm applies fairly well to Pathfinder as well.

There are two phases to the algorithm: drawing and accumulation. During the draw phase, Pathfinder computes coverage deltas for every pixel touching (or immediately below) each curve. During the accumulation phase, the algorithm sweeps down each column of pixels, computing winding numbers (fractional winding numbers, since we’re antialiasing) and filling pixels appropriately.

The most important concept to understand is that of the coverage delta. When drawing high-quality antialiased curves, we care not only about whether each pixel is inside or outside the curve but also how much of the pixel is inside or outside the curve. We treat each pixel that a curve passes through as a small square and compute how much of the square the curve occupies. Because we break down curves into small lines before rasterizing them, these coverage areas are always trapezoids or triangles, and so we can use trapezoidal area expressions to calculate them. The exact formulas involved are somewhat messy and involve several special cases; see Sean Barrett’s description of the stb_truetype algorithm for the details.

Rasterizers that calculate coverage in this way differ in whether they calculate winding numbers and fill at the same time they calculate coverage or whether they fill in a separate step after coverage calculation. Sparse implementations like FreeType and stb_truetype usually fill as they go, while dense implementations like font-rs and Pathfinder fill in a separate step. Filling in a separate step is attractive because it can be simplified to a prefix sum over each pixel column if we store the coverage for each pixel as the difference between the coverage of the pixel and the coverage of the pixel above it. In other words, instead of determining the area of each pixel that a curve covers, for each pixel we determine how much additional area the curve covers, relative to the coverage area of the immediately preceding pixel.

This modification has the very attractive property that all coverage deltas both inside and outside the curve are zero, since points completely inside a curve contribute no additional area (except for the first pixel completely inside the curve). This property is key to Pathfinder’s performance relative to most vector texture algorithms. Calculating exact area coverage is slow, but calculating coverage deltas instead of absolute coverage essentially allows us to limit the expensive calculations to the edges of the curve, reducing the amount of work the GPU has to do to a fraction of what it would have to do otherwise.

In order to fill the outline and generate the final glyph image, we simply have to sweep down each column of pixels, calculating the running total of area coverage and writing pixels according to the winding rule. The formula to determine the color of each pixel is simple and fast: min(|coverage total so far|, 1.0) (where 0.0 is a fully transparent pixel, 1.0 is a fully opaque pixel, and values in between are different shades). Importantly, all columns are completely independent and can be calculated in parallel.
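A CPU-side sketch of this accumulation pass might look like the following. The buffer layout (a row-major `f32` slice) is an assumption for illustration; in Pathfinder the equivalent work happens on the GPU:

```rust
/// Sketch of the accumulation pass: each column of the coverage-delta
/// buffer is prefix-summed top to bottom, and every running total is
/// converted to an alpha value with min(|total|, 1.0).
fn accumulate(deltas: &[f32], width: usize, height: usize) -> Vec<f32> {
    let mut out = vec![0.0; width * height];
    // Columns are fully independent, which is what makes this pass a
    // good fit for one GPU compute invocation per pixel column.
    for x in 0..width {
        let mut total = 0.0f32;
        for y in 0..height {
            total += deltas[y * width + x];
            out[y * width + x] = total.abs().min(1.0);
        }
    }
    out
}
```

Note how a +1.0 delta at a curve's upper edge and a -1.0 delta at its lower edge are enough to fill every pixel in between, without any per-pixel work having been done inside the shape during the draw phase.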

Implementation details

With the advanced features in OpenGL 4.3, this algorithm can be straightforwardly adapted to the GPU.

  1. As an initialization step, we create a coverage buffer to hold delta coverage values. This coverage buffer is a single-channel floating-point framebuffer. We always draw to the framebuffer with blending enabled (GL_FUNC_ADD, both source and destination factors set to GL_ONE).

  2. We expand the TrueType outlines from the variable-length compressed glyf format inside the font to a fixed-length, but still compact, representation. This is necessary to be able to operate on vertices in parallel, since variable-length formats are inherently sequential. These outlines are then uploaded to the GPU.

  3. Next, we draw each curve as a patch. In a tessellation-enabled drawing pipeline like the one that Pathfinder uses, rather than submitting triangles directly to the GPU, we submit abstract patches which are converted into triangles in hardware. We use indexed drawing (glDrawElements) to take advantage of the GPU’s post-transform cache, since most vertices belong to two curves.

  4. For each path segment that represents a Bézier curve, we tessellate the Bézier curve into a series of small lines on the GPU. Then we expand all lines out to screen-aligned quads encompassing their bounding boxes. (There is a complication here when these quads overlap; we may have to generate extra one-pixel-wide quads here and strip them out with backface culling. See the comments inside the tessellation control shader for details.)

  5. In the fragment shader, we calculate trapezoidal coverage area for each pixel and write it to the coverage buffer. This completes the draw step.

  6. To perform the accumulation step, we attach the coverage buffer and the destination texture to images. We then dispatch a simple compute shader with one invocation per pixel column. For each row, the shader reads from the coverage buffer and writes the total coverage so far to the destination texture. The min(|coverage total so far|, 1.0) expression above need not be computed explicitly, because our unsigned normalized atlas texture stores colors in this way automatically.
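To make step 2's "fixed-length, but still compact" representation concrete, one possible layout is a flat array of fixed-size segment records. This struct is hypothetical (Pathfinder's actual expanded-outline format differs in detail); it only illustrates why fixed-size records sidestep the inherently sequential, variable-length glyf encoding:

```rust
/// Hypothetical fixed-size segment record, the kind of flat layout a
/// GPU buffer can hold and shaders can index in parallel.
#[repr(C)]
#[derive(Clone, Copy, Debug)]
struct Segment {
    from: [f32; 2],
    ctrl: [f32; 2], // unused when `kind` denotes a line
    to: [f32; 2],
    kind: u32,      // 0 = line, 1 = quadratic Bézier (illustrative)
    _pad: u32,      // keep the record size a multiple of 16 bytes
}
```

Because every record has the same size, the i-th segment lives at a fixed byte offset (32·i here), so the vertex and tessellation stages can fetch any segment directly rather than walking the outline from the beginning.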

The performance characteristics of this approach are excellent. No CPU preprocessing is needed other than the conversion of the variable-length TrueType outline to a fixed-length format. The number of draw calls is minimal—any number of glyphs can be rasterized in one draw call, even from different fonts—and the depth and stencil buffers remain unused. Because the tessellation is performed on the fly instead of on the CPU, the amount of data uploaded to the GPU is minimal. Area coverage is essentially only calculated for pixels on the edges of the outlines, avoiding expensive fragment shader invocations for all the pixels inside each glyph. The final accumulation step has ideal characteristics for GPU compute, since branch divergence is nonexistent and cache locality is maximized. All pixels in the final buffer are painted at most once, regardless of the number of curves present.

Compatibility concerns

For any GPU code designed to ship to consumers, especially code requiring OpenGL 3.0 and up, compatibility and portability are always concerns. As Pathfinder is designed for OpenGL 4.3, released in 2012, it is no exception. Fortunately, the algorithm can be adapted in various ways depending on the available functionality.

  • When compute shaders are not available (OpenGL 4.2 or lower), Pathfinder uses OpenCL 1.2 instead. This is the case on the Mac, since Apple has not implemented any OpenGL features newer than OpenGL 4.1 (2010). The compute-shader crate abstracts over the subset of OpenGL and OpenCL necessary to access GPU compute functionality.

  • When tessellation shaders are not available (OpenGL 3.3 or lower), Pathfinder uses geometry shaders, available in OpenGL 3.2 and up.

(Note that it should be possible to avoid both geometry shaders and tessellation shaders, at the cost of performing that work on the CPU. This turns out to be quite fast. However, since image load/store is a hard requirement, this seems pointless: both image load/store and geometry shaders were introduced in DirectX 10-level hardware.)

Although these system requirements may seem high at first, the integrated graphics found in any Intel Sandy Bridge (2011) CPU or later meet them.

Future directions

The immediate next step for Pathfinder is to integrate it into WebRender as an optional accelerated path for applicable fonts on supported GPUs. Beyond that, there are several features that could be added to extend Pathfinder itself.

  1. Support vector graphics outside the font setting. As Pathfinder is a generic vector graphics rasterizer, it would be interesting to expose an API allowing it to be used as the backend for e.g. an SVG renderer. Rendering the entire SVG specification is outside of the scope of Pathfinder itself, but it could certainly be the path rendering component of a full SVG renderer.

  2. Support CFF and CFF2 outlines. These are increasingly common; Apple’s new San Francisco font uses them, for example. Adding this support involves both parsing the CFF2 format and adding support for cubic Bézier curves to Pathfinder.

  3. Support WOFF and WOFF2. In the case of WOFF2, this involves writing a parser for the transformed glyf table.

  4. Support subpixel antialiasing. This should be straightforward.

  5. Support emoji. The Microsoft COLR and Apple sbix extensions are straightforward, but the Google SVG table allows arbitrary SVGs to be embedded into a font. Full support for SVG is probably out of scope of Pathfinder, but perhaps the subset used in practice is small enough to support.

  6. Optimize overlapping paths. It would be desirable to avoid antialiasing edges that are covered by other paths. The fill rule makes this trickier than it initially sounds.

  7. Support hinting. This is low-priority since it’s effectively obsolete with high-quality antialiasing, subpixel AA, and high-density displays, but it might be useful to match the system rendering on Windows.


Pathfinder is available on GitHub and should be easily buildable using the stable version of Rust and Cargo. Please feel free to check it out, build it, and report bugs! I’m especially interested in reports of poor performance, crashes, or rendering problems on a variety of hardware. As Pathfinder does use DirectX 10-level hardware features, some amount of driver pain is unavoidable. I’d like to shake these problems out as soon as possible.

Finally, I’d like to extend a special thanks to Raph Levien for many fruitful discussions and ideas. This project wouldn’t have been possible without his insight and expertise.

Mozilla Addons Blog: Add-ons Update – 2017/02

Here’s the state of the add-ons world this month.

If you haven’t read Add-ons in 2017, I suggest that you do. It lays out the high-level plan for add-ons this year.

The Review Queues

In the past month, 1,670 listed add-on submissions were reviewed:

  • 1,148 (69%) were reviewed in fewer than 5 days.
  • 106 (6%) were reviewed between 5 and 10 days.
  • 416 (25%) were reviewed after more than 10 days.

There are 467 listed add-ons awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.


The blog post for Firefox 52 is up and the bulk validation has already been run. Firefox 53 is coming up.

Multiprocess Firefox is enabled for some users, and will be deployed for all users very soon. Make sure you’ve tested your add-on and either use WebExtensions or set the multiprocess compatible flag in your add-on manifest.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.


We would like to thank the following people for their recent contributions to the add-ons world:

  • eight04
  • Aayush Sanghavi
  • zombie
  • Doug Thayer
  • ingoe
  • totaki
  • Piotr Drąg
  • ZenanZha
  • Joseph Frazier
  • Revanth47

You can read more about their work in our recognition page.

Firefox Nightly: These Weeks in Firefox: Issue 10


  • The Sidebar WebExtension API (compatible with Opera’s API) has been implemented 💰
  • Preferences reorg and search project is fully underway. jaws and mconley led a “hack-weekend” this past weekend with some MSU students working on the reorg and search projects
  • A lot of people were stuck on Firefox Beta 44; we found out about it and fixed it. Read more about it on :chutten’s blog
  • According to our Telemetry, ~62% of our release population has multi-process Firefox enabled by default now 😎
  • Page Shot is going to land in Firefox 54. We are planning to make it a WebExtension so that users can remove it fully if they choose to.

Friends of the Firefox team

Project Updates

Activity Stream

Content Handling Enhancement

Electrolysis (e10s)

  • e10s-multi is tentatively targeted to ride the trains in Firefox 55
    • Hoping to use a scheme where we measure the user’s available memory in order to determine maximum content process count
    • Here’s the bug to track requirements to enable e10s-multi on Dev Edition by default

Firefox Core Engineering

Form Autofill

Go Faster

  • 1-day uptake of system add-ons is ~85% in beta (thanks to restartless), and ~72% in release (Wiki)

Platform UI and other Platform Audibles



  • Fixed a glaring problem with one-off buttons in scaled (zoomed) display configurations that made the search settings button appear on a separate line.
  • We now correctly report when a search engine could not be installed due to an invalid format.
  • Some Places work, mostly in preparation for hi-res favicon support.

Sync / Firefox Accounts

Storage Management

Test Pilot

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Air Mozilla: Martes Mozilleros, 14 Feb 2017

Martes Mozilleros. Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Chris H-C: The Most Satisfying Graph

There were a lot of Firefox users on Beta 44.

Usually this is a good thing. We like having a lot of users[citation needed].

It wasn’t a good thing this time, as Beta had already moved on to 45. Then 46. Eventually we were at Beta 52, and the number of users on Beta 44 was increasing.

We thought maybe it was because Beta 44 had the same watershed as Release 43. Watershed? Every user running a build before a watershed must update to the watershed first before updating to the latest build. If you have Beta 41 and the latest is Beta 52, you must first update to Beta 44 (watershed) so we can better ascertain your cryptography support before continuing on to 48, which is another watershed, this time to do with fourteen-year-old processor extensions. Then, and only then, can you proceed to the currently-most-recent version, Beta 52.
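The watershed constraint can be sketched as a simple path computation. This is an illustrative model only, not how Mozilla's actual update server decides anything; the version numbers follow the Beta 41 → 44 → 48 → 52 example from the post:

```rust
/// Illustrative model of watershed updates: a client on `current`
/// must step through every watershed version between it and `latest`
/// before it can reach `latest`. `watersheds` is assumed sorted
/// ascending; `filter` preserves that order.
fn update_path(current: u32, latest: u32, watersheds: &[u32]) -> Vec<u32> {
    let mut path: Vec<u32> = watersheds
        .iter()
        .copied()
        .filter(|&w| w > current && w < latest)
        .collect();
    path.push(latest);
    path
}
```

In this model a user on Beta 41 must pass through 44 and 48 before reaching 52, so if the Beta 48 build is unavailable, everyone at or before Beta 44 gets stuck there.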

(If you install afresh, the installer has the smarts to figure out your computer’s cryptographic and CPU characteristics and suitability so that new users jump straight to the front of the line)

Beta 44 being a watershed should, indeed, require a longer-than-usual lifetime of the version, with respect to population. If this were the only effect at play we’d expect the population to quickly decrease as users updated.

But they didn’t update.

It turns out that whenever the Beta 44 users attempted to download an update to that next watershed release, Beta 48, they were getting a 404 Not Found. At some point, the watershed Beta 48 build had been removed from the server, possibly due to age (we can’t keep everything forever). So whenever the users on Beta 44 wanted to update, they couldn’t. To compound things, any time a user on a build before Beta 44 wanted to update, they had to go through Beta 44. Where they were caught.

This was fixed on… well, I’ll let you figure out which day it was fixed on:


This is now the most satisfying graph I’ve ever plotted at Mozilla.


David Lawrence: Happy BMO Push Day!

The following changes have been pushed to

  • [1337382] clicking on ‘show’ after updating a bug to show the list of people emailed no longer works
  • [1280393] [a11y] All inputs and selects need to be labeled properly

Discuss these changes on

This Week In Rust: This Week in Rust 169

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is derive_builder, which automatically implements the builder pattern for arbitrary structs. It now supports macro 1.1 (custom derive, stable since Rust 1.15). Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

153 pull requests were merged in the last week.

New Contributors

  • Aaron Power
  • Alexander Battisti
  • bjorn3
  • Charlie Fan
  • Gheorghe Anghelescu
  • Giang Nguyen
  • Henning Kowalk
  • Ingvar Stepanyan
  • Jan Zerebecki
  • Jordi Polo
  • Mario
  • Rob Speer
  • Shawn Walker-Salas

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

Closed RFCs

The following proposals were rejected by the team after their 'final comment period' elapsed.

No RFCs were closed this week!

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!


Issues in final comment period:

Other significant issues:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.