Mozilla Addons Blog: More Recommended extensions added to Firefox for Android Nightly

As we mentioned recently, we’re adding Recommended extensions to Firefox for Android Nightly as a broader set of APIs become available to accommodate more add-on functionality. We just updated the collection with some new Recommended extensions, including…

Mobile favorites Video Background Play Fix (keeps videos playing in the background even when you switch tabs) and Google Search Fixer (mimics the Google search experience on Chrome) are now in the fold.

Privacy-related extensions FoxyProxy (proxy management tool with advanced URL pattern matching) and Bitwarden (password manager) join popular ad blockers Ghostery and AdGuard.

Dig deeper into web content with Image Search Options (customizable reverse image search tool) and Web Archives (view archived web pages from an array of search engines). And if you end up wasting too much time exploring images and cached pages you can get your productivity back on track with Tomato Clock (timed work intervals) and LeechBlock NG (block time-wasting websites).

The new Recommended extensions will become available for Firefox for Android Nightly on 26 September. If you’re interested in exploring these new add-ons and others on your Android device, install Firefox Nightly and visit the Add-ons menu. Barring major issues while testing on Nightly, we expect these add-ons to be available in the release version of Firefox for Android in November.

The post More Recommended extensions added to Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

Data@Mozilla: Data Publishing @ Mozilla


Mozilla’s history is steeped in openness and transparency – it’s simply core to what we do and how we see ourselves in the world. We are always looking for ways to bring our mission to life in ways that help create a healthy internet and support the Mozilla Manifesto. One of our commitments says: “We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.”

To this end, we have spent a good amount of time considering how we can publicly share our Mozilla telemetry data sets – it is one of the simplest and most effective ways we can enable collaboration and share knowledge. But only if it can be done safely and in a privacy-protecting, principled way. We believe we’ve designed a way to do this, and we are excited to outline our approach here.

Making data public not only  allows us to be transparent about our data practices, but directly demonstrates how our work contributes to our mission. Having a publicly available methodology for vetting and sharing our data demonstrates our values as a company. It will also enable other research opportunities with trusted scientists, analysts, journalists, and policymakers in a way that furthers our efforts to shape an internet that benefits everyone.

Dataset Publishing Process

We want our data publishing review process, as well as our review decisions, to be public and understandable, similar to our Mozilla Data Collection program. To that end, our full dataset publishing policy, and details about what considerations we weigh before determining what is safe to publish, can be found on our wiki here. Below is a summary of the critical pieces of that process.

The goal of our data publishing process is to:

  • Reduce friction for data publishing requests with low privacy risk to users;
  • Have a review system of checks and balances that considers both data aggregations and data level sensitivities to determine privacy risk prior to publishing, and;
  • Create a public record of these reviews,  including making data and the queries that generate it publicly available and putting a link to the dataset + metadata on a public-facing Mozilla property.

Having a dataset published requires filling out a publicly available request on Bugzilla.  Requesters will answer a series of questions, including information about aggregation levels, data collection  categories, and dimensions or metrics that include sensitive data.

A data steward will review the bug request.  They will help ensure the questions are correctly answered and determine if the data can be published or whether it requires review by our Trust & Security or  Legal teams.

When a request is approved, our telemetry data engineering team will:

  • Write (or review) the query
  • Schedule it to update on the desired frequency
  • Include it in the public-facing dataset infrastructure, including metadata that links the public data back to the review bug.

Finally, once the dataset is published, we’ll announce it on the Data @ Mozilla blog. It will also be added to

Want to know more?

Questions? Contact us at

Data@Mozilla: This Week in Glean: glean-core to Wasm experiment

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index.



In the past week, Alessio, Mike, Hamilton and I got together for the Glean.js workweek. Our purpose was to build a proof of concept of a Glean SDK that works in JavaScript environments. You can expect a TWiG in the next few weeks about the outcome of that. Today I am going to talk about something I tried out in preparation for that week: attempting to compile glean-core to Wasm.


A quick primer




glean-core is the heart of the Glean SDK, where most of the logic and functionality of Glean lives. It is written in Rust and communicates with the language bindings in C#, Java, Swift or Python through an FFI layer. For a comprehensive overview of the Glean SDK’s architecture, please refer to Jan-Erik’s great blog post and talk on the subject.



From the WebAssembly website:

“WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.”

Or, from Lin Clark’s “A cartoon intro to WebAssembly”:

“WebAssembly is a way of taking code written in programming languages other than JavaScript and running that code in the browser.”


Why did I decide to do this?


On the Glean team we make an effort to move as much of the logic as possible to glean-core, so that we don’t have too much code duplication on the language bindings and guarantee standardized behaviour throughout all platforms.

Since that is the case, it was counterintuitive to me that, when we set out to build a version of Glean for the web, we wouldn’t rely on the same glean-core as all our other language bindings. The hypothesis was: let’s make JavaScript just another language binding, by making our Rust core compile to a target that runs in the browser.

Rust is known for making an effort to provide a great Rust-to-Wasm experience, and the Rust and WebAssembly working group has built awesome tools that make the boilerplate for such projects much leaner.


First try: compile glean-core “as is” to Wasm


Since this was my first try in doing anything Wasm, I started by following MDN’s guide “Compiling from Rust to WebAssembly”, but instead of using their example “Hello, World!” Rust project, I used glean-core.

From that guide I learned about wasm-pack, a tool that deals with the complexities of compiling a Rust crate to Wasm, and wasm-bindgen, a tool that exposes, among many other things, the #[wasm_bindgen] attribute, which, when added to a function, makes that function accessible from JavaScript.

The first thing that became obvious was that it would be much harder to compile glean-core directly to Wasm than I had hoped. Passing complex types across the Wasm boundary has many limitations, and I was not able to add the #[wasm_bindgen] attribute to trait objects, or to structs that contain trait objects or lifetime annotations. I needed a simpler API surface to make the connection between Rust and JavaScript. Fortunately, I had that in hand: glean-ffi.

Our FFI crate exposes functions that rely on a global Glean singleton and have relatively simple signatures. These functions are the ones accessed by our language bindings through a C FFI. Most of the Rust complex structures are hidden by this layer from the consumers.

Perfect! I proceeded to add the #[wasm_bindgen] attribute to one of our entrypoint functions: glean_initialize. This uncovered a limitation I didn’t know about: you can’t add this attribute to functions that are unsafe, which unfortunately this one is.
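As a rough sketch of the kind of refactor that implies (all names and types here are hypothetical, not the real glean-ffi signatures): the unsafe FFI entry point gets a thin safe wrapper that owns the unsafe block, and that safe wrapper is where #[wasm_bindgen] could then be attached.

```rust
// Hypothetical stand-in for an unsafe FFI entry point: unsafe because
// it dereferences a raw pointer handed over the C FFI boundary.
unsafe fn glean_initialize_raw(cfg: *const u8, len: usize) -> bool {
    // SAFETY: the caller guarantees `cfg` points to `len` valid bytes.
    let slice = unsafe { std::slice::from_raw_parts(cfg, len) };
    !slice.is_empty()
}

// A safe wrapper taking a borrowed slice instead of a raw pointer.
// In a real Wasm build, #[wasm_bindgen] would go on this function,
// since the attribute cannot be applied to unsafe functions.
pub fn glean_initialize(cfg: &[u8]) -> bool {
    // SAFETY: the pointer and length come from a valid slice.
    unsafe { glean_initialize_raw(cfg.as_ptr(), cfg.len()) }
}

fn main() {
    let cfg = b"{\"app_id\":\"test\"}";
    println!("initialized: {}", glean_initialize(cfg)); // prints "initialized: true"
}
```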

My assumption that I would be able to just expose the API of glean-ffi to JavaScript by compiling it to Wasm, without making any changes to it, was not holding up. I would have to do some refactoring to make that work. But so far I hadn’t even gotten to the actual compilation step; the error I was getting was a syntax error. I wanted to go through compilation and see if it completed before diving into any refactoring work. So I removed the #[wasm_bindgen] attribute for the time being and made a new attempt at compiling.

Now I got a new error. Progress! If you clone the Glean repository, install wasm-pack, and run wasm-pack build inside the glean-core/ffi/ folder right now, you are bound to get this same error and here is one important excerpt of it:


fatal error: 'sys/types.h' file not found
cargo:warning=#include <sys/types.h>
cargo:warning=         ^~~~~~~~~~~~~
cargo:warning=1 error generated.
exit code: 1

--- stderr

error occurred: Command "clang" "-Os" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=wasm32-unknown-unknown" "-Wall" "-Wextra" "-DMDB_IDL_LOGN=16" "-o" "<...>/target/wasm32-unknown-unknown/release/build/lmdb-rkv-sys-5e7282bb8d9ba64e/out/mdb.o" "-c" "<...>/.cargo/registry/src/" with args "clang" did not execute successfully (status code exit code: 1)

One of glean-core’s dependencies is rkv, a storage crate we use for persisting metrics before they are collected and sent in pings. This crate depends on LMDB, which is written in C; hence the clang error.

I do not have extensive experience in writing C/C++ programs, so this was not familiar to me. I figured out that the file this error points to as “not found”, <sys/types.h>, is a header file that should be part of libc. This compiles just fine when trying to compile for our usual targets, so I had a hunch that maybe I just didn’t have the proper libc files for compiling to Wasm targets.

Internet searching pointed me to wasi-libc, a libc for WebAssembly programs. Promising! With this, I retried compiling glean-ffi to Wasm.  I just needed to run the build command with added flags:

CFLAGS="--sysroot=/path/to/the/newly/built/wasi-libc/sysroot" wasm-pack build

This didn’t work immediately; the error messages told me to add some extra flags to the command, which I did without thinking much. The final command was:

CFLAGS="--sysroot=/path/to/wasi-sdk/clone/share/wasi-sysroot -D_WASI_EMULATED_MMAN -D_WASI_EMULATED_SIGNAL" wasm-pack build

I would advise the reader not to get too excited at this point. This command still doesn’t work. It will return yet another set of errors and warnings, mostly related to “usage of undeclared identifiers” or “implicit declaration of functions”. Most of the identifiers that were erroring started with the pthread_ prefix, which reminded me of something I had read in the README of wasi-sdk, a toolkit for compiling C programs to WebAssembly that includes wasi-libc:

“Specifically, WASI does not yet have an API for creating and managing threads yet, and WASI libc does not yet have pthread support”.

That was it. I was done trying to compile glean-core to Wasm “as is” and decided to try another way. I could have tried to abstract away our usage of rkv so that depending on it didn’t block compilation to Wasm, but that was so big a refactoring task that I considered it a blocker for this experiment.


Second try: take a part of glean-core and compile that to Wasm


After learning that it would require way too much refactoring of glean-core and glean-ffi to get them to compile to Wasm, I decided to try a different approach: take a small, self-contained part of glean-core and compile that to Wasm.

Earlier this year I had a small taste of trying to rewrite part of glean-core in Javascript for the distribution simulators that we added to The Glean Book. To make the simulators work I essentially had to reimplement histograms code and part of the distribution metrics code in Javascript.

The histograms code is very self-contained, so it was a perfect candidate to single out for this experiment. I did just that, and I was actually able to get it to compile without errors fairly quickly as a standalone thing (you can compare the histogram code in the glean-to-wasm repo with the histogram code in the Glean repo).

After getting this to work, I created three accumulation functions that mimic how each one of the distribution metric types works. These functions would then be exposed to JavaScript. The resulting API looks like this:

pub fn accumulate_samples_custom_distribution(
    range_min: u32,
    range_max: u32,
    bucket_count: usize,
    histogram_type: i32,
    samples: Vec<u64>,
) -> String

pub fn accumulate_samples_timing_distribution(
    time_unit: i32,
    samples: Vec<u64>
) -> String

pub fn accumulate_samples_memory_distribution(
    memory_unit: i32,
    samples: Vec<u64>
) -> String

Each one of these functions creates a histogram, accumulates the given samples into this histogram, and returns the resulting histogram as a JSON-encoded string. I tried getting them to return a HashMap<u64, u64> at first, but that is not supported.
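As a rough, dependency-free sketch of that “accumulate into a map, return a JSON string” pattern (this is not Glean’s actual bucketing algorithm; the function name and linear bucketing are made up for illustration):

```rust
use std::collections::HashMap;

// Hypothetical sketch: accumulate samples into bucket -> count,
// then hand the result to JavaScript as a JSON-encoded string,
// since returning HashMap<u64, u64> through wasm-bindgen is not supported.
// `bucket_width` must be non-zero.
pub fn accumulate_samples(bucket_width: u64, samples: &[u64]) -> String {
    let mut hist: HashMap<u64, u64> = HashMap::new();
    for s in samples {
        // The bucket key is the lower bound of the bucket the sample falls in.
        *hist.entry((s / bucket_width) * bucket_width).or_insert(0) += 1;
    }
    // Serialize by hand to keep the sketch dependency-free;
    // real code would use serde_json instead.
    let mut entries: Vec<(u64, u64)> = hist.into_iter().collect();
    entries.sort();
    let body: Vec<String> = entries
        .iter()
        .map(|(k, v)| format!("\"{}\":{}", k, v))
        .collect();
    format!("{{{}}}", body.join(","))
}

fn main() {
    // Samples 1 and 5 land in bucket 0; 12 and 13 in bucket 10; 27 in bucket 20.
    println!("{}", accumulate_samples(10, &[1, 5, 12, 13, 27]));
    // → {"0":2,"10":2,"20":1}
}
```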

For this I was still following MDN’s guide “Compiling from Rust to WebAssembly”, which I can’t recommend enough, and after I got my Rust code to compile to Wasm it was fairly straightforward to call the functions imported from the Wasm module inside my Javascript code.

Here is a little taste of what that looked like:

import("glean-wasm").then(Glean => {
    const data = JSON.parse(
        Glean.accumulate_samples_timing_distribution(
            unit, // A Number value between 0 - 3
            values // A BigUint64Array with the sample values
        )
    );
    // <Do something with data>
});

The only hiccup I ran into was that I needed to change my code to use the BigInt number type instead of the default Number type in JavaScript. That is necessary because, in Rust, my functions expect a u64, and BigInt is the JavaScript type that maps to it.
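A minimal, standalone sketch of that requirement (no Wasm module needed): the sample values have to be BigInts, because BigUint64Array refuses plain Numbers.

```javascript
// Rust u64 maps to JavaScript BigInt, so the samples passed to the
// Wasm functions must be BigInt values (note the `n` suffix).
const values = new BigUint64Array([1n, 5n, 12n]); // ok: BigInt literals

// Plain Numbers are not implicitly converted to BigInt:
let threw = false;
try {
  new BigUint64Array([1, 5, 12]); // TypeError: Cannot convert 1 to a BigInt
} catch (e) {
  threw = e instanceof TypeError;
}

console.log(threw, values.length); // → true 3
```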

This code can be checked out at:

And there is a demo of it working in:


Final considerations


This was a very fun experiment, but does it validate my initial hypothesis:

Should we compile glean-core to Wasm and have Javascript be just another language binding?

We definitely can do that. Even though my first attempt was never completed, if we abstract away all the dependencies we have that can’t be compiled to Wasm, refactor the unsafe functions out, and deal with whatever other roadblocks we find, we can do it. The effort that would take, though, I believe is not worth it. It would take us much less time to rewrite glean-core’s code in JavaScript. Spoiler alert for our upcoming TWiG about the Glean.js workweek: in just a week we were able to get a functioning prototype of exactly that.

Our requirements for Glean software for the web are different from our requirements for a native version of Glean. Different enough that the burden of maintaining two versions of glean-core, one in Rust and another in JavaScript, is probably smaller than the amount of work and hacks it would take to build a single version that serves both platforms.

Another issue is compatibility. Wasm is very well supported, but there are environments that still don’t support it. It would be suboptimal if we went through the trouble of changing glean-core so it compiles to Wasm and then still had to make a JavaScript-only version for compatibility reasons.

My conclusion is that although we can compile glean-core to Wasm, it doesn’t mean that we should do that. The advantages of having a single source of truth for the Glean SDK are very enticing, but at the moment it would be more practical to rewrite something specific for the web.

Jeff Klukas: The Nitty-Gritty of Moving Data with Apache Beam

Summary of a talk delivered at Apache Beam Digital Summit on August 24, 2020.

Title slide

In this session, you won’t learn about joins or windows or timers or any other advanced features of Beam. Instead, we will focus on the real-world complexity that comes from simply moving data from one system to another safely. How do we model data as it passes from one transform to another? How do we handle errors? How do we test the system? How do we organize the code to make the pipeline configurable for different source and destination systems?

We will explore how each of these questions is addressed in Mozilla’s open source codebase for ingesting telemetry data from Firefox clients. By the end of the session, you’ll be equipped to explore the codebase and documentation on your own to see how these concepts are composed together.


Karl Dubost: Week notes - 2020 w39 - worklog - A new era

Mozilla Webcompat Team New Management

So the Mozilla Webcompat team is entering a new era. By the time this is published, Mike Taylor will no longer be the manager of the webcompat team at Mozilla, a role he held since August 2015. He decided to leave; Monday, September 21 was his last day. We had to file an issue about this.

The new interim manager is… well… myself.

So last week and this week will be a lot about:

  • getting a better understanding of the tasks and meetings that Mike was attending.
  • trying to readjust schedules and figure out how to get a bit of sleep in a distributed organization that has most of its meetings scheduled toward friendly European and American time zones. Basically, all meetings are outside a reasonable working timeframe (8:00 to 17:00 Japan Time).
  • trying to figure out how to switch from peer to manager with the other people on the webcompat team. I want to remove any sources of stress.

Hence these notes restarting. I will try to keep track of what I do and what I learn, both for the public, but mostly for my teammates.

Currently the Mozilla webcompat team is composed of these wonderful people:

Regular Contributors:

Softvision Contractors:

Mozilla Employees:

A lot of reading, a lot of thinking around management (probably more about that later).

I always said to Mike (and previous managers) that I was not interested in a management position. But I deeply care about the webcompat project, and I want it to thrive as much as possible. I never associated management with a sense of promotion or career growth. I'm very careful about the issues that positions of power create both ways: from the manager toward the people being managed, and from the people toward their manager. Power is often a tool of corruption and abuse, and makes some people abandon their sense of autonomy and responsibility. The word interim in the title here is quite important. If someone more qualified wants to jump into the job, please reach out to Lonnen or Andrew Overholt. If anyone on the webcompat team is not satisfied, I will happily step down.

Last but not least, thanks to Mike for having done this job all these years. Mike has a talent for being human and in touch with people. I wish him a bright journey in his new endeavors.

Firefox Cross-Functional meeting

  • Goal: Coordinate what is ready to be shipped in Firefox and keep track of the projects status
  • When: Wednesday 09:00-10:00 (PDT) - Thursday 01:00-02:00 (JST) (will be 02:00-03:00 winter time)
  • Frequency: Every 3 weeks
  • Owner: Thomas Elin
  • Notes: The meeting uses Trello to track the shipping of Firefox features. The webcompat-relevant cards (members only) need to be updated every 2 weeks (Tuesday morning Japan Time, aka Monday evening for the rest of the world). I didn't attend. They have a slide deck, which unfortunately is not accessible to the public.

ETP workarounds for site breakage

Rachel Tublitz asked for an update about ETP workarounds for site breakage for the What's New with Firefox 82 article for the SUMO team. She's doing an amazing job compiling information so the SUMO team can be prepared in advance of the release and able to support users.

Thomas delivered on it two months ago: Add support for shimming resources blocked by Tracking Protection. The progress is tracked on the webcompat OKR board. The latest update from Thomas is:

We're likely to let the release of ETP shims slip into the 83 release instead of 82, due to the UX team wanting some more time to think through the way the ETP "blocked content" interfaces interact with shims. In that case they will continue to be a Nightly-only feature during the 82 release cycle.

Webcompat reported on Fenix

Congrats to Dennis for releasing the AC Report Site Issue improvements. This is done.

WPT sync to Python 3

WPT stands for Web Platform Tests. The code was in Python 2, and it has been fully ported to Python 3 by James. This created a major sync issue at one point, which James recovered from over these last couple of days; he has started to implement safeguards so it doesn't happen again.

Webcompat Outreach Generator

Ksenia has been working on a tool for generating outreach templates to contact people about web compatibility issues. She has been using Svelte, and the work seems to be in pretty good shape. Mike and Guillaume have been helping with the review.

Webcompat triage and testing

Oana and Ciprian have tirelessly triaged all the incoming webcompat issues. More specifically, they are starting to test some JavaScript frameworks to detect webcompat issues. The JS frameworks were installed by Guillaume.

Webcompat Bug Triage Priority

How do we define the priority of bugs causing webcompat issues?

  • P1: This bug breaks either a lot of sites, or a top site. It should be fixed first.
  • P2: This bug breaks either a lot of sites, or a top site. It should be fixed next.
  • P3: This bug breaks some sites, and should eventually get fixed. These bugs probably end up as P2s and P1s at some point.

Some webcompat bugs

Some meetings

  • The Channel meeting is a twice-weekly check-in on the status of the active releases with the release team. The latest happened on 2020-09-21. The notes include links to postmortems, such as the one for release 80.

Some notes, thoughts

  • Too many documents are without public access; that's not a good thing for a project like Mozilla, and it creates barriers to participation. In the context of the webcompat team, I'll try as much as possible to have all our work in public. We already do pretty well, but we can do even better.
  • Discovering about:pioneer in Firefox Nightly.
  • set up all the 1:1 meetings with my peers. It will be a busy Tuesday.
  • a lot of the passing of information is just copying things that already exist somewhere, but where the links were not given. Maybe it's just an intrinsic part of our human nature.
  • Solving some access issues for a bit of devops.
  • Hopefully next week will be less about understanding the tools and more about helping people work.


Mozilla Localization (L10N): L10n Report: September 2020 Edition


New localizers

  • Victor and Orif are teaming up to re-build the Tajik community.
  • Théo of Corsican (co).
  • Jonathan of Luganda (lg).
  • Davud of Central Kurdish (ckb).

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

Infrastructure and

As part of the effort to streamline and rationalize the localization infrastructure following the recent lay-offs, we have decided to decommission Elmo. Elmo is the project name for what has been the backbone of our localization infrastructure for over 10 years, and its public-facing part was hosted on (el as in “el-10-en (l10n)”, m(ozilla), o(rg) = elmo).

The practical consequences of this change are:

  • There are no more sign-offs for Firefox. Beta builds are going to use the latest content available in the l10n repositories at the time of the build.
  • The deadline for localization moves to the Monday before Release Candidate week. That’s 8 days before release day, and 5 more full days available for localization compared to the previous schedule. For reference, the deadline will be set in Pontoon to the day before (Sunday), since the actual merge happens in the middle of the day on Monday.
  • will be redirected to (the 400 – Bad Gateway error currently displayed is a known problem).

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 82 is currently in beta and will be released on October 20th. The deadline to update localization is on October 11 (see above to understand why it moved closer to the release date).

As you might have noticed, the number of new strings in Firefox has significantly decreased, with DevTools becoming less actively developed. Now more than ever it’s a good time to:

  • Test your builds.
  • Review pending suggestions in Pontoon for your locale, in Firefox but also other projects. Firefox alone has currently over 12 thousand suggestions pending across teams, with several locales well over 500 unreviewed suggestions.

What’s new or coming up in mobile

This last month, as announced – and as you have probably noticed – we have been reducing the number, and priority, of mobile products to localize. We are now focusing much more on Firefox for Android and Firefox for iOS, our original flagship products for mobile. Please therefore refer to the “star” metric in Pontoon to prioritize your work for mobile.

The Firefox for Android schedule from now on should give two weeks out of four for localization work – as it did for Focus. This means strings will be landing during two weeks in Pontoon – and then you will have two weeks to work on those strings so they can make it into the next version. Check the deadline section in Pontoon to know when the l10n deadline for the next release is.

Concerning iOS: with iOS 14 we can now set Firefox as the default browser! Thanks to everyone who has helped localize the new strings that will enable this functionality globally.

What’s new or coming up in web projects

Common Voice

Support will continue with reduced staff. Though there won’t be new features introduced in the next six months, the team is still committed to fixing high-priority bugs, adding newly requested languages, and releasing updated datasets. Implementation will take longer than before. Please follow the project’s latest updates on Discourse.

WebThings Gateway

The project is being spun out of Mozilla as an independent open source project. It will be renamed from Mozilla WebThings to WebThings and will be moved to a new home at For other FAQs, please check out here. When the transition is complete, we will update everyone as soon as possible.

What’s new or coming up in SuMo

It would be great to get the following articles localized in Indonesian in the upcoming release for Firefox for iOS:

What’s new or coming up in Pontoon

  • Mentions. We have added the ability to mention users in comments. After you type @ followed by any character, a dropdown will show up allowing you to select users from the list using Tab, Enter or the mouse. You can narrow down the list by typing more characters. Kudos to April, who has done an excellent job from the design and research phase all the way to implementing the final details of this feature! (Image: screenshot of Pontoon showing the new “mentions” feature.)
  • Download Terminology and TM from dashboards. Thanks to our new contributor Anuj Pandey you can now download TBX and TMX files directly from Team and Localization dashboards, without needing to go to the Translate page. Anuj also fixed two other bugs that will make the Missing translation status more noticeable and remove hardcoded email addresses from the codebase.

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR Blog: Firefox Reality 12


The latest version of Firefox Reality for standalone VR headsets brings a host of long-awaited features we're excited to reveal, as well as improved stability and performance.

Add-on support

Firefox Reality is the first and only browser to bring add-on support to the immersive web. Now you can download powerful extensions that help you take control of your VR browsing experience. We started with favorites like uBlock, Dark Reader, and Privacy Badger.


Autofill

Ever get tired of typing your passwords in the browser? This can be tedious, especially with VR headset controllers. Now your browser can do the work of remembering and entering your passwords and other frequent form text with our autofill feature.

Redesigned library and updated status bar

We’ve completely redesigned and streamlined our library and simplified our status bar. You can also find additional information on the status bar, including indicators for the battery levels of controllers and the headset, as well as time/date info.

(Image: Find the Bookmarks menu in our redesigned Library interface.)
(Image: Indicators for controller and headset battery life.)
(Image: Find the Addons list in our redesigned Library interface.)

Redesigned Content Feed

We’ve also redesigned our content feed for ease of navigation and discovery of related content organized by the categories in the left menu. Stay tuned for this change rolling out to your platform of choice soon.


The future of Firefox Reality

Look for Firefox Reality 12 available now in the HTC, Pico and Oculus stores. This feature-packed release of Firefox Reality will be the last major feature release for a while as we gear up for a deeper investment in Hubs. But not to worry! Firefox Reality will still be well supported and maintained on your favorite standalone VR platform.

Contribute to Firefox Reality!

Firefox Reality is an open source project. We love hearing from and collaborating with our developer community. Check out Firefox Reality on GitHub and help build the open immersive web.

Daniel Stenberg: everything curl five years

The first content to the book Everything curl was committed on September 24, 2015 but I didn’t blog about it until several months later in December 2015: Everything curl – work in progress.

At the time of that blog post, the book was already at 13,000 words and 115 written subsections. I still had that naive hope that I would have it nearly “complete” by the summer of 2016. Always the optimist.

Today, the book is at over 72,000 words with content in 600 subsections – with just 21 subtitles noted “TBD” to signal that there’s still content to add there. The PDF version of it now clocks in at over 400 pages.

I’ve come to realize and accept that it will never be “complete” and that we will just keep on working on it indefinitely since curl itself keeps changing and we keep improving and expanding texts in the book.

Right now, we have 21 sections marked as not done, but we’ve also added features over these five years that we haven’t described in the book yet. And there are probably other missing areas that would benefit the book too. There’s no hurry; we’ll just add more content when we get around to it.

Everything curl is quite clearly the most complete book and resource about curl, libcurl, the project and how all of it works. We have merged contributions from 39 different authors and we’re always interested in getting more help!

Printed version

We’ve printed two editions of the book: the 2017 and the 2018 versions. As of 2020, the latest edition is out of print. If you really want one, email Dan Fandrich as mentioned on the web page this link takes you to. Maybe we can make another edition a reality.

The book was always meant to remain open and free; we only sell the printed version because it costs actual money to produce it.

For a long time we also offered e-book versions of Everything curl, but sadly gitbooks removed those options in a site upgrade a while ago, so now we unfortunately only offer a web version and a PDF version.

Other books?

There are many books that mention curl and that have sections or parts devoted to various aspects of curl but there are not many books about just curl. curl programming (by Dan Gookin) is one of those rare ones.

Daniel StenbergReducing mallocs for fun

Everyone needs something fun to do in their spare time. And digging deep into curl internals is mighty fun!

One of the things I do in curl every now and then is to run a few typical command lines and count how much memory is allocated and how many memory allocation calls are made. This is good project hygiene and a basic check that we didn’t accidentally slip a malloc/free sequence into the transfer path or something.

We have extensive memory checks for leaks etc in the test suite so I’m not worried about that. Those things we detect and fix immediately, even when the leaks occur in error paths – thanks to our fancy “torture tests” that do error injections.

The amount of memory needed or number of mallocs used is more of a boiling frog problem. We add one now, then another months later and a third the following year. Each added malloc call is motivated within the scope of that particular change. But taken all together, does the pattern of memory use make sense? Can we make it better?


Now this is easy because when we build curl debug-enabled, we have a fancy logging system (we call it memdebug) that logs all calls to “fallible” system functions, so after the test is completed we can easily grep for them and count. It also logs the exact source file and line number.

cd tests
./runtests -n [number]
egrep -c 'alloc|strdup' log/memdump

Let’s start

Let me start out with a look at the history and how many allocations (calloc, malloc, realloc or strdup) we do to complete test 103. The reason I picked 103 is somewhat random, but I wanted to look at FTP and this test happens to do an “active” transfer of content and makes a total of 10 FTP commands in the process.

The reason I decided to take a closer look at FTP this time is that I fixed an issue in the main FTP source code file the other day, and that made me remember the Curl_pp_send() function we have. It is the function that sends FTP commands (and IMAP, SMTP and POP3 commands too; the family of protocols we internally refer to as the “ping pong protocols” because of their command-response nature, which is why the function has “pp” in its name).

When I reviewed the function now with my malloc police hat on, I noticed how it made two calls to aprintf(), our printf version that returns a freshly malloced area and that can even cause several reallocs in the worst case. This meant at least two mallocs per issued command. That’s a bit unnecessary, isn’t it?

What about a few older versions

I picked a few random older versions, checked them out from git, built them and counted the number of allocs they did for test 103:

7.52.1: 141
7.68.0: 134
7.70.0: 137
7.72.0: 123

It’s been up, but it has gone down too. Nothing alarming. Is that a good amount or a bad amount? We shall see…

Cleanup step one

The function gets printf-style arguments and sends them to the server. The sent command also needs CRLF appended to the data. It was easy to make sure the CRLF appending wouldn’t need an extra malloc; that was just sloppy of us to have there in the first place. Instead of mallocing a new printf format string with CRLF appended, it can use one in a stack-based buffer. I landed that as a first commit.

This trimmed off 10 mallocs for test 103.

Step two, bump it up a notch

The remaining malloc allocated the memory block for protocol content to send. It can be up to several kilobytes but is usually just a few bytes. It gets allocated in case it needs to be held on to if the entire thing cannot be sent off over the wire immediately. Remember, curl is non-blocking internally so it cannot just sit waiting for the data to get transferred.

I switched the malloc’ed buffer to instead use a ‘dynbuf’. That’s our internal “dynamic buffer” system that was introduced earlier this year and that we’re gradually switching all internals over to use instead of doing “custom” buffer management in various places. The internal API for dynbuf is documented here.

The internal API Curl_dyn_addf() adds a printf()-style string at the end of a “dynbuf”, and it seemed perfectly suitable to use here. I only needed to provide a vprintf() alternative since the printf() format was already received by Curl_pp_sendf()… I created Curl_dyn_vaddf() for this.

This single dynbuf is kept for the entire transfer so that it can be reused for subsequent commands and grow only if needed. Usually the initial 32 bytes malloc should be sufficient for all commands.

Not good enough

It didn’t help!

Counting the mallocs showed me with brutal clarity that my job wasn’t done there. Having dug this deep already I wasn’t ready to give this up just yet…

Why? Because Curl_dyn_addf() was still doing a separate alloc of the printf string that it then appended to the dynamic buffer. But okay, having our own printf() implementation in the code has its perks.

Add a printf() string without extra malloc

Back in May 2020 when I introduced this dynbuf thing, I converted the aprintf() code over to use dynbuf to truly unify our use of dynamically growing buffers. That was a main point with it after all.

As all the separate individual pieces I needed for this next step were already there, all I had to do was to add a new entry point to the printf() code that would accept a dynbuf as input and write directly into that (and grow it if needed), and then use that new function (Curl_dyn_vprintf) from Curl_dyn_addf().

Phew. Now let’s see what we get…

There are 10 FTP commands that previously did 2 mallocs each: 20 mallocs were spent in this function when test 103 was executed. Now we are down to the ideal case of one alloc in there for the entire transfer.

Test 103 after polish

The code right now in master (to eventually be released as 7.73.0 in a few weeks) shows a total of 104 allocations. That’s down from 123 in the previous release, which, not entirely surprisingly, is 19 fewer and thus perfectly matches the logic above.

All tests and CI ran fine. I merged it. This is a change that benefits all transfers done with any of the “ping pong protocols”. And it also makes the code easier to understand!

Compared to curl 7.52.1, this is a 26% reduction in the number of allocations; pretty good, and even compared to 7.72.0 it is still a 15% reduction.


There is always more to do, but there’s also a question of diminishing returns. I will continue to look at curl’s memory use going forward too and make sure everything is motivated and reasonable. At least every once in a while.

I have some additional ideas for further improvements in the memory use area to look into. We’ll see if they pan out…

Don’t count on me to blog about every such finding with this level of detail! If you want to make sure you don’t miss any of these fine-tunes in the future, follow the curl github repo.


Image by Julio César Velásquez Mejía from Pixabay

The Talospace ProjectFirefox 81 on POWER

Firefox 81 is released. In addition to new themes of dubious colour coordination, media controls now move to keyboards and supported headsets, the built-in JavaScript PDF viewer now supports forms (if we ever get a JIT going this will work a lot better), and there are relatively few developer-relevant changes.

This release heralds the first official change in our standard POWER9 .mozconfig since Fx67. Link-time optimization continues to work well (and in 81 the LTO-enhanced build I'm using now benches about 6% faster than standard -O3 -mcpu=power9), so I'm now making it a standard part of my regular builds with a minor tweak we have to make due to bug 1644409. Build time still about doubles on this dual-8 Talos II and it peaks out at almost 84% of its 64GB RAM during LTO, but the result is worth it.

Unfortunately PGO (profile-guided optimization) still doesn't work right, probably due to bug 1601903. The build system does appear to generate a profile properly, i.e., a controlled browser instance pops up, runs some JavaScript code, does some browser operations and so forth, and I see gcc-created .gcda files with all the proper count information, but then the build system can't seem to find them to actually tune the executable. This needs a little more hacking which I might work on as I have free time™. I'd also like to eliminate ac_add_options --disable-release as I suspect it is no longer necessary, but I need to do some more thorough testing first.

In any event, reliable LTO, at least with the current Fedora 32 toolchain, still represents real progress. I've heard concerns that some distributions are not making functional builds of Firefox for ppc64le (let alone ppc64, which has its own problems), though Fedora is not one of them. Still, if you have issues with your distribution's build and you are not able to build it for yourself, I may, if there is interest, put up a repo or a download spot for the binaries I use since I consider them reliable. Without further ado, here are the current .mozconfigs that I attest are functional.

Optimized Configuration

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24"
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-O3 -mcpu=power9"
ac_add_options --disable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full

#export GN=/uncomment/and/set/path/if/you/haz
Debug Configuration

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24"
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-Og -mcpu=power9"
ac_add_options --enable-debug
ac_add_options --disable-release
ac_add_options --enable-linker=bfd

#export GN=/uncomment/and/set/path/if/you/haz

About:CommunityContributors to Firefox 81 (and 80, whoops)

Errata: In our release notes for Firefox 80, we forgot to mention all the developers who contributed their first code change to Firefox in this release, 10 of whom were brand new volunteers! We’re grateful for their efforts, and apologize for not giving them the recognition they’re due on time. Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

As well, with the release of Firefox 81 we are once again honoured to welcome the developers who contributed their first code change to Firefox with this release, 18 of whom were brand new volunteers. Again, please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Firefox NightlyThese Weeks in Firefox: Issue 80


  • We now show three recommended articles when saving a webpage to Pocket.
The “Saved to Pocket” doorhanger is open. A “similar stories” section is open at the bottom with articles about cooking.

    • To enable this, set extensions.pocket.onSaveRecs to true, and restart.
    • This is an enhancement and thus will be turned on by default without an experiment.
  • We implemented a minimal skeleton UI which will display immediately when starting Firefox, intended to give early visual feedback to users on slow systems. Windows users can turn this on by setting the pref “browser.startup.preXulSkeletonUI” to true (currently only works for the default Firefox theme.)
  • We’re mentoring a set of students from MSU on various Picture-in-Picture improvements. Recent fixes include:
  • Changes to the Add-on Manager to support the Promoted Add-ons pilot program have been landed in Firefox 82, in particular to:
    • show in about:addons the new “Verified” and “Line extension” badges (Bug 1657476)
    • allow “Verified”, “Recommended” and “Line” extensions to be hosted on third party websites (Bug 1659530)
  • Reminder: You can help us test Fission (out-of-process iframes) in Nightly by setting fission.autostart to true in about:config, and restarting the browser
    • If you find any Fission bugs, please report them under the meta fission-dogfooding. The Fission team appreciates your help. 🙂

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Chris Jackson
  • Kriyszig
  • Michael Goossens
  • Reid Shinabarker

New contributors (🌟 = first patch)

  • 🌟 Tanner Drake made sure that we always respect the pref.
  • 🌟 Ben D (:rockingskier) implemented STOMP WebSocket message parsing.
  • 🌟 Chris Jackson, 🌟 Hunter Jones, 🌟 Reid Shinabarker, and 🌟 Manish Rajendran fixed many Picture-in-Picture bugs! See this issue’s PiP section for details.

Project Updates

Add-ons / Web Extensions

WebExtension APIs

Developer Tools

  • Shipping Server Sent Events (SSE) Inspector – Server-Sent Events (SSE) is a server push technology enabling a client to receive automatic updates from a server via HTTP connection (mdn). The SSE Inspector is part of the existing Network Panel in DevTools reusing the user interface for inspecting WebSockets.
    • Contributed by a student (former GSoC candidate)
  • DevTools Fission M2
    • Working on main architecture changes and adopting panel by panel.
    • Focusing on testing (preparation for the Fission Nightly experiment, Oct 9)
  • Marionette Fission
    • Main infrastructure changes landed, focusing on individual commands now.


  • The next milestone, M6b, has 29 remaining bugs. The most significant change remaining is moving most of the session history to the parent process rather than keeping it in each child. The current plan is to enable this for M6b. You can try it out by enabling the ‘fission.sessionHistoryInParent’ preference along with the ‘fission.autostart’ preference, and report any bugs related to bug 1656208.
  • A Fission Nightly experiment will be launched in early October.
  • In early October, Fission will also be available as an opt-in feature in about:preferences for Nightly only.

Installer & Updater

  • bytesized is taking on the long-standing update-related papercut known as the staged updates bug (353804). When this happens, you may see Firefox update…only to be prompted to update again immediately afterwards. The work is still in the planning stages (see details here) and will apply to partial updates only. Currently targeting Firefox 84.
  • nalexander is adding attribution support for macOS (1619353) which is targeting Firefox 83.

Password Manager

PDFs & Printing

  • Looking at potential blockers for 82
    • Some users are seeing long dialog loading times
    • Some users are seeing incorrect page sizes where the page is very small
    • Custom margins can no longer be set, adding support for them
    • Occasional preview errors when opening the dialog as the page is loading


  • bigiri has continued working on the ASRouter refactor and is down to just a few remaining broken tests
  • emalysz has been working on migrating the page action strings to fluent and lazifying the page action menu
  • gijs fixed pgo file writing to ensure we collect profiling data from non-webcontent child processes
  • gijs has been investigating different ways we can reduce the DOM size
  • mconley, gijs, florian, dthayer, and esmyth met to discuss the future of BHR, which the team will be working on next month


Search and Navigation

  • Separate private browsing engine feature (Nightly-only) has been disabled, while we figure out its destiny – Bug 1665301
  • Cleanup the search service after config modernization – Bug 1619922
    • The work is complete; any remaining changes will be handled as usual bug fixes.
  • Consolidation of aliases and search keywords – Bug 1650874
    • UX working on a restyle of search preferences
Address Bar
  • Urlbar Update 2
    • Release moved to Firefox 83, including both search shortcuts and tab to search.
    • Lots of fixes and polish for the search shortcuts functionality (Bug 1659204, Bug 1662477, Bug 1657801, Bug 1658624, Bug 1660778, …)
    • One-off buttons support key modifiers – CTRL/CMD to immediately search in new tab, SHIFT for the current tab (override search mode) – Bug 1657212

User Journey


  • New indicator has slipped again. Thankfully, we seem to have converged on a design that we think we can ship by default, so hopefully that will ride out in 83!
  • mconley is adding Notification Area icons to show device sharing state on Windows
    • This nearly made it in before the soft freeze, but got stymied by an unexpected shutdown leak, which mconley is now investigating.

The Mozilla BlogLaunching the European AI Fund

Right now, we’re in the early stages of the next phase of computing: AI. First we had the desktop. Then the internet. And smartphones. Increasingly, we’re living in a world where computing is built around vast troves of data and the algorithms that parse them. They power everything from the social platforms and smart speakers we use everyday, to the digital machinery of our governments and economies.

In parallel, we’re entering a new phase of  how we think about, deploy, and regulate technology. Will the AI era be defined by individual privacy and transparency into how these systems work? Or, will the worst parts of our current internet ecosystem — invasive data collection, monopoly, opaque systems — continue to be the norm?

A year ago, a group of funders came together at Mozilla’s Berlin office to talk about just this: how we, as a collective, could help shape the direction of AI in Europe. We agreed on the importance of a landscape where European public interest and civil society organisations — and not just big tech companies — have a real say in shaping policy and technology. The next phase of computing needs input from a diversity of actors that represent society as a whole.

Over the course of several months and with dozens of organizations around the table, we came up with the idea of a European AI Fund — a project we’re excited to launch this week.

The fund is supported by the Charles Stewart Mott Foundation, King Baudouin Foundation, Luminate, Mozilla, Oak Foundation, Open Society Foundations and Stiftung Mercator. We are a group of national, regional and international foundations in Europe that are dedicated to using our resources — financial and otherwise — to strengthen civil society. We seek to deepen the pool of experts across Europe who have the tools, capacity and know-how to catalogue and monitor the social and political impact of AI and data driven interventions — and hold them to account. The European AI Fund is hosted by the Network of European Foundations. I can’t imagine a better group to be around the table with.

Over the next five years, the European Commission and national governments across Europe will forge a plan for Europe’s digital transformation, including AI. But without a strong civil society taking part in the debate, Europe — and the world — risk missing critical opportunities and could face fundamental harms.

At Mozilla, we’ve seen first-hand the expertise that civil society can provide when it comes to the intersection of AI and consumer rights, racial justice, and economic justice. We’ve collaborated closely over the years with partners like European Digital Rights, Access Now, Algorithm Watch and Digital Freedom Fund. Conversely, we’ve seen what can go wrong when diverse voices like these aren’t part of important conversations: AI systems that discriminate, surveil, radicalize.

At Mozilla, we believe that philanthropy has a key role to play in Europe’s digital transformation and in keeping AI trustworthy, as we’ve laid out in our trustworthy AI theory of change. We’re honoured to be working alongside this group of funders in an effort to strengthen civil society’s capacity to contribute to these tech policy discussions.

In its first step, the fund will launch with a 1,000,000 € open call for funding, open until November 1. Our aim is to build the capacity of those who already work on AI and Automated Decision Making (ADM). At the same time, we want to bring in new civil society actors to the debate, especially those who haven’t worked on issues relating to AI yet, but whose domain of work is affected by AI.

To learn more about the European AI Fund visit

The post Launching the European AI Fund appeared first on The Mozilla Blog.

The Firefox FrontierHow to spot (and do something) about real fake news

Think you can spot fake news when you see it? You might be surprised even the most digitally savvy folks can (at times) be fooled into believing a headline or … Read more

The post How to spot (and do something) about real fake news appeared first on The Firefox Frontier.

Daniel Stenberga Google grant for libcurl work

Earlier this year I was the recipient of a monetary Google patch grant with the expressed purpose of improving security in libcurl.

This was an upfront payout under this Google program describing itself as “an experimental program that rewards proactive security improvements to select open-source projects”.

I accepted this grant for the curl project and I intend to keep working fiercely on securing curl. I recognize the importance of curl security as curl remains one of the most widely used software components in the world, and one that does network data transfers, which typically is a risky business. curl is responsible for a measurable share of all transfers done over the Internet on an average day. My job is to make sure those transfers are done as safely and securely as possible. It isn’t my only responsibility of course, as I have other tasks to attend to as well, but still.

Do more

Security is already and always a top priority in the curl project and for myself personally. This grant will of course further my efforts to strengthen curl and by association, all the many users of it.

What I will not do

When security comes up in relation to curl, some people like to mention and advocate for other programming languages. But curl will not be rewritten in another language. Instead we will increase our efforts to write good C and to detect problems in our code earlier and better.

Proactive counter-measures

Things we have done lately and working on to enforce everywhere:

String and buffer size limits – all string inputs and all buffers in libcurl that are allowed to grow now have a maximum allowed size that makes sense. This stops malicious uses that could make things grow out of control, and it helps detect programming mistakes that would lead to the same problems. Also, by making sure strings and buffers never become ridiculously large, we better avoid a whole class of integer overflow risks.

Unified dynamic buffer functions – by reducing the number of different implementations that handle “growing buffers” we reduce the risk of a bug in one of them, even if it is used rarely or the spot is hard to reach and “exercise” by the fuzzers. The “dynbuf” internal API first shipped in curl 7.71.0 (June 2020).

Realloc buffer growth unification – pretty much the same point as the previous, but we have earlier in our history had several issues when we had silly realloc() treatment that could lead to bad things. By limiting string sizes and unifying the buffer functions, we have reduced the number of places we use realloc and thus we reduce the number of places risking new realloc mistakes. The realloc mistakes were usually in combination with integer overflows.

Code style – we’ve gradually improved our code style checker over time and we’ve also gradually made our code style more strict, leading to fewer variations in code, in white spacing and in naming. I’m a firm believer that this makes the code more coherent and therefore more readable, which leads to fewer bugs and easier-to-debug code. It also makes the code easier to grep and search since there are fewer variations to scan for.

More code analyzers – we run every commit and PR through a large number of code analyzers to help us catch mistakes early, and we always remove detected problems. Analyzers used at the time of this writing: Codacy, Deepcode AI, Monocle AI, clang-tidy, scan-build, CodeQL, Muse and Coverity. That’s of course in addition to the regular run-time tools such as valgrind and sanitizer builds that run the entire test suite.

Memory-safe components – curl already supports getting built with a plethora of different libraries and “backends” to cater for users’ needs and desires. By properly supporting and offering users to build with components that are written in for example rust – or other languages that help developers avoid pitfalls – future curl and libcurl builds could potentially avoid a whole section of risks. (Stay tuned for more on this topic in a near future.)

Reactive measures

Recognizing that whatever we do and however tight a ship we run, we will still slip up every once in a while, is important; we should make sure we find and fix such slip-ups as thoroughly and as early as possible.

Raising bounty rewards. While not directly fixing things, offering more money in our bug-bounty program helps us get more attention from security researchers. Our ambition is to gently drive the reward amounts up progressively, to perhaps multi-thousand dollars per flaw, as long as we have funds to pay for them and we manage to keep the security vulnerabilities at a reasonably low frequency.

More fuzzing. I’ve said it before but let me say it again: fuzzing is really the top method to find problems in curl once we’ve fixed all flaws that the static analyzers we use have pointed out. The primary fuzzing for curl is done by OSS-Fuzz, that tirelessly keeps hammering on the most recent curl code.

Good fuzzing needs a certain degree of “hand-holding” to allow it to really test all the APIs and dig into the dustiest corners, and we should work on adding more “probes” and entry-points into libcurl for the fuzzer to make it exercise more code paths to potentially detect more mistakes.

See also my presentation testing curl for security.

Mike TaylorSeven Platform Updates from the Golden Era of Computing

Back in the Golden Era of Computing (which is what the industry has collectively agreed to call the years 2016 and 2017) I was giving semi-regular updates at the Mozilla Weekly Meeting.

Now this was also back when Potch was the Weekly Project All Hands Meeting module owner. If that sounds like a scary amount of power to entrust to that guy, well, that’s because it was.

(This doesn’t have anything to do with the point of this post, I’m just trying to game SEO with these outbound links.)

So anyways, the point of these updates was to improve communication between Firefox and Platform teams which were more siloed than you would expect, and generally just let people know about interesting Platform work other teams were doing. I don’t even remember how that task fell upon me, I think it was just cause I just showed up to do it.

Rumor has it that Chris Beard wanted to switch to Blink back then but was moved by my artwork, and that’s why Gecko still exists to this day.

(Full disclosure: I just made up this rumor, but please quote me as “Anonymous Source” and link back to here if anyone wants to run with it.)

This Week In RustThis Week in Rust 357

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters this week.

Learn Standard Rust
Learn More Rust
Project Updates

Call for Blog Posts

The Rust Core Team wants input from the community! If you haven't already, read the official blog and submit a blog post - it will show up here! Here are the wonderful submissions since the call for blog posts:

Crate of the Week

This week's crate is cargo-about, a handy cargo subcommand to list the dependencies and their licenses!

Thanks to Jimuazu for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

373 pull requests were merged in the last week

Rust Compiler Performance Triage

  • 2020-09-21: 2 Regressions, 5 Improvements, 4 Mixed

This was the first week of semi-automated perf triage, and thank goodness: There was a lot going on. Most regressions are either quite small or already have a fix published.

#72412 is probably the most interesting case. It fixes a pathological problem involving nested closures by adding cycle detection to what seems to be a relatively hot part of the code. As a result, most users will see a slight compile-time regression for their crates.

See the full report for more.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Sometimes you don't want the code to compile. The compiler's job is often to tell you that your code doesn't compile, rather than trying to find some meaning that allows compiling your code.

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Firefox UXFrom a Feature to a Habit: Why are People Watching Videos in Picture-in-Picture?

At the end of 2019, if you were using Firefox to watch a video, you saw a new blue control with a simple label: “Picture-in-Picture.” Even after observing and carefully crafting the feature with feedback from in-progress versions of Firefox (Nightly and Beta), our Firefox team wasn’t really sure how people would react to it. So we were thrilled when we saw signals that the response was positive.

Firefox’s Picture-in-Picture allows you to watch videos in a floating window (always on top of other windows) so you can keep an eye on what you’re watching while interacting with other sites, or applications.

From a feature to a habit

About 6 months after PiP’s release, we started to see some trends from our data. We know from our internal data that people use Firefox to watch video. In fact, some people watch video over 60% of the time when they’re using Firefox. And, some of these people use PiP to do that. Further, our data shows that people who use Picture-in-Picture open more PiP windows over time. In short, we see that not everyone uses PiP, but those who do seem to be forming a habit with it.

A habit is a behaviour “done with little or no conscious thought.”  So we asked ourselves:

  • Why is PiP becoming a habit for some people?
  • What are people’s motivations behind using PiP?

Fogg’s Behavior Model describes habits and how they form. We already knew two parts of this equation: Behavior and Ability. But we didn’t know Motivation and Trigger.

Behavior = Motivation, Ability, Trigger

Fogg’s Behavior Model.

To get at these “why” questions, we conducted qualitative research with people who use PiP, interviewing 11 of them to learn more about how they discovered the feature and how they use it in their everyday browsing. We were even able to observe these people using PiP in action. It’s always a privilege to speak directly with people who are using the product. Talking to people and observing their actions is an indispensable part of making something people find useful.

Now we’ll talk about the Motivation part of the habit equation by sharing how the people we interviewed use PiP.

Helps with my tasks

When we started to look at PiP, we were worried that the feature would bring some unintended consequences into people’s lives. Could PiP diminish their productivity by increasing distractibility? Surprisingly, from what we observed in these interviews, PiP helped some participants do their tasks, as opposed to being needlessly distracting. People are using PiP as a study tool, to improve their focus, or to motivate themselves to complete certain tasks.

PiP for note-taking

One of our participants was a student. He used Picture-in-Picture to watch lecture videos and take notes while doing his homework. PiP helped him complete and enhance a task.

PiP video open on the left with Pages applications in the main area of the screen

Taking notes in a native desktop application while watching a lecture video in picture-in-picture. (Recreation of what a participant did during an interview)

Breaks up the monotony of work

You might have this experience: listening to music or a podcast helps you “get in the zone” while you’re exercising or perhaps doing chores. It helps you lose yourself in the task and makes mundane tasks more bearable. Picture-in-Picture does the same for some people while they are at work, filling the surrounding silence.

“I just kind of like not having dead silence… I find it kind of motivating and I don’t know, it just makes the day seem less, less long.” — Executive Assistant to a Real Estate Developer

Calms me down

Multiple people told us they watch videos in PiP to calm themselves down. If they are reading a difficult article for work or study, or doing some art, watching ASMR or trance-like videos feels therapeutic. Not only does this calm people down, they said it can help them focus.

PiP on the bottom left with an article open in the main area of the screen

Reading an article in a native Desktop application while watching a soothing video of people running in picture-in-picture. (Recreation of what a participant did during an interview)

Keeps me entertained

And finally, some people use Picture-in-Picture for pure and simple entertainment. One person watches a comedic YouTuber talk about reptiles while playing a dragon-related browser game. Another watches a friend’s live game stream while playing a game themselves.

PiP video in the upper left with a game in the main area of the screen

Playing a browser game while watching a funny YouTube video. (Recreation of what a participant showed us during an interview)

Our research impact 

Some people have formed habits around PiP for the reasons listed above, and we also learned that there’s nothing gravely wrong with PiP that would prevent habit-forming. Our impact is therefore related to PiP’s strategy: do not make “habit-forming” a measure of PiP’s success. Instead, better support what people already do with PiP. In particular, PiP is getting more controls, for example for changing the volume.

Red panda in a PiP video

You don’t have to stop reading to watch this cute red panda in Picture-in-Picture


Share your stories

While conducting these interviews, we also prepared an experiment to test different versions of Picture-in-Picture, with the goal of increasing the number of people who discover it. We’ll talk more on that soon!

In the meantime, we’d like to hear even more stories. Are you using Picture-in-Picture in Firefox? Are you finding it useful? Please share your stories in the comments below, or send us a tweet @firefoxUX with a screenshot. We’d love to hear from you.


Thank you to Betsy Mikel for editing our blog post.

This post was originally published on Medium.

Mozilla VR BlogYour Security and Mozilla Hubs

Your Security and Mozilla Hubs

Mozilla and the Hubs team take internet security seriously. We do our best to follow best practices for web security and for securing data. This post provides an overview of how we secure access to your rooms and your data.

Room Authentication

In the most basic scenario, only people who know the URL of your room can access it. We use randomly generated strings in the URLs to make them hard to guess. If you need more security, you can limit your room to only allow users with Hubs accounts to join (usually, anyone can join regardless of account status). This is a server-wide setting, so you have to run your own Hubs Cloud instance to enable it.

You can also make rooms “invite only” which generates an additional key that needs to be used on the link to allow access. While the room ID can’t be changed, an “invite only” key can be revoked and regenerated, allowing you to revoke access to certain users.
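The general idea is easy to sketch. Here is a minimal illustration (not Hubs’ actual code; the alphabet, ID length, and key length are assumptions):

```python
import secrets

# Lowercase letters and digits, a typical URL-safe alphabet (assumed)
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def generate_room_id(length: int = 7) -> str:
    # The room ID is permanent, so it only needs to be hard to guess
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def generate_invite_key() -> str:
    # The invite key rides along on the link; regenerating it revokes
    # every previously shared "invite only" link at once
    return secrets.token_urlsafe(16)
```

The key property is that both values come from a cryptographically secure source (`secrets`), so a link cannot be guessed or enumerated in practice.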

Discord OAuth Integration

Alternatively, users can create a room via the Hubs Discord bot, and the room becomes bound to the security context of that Discord. In this scenario, a user’s identity is tied to their identity in Discord, and they only have access to rooms that are tied to channels they have access to. Users with “modify channel” permissions in Discord get corresponding “room owner” permissions in Hubs, which allows them to change room settings and kick users out of the room. For example, if I am a member of the private channel #standup, and there is a room tied to that channel, only members of that channel (including me) are allowed in the associated room. Anyone attempting to access the room will first need to authenticate via Discord.
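The permission mapping described above can be sketched as a toy model (the permission and role names here are illustrative assumptions, not Hubs’ real identifiers):

```python
def hubs_role(discord_permissions: set) -> str:
    # "Modify channel" permission in Discord maps to room-owner powers in Hubs
    return "room_owner" if "modify_channel" in discord_permissions else "member"

def can_join(member_channels: set, room_channel: str) -> bool:
    # A room bound to a private channel admits only that channel's members
    return room_channel in member_channels
```

In the #standup example, `can_join({"#standup"}, "#standup")` holds for channel members and fails for everyone else, which is exactly the access check the Discord bot enforces.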

How we secure your data

We collect minimal data on users. For any data that we do collect, all database data and backups are encrypted at rest. Additionally, we don’t store raw emails in our database; this means we can’t retrieve your email, only check whether the address you enter at login is in our database. All data is stored on a private subnet and is not accessible via the internet.
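A membership-only email store of this kind can be sketched in a few lines (a toy illustration assuming a SHA-256 digest with lowercase normalization; the real service’s hashing scheme may differ):

```python
import hashlib

def email_digest(email: str) -> str:
    # Normalize, then hash; the raw address never touches the database
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

class AccountStore:
    def __init__(self):
        self._digests = set()

    def register(self, email: str) -> None:
        self._digests.add(email_digest(email))

    def is_registered(self, email: str) -> bool:
        # We can answer "is this address in the database?" but we cannot
        # enumerate or recover the addresses themselves
        return email_digest(email) in self._digests
```

The asymmetry is the point: a login attempt can be verified, but there is no way to walk the store and read out addresses.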

For example, let’s go through what happens when a user uploads a file inside a room. First, the user uploads a personal photo to the room to share with others. The photo is encrypted with a unique key, and the resulting URL plus key is passed to all other users inside the room. Even if someone else finds the URL of the file, they cannot decrypt the photo without this key (and neither can the server operator!). The photo’s owner can choose to pin it to the room, which saves the encryption key in a database alongside the encrypted file. When you visit the room again, you can access the file, because the key is shared with room visitors. However, if the file owner leaves the room without pinning the photo, the photo is considered ‘abandoned data’: the key is erased, no user can access the file anymore, and the data itself is deleted within 72 hours.
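The pin/abandon lifecycle can be modeled compactly. This is a toy sketch: real Hubs uses proper authenticated encryption, while a one-time pad stands in here so the example stays dependency-free.

```python
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time-pad stand-in for real authenticated encryption
    return bytes(a ^ b for a, b in zip(data, pad))

class MediaStore:
    def __init__(self):
        self._blobs = {}         # file_id -> ciphertext (all the server keeps)
        self._pinned_keys = {}   # file_id -> key, only for pinned files

    def upload(self, photo: bytes):
        key = secrets.token_bytes(len(photo))
        file_id = secrets.token_urlsafe(8)
        self._blobs[file_id] = xor_bytes(photo, key)
        return file_id, key      # the key travels in the shared link, not the DB

    def pin(self, file_id, key):
        self._pinned_keys[file_id] = key   # pinning persists the key server-side

    def abandon(self, file_id):
        self._pinned_keys.pop(file_id, None)  # key erased; blob now unreadable

    def fetch(self, file_id, key=None):
        key = key if key is not None else self._pinned_keys.get(file_id)
        if key is None:
            raise PermissionError("abandoned data: no key available")
        return xor_bytes(self._blobs[file_id], key)
```

Notice that after `abandon`, the ciphertext may still exist briefly on disk, but with the key gone it is unrecoverable, which is why the 72-hour cleanup window is safe.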

All data is encrypted in transit via TLS. We do not currently support end-to-end encryption.

Hubs Cloud Security

When you deploy your own Hubs Cloud instance, you have full control over the instance and its data via AWS or DigitalOcean infrastructure--Mozilla simply provides the template and automatic updates. You can therefore integrate your own security measures and technology as you like. Everyone’s use case is different. Hubs Cloud is an as-is product, and we’re unable to predict its performance as you make changes to the template.

Server access is limited by SSH and sometimes two-factor authentication. For additional security, you can set stack template rules to restrict which IP addresses can SSH into the server.

How do we maintain Hubs Cloud with the latest security updates

We automatically update packages for security fixes and ship version updates on a monthly cadence, but if a security issue is exposed (either in our software or in third-party software), we can immediately update all stacks. We inherit our network architecture from AWS, which includes load balancing and DDoS protection.

Your security on the web is non-negotiable. Between maintaining security updates, authenticating users, and encrypting data at rest and in transit, we prioritize our users’ security needs. For any additional questions, please reach out to us. To contribute to Hubs, visit

Mozilla VR BlogYour Privacy and Mozilla Hubs

Your Privacy and Mozilla Hubs

At Mozilla, we believe that privacy is fundamental to a healthy internet. We especially believe that this is the case in social VR platforms, which process and transmit large amounts of personal information. What happens in Hubs should stay in Hubs.

Privacy expectations in a Hubs room

First, let’s discuss what your privacy expectations should be when you’re in a Hubs room. In general, anything transmitted in a room is available to everyone connected to that room. They can save anything that you send. This is why it’s so important to only give the Hubs link out to people you want to be in the room, or to use Discord authentication so only authorized users can access a room.

While some rooms may have audio falloff to declutter the audio in a room, users should still have the expectation that anyone in the room (or in the lobby) can hear what’s being said. Audio falloff is performed in the client, so anyone who modifies their client can hear you from anywhere in the room.
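To see why client-side falloff offers no privacy guarantee, consider a sketch of a linear falloff curve (the shape and parameters are assumptions, not Hubs’ actual audio code). A modified client can simply skip this function and play every stream at full volume:

```python
def falloff_gain(distance: float, ref_dist: float = 1.0, max_dist: float = 10.0) -> float:
    # Linear distance falloff, clamped to [0, 1]: full volume inside
    # ref_dist, silence beyond max_dist, linear in between
    if distance <= ref_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return 1.0 - (distance - ref_dist) / (max_dist - ref_dist)
```

The gain is applied after the audio stream has already reached the client, so the full-volume signal is always available to anyone in the room.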

Other users in the room have the ability to create recordings. While recording, the camera tool will display a red icon, and your avatar will indicate to others with a red icon that you are filming and capturing audio. All users are notified when a photo or video has been taken. However, users should still be aware that others could use screen recorders to capture what happens in a Hubs room without their knowledge.

Minimizing the data we collect on you

The only data we need to create an account for you is your email address, which we store hashed in an encrypted database. We don’t collect any additional personal information like birthdate, real name, or telephone numbers. Accounts aren’t required to use Hubs, and many features are available to users without accounts.

Processing data instead of collecting data

There’s a certain amount of information that we have to process in order to provide you with the Hubs experience. For example, we receive and send to others the name and likeness of your avatar, its position in the room, and your interactions with objects in the room. If you create an account, you can store custom avatars and their names.

We receive data about the virtual objects and avatars in a room in order to share that data with others in the room, but we don’t monitor the individual objects that are posted in a room. Users have the ability to permanently pin objects to a room, which will store them in the room until they’re deleted. Unpinned files are deleted from Mozilla’s servers after 72 hours.

We do collect basic metrics about how many rooms are being created and how many users are in those rooms, but we don’t tie that data to specific rooms or users. What we don’t do is collect or store any data without the user's explicit consent.

Hubs versus Hubs Cloud

Hubs Cloud owners have the capability to implement additional server-side analytics. We provide Hubs Cloud instances with their own versions of Hubs, with minimal data collection and no user monitoring, which they can then modify to suit their needs. Unfortunately, this means that we can’t make any guarantees about what individual Hubs Cloud instances do, so you’ll need to consult with the instance owner if you have any privacy concerns.

Our promise to you

We will never perform user monitoring or deep tracking, particularly using VR data sources like gaze-tracking. We will continue to minimize the personal data we collect, and when we do need to collect data, we will invest in privacy preserving solutions like differential privacy. For full details, see our privacy policy. Hubs is an open source project–to contribute to Hubs, visit

The Rust Programming Language BlogCall for 2021 Roadmap Blogs Ending Soon

We will be closing the collection of blog posts on October 5th. As a reminder, we plan to close the survey on September 24th, later this week.

If you haven't written a post yet, read the initial announcement.

Looking forward, we are expecting the following:

  • Roadmap RFC proposed by mid-November
  • Roadmap RFC merged by mid-December

We look forward to reading your posts!

Cameron KaiserTenFourFox FPR27 available

TenFourFox Feature Parity Release 27 final is now available for testing (downloads, hashes, release notes). Unfortunately, I have thus far been unable to solve issue 621 regarding the crashes on LinkedIn, so to avoid drive-by crashes, scripts are now globally disabled on LinkedIn until I can (no loss since it doesn't work anyway). If you need them on for some reason, create a pref tenfourfox.troublesome-js.allow and set it to true. I will keep working on this for FPR28 to see if I can at least come up with a better wallpaper, though keep in mind that even if I repair the crash it may still not actually work anyway. There are otherwise no new changes since the beta except for outstanding security updates, and it will go live Monday evening Pacific assuming no new issues.

For our struggling Intel friends, if you are using Firefox on 10.9 through 10.11, Firefox ESR 78 is officially your last port of call, and support for these versions of the operating system will end by July 2021, when support for 78ESR does. The Intel version of TenFourFox may run on these machines, though it will be rather less advanced, and of course there is no official support for any Intel build of TenFourFox.

Firefox NightlyThese Weeks in Firefox: Issue 79


  • We’re testing some variations on the Picture-in-Picture toggle
    • An animated GIF shows a Picture-in-Picture toggle being moused over. When the mouse reaches the toggle, it extends, showing the text “Watch in Picture-in-Picture”


  • Camera and microphone global mutes have landed, but are being held to Nightly
    • The WebRTC sharing indicator shows microphone, camera, and minimize buttons. The microphone button shows that it is muted.

  • Urlbar Design Update 2 is live in Nightly. Access “search mode” from the refreshed one-off buttons, including one-offs for bookmarks, history, and tabs. This feature is targeting Firefox 82. Please file bugs against Bug 1644572!

Friends of the Firefox team


  • Welcome mtigley and daisuke to the Firefox Desktop team!

Resolved bugs (excluding employees)

Fixed more than one bug

  • manas

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Shane landed a patch to make sure that Firefox double-checks the version compatibility of installed langpacks and disables any that are not strictly compatible with the currently running Firefox version (Bug 1646016). This was likely a major cause of some YSOD (yellow screen of death) issues that were originally triggered by an issue on the AMO side.


WebExtensions Framework
  • Matt Woodrow fixed a webRequest API regression that prevented pages with multipart/x-mixed-replace content from finishing loading when extensions using webRequest blocking listeners (e.g. uBlock Origin) are installed (Fixed in Bug 1638422, originally regressed by Bug 1600211)


WebExtensions API
  • As part of fission-related work on the extensions framework and APIs, Tomislav landed some changes needed to make the browser.tabs.captureTab API method work with Fission iframes (Bug 1636508)


Sync and Storage
  • 98% of our sync storage nodes have been migrated over to the new Rust based sync storage service, aka “Durable Sync”.
  • JR Conlin is working on implementing a sync quota; we’ll limit users to 2GB per sync collection (i.e. bookmarks, tabs, history, etc.) and plan to roll this out in late September.


  • Fission Nightly experiment is tentatively targeted for Nightly 83

Installer & Updater

  • Mhowell and Nalexander are researching how to move forward with a Gecko-based background update agent. Work will continue on this effort through the end of the year.
  • Bytesized has a patch open to add telemetry to track windows verified app settings to help us better understand barriers to installation for Win10 users.


  • Sonia has continued work on enabling rules that were previously disabled when *.xul files moved to *.xhtml, with toolkit and accessible landing in the last week.

Password Manager

PDFs & Printing

  • Beta uplifts are complete as of Thursday
  • QA has been looking over the feature and the old print UI on beta and haven’t found any blockers for backing out our latest uplifts
  • Go/no-go decision to be made on Friday, Sept 11



  • The toggle variation experiment is now live! We should hopefully have some data to help us make a selection on which toggle to proceed with soon.
      • Default = -1
      • Mode 1 = 1
      • Mode 2 = 2
      • “right” = right side (default)
      • “left” = left side
    • (only affects Mode 2)
      • true – the user has used Picture-in-Picture before in 80+
      • false (default) – the user has not used Picture-in-Picture before in 80+
  • MSU students are working on improving Picture-in-Picture! Here’s the metabug.

Search and Navigation

  • Cleanup the search service after modern configuration shipped – Bug 1619922
    • Legacy search configuration code has been removed – Bug 1619926, Bug 1642990
    • Work is ongoing to improve some of the architecture of the search service and should be complete in the 82 cycle.
  • Consolidation of search aliases and bookmark keywords – Bug 1650874
    • Internal search keywords are now shown in about:preferences#search – Bug 1658713
    • WIP – Initial implementation of user defined search engines – Bug 1106626
Address Bar
  • Urlbar Design Update 2
    • Behavior change: Left/Right keys on one-off buttons move the caret rather than trapping the user in one-off buttons – Bug 1632318
    • Improvement: Some restriction characters (*, %, ^) are converted to search mode when a space is typed after them to restrict results – Bug 1658964

User Journey


  • mconley is working on adding Task Tray icons on Windows to indicate that devices are being shared
    • We have something similar on macOS already

Cameron KaiserGoogle, nobody asked to make the Blogger interface permanent

As a followup to my previous rant on the obnoxious new Blogger "upgrade," I will grudgingly admit Blogger has done some listening. You can now embed images and links similarly to the way you used to, which restores some missing features and erases at least a part of my prior objections. But not the major one, because usability is still a rotting elephant's placenta. I remain an inveterate user of the HTML blog view and yet the HTML editor still thinks it knows better than you how to format your code and what tags you should use, you can't turn it off and you can't make it faster. And I remain unclear what the point of all this was because there is little improvement in functionality except mobile previewing.

Naturally, Google has removed the "return to legacy Blogger" button, but you can still get around that at least for the time being. On your main Blogger posts screen you will note a long multidigit number in the URL (perhaps that's why they're trying to hide URLs in Chrome). That's your blog ID. Copy that number and paste it in where the XXX is in this URL template (all one line):

Bookmark it and you’re welcome. I look forward to some clever person making a Firefox extension to do this very thing very soon, and if you make one, post it in the comments.
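If you’d rather script the extraction of that blog ID, something like this works (the example dashboard URL shape is an assumption):

```python
import re

def blog_id(dashboard_url: str):
    # The blog ID is the long run of digits in the posts-dashboard URL
    match = re.search(r"\d{8,}", dashboard_url)
    return match.group(0) if match else None

# Hypothetical dashboard URL for illustration:
blog_id("https://www.blogger.com/blog/posts/1234567890123456789")
```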

Daniel StenbergMy first 15,000 curl commits

I’ve long maintained that persistence is one of the main qualities you need in order to succeed with your (software) project. In order to manage to ship a product that truly conquers the world. By continuously and never-ending keeping at it: polishing away flaws and adding good features. On and on and on.

Today marks the day when I landed my 15,000th commit in the master branch in curl’s git repository – and we don’t do merge commits so this number doesn’t include such. Funnily enough, GitHub can’t count and shows a marginally lower number.

This is of course a totally meaningless number and I’m only mentioning it here because it’s even, and an opportunity for me to celebrate something. To cross off an imaginary milestone. It is not even a year since we passed 25,000 commits in total. Another meaningless number.

15,000 commits equals 57% of all commits done in curl so far and it makes me the only committer in the curl project with over 10% of the commits.

The curl git history starts on December 29 1999, so the first 19 months of commits from the early curl history are lost. 15,000 commits over this period equals a little less than 2 commits per day on average. I reached 10,000 commits in December 2011, so the latest 5,000 commits were done at a slower pace than the first 10,000.

I estimate that I’ve spent more than 15,000 hours working on curl over this period, so it would mean that I spend more than one hour of “curl time” per commit on average. According to gitstats, these 15,000 commits were done on 4,271 different days.

We also have other curl repositories that aren’t included in this commit number. For example, I have done over 4,400 commits in curl’s website repository.

With these my first 15,000 commits I’ve added 627,000 lines and removed 425,000, making an average commit adding 42 and removing 28 lines. (Feels pretty big but I figure the really large ones skew the average.)

The largest time gap ever between two of my commits in the curl tree is almost 35 days back in June 2000. If we limit the check to “modern times”, as in 2010 or later, there was a 19 day gap in July 2015. I do take vacations, but I usually keep up with the most important curl development even during those.

On average it is one commit done by me every 12.1 hours. Every 15.9 hours since 2010.
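Those rates are easy to sanity-check with a back-of-the-envelope calculation (the end date here is an assumption, roughly when this post was written in mid-September 2020):

```python
from datetime import date

commits = 15_000
# Days from the start of the curl git history to (approximately) this post
span_days = (date(2020, 9, 16) - date(1999, 12, 29)).days

per_day = commits / span_days               # ≈ 1.98, "a little less than 2"
hours_per_commit = span_days * 24 / commits  # ≈ 12.1 hours between commits
```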

I’ve been working full time on curl since early 2019; up until then it was a spare time project only for me. Development with pull requests, CI, and things that verify a lot of the work before merge is a recent thing, so one explanation for the slightly higher commit frequency in the past is that we then needed more “oops” commits to rectify mistakes. These days, most of those are done in the PR branches that are squashed when subsequently merged into master. Fewer commits with higher quality.

curl committers

We have merged commits authored by over 833 authors into the curl master repository. Out of these, 537 landed only a single commit (so far).

We are 48 authors who ever wrote 10 or more commits within the same year. 20 of us committed that amount of commits during more than one year.

We are 9 authors who wrote more than 1% of the commits each.

We are 5 authors who ever wrote 10 or more commits within the same year in 10 or more years.

Our second-most committer (by commit count) has not merged a commit for over seven years.

To reach curl’s top-100 committers list right now, you only need to land 6 commits.

can I keep it up?

I intend to stick around in the curl project going forward as well. If things just are this great and life remains fine, I hope that I will be maintaining roughly this commit speed for years to come. My prediction is therefore that it will take longer than another twenty years to reach 30,000 commits.

I’ve worked on curl and its precursors for almost twenty-four years. In another twenty-four years I will be well into my retirement years. At some point I will probably not be fit to shoulder this job anymore!

I have never planned long ahead before and I won’t start now. I will instead keep focused on keeping curl top quality, an exemplary open source project and a welcoming environment for newcomers and oldies alike. I will continue to make sure the project is able to function totally independently if I’m present or not.

The 15,000th commit?

So what exactly did I change in the project when I merged my 15,000th ever change into the branch?

It was a pretty boring and non-spectacular one. I removed a document (RESOURCES) from the docs/ folder, as it had been a bit forgotten and was by now completely outdated. There’s a much better page for this provided on the web site:


I of course asked my Twitter friends a few days ago how this occasion is best celebrated:

I showed these results to my wife. She approved.

Mike TaylorUpcoming US Holidays (for Mike Taylor)

This is a copy of the email I sent a few days ago to all of Mozilla. I just realized that I’m possibly not the only person with a mail filter to auto-delete company-wide “Upcoming $COUNTRY Holidays” emails, so I’m reposting here.

Maybe I’ll blog later about my experience at Mozilla.

Subject: Upcoming US Holidays (for Mike Taylor)

Howdy all,

This is my last full week at Mozilla, with my last day being Monday, September 21. It’s been just over 7 years since I joined (some of them were really great, and others were fine, I guess).

I’m grateful to have met and worked with so many kind and smart people across the company.

I’m especially grateful for Karl Dubost inviting me to apply to Mozilla 7 years ago, and for getting to know and become friends with the people who joined our team after (Cipri, Dennis, James, Ksenia, Oana, Tom, Guillaume, Kate, et al). I believe they’ve made Firefox a significantly better browser for our users and will continue to unbreak the web.

Anyways, you can find me on the internet in all the usual places. Don’t be a stranger.

Blog: Twitter: Facebook: LinkedIn: Email: (redacted, stalkers. also it’s TOTALLY unguessable don’t even try)


– Mike Taylor Web Compat, Mozilla

The Mozilla BlogUpdate on Firefox Send and Firefox Notes

As Mozilla tightens and refines its product focus in 2020, today we are announcing the end of life for two legacy services that grew out of the Firefox Test Pilot program: Firefox Send and Firefox Notes. Both services are being decommissioned and will no longer be a part of our product family. Details and timelines are discussed below.

Firefox Send was a promising tool for encrypted file sharing. Send garnered good reach, a loyal audience, and real signs of value throughout its life.  Unfortunately, some abusive users were beginning to use Send to ship malware and conduct spear phishing attacks. This summer we took Firefox Send offline to address this challenge.

In the intervening period, as we weighed the cost of our overall portfolio and strategic focus, we made the decision not to relaunch the service. Because the service is already offline, no major changes in status are expected. You can read more here.

Firefox Notes was initially developed to experiment with new methods of encrypted data syncing. Having served that purpose, we kept the product as a little utility tool for Firefox and Android users. In early November, we will decommission the Android Notes app and its syncing service. The Firefox Notes desktop browser extension will remain available for existing installs, and we will include an option to export all notes; however, it will no longer be maintained by Mozilla and will no longer be installable. You can learn more about how to export your notes here.

Thank you for your patience as we’ve refined our product strategy and portfolio over the course of 2020. While saying goodbye is never easy, this decision allows us to sharpen our focus on experiences like Mozilla VPN, Firefox Monitor, and Firefox Private Network.

The post Update on Firefox Send and Firefox Notes appeared first on The Mozilla Blog.

Mozilla Addons BlogDownload Statistics Update

In June, we announced that we were making changes to add-on usage statistics on addons.mozilla.org (AMO). Now, we’re making a similar change to add-on download statistics. These statistics are aggregated from the AMO server logs, do not contain any personally identifiable information, and are only available to add-on developers via the Developer Hub.

Just like with usage stats, the new download stats will be less expensive to process and will be based on Firefox telemetry data. As users can opt out of telemetry reporting, the new download numbers will be generally lower than those reported from the server logs. Additionally, the download numbers are based on new telemetry introduced in Firefox 80, so they will be lower at first and increase as users update their Firefox. As before, we will only count downloads originating from AMO.

The good news is that it’ll be easier now to track attribution for downloads. The old download stats were based on a custom src parameter in the URL. The new ones will break down sources with the more standard UTM parameters, making it easier to measure the effect of social media and other online campaigns.
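For developers who want to tally their own campaign links, the UTM fields are straightforward to pull out of a URL (the example add-on URL below is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def utm_fields(url: str) -> dict:
    # Pull the standard UTM campaign parameters out of a download URL,
    # ignoring anything else (such as the legacy src parameter)
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items() if k.startswith("utm_")}

utm_fields("https://addons.mozilla.org/firefox/addon/example/"
           "?utm_source=twitter&utm_medium=social&utm_campaign=launch")
# → {'utm_source': 'twitter', 'utm_medium': 'social', 'utm_campaign': 'launch'}
```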

Here’s a preview of what the new downloads dashboard will look like:

A screenshot of the updated statistics dashboard

We expect to turn on the new downloads data on October 8. Make sure to export your current download numbers if you’re interested in preserving them.

The post Download Statistics Update appeared first on Mozilla Add-ons Blog.

Mozilla Privacy BlogMozilla files comments with the European Commission on safeguarding democracy in the digital age

As in many parts of the world, EU lawmakers are eager to get greater insight into the ways in which digital technologies and online discourse can serve to both enhance and create friction in democratic processes. In the context of its recent ‘European Democracy Action Plan’ (EDAP), we’ve just filed comments with the European Commission, with the aim of informing thoughtful and effective EU policy responses to key issues surrounding democracy and digital technologies.

Our submission complements our recent EU Digital Services Act filing, and focuses on four key areas:

  • The future of the EU Code of Practice on Disinformation: Mozilla was a founding signatory of the Code of Practice, and we recognise it as a considerable step forward. However, in policy terms, the Code is a starting point. There is more work to be done, both to ensure that the Code’s commitments are properly implemented, and to ensure that it is situated within a more coherent general EU policy approach to platform responsibility.
  • Meaningful transparency to address disinformation: To ensure transparency and to facilitate accountability in the effort to address the impact and spread of disinformation online, the European Commission should consider a mandate for broad disclosure of advertising through publicly available ad archive APIs.
  • Developing a meaningful problem definition for microtargeting: We welcome the Commission’s consideration of the role of microtargeting with respect to political advertising and its contribution to the disinformation problem. The EDAP provides an opportunity to gather the systematic insight that is a prerequisite for thoughtful policy responses to limit the harms in microtargeting of political content.
  • Addressing disinformation on messaging apps while maintaining trust and security: In its endeavours to address misinformation on messaging applications, the Commission should refrain from any interventions that would weaken encryption. Rather, its focus should be on enhancing digital literacy; encouraging responsive product design; and enhancing redress mechanisms.

A high-level overview of our filing can be read here, and the substantive questionnaire response can be read here.

We look forward to working alongside policymakers in the European Commission to give practical meaning to the political ambition expressed in the EDAP and the EU Code of Practice on Disinformation. This, as well as our work on the EU Digital Services Act will be a key focus of our public policy engagement in Europe in the coming months.

The post Mozilla files comments with the European Commission on safeguarding democracy in the digital age appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 356

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters this week.

Learn Standard Rust
Learn More Rust
Project Updates

Call for Blog Posts

The Rust Core Team wants input from the community! If you haven't already, read the official blog and submit a blog post - it will show up here! Here are the wonderful submissions since the call for blog posts:

Crate of the Week

This week's crate is gitoxide, an idiomatic, modern, lean, fast, safe & pure Rust implementation of git.

Thanks again to Vlad Frolov for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

336 pull requests were merged in the last week

Rust Compiler Performance Triage

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When you have a lifetime <'a> on a struct, that lifetime denotes references to values stored outside of the struct. If you try to store a reference that points inside the struct rather than outside, you will run into a compiler error when the compiler notices you lied to it.
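The quote can be illustrated with a small sketch (the names `Holder` and `label` are made up for this example, not taken from any project): the lifetime `'a` on the struct ties its field to a value that lives outside the struct.

```rust
// The lifetime 'a on Holder denotes a borrow of data stored *outside*
// the struct, exactly as the quote describes.
struct Holder<'a> {
    name: &'a str,
}

fn label(h: &Holder<'_>) -> String {
    format!("holding: {}", h.name)
}

fn main() {
    let owned = String::from("outside"); // lives outside the struct
    let h = Holder { name: &owned };     // fine: the borrow points outside
    println!("{}", label(&h));

    // By contrast, a field that tried to borrow *another field of the
    // same struct* (a self-referential struct) would be rejected by the
    // borrow checker, since the reference would be invalidated whenever
    // the struct moves.
}
```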

Thanks to Tom Phinney for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Firefox FrontierMake Firefox your default browser on iOS (finally!)

With iOS 14, Apple users will finally have the power to choose any default browser on iPhones and iPads. And now that there’s a choice, make it count with Firefox! … Read more

The post Make Firefox your default browser on iOS (finally!) appeared first on The Firefox Frontier.

Mozilla Cloud Services BlogThe Future of Sync


There’s a new Sync back-end! The past year or so has brought a lot of changes, and some of those changes broke things. Our group reorganized, we moved from IRC to Matrix, and a few other things caught us off guard and needed to be addressed. None of that should be an excuse for why we kinda stopped keeping you up to date about Sync. We did write a lot about what we were going to do, but we forgot to share it outside of Mozilla. Again, not an excuse, but just letting you know why we felt like we had talked about all of this, even though we absolutely had not.

So, allow me to introduce you to the four-person “Services Engineering” team, whose job it is to keep a bunch of back-end services running, including the Push Notifications and Sync back-ends, plus a few other miscellaneous services.

For now, let’s focus on Sync.

Current Situation

Sync probably didn’t do what you thought it did.

Sync’s job is to make sure that the bookmarks, passwords, history, extensions and other bits you want to synchronize between one copy of Firefox get to your other copies of Firefox. Those different copies of Firefox could be different profiles, or be on different devices. Not all of your copies of Firefox may be online or accessible all the time, though, so what Sync has to do is keep a temporary, encrypted copy on some back-end servers which it can use to coordinate later. Since it’s encrypted, Mozilla can’t read that data; we just know it belongs to you. A side effect is that adding a new instance of Firefox (by installing and signing in on a new device, or uninstalling and reinstalling on the same device, or creating a new Firefox profile you then sign in to) just adds another copy of Firefox to Sync’s list of things to synchronize. It might be a bit confusing, but this is true even if you only had one copy of Firefox. If you “lost” a copy of Firefox because you uninstalled it, or your computer’s disk crashed, or your dog buried your phone in the backyard, then when you re-installed Firefox, you added another copy of Firefox to your account. Sync would then synchronize your data to that new copy. Sync would just never get an update from the “old” version of Firefox you lost; it would try to rebuild your data from the temporary echoes of the encrypted data that were still on our servers.

That’s great for short-term things, but kinda terrible if you, say, shut down Firefox while you go on walk-about only to come back months later to a bad hard drive. You reinstall, try to set up Sync, and find that, due to an unexpected Sync server crash, we wound up losing your data echoes.

That was part of the problem. If we lost a server, we’d basically tell all the copies of Firefox that were using that server, “Whoops, go talk to this new server,” and your copy of Firefox would then re-upload what it had. Sometimes this might result in you losing a line of history, sometimes you’d get a duplicate bookmark, but generally, Sync would tend to recover OK and you’d be none the wiser. If that happened when there were no other active copies of Firefox for your account, however, all bets were off and you’d probably lose everything, since there were no other copies of your data anywhere.

A New Hope Service

A lot of folks expected Sync to be a backup service. The good news is, now it is one. Sync is more reliable now. We use a distributed database to store your data securely, so we no longer lose databases (or your data echoes). There’s a lot of benefit for us as well. We were able to rewrite the service in Rust, a more efficient programming language that lets us run on fewer machines.

Of course, there are a few challenges we face when standing up a service like this.

Sync needs to run with new versions of Firefox, as well as older ones. In some cases, very old ones, which had some interesting “quirks”. It needs to continue to be at least as secure as before while hopefully giving devs a chance to fix some of the existing weirdness as well as add new features. Oh, and switching folks to the new service should be as transparent as possible.

It’s a long, complicated list of requirements.

How we got here

First off, we had to decide a few things, like which data store we were going to use. We picked Google Cloud’s Spanner database for its own pile of reasons, some technical, some non-technical. Spanner provides a SQL-like database that should be able to handle what we need to do, which means we didn’t have to radically change existing MySQL-based code. This also means we can provide some level of abstraction, allowing those who want to self-host to do so without radically altering internal data structures. In addition, Spanner provides us an overall cost savings in running our servers.

We then picked Rust as our development platform and Actix as the web base because we had pretty good experience with moving other Python projects to them. It’s not been magically easy, and there have been plenty of pain points we’ve hit, but by-and-large we’re confident in the code and it’s proven to be easy enough to work with. Rust has also allowed us to reduce the number of servers we have to run in order to provide the service at the scale we need to offer it, which also helps us reduce costs.

For folks interested in following our progress, we’re working with the syncstorage-rs repo on Github. We also are tracking a bunch of the other issues at the services engineering repo.

Because Rust is ever evolving, often massively useful features roll out on different schedules. For instance, we HEAVILY use the async/await code, which landed in late 2019 and is taking a bit to percolate through all the libraries. As those libraries update, we’re going to need to rebuild bits of our server to take advantage of them.

How you can help

Right now, all we can ask is some patience, and possibly help with some of our Good First Bugs. Google released a “stand-alone” Spanner emulator that may help you work with our new sync server if you want to play with that part, or you can help us work on the traditional, stand-alone MySQL side. That should let you start experimenting with the server and help us find bugs and issues.

To be honest, our initial focus was more on the Spanner integration work than the stand-alone SQL side. We have a number of existing unit tests that exercise both halves and there are a few of us who are very vocal about making sure we support stand-alone SQL databases, but we can use your help testing in more “real world” environments.

For now, folks interested in running the old Python 2.7 syncserver still can, while we continue to improve stand-alone support inside of syncstorage-rs.

Some folks who run stand-alone servers are well aware that Python 2.7 has officially reached “end of life”, meaning no further updates or support are coming from the Python developers; however, we have a bit of leeway here. The PyPy group has said that they plan on offering some support for Python 2.7 for a while longer. Unfortunately, the libraries that we use continue to progress or get abandoned in favor of Python 3. We’re trying to lock down versions as much as possible, but it’s not sustainable.

We finally have Rust-based sync storage working with our durable back end, running and hosting users. Our goal now is to focus on the “stand-alone” version, and we’re making fairly good progress.

I’m sorry that things have been too quiet here. While we’ve been putting together lots of internal documents explaining how we’re going to do this move, we’ve not shared them publicly. Hopefully we can clean them up and do that.

We’re excited to offer a new version of Sync and look forward to telling you more about what’s coming up. Stay tuned!

Mozilla Privacy BlogMozilla announces partnership to explore new technology ideas in the Africa Region

Mozilla and AfriLabs – a Pan-African community and connector of African tech hubs with over 225 technology innovation hubs spread across 47 countries – have partnered to convene a series of roundtable discussions with African startups, entrepreneurs, developers and innovators to better understand the tech ecosystem and identify new product ideas – to spur the next generation of open innovation.

This strategic partnership will help develop more relevant, sustainable support for African innovators and entrepreneurs to build scalable, resilient products, while leveraging honest and candid discussions to identify areas of common interest. There is no shortage of innovators and creative talent across the African continent, with diverse stakeholders coming together to form new ecosystems that solve social and economic problems unique to the region.

“Mozilla is pleased to be partnering with AfriLabs to learn more about the intersection of African product needs and capacity gaps and to co-create value with local entrepreneurs,” said Alice Munyua, Director of the Africa Innovation Program.

Mozilla is committed to supporting communities of technologists by putting people first while strengthening the knowledge-base. This partnership is part of Mozilla’s efforts to reinvest within the African tech ecosystem and support local innovators with scalable ideas that have the potential to impact across the continent.

The post Mozilla announces partnership to explore new technology ideas in the Africa Region appeared first on Open Policy & Advocacy.

Mozilla Attack & DefenseInspecting Just-in-Time Compiled JavaScript

The security implications of Just-in-Time (JIT) compilers in browsers have been getting attention for the past decade, and the list of more recent resources is too long to enumerate. While it’s not the only class of flaw in a browser, it is a common one; and diving deeply into it has a higher barrier to entry than, say, a UXSS injection in the UI. This post is about lowering that barrier to entry.

If you want to understand what is happening under the hood in the JIT engine, you can read the source. But that’s kind of a tall order given that the folder js/ contains 500,000+ lines of code. Sometimes it’s easier to treat a target as a black box until you find something you want to dig into deeper. To aid in that endeavor, we’ve landed a feature in the js shell that allows you to get the assembly output of a JavaScript function the JIT has processed. Disassembly is supported with the zydis disassembly library (our in-tree version).

To use the new feature, you’ll need to run the js interpreter. You can download the jsshell for any Nightly version of Firefox from our FTP server – for example, here’s the latest Linux x64 jsshell. Helpfully, these links always point to the latest version available; historical versions can also be downloaded.

You can also build the js shell from source (which can be done separately from building Firefox, though doing the full browser build can also create the shell). If building from source, you’ll want the following in your .mozconfig to get the tools and output you want while still emulating the shell as the JavaScript engine is released to users:

ac_add_options --enable-application=js
ac_add_options --enable-js-shell
ac_add_options --enable-jitspew
ac_add_options --disable-debug
ac_add_options --enable-optimize

# If you want to experiment with the debug and optimize flags,
# you can build Firefox to different object directories
# (and avoid an entire recompilation)
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-nodebug-opt
# mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-debug-noopt

After building the shell or Firefox, fire up `obj-dir/dist/bin/js[.exe]` and try the following script:

function add(x, y) { x = 0+x; y = 0+y; return x+y; }
for(i=0; i<500; i++) { add(2, i); }

You’ll be greeted by an initial line indicating which backend is being used. The possible values and their meanings are:

  • Wasm – A WebAssembly function [0]
  • Asmjs – An asm.js module or exported function
  • Baseline – indicates the Baseline JIT, a first-pass JIT Engine that collects type information (that can be used by Ion during a subsequent compilation).
  • Ion – indicates IonMonkey, a powerful optimizing JIT that performs aggressive compiler optimizations at the cost of additional compile time.

[0] The WASM function itself might be Baseline WASM or compiled with an optimizing compiler (Cranelift on Nightly; Ion otherwise) – the output doesn’t say outright which compiler produced the assembly, but telling baseline code apart from optimized code becomes easier once you’ve looked at the assembly output a few times.

After running a function 100 times, we will trigger the Baseline compiler; after 1000 times we will trigger Ion, and after 100,000 times the full, more expensive, Ion compilation.

For more information about the differences and internals of the JIT Engines, we can point to the following articles:

Let’s dive into the output we just generated.  Here’s the output of the above script:

; backend=baseline
00000000   jmp 0x0000000000000028                           
00000005   mov $0x7F8A23923000, %rcx                           
0000000F   movq 0x170(%rcx), %rcx                           
00000016   movq %rsp, 0xD0(%rcx)                           
0000001D   movq $0x00, 0xD8(%rcx)                                 
00000028   push %rbp                                 

00000029   mov %rsp, %rbp                |                 
0000002C   sub $0x48, %rsp               | Allocating & initializing                  
00000030   movl $0x00, -0x10(%rbp)       | BaselineFrame structure on                         
00000037   movq 0x18(%rbp), %rcx         | stack.                        
0000003B   and $-0x04, %rcx              | (BaselineCompilerCodeGen::
0000003F   movq 0x28(%rcx), %rcx         |        emitInitFrameFields)                     
00000043   movq %rcx, -0x30(%rbp)        |                         
00000047   mov $0x7F8A239237E0, %r11     |                     
00000051   cmpq %rsp, (%r11)             |             
00000054   jbe 0x000000000000006C        |                  
0000005A   mov %rbp, %rbx                | Stackoverflow check         
0000005D   sub $0x48, %rbx               | (BaselineCodeGen::
00000061   push %rbx                     |              emitStackCheck)          
00000062   push $0x5821                  |        
00000067   call 0xFFFFFFFFFFFE1680       |                   

0000006C   mov $0x7F8A226CE0D8, %r11                           
00000076   addq $0x01, (%r11)                           

0000007A   mov $0x7F8A227F6E00, %rax     |                     
00000084   movl 0xC0(%rax), %ecx         |                 
0000008A   add $0x01, %ecx               |           
0000008D   movl %ecx, 0xC0(%rax)         |                 
00000093   cmp $0x3E8, %ecx              |            
00000099   jl 0x00000000000000CC         | Check if we should tier up to
0000009F   movq 0x88(%rax), %rax         | Ion code. 0x3E8 (1000) is the
000000A6   cmp $0x02, %rax               | threshold. After that check,          
000000AA   jz 0x00000000000000CC         | it checks 'are we already
000000B0   cmp $0x01, %rax               | compiling' and 'is Ion
000000B4   jz 0x00000000000000CC         | compilation impossible'                
000000BA   mov %rbp, %rcx                |          
000000BD   sub $0x48, %rcx               |           
000000C1   push %rcx                     |     
000000C2   push $0x5821                  |        
000000C7   call 0xFFFFFFFFFFFE34B0       |                   

000000CC   movq 0x28(%rbp), %rcx         |                 
000000D0   mov $0x7F8A227F6ED0, %r11     |                     
000000DA   movq (%r11), %rdi             |             
000000DD   callq (%rdi)                  |  
000000DF   movq 0x30(%rbp), %rcx         | Type Inference Type Monitors
000000E3   mov $0x7F8A227F6EE0, %r11     | for |this| and each arg.    
000000ED   movq (%r11), %rdi             | (This overhead is one of the           
000000F0   callq (%rdi)                  |  reasons we're doing
000000F2   movq 0x38(%rbp), %rcx         |  WARP - see below.)                                       
000000F6   mov $0x7F8A227F6EF0, %r11     |                     
00000100   movq (%r11), %rdi             |             
00000103   callq (%rdi)                  |        

00000105   movq 0x30(%rbp), %rbx         |                 
00000109   mov $0xFFF8800000000000, %rcx |                         
00000113   mov $0x7F8A227F6F00, %r11     | Load Int32Value(0) + arg1 and                    
0000011D   movq (%r11), %rdi             | calling an Inline Cache stub            
00000120   callq (%rdi)                  |        
00000122   movq %rcx, 0x30(%rbp)         |                 

00000126   movq 0x38(%rbp), %rbx         |                 
0000012A   mov $0xFFF8800000000000, %rcx | Load Int32Value(0) + arg2 and                        
00000134   mov $0x7F8A227F6F10, %r11     | calling an Inline Cache stub                               
0000013E   movq (%r11), %rdi             |             
00000141   callq (%rdi)                  |        
00000143   movq %rcx, 0x38(%rbp)         |                  

00000147   movq 0x38(%rbp), %rbx         |                 
0000014B   movq 0x30(%rbp), %rcx         |                 
0000014F   mov $0x7F8A227F6F20, %r11     |                     
00000159   movq (%r11), %rdi             | Final Add Inline Cache call
0000015C   callq (%rdi)                  | followed by epilogue code and
0000015E   jmp 0x0000000000000163        | return       
00000163   mov %rbp, %rsp                |          
00000166   pop %rbp                      |    
00000167   jmp 0x0000000000000171        |                  
0000016C   jmp 0xFFFFFFFFFFFE69E0                           
00000171   ret                           
00000172   ud2                           

So that’s the Baseline code. It’s the simpler JIT in Firefox. What about IonMonkey – its faster, more aggressive big brother?

If we preface our script with setJitCompilerOption("ion.warmup.trigger", 4); then we will induce the Ion compiler to trigger earlier instead of the aforementioned 1000 invocations. You can also set setJitCompilerOption("ion.full.warmup.trigger", 4); to trigger the more aggressive tier for Ion compilation that otherwise kicks in after 100,000 invocations. After triggering the ‘full’ layer, the output will look like:

; backend=ion
00000000    movq 0x20(%rsp), %rax         |
00000005    shr $0x2F, %rax               |
00000009    cmp $0x1FFF3, %eax            |
0000000E    jnz 0x0000000000000078        |
00000014    movq 0x28(%rsp), %rax         |
00000019    shr $0x2F, %rax               | Type Guards
0000001D    cmp $0x1FFF1, %eax            | for this variable,
00000022    jnz 0x0000000000000078        | arg1, & arg2
00000028    movq 0x30(%rsp), %rax         | 
0000002D    shr $0x2F, %rax               |
00000031    cmp $0x1FFF1, %eax            |
00000036    jnz 0x0000000000000078        |
0000003C    jmp 0x0000000000000041        |

00000041    movl 0x28(%rsp), %eax         |
00000045    movl 0x30(%rsp), %ecx         | Addition
00000049    add %ecx, %eax                |

0000004B    jo 0x000000000000007F         | Overflow Check

00000051    mov $0xFFF8800000000000, %rcx | Box int32 into
0000005B    or %rax, %rcx                 | Int32Value
0000005E    ret
0000005F    nop
00000060    nop
00000061    nop
00000062    nop
00000063    nop
00000064    nop
00000065    nop
00000066    nop
00000067    mov $0x7F8A23903FC0, %r11     |
00000071    push %r11                     |
00000073    jmp 0xFFFFFFFFFFFDED40        |
00000078    push $0x00                    |
0000007A    jmp 0x000000000000008D        |
0000007F    sub %ecx, %eax                | Out-of-line
00000081    jmp 0x0000000000000086        | error handling
00000086    push $0x0D                    | code
00000088    jmp 0x000000000000008D        |
0000008D    push $0x00                    |
0000008F    jmp 0xFFFFFFFFFFFDEC60        |
00000094    ud2                           |

There are some other things worth noting.

You can control the behavior of the JITs using environment variables, such as JIT_OPTION_fullDebugChecks=false (this will avoid running all the debug checks even in the debug build.)  The full list of JIT Options with documentation is available in JitOptions.cpp.

There are also a variety of command-line flags that can be used in place of environment variables or setJitCompilerOption. For instance, --baseline-eager and --ion-eager will trigger JIT compilation immediately instead of requiring many invocations to warm up. (--ion-eager triggers ‘full’ compilation, so avoid it if you want the non-full behavior.) --no-threads or --ion-offthread-compile=off will disable off-thread compilation, which otherwise can make it harder to write reliable tests because it adds non-determinism. --no-threads turns off all the background threads and implies --ion-offthread-compile=off.

Finally, we have a new in-development frontend for Ion: WarpBuilder. You can learn more about WarpBuilder over in the spidermonkey newsletter or the Bugzilla bug. Enabling warp (by passing --warp to the js shell executable) significantly reduces the assembly generated, partly because we’re simplifying how type information is collected and updated.

If you’ve got other tricks or techniques you use to help you navigate our JIT(s), be sure to reply to our tweet so others can find them!

Mozilla Addons BlogExtensions in Firefox 81

In Firefox 81, we have improved error messages for extension developers and updated user-facing notifications to provide more information on how extensions modify their settings.

For developers, the menus.create API now provides more meaningful error messages when supplying invalid match or URL patterns. These updated messages should make it easier for developers to quickly identify and fix errors. In addition, webNavigation.getAllFrames and webNavigation.getFrame will return a promise resolved with null if the tab is discarded, which is how these APIs behave in Chrome.
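The discarded-tab behavior can be handled with a small sketch like this (framesForTab is a hypothetical helper name, not part of the WebExtensions API):

```javascript
// Hedged sketch: since Firefox 81, webNavigation.getAllFrames() resolves
// with null for a discarded tab instead of rejecting. Normalizing that
// to an empty array keeps downstream code simple.
async function framesForTab(tabId) {
  const frames = await browser.webNavigation.getAllFrames({ tabId });
  return frames === null ? [] : frames;
}
```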

For users, we’ve added a notification when an add-on is controlling the “Ask to save logins and passwords for websites” setting, using the settings API. Users can see this notification in their preferences or by navigating to about:preferences#privacy.

Thank you Deepika Karanji for improving the error messages, and our WebExtensions and security engineering teams for making these changes possible. We’re looking forward to seeing what is next for Firefox 82.

The post Extensions in Firefox 81 appeared first on Mozilla Add-ons Blog.

Mozilla Privacy BlogMozilla applauds TRAI for maintaining the status quo on OTT regulation, upholding a key aspect of net neutrality in India

Mozilla applauds the Telecom Regulatory Authority of India (TRAI) for its decision to maintain the existing regulatory framework for OTT services in India. The regulation of OTT services sparked the fight for net neutrality in India in 2015, leading to over a million Indians asking TRAI to #SaveTheInternet and over time becoming one of the most successful grassroots campaigns in the history of digital activism. Mozilla’s CEO, Mitchell Baker, wrote an open letter to Prime Minister Modi at the time stating: “We stand firm in the belief that all users should be able to experience the full diversity of the Web. For this to be possible, Internet Service Providers must treat all content transmitted over the Internet equally, regardless of the sender or the receiver.”

Since then, as we have stated in public consultations in both 2015 and 2019, we believe that imposing a new uniform regulatory framework for OTT services, akin to how telecom operators are governed, would irredeemably harm the internet ecosystem in India. It would create legal uncertainty, chill innovation, undermine security best practices, and eventually, hurt the promise of Digital India. TRAI’s thoughtful and considered approach to the topic sets an example for regulators across the world and helps mitigate many of these concerns. It is a historic step for a country that already has among the strongest net neutrality regulations in the world. We look forward to continuing to work with TRAI to create a progressive regulatory framework for the internet ecosystem in India.

The post Mozilla applauds TRAI for maintaining the status quo on OTT regulation, upholding a key aspect of net neutrality in India appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogA call for contributors from the WG-prioritization team

Are you looking for opportunities to contribute to the Rust community? Have some spare time to donate? And maybe learn something interesting along the way?

The WG-prioritization could be the right place for you: we are looking for new contributors!

What is the WG-prioritization?

The Prioritization WG is a compiler Working Group dedicated to handling the most important bugs found in the Rust compiler (rustc), to ensure that they are resolved. We stand at the frontline of the GitHub Rust issue tracker, and our job is triage: mainly deciding which bugs are critical (potential release blockers) and preparing the weekly agenda for the Compiler Team with the most pressing issues to be taken care of.

Here is a bit more comprehensive description. How we work is detailed on the Rust Forge.

Our tooling is mainly the triagebot, a trusty messenger that helps us by sending notifications to our Zulip stream when an issue on GitHub is labelled.

We also have a repository with some issues and meta-issues, where we basically note down how we would like our workflow to evolve. Contributions to these issues are welcome, but a bit more context about the workflow of this Working Group is probably necessary.

Documentation is also a fundamental part of the onboarding package that we provide to newcomers. As we basically "organize and sort stuff", a lot happens without writing a single line of code, but rather by applying procedures to optimize triaging and issue prioritization.

This requires our workflow to be as efficient and well documented as possible. As such, we are always open to contributions to clarify the documentation (and fresh eyeballs are especially precious for that!).

The typical week of a WG-prioritization member

Our week starts on Thursday/Friday after the Rust Compiler Team meeting (one of the cool teams that keep that beast at bay) by preparing a new agenda for the following meeting, leaving placeholders to be filled during the week.

In the following days the WG-prioritization and other teams will asynchronously monitor the issue tracker - everyone at their own pace, when time allows - trying to assign a priority to new issues. This greatly helps the compiler team to sort and prioritize their work.

If the issue priority is not immediately clear, it will be tagged with a temporary label and briefly discussed on Zulip by the WG-prioritization: is this issue critical? Is it clear? Does it need a minimal reproducible example (often abbreviated as MCVE) or, even better, a bisection to find a regression (we love contributors bisecting code)? We then assign the priority by choosing a value in a range from P-low to P-critical. The rationale behind the priority levels is detailed in our guide.

The day before the meeting the agenda is filled and handed to the Compiler Team.

Someone from the WG-Prioritization will attend the meeting and provide some support (if needed).

Rinse and repeat for the next meeting.

Everything is described in excruciating detail on Rust Forge. Feel free to have a look there to learn more. The quantity of information there can be a bit overwhelming at first (there is quite a bit of lingo we use), but things will become clearer.

How can I contribute?

  • Help with triaging compiler issues: helping keeping the issue tracker tidy is very important for any big project. Labelling and pinging people to work on MCVEs or bisection is very helpful to resolve any issue. We focus our attention on issues labelled with I-prioritize (issues that need a brief discussion before assigning a priority) but also P-critical and P-high (issues that need attention during the compiler meeting). All this is required for our next task:
  • Help with issues prioritization: keep an eye on the messages on our Zulip stream (about 10/15 issues a week) and cast a vote on what the priority should be. Analyze the issue, figure out how the release could be impacted. More votes balance the prioritization and with some experience, you will develop an instinct to prioritize issues :-)
  • Help properly summarize issues in the agenda: what is this issue about? What has already been done to establish context? Is this a regression? We add any detail that could be relevant to the Compiler Team during their meeting. These folks are busy and can use all the help they can get to grasp the context of an issue at a glance.

Ok, but can I actually contribute? I don't feel skilled enough

Yes, you can! There will always be one or more members available to explain, mentor, and clarify things. Don't be shy and do not refrain from asking questions. You will very quickly be able to give a helpful opinion in our discussions.

Everyone can contribute according to their capacity and availability. The reward is the warm feeling of doing something concrete to ensure that the Rust compiler, one of the cornerstones of the project, stays in good shape and improves continuously. Moreover, you will be exposed to a continuous stream of new bugs, and seeing how they are evaluated and managed is pretty educational.

Where do we hang out

One of the great things about Rust governance is its openness. Join our stream #t-compiler/wg-prioritization, peek at how we work and, if you want, also keep an eye on the official weekly Compiler Team meetings on #t-compiler/meetings. Have a question? Don't hesitate to open a new topic in our stream!

You can even simply hang out on our Zulip stream, see how things work, and then get involved where you feel able.

We keep a separate substream #t-compiler/wg-prioritization/alerts where all the issues nominated for discussion will receive their own topic. Subscription to this stream is optional for the members of the Working Group as it has a non-negligible volume of notifications (it is public and freely accessible anyway).

The main contact points for this Working Group are Santiago Pastorino (@Santiago Pastorino on Zulip) and Wesley Wiser (@Wesley Wiser on Zulip).

See you there!

Henri SivonenRust 2021

It is again the time of year when the Rust team is calling for blog posts as input to the next annual roadmap. This is my contribution.

The Foundation

I wish either the Rust Foundation itself or at least a sibling organization formed at the same time were domiciled in the EU. Within the EU, Germany looks like the appropriate member state.

Instead of simply treating the United States as the default jurisdiction for the Rust Foundation, I wish a look were taken at the relative benefits of other jurisdictions. The Document Foundation appears to be a precedent for Germany recognizing Free Software development as having a public benefit purpose.

Even if the main Foundation ends up in the United States, I still think a sibling organization in the EU would be worthwhile. A substantial part of the Rust community is in Europe, and in Germany specifically. Things can get problematic when the person doing the work resides in Europe but the entity with the money is in the United States. It would be good to have a Rust Foundation-ish entity that can act as a European Economic Area-based employer.

Also, being domiciled in the European Union has the benefit of access to EU money. Notably, the Eclipse Foundation is in the process of relocating from Canada to Belgium.

Technical Stuff

My technical wishes are a re-run of 2018, 2019, and 2020. Most of the text below is actual copypaste.

Promote packed_simd to std::simd

Rust has had awesome portable (i.e. cross-ISA) SIMD since 2015—first in the form of the simd crate and now in the form of the packed_simd crate. Yet, it’s still a nightly-only feature. As a developer working on a product that treats x86_64, aarch64, ARMv7+NEON, and x86 as tier-1, I wish packed_simd gets promoted to std::simd (on stable) in 2021. There now appears to be forward motion on this.

At this point, people tend to say: “SIMD is already stable.” No, not portable SIMD. What got promoted was vendor intrinsics for x86 and x86_64. This is the same non-portable feature that is available in C. Especially with Apple Silicon coming up, it’s bad if the most performant Rust code is built for x86_64 while aarch64 is left as a mere TODO item (not to mention less popular architectures). The longer Rust has vendor intrinsics on stable without portable SIMD on stable, the more the crate ecosystem becomes dependent on x86_64 intrinsics and the harder it becomes to restructure the crates to use portable SIMD where portable SIMD works and to confine vendor intrinsics only to specific operations.

Non-Nightly Benchmarking

The library support for the cargo bench feature has been in the state “basically, the design is problematic, but we haven’t had anyone work through those issues yet” since 2015. It’s a useful feature nonetheless. Like I said a year ago, the year before, and the year before that, it’s time to let go of the possibility of tweaking it for elegance and just let users use it on non-nightly Rust.

Better Integer Range Analysis for Bound Check Elision

As a developer writing performance-sensitive inner loops, I wish rustc/LLVM did better integer range analysis for bound check elision. See my Rust 2019 post.

likely() and unlikely() for Plain if Branch Prediction Hints

Also, as a developer writing performance-sensitive inner loops, I wish likely() and unlikely() were available on stable Rust. Like benchmarking, likely() and unlikely() are a simple feature that works but has stalled due to concerns about lack of perfection. Let’s have it for plain if and address match and if let once there actually is a workable design for those.


No LTS

Rust has successfully delivered on “stability without stagnation” to the point that Red Hat delivers Rust updates for RHEL on a 3-month frequency instead of Rust getting stuck for the duration of the lifecycle of a RHEL version. That is, contrary to popular belief, the “stability” part works without an LTS. At this point, doing an LTS would be a strategic blunder that would jeopardize the “without stagnation” part.

Mozilla Privacy BlogIndia’s ambitious non personal data report should put privacy first, for both individuals and communities

After almost a year’s worth of deliberation, the Kris Gopalakrishnan Committee released its draft report on non-personal data regulation in India in July 2020. The report is one of the first comprehensive articulations of how non-personal data should be regulated by any country and breaks new ground in interesting ways. While seemingly well intentioned, many of the report’s recommendations leave much to be desired in both clarity and feasibility of implementation. In Mozilla’s response to the public consultation, we have argued for a consultative and rights respecting approach to non-personal data regulation that benefits communities, individuals and businesses alike while upholding their privacy and autonomy.

We welcome the consultation, and believe the concept of non-personal data will benefit from a robust public discussion. Such a process is essential to creating a rights-respecting law compatible with the Indian Constitution and its fundamental rights of equality, liberty and privacy.

The key issues outlined in our submission are:

  • Mitigating risks to privacy: Non-personal data can also often constitute protected trade secrets, and its sharing with third parties can raise significant privacy concerns. As we’ve stated before, sales location data from e-commerce platforms, for example, can be used to draw dangerous inferences and patterns regarding caste, religion, and sexuality.
  • Clarifying community data: Likewise, the paper proposes the nebulous concept of community data while failing to adequately provide for community rights. Replacing the fundamental right to privacy with a notion of ownership akin to property, vested in the individual but easily divested by state and non-state actors, leaves individual autonomy in a precarious position.
  • Privacy is not a zero-sum construct: More broadly, the paper focuses on how to extract data for the national interest, while continuing to ignore the urgent need to protect Indians’ privacy. Instead of contemplating how to force the transfer of non-personal data for the benefit of local companies, the Indian Government should leverage India’s place in the global economy by setting forth an interoperable and rights respecting vision of data governance.
  • Passing a comprehensive data protection law: The Indian government should prioritize the passage of a strong data protection law, accompanied by reform of government surveillance. Only after the implementation of such a law, one that makes the fundamental right to privacy a reality for all Indians, should we begin to look into non-personal data.

The goal of data-driven innovation oriented towards societal benefit is a valuable one. However, any community-oriented data models must be predicated on a legal framework that secures the individual’s rights to their data, as affirmed by the Indian Constitution. As we’ve argued extensively to MeitY and the Justice Srikrishna Committee, such a law has the opportunity to build on the global standard of data protection set by Europe, and position India as a leader in internet regulation.

We look forward to engaging with the Indian government as it deliberates how to regulate non-personal data over the coming years.

Our full submission can be found here.

The post India’s ambitious non personal data report should put privacy first, for both individuals and communities appeared first on Open Policy & Advocacy.

Cameron KaiserTenFourFox FPR27b1 available (now with sticky Reader View)

TenFourFox Feature Parity Release 27 beta 1 is now available (downloads, hashes, release notes).

The big user-facing update for FPR27 is a first pass at "sticky" Reader View. I've been paying more attention to improving TenFourFox's implementation of Reader View because, especially for low-end Power Macs (and there's an argument to be made that all Power Macs are, by modern standards, low end), rendering articles in Reader View strips out extraneous elements, trackers, ads, social media, comments, etc., making them substantially lighter and faster than "full fat." Also, because the layout is simplified, there is less chance of exposing or choking on layout or JavaScript features TenFourFox currently doesn't support. However, in regular Firefox and FPR26, you have to go to a page and wait for some portion of it to render before you can enter Reader View, which is inconvenient; worse still, if you click any link in a Reader-rendered article you exit Reader View and have to manually repeat the process. This can waste a non-trivial amount of processing time.

So when I say Reader View is now "sticky," that means links you click in an article in reader mode are also rendered in reader mode, and so on, until you explicitly exit it (then things go back to default). This loads pages much faster, in some cases nearly instantaneously. In addition, to make it easier to enter reader mode in fewer steps (and on slower systems, less time waiting for the reader icon in the address bar to be clickable), you can now right click on links and automatically pop the link into Reader View in a new tab ("Open Link in New Tab, Enter Reader View").

As always this is configurable, though "sticky" mode will be the default unless a serious bug is identified: if you set tenfourfox.reader.sticky to false, the old behaviour is restored. Also, since you may be interacting differently with new tabs you open in Reader View, it uses a separate option than Preferences' "When I open a link in a new tab, switch to it immediately." Immediately switching to the newly opened Reader View tab is the default, but you can make such tabs always open in the background by setting tenfourfox.reader.sticky.tabs.loadInBackground to false also.

Do keep in mind that not every page is suitable for Reader View, even though allowing you to try to render almost any page (except for a few domains on an internal blacklist) has been the default for several versions. The good news is it won't take very long to find out, and TenFourFox's internal version of Readability is current with mainline Firefox's, so many more pages render usefully. I intend to continue further work with this because I think it really is the best way to get around our machines' unfortunate limitations and once you get spoiled by the speed it's hard to read blogs and news sites any other way. (I use it heavily on my Pixel 3 running Firefox for Android, too.)

Additionally, this version completes the under-the-hood changes to get updates from Firefox 78ESR now that 68ESR is discontinued, including new certificate and EV roots as well as security patches. Part of the security updates involved pulling a couple of our internal libraries up to current versions, yielding both better security and performance improvements, and I will probably do a couple more as part of FPR28. Accordingly, you can now select Firefox 78ESR as a user-agent string from the TenFourFox preference pane if needed as well (though the usual advice to choose as old a user-agent string as you can get away with still applies). OlgaTPark also discovered what we were missing to fix enhanced tracking protection, so if you use that feature, it should stop spuriously blocking various innocent images and stylesheets.

What is not in this release is a fix for issue 621, where logging into LinkedIn crashes due to a JavaScript bug. I don't have a proper understanding of this crash, and a couple of speculative ideas didn't pan out, but it is not PowerPC-specific or associated with the JavaScript JIT compiler, as it occurs in Intel builds as well. (If any Mozillian JS deities have a good guess why an object might get created with the wrong number of slots, feel free to comment here or on Github.) Since it won't work anyway, I may decide to temporarily blacklist LinkedIn to avoid drive-by crashes if I can't sort this out before final release, which will be on or around September 21.

Will Kahn-GreeneSocorro Engineering: Half in Review 2020 h1


2020h1 was rough. Layoffs, re-org, Berlin All Hands, Covid-19, focused on MLS for a while, then I switched back to Socorro/Tecken full time, then virtual All Hands.

It's September now and 2020h1 ended a long time ago, but I'm only just getting a chance to catch up. Some things happened in 2020h1 that are important to divulge, and we don't tell anyone about Socorro events via any other medium.

Prepare to dive in!

Read more… (15 min remaining to read)

The Firefox FrontierThe age of activism: Protect your digital security and know your rights

No matter where you have been getting your news these past few months, the rise of citizen protest and civil disobedience has captured headlines, top stories and trending topics. The … Read more

The post The age of activism: Protect your digital security and know your rights appeared first on The Firefox Frontier.

Firefox UXContent Strategy in Action on Firefox for Android

The Firefox for Android team recently celebrated an important milestone. We launched a completely overhauled Android experience that’s fast, personalized, and private by design.

Image of white Android phone with Firefox logo over a purple background.

Firefox recently launched a completely overhauled Android experience.

When I joined the mobile team six months ago as its first embedded content strategist, I quickly saw the opportunity to improve our process by codifying standards. This would help us avoid reinventing solutions so we could move faster and ultimately develop a more cohesive product for end users. Here are a few approaches I took to integrate systems thinking into our UX process.

Create documentation to streamline decision making

I had an immediate ask to write strings for several snackbars and confirmation dialogs. Dozens of these already existed in the app. They appear when you complete actions like saving a bookmark, closing a tab, or deleting browsing data.

Screenshots of a snackbar message and confirmation dialog message in the Firefox for Android app.

Snackbars and confirmation dialogs appear when you take certain actions inside the app, such as saving a bookmark or deleting your browsing data.

All I had to do was review the existing strings and follow the already-established patterns. That was easier said than done. Strings live in two XML files. Similar strings, like snackbars and dialogs, are rarely grouped together. It’s also difficult to understand the context of the interaction from an XML file.

Screenshot of the app's XML file, which contains all strings inside the Firefox for Android app.

It’s difficult to identify content patterns and inconsistencies from an XML file.

To see everything in one, more digestible place, I conducted a holistic audit of the snackbars and dialogs.

I downloaded the XML files and pulled all snackbar and dialog-related strings into a spreadsheet. I also went through the app and triggered as many of the messages as I could to add screenshots to my documentation. I audited a few competitors, too.  As the audit came together, I began to see patterns emerge.

Screenshot of spreadsheet for organizing strings for the Firefox for Android app.

Organizing and coding strings in a spreadsheet helped me identify patterns and inconsistencies.

I identified the following:

  • Inconsistencies in strings. Example: Some had terminal punctuation and others did not.
  • Inconsistencies in triggers and behaviors. Example: A snackbar should have appeared but didn’t, or should NOT have appeared but did.

I used this to create guidelines around our usage of these components. Now when a request for a snackbar or dialog comes up, I can close the loop much faster because I have documented guidelines to follow.
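As a hypothetical illustration (made-up string names and values, not actual Firefox for Android resources), the terminal-punctuation inconsistency looked something like this in the strings file:

```xml
<!-- Two similar snackbar strings: one ends with a period, the other does not. -->
<string name="bookmark_saved_snackbar">Bookmark saved.</string>
<string name="tab_closed_snackbar">Tab closed</string>
```

Seeing such pairs side by side in the audit spreadsheet made the pattern, and the fix, obvious.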

Define and standardize reusable patterns and components

Snackbars are one component of many within the app. Firefox for Android has buttons, permission prompts, error fields, modals, in-app messaging surfaces, and much more. Though the design team maintained a UI library, we didn’t have clear standards around the components themselves. This led to confusion and in some cases the same components being used in different ways.

I began to collect examples of various in-app components. I started small, tackling inconsistencies as I came across them and worked with individual designers to align our direction. After a final decision was made about a particular component, we shared back with the rest of the team. This helped to build the team’s institutional memory and improve transparency about a decision one or two people may have made.

Image of snackbar do's and don'ts with visual examples of how to properly use the component.

Example of guidance we now provide around when it’s appropriate to use a snackbar.

Note that you don’t need fancy tooling to begin auditing and aligning your components. Our team was in the middle of transitioning between tools, so I used a simple Google Slides deck with screenshots to start. It was also easy for other team members to contribute because a slide deck has a low barrier to entry.

Establish a framework for introducing new features

As we moved closer towards the product launch, we began to discuss which new features to add. This led to conversations around feature discoverability. How would users discover the features that served them best?

Content strategy advocated for a holistic approach; if we designed a solution for one feature independent of all others in the app, we could end up with a busy, overwhelming end experience that annoyed users. To help us define when, why, and how we would draw attention to in-app features, I developed a Feature Discovery Framework.

Bulleted list of 5 goals for the feature discoverability framework.

Excerpt from the Feature Discovery Framework we are piloting to provide users access to the right features at the right time.

The framework serves as an alignment tool between product owners and user experience to identify the best approach for a particular feature. It’s broken down into three steps. Each step poses a series of questions that are intended to create clarity around the feature itself.

Step 1: Understanding the feature

How does it map to user and business goals?

Step 2: Introducing the feature

How and when should the feature be surfaced?

Step 3: Defining success with data

How and when will we measure success?

After I had developed the first draft of the framework, I shared in our UX design critique for feedback. I was surprised to discover that my peers working on other products in Firefox were enthusiastic about applying the framework on their own teams. The feedback I gathered during the critique session helped me make improvements and clarify my thinking. On the mobile team, we’re now piloting the framework for future features.

Wrapping up

The words you see on a screen are the most tangible output of a content strategist’s work, but are a small sliver of what we do day-to-day. Developing documentation helps us align with our teams and move faster. Understanding user needs and business goals up front help us define what approach to take. To learn more about how we work as content strategists at Firefox, check out Driving Value as a Tiny UX Content Team.

This post was originally published on Medium.

About:CommunityWeaving Safety into the Fabric of Open Source Collaboration

At Mozilla, with over 400 staff in community-facing roles, and thousands of volunteer contributors across multiple projects: we believe that everyone deserves the right to work, contribute and convene with knowledge that their safety and well-being are at the forefront of how we operate and communicate as an organization.

In my 2017 research into the state of diversity and inclusion in open source, including qualitative interviews with over 90 community members, and a survey of over 204 open source projects, we found that while a majority of projects had  adopted a code of conduct, nearly half (47%) of community members did not trust (or doubted) its enforcement. That number jumped to 67% for those in underrepresented groups.

For mission-driven organizations like Mozilla, and others building open source into their product development workflows, I found a lack of cross-organizational strategy for enforcement: a strategy that considers the intertwined nature of open source, where staff and contributors regularly work together as teammates and colleagues.

It was clear that the success of enforcement depended on the organizational capacity to respond as a whole, and not on community managers alone. This blog post describes our journey to get there.

Why This Matters


Truly ‘open’ participation requires that everyone feel safe, supported, and empowered in their roles.

From an organizational health perspective, this is also critical to get right, as there are unique risks associated with code of conduct enforcement in open collaboration:

  • Safety – physical, sexual, and psychological safety for all involved.
  • Privacy – privacy, confidentiality, and security of data related to reports.
  • Legal – failure to recognize applicable law, or to follow required processes, resulting in potential legal liability. This includes the geographic regions where the reporter and reported individuals reside.
  • Brand – mishandling (or not handling) reports can create mistrust in organizations, and narratives beyond their ability to manage.

Where We Started (Staff)


Looking down, you can see someone's feet, on concrete next to a purple spraypainted lettering that says 'Start Here'

Photo by Jon Tyson on Unsplash

I want to first acknowledge that across Mozilla’s many projects and communities, maintainers and project leads were doing an excellent job of managing low- to moderate-risk cases, including ‘drive-by’ trolling.

That said, our processes and program were immature. Many of those same staff found themselves without the expertise, tools, and key partnerships required to manage escalations and high-risk situations. This caused stress and placed an unfair burden on those people to solve complex problems in real time. Specifically, the gaps were:

Investigative & HR skill set – Investigation requires both a mindset and a set of tactics to ensure that all relevant information is considered before making a decision. This and other skills, related to supporting employees, sit in the HR department.

Legal – Legal partnership for both product and employment issues is key to high-risk cases (in any of the previously mentioned categories) and those which may involve staff, either as the reporter or the reported. The when and how of consulting legal wasn’t yet fully clear.

Incident Response – Incident response requires timing and a clear set of steps that ensure complex decisions like a project ban are executed in such a way that the safety and privacy of all involved are at the center. This includes access to expertise and tools that help keep people safe. There was no repeatable, predictable, and visible process to plug into.

Centralized Data Tracking – There was no single, cohesive way to track HR, Legal, and community violations of the CPG across the project. This means, theoretically, that someone banned from the community could have applied for a MOSS grant or fellowship, or been invited to Mozilla’s bi-annual All Hands by another team, without that being readily flagged.

Where We Started (Community)


“Community does not pay attention to CPG, people don’t feel it will do anything.” – 2017 research into the state of diversity & inclusion in open source.

For those in our communities, 2017 research found little to no knowledge about how to report violations, and what to expect if they did. In situations perceived as urgent, contributors would look for help from multiple staff members they already had a rapport with, and/or affiliated community leadership groups like the Mozilla Reps council. Those community leaders were often heroic in their endeavors to help, but again lacked the same tools, processes, and visibility into how the organization was set up to support them.

In open source more broadly, we have a long timeline of unaddressed toxic behavior, especially from those in roles of influence. It seems fair to hypothesize that the human and financial cost of unaddressed behavior is not unlike the concerning numbers showing up in research about toxic work environments.

Where We Are Now


While this work is never done, I can say with a lot of confidence that the program we’ve built is solid, both in that it systematically addresses the majority of the gaps I’ve mentioned, and in that it is set up to continually improve.

Investments required to build this program were both practical, in that we required resources and budget, and intangible: an emotional commitment to stand shoulder-to-shoulder with people in difficult circumstances, and potentially endure the response of those for whom equality felt uncomfortable.

Over time, and as processes became more efficient, those investments have also been gradually reduced from two people working full time to only 25% of one full-time employee’s time. Even with recent layoffs at Mozilla, these programs are now lean enough to continue as is.

The CPG Program for Enforcement

To date, we’ve triaged 282 reports, consulted on countless issues related to enforcement, and fully rolled out 19 complex full project bans, among other decisions ranked according to levels on our consequence ladder. We’ve also ensured that over 873 of Mozilla’s GitHub repositories use our template, which directs to our processes.



Who uses this program? It might seem a bit odd to describe those who seek support in difficult situations as customers or users, but from the perspective of service design, thinking this way ensures we are designing with empathy and compassion, and providing value for the journey of open source collaboration.

“I felt very supported by Mozilla’s CPG process when I was being harassed. It was clear who was involved in evaluating my case, and the consequences ladder helped jump-start a conversation about what steps would be taken to address the harassment. It also helped that the team listened to and honor my requests during the process.”  – Mozilla staff member

“I am not sure how I would have handled this on my own. I am grateful that Mozilla provided support to manage and fix a CPG issues in my community”  – Mozilla community member.

Obviously I cannot be specific to protect privacy of individuals involved, but I can group ‘users’ into three groups:

People – contributors, community leaders, community-facing staff, and their managers.

Mozilla Communities & Projects – it’s hard to think of an area that has not leveraged this program in some capacity, including: Firefox, Developer Tools, SUMO, MDN, Fenix, Rust, Servo, Hubs, Add-ons, Mozfest, All Hands, Mozilla Reps, Tech Speakers, Dev Rel, MOSS, L10N, and regional communities are top of mind.

External Communities & Projects – because we’ve shared our work openly, we’ve seen adoption more broadly in the ecosystem, including the Contributor Covenant’s ‘Enforcement Guidelines’.

Policy & Standards


Answering: “How might we scale our processes, in a way that ensures quality, safety, stability, reproducibility and ultimately builds trust across the board (staff and communities)?”.

This includes continual improvement of the CPG itself. This year, after interviewing an expert on the topic of caste discrimination and its potentially negative impact on open communities, we added caste as a protected group. This year, we’ve also translated the policy into 7 more languages, for a total of 15. Finally, we added a How to Report page, including best practices for accepting a report, and ensuring compliance based on whether staff are reporting or being reported. All changes are tracked here.

For the enforcement policy itself we have the following standards:

  • A decision-making policy governs cases where contributors are involved, ensuring the right stakeholders are consulted and the scope of that consultation is limited to protect the privacy of those involved.
  • A consequence ladder guides decision-making and, if required, provides guidance for escalation in cases of repeat violations.
  • For rare cases where a ban is temporary, we have a policy to invite members back through our CPG onboarding.
  • To roll out a decision across multiple systems, we’ve developed this project-wide process including communication templates.

To ensure unified understanding and visibility, we have the following supportive processes and tools:

Internal Partnerships


Answering: “How might we unite people and teams with critical skills needed to respond efficiently and effectively (without that requiring a lot of formality)?”

There were three categories of formal and informal partnerships internally:

Product Partnerships – those with accountability and skill sets related to product implementation of standards and policies. Primarily this is legal’s product team and those administering the Mozilla GitHub organization.

Safety Partnerships – those with specific skill sets required in emergency situations. At Mozilla, this is Security Assurance, HR, Legal, and Workplace Resources (WPR).

Enforcement Partnerships – specifically, this means alignment between HR and legal on which actions belong to which team. That’s not to say we always need these; many smaller reports can easily be handled by the community team alone.

There are three circles, one that says 'Employee Support' and lists tasks of that role, the second says 'Investigation (HR)' and lists the tasks of that role, the third says 'Case Actions(community team)' and lists associated actions

An example of how a case involving an employee as reporter, or reported is managed between departments.

We also have less formalized partnerships (more of an intention to work together) across events like All Hands, and in collaboration with other enforcement leaders at Mozilla like those managing high-volume issues in places like Bugzilla.

Working Groups


Answering: “How can we convene representatives from different areas of the org around specific problems?”

Centralized CPG Enforcement Data – Working Group

To mitigate identified risk, a working group consisting of HR (for both Mozilla Corporation and Mozilla Foundation), legal, and the community team comes together periodically to triage requests for things like MOSS grants, community leadership roles, and in-person invites to events (pre-COVID-19). This reduces the potential for re-emergence of those with CPG violations in a number of areas.

Safety  – Working Group

When Mozillians feel threatened (perceived or actual), we want to make sure there is an accountable team, with access and ability to trigger broader responses across the organization, based on risk. This started as a mailing list of accountable people from Security, Community, Workplace Resources (WPR), and HR; the group now has Security as a DRI, ensuring prompt incident response.

Each of these working groups started as an experiment; having demonstrated value, each now has an accountable DRI (HR and Security Assurance, respectively).



Answering: “How can we ensure an ongoing high standard of response, through knowledge sharing and training for contributors in roles of project leadership, and staff in community-facing roles (and their managers)?”

We created two courses:

Two course tiles are shown: one titled ‘(contributor) Community Participation Guidelines’, the other ‘(staff) Participation Guidelines’; each shows a link to an FAQ.

These courses are not intended to make people ‘enforcement experts’. Instead, the curriculum covers, at a high level (think ‘first aid’ for enforcement!), those topics critical to mitigating risk and keeping people safe.

98% of the 501 staff who have completed this course said they understood how it applied to their role, and valued the experience.

Central to the content is this triage process for quick decision-making and escalation if needed.

A cropped infographic showing a triage process for CPG violations. The first four tiles are for P1 (with descriptions of what those are), the next tile is for P2 (with descriptions of what those are), and two more tiles cover P3 and P4, each with descriptions.

CPG Triage Infographic


Last (but not least), these courses encourage learners to prioritize self-care, with available resources and clear organizational support for doing so.

Supporting Systems


As part of our design and implementation we also found a need for systems to further our program’s effectiveness.  Those are:

Reporting Tool:  We needed a way to effectively and consistently accept, and document reports.  We decided to use a third party system that allowed Mozilla to create records directly and allowed contributors/community members to personally submit reports in the same tool.  This helped with making sure that we had one authorized system rather than a smattering of notes and documents being kept in an unstructured way.  It also allows people to report in their own language.

Learning Management System (LMS):  No program is effective without meaningful training. To support this, we engaged a third-party tool that allowed us to create content that is easy to digest, but also provides assessment opportunities (quizzes) and the ability to track course completion.



This often invisible work of safety is critical if open source is to reach its full potential. I want to thank the many, many people who cared, advocated, and contributed to this work, and those who trusted us to help.

If you have any questions about this program, including how to leverage our open resources, please do reach out via our GitHub repository or Discourse.


NOTE:  We defined community-facing roles as those with ‘Community Manager’ in their title, plus:

  • Engineers, designers, and others working regularly with contributors on platforms like Mercurial, Bugzilla, and GitHub.
  • Anyone organizing, speaking or hosting events on behalf of Mozilla.
  • Those with jobs requiring social media interactions with external audiences, including blog post authorship.

Daniel Stenbergstore the curl output over there

tldr: --output-dir [directory] comes in curl 7.73.0

The curl options to store the contents of a URL into a local file, -o (--output) and -O (--remote-name) were part of curl 4.0, the first ever release, already in March 1998.

Even though we often get to hear from users that they can’t remember which of the letter O’s to use, they’ve worked exactly the same for over twenty years. I believe the biggest reason why they’re hard to keep apart is because of other tools that use similar options for maybe not identical functionality so a command line cowboy really needs to remember the exact combination of tool and -o type.

Later on, we also brought -J to further complicate things. See below.

Let’s take a look at what these options do before we get into the new stuff:

--output [file]

This tells curl to store the downloaded contents in that given file. You can specify the file as a local file name for the current directory or you can specify the full path. Example: store the HTML of the given URL in "/tmp/foo":

curl -o /tmp/foo


--remote-name

This option is probably much better known as its short form: -O (upper case letter o).

This tells curl to store the downloaded contents in a file name that is extracted from the given URL’s path part. For example, when downloading a URL whose path ends in “pancakes.jpg”, users often think that saving it using the local file name “pancakes.jpg” is a good idea. -O does that for you. Example:

curl -O

The name is extracted from the given URL. Even if you tell curl to follow redirects, which then may go to URLs using different file names, the selected local file name is the one in the original URL. This way you know before you invoke the command which file name it will get.
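The file-name rule can be sketched in shell, using a made-up URL (this mimics the rule described above, not curl’s actual implementation):

```shell
# The last segment of the URL's path becomes the local file name;
# the URL and path here are hypothetical, for illustration only.
url="https://example.com/pics/pancakes.jpg?size=large"
name="${url%%\?*}"   # drop any query string
name="${name##*/}"   # keep only the last path segment
echo "$name"         # prints: pancakes.jpg
```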


--remote-header-name

This option is commonly used as -J (upper case letter j) and needs to be set in combination with --remote-name.

This makes curl parse incoming HTTP response headers to check for a Content-Disposition: header, and if one is present attempt to parse a file name out of it and then use that file name when saving the content.

This then naturally makes it impossible for a user to be really sure what file name it will end up with. You leave the decision entirely to the server. curl will make an effort to not overwrite any existing local file when doing this, and to reduce risks curl will always cut off any provided directory path from that file name.
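That directory-stripping safety measure amounts to keeping only the final path component of whatever name the server suggests. A minimal shell sketch (not curl’s actual code; the hostile suggestion is made up):

```shell
# A server could suggest a name with a directory part; curl cuts it off.
suggested="../../etc/passwd"   # hypothetical hostile Content-Disposition name
safe="${suggested##*/}"        # keep only the final component
echo "$safe"                   # prints: passwd
```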

Example download of the pancake image again, but allow the server to set the local file name:

curl -OJ

(it has been said that “-OJ is a killer feature” but I can’t take any credit for having come up with that.)

Which directory

So in particular with -O, with or without -J, the file is downloaded to the current working directory. If you want the download to be put somewhere special, you had to first ‘cd’ there.

When saving multiple URLs within a single curl invocation using -O, storing those in different directories would thus be impossible as you can only cd between curl invocations.

Introducing --output-dir

In curl 7.73.0, we introduce this new command line option --output-dir that goes well together with all these output options. It tells curl in which directory to create the file. If you want to download the pancake image, and put it in /tmp no matter what your current directory is:

curl -O --output-dir /tmp

And if you allow the server to select the file name but still want it in /tmp:

curl -OJ --output-dir /tmp

Create the directory!

This new option also goes well in combination with --create-dirs, so you can specify a non-existing directory with --output-dir and have curl create it for the download and then store the file in there:

curl --create-dirs -O --output-dir /tmp/receipes
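Conceptually, the combination behaves like the following shell sketch (a local mimic with made-up names, not curl itself):

```shell
# Roughly what --create-dirs plus --output-dir amount to:
outdir="/tmp/curl-demo/images"   # hypothetical target directory
mkdir -p "$outdir"               # --create-dirs: create missing directories
printf 'fake image data\n' > "$outdir/pancakes.jpg"  # -O would name the file from the URL
```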

Ships in 7.73.0

This new option comes in curl 7.73.0. It is curl’s 233rd command line option.

You can always find the man page description of the option on the curl website.


I (Daniel) wrote the code, docs and tests for this feature.

Image by Alexas_Fotos from Pixabay

Nicholas NethercoteHow to speed up the Rust compiler one last time

Due to recent changes at Mozilla my time working on the Rust compiler is drawing to a close. I am still at Mozilla, but I will be focusing on Firefox work for the foreseeable future.

So I thought I would wrap up my “How to speed up the Rust compiler” series, which started in 2016.

Looking back

I wrote ten “How to speed up the Rust compiler” posts.

  • How to speed up the Rust compiler. The original post, and the one where the title made the most sense. It focused mostly on how to set up the compiler for performance work, including profiling and benchmarking. It mentioned only four of my PRs, all of which optimized allocations.
  • How to speed up the Rust compiler some more. This post switched to focusing mostly on my performance-related PRs (15 of them), setting the tone for the rest of the series. I reused the “How to…” naming scheme because I liked the sound of it, even though it was a little inaccurate.
  • How to speed up the Rust compiler in 2018. I returned to Rust compiler work after a break of more than a year. This post included updated info on setting things up for profiling the compiler and described another 7 of my PRs.
  • How to speed up the Rust compiler some more in 2018. This post described some improvements to the standard benchmarking suite and support for more profiling tools, covering 14 of my PRs. Due to multiple requests from readers, I also included descriptions of failed optimization attempts, something that proved popular and that I did in several subsequent posts. (A few times, readers made suggestions that then led to subsequent improvements, which was great.)
  • How to speed up the Rustc compiler in 2018: NLL edition. This post described 13 of my PRs that helped massively speed up the new borrow checker, and featured my favourite paragraph of the entire series: “the html5ever benchmark was triggering out-of-memory failures on CI… over a period of 2.5 months we reduced the memory usage from 14 GB, to 10 GB, to 2 GB, to 1.2 GB, to 600 MB, to 501 MB, and finally to 266 MB”. This is some of the performance work I’m most proud of. The new borrow checker was a huge win for Rust’s usability and it shipped with very little hit to compile times, an outcome that was far from certain for several months.
  • How to speed up the Rust compiler in 2019. This post covered 44(!) of my PRs including ones relating to faster globals accesses, profiling improvements, pipelined compilation, and a whole raft of tiny wins from reducing allocations with the help of the new version of DHAT.
  • How to speed up the Rust compiler some more in 2019. This post described 11 of my PRs, including several minimising calls to memcpy, and several improving the ObligationForest data structure. It discussed some PRs by others that reduced library code bloat. I also included a table of overall performance changes since the previous post, something that I continued doing in subsequent posts.
  • How to speed up the Rust compiler one last time in 2019. This post described 21 of my PRs, including two separate sequences of refactoring PRs that unintentionally led to performance wins.
  • How to speed up the Rust compiler in 2020: This post described 23 of my successful PRs relating to performance, including a big change and win from avoiding the generation of LLVM bitcode when LTO is not being used (which is the common case). The post also described 5 of my PRs that represented failed attempts.
  • How to speed up the Rust compiler some more in 2020: This post described 19 of my PRs, including several relating to LLVM IR reductions found with cargo-llvm-lines, and several relating to improvements in profiling support. The post also described the important new weekly performance triage process that I started and is on track to be continued by others.

Beyond those, I wrote several other posts related to Rust compilation.

As well as sharing the work I’d been doing, a goal of the posts was to show that there are people who care about Rust compiler performance and that it was actively being worked on.

Lessons learned

Boiling down compiler speed to a single number is difficult, because there are so many ways to invoke a compiler, and such a wide variety of workloads. Nonetheless, I think it’s not inaccurate to say that the compiler is at least 2-3x faster than it was a few years ago in many cases. (This is the best long-range performance tracking I’m aware of.)

When I first started profiling the compiler, it was clear that it had not received much in the way of concerted profile-driven optimization work. (It’s only a small exaggeration to say that the compiler was basically a stress test for the allocator and the hash table implementation.) There was a lot of low-hanging fruit to be had, in the form of simple and obvious changes that had significant wins. Today, profiles are much flatter and obvious improvements are harder for me to find.

My approach has been heavily profiler-driven. The improvements I did are mostly what could be described as “bottom-up micro-optimizations”. By that I mean they are relatively small changes, made in response to profiles, that didn’t require much in the way of top-down understanding of the compiler’s architecture. Basically, a profile would indicate that a piece of code was hot, and I would try to either (a) make that code faster, or (b) avoid calling that code.

It’s rare that a single micro-optimization is a big deal, but dozens and dozens of them are. Persistence is key.

I spent a lot of time poring over profiles to find improvements. I have measured a variety of different things with different profilers. In order of most to least useful:

  • Instruction counts (Cachegrind and Callgrind)
  • Allocations (DHAT)
  • All manner of custom path and execution counts via ad hoc profiling (counts)
  • Memory use (DHAT and Massif)
  • Lines of LLVM IR generated by the front end (cargo-llvm-lines)
  • memcpys (DHAT)
  • Cycles (perf), but only after I discovered the excellent Hotspot viewer… I find perf’s own viewer tools to be almost unusable. (I haven’t found cycles that useful because they correlate strongly with instruction counts, and instruction count measurements are less noisy.)

Every time I did a new type of profiling, I found new things to improve. Often I would use multiple profilers in conjunction. For example, the improvements I made to DHAT for tracking allocations and memcpys were spurred by Cachegrind/Callgrind’s outputs showing that malloc/free and memcpy were among the hottest functions for many benchmarks. And I used counts many times to gain insight about a piece of hot code.

Off the top of my head, I can think of some unexplored (by me) profiling territories: self-profiling/queries, threading stuff (e.g. lock contention, especially in the parallel front-end), cache misses, branch mispredictions, syscalls, I/O (e.g. disk activity). Also, there are lots of profilers out there, each one has its strengths and weaknesses, and each person has their own areas of expertise, so I’m sure there are still improvements to be found even for the profiling metrics that I did consider closely.

I also did two larger “architectural” or “top-down” changes: pipelined compilation and LLVM bitcode elision. These kinds of changes are obviously great to do when you can, though they require top-down expertise and can be hard for newcomers to contribute to. I am pleased that there is an incremental compilation working group being spun up, because I think that is an area where there might be some big performance wins.

Good benchmarks are important because compiler inputs are complex and highly variable. Different inputs can stress the compiler in very different ways. I used rustc-perf almost exclusively as my benchmark suite and it served me well. That suite changed quite a bit over the past few years, with various benchmarks being added and removed. I put quite a bit of effort into getting all the different profilers to work with its harness. Because rustc-perf is so well set up for profiling, any time I needed to do some profiling of some new code I would simply drop it into my local copy of rustc-perf.

Compilers are really nice to profile and optimize because they are batch programs that are deterministic or almost-deterministic. Profiling the Rust compiler is much easier and more enjoyable than profiling Firefox, for example.

Contrary to what you might expect, instruction counts have proven much better than wall times when it comes to detecting performance changes on CI, because instruction counts are much less variable than wall times (e.g. ±0.1% vs ±3%; the former is highly useful, the latter is barely useful). Using instruction counts to compare the performance of two entirely different programs (e.g. GCC vs clang) would be foolish, but it’s reasonable to use them to compare the performance of two almost-identical programs (e.g. rustc before PR #12345 and rustc after PR #12345). It’s rare for instruction count changes to not match wall time changes in that situation. If the parallel version of the rustc front-end ever becomes the default, it will be interesting to see if instruction counts continue to be effective in this manner.

I was surprised by how many people said they enjoyed reading this blog post series. (The positive feedback partly explains why I wrote so many of them.) The appetite for “I squeezed some more blood from this stone” tales is high. Perhaps this relates to the high level of interest in Rust, and also the pain people feel from its compile times. People also loved reading about the failed optimization attempts.

Many thanks to all the people who helped me with this work. In particular:

  • Mark Rousskov, for maintaining rustc-perf and the CI performance infrastructure, and helping me with many rustc-perf changes;
  • Alex Crichton, for lots of help with pipelined compilation and LLVM bitcode elision;
  • Anthony Jones and Eric Rahm, for understanding how this Rust work benefits Firefox and letting me spend some of my Mozilla working hours on it.

Rust’s existence and success is something of a miracle. I look forward to being a Rust user for a long time. Thank you to everyone who has contributed, and good luck to all those who will contribute to it in the future!

The Rust Programming Language BlogLaunching the 2020 State of Rust Survey

It's that time again! Time for us to take a look at how the Rust project is doing, and what we should plan for the future. The Rust Community Team is pleased to announce our 2020 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses and establish development priorities for the future. (If you'd like to give longer form feedback on the Rust roadmap, we're also collecting blog posts!)

Completing this survey should take about 10–15 minutes and is anonymous unless you choose to give us your contact information. We will be accepting submissions for the next two weeks (until September 24th), and we will write up our findings afterwards. You can also check out last year’s results.

(If you speak multiple languages, please pick one)

Please help us spread the word by sharing the survey link on your social network feeds, at meetups, around your office, and in other communities.

If you have any questions, please see our frequently asked questions or email the Rust Community team at

Finally, we wanted to thank everyone who helped develop, polish, and test the survey. In particular, we'd like to thank all of the volunteers who worked to provide all of the translations available this year and who will help to translate the results.

Patrick ClokeInstantbird Blog: WordPress to Pelican

The Instantbird blog is now (as of mid-April 2020) hosted on GitHub Pages (instead of self-hosted WordPress) [1]. Hopefully it was converted faithfully, but feel free to let us know if you see something broken! You can file an issue at the repo for the blog or just comment below …

Mozilla Addons BlogIntroducing the Promoted Add-ons Pilot

Today, we are launching a pilot program to give developers a way to promote their add-ons on addons.mozilla.org (AMO). This pilot program, which will run between the end of September and the end of November 2020, aims to expand the number of add-ons we can review and verify as compliant with Mozilla policies, and provides developers with options for boosting their discoverability on AMO.


Building Upon Recommended Extensions

We strive to maintain a balance between openness for our development ecosystem and security and privacy for our users. Last summer, we launched a program called Recommended Extensions consisting of a relatively small number of editorially chosen add-ons that are regularly reviewed for policy compliance and prominently recommended on AMO and other Mozilla channels. All other add-ons display a caution label on their listing pages letting users know that we may not have reviewed these add-ons.

We would love to review all add-ons on AMO for policy compliance, but the cost would be prohibitive because reviews are performed by humans. Still, developers often tell us they would like to have their add-ons reviewed and featured on AMO, and some have indicated a willingness to pay for these services if we provide them.

Introducing Promoted Add-ons

To support these developers, we are adding a new program called Promoted Add-ons, where add-ons can be manually reviewed and featured on the AMO homepage for a fee. Offering these services as paid options will help us expand the number of add-ons that are verified and give developers more ways to gain users.

There will be two levels of paid services available:

  • “Verified” badging: Developers will have all new versions of their add-on reviewed for security and policy compliance. If the add-on passes, it will receive a Verified badge on AMO and in the Firefox Add-ons Manager (about:addons). The caution label will no longer appear on the add-on’s AMO listing page.

Add-on listing page example with verified badge

  • Sponsored placement on the AMO homepage. Developers of add-ons that have a Verified badge have the option to reach more users by paying an additional fee for placement in a new Sponsored section of the AMO homepage. The AMO homepage receives about two million unique visits per month.

AMO homepage with Sponsored section

During the pilot program, these services will be provided to a small number of participants without cost. More details will be provided to participants and the larger community about the program, including pricing, in the coming months.

Sign up for the Pilot Program

If you are interested in participating in this pilot program, click here to sign up. Please note that space will be limited based on the following criteria and restrictions:

  • Your add-on must be listed on AMO.
  • You (or your company) must be based in the United States, Canada, New Zealand, Australia, the United Kingdom, Malaysia, or Singapore, because once the pilot ends, we can only accept payment from these countries. (If you’re interested in participating but live outside these regions, please sign up to join the waitlist. We’re currently looking into how we can expand to more countries.)
  • Up to 12 add-ons will be accepted to the pilot program due to our current capacity for manual reviews. We will prioritize add-ons that are actively maintained and have an established user base.
  • Prior to receiving the Verified badge, a participating add-on will need to pass manual review. This may require some time commitment from developers to respond to potential review requests in a timely manner.
  • Add-ons in the Recommended Extensions program do not need to apply, because they already receive verification and discovery benefits.

We’ll begin notifying developers who are selected to participate in the program on September 16, 2020. We may expand the program in the future if interest grows, so the sign-up sheet will remain open if you would like to join the waitlist.

Next Steps

We expect Verified badges and homepage sponsorships for pilot participants to go live in early October. We’ll run the pilot for a few weeks to monitor its performance and communicate the next phase in November.

For developers who do not wish to participate in this program but are interested in more ways to support their add-ons, we plan to streamline the contribution experience later this year and explore features that make it easier for people to financially support the add-ons they use regularly. These features will be free to all add-on developers, and remain available whether or not the Promoted Add-ons pilot graduates.

We look forward to your participation, and hope you stay tuned for updates! If you have any questions about the program, please post them to our community forum.

The post Introducing the Promoted Add-ons Pilot appeared first on Mozilla Add-ons Blog.

This Week In RustThis Week in Rust 355

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Learn Standard Rust
Learn More Rust
Project Updates

Crate of the Week

This week's crate is serde-query, an efficient query language for Serde.

Thanks to Vlad Frolov for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

332 pull requests were merged in the last week

Rust Compiler Performance Triage

A few small compile-time regressions this week. The first was #70793, which added some specializations to the standard library in order to increase runtime performance. The second was #73996, which adds an option to the diagnostics code to print only the names of types and traits when they are unique instead of the whole path. The third was #75200, which refactored part of BTreeMap to avoid aliasing mutable references.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

It's amazing how frequent such "rare edge cases" can be. Especially when there are millions of people using billions of files originating from God know what operating systems. Far better things are checked properly if one want robust code. As Rust uses do.

Thanks to Edoardo Morandi for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Mozilla Privacy BlogMozilla offers a vision for how the EU Digital Services Act can build a better internet

Later this year the European Commission is expected to publish the Digital Services Act (DSA). These new draft laws will aim at radically transforming the regulatory environment for tech companies operating in Europe. The DSA will deal with everything from content moderation, to online advertising, to competition issues in digital markets. Today, Mozilla filed extensive comments with the Commission, to outline Mozilla’s vision for how the DSA can address structural issues facing the internet while safeguarding openness and fundamental rights.

The stakes at play for consumers and the internet ecosystem could not be higher. If developed carefully and with broad input from the internet health movement, the DSA could help create an internet experience for consumers that is defined by civil discourse, human dignity, and individual expression. In addition, it could unlock more consumer choice and consumer-facing innovation, by creating new market opportunities for small, medium, and independent companies in Europe.

Below are the key recommendations in Mozilla’s 90-page submission:

  • Content responsibility: The DSA provides a crucial opportunity to implement an effective and rights-protective framework for content responsibility on the part of large content-sharing platforms. Content responsibility should be assessed in terms of platforms’ trust & safety efforts and processes, and should scale depending on resources, business practices, and risk. At the same time, the DSA should avoid ‘down-stack’ content control measures (e.g. those targeting ISPs, browsers, etc). These interventions pose serious free expression risk and are easily circumventable, given the technical architecture of the internet.
  • Meaningful transparency to address disinformation: To ensure transparency and to facilitate accountability, the DSA should consider a mandate for broad disclosure of advertising through publicly available ad archive APIs.
  • Intermediary liability: The main principles of the E-Commerce directive still serve the intended purpose, and the European Commission should resist the temptation to weaken the directive in the effort to increase content responsibility.
  • Market contestability: The European Commission should consider how to best support healthy ecosystems with appropriate regulatory engagement that preserves the robust innovation we’ve seen to date — while also allowing for future competitive innovation from small, medium and independent companies without the same power as today’s major platforms.
  • Ad tech reform: The advertising ecosystem is a key driver of the digital economy, including for companies like Mozilla. However the ecosystem today is unwell, and a crucial first step towards restoring it to health would be for the DSA to address its present opacity.
  • Governance and oversight: Any new oversight bodies created by the DSA should be truly co-regulatory in nature, and be sufficiently resourced with technical, data science, and policy expertise.

Our submission to the DSA public consultation builds on this week’s open letter from our CEO Mitchell Baker to European Commission President Ursula von der Leyen. Together, they provide the vision and the practical guidance on how to make the DSA an effective regulatory tool.

In the coming months we’ll advance these substantive recommendations as the Digital Services Act takes shape. We look forward to working with EU lawmakers and the broader policy community to ensure the DSA succeeds in addressing the systemic challenges holding back the internet from what it should be.

A high-level overview of our DSA submission can be found here, and the complete 90-page submission can be found here.

The post Mozilla offers a vision for how the EU Digital Services Act can build a better internet appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdOpenPGP in Thunderbird 78

Updating to Thunderbird 78 from 68

Soon the Thunderbird automatic update system will start to deliver the new Thunderbird 78 to current users of the previous release, Thunderbird 68. This blog post is intended to share with you details about our OpenPGP support in Thunderbird 78, and some details Enigmail add-on users should consider when updating. If you are interested in reading more about the other features in the Thunderbird 78 release, please see our previous blog post.

Updating to Thunderbird 78 is highly recommended to ensure you will receive security fixes, because no more fixes will be provided for Thunderbird 68 after September 2020.

The traditional Enigmail Add-on cannot be used with version 78, because of changes to the underlying Mozilla platform Thunderbird is built upon. Fortunately, it is no longer needed: Thunderbird 78.2.1 and newer enable a new built-in OpenPGP feature.

Not all of Enigmail’s functionality is offered by Thunderbird 78 yet – but there is more to come. And some functionality has been implemented differently, partly because of technical necessity, but also because we are simplifying the workflow for our users.

With the help of a migration tool provided by the Enigmail Add-on developer, users of Enigmail’s classic mode will get assistance to migrate their settings and keys. Users of Enigmail’s Junior Mode will be informed by Enigmail, upon update, about their options for using that mode with Thunderbird 78, which requires downloading software that isn’t provided by the Thunderbird project. Alternatively, users of Enigmail’s Junior Mode may attempt a manual migration to Thunderbird’s new integrated OpenPGP feature, as explained in our howto document listed below.

Unlike Enigmail, OpenPGP in Thunderbird 78 does not use GnuPG software by default. This change was necessary to provide a seamless and integrated experience to users on all platforms. Instead, the software of the RNP project was chosen for Thunderbird’s core OpenPGP engine. Because RNP is a newer project in comparison to GnuPG, it has certain limitations, for example it currently lacks support for OpenPGP smartcards. As a workaround, Thunderbird 78 offers an optional configuration for advanced users, which requires additional manual setup, but which can allow the optional use of separately installed GnuPG software for private key operations.

The Mozilla Open Source Support (MOSS) awards program generously provided funding for an audit of the RNP library and Thunderbird’s related code, which was conducted by Cure53. We are happy to report that no critical or major security issues were found; all identified issues had a medium or low severity rating, and we will publish the results in the future.

More Info and Support

We have written a support article that lists questions that users might have, and it provides more detailed information on the technology, answers, and links to additional articles and resources. You may find it at:

If you have questions about the OpenPGP feature, please use Thunderbird’s discussion list for end-to-end encryption functionality at:

Several topics have already been discussed, so you might be able to find some answers in its archive.

The Mozilla BlogMozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity

Today, Mozilla CEO Mitchell Baker published an open letter to European Commission President Ursula von der Leyen, urging her to seize a ‘once-in-a-generation’ opportunity to build a better internet through the opportunity presented by the upcoming Digital Services Act (“DSA”).

Mitchell’s letter coincides with the European Commission’s public consultation on the DSA, and sets out high-level recommendations to support President von der Leyen’s DSA policy agenda for emerging tech issues (more on that agenda and what we think of it here).

The letter sets out Mozilla’s recommendations to ensure:

  • Meaningful transparency with respect to disinformation;
  • More effective content accountability on the part of online platforms;
  • A healthier online advertising ecosystem; and,
  • Contestable digital markets

As Mitchell notes:

“The kind of change required to realise these recommendations is not only possible, but proven. Mozilla, like many of our innovative small and medium independent peers, is steeped in a history of challenging the status quo and embracing openness, whether it is through pioneering security standards, or developing industry-leading privacy tools.”

Mitchell’s full letter to Commission President von der Leyen can be read here.

The post Mozilla CEO Mitchell Baker urges European Commission to seize ‘once-in-a-generation’ opportunity appeared first on The Mozilla Blog.

Support.Mozilla.OrgIntroducing the Customer Experience Team

A few weeks ago, Rina discussed the impact of the recent changes at Mozilla on the SUMO team. This change has resulted in a more focused team that combines Pocket Support and Mozilla Support into a single team we’re calling Customer Experience, led by Justin Rochell. Justin has been leading the support team at Pocket and will now broaden his responsibilities to oversee Mozilla’s products as well as the SUMO community.

Here’s a short introduction from Justin:

Hey everyone! I’m excited and honored to be stepping into this new role leading our support and customer experience efforts at Mozilla. After heading up support at Pocket for the past 8 years, I’m excited to join forces with SUMO to improve our support strategy, collaborate more closely with our product teams, and ensure that our contributor community feels nurtured and valued. 

One of my first support jobs was for an email client called Postbox, which is built on top of Thunderbird. It feels as though I’ve come full circle, since SUMO was a valuable resource for me when answering support questions and writing knowledge base articles.

You can find me on Matrix at – I’m eager to learn about your experience as a contributor, and I welcome you to get in touch. 

We’re also excited to welcome Olivia Opdahl, who is a Senior Community Support Rep at Pocket and has been on the Pocket Support team since 2014. She’s been responsible for many things in addition to support, including QA, curating Pocket Hits, and running social media for Pocket.

Here’s a short introduction from Olivia:

Hi all, my name is Olivia and I’m joining the newly combined Mozilla support team from Pocket. I’ve worked at Pocket since 2014 and have seen Pocket evolve many times into what we’re currently reorganizing as a more integrated part of Mozilla. I’m excited to work with you all and learn even more about the rest of Mozilla’s products! 

When I’m not working, I’m probably playing video games, hiking, learning programming, taking photos or attending concerts. These days, I’m trying to become a Top Chef, well, not really, but I’d like to learn how to make more than mac and cheese :D

Thanks for welcoming me to the new team! 

Besides Justin and Olivia, JR/Joe Johnson, who you might remember as Rina’s maternity cover earlier this year, will step in as Release/Insights Manager for the team and work closely with the product team. Joni will continue as our Content Lead and Angela as our Technical Writer. I will also stay on as Support Community Manager.

We’ll be sharing more information about our team’s focus in the future as we learn more. For now, please join me in welcoming Justin and Olivia to the team!


On behalf of the Customer Experience Team,


The Mozilla BlogA look at password security, Part V: Disk Encryption

The previous posts (I, II, III, IV) focused primarily on remote login, either to multiuser systems or Web sites (though the same principles also apply to other networked services like e-mail). However, another common case where users encounter passwords is for login to devices such as laptops, tablets, and phones. This post addresses that topic.

Threat Model

We need to start by talking about the threat model. As a general matter, the assumption here is that the attacker has some physical access to your device. While some devices do have password-controlled remote access, that’s not the focus here.

Generally, we can think of two kinds of attacker access.

Non-invasive: The attacker isn’t willing to take the device apart, perhaps because they only have the device temporarily and don’t want to leave traces of tampering that would alert you.

Invasive: The attacker is willing to take the device apart. Within invasive, there’s a broad range of how invasive the attacker is willing to be, starting with “open the device and take out the hard drive” and ending with “strip the packaging off all the chips and examine them with an electron microscope”.

How concerned you should be depends on who you are, the value of your data, and the kinds of attackers you face. If you’re an ordinary person and your laptop gets stolen out of your car, then attacks are probably going to be fairly primitive, maybe removing the hard disk but probably not using an electron microscope. On the other hand, if you have high value data and the attacker targets you specifically, then you should assume a fairly high degree of capability. And of course people in the computer security field routinely worry about attackers with nation state capabilities.

It’s the data that matters

It’s natural to think of passwords as a measure that protects access to the computer, but in most cases it’s really a matter of access to the data on your computer. If you make a copy of someone’s disk and put it in another computer, that computer will be a pretty close clone of the original (that’s what a backup is, after all), and the attacker will be able to read all your sensitive data off the disk, and quite possibly impersonate you to cloud services.

This implies two very easy attacks:

  • Bypass the operating system on the computer and access the disk directly. For instance, on a Mac you can boot into recovery mode and just examine the disk. Many UNIX machines have something called single-user mode which boots up with administrative access.
  • Remove the disk and mount it in another computer as an external disk. This is trivial on most desktop computers, requiring only a screwdriver (if that) and on many laptops as well; if you have a Mac or a mobile device, the disk may be a soldered in Flash drive, which makes things harder but still doable.

The key thing to realize is that nearly all of the access controls on the computer are just implemented by the operating system software. If you can bypass that software by booting into an administrative mode or by using another computer, then you can get past all of them and just access the data directly.1

If you’re thinking that this is bad, you’re right. And the solution to this is to encrypt your disk. If you don’t do that, then basically your data will not be secure against any kind of dedicated attacker who has physical access to your device.

Password-Based Key Derivation

The good news is that basically all operating systems support disk encryption. The bad news is that the details of how it’s implemented vary dramatically in some security critical ways. I’m not talking here about the specific details of cryptographic algorithms and how each individual disk block is encrypted. That’s a fascinating topic (see here), but most operating systems do something mostly adequate. The most interesting question for users is how the disk encryption keys are handled and how the password is used to gate access to those keys.

The obvious way to do this — and the way things used to work pretty much everywhere — is to generate the encryption key directly from the password. [Technical Note: You probably really want to generate a random key and encrypt it with a key derived from the password. This way you can change your password without re-encrypting the whole disk. But from a security perspective these are fairly equivalent.] The technical term for this is a password-based key derivation function, which just means that it takes a password and outputs a key. For our purposes, this is the same as a password hashing function and it has the same problem: given an encrypted disk I can attempt to brute force the password by trying a large number of candidate passwords. The result is that you need to have a super-long password (or often a passphrase) in order to prevent this kind of attack. While it’s possible to memorize a long enough password, it’s no fun, as well as being a real pain to type in whenever you want to log in to your computer, let alone on your smartphone or tablet. As a result, most people use much shorter passwords, which of course weakens the security of disk encryption.
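As a purely illustrative sketch (no real OS derives keys exactly this way), password-based key derivation can be demonstrated with PBKDF2 from Python’s standard library; the function name and salt here are made up for the example:

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Derive a 256-bit key from the password. Raising the iteration count
    # slows every guess down proportionally, but an attacker holding the
    # encrypted disk can still try candidate passwords offline.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = b"random-per-disk-salt"  # stored in the clear alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
```

The salt defeats precomputed tables, and the iteration count slows each guess, but nothing limits how fast an offline attacker can iterate over candidates — which is exactly why password length ends up carrying the security burden in this design.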

Hardware Security Modules

As we’ve seen before, the problem here is that the attacker gets to try candidate passwords very fast and the only real fix is to limit the rate at which they can try. This is what many modern devices do. Instead of just deriving the encryption key from the password, they generate a random encryption key inside of a piece of hardware security module (HSM).2 What “secure” means varies but ideally it’s something like:

  1. It can do encryption and decryption internally without ever exposing the keys.4
  2. It resists physical attacks to recover the keys. For instance it might erase them if you try to remove the casing from the HSM.

In order to actually encrypt or decrypt, you first unlock the HSM with the password. That doesn’t give you the keys; it just lets you use the HSM to do encryption and decryption. Until you enter the password, the HSM won’t do anything.

The main function of the HSM is to limit the rate at which you can try passwords. This might happen by simply having a flat limit of X tries per second, or maybe it exponentially backs off the more passwords you try, or maybe it will only allow some small number of failures (10 is common) before it erases itself. If you’ve ever pulled your iPhone out of your pocket only to see “iPhone is disabled, try again in 5 minutes”, that’s the rate limiting mechanism in action. Whatever the technique, the idea is the same: prevent the attacker from quickly trying a large number of candidate passwords. With a properly designed rate limiting mechanism, you can get away with much shorter passwords. For instance, if you can only have 10 tries before the phone erases itself, then the attacker only has a 1/1000 chance of breaking a 4-digit PIN, let alone a 16 character password. Some HSMs can also do biometric authentication to unlock the encryption key, which is how features like TouchID and FaceID work.
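The erase-after-N-failures behavior can be sketched as a toy model (real HSMs enforce this in tamper-resistant hardware; the class and PIN below are invented for illustration):

```python
class ToyHSM:
    """Toy model of an HSM retry counter -- illustrative only."""
    MAX_FAILURES = 10  # common choice: erase the keys after 10 wrong guesses

    def __init__(self, pin: str):
        self._pin = pin
        self._failures = 0
        self._wiped = False

    def unlock(self, candidate: str) -> bool:
        if self._wiped:
            return False              # keys are gone; nothing left to unlock
        if candidate == self._pin:
            self._failures = 0        # a correct entry resets the counter
            return True
        self._failures += 1
        if self._failures >= self.MAX_FAILURES:
            self._wiped = True        # "erases itself" after too many failures
        return False

# With at most 10 guesses against a 4-digit PIN, the attacker's odds
# are 10 / 10_000, i.e. 1 in 1,000.
hsm = ToyHSM("4921")
```

The point of the sketch is that the guess budget, not the PIN length, is what bounds the attacker: once the counter trips, even the correct PIN no longer unlocks anything.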

So, having the encryption keys in an HSM is a big improvement to security and it doesn’t require any change in the user interface — you just type in your password — which is great. What’s not so great is that it’s not always clear whether your device has an HSM or not. As a practical matter, new Apple devices do, as does the Google Pixel. On Windows 10 the situation varies by device, but many modern devices will have one.

It needs to be said that an HSM isn’t magic: iPhones store their keys in HSMs and it certainly makes it much harder to decrypt them, but there are also companies who sell technology for breaking into HSM-protected devices like iPhones (Cellebrite being probably the best known), but you’re far better off with a device like this than you are without. And of course all bets are off if someone takes your device when it’s unlocked. This is why it’s a good idea to have your screen set to lock automatically after a fairly short time; obviously that’s a lot more convenient if you have fingerprint or face ID.3


OK, so this has been a pretty long series, but I hope it’s given you an appreciation for all the different settings in which passwords are used and where they are safe(r) versus unsafe.

As always, I can be reached at if you have questions or comments.

  1. Some computers allow you to install a firmware password which will stop the computer from booting unless you enter the right password. This isn’t totally useless but it’s not a defense if the attacker is willing to remove the disk. 
  2. Also called a Secure Encryption Processor (SEP) or a Trusted Platform Module (TPM). 
  3. It’s not technically necessary to keep the keys in HSM in order to secure the device against password guessing. For instance, once the HSM is unlocked it could just output the key and let decryption happen on the main CPU. The problem is that this then exposes you to attacks on the non-tamper-resistant hardware that makes up the rest of the computer. For this reason, it’s better to have the key kept inside the HSM. Note that this only applies to the keys in the HSM, not the data in your computer’s memory, which generally isn’t encrypted, and there are ways to read that memory. If you are worried your computer might be seized and searched, as in a border crossing, do what the pros do and turn it off.
  4. Unfortunately, biometric ID also makes it a lot easier to be compelled to unlock your phone–whatever the legal situation in your jurisdiction, someone can just press your finger against the reader, but it’s a lot harder to make you punch in your PIN–so it’s a bit of a tradeoff. 

Update: 2020-09-07: Changed TPM to HSM once in the main text for consistency.

The post A look at password security, Part V: Disk Encryption appeared first on The Mozilla Blog.

Daniel Stenbergcurl help remodeled

curl 4.8 was released in 1998 and contained 46 command line options. curl --help would list them all. A decent set of options.

When we released curl 7.72.0 a few weeks ago, it contained 232 options… and curl --help still listed all available options.

What was once a long list of options grew over the decades into a crazy long wall of text that shocked users who entered this command, hoping to figure out which command line options to try next.

–help me if you can

We’ve known about this usability flaw for a while, but it took us some time to figure out how to approach it and decide what the best next step would be. That changed this year, when long-time curl veteran Dan Fandrich gave his presentation at curl up 2020, titled –help me if you can.

Emil Engler subsequently picked up the challenge and converted ideas surfaced by Dan into reality and proper code. Today we merged the refreshed and improved --help behavior in curl.

Perhaps the most notable change in curl for many users in a long time. Targeted for inclusion in the pending 7.73.0 release.

help categories

First out, curl --help will now by default list only a small subset of the most “important” and frequently used options. No massive wall, no shock, and no need to pipe the output to more or less just to read it properly.

Then: each curl command line option now has one or more categories, and the help system can be asked to just show command line options belonging to the particular category that you’re interested in.

For example, let’s imagine you’re interested in seeing what curl options provide for your HTTP operations:

$ curl --help http
Usage: curl [options…]
http: HTTP and HTTPS protocol options
--alt-svc Enable alt-svc with this cache file
--anyauth Pick any authentication method
--compressed Request compressed response
-b, --cookie Send cookies from string/file
-c, --cookie-jar Write cookies to after operation
-d, --data HTTP POST data
--data-ascii HTTP POST ASCII data
--data-binary HTTP POST binary data
--data-raw HTTP POST data, '@' allowed
--data-urlencode HTTP POST data url encoded
--digest Use HTTP Digest Authentication

list categories

To figure out which help categories exist, just ask with curl --help category, which will show you a list of the current twenty-two categories: auth, connection, curl, dns, file, ftp, http, imap, misc, output, pop3, post, proxy, scp, sftp, smtp, ssh, telnet, tftp, tls, upload and verbose. It will also display a brief description of each category.

Each command line option can be put into multiple categories, so the same option may be displayed both in the “http” category and in “upload” or “auth”, etc.

--help all

You can of course still get the old list of every single command line option by issuing curl --help all. Handy for grepping the list and more.


The meta category “important” is what we use for the options that we show when just curl --help is issued. Presumably those options should be the most important, in some ways.


Code by Emil Engler. Ideas and research by Dan Fandrich.

This Week In RustThis Week in Rust 354

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Learn Standard Rust
Learn More Rust
Project Updates

Crate of the Week

This week's crate is GlueSQL, a SQL database engine written in Rust with WebAssembly support.

Thanks to Taehoon Moon for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

326 pull requests were merged in the last week

Rust Compiler Performance Triage

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

When the answer to your question contains the word "variance" you're probably going to have a bad time.

Thanks to Michael Bryan for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

About:CommunityFive years of Tech Speakers

Given the recent restructuring at Mozilla, many teams have been affected by the layoff. Unfortunately, this includes the Mozilla Tech Speakers program. As one of the volunteers who’s been part of the program since the very beginning, I’d like to share some memories of the last five years, watching the Tech Speakers program grow from a small group of people to a worldwide community.

Mozilla Tech Speakers' photos

Mozilla Tech Speakers – a program to bring together volunteer contributors who are already speaking to technical audiences (developers/web designers/computer science and engineering students) about Firefox, Mozilla and the Open Web in communities around the world. We want to support you and amplify your work!

It all started as an experiment in 2015 designed by Havi Hoffman and Dietrich Ayala from the Developer Relations team. They invited a handful of volunteers who were passionate about giving talks at conferences on Mozilla-related technologies and the Open Web in general to trial a program that would support their conference speaking activities, and amplify their impact. That’s how Mozilla Tech Speakers were born.

It was a perfect symbiosis. A small, scrappy Developer Relations team can’t cover all the web conferences everywhere, but with help from trained and knowledgeable volunteers that task becomes a lot easier. Volunteer local speakers can share information at regional conferences that are distant or inaccessible for staff. And for half a decade, it worked, and the program grew in reach and popularity.

Mozilla Tech Speakers: Whistler Videoshoot

Those volunteers, in return, were given training and support, including funding for conference travel and cool swag as a token of appreciation.

From the first cohort of eight people, the program grew over the years to more than a hundred expert technical speakers around the world, giving top quality talks at the best web conferences. Sometimes you couldn’t attend an event without randomly bumping into one or two Tech Speakers. It was a globally recognizable brand of passionate, tech-savvy Mozillians.

The meetups

After several years of growth, we realized that connecting remotely is one thing, but meeting in person is a totally different experience. That’s why the idea for Tech Speakers meetups was born. We held meetups in three of the past four years: Berlin in 2016, Paris in 2018, and two events in 2019, in Amsterdam and Singapore, to accommodate speakers on opposite sides of the globe.

Mozilla Tech Speakers: Berlin

The first Tech Speakers meetup in Berlin coincided with the 2016 View Source Conference, hosted by Mozilla. It was only one year after the program started, but we already had a few new cohorts trained and integrated into the group. During the Berlin meetup we gave short lightning talks in front of each other, and received feedback from our peers, as well as professional speaking coach Denise Graveline.

Mozilla Tech Speakers: View Source

After the meetup ended, we joined the conference as volunteers, helping out in the registration desk, talking to attendees in the booths, and making the speakers feel welcome.

Mozilla Tech Speakers: Paris

The second meetup took place two years later in Paris – hosted by Mozilla’s unique Paris office, looking literally like a palace. We participated in training workshops about Firefox Reality, IoT, WebAssembly, and Rust. We continued the approach of presenting lightning talks that were evaluated by experts in the web conference scene: Ada Rose Cannon, Jessica Rose, Vitaly Friedman, and Marc Thiele.

Mozilla Tech Speakers: Amsterdam

Mozilla hosted two meetups in 2019, before the 2020 pandemic put tech conferences and events on hold.  The European tech speakers met in Amsterdam, while folks from Asia and the Pacific region met in Singapore.

The experts giving feedback for our Amsterdam lightning talks were Joe Nash, Kristina Schneider, Jessica Rose, and Jeremy Keith, with support from Havi Hoffman, and Ali Spivak as well. The workshops included Firefox DevTools and Web Speech API.

The knowledge

The Tech Speakers program was intended to help developers grow and share their incredible knowledge with the rest of the world. We had various learning opportunities, from the first training through actually becoming a Tech Speaker: updates from Mozilla engineering staff about various technologies (Tech Briefings), sessions with experts outside of the company (Masterclasses), and monthly calls where we talked about our own lessons learned.

Mozilla Tech Speakers: Paris lecture

People shared links, usually tips about speaking, teaching and learning, and everything tech related in between.

The perks

Quite often we spoke at local or not-for-profit conferences organized by passionate people like us. With those costs covered, Mozilla was presented as a partner of the conference, which benefited all parties involved.

Mozilla Tech Speakers: JSConf Iceland

It was a fair trade – we were extending the reach of Mozilla’s Developer Relations team significantly, always happy to do it in our free time, while the costs of such activities were relatively low. Since we were properly appreciated by the Tech Speakers staff, it felt really professional at all times and we were happy with the outcomes.

The reports

At its peak, there were more than a hundred Tech Speakers giving thousands of talks to tens of thousands of other developers around the world. Those activities were reported via a dedicated form, but writing trip reports was also a great way to summarize and memorialize our involvement in a given event.

The Statistics

In the last full year of the program, 2019, we had over 600 engagements (of which about 14% were workshops and the rest talks at conferences) from 143 active speakers across 47 countries. This added up to a total talk audience of about 70,000 and a workshop audience of about 4,000. We were collectively fluent in over 50 of the world’s most common languages.

The life-changing experience

I reported on more than one hundred events I attended as a speaker, workshop lead, or booth staff – many of which wouldn’t have been possible without Mozilla’s support for the Tech Speakers program. Last year I was invited to attend a W3C workshop on Web games in Redmond, and without the travel and accommodation coverage I received from Mozilla, I’d have missed a huge opportunity.

Mozilla Tech Speakers: W3C talk

At that particular event, I met Desigan Chinniah, who got me hooked on the concept of Web Monetization. I immediately went all in: we quickly announced the Web Monetization category in the js13kGames competition, I showcased monetized games at the MozFest Arcade in London, and I was later awarded a Grant for the Web. I don’t think any of that would have been possible without someone accepting my request to fly across the ocean to talk about an indie perspective on Web games as a Tech Speaker.

The family

Aside from the “work” part, Tech Speakers have become literally one big family, best friends for life, and welcome visitors in each other’s cities. This is stronger than anything a company can offer to their volunteers, for which I’m eternally grateful. Tech Speakers were, and always will be, a bunch of cool people doing stuff out of pure passion.

Mozilla Tech Speakers: MozFest 2019

I’d like to thank Havi Hoffman most of all, as well as Dietrich Ayala, Jason Weathersby, Sandra Persing, Michael Ellis, Jean-Yves Perrier, Ali Spivak, István Flaki Szmozsánszky, Jessica Rose, and many others shaping the program over the years, and every single Tech Speaker who made this experience unforgettable.

I know I’ll be seeing you fine folks at conferences when the current global situation settles. We’ll be bumping casually into each other, remembering the good old days, and continuing to share our passions, present and talk about the Open Web. Much love, see you all around!

The Rust Programming Language BlogPlanning the 2021 Roadmap

The core team is beginning to think about the 2021 Roadmap, and we want to hear from the community. We’re going to be running two parallel efforts over the next several weeks: the 2020 Rust Survey, to be announced next week, and a call for blog posts.

Blog posts can contain anything related to Rust: language features, tooling improvements, organizational changes, ecosystem needs — everything is in scope. We encourage you to try to identify themes or broad areas into which your suggestions fit, because these themes help guide the project as a whole.

One way of helping us understand the lens you're looking at Rust through is to give one (or more) statements of the form "As a X I want Rust to Y because Z". These then may provide motivation behind items you call out in your post. Some examples might be:

  • "As a day-to-day Rust developer, I want Rust to make consuming libraries a better experience so that I can more easily take advantage of the ecosystem"
  • "As an embedded developer who wants to grow the niche, I want Rust to make end-to-end embedded development easier so that newcomers can get started more easily"

This year, to make sure we don’t miss anything, when you write a post please submit it into this Google form! We will try to look at posts not submitted via this form, too, but posts submitted here aren’t going to be missed. Any platform — from blogs to GitHub gists — is fine! We plan to close the form on October 5th.

To give you some context for the upcoming year, we established these high-level goals for 2020, and we wanted to take a look back at the first part of the year. We’ve made some excellent progress!

  • Prepare for a possible Rust 2021 Edition
  • Follow-through on in-progress designs and efforts
  • Improve project functioning and governance

Prepare for a possible Rust 2021 Edition

There is now an open RFC proposing a plan for the 2021 edition! There has been quite a bit of discussion, but we hope to have it merged within the next 6 weeks. The plan is for the new edition to be much smaller in scope than Rust 2018. It is expected to include a few minor tweaks to improve language usability, along with the promotion of various edition idiom lints (like requiring dyn Trait over Trait) so that they will be “deny by default”. We believe that we are on track for being able to produce an edition in 2021.
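As a hedged illustration of what that lint covers (the trait and types here are hypothetical; the `bare_trait_objects` lint itself is real), the pre-2018 spelling of a trait object omits `dyn`, while the idiomatic form makes it explicit:

```rust
// Illustrative example of the idiom behind the `bare_trait_objects` lint.
trait Draw {
    fn draw(&self) -> String;
}

struct Circle;
impl Draw for Circle {
    fn draw(&self) -> String {
        "circle".to_string()
    }
}

// Pre-2018 spelling, which the lint flags (and an edition may deny):
// fn render(shape: &Draw) -> String { shape.draw() }

// Idiomatic spelling with an explicit `dyn`:
fn render(shape: &dyn Draw) -> String {
    shape.draw()
}

fn main() {
    assert_eq!(render(&Circle), "circle");
}
```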

Follow-through on in-progress designs and efforts

One of our goals for 2020 was to push “in progress” design efforts through to completion. We’ve seen a lot of efforts in this direction:

  • The inline assembly RFC has been merged and a new implementation is ready for experimentation
  • Procedural macros have been stabilized in most positions as of Rust 1.45
  • There is a proposal for an MVP of const generics, which we’re hoping to ship in 2020
  • The async foundations group is expecting to post an RFC on the Stream trait soon
  • The FFI unwind project group is closing out a long-standing soundness hole, and the first RFC there has been merged
  • The safe transmute project group has proposed a draft RFC
  • The traits working group is polishing Chalk, preparing rustc integration, and seeing experimental usage in rust-analyzer. You can learn more in their blog posts.
  • We are transitioning to rust-analyzer as the official Rust IDE solution, with a merged RFC laying out the plan
  • Rust’s tier system is being formalized with guarantees and expectations set in an in-progress RFC
  • Compiler performance work continues, with wins of 10-30% on many of our benchmarks
  • Reading into uninitialized buffers has an open RFC, solving another long-standing problem for I/O in Rust
  • A project group proposal for portable SIMD in std has an open RFC
  • A project group proposal for error handling ergonomics, focusing on the std::error API, has an open RFC
  • std::sync module updates are in brainstorming phase
  • Rustdoc's support for intra-doc links is close to stabilization!

There’s been a lot of other work as well within the Rust teams, but these items highlight some of the issues and designs that are being worked on actively.

Improve project functioning and governance

Another goal was to document and improve our processes for running the project. We had three main subgoals.

Improved visibility into state of initiatives and design efforts

The Rust teams are moving to the use of project groups for exploratory work, aiming to create dedicated groups of people who can explore an area, propose a design, and see it through to completion. The language team has kicked us off with safe transmute, FFI unwind, and inline assembly project groups. All of these have been enormous successes! Other teams are looking to use this model as well.

The compiler team has begun publishing weekly performance triage reports, in the continuing drive to reduce compile times. The LLVM working group has also been helping to highlight performance regressions in LLVM itself, to reduce compile time performance regressions when updating LLVM.

The compiler team has introduced Major Change Proposals as a way to introduce larger changes to the implementation, surfacing design questions before implementation work begins. The language team is also experimenting with a similar process for gaining quick language team feedback on proposals and, potentially, forming project groups. These both give a high-level view of changes being proposed, letting interested parties follow along without needing to subscribe to our very busy repositories.

Increase mentoring, leadership, and organizational bandwidth

Making design discussions more productive and less exhausting

The primary effort here has been the project groups, which have so far been largely a success. We expect to do more here in the future.

Mozilla Addons BlogUpdate on extension support in the new Firefox for Android

Last week, we finished rolling out the new Firefox for Android experience. This launch was the culmination of a year and a half of work rebuilding the mobile browser for Android from the ground up, replacing the previous application’s codebase with GeckoView—Mozilla’s new mobile browser engine—to create a fast, private, and customizable mobile browser. With GeckoView, our mobile development team can build and ship features much faster than before. The launch is a starting point for our new Android experience, and we’re excited to continue developing and refining features.

This means continuing to build support for add-ons. In order to get the new browser to users as soon as possible—which was necessary to iterate quickly on user feedback and limit resources needed to maintain two different Firefox for Android applications—we made some tough decisions about our minimum criteria for launch. We looked at add-on usage on Android, and made the decision to start by building support for add-ons in the Recommended Extensions program that were commonly installed by our mobile users. Enabling a small number of extensions in the initial rollout also enabled us to ensure a good first experience with add-ons in the new browser that are both mobile-friendly and security-reviewed.

More Recommended Extensions will be enabled on release in the coming weeks as they are tested and optimized. We are also working on enabling support for persistent loading of all extensions listed on addons.mozilla.org (AMO) on Firefox for Android Nightly. This should make it easier for mobile developers to test for compatibility, and for interested users to access add-ons that are not yet available on release. You can follow our progress by subscribing to this issue. We expect to have this enabled later this month.

Our plans for add-on support on release have not been solidified beyond what is outlined above. However, we are continuously working on increasing support, taking into account usage and feedback to ensure we are making the most of our available resources. We will post updates to this blog as plans solidify each quarter.

The post Update on extension support in the new Firefox for Android appeared first on Mozilla Add-ons Blog.

Cameron KaiserI'm trying really hard to like the new Android Firefox Daylight. Really, I am.

I've used Firefox for Android nearly since it was first made available (I still have an old version on my Android 2.3 Nexus One, which compared with my Pixel 3 now seems almost ridiculously small). I think it's essential to having a true choice of browsers on Android as opposed to "Chrome all the things," and I've used it just about exclusively on all my Android devices since. So, when Firefox Daylight presented itself, I upgraded, and I'm pained to say I've been struggling with it for the better part of a week. Yes, this is going to be another one of those "omg why didn't I wait" posts, but I've tried to be somewhat specific about what's giving me heartburn with the new version. It's not uniformly bad and a lot of things are rather good, but it's still got a lot of rough edges, and I don't want Daylight to be another stick Mozilla hands people to beat them with.

So, here's the Good:

Firefox Daylight is a lot faster than the old Firefox for Android. Being based on Firefox 79, Daylight also has noticeably better support for newer web features. Top Sites are more screen-sparing. Dark mode is awesome. I like the feature where having private tabs becomes a notification: tap it and instantly all your private browsing goes poof (and it's a good reminder those tabs are open), or, if this doesn't appeal to you, it's a regular notification and you can just turn it off. Collections sound like a neat idea and I'll probably start using them if things get a little unwieldy. I'm not a bar-on-the-bottom kind of guy myself, but I can see why people would like that and choice is always good.

All this is a win. Unfortunately, here's the Bad I'm running into so far:

This has been reported lots of places, but the vast majority of the extensions that used to work with the old Firefox suddenly disappeared. For me, the big loss was Cookie Quick Manager, which was a great mobile-friendly way to manage cookies. Now I can't. Hope I don't screw up trying to get around those sites storing data about me. At least I still have uBlock Origin, but I don't have much else.

Firefox Reader doesn't universally appear on pages it used to. Sometimes reloading the page works, sometimes it doesn't. This is a big problem for mobile. Worse, the old hack of prepending an about:reader?url= doesn't seem to work anymore.

Pages that open new windows or tabs sometimes show content and sometimes don't. This actually affects some of my sites personally, so I filed a bug on it. Naturally, it works fine in desktop Firefox and Chrome, and of course the old Android Firefox.

Oh, and what happened to the Downloads list? (This is being fixed.)

Now, some pesky Nits. These are first world problems, I'll grant, but my muscle memory was used to them and getting people onto a new version of the browser shouldn't upset so many of these habits:

When I tapped on the URL to go to a new site, I used to see all my top sites, so I could just switch to them with a touch. Now there's just a whole lot of empty space (or maybe it offers to paste in a URL left over in the clipboard). I have to open a new tab, or partially type the URL, to get to a top site or bookmark. This might be getting fixed, too, but the description of exactly what's getting fixed is a little ambiguous. Related to this, if you enable search suggestions then they dominate the list of suggestions even if it's obvious you're typing part of a domain name you usually visit. In the old browser these were grouped, so it was easy to avoid them if you weren't actually searching.

I often open articles in private browsing mode, and then tap the back button to go to the regular tab I spawned it from. This doesn't work anymore; I have to either switch tab "stacks" or swipe away the private tab.

Anyway, that's enough whining.

I don't really want to have to go back to the old Firefox for Android. I think the new version has a lot to recommend it, and plus I really despise reading bug reports in TenFourFox where the filer drops a bug bomb on my head and then goes back to the previous version. Seriously, I hate that: it screams "I don't care, wake me when you fix it" (whether or not it's really my bug) and says they don't have enough respect even to test a fix, let alone write one.

So I'm sticking with Firefox Daylight, warts and all. But, for all its improvements, Daylight needs work and definitely not at a time when Mozilla has fewer resources to devote to it. I've got fewer resources too: still trying to work on TenFourFox and keep Firefox working right on OpenPOWER, and now I may have to start doing PRs on the Android browser if I want that fixed also. It just feels like everything's a struggle these days and this upgrade really shouldn't have been.

Nicholas NethercoteHow to speed up the Rust compiler some more in 2020

I last wrote in April about my work on speeding up the Rust compiler. Time for another update.

Weekly performance triage

First up is a process change: I have started doing weekly performance triage. Each Tuesday I have been looking at the performance results of all the PRs merged in the past week. For each PR that has regressed or improved performance by a non-negligible amount, I add a comment to the PR with a link to the measurements. I also gather these results into a weekly report, which is mentioned in This Week in Rust, and also looked at in the weekly compiler team meeting.

The goal of this is to ensure that regressions are caught quickly and appropriate action is taken, and to raise awareness of performance issues in general. It takes me about 45 minutes each time. The instructions are written in such a way that anyone can do it, though it will take a bit of practice for newcomers to become comfortable with the process. I have started sharing the task around, with Mark Rousskov doing the most recent triage.

This process change was inspired by the “Regressions prevented” section of an excellent blog post from Nikita Popov (a.k.a. nikic), about the work they have been doing to improve the speed of LLVM. (The process also takes some ideas from the Firefox Nightly crash triage that I set up a few years ago when I was leading Project Uptime.)

The speed of LLVM directly impacts the speed of rustc, because rustc uses LLVM for its backend. This is a big deal in practice. The upgrade to LLVM 10 caused some significant performance regressions for rustc, though enough other performance improvements landed around the same time that the relevant rustc release was still faster overall. However, thanks to nikic’s work, the upgrade to LLVM 11 will win back much of the performance lost in the upgrade to LLVM 10.

It seems that LLVM performance perhaps hasn’t received that much attention in the past, so I am pleased to see this new focus. Methodical performance work takes a lot of time and effort, and can’t effectively be done by a single person over the long-term. I strongly encourage those working on LLVM to make this a team effort, and anyone with the relevant skills and/or interest to get involved.

Better benchmarking and profiling

There have also been some major improvements to rustc-perf, the performance suite and harness that drives perf.rust-lang.org, and which is also used for local benchmarking and profiling.

#683: The command-line interface for the local benchmarking and profiling commands was ugly and confusing, so much so that one person mentioned on Zulip that they tried and failed to use them. We really want people to be doing local benchmarking and profiling, so I filed this issue and then implemented PRs #685 and #687 to fix it. To give you an idea of the improvement, the following shows the minimal commands to benchmark the entire suite.

# Old
target/release/collector --db <DB> bench_local --rustc <RUSTC> --cargo <CARGO> <ID>

# New
target/release/collector bench_local <RUSTC> <ID>

Full usage instructions are available in the README.

#675: Joshua Nelson added support for benchmarking rustdoc. This is good because rustdoc performance has received little attention in the past.

#699, #702, #727, #730: These PRs added some proper CI testing for the local benchmarking and profiling commands, which had a history of being unintentionally broken.

Mark Rousskov also made many small improvements to rustc-perf, including reducing the time it takes to run the suite, and improving the presentation of status information.


Last year I wrote about inlining and code bloat, and how they can have a major effect on compile times. I mentioned that tooling to measure code size would be helpful. So I was happy to learn about the wonderful cargo-llvm-lines, which measures how many lines of LLVM IR are generated for each function. The results can be surprising, because generic functions (especially common ones like Vec::push(), Option::map(), and Result::map_err()) can be instantiated dozens or even hundreds of times in a single crate.
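To illustrate the mechanism (a toy example, not taken from the post): each distinct type parameter produces its own monomorphized copy of a generic function in the emitted LLVM IR, so even a small helper multiplies quickly across a crate.

```rust
// Each distinct T below causes a separate monomorphized copy of
// `push_twice` to be generated, which is what cargo-llvm-lines counts.
fn push_twice<T: Clone>(v: &mut Vec<T>, x: T) {
    v.push(x.clone());
    v.push(x);
}

fn main() {
    let mut ints: Vec<u32> = Vec::new();
    push_twice(&mut ints, 1u32); // instantiates push_twice::<u32>

    let mut strs: Vec<String> = Vec::new();
    push_twice(&mut strs, "a".to_string()); // instantiates push_twice::<String>

    assert_eq!(ints, vec![1, 1]);
    assert_eq!(strs.len(), 2);
}
```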

I worked on multiple PRs involving cargo-llvm-lines.

#15: This PR added percentages to the output of cargo-llvm-lines, making it easier to tell how important each function’s contribution is to the total amount of code.

#20, #663: These PRs added support for cargo-llvm-lines within rustc-perf, which made it easy to measure the LLVM IR produced for the standard benchmarks.

#72013: RawVec::grow() is a function that gets called by Vec::push(). It’s a large generic function that deals with various cases relating to the growth of vectors. This PR moved most of the non-generic code into a separate non-generic function, for wins of up to 5%.

(Even after that PR, measurements show that the vector growth code still accounts for a non-trivial amount of code, and it feels like there is further room for improvement. I made several failed attempts to improve it further: #72189, #73912, #75093, #75129. Even though they reduced the amount of LLVM IR generated, they were performance losses. I suspect this is because these additional changes affected the inlining of some of these functions, which can be hot.)
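The pattern behind #72013 can be sketched like this (hypothetical names, not the actual RawVec code): hoist the type-independent arithmetic into a non-generic helper that is compiled exactly once, leaving only a thin generic wrapper to be duplicated per element type.

```rust
// Non-generic helper: monomorphized a single time, shared by every T.
// The growth formula here is illustrative, not std's actual policy.
fn amortized_capacity(len: usize, additional: usize) -> usize {
    let required = len + additional;
    required.next_power_of_two().max(4)
}

// Generic wrapper: only this thin shim is duplicated per element type,
// keeping the per-instantiation LLVM IR small.
fn grow_for<T>(v: &mut Vec<T>, additional: usize) {
    let cap = amortized_capacity(v.len(), additional);
    if cap > v.capacity() {
        v.reserve_exact(cap - v.len());
    }
}

fn main() {
    let mut v: Vec<u8> = Vec::new();
    grow_for(&mut v, 3);
    assert!(v.capacity() >= 4);
}
```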

#72166: This PR added some specialized Iterator methods (for_each(), all(), any(), find(), find_map()) for slices, winning up to 9% on clap-rs, and up to 2% on various other benchmarks.

#72139: This PR added a direct implementation for Iterator::fold(), replacing the old implementation that called the more general Iterator::try_fold(). This won up to 2% on several benchmarks.

#73882: This PR streamlined the code in RawVec::allocate_in(), winning up to 1% on numerous benchmarks.

cargo-llvm-lines is also useful to application/crate authors. For example, Simon Sapin managed to speed up compilation of the largest crate in Servo by 28%! Install it with cargo install cargo-llvm-lines and then run it with cargo llvm-lines (for debug builds) or cargo llvm-lines --release (for release builds).


#71942: this PR shrunk the LocalDecl type from 128 bytes to 56 bytes, reducing peak memory usage of a few benchmarks by a few percent.

#72227: If you push multiple elements onto an empty Vec it has to repeatedly reallocate memory. The growth strategy in use resulted in the following sequence of capacities: 0, 1, 2, 4, 8, 16, etc. “Tiny vecs are dumb”, so this PR changed it to 0, 4, 8, 16, etc., in most cases, which reduced the number of allocations done by rustc itself by 10% or more and sped up many benchmarks by up to 4%. In theory, the change could increase memory usage, but in practice it doesn’t.
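The described strategy can be sketched as follows (an illustration of the growth sequence from the PR, not the actual std implementation):

```rust
// "Tiny vecs are dumb": skip the 1- and 2-element capacities and jump
// straight to 4, then double as before.
fn grow_amortized(current: usize) -> usize {
    if current == 0 {
        4 // old strategy would have gone 0 -> 1 -> 2 -> 4
    } else {
        current * 2
    }
}

fn main() {
    let mut cap = 0;
    let mut seen = vec![cap];
    for _ in 0..4 {
        cap = grow_amortized(cap);
        seen.push(cap);
    }
    // Two fewer reallocations to reach capacity 32 than with 0,1,2,4,...
    assert_eq!(seen, vec![0, 4, 8, 16, 32]);
}
```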

#74214: This PR eliminated some symbol interner accesses, for wins of up to 0.5%.

#74310: This PR changed SparseBitSet to use an ArrayVec instead of a SmallVec for its storage, which is possible because the maximum length is known in advance, for wins of up to 1%.

#75133: This PR eliminated two fields in a struct that were only used in the printing of an error message in the case of an internal compiler error, for wins of up to 2%.

Speed changes since April 2020

Since my last blog post, changes in compile times have been mixed (table, graphs). It’s disappointing to not see a sea of green in the table results like last time, and there are many regressions that seem to be alarming. But it’s not as bad as it first seems! Understanding this requires knowing a bit about the details of the benchmark suite.

Most of the benchmarks that saw large percentage regressions are extremely short-running. (The benchmark descriptions help make this clearer.) For example, a non-incremental check build of helloworld went from 0.03s to 0.08s (#70107 and #74682 are two major causes). In practice, a tiny additional overhead of a few 10s of milliseconds per crate isn’t going to be noticeable when many crates take seconds or tens of seconds to compile.

Among the “real-world” benchmarks, some of them saw mixed results (e.g. regex, ripgrep), while some of them saw clear improvement, some of which were large (e.g. clap-rs, style-servo, webrender, webrender-wrench).

With all that in mind, since my last post, the compiler is probably either no slower or somewhat faster for most real-world cases.

Another interesting data point about the speed of rustc over the long-term came from Hacker News: compilation of one project (lewton) got 2.5x faster over the past three years.

LLVM 11 hasn’t landed yet, so that will give some big improvements for real-world cases soon. Hopefully for my next post the results will be more uniformly positive.


Jan-Erik RedigerThis Week in Glean: Leveraging Rust to build cross-platform mobile libraries

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

A couple of weeks ago I gave a talk titled "Leveraging Rust to build cross-platform mobile libraries". You can find my slides as a PDF. It was part of the Rusty Days Webference, an online conference that was initially planned to happen in Poland, but had to move online. Definitely check out the other talks.

One thing I wanted to achieve with that talk is putting that knowledge out there. While multiple teams at Mozilla are already building cross-platform libraries, with a focus on mobile integration, the available material and documentation is lacking. I'd like to see better guides online, and I probably have to start with what we have done. But that should also be encouragement for those out there doing similar things to blog, tweet & speak about it.

Who else is using Rust to build cross-platform libraries, targeting mobile?

I'd like to hear about it. Find me on Twitter (@badboy_) or drop me an email.

The Glean SDK

I won't reiterate the full talk (go watch it, really!), so this is just a brief overview of the Glean SDK itself.

The Glean SDK is our approach to build a modern Telemetry library, used in Mozilla's mobile products and soon in Firefox on Desktop as well.

The SDK consists of multiple components, spanning multiple programming languages for different implementations. All of the Glean SDK lives in the GitHub repository at mozilla/glean. This is a rough diagram of the Glean SDK tech stack:

Glean SDK Stack

On the very bottom we have glean-core, a pure Rust library that is the heart of the SDK. It's responsible for controlling the database, storing data and handling additional logic (e.g. assembling pings, clearing data, ..). As it is pure Rust we can rely on all Rust tooling for its development. We can write tests that cargo test picks up. We can generate the full API documentation thanks to rustdoc and we rely on clippy to tell us when our code is suboptimal. Working on glean-core should be possible for everyone that knows some Rust.

On top of that sits glean-ffi. This is the FFI layer connecting glean-core with everything else. While glean-core is pure Rust, it doesn't actually provide the nice API we intend for users of Glean. That one is later implemented on top of it all. glean-ffi doesn't contain much logic. It's a translation between the proper Rust API of glean-core and C-compatible functions exposed into the dynamic library. In it we rely on the excellent ffi-support crate. ffi-support knows how to translate between Rust and C types, offers a nice (and safer) abstraction for C strings. glean-ffi holds some state: the instantiated global Glean object and metric objects. We don't need to pass pointers back and forth. Instead we use opaque handles that index into a map held inside the FFI crate.
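A minimal sketch of the opaque-handle pattern described above (hypothetical types; the production code relies on the ffi-support crate): FFI callers receive a plain integer handle instead of a raw pointer, and the Rust side looks the value up in a map it owns.

```rust
use std::collections::HashMap;

/// Hands out opaque u64 handles instead of raw pointers, so no Rust
/// pointers ever cross the FFI boundary.
struct HandleMap<T> {
    next: u64,
    map: HashMap<u64, T>,
}

impl<T> HandleMap<T> {
    fn new() -> Self {
        HandleMap { next: 1, map: HashMap::new() }
    }

    /// Store a value and return the handle the foreign caller will hold.
    fn insert(&mut self, value: T) -> u64 {
        let handle = self.next;
        self.next += 1;
        self.map.insert(handle, value);
        handle
    }

    /// Resolve a handle back to the stored value, if it is still valid.
    fn get(&self, handle: u64) -> Option<&T> {
        self.map.get(&handle)
    }
}

// An exported FFI entry point would then take the handle, e.g.:
// #[no_mangle]
// pub extern "C" fn metric_record(handle: u64, value: i64) { /* ... */ }

fn main() {
    let mut metrics: HandleMap<String> = HandleMap::new();
    let h = metrics.insert("counter.clicks".to_string());
    assert_eq!(metrics.get(h).map(String::as_str), Some("counter.clicks"));
}
```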

The top layer of the Glean SDK are the different language implementations. Language implementations expose a nice ergonomic API to initialize Glean and record metrics in the respective language. Additionally each implementation handles some special cases for the platform they are running on, like gathering application and platform data or hooking into system events. The nice API calls into the Glean SDK using the exposed FFI functions of glean-ffi. Unfortunately at the moment different language implementations carry different amounts of actual logic in them. Sometimes metric implementations require this (e.g. we rely on the clock source of Kotlin for timing metrics), in other parts we just didn't move the logic out of the implementations yet. We're actively working on moving logic into the Rust part where we can and might eventually use some code generation to unify the other parts. uniffi is a current experiment for a multi-language bindings generator for Rust we might end up using.

The Firefox FrontierNo judgment digital definitions: What is the difference between a VPN and a web proxy?

Virtual private networks (VPNs) and secure web proxies are solutions for better privacy and security online, but it can be confusing to figure out which one is right for you. … Read more

The post No judgment digital definitions: What is the difference between a VPN and a web proxy? appeared first on The Firefox Frontier.

Firefox NightlyThese Weeks in Firefox: Issue 78


  • The tab modal print UI work is still in full swing, and is aiming for Firefox 81.

A screenshot of the new printing dialog in Firefox. A pane on the left shows a render of how Firefox's Wikipedia page will appear on a printed page. Many printing options appear to the right, including the printer destination, number of copies, and whether to print in portrait or landscape.

  • A new and colorful “Alpenglow” theme is included in Firefox 81, and is available from about:addons, about:welcome and the Customize UI.

A screenshot of a theme picker. Text in the image reads: "Choose a look. Personalize Nightly with a theme." Under are four images representing Firefox themes, labelled "Automatic", "Light", "Dark", and "Firefox Alpenglow".

  • Address bar Design Update 2 is enabled in Nightly, including search mode. Search mode unifies search engine one-offs and @aliases.

A search for "coffee" in Google search mode. Coffee-related Google search suggestions are shown in the address bar.

A search for "coffee" in Wikipedia search mode. Coffee-related Wikipedia search suggestions are shown in the address bar.

Quickly switch between search engines!

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • Kriyszig
  • manas

New contributors (🌟 = first patch)

  • 🌟 aichi.p.chang fixed an issue with how we were passing parameters to observers in our region-detecting code.
  • 🌟 codywelsh improved the grid layout in DevTools fission preferences.
  • Duncangleeddean upgraded eslint-plugin-jest.
  • Luc4leone added an option to the debugger editor context menu for the user to be able to wrap / unwrap long lines.
  • Nikhilkumar.c16 swapped the collapse and block icons for blocked network messages in the DevTools Console.

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
WebExtension APIs
  • Fixed a regression on the API, which was excluding non-first party cookies in the requests triggered by the API if the extension does not have explicit host permissions (regressed in Bug 1437626 and fixed in Bug 1655190)
  • Thanks to Tim Giles, Firefox will now be showing in about:preferences if an extension is controlling the password saving pref by using the API (Bug 1620753)

Developer Tools

  • Our top priorities for the next while are Fission compatibility, both for DevTools and Marionette (which powers internal and third-party browser testing tools)


  • A slightly re-organized about:processes is coming:

about:processes shows a table of processes and, within each process, the tabs and subframes running inside of it

Installer & Updater

  • Bug 1647422 – Profile Counter Telemetry
    • Although there is an existing profile count metric, this new metric counts across OS users and across reinstallations.
    • Implemented as a scalar: browser.engagement.profile_count
  • Bug 1647443 – Langpack Updates
    • Users who are working with langpacks see their Firefox interface flip back to English with each update because the langpack hasn’t been updated yet.
    • At the start of patch download, we now call the add-ons manager to start the langpack update process. We then defer signalling that the update is ready until the language packs are staged.
  • Bug 1639067 – Downloadable Filetype Improvements
    • Similar to the work done to let users open PDFs directly in Firefox, we’re adding to the list of file types that we’ll allow you to open with Firefox after downloading them. We’ll now support users who want to open .xml, .avif, .svg, and .webp files in Firefox in addition to .pdf.


New Tab Page

  • Working on turning newtab Pocket stories on in new regions.
    • English stories for en-US/en-GB browsers in Ireland and India.
    • Creating a generic English story feed to use for en-US globally.
    • German stories for de browsers in Austria, Switzerland, and Belgium.
  • We’ll be running an experiment to show the default browser notification toolbar on the New Tab Page.

Password Manager


Performance Tools

  • Added experimental event delay tracks. They are disabled by default; you can enable them from the DevTools console by calling `experimental.enableEventDelayTracks()`. Example profile
    A Firefox Profiler timeline view is shown. The x axis is time, the y axis is CPU activity. The graph on the bottom shows jagged red lines representing event delays.
  • Added an “Uploaded recordings” page to see your previously uploaded profiles. More features will be added, like deleting uploaded profiles.

A screenshot from the Firefox Profiler. Text in the image reads "Uploaded Recordings". Underneath, two timestamped text entries list profiler recordings.


  • Picture-in-Picture should now be compatible with out-of-process iframes
  • We’ll be testing some variations on toggle appearance and positioning in Firefox 81.
  • User Research is compiling a report based on a week of user studies on the Picture-in-Picture feature. We hope to use this to better inform our future investment in the feature.

Search and Navigation


User Journey


  • We’re testing a default placement of the indicator, where it’s placed at the top-center of the last browser that spawned it
  • mconley has patches up that update the mic/camera buttons in the new indicator to do global muting

The WebRTC global sharing indicator showing that the user is sharing a Firefox window, the camera and the microphone. The camera icon shows that the camera is globally muted.

  • The new indicator is sticking to Nightly while we continue to refine the UX.


Wladimir PalantA grim outlook on the future of browser add-ons

A few days ago Mozilla announced the release of their new Android browser. This release, dubbed “Firefox Daylight,” is supposed to achieve nothing less than to “revolutionize mobile browsing.” And that also goes for browser extensions of course:

Last but not least, we revamped the extensions experience. We know that add-ons play an important role for many Firefox users and we want to make sure to offer them the best possible experience when starting to use our newest Android browsing app. We’re kicking it off with the top 9 add-ons for enhanced privacy and user experience from our Recommended Extensions program.

What this text carefully avoids stating directly: those are the only nine (as in: single-digit 9) add-ons you will be able to install on Firefox for Android now. After being able to use thousands of add-ons before, this feels like a significant downgrade. Particularly given that there appears to be no technical reason why the other add-ons are no longer allowed; it is merely a policy decision. I already verified that my add-ons can still run on Firefox for Android but aren’t allowed to, and the same should be true for the majority of other add-ons.

Historical Firefox browser extension icons (puzzle pieces) representing the past, an oddly shaped and inconvenient puzzle piece standing for the present and a tombstone for the potential future<figcaption> Evolution of browser extensions. Image credits: Mozilla, jean_victor_balin </figcaption>

Why would Mozilla kill mobile add-ons?

Before this release, Firefox was the only mobile browser to allow arbitrary add-ons. Chrome experimented with add-ons on mobile but never actually released this functionality. Safari implemented a halfhearted ad blocking interface, received much applause for it, but never made this feature truly useful or flexible. So it would seem that Firefox had a significant competitive advantage here. Why throw it away?

Unfortunately, supporting add-ons comes at a considerable cost. It isn’t merely the cost of developing and maintaining the necessary functionality, there is also the performance and security impact of browser extensions. Mozilla has been struggling with this for a while. The initial solution was reviewing all extensions before publication. It was a costly process which also introduced delays, so by now all add-ons are published immediately but are still supposed to be reviewed manually eventually.

Mozilla is currently facing challenges both in terms of market share and financially, the latter being linked to the former. This once again became obvious when Mozilla laid off a quarter of its workforce a few weeks ago. In the past, add-ons have done little to help Mozilla achieve a breakthrough on mobile, so costs being cut here isn’t much of a surprise. And properly reviewing nine extensions is certainly cheaper than keeping tabs on a thousand.

But won’t Mozilla add more add-ons later?

Yes, they also say that more add-ons will be made available later. But if you look closely, all of Mozilla’s communication around that matter has been focused on containing damage. I’ve looked through a bunch of blog posts, and nowhere did it simply say: “When this is released, only a handful of add-ons will be allowed, and adding more will require our explicit approval.” A number of Firefox users rely on add-ons, so I suspect that the strategy is to prevent an outcry from them.

This might also be the reason why extension developers haven’t been warned about this “minor” change. Personally, I learned about it from a user’s issue report. While there has been some communication around the Recommended Extensions program, it was never mentioned that participating in this program was a prerequisite for extensions to stay usable.

I definitely expect Mozilla to add more add-ons later. But it will be the ones that users are most vocal about. Niche add-ons with only a few users? Bad luck for you…

What this also means: the current state of the add-on ecosystem is going to be preserved forever. If only popular add-ons are allowed, other add-ons won’t get a chance to become popular. And since every add-on has to start small, developing anything new is a wasted effort.

Update (2020-09-01): There are some objections from the Mozilla community stating that I’m overinterpreting this. Yes, maybe I am. Maybe add-ons are still a priority to Mozilla. So much that for this release they:

  • declared gatekeeping add-ons a virtue rather than a known limitation (“revamped the extensions experience”).
  • didn’t warn add-on developers about the user complaints to be expected, leaving it to them to figure out what’s going on.
  • didn’t bother setting a timeline when the gatekeeping is supposed to end and in fact didn’t even state unambiguously that ending it is the plan.
  • didn’t document the current progress anywhere, so nobody knows what works and what doesn’t in terms of extension APIs (still work in progress at the time of writing).

I totally get it that the development team has more important issues to tackle now that their work has been made available to a wider audience. I’m merely not very confident that once they have all these issues sorted out they will still go back to the add-on support and fix it. Despite all the best intentions, there is nothing as permanent as a temporary stopgap solution.

Isn’t the state of affairs much better on the desktop?

Add-on support in desktop browsers looks much better of course, with all major browsers supporting add-ons. Gatekeeping also isn’t the norm here, with Apple being the only vendor so far to discourage newcomers. However, a steady degradation has been visible here as well, sadly an ongoing trend.

Browser extensions were pioneered by Mozilla and originally had the same level of access as the browser’s own code. This allowed amazingly powerful extensions, for example the vimperator extension implemented completely different user interface paradigms which were inspired by the vim editor. Whether you are a fan of vim or not (few people are), being able to do something like this was very empowering.

So it’s not surprising that Mozilla attracted a very active community of extension builders. There has been lots of innovation, extensions showcasing the full potential of the browser. Some of that functionality has been eventually adopted by the browsers. Remember Firebug for example? The similarity to Developer Tools as they are available in any modern browser is striking.

Screenshot of the Firebug extension inspecting a web page<figcaption> Firebug screenshot. Image credits: Wikipedia </figcaption>

Once Google Chrome came along, this extension system was doomed. It simply had too many downsides to survive the fierce competition in the browser market. David Teller explains in his blog post why Mozilla had no choice but to remove it, and he is absolutely correct of course.

As to the decision about what to replace it with, I’m still not convinced that Mozilla made a good choice when they decided to copy Chrome’s extension APIs. While this made development of cross-browser extensions easier, it also limited Firefox extensions to the functionality supported by Chrome. Starting out as a clear leader in terms of customization, Firefox was suddenly chasing Chrome and struggling to keep full compatibility. And of course Google refused to cooperate on standardization of its underdocumented extension APIs (surprise!).

Where is add-on support on desktop going?

Originally, Mozilla promised that they wouldn’t limit themselves to the capabilities provided by Chrome. They intended to add more functionality soon, so that more powerful extensions would be possible. They also intended to give extension developers a way to write new extension APIs themselves, so that innovation could go beyond what browser developers anticipated. None of this really materialized, other than a few trivial improvements to Chrome’s APIs.

And so Google with its Chrome browser is now determining what extensions should be able to do – in any browser. After all, Mozilla’s is the only remaining independent extensions implementation, and it is no real competition any more. Now that they have this definition power, Google unsurprisingly decided to cut the costs incurred by extensions. Among other things, this change will remove the webRequest API, which is the single most powerful tool currently available to extensions. I expect Mozilla to follow suit sooner or later. And this is unlikely to be the last functionality cut.


The recent browser wars set a very high bar on what a modern browser should be. We got our lean and fast browsers, supporting vast amounts of web standards and extremely powerful web applications. The cost was high however: users’ choice was reduced significantly, it’s essentially Firefox vs. Chrome in its numerous varieties now, other browser engines didn’t survive. The negative impacts of Google’s near-monopoly on web development aren’t too visible yet, but in the browser customization space they already show very clearly.

Google Chrome is now the baseline for browser customization. On mobile devices this means that anything beyond “no add-on support whatsoever” will be considered a revolutionary step. Mozilla isn’t the first mobile browser vendor to celebrate themselves for providing a few selected add-ons. Open add-on ecosystems for mobile browsers are just not going to happen any more.

And on desktop Google has little incentive to keep the bar high for add-on support. There will be further functionality losses here, all in the name of performance and security. And despite these noble goals it means that users are going to lose out: the innovative impact of add-ons is going away. In future, all innovation will have to originate from browser vendors themselves, there will be no space for experiments or niche solutions.

Anne van KesterenFarewell Emil

When I first moved to Zürich I had the good fortune to have dinner with Emil. I had never met someone before with such a passion for food. (That day I met two.) Except for the food, we had a good time. I found it particularly enjoyable that he was so upset — though in a very upbeat manner — with the quality of the food that having dessert there was no longer on the table.

The last time I remember running into Emil was in Lisbon, enjoying hamburgers and fries of all things. (Rest assured, they were very good.)

Long before all that, I used to frequent, to learn how to make browsers do marvelous things and improve user-computer interaction.

Mike Hommey[Linux] Disabling CPU turbo, cores and threads without rebooting

[Disclaimer: this has been sitting as a draft for close to three months ; I forgot to publish it, this is now finally done.]

In my previous blog post, I built Firefox in a number of different configurations where I disabled the CPU turbo, some of its cores or some of its threads. That is something that was traditionally done via the BIOS, but rebooting between each attempt is not really a great experience.

Fortunately, the Linux kernel provides a large number of knobs that allow this at runtime.


CPU Turbo

This is the most straightforward:

$ echo 0 > /sys/devices/system/cpu/cpufreq/boost

Re-enable with

$ echo 1 > /sys/devices/system/cpu/cpufreq/boost
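Note that this knob belongs to the acpi-cpufreq driver; on machines using the intel_pstate driver, the equivalent file is /sys/devices/system/cpu/intel_pstate/no_turbo, with inverted semantics (writing 1 disables turbo). A small, hypothetical wrapper can hide the difference. It takes the sysfs root as a parameter so it can be exercised against a fake tree; pass /sys and run as root on a real machine:

```shell
# Hypothetical helper: disable turbo whichever cpufreq driver is in use.
# $1 is the sysfs root (pass /sys on a real machine; requires root).
disable_turbo() {
  root=${1:-/sys}
  if [ -e "$root/devices/system/cpu/intel_pstate/no_turbo" ]; then
    echo 1 > "$root/devices/system/cpu/intel_pstate/no_turbo"  # 1 = turbo off
  else
    echo 0 > "$root/devices/system/cpu/cpufreq/boost"          # 0 = turbo off
  fi
}
```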

CPU frequency throttling

Even though I haven’t mentioned it, I might as well add this briefly. There are many knobs to tweak frequency throttling, but assuming your goal is to disable throttling and set the CPU frequency to its fastest non-Turbo frequency, this is how you do it:

$ echo performance > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

where $n is the id of the core you want to do that for, so if you want to do that for all the cores, you need to do that for cpu0, cpu1, etc.

Re-enable with:

$ echo ondemand > /sys/devices/system/cpu/cpu$n/cpufreq/scaling_governor

(assuming this was the value before you changed it; ondemand is usually the default)
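Rather than repeating the command by hand for cpu0, cpu1, and so on, the per-core writes can be wrapped in a short loop. A sketch, again parameterized on the sysfs root so it can be tried safely (pass /sys and run as root for real):

```shell
# Hypothetical helper: set the given cpufreq governor on every CPU.
# $1 is the governor name, $2 the sysfs root (pass /sys for real; needs root).
set_governor() {
  gov=$1; root=${2:-/sys}
  for f in "$root"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo "$gov" > "$f"
  done
}
```

Usage: `set_governor performance` to pin the frequency, `set_governor ondemand` to restore the default.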

Cores and Threads

This one requires some attention, because you cannot assume anything about the CPU numbers. The first thing you want to do is to check those CPU numbers. You can do so by looking at the physical id and core id fields in /proc/cpuinfo, but the output from lscpu --extended is more convenient, and looks like the following:

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3700.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3700.0000 2200.0000
2   0    0      2    2:2:2:0       yes    3700.0000 2200.0000
3   0    0      3    3:3:3:0       yes    3700.0000 2200.0000
4   0    0      4    4:4:4:1       yes    3700.0000 2200.0000
5   0    0      5    5:5:5:1       yes    3700.0000 2200.0000
6   0    0      6    6:6:6:1       yes    3700.0000 2200.0000
7   0    0      7    7:7:7:1       yes    3700.0000 2200.0000
32  0    0      0    0:0:0:0       yes    3700.0000 2200.0000
33  0    0      1    1:1:1:0       yes    3700.0000 2200.0000
34  0    0      2    2:2:2:0       yes    3700.0000 2200.0000
35  0    0      3    3:3:3:0       yes    3700.0000 2200.0000
36  0    0      4    4:4:4:1       yes    3700.0000 2200.0000
37  0    0      5    5:5:5:1       yes    3700.0000 2200.0000
38  0    0      6    6:6:6:1       yes    3700.0000 2200.0000
39  0    0      7    7:7:7:1       yes    3700.0000 2200.0000

Now, this output is actually the ideal case, where pairs of CPUs (virtual cores) on the same physical core are always n, n+32, but I’ve had them be pseudo-randomly spread in the past, so be careful.

To turn off a core, you want to turn off all the CPUs with the same CORE identifier. To turn off a thread (virtual core), you want to turn off one CPU. On machines with multiple sockets, you can also look at the SOCKET column.

Turning off one CPU is done with:

$ echo 0 > /sys/devices/system/cpu/cpu$n/online

Re-enable with:

$ echo 1 > /sys/devices/system/cpu/cpu$n/online
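Putting the two previous points together: to take a whole physical core offline, you need to disable every CPU sharing its core id. Instead of reading the topology by eye, the core_id files under each CPU's topology directory can be matched directly. A hypothetical helper, parameterized on the sysfs root so it can be tried against a fake tree (pass /sys and run as root for real):

```shell
# Hypothetical helper: take a whole physical core offline by disabling
# every CPU (hardware thread) whose core_id matches.
# $1 is the sysfs root (pass /sys for real; needs root), $2 the core id.
disable_core() {
  root=$1; core=$2
  for cpu in "$root"/devices/system/cpu/cpu[0-9]*; do
    # cpu0 typically cannot be taken offline and may lack an 'online' file.
    [ -e "$cpu/online" ] || continue
    if [ "$(cat "$cpu/topology/core_id")" = "$core" ]; then
      echo 0 > "$cpu/online"
    fi
  done
}
```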

Extra: CPU sets

CPU sets are a feature of Linux’s cgroups. They allow you to restrict groups of processes to a set of cores. The first step is to create a group like so:

$ mkdir /sys/fs/cgroup/cpuset/mygroup

Please note you may already have existing groups, and you may want to create subgroups. You can do so by creating subdirectories.

Then you can configure on which CPUs/cores/threads you want processes in this group to run on:

$ echo 0-7,16-23 > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus

The value you write in this file is a comma-separated list of CPU/core/thread numbers or ranges. 0-3 is the range for CPU/core/thread 0 to 3 and is thus equivalent to 0,1,2,3. The numbers correspond to /proc/cpuinfo or the output from lscpu as mentioned above.
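As a sanity check before writing to cpuset.cpus, the list syntax can be expanded into individual CPU numbers with a small helper (pure string processing, no root needed; the helper name is my own):

```shell
# Expand a cpuset list like "0-3,8" into individual numbers: "0 1 2 3 8".
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done | tr '\n' ' ' | sed 's/ $//'
}
```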

There are also memory aspects to CPU sets, that I won’t detail here (because I don’t have a machine with multiple memory nodes), but you can start with:

$ cat /sys/fs/cgroup/cpuset/cpuset.mems > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

Now you’re ready to assign processes to this group:

$ echo $pid >> /sys/fs/cgroup/cpuset/mygroup/tasks

There are a number of tweaks you can do to this setup; I invite you to check out the cpuset(7) manual page.
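The steps above (create the group, restrict its CPUs, inherit memory nodes, assign a process) can be bundled into one hypothetical helper, parameterized on the cgroup root so it can be exercised against a scratch directory (pass /sys/fs/cgroup/cpuset and run as root for real):

```shell
# Hypothetical helper: create cpuset group $2 under cgroup root $1,
# restrict it to the CPU list $3, inherit the root group's memory nodes,
# and move PID $4 into the group.
make_cpuset() {
  root=$1; name=$2; cpus=$3; pid=$4
  mkdir -p "$root/$name"
  echo "$cpus" > "$root/$name/cpuset.cpus"
  cat "$root/cpuset.mems" > "$root/$name/cpuset.mems"
  echo "$pid" >> "$root/$name/tasks"
}
```

For example, `make_cpuset /sys/fs/cgroup/cpuset mygroup 0-7,16-23 $$` would pin the current shell (and its children) to those CPUs.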

Disabling a group is a little involved. First you need to move the processes to a different group:

$ while read pid; do echo $pid > /sys/fs/cgroup/cpuset/tasks; done < /sys/fs/cgroup/cpuset/mygroup/tasks

Then deassociate CPU and memory nodes:

$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.cpus
$ > /sys/fs/cgroup/cpuset/mygroup/cpuset.mems

And finally remove the group:

$ rmdir /sys/fs/cgroup/cpuset/mygroup

The Servo BlogGSoC wrap-up - Implementing WebGPU in Servo


Hello everyone! I am Kunal (@kunalmohan), an undergrad student at Indian Institute of Technology Roorkee, India. As a part of Google Summer of Code (GSoC) 2020, I worked on implementing WebGPU in Servo under the mentorship of Mr. Dzmitry Malyshau (@kvark). I spent the past 3 months working on ways to bring the API to fruition in Servo, so that Servo is able to run the existing examples and pass the Conformance Test Suite (CTS). This is going to be a brief account of how I started with the project, what challenges I faced, and how I overcame them.

What is WebGPU?

WebGPU is a future web standard: a cross-platform graphics API aimed at making GPU capabilities more accessible on the web. WebGPU is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. A native implementation of the API in Rust is developed in the wgpu project. Servo’s implementation of the API uses this crate.

The Project

At the start of the project the implementation was in a pretty raw state: Servo was only able to accept shaders as SPIR-V binary and ran just the compute example. I had the following tasks in front of me:

  • Implement the various DOM interfaces that build up the API.
  • Set up a proper Id rotation for the GPU resources.
  • Integrate WebGPU with WebRender for presenting the rendering to HTML canvas.
  • Set up a proper model for async error recording.

The final goal was to be able to run the live examples at and pass a fair amount of the CTS.


Since Servo is a multi-process browser, the GPU is accessed from a different process (server-side) than the one running the page content and scripts (the content process). For better performance and asynchronous behaviour, we have a separate wgpu thread for each content process.

Setting up a proper Id rotation for the GPU resources was our first priority. I had to ensure that each Id generated was unique. This meant sharing the Identity Hub among all threads via Arc and Mutex. For recycling the Ids, wgpu exposes an IdentityHandler trait that must be implemented on the server-side interface of the browser and wgpu. This facilitates the following: when wgpu detects that an object has been dropped by the user (which is some time after the actual drop/garbage collection), wgpu calls the trait methods that are responsible for releasing the Id. In our case they send a message to the content process to free the Id and make it available for reuse.

Implementing the DOM interfaces was pretty straightforward. A DOM object is just an opaque handle to an actual GPU resource. Whenever a method that performs an operation is called on a DOM object, there are two things to be done: convert the IDL types to wgpu types, and send a message to the server to perform the operation. Most of the validation is done within wgpu.


WebGPU textures can be rendered to an HTML canvas via GPUCanvasContext, which can be obtained from canvas.getContext('gpupresent'). All rendered images are served to WebRender as ExternalImages for rendering purposes. This is done via an async software presentation path. Each new GPUCanvasContext object is assigned a new ExternalImageId and each new swap chain is assigned a new ImageKey. Since WebGPU threads are spawned on-demand, an image handler for WebGPU is initialized at startup, stored in the Constellation, and supplied to threads at the time of spawn. Each time GPUSwapChain.getCurrentTexture() is called, the canvas is marked as dirty and is then flushed at the time of reflow. At the time of flush, a message is sent to the wgpu server to update the image data provided to WebRender. The following happens after this:

  • The contents of the rendered texture are copied to a buffer.
  • Buffer is mapped asynchronously for read.
  • The data read from the buffer is copied to a staging area in PresentationData. PresentationData stores the data and all the required machinery for this async presentation belt.
  • When WebRender wants to read the data, it locks on the data to prevent it from being altered during read. Data is served in the form of raw bytes.

The above process is not the best one, but it is the only option available to us for now. It also causes a few empty frames to be rendered at the start. A good thing, though, is that this works on all platforms and is a great fallback path until we add hardware-accelerated presentation in the future.

Buffer Mapping

When the user issues an async buffer map operation, the operation is queued on the server-side and all devices are polled at a regular interval of 100ms. As soon as the map operation is complete, the data is read and sent to the content process where it is stored in the Heap. The user can read and edit this data by accessing its subranges via GPUBuffer.getMappedRange(), which returns an ExternalArrayBuffer pointing to the data in the Heap. On unmap, all the ExternalArrayBuffers are detached, and if the buffer was mapped for write, the data is sent back to the server to be written to the actual resource.

Error Reporting

To achieve maximum efficiency, WebGPU supports an asynchronous error model. The implementation keeps a stack of ErrorScopes that are responsible for capturing the errors that occur during operations performed in their scope. The user is responsible for pushing and popping an ErrorScope on the stack. Popping an ErrorScope returns a promise that resolves to null if all the operations were successful; otherwise it resolves to the first error that occurred.

When an operation is issued, the scope_id of the ErrorScope on the top of the stack is sent to the server along with it, and the operation-count of the scope is incremented. The result of the operation can be described by the following enum:

// Variants reconstructed from the description below; the exact definition
// lives in Servo's webgpu crate.
pub enum WebGPUOpResult {
    Success,
    ValidationError(String),
    OutOfMemoryError,
    InternalError(String),
}

On receiving the result, we decrement the operation-count of the ErrorScope with the given scope_id. We further have 3 cases:

  • The result is Success. Do nothing.
  • The result is an error and the ErrorFilter matches the error. We record this error in the ErrorScopeInfo, and if the ErrorScope has been popped by the user, resolve the promise with it.
  • The result is an error but the ErrorFilter does not match the error. In this case, we find the nearest parent ErrorScope with the matching filter and record the error in it.

After the result is processed, we try to remove the ErrorScope from the stack: the user should have called popErrorScope() on the scope, and the operation-count of the scope should be 0.

In case there are no error scopes on the stack, or if the ErrorFilter of none of the ErrorScopes matches the error, the error is fired as a GPUUncapturedErrorEvent.

Conformance Test Suite

Conformance Test Suite is required for checking the accuracy of the implementation of the API and can be found here. Servo vendors its own copy of the CTS which, currently, needs to be updated manually with the latest changes. Here are a few statistics of the tests:

  • 14/36 pass completely
  • 5/36 have majority of subtests passing
  • 17/36 fail/crash/timeout

The wgpu team is actively working on improving the validation.

Unfinished business

A major portion of the project that was proposed has been completed, but there’s still work left to do. These are a few things that I was unable to cover under the proposed timeline:

  • Profiling and benchmarking the implementation against the WebGL implementation of Servo.
  • Handle canvas resize events smoothly.
  • Support Error recording on Workers.
  • Support WGSL shaders.
  • Pass the remaining tests in the CTS.

Important Links

The WebGPU specification can be found here. The PRs that I made as a part of the project can be accessed via the following links:

The progress of the project can be tracked in the GitHub project


The WebGPU implementation in Servo supports all of Austin’s samples. Thanks to CYBAI and Josh, Servo now supports dynamic import of modules and thus accepts GLSL shaders. Here are a few samples of what Servo is capable of rendering at 60fps:

Fractal Cube

Instanced Cube

Compute Boids

I would like to thank Dzmitry and Josh for guiding me throughout the project and a big shoutout to the WebGPU and Servo community for doing such awesome work! I had a great experience contributing to Servo and WebGPU. I started as a complete beginner to Rust, graphics and browser internals, but learned a lot during the course of this project. I urge all WebGPU users and graphics enthusiasts out there to test their projects on Servo and help us improve the implementation and the API as well :)

Mozilla VR BlogWhy Researchers Should Conduct User Testing Sessions in Virtual Reality (VR): On Using Hubs by Mozilla for Immersive, Embodied User Feedback

Why Researchers Should Conduct User Testing Sessions in Virtual Reality (VR): On Using Hubs by Mozilla for Immersive, Embodied User Feedback

Amidst the pandemic, our research team from Mozilla and The Extended Mind performed user testing research entirely in a remote 3D virtual space where participants had to BYOD (Bring Your Own Device). This research aimed to test security concepts that could help users feel safe traversing links in the immersive web, the results of which are forthcoming in 2021. By utilizing a virtual space, we were able to get more intimate knowledge of how users would interact with these security concepts because they were immersed in a 3D environment.

The purpose of this article is to persuade you that Hubs and other VR platforms offer unique affordances for qualitative research. In this blog post, I’ll discuss the three key benefits of using VR platforms for research: the ability to perform immersive and embodied research across distances, the ability to work with global participants, and the ability to test out concepts prior to implementation. Additionally, I will discuss the unique accessibility of Hubs as a VR platform and the benefits it provided us in our research.

To perform security concept research in VR, The Extended Mind recruited nine Oculus Quest users and brought them into a staged Mozilla Hubs room where we walked them through each security concept design and asked them to rate their likelihood to click a link and continue to the next page. (Of the nine subjects, seven viewed the experience on the Quest and two did so on PC due to technical issues.) For each security concept, we walked them through the actual concept, as well as spoofs of the concept to see how well people understood the indicators of safety (or lack thereof) they should be looking for.

Why Researchers Should Conduct User Testing Sessions in Virtual Reality (VR): On Using Hubs by Mozilla for Immersive, Embodied User Feedback

Because we were able to walk the research subjects through each concept, in multiple iterations, we were able to get a sense not only of their opinion of the concepts, but data on what spoofed them. And giving our participants an embodied experience made it so that we, as the researchers, did not have to do as much explaining of the concepts. To fully illustrate the benefits of performing research in VR, we’ll walk through the key benefits it offers.

Immersive Research Across Distances

The number one affordance virtual reality offers qualitative researchers is the ability to perform immersive research remotely. Research participants can partake no matter where they live, and yet whatever concept is being studied can be approached as an embodied experience rather than a simple interview.

If researchers wanted qualitative feedback on a new product, for instance, they could provide participants with the opportunity to view the object in 360 degrees, manipulate it in space, and even create a mock-up of its functionality for participants to interact with - all without having to gather participants in a single space or provide them with physical prototypes of the product.

Global Participants

The second affordance is that researchers can do global studies. The study we performed with Mozilla had participants from the USA, Canada, Australia and Singapore. Whether researchers want a global sampling or to perform a cross-cultural analysis on a certain subject, VR enables qualitative researchers to collect that data through an immersive medium.

Collect Experiential Qualitative Feedback on Concepts Pre-Implementation

The third affordance is that researchers can gather immersive feedback on concepts before they are implemented. These concepts may be mock-ups of buildings, public spaces, or concepts for virtual applications, but across the board virtual reality offers researchers a deeper dive into participants’ experiences of new concepts and designs than other platforms.

We used flat images to simulate link traversal inside of Hubs by Mozilla and just arranged them in a way that conveyed the storytelling appropriately (one concept per room). Using Hubs to test concepts allows for rapid and inexpensive prototyping. One parallel to this type of research is when people in the Architecture, Engineering, and Construction (AEC) fields use interactive virtual and augmented reality models to drive design decisions. Getting user feedback inside of an immersive environment, regardless of the level of fidelity within, can benefit the final product design.

Accessibility of Hubs by Mozilla

Hubs by Mozilla provided an on-demand immersive environment for research. Mozilla Hubs can be accessed through a web browser, which makes it more accessible than your average virtual reality platform. For researchers who want to perform immersive research in a more accessible way, Mozilla Hubs is a great option.

In our case, Mozilla Hubs allowed researchers to participate through their browser and screen share via Zoom with colleagues, which allowed for a number of team members to observe without crowding the actual virtual space. It also provided participants who had technological issues with their headsets an easy alternative.


Virtual reality is an exciting new platform for qualitative research. It offers researchers new affordances that simply aren’t available through telephone calls or video conferencing. The ability to share a space with the participant and direct their attention towards an object in front of them expands not only the scope of what can be studied remotely through qualitative means, but also the depth of data that can be collected from the participants themselves.

Why Researchers Should Conduct User Testing Sessions in Virtual Reality (VR): On Using Hubs by Mozilla for Immersive, Embodied User Feedback

The more embodied an experience we can offer participants, the more detailed and nuanced their opinions, thoughts, and feelings toward a new concept will be. This will make it easier for designers and developers to integrate the voice of the user into product creation.

Authors: Jessica Outlaw, Diane Hosfelt, Tyesha Snow, and Sara Carbonneau

Spidermonkey Development BlogSpiderMonkey Newsletter 6 (Firefox 80-81)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 80 and 81 Nightly release cycles. If you like these newsletters, you may also enjoy Yulia’s Compiler Compiler live stream.

With the recent changes at Mozilla, some may be worried about what this means for SpiderMonkey. The team remains strong and supported, and is excited to show off a lot of cool things this year and into the future.


👷🏽‍♀️ New features

Firefox 80
  • Ted added support for the export * as ns from “module” syntax from the modules proposal.
In progress

🗑️ Garbage Collection

  • Jon moved more decommit code to the background thread.
  • Jon added prefs and improved heuristics for the number of helper threads doing GC work.
  • Yoshi added GC sub-categories to the profiler.
  • Paul removed some unused GC telemetry code to improve memory usage.
  • Jon enabled compacting for ObjectGroups.
  • Yoshi fixed shutdown GCs to not start compression tasks.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve performance, simplify a lot of code and improve bytecode caching. It also makes it possible to rewrite our frontend in Rust (see SmooshMonkey item below).

We expect to switch the frontend to use ParserAtoms in the next Nightly cycle (Firefox 82). At that point the frontend will work without doing any GC allocations and will be more independent from the rest of the engine, unblocking further (performance) improvements.

  • Arai decoupled Stencil structures more from the frontend data structures.
  • Ted made module parsing work with Stencils and without allocating GC things.
  • Ted deferred GC allocation of ScriptSourceObject.
  • Ted cleaned up scope/environment handling.
  • Arai added a dumpStencil(code) function to the JS shell to print the Stencil data structures.
  • Kannan landed more changes preparing for the switch to ParserAtom.

🐒 SmooshMonkey

SmooshMonkey is our project to reimplement the frontend in a safe language (Rust), which will make it easier to implement new features and improve long-term maintainability of the code base.

  • Arai hooked up the output of the Rust frontend to Stencil’s function data structures.

🚀 WarpBuilder

WarpBuilder is the JIT project to replace the frontend of our optimizing JIT (IonBuilder) and the engine’s Type Inference mechanism with a new MIR builder based on compiling CacheIR to MIR. WarpBuilder will let us improve security, performance, memory usage and maintainability of the whole engine. Since the previous newsletter we’ve ported a lot of optimizations to CacheIR and Warp.

🧹 Miscellaneous changes

  • Tom fixed syntax errors in regular expressions to use the correct source location.
  • Jan added a cache for atomizing strings to fix a performance cliff affecting Reddit.
  • Yoshi landed more changes for integrating SpiderMonkey’s helper threads with the browser’s thread pool.
  • Philip Chimento added public APIs for working with BigInts.
  • Evan Welsh added a public API for enabling code coverage.
  • Kanishk fixed an over-recursion bug in ExtractLinearSum.
  • Logan cleaned up Debugger frames code.
  • Jan optimized and cleaned up some jump relocation and jump code on x64 and ARM64.
  • Tom removed some CallNonGenericMethod calls for Number and Date functions. This improved certain error messages.
  • André converted code from ValueToId to ToPropertyKey to fix subtle correctness bugs.
  • Jeff moved more code out of jsfriendapi.h into smaller headers.
  • Barun renamed gc::AbortReason to GCAbortReason to fix a conflict with jit::AbortReason.
  • Steve added an option to the shortestPaths test function to find out why a given JS value is alive.
  • Steve enabled the JS gdb prettyprinters for the firefox binary as well as making them independent from the working directory.


WebAssembly

  • The Cranelift team landed various changes to prepare for enabling Cranelift by default for ARM64 platforms in Nightly. On ARM64 we currently only have the Wasm Baseline compiler so it’s the first platform where Cranelift will be rolled out.
  • Ryan implemented non-nullable references.
  • Lars landed optimizations for SIMD and implemented some experimental Wasm SIMD opcodes.
  • Lars added alignment of loop headers to the Ion backend.

Will Kahn-GreeneRustConf 2020 thoughts

Last year, I went to RustConf 2019 in Portland. It was a lovely conference. Everyone I saw was so exuberantly happy to be there--it was just remarkable. It was my first RustConf. Plus, while I've been sort-of learning Rust for a while and work on things cursorily related to Rust (crash ingestion and debug symbols), I haven't really done any Rust work. Still, it was a remarkable and very exciting conference.

RustConf 2020 was entirely online. I'm in UTC-4, so it occurred during my afternoon and evening. I spent the entire time watching the RustConf 2020 stream and skimming the channels on Discord. Everyone I saw on the channels was so exuberantly happy to be there and supportive of one another--it was just remarkable. Again! Even virtually!

I missed the in-person aspect of a conference a bit. I've still got this thing about conferences that I'm getting over, so I liked that it was virtual because of that and also it meant I didn't have to travel to go.

I enjoyed all of the sessions--they were all top-notch! They were all pretty different in their topics and difficulty level. The organizers should get gold stars for the children's programming between sessions. I really enjoyed the "CAT!" sightings in the channels--that was worth the entrance fee.

This is a summary of the talks I wrote notes for.


Mozilla Localization (L10N)L10n Report: August 2020 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

As you are probably aware, Mozilla just went through a massive round of layoffs. About 250 people were let go, reducing the overall size of the workforce by a quarter. The l10n-drivers team was heavily impacted, with Axel Hecht (aka Pike) leaving the company.

We are still in the process of understanding how the reorganization will affect our work and the products we localize. A first step was to remove some projects from Pontoon, and we’ll make sure to communicate any further changes in our communication channels.

Telegram channel and Matrix

The “bridge” between our Matrix and Telegram channel, i.e. the tool synchronizing content between the two, has been working only in one direction for a few weeks. For this reason, and given the unsupported status of this tool, we decided to remove it completely.

As of now:

  • Our Telegram and Matrix channels are completely independent from each other.
  • The l10n-community channel on Matrix is the primary channel for synchronous communications. The reason for this is that Matrix is supported as a whole by Mozilla, offering better moderation options among other things, and can be easily accessed from different platforms (browser, phone).

If you haven’t used Matrix yet, we encourage you to set it up following the instructions available in the Mozilla Wiki. You can also set an email address in your profile, to receive notifications (like pings) when you’re offline.

We plan to keep the Telegram channel around for now, but we might revisit this decision in the future.

New content and projects

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 81 is currently in beta and will be released on September 22nd. The deadline to update localization is on September 8.

In terms of content and new features, most of the changes are around the new modal print preview, which can be currently tested on Nightly.

What’s new or coming up in mobile

The new Firefox for Android has been rolled out at 100%! You should therefore have either been upgraded from the older version (or will be in just a little bit) – or you can download it directly from the Play Store here.

Congratulations to everyone who has made this possible!

For the next Firefox for Android release, we are expecting string freeze to start towards the end of the week, which will give localizers two weeks to complete localizing and testing.

Concerning Firefox for iOS: v29 strings have been exposed on Pontoon. We are still working out screenshots for testing with iOS devs at the moment, but these should be available soon and as usual from the Pontoon project interface.

On another note, and as mentioned at the beginning of this blog post, due to the recent lay-offs, we have had to deactivate some projects from Pontoon. The mobile products are currently: Scryer, Firefox Lite and Lockwise iOS. More may be added to this list soon, so stay tuned. Once more, thanks to all the localizers who have contributed their time and effort to these projects across the years. Your help has been invaluable for Mozilla.

What’s new or coming up in web projects

Common Voice

The Common Voice team was heavily impacted by the changes in the recent announcement. The team has stopped the two-week sprint cycle and is working in maintenance mode right now. String updates and new language requests will take longer to process due to resource constraints.

Some other changes to the project before the reorg:

  • New site name: all traffic from the old domain will be forwarded to the new domain automatically.
  • New GitHub repo name mozilla/common-voice and new branch name main. All traffic to the previous domain voice-web will be forwarded directly to the new repo, but you may need to manually update your git remote if you have a local copy of the site running.

An updated firefox/welcome/page4.ftl with new layout will be ready for localization in a few days. The turnaround time is rather short. Be on the lookout for it.

Along with this update is the temporary page called banners/firefox-daylight-launch.ftl that promotes Fenix. It has a life of a few weeks. Please localize it as soon as possible. Once done, you will see the localized banner on production.

The star priority ratings in Pontoon are also revised. The highest priority pages are firefox/all.ftl, firefox/new/*.ftl, firefox/whatsnew/*.ftl, and brands.ftl. The next level priority pages are the shared files. Unless a page has a hard deadline to complete, the rest are normal priority with a 3-star rating and you can take time to localize them.

WebThings Gateway

The team was completely dissolved due to the reorg. At the moment, the project will not take any new language requests or update the repo with changes from Pontoon. The project is actively working to move into a community-maintained state. We will update everyone as soon as that information becomes available.

What’s new or coming up in Foundation projects

The Foundation website homepage got a major revamp, strings have been exposed to the relevant locales in the Engagement and Foundation website projects. There’s no strict deadline, you can complete this anytime. The content will be published live regularly, with a first push happening in a few days.

What’s new or coming up in Pontoon

Download Terminology as TBX

We’ve added the ability to download Terminology from Pontoon in the standardized TBX file format, which allows you to exchange it with other users and systems. To access the feature, click on your user icon in the top-right section of the translation workspace and select “Download Terminology”.

Improving Machinery with SYSTRAN

We have an update on the work we’ve been doing with SYSTRAN to provide you with better machine translation options in Pontoon.

SYSTRAN has published three NMT models (German, French, and Spanish) based on contributions of Mozilla localizers. They are available in the SYSTRAN Marketplace as free and open source and accessible to any existing SYSTRAN NMT customers. In the future, we hope to make those models available beyond the SYSTRAN system.

These models have been integrated with Pontoon and are available in the Machinery tab. Please report any feedback that you have for them, as we want to make sure these are a useful resource for your contributions.

We’ll be working with SYSTRAN to learn how to build the models for new language pairs in 2021, which should widely expand the language coverage.

Search with Enter

From now on you need to press Enter to trigger search in the string list. This change unifies the behaviour of the string list search box and the Machinery search box, which despite similar looks previously hadn’t worked the same way. The former searched after the last keystroke (with a 500 ms delay), while the latter only after Enter was pressed.

Search on every keystroke is great when it’s fast, but string list search is not always fast. It becomes really tedious if more than 500 ms pass between the keystrokes and search gets triggered too early.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Daniel StenbergEnabling better curl bindings

I think it is fair to say that libcurl is a library that is very widely spread, widely used and powers a sizable share of Internet transfers. Its age, its availability, its stability and its API have contributed to getting it to this position.

libcurl is in a position where it could remain for a long time to come, as long as we don’t do something wrong and stay focused on what we are and what we’re here for. I believe curl and libcurl might still be very meaningful in ten years.

Bindings are key

Another explanation is the fact that there are a large number of bindings to libcurl. A binding is a piece of code that allows libcurl to be used directly and conveniently from another programming language. Bindings are typically authored and created by enthusiasts of a particular language, to bring libcurl powers to applications written in that language.

The list of known bindings we feature on the curl web site lists around 70 bindings for some 62 different languages. You can access and use libcurl with (almost) any language you can dream of. I figure most mortals can’t even name half that many programming languages! The list starts out with Ada95, Basic, C++, Ch, Cocoa, Clojure, D, Delphi, Dylan, Eiffel, Euphoria and it goes on for quite a while more.

Keeping bindings in sync is work

The bindings are typically written to handle transfers with libcurl as it was working at a certain point in time, knowing what libcurl supported at that moment. But as readers of this blog and followers of the curl project know, libcurl keeps advancing and we change and improve things regularly. We add functionality and new features in almost every new release.

This rather fast pace of development offers a challenge to binding authors, as they need to write the binding in a very clever way and keep up with libcurl developments in order to offer their users the latest libcurl features via their binding.

libcurl is the foundational underlying engine for so many applications, and the number of applications and services accessing libcurl via bindings is truly uncountable, so this work of keeping bindings in sync is not insignificant.

If we can provide mechanisms in libcurl to ease that work and to reduce friction, it can literally affect the world.

“easy options” are knobs and levers

Users of libcurl know that one of the key functions in the API is the curl_easy_setopt function. Using this function call, the application sets specific options for a transfer, asking for certain behaviors etc. The URL to use, user name, authentication methods, where to send the output, how to provide the input etc etc.

At the time I write this, this key function features no less than 277 different and well-documented options. Of course we should work hard at not adding new options unless really necessary and we should keep the option growth as slow as possible, but at the same time the Internet isn’t standing still and as the whole world keeps developing we need to follow along.

Options generally come in one of a set of predefined kinds: a string, a numerical value, a list of strings etc. But the names of the options, and knowledge of their existence, have always lived in the curl source tree, requiring each binding to be synced with the latest curl in order to learn about the most recent knobs libcurl offers.

Until now…

Introducing an easy options info API

Starting in the coming version 7.73.0 (due to be released on October 14, 2020), libcurl offers API functions that allow applications and bindings to query it for information about all the options this libcurl instance knows about.

curl_easy_option_next lets the application iterate over options, either going through all of them or a subset. For each option, there are details to extract that tell what kind of input data the option expects.

curl_easy_option_by_name allows the application to look up details about a specific option using its name. If the application instead has the internal “id” for the option, it can look it up using curl_easy_option_by_id.

With these new functions, bindings should be able to better adapt to the current run-time version of the library and become less dependent on syncing with the latest libcurl source code. We hope this will make it easier to make bindings stay in sync with libcurl developments.

Legacy is still legacy

Lots of bindings have been around for a long time, and many of them of course still want to support libcurl versions much older than 7.73.0, so jumping onto this bandwagon of fancy new API will not be an instant success, nor will it take away code needed to support that long tail of old versions everyone wants to keep supporting.

We can’t let the burden of legacy stand in the way of improvement and going forward. At least if you find that you are lucky enough to have 7.73.0 or later installed, you can dynamically figure out these things about options. Maybe down the line the number of legacy versions will shrink. Maybe if libcurl is still relevant in ten years, none of the pre-7.73.0 versions will need to be supported anymore!


Lots of the discussions and ideas for this API come from Jeroen Ooms, author of the R binding for libcurl.

Image by Rudy and Peter Skitterians from Pixabay