hacks.mozilla.org: Pyodide Spin Out and 0.17 Release

We are happy to announce that Pyodide has become an independent and community-driven project. We are also pleased to announce the 0.17 release for Pyodide with many new features and improvements.

Pyodide consists of the CPython 3.8 interpreter compiled to WebAssembly, which allows Python to run in the browser. Many popular scientific Python packages have also been compiled and made available. In addition, Pyodide can install any Python package with a pure Python wheel from the Python Package Index (PyPI). Pyodide also includes a comprehensive foreign function interface which exposes the ecosystem of Python packages to Javascript, and the browser user interface (including the DOM) to Python.

You can try out the latest version of Pyodide in a REPL directly in your browser.

Pyodide is now an independent project

We are happy to announce that Pyodide now has a new home in a separate GitHub organisation (github.com/pyodide) and is maintained by a volunteer team of contributors. The project documentation is available on pyodide.org.

Pyodide was originally developed inside Mozilla to allow the use of Python in Iodide, an experimental effort to build an interactive scientific computing environment for the web.  Since its initial release and announcement, Pyodide has attracted a large amount of interest from the community, remains actively developed, and is used in many projects outside of Mozilla.

The core team has approved a transparent governance document and has a roadmap for future developments. Pyodide also has a Code of Conduct which we expect all contributors and core members to adhere to.

New contributors are welcome to participate in the project development on Github. There are many ways to contribute, including code contributions, documentation improvements, adding packages, and using Pyodide for your applications and providing feedback.

The Pyodide 0.17 release

Pyodide 0.17.0 is a major step forward from previous versions. It includes:

  • major maintenance improvements,
  • a thorough redesign of the central APIs, and
  • careful elimination of error leaks and memory leaks

Type translation improvements

The type translations module was significantly reworked in v0.17 with the goal that round-trip translations of objects between Python and Javascript produce an identical object.

In other words, Python -> JS -> Python translation and JS -> Python -> JS translation now produce objects that are equal to the original object. (A couple of exceptions to this remain due to unavoidable design tradeoffs.)

One of Pyodide’s strengths is the foreign function interface between Python and Javascript, which at its best can practically erase the mental overhead of working with two different languages. All I/O must pass through the usual web APIs, so in order for Python code to take advantage of the browser’s strengths, we need to be able to support use cases like generating image data in Python and rendering the data to an HTML5 Canvas, or implementing event handlers in Python.

In the past we found that one of the major pain points in using Pyodide occurs when an object makes a round trip from Python to Javascript and back to Python and comes back different. This violated the expectations of the user and forced inelegant workarounds.

The issues with round trip translations were primarily caused by implicit conversion of Python types to Javascript. The implicit conversions were intended to be convenient, but the system was inflexible and surprising to users. We still implicitly convert strings, numbers, booleans, and None. Most other objects are shared between languages using proxies that allow methods and some operations to be called on the object from the other language. The proxies can be converted to native types with new explicit converter methods called .toJs and to_py.

For instance, given an Array in JavaScript,

window.x = ["a", "b", "c"];

We can access it in Python as,

>>> from js import x # import x from global Javascript scope
>>> type(x)
<class 'JsProxy'>
>>> x[0]    # can index x directly
'a'
>>> x[1] = 'c' # modify x
>>> x.to_py()   # convert x to a Python list
['a', 'c']

Several other conversion methods have been added for more complicated use cases. This gives the user much finer control over type conversions than was previously possible.

For example, suppose we have a Python list and want to use it as an argument to a Javascript function that expects an Array.  Either the caller or the callee needs to take care of the conversion. This allows us to directly call functions that are unaware of Pyodide.

Here is an example of calling a Javascript function from Python with argument conversion on the Python side:

function jsfunc(array) {
  return array.length;
}

from js import jsfunc
from pyodide import to_js

def pyfunc():
  mylist = [1,2,3]
  jslist = to_js(mylist)
  return jsfunc(jslist) # returns 3

This would work well in the case that jsfunc is a Javascript built-in and pyfunc is part of our codebase. If pyfunc is part of a Python package, we can handle the conversion in Javascript instead:

function jsfunc(pylist) {
  let array = pylist.toJs();
  return array.length;
}

See the type translation documentation for more information.

Asyncio support

Another major new feature is the implementation of a Python event loop that schedules coroutines to run on the browser event loop. This makes it possible to use asyncio in Pyodide.
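The scheduling pattern can be sketched in plain CPython (the function names and data below are illustrative; in Pyodide, coroutines are scheduled on the browser's event loop by Pyodide's WebLoop rather than driven by asyncio.run):

```python
import asyncio

async def fetch_packages():
    # Stand-in for awaiting a JavaScript Promise such as fetch();
    # asyncio.sleep(0) simply yields control back to the event loop.
    await asyncio.sleep(0)
    return {"dependencies": {"numpy": [], "parso": []}}

async def main():
    data = await fetch_packages()
    return sorted(data["dependencies"])

# In regular CPython we drive the loop ourselves; in Pyodide the
# equivalent scheduling happens on top of the browser event loop.
result = asyncio.run(main())
print(result)  # ['numpy', 'parso']
```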

Additionally, it is now possible to await Javascript Promises in Python and to await Python awaitables in Javascript. This allows for seamless interoperability between asyncio in Python and Javascript (though memory management issues may arise in complex use cases).

Here is an example where we define a Python async function that awaits the Javascript async function fetch, and then we await the Python async function from Javascript.

async def test():
    from js import fetch
    # Fetch the Pyodide packages list
    r = await fetch("packages.json")
    data = await r.json()
    # return all available packages
    return data.dependencies.object_keys()

let test = pyodide.globals.get("test");

// we can await the test() coroutine from Javascript
result = await test();
// logs ["asciitree", "parso", "scikit-learn", ...]

Error Handling

Errors can now be thrown in Python and caught in Javascript or thrown in Javascript and caught in Python. Support for this is integrated at the lowest level, so calls between Javascript and C functions behave as expected. The error translation code is generated by C macros which makes implementing and debugging new logic dramatically simpler.

For example:

function jserror() {
  throw new Error("ooops!");
}

from js import jserror
from pyodide import JsException

try:
  jserror()
except JsException as e:
  print(str(e)) # prints "Error: ooops!"

Emscripten update

Pyodide uses the Emscripten compiler toolchain to compile the CPython 3.8 interpreter and Python packages with C extensions to WebAssembly. In this release we finally completed the migration to the latest version of Emscripten that uses the upstream LLVM backend. This allows us to take advantage of recent improvements to the toolchain, including significant reductions in package size and execution time.

For instance, the SciPy package shrank dramatically from 92 MB to 15 MB, so SciPy is now cached by browsers. This greatly improves the usability of scientific Python packages that depend on SciPy, such as scikit-image and scikit-learn. The size of the base Pyodide environment with only the CPython standard library shrank from 8.1 MB to 6.4 MB.

On the performance side, the latest toolchain comes with a 25% to 30% run time improvement:

Performance ranges from near-native to 3 to 5 times slower, depending on the benchmark. These benchmarks were created with Firefox 87.

Other changes

Other notable features include:

  • Fixed package loading for Safari v14+ and other Webkit-based browsers
  • Added support for relative URLs in micropip and loadPackage, and improved interaction between micropip and loadPackage
  • Support for implementing Python modules in Javascript

We also did a large amount of maintenance work and code quality improvements:

  • Lots of bug fixes
  • Upstreamed a number of patches to the emscripten compiler toolchain
  • Added systematic error handling to the C code, including automatic adaptors between Javascript errors and CPython errors
  • Added internal consistency checks to detect memory leaks, detect fatal errors, and improve ease of debugging

See the changelog for more details.

Winding down Iodide

Mozilla has made the difficult decision to wind down the Iodide project. While alpha.iodide.io will continue to be available for now (in part to provide a demonstration of Pyodide’s capabilities), we do not recommend using it for important work as it may shut down in the future. Since Iodide’s release, there have been many efforts to create interactive notebook environments based on Pyodide, which are in active development and offer a similar environment for creating interactive visualizations in the browser using Python.

Next steps for Pyodide

While many issues were addressed in this release, a number of other major steps remain on the roadmap, including:

  • Reducing download sizes and initialization times
  • Improving performance of Python code in Pyodide
  • Simplifying the package loading system
  • Updating SciPy to a more recent version
  • Improving project sustainability, for instance by seeking synergies with the conda-forge project and its tooling
  • Improving support for web workers
  • Improving support for synchronous IO (popular for programming education)

For additional information see the project roadmap.


Lots of thanks to:

  • Dexter Chua and Joe Marshall for improving the build setup and making Emscripten migration possible.
  • Hood Chatham for in-depth improvement of the type translation module and adding asyncio support
  • and Romain Casati for improving the Pyodide REPL console.

We are also grateful to all Pyodide contributors.

The post Pyodide Spin Out and 0.17 Release appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & Advocacy: Mozilla reacts to publication of EU’s draft regulation on AI

Today, the European Commission published its draft for a regulatory framework for artificial intelligence (AI). The proposal lays out comprehensive new rules for AI systems deployed in the EU. Mozilla welcomes the initiative to rein in the potential harms caused by AI, but much remains to be clarified.

Reacting to the European Commission’s proposal, Raegan MacDonald, Mozilla’s Director of Global Public Policy, said: 

“AI is a transformational technology that has the potential to create value and enable progress in so many ways, but we cannot lose sight of the real harms that can come if we fail to protect the rights and safety of people living in the EU. Mozilla is committed to ensuring that AI is trustworthy, that it helps people instead of harming them. The European Commission’s push to set ground rules is a step in the right direction and it is good to see that several of our recommendations to the Commission are reflected in the proposal – but there is more work to be done to ensure these principles can be meaningfully implemented, as some of the safeguards and red lines envisioned in the text leave a lot to be desired.

Systemic transparency is a critical enabler of accountability, which is crucial to advancing more trustworthy AI. We are therefore encouraged by the introduction of user-facing transparency obligations – for example for chatbots or so-called deepfakes – as well as a public register for high-risk AI systems in the European Commission’s proposal. But as always, details matter, and it will be important what information exactly this database will encompass. We look forward to contributing to this important debate.”


The post Mozilla reacts to publication of EU’s draft regulation on AI appeared first on Open Policy & Advocacy.

The Mozilla Blog: Mark Surman joins the Mozilla Foundation Board of Directors

In early 2020, I outlined our efforts to expand Mozilla’s boards. Over the past year, we’ve added three new external Mozilla board members: Navrina Singh and Wambui Kinya to the Mozilla Foundation board and Laura Chambers to the Mozilla Corporation board.

Today, I’m excited to welcome Mark Surman, Executive Director of the Mozilla Foundation, to the Foundation board.

As I said to staff prior to his appointment, when I think about who should hold the keys to Mozilla, Mark is high on that list. Mark has unique qualifications in terms of the overall direction of Mozilla, how our organizations interoperate, and if and how we create programs, structures or organizations. Mark is joining the Mozilla Foundation board as an individual based on these qualifications; we have not made the decision that the Executive Director is automatically a member of the Board.
Mark has demonstrated his commitment to Mozilla as a whole, over and over. The whole of Mozilla figures into his strategic thinking. He’s got a good sense of how Mozilla Foundation and Mozilla Corporation can magnify or reduce the effectiveness of Mozilla overall. Mark has a hunger for Mozilla to grow in impact. He has demonstrated an ability to think big, and to dive into the work that is in front of us today.

For those of you who don’t know Mark already, he brings over two decades of experience leading projects and organizations focused on the public interest side of the internet. In the 12 years since Mark joined Mozilla, he has built the Foundation into a leading philanthropic and advocacy voice championing the health of the internet. Prior to Mozilla, Mark spent 15 years working on everything from a non-profit internet provider to an early open source content management system to a global network of community-run cybercafes. Currently, Mark spends most of his time on Mozilla’s efforts to promote trustworthy AI in the tech industry, a major focus of the Foundation’s current efforts.
Please join me in welcoming Mark Surman to the Mozilla Foundation Board of Directors.

You can read Mark’s message about why he’s joining Mozilla here.

PS. As always, we continue to look for new members for both boards, with the goal of adding the skills, networks and diversity Mozilla will need to succeed in the future.

LinkedIn: https://www.linkedin.com/in/msurman/

The post Mark Surman joins the Mozilla Foundation Board of Directors appeared first on The Mozilla Blog.

The Mozilla Blog: Wearing more (Mozilla) hats

Mark Surman

For many years now — and well before I sought out the job I have today — I thought: the world needs more organizations like Mozilla. Given the state of the internet, it needs them now. And, it will likely need them for a very long time to come.

Why? In part because the internet was founded with public benefit in mind. And, as the Mozilla Manifesto declared back in 2007, “… (m)agnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.”

Today, this sort of ‘time and attention’ is more important — and urgent — than ever. We live in an era where the biggest companies in the world are internet companies. Much of what they have created is good, even delightful. Yet, as the last few years have shown, leaving things to commercial actors alone can leave the internet — and society — in a bit of a mess. We need organizations like Mozilla — and many more like it — if we are to find our way out of this mess. And we need these organizations to think big!

It’s for this reason that I’m excited to add another ‘hat’ to my work: I am joining the Mozilla Foundation board today. This is something I will take on in addition to my role as executive director.

Why am I assuming this additional role? I believe Mozilla can play a bigger role in the world than it does today. And, I also believe we can inspire and support the growth of more organizations that share Mozilla’s commitment to the public benefit side of the internet. Wearing a board member hat — and working with other Foundation and Corporation board members — I will be in a better position to turn more of my attention to Mozilla’s long term impact and sustainability.

What does this mean in practice? It means spending some of my time on big picture ‘Pan Mozilla’ questions. How can Mozilla connect to more startups, developers, designers and activists who are trying to build a better, more humane internet? What might Mozilla develop or do to support these people? How can we work with policy makers who are trying to write regulations to ensure the internet benefits the public interest? And, how do we shift our attention and resources outside of the US and Europe, where we have traditionally focused? While I don’t have answers to all these questions, I do know we urgently need to ask them — and that we need to do so in an expansive way that goes beyond the current scope of our operating organizations. That’s something I’ll be well positioned to do wearing my new board member hat.

Of course, I still have much to do wearing my executive director hat. We set out a few years ago to evolve the Foundation into a ‘movement building arm’ for Mozilla. Concretely, this has meant building up teams with skills in philanthropy and advocacy who can rally more people around the cause of a healthy internet. And, it has meant picking a topic to focus on: trustworthy AI. Our movement building approach — and our trustworthy AI agenda — is getting traction. Yet, there is still a way to go to unlock the kind of sustained action and impact that we want. Leading the day to day side of this work remains my main focus at Mozilla.

As I said at the start of this post: I think the world will need organizations like Mozilla for a long time to come. As all corners of our lives become digital, we will increasingly need to stand firm for public interest principles like keeping the internet open and accessible to all. While we can all do this as individuals, we also need strong, long lasting organizations that can take this stand in many places and over many decades. Whatever hat I’m wearing, I continue to be deeply committed to building Mozilla into a vast, diverse and sustainable institution to do exactly this.

The post Wearing more (Mozilla) hats appeared first on The Mozilla Blog.

Open Policy & Advocacy: Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing

On 11 May, Mozilla will host the next instalment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This instalment will focus on the DSA’s provisions on third-party platform auditing, one of the stand-out features of its next-generation regulatory approach. We’re bringing together a panel of experts to unpack the provisions’ strengths and shortcomings; and to provide recommendations for how the DSA can build a standard-setting auditing regime for Very Large Online Platforms.


Alexandra Geese MEP
IMCO DSA shadow rapporteur
Group of the Greens/European Free Alliance

Amba Kak
Director of Global Programs and Policy
AI Now Institute

Dr Ben Wagner
Assistant Professor, Faculty of Technology, Policy and Management
TU Delft  

With opening remarks by Raegan MacDonald
Director of Global Public Policy

Moderated by Jennifer Baker
EU technology journalist


Logistical details

Tuesday 11 May, 14:00 – 15:00 CEST

Zoom Webinar

Register *here*

Webinar login details to be shared on day of event

The post Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing appeared first on Open Policy & Advocacy.

hacks.mozilla.org: Never too late for Firefox 88

April is upon us, and we have a most timely release for you — Firefox 88. In this release you will find a bunch of nice CSS additions including :user-valid and :user-invalid support and image-set() support, support for regular expression match indices, removal of FTP protocol support for enhanced security, and more!

This blog post provides merely a set of highlights; for all the details, check out the following:

:user-valid and :user-invalid

There are a large number of HTML form-related pseudo-classes that allow us to specify styles for various data validity states, as you’ll see in our UI pseudo-classes tutorial. Firefox 88 introduces two more — :user-valid and :user-invalid.

You might be thinking “we already have :valid and :invalid for styling forms containing valid or invalid data — what’s the difference here?”

:user-valid and :user-invalid are similar, but have been designed with better user experience in mind. They effectively do the same thing — matching a form input that contains valid or invalid data — but :user-valid and :user-invalid only start matching after the user has stopped focusing on the element (e.g. by tabbing to the next input). This is a subtle but useful change, which we will now demonstrate.

Take our valid-invalid.html example. This uses the following CSS to provide clear indicators as to which fields contain valid and invalid data:

input:invalid {
  border: 2px solid red;
}

input:invalid + span::before {
  content: '✖';
  color: red;
}

input:valid + span::before {
  content: '✓';
  color: green;
}

The problem with this is shown when you try to enter data into the “E-mail address” field — as soon as you start typing an email address into the field the invalid styling kicks in, and remains right up until the point where the entered text constitutes a valid e-mail address. This experience can be a bit jarring, making the user think they are doing something wrong when they aren’t.

Now consider our user-valid-invalid.html example. This includes nearly the same CSS, except that it uses the newer :user-valid and :user-invalid pseudo-classes:

input:user-invalid {
  border: 2px solid red;
}

input:user-invalid + span::before {
  content: '✖';
  color: red;
}

input:user-valid + span::before {
  content: '✓';
  color: green;
}

In this example the valid/invalid styling only kicks in when the user has entered their value and removed focus from the input, giving them a chance to enter their complete value before receiving feedback. Much better!

Note: Prior to Firefox 88, the same effect could be achieved using the proprietary :-moz-ui-invalid and :-moz-ui-valid pseudo-classes.

image-set() support for content/cursor

The image-set() function provides a mechanism in CSS to allow the browser to pick the most suitable image for the device’s resolution from a list of options, in a similar manner to the HTML srcset attribute. For example, the following can be used to provide multiple background-images to choose from:

div {
  background-image: image-set(
    url("small-balloons.jpg") 1x,
    url("large-balloons.jpg") 2x);
}

Firefox 88 has added support for image-set() as a value of the content and cursor properties. So for example, you could provide multiple resolutions for generated content:

h2::before {
  content: image-set(
    url("small-icon.jpg") 1x,
    url("large-icon.jpg") 2x);
}

or custom cursors:

div {
  cursor: image-set(
    url("custom-cursor-small.png") 1x,
    url("custom-cursor-large.png") 2x);
}

outline now follows border-radius shape

The outline CSS property has been updated so that it now follows the outline shape created by border-radius. It is really nice to see a fix included in Firefox for this long standing problem. As part of this work the non-standard -moz-outline-radius property has been removed.

RegExp match indices

Firefox 88 supports the match indices feature of regular expressions, which makes an indices property available containing an array that stores the start and end positions of each matched capture group. This functionality is enabled using the d flag.

There is also a corresponding hasIndices boolean property that allows you to check whether a regex has this mode enabled.

So for example:

const regex1 = new RegExp('foo', 'd');
regex1.hasIndices // true
const test = regex1.exec('foo bar');
test // [ "foo" ]
test.indices // [ [ 0, 3 ] ]

For more useful information, see our RegExp.prototype.exec() page, and RegExp match indices on the V8 dev blog.
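Match indices also cover named capture groups, which is handy for tooling such as linters and syntax highlighters that need exact source positions. A small sketch (the pattern and input string here are just illustrative):

```javascript
// The 'd' flag populates .indices with [start, end) pairs for the
// whole match and for every capture group, including named ones.
const re = /(?<year>\d{4})-(?<month>\d{2})/d;
const m = re.exec("Firefox 88 shipped 2021-04");

console.log(m.indices.groups.year);  // [ 19, 23 ]
console.log(m.indices.groups.month); // [ 24, 26 ]
```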

FTP support disabled

FTP support has been disabled from Firefox 88 onwards, and its full removal is (currently) planned for Firefox version 90. Addressing this security risk reduces the likelihood of an attack while also removing support for a non-encrypted protocol.

Complementing this change, the extension setting browserSettings.ftpProtocolEnabled has been made read-only, and web extensions can now register themselves as protocol handlers for FTP.

The post Never too late for Firefox 88 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons Blog: Changes to themeable areas of Firefox in version 89

Firefox’s visual appearance will be updated in version 89 to provide a cleaner, modernized interface. Since some of the changes will affect themeable areas of the browser, we wanted to give theme artists a preview of what to expect as the appearance of their themes may change when applied to version 89.

Tabs appearance

  • The property tab_background_separator, which controls the appearance of the vertical lines that separate tabs, will no longer be supported.
  • Currently, the tab_line property can set the color of an active tab’s thick top border. In Firefox 89, this property will set a color for all borders of an active tab, and the borders will be thinner.

URL and toolbar

  • The property toolbar_field_separator, which controls the color of the vertical line that separates the URL bar from the three-dot “meatball menu,” will no longer be supported.

  • The property toolbar_vertical_separator, which controls the vertical lines near the three-line “hamburger menu” and the line separating items in the bookmarks toolbar, will no longer appear next to the hamburger menu. You can still use this property to control the separators in the bookmarks toolbar.  (Note: users will need to enable the separator by right clicking on the bookmarks toolbar and selecting “Add Separator.”)

You can use the Nightly pre-release channel to start testing how your themes will look with Firefox 89. If you’d like to get more involved testing other changes planned for this release, please check out our foxfooding campaign, which runs until May 3, 2021.

Firefox 89 is currently scheduled to be available on the Beta pre-release channel by April 23, 2021, and to be released on June 1, 2021.

As always, please post on our community forum if there are any questions.

The post Changes to themeable areas of Firefox in version 89 appeared first on Mozilla Add-ons Blog.

Web Application Security: Firefox 88 combats window.name privacy abuses

We are pleased to announce that Firefox 88 is introducing a new protection against privacy leaks on the web. Under new limitations imposed by Firefox, trackers are no longer able to abuse the window.name property to track users across websites.

Since the late 1990s, web browsers have made the window.name property available to web pages as a place to store data. Unfortunately, data stored in window.name has been allowed by standard browser rules to leak between websites, enabling trackers to identify users or snoop on their browsing history. To close this leak, Firefox now confines the window.name property to the website that created it.

Leaking data through window.name

The window.name property allows a window to be targeted by hyperlinks or forms that navigate it. The property, available to any website you visit, is a “bucket” for storing any data the website may choose to place there. Historically, the data stored in window.name has been exempt from the same-origin policy enforced by browsers that prohibited some forms of data sharing between websites. Unfortunately, this meant that data stored in the window.name property was allowed by all major browsers to persist across page visits in the same tab, allowing different websites you visit to share data about you.

For example, suppose a page at https://example.com/ set the window.name property to “my-identity@email.com”. Traditionally, this information would persist even after you clicked on a link and navigated to https://malicious.com/. So the page at https://malicious.com/ would be able to read the information without your knowledge or consent:
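The mechanics can be sketched with a plain object standing in for the browser window (a simulation only; real pages would read the global window.name):

```javascript
// Simulate the pre-Firefox-88 behaviour: the same window object
// (and therefore its .name) is reused across a cross-origin navigation.
const win = { name: "" };

// The page at https://example.com stashes data in window.name...
win.name = "my-identity@email.com";

// ...the user follows a link to https://malicious.com; nothing resets
// .name during the navigation, so the new page can read the leaked value.
console.log(win.name); // "my-identity@email.com"
```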

Window.name persists across the cross-origin navigation.

Tracking companies have been abusing this property to leak information, and have effectively turned it into a communication channel for transporting data between websites. Worse, malicious sites have been able to observe the content of window.name to gather private user data that was inadvertently leaked by another website.

Clearing window.name to prevent leakage

To prevent the potential privacy leakage of window.name, Firefox will now clear the window.name property when you navigate between websites. Here’s how it looks:

Firefox 88 clearing window.name after cross-origin navigation.

Firefox will attempt to identify likely non-harmful usage of window.name and avoid clearing the property in such cases. Specifically, Firefox only clears window.name if the link being clicked does not open a pop-up window.

To avoid unnecessary breakage, if a user navigates back to a previous website, Firefox now restores the window.name property to its previous value for that website. Together, these dual rules for clearing and restoring window.name data effectively confine that data to the website where it was originally created, similar to how Firefox’s Total Cookie Protection confines cookies to the website where they were created. This confinement is essential for preventing malicious sites from abusing window.name to gather users’ personal data.

Firefox isn’t alone in making this change: web developers relying on window.name should note that Safari is also clearing the window.name property, and Chromium-based browsers are planning to do so. Going forward, developers should expect clearing to be the new standard way that browsers handle window.name.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. As soon as your Firefox auto-updates to version 88, the new default window.name data confinement will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect your privacy.

The post Firefox 88 combats window.name privacy abuses appeared first on Mozilla Security Blog.

hacks.mozilla.org: QUIC and HTTP/3 Support now in Firefox Nightly and Beta

tl;dr: Support for QUIC and HTTP/3 is now enabled by default in Firefox Nightly and Firefox Beta. We are planning to start the rollout with Firefox Release 88. HTTP/3 will be available by default by the end of May.

What is HTTP/3?

HTTP/3 is a new version of HTTP (the protocol that powers the Web) that is based on QUIC. HTTP/3 has three main performance improvements over HTTP/2:

  • Because it is based on UDP, it takes less time to connect;
  • It does not have head of line blocking, where delays in delivering packets cause an entire connection to be delayed; and
  • It is better able to detect and repair packet loss.

QUIC also provides connection migration and other features that should improve performance and reliability. For more on QUIC, see this excellent blog post from Cloudflare.

How to use it?

Firefox Nightly and Firefox Beta will automatically try to use HTTP/3 if offered by the Web server (for instance, Google or Facebook). Web servers can indicate support by using the Alt-Svc response header or by advertising HTTP/3 support with a HTTPS DNS record. Both the client and server must support the same QUIC and HTTP/3 draft version to connect with each other. For example, Firefox currently supports drafts 27 to 32 of the specification, so the server must report support of one of these versions (e.g., “h3-32”) in Alt-Svc or HTTPS record for Firefox to try to use QUIC and HTTP/3 with that server. When visiting such a website, viewing the network request information in Dev Tools should show the Alt-Svc header, and also indicate that HTTP/3 was used.
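For example, a server supporting draft 32 might advertise it with a response header along these lines (the port and max-age value here are illustrative):

```
Alt-Svc: h3-32=":443"; ma=86400
```

This tells the browser that the same origin is reachable over HTTP/3 draft 32 on UDP port 443 for the next 86400 seconds.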

If you encounter issues with these or other sites, please file a bug in Bugzilla.

The post QUIC and HTTP/3 Support now in Firefox Nightly and Beta appeared first on Mozilla Hacks - the Web developer blog.

about:communityNew Contributors To Firefox

With Firefox 88 in flight, we are pleased to welcome the long list of developers who’ve contributed their first code change to Firefox in this release, 24 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla L10NL10n Report: April 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Cebuano (ceb)
  • Hiligaynon (hil)
  • Meiteilon (mni)
  • Papiamento (pap-AW)
  • Shilha (shi)
  • Somali (so)
  • Uyghur (ug)

Update on the communication channels

On April 3rd, as part of a broader strategy change at Mozilla, we moved our existing mailing lists (dev-l10n, dev-l10n-web, dev-l10n-new-locales) to Discourse. If you are involved in localization, please make sure to create an account on Discourse and set up your profile to receive notifications when there are new messages in the Localization category.

We also decided to shut down our existing Telegram channel dedicated to localization. This was originally created to fill a gap, given its broad availability on mobile, and the steep entry barrier required to use IRC. In the meantime, IRC has been replaced by chat.mozilla.org, which offers a much better experience on mobile platforms. Please make sure to check out the dedicated Wiki page with instructions on how to connect, and join our #l10n-community room.

New content and projects

What’s new or coming up in Firefox desktop

For all localizers working on Firefox, there is now a Firefox L10n Newsletter, including all information regarding the next major release of Firefox (89, aka MR1). Here you can find the latest issue, and you can also subscribe to this thread on Discourse to receive a message every time there’s an update.

One important update is that the Firefox 89 cycle will last two extra weeks in Beta. These are the important deadlines:

  • Firefox 89 will move from Nightly to Beta on April 19 (unchanged).
  • It will be possible to update localizations for Firefox 89 until May 23 (previously May 9).
  • Firefox 89 will be released on June 1.

As a consequence, the Nightly cycle for Firefox 90 will also be two weeks longer.

What’s new or coming up in mobile

Like Firefox desktop, Firefox for iOS and Firefox for Android are still on the road to the MR1 release. I’ve published some details on Discourse here. Dates and info are still relevant, nothing changes in terms of l10n.

All strings for Firefox for iOS should already have landed.

Most strings for Firefox for Android should have landed.

What’s new or coming up in web projects


The Voice Fill and Firefox Voice Beta extensions are being retired.

Common Voice:

The project is transitioning to the Mozilla Foundation. The announcement was made earlier this week. Some of the Mozilla staff who worked closely with the project will continue working on it in their new roles. The web part, which covers the site localization, will remain in Pontoon.

Firefox Accounts:

Beta was launched on March 17. The sprint cycle is now aligned with Firefox Nightly moving forward. The next code push will be on April 21. The cutoff to include localized strings is a week earlier than the code push date.


All locales are disabled with the exception of fr, ja, zh-CN and zh-TW. There is a blog post on this decision. The team may add back more languages later. If that happens, attribution for the work done by community members will be retained in Pontoon. Nothing will be lost.

  • Migration from .lang to .ftl has been completed. Strings containing brand and product names that were not converted properly will appear as warnings and will not be shown on the production site. Please resolve these issues as soon as possible.
  • A select few locales have been chosen to be supported by a vendor service: ar, hi-IN, id, ja, and ms. The community managers were contacted about this change. The website should be fully localized in these languages by the first week of May. For more details on this change and for ways to report translation issues, please check out the announcement on Discourse.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Blog of DataThis Week in Glean: rustc, iOS and an M1

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

Back in February I got an M1 MacBook. That’s Apple’s new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here’s what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well

Rust on an M1

Work on getting Rust compiled on M1 hardware started in June of last year, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 2 [1] with the December release of Rust 1.49.0.

By the time I got my MacBook, compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

Glean Python & Kotlin on an M1

Glean Python just … worked. We use cffi to load the native library into Python. It gained aarch64 [2] macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it’s possible both to develop & test Glean Python and to use it as a dependency without having a full Rust development environment around.

Glean Android is not that straightforward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite, and of course there’s not much support behind updating those Java libraries. It’s possible; a friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It’s as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly… The default Android emulator isn’t usable on M1 hardware yet, but Google is working on a compatible one: the Android M1 emulator preview. It’s usable enough for some testing, but for that part I most often switch back to my Linux desktop (which has the additional CPU power on top).

Glean iOS on an M1

Now we’re getting to the interesting part: native iOS development on an M1. Obviously for Apple this is a priority: their new machines should become the main machines people do iOS development on. Thus Xcode gained aarch64 support in version 12, long before the hardware was available. That caused quite a few issues with existing tooling, such as the dependency manager Carthage. Here’s the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because … iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host’s CPU (that’s what makes it fast).

So when the compiler saw x86_64 and ios, it knew “Ah, a simulator target”, and when it saw aarch64 and ios, it knew “Ah, hardware”. Everyone went along with this; Xcode happily built both targets and, if asked to, could bundle them into one package.

With the introduction of Apple Silicon [3], the iOS simulator running on these machines would also be aarch64 [4] and its target would also contain ios, but it would not be for iOS hardware.

Now Xcode and the compiler get confused about what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows where to pick the libraries and other SDK files from when using that target. You still can’t put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary [5], because each architecture can appear only once (the arm64 part in there). That’s what Carthage and others stumbled over.

Again Apple prepared for that: for a long time they have wanted you to use XCFramework bundles [6]. Carthage simply didn’t support that until the 0.37.0 release fixed it.
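For reference, an XCFramework bundling a device build and a simulator build is created with `xcodebuild -create-xcframework`. A minimal sketch (the paths and the framework name are hypothetical, not the actual Glean build setup):

```
xcodebuild -create-xcframework \
  -framework build/ios-device/Glean.framework \
  -framework build/ios-simulator/Glean.framework \
  -output build/Glean.xcframework
```

Unlike a universal binary, the XCFramework keeps each slice in its own directory, so arm64-for-device and arm64-for-simulator can coexist.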

That still leaves Rust behind, as it doesn’t know the new -simulator target. But as always the Rust community is ahead of the game, and deg4uss3r started adding a new target in Rust PR #81966. He got halfway there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severely underdocumented, so I had to take the trial-and-error route, in combination with reading LLVM source code, to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Contrary to the main aarch64-apple-darwin or aarch64-apple-ios targets, the simulator target is not Tier 2 yet, and thus no prebuilt support is available: rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it’s currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I’m able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim

For now Glean iOS development on an M1 is possible, but it requires Nightly. Goal achieved: I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We’re not there yet and there’s currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week’s Rust Linz meetup, where I will be talking about this.


  1. See Platform Support for what the Tiers means.↩︎
  2. The other name for that target.↩︎
  3. “Apple Silicon” is yet another name for what is essentially the same as “M1” or “macOS aarch64”↩︎
  4. Or arm64 for that matter. Yes, yet another name for the same thing.↩︎
  5. “Universal Binaries” have existed for a long time now and allow for one binary to include the compiled artifacts for multiple targets. It’s how there’s only one Firefox for Mac download which runs natively on either Mac platform.↩︎
  6. Yup, the main documentation they link to is a WWDC 2019 talk recording video.↩︎

about:communityIn loving memory of Ricardo Pontes

It brings us great sadness to share the news that a beloved Brazilian community member and Rep alumnus, Ricardo Pontes, has recently passed away.

Ricardo was one of the first Brazilian community members, contributing for more than 10 years, a good friend, and a mentor to other volunteers.

His work was instrumental in the Firefox OS days and his passion inspiring. His passing leaves us saddened and shocked. Our condolences to his family and friends.

Below are some words about Ricardo from fellow Mozillians (old and new):

  • Sérgio Oliveira (seocam): Everybody that knew Ricardo, or Pontes as we usually called him in the Mozilla community, knows that he had a strong personality (despite his actual height). He always stood for what he believed was right and fought for it, but always smiling, making jokes and playing around with the situations. He was a really fun partner in many situations, even the not-so-easy ones. We are lucky to have photos of Ricardo, since he was always behind the camera taking pictures of us, and always great pictures. Pontes, it was a great pleasure to defend the free Web side-by-side with you. I’ll miss you my friend.
  • Felipe Gomes: Ricardo was always a cheerful, lively person, with a gift for bringing all the groups together. Even during his struggle, it was possible to see how people came together to pray for him and how dear he was to his friends and family. The memories we have of him are the ones he captured of us through his camera. Rest in peace, my friend.
  • Andrea Balle: Pontes is and always will be part of Mozilla Brazil. One of the first members, the “jurassic team” as we called it. Pontes was a generous, intelligent and high-spirited friend. I will always remember him as a person with great enthusiasm for sharing the things that he loved, including bikes, photography, technology and the free web. He will be deeply missed.
  • Armando Neto: I met Ricardo 10 years ago, in a hotel hallway. We were chatting about something I don’t remember, but I do remember we were laughing, and I will always remember him that way, in that hallway, laughing.
  • Luciana Viana: Ricardo was quiet and reserved, but he observed everything and was always attentive to what was happening. We met thanks to Mozilla and had the chance to spend time together thanks to our countless trips: Buenos Aires, Cartagena, Barcelona, Toronto, all unforgettable thanks to his presence, contributions and sense of humor. Rest in peace, dear Chuck. I pray that God comforts the hearts of his family.
  • Clauber Stipkovic: Thank you for everything, my friend. For all the laughter, for all the late nights we spent talking about Mozilla, about life and what we expected from the future. Thank you for being our photographer and recording so many cool moments that we spent together. Unfortunately your future was very short, but I am sure that you recorded your name in the history of everything you did. May your passage be smooth and peaceful.
  • Luigui Delyer (luiguild): Ricardo was present in the best days I have ever had as a Mozillian. He taught me a lot, we enjoyed a lot, we traveled a lot, we taught a lot. His legacy is undeniable; his name will be forever in Mozilla’s history and in our hearts. May the family feel embraced by the entire world community that he helped to build.
  • Fabricio Zuardi: The memories I have of Ricardo are all of a person smiling, cheerful and in high spirits. He left us wonderful records of happy moments. I wish comfort to his family and friends; he was a special person.
  • Guillermo Movia: I don’t remember when I first met Ricardo, but there were so many meetings and travels where our paths crossed. I remember him coming to Mar del Plata to help us taking pictures for the “De todos, para todos” campaign. His pictures were always great, and showed the best of the community. Hope you can rest in peace.
  • Rosana Ardila: Ricardo was part of the soul of the Mozilla Brazil community, he was a kind and wonderful human being. It was humbling to see his commitment to the Mozilla Community. He’ll be deeply missed
  • Andre Garzia: Ricardo has been a friend and my Reps mentor for many years, it was through him and others that I discovered the joys of volunteering in a community. His example, wit, and smile, were always part of what made our community great. Ricardo has been an inspiring figure for me, not only because the love of the web that ties us all here but because he followed his passions and showed me that it was possible to pursue a career in what we loved. He loved photography, biking, and punk music, and that is how I chose to remember him. I’ll always cherish the memories we had travelling the world and exchanging stories. My heart and thoughts go to his beloved partner and family. I’ll miss you a lot my friend.
  • Lenno Azevedo: Ricardo was my second mentor in the Mozilla Reps program; he guided me through the project and showed me the ropes, helping me become a good Rep. I will keep forever the lessons and encouragement he gave me over the years, especially in my current profession. I owe you one, partner. Thank you for everything, rest in peace!
  • Reuben Morais: Ricardo was a beautiful soul, full of energy and smiles. Meeting him at events was always an inspiring opportunity. His energy always made every gathering feel like we all knew each other as childhood friends, I remember feeling this even when I was new. He’ll be missed by all who crossed paths with him.
  • Rubén Martín (nukeador): Ricardo was key in supporting the Mozilla community in Brazil. As a creative mind he was always behind his camera, trying to capture and communicate what was going on; his work will keep his memory alive online. A great memory comes to mind of the time we shared back in 2013, presenting Firefox OS to the world from Barcelona’s Mobile World Congress. You will be deeply missed, all my condolences to his family and close friends. Thank you for everything, rest in peace!
  • Pierros Papadeas: A creative and kind soul, Ricardo will be surely missed by the communities he engaged so passionately.
  • Gloria Meneses: Taking amazing photos, skating and supporting his local community. A very active Mozillian who loved parties after long Reps working sessions, and a beer lover: that’s how I remember Ricardo. The most special memories I have of Ricardo are in Cartagena at the Firefox OS event, in Barcelona at Mobile World Congress taking photos, in Madrid at Reps meetings, and in the IRC channel supporting Mozilla Brazil. I still can’t believe it. Rest in peace Ricardo.
  • William Quiviger: I remember Ricardo being very soft spoken and gently but fiercely passionate about Mozilla and our mission. I remember his eyes lighting up when I approached him about joining the Reps program. Rest in peace Ricardo.
  • Fernando García (stripTM): I am very shocked by this news. It is so sad and so unfair.
  • Mário Rinaldi: Ricardo was a cheerful and jovial person; he will be greatly missed in this world.
  • Lourdes Castillo:  I will always remember Ricardo as a friend and brother who has always been dedicated to the Mozilla community. A tremendous person with a big heart. A hug to heaven and we will always remember you as a tremendous Mozillian and brother! Rest in peace my mozfriend
  • Luis Sánchez (lasr21): The legacy of Ricardo’s passions will live on through the hundreds of new contributors that his work reached.
  • Miguel Useche: Ricardo was one of the first Mozillians I met outside my country. It was interesting to know someone who volunteered at Mozilla, did photography and loved skateboarding, just like me! I became a fan of his art and loved the few times I had the opportunity to share with him. Rest in peace bro!
  • Antonio Ladeia: Ricardo was a special guy, always happy and willing to help. I had the pleasure of meeting him. His death will make this world a little sadder.
  • Eduardo Urcullú (Urcu): Ricardo, better known as “O Pontes”, was truly a very fun friend, although quiet when you didn’t yet know him well. I met him at a free software event back in 2010 (when I still had long hair xD). The photos he took with his camera and his situational humor are things to remember him by. R.I.P. Pontes
  • Dave Villacreses (DaveEcu): Ricardo was part of the early group of supporters here in Latin America; he helped breathe life into our beloved LatAm community. I remember he loved photography and was always full of ideas and interesting comments. Smart and proactive. It is a really sad moment for our entire community.
  • Arturo Martinez: I met Ricardo during the MozCamp LATAM, and since then we became good friends, our paths crossed several times during events, flights, even at the MWC, he was an amazing Mozillian, always making us laugh, taking impressive pictures, with a willpower to defend what he believed, with few words but lots of passion, please rest in peace my friend.
  • Adriano Cupello: The first time we met, we were in Cartagena for the launch of Firefox OS, and I met one of the most amazing groups of people of my life. Pontes was one of them and very quickly became an “old friend”, like the ones we have known at school all our lives. He was an incredible, strong character and a great photographer. He was also my mentor in the Mozilla Reps program. The last time we talked, we tried to have a beer, but due to work circumstances we were unable to. We scheduled it for the next time, and that time never came. This week I will have this beer thinking about him. I would like to invite all of you, at the next beer you have with your friends or alone, to dedicate it to his memory and to the great moments we spent together with him. My condolences and my prayers to his family and his partner @cahcontri, who fought a very painful battle to report his situation until the last day, with all her love. Thank you for all the lovely memories you left in my mind! We will miss you a lot! Cheers Pontes!
  • Rodrigo Padula: There were so many events, beers, good conversations, and so many jokes and laughs that I don’t even remember when I met Ricardo. We shared the same sense of humor and bad jokes. Surely only good memories will remain! Rest in peace Ricardo, we will miss you!
  • Brian King: I was fortunate to have met Ricardo several times. Although quiet, you felt his presence and he was a very cool guy. We’ll miss you, I hope you get that big photo in the sky. RIP Ricardo.

Some pictures of Ricardo’s life as a Mozilla contributor can be found here

Mozilla Add-ons BlogBuilt-in FTP implementation to be removed in Firefox 90

Last year, the Firefox platform development team announced plans to remove the built-in FTP implementation from the browser. FTP is a protocol for transferring files from one host to another.

The implementation is currently disabled in the Firefox Nightly and Beta pre-release channels, and it will be disabled in the release channel when Firefox 88 ships on April 19, 2021. The implementation will be removed in Firefox 90. After FTP is disabled in Firefox, the browser will delegate ftp:// links to external applications in the same manner as other protocol handlers.

With the deprecation, browserSettings.ftpProtocolEnabled will become read-only. Attempts to set this value will have no effect.

Most places where an extension may pass “ftp”, such as filters for proxy or webRequest, should not result in an error, but the APIs will no longer handle requests of those types.

To help offset this removal, ftp has been added to the list of supported protocol_handlers for browser extensions. This means that extensions will be able to prompt users to launch an FTP application to handle certain links.
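As a sketch of what this enables, a manifest.json fragment registering a web-based handler for ftp links might look like the following (the handler name and URL are hypothetical):

```json
"protocol_handlers": [
  {
    "protocol": "ftp",
    "name": "Example FTP handler",
    "uriTemplate": "https://ftp.example.com/open?url=%s"
  }
]
```

Firefox substitutes the clicked ftp:// URL for %s and prompts the user before handing the link to the registered handler.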

Please let us know if you have any questions on our developer community forum.

The post Built-in FTP implementation to be removed in Firefox 90 appeared first on Mozilla Add-ons Blog.

SeaMonkeyMailing Lists

Hi All,

With the decommissioning of the Mozilla newsgroups/mailing lists, we’ve migrated to Google Groups at:

  1. Dev App : https://groups.google.com/a/lists.mozilla.org/g/dev-apps-seamonkey/
  2. Support: https://groups.google.com/a/lists.mozilla.org/g/support-seamonkey/

Having just posted on both lists, I’m going to have to figure out a better way of posting to both lists simultaneously.



SeaMonkeySeaMonkey is released!

Hi all,

The SeaMonkey project would like to announce the official release of SeaMonkey. As this is a security release, please update if you can.


PS: Has the project ever announced an unofficial release?.. hrm..

The Mozilla BlogMozilla partners with NVIDIA to democratize and diversify voice technology

As technology makes a massive shift to voice-enabled products, NVIDIA invests $1.5 million in Mozilla Common Voice to transform the voice recognition landscape

Over the next decade, speech is expected to become the primary way people interact with devices — from laptops and phones to digital assistants and retail kiosks. Today’s voice-enabled devices, however, are inaccessible to much of humanity because they cannot understand vast swaths of the world’s languages, accents, and speech patterns.

To help ensure that people everywhere benefit from this massive technological shift, Mozilla is partnering with NVIDIA, which is investing $1.5 million in Mozilla Common Voice, an ambitious, open-source initiative aimed at democratizing and diversifying voice technology development.

Most of the voice data currently used to train machine learning algorithms is held by a handful of major companies. This poses challenges for others seeking to develop high-quality speech recognition technologies, while also exacerbating the voice recognition divide between English speakers and the rest of the world.

Launched in 2017, Common Voice aims to level the playing field while mitigating AI bias. It enables anyone to donate their voices to a free, publicly available database that startups, researchers, and developers can use to train voice-enabled apps, products, and services. Today, it represents the world’s largest multi-language public domain voice data set, with more than 9,000 hours of voice data in 60 different languages, including widely spoken languages and less used ones like Welsh and Kinyarwanda, which is spoken in Rwanda. More than 164,000 people worldwide have contributed to the project thus far.

This investment will accelerate the growth of Common Voice’s data set, engage more communities and volunteers in the project, and support the hiring of new staff.

To support the expansion, Common Voice will now operate under the umbrella of the Mozilla Foundation as part of its initiatives focused on making artificial intelligence more trustworthy. According to the Foundation’s Executive Director, Mark Surman, Common Voice is poised to pioneer data donation as an effective tool the public can use to shape the future of technology for the better.

“Language is a powerful part of who we are, and people, not profit-making companies, are the right guardians of how language appears in our digital lives,” said Surman. “By making it easy to donate voice data, Common Voice empowers people to play a direct role in creating technology that helps rather than harms humanity. Mozilla and NVIDIA both see voice as a prime opportunity where people can take back control of technology and unlock its full potential.”

“The demand for conversational AI is growing, with chatbots and virtual assistants impacting nearly every industry,” said Kari Briski, senior director of accelerated computing product management at NVIDIA. “With Common Voice’s large and open datasets, we’re able to develop pre-trained models and offer them back to the community for free. Together, we’re working toward a shared goal of supporting and building communities — particularly for under-resourced and under-served languages.”

The post Mozilla partners with NVIDIA to democratize and diversify voice technology appeared first on The Mozilla Blog.

SUMO BlogWhat’s up with SUMO – Q1 2021

Hey SUMO folks,

Starting this month, we’d like to revive our old tradition of summarizing what’s happening in our SUMO nation. Instead of weekly like the old days, we’re going to post monthly updates. This post will be an exception though, as we’d like to recap the entire Q1 of 2021.

So, let’s get to it!

Welcome on board!

  1. Welcome to bingchuanjuzi (rebug). Thank you for contributing to 62 zh-CN articles despite only getting started in October 2020.
  2. Hello and welcome Vinay to the Gujarati localization group. Thanks for picking up the work in a locale that has been inactive for a while.
  3. Welcome back to JCPlus. Thank you for stewarding the Norsk (no) locale.
  4. Welcome brisu and Manu! Thank you for helping us with Firefox for iOS questions.
  5. Welcome Kaio Duarte to the Social Support program!
  6. Welcome back to Devin and Matt C, returning to the Social Support program (Devin has helped us with Buffer Reply, and Matt was part of the Army of Awesome program in the past).

Last but not least, please join us in welcoming Fabi and Daryl to the SUMO team. Fabi is the new Technical Writer (though I should note that she will be helping us with Spanish localization as well) and Daryl is joining us as a Senior User Experience Designer. Welcome both!

Community news

  • Play Store Support is transitioning to Conversocial. Please read the full announcement in our blog if you haven’t.
  • Are you following news about Firefox? If so, I have good news for you. You can now subscribe to the Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • More good news from Twitter-land: we have finally regained access to the @SUMO_mozilla Twitter account (if you want the backstory, watch our community call from March). Go follow the account if you haven’t, because we’re going to use it to share more community updates moving forward.
  • Check out the following release notes from Kitsune in the past quarter:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, February, and March.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats


KB Page views

Month Page views Vs previous month
January 2021 12,860,141 +3.72%
February 2021 11,749,283 -9.16%
March 2021 12,143,366 +3.2%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Marcelo Ghelman
  4. Artist
  5. Underpass

KB Localization

Top 10 locale based on total page views

Locale Jan 2021 Feb 2021 Mar 2021 Localization progress (as of Apr 6)
de 11.69% 11.3% 10.4% 98%
fr 7.33% 7.23% 6.82% 90%
es 5.98% 6.48% 6.4% 47%
zh-CN 4.7% 4.14% 5.94% 97%
ru 4.56% 4.82% 4.41% 99%
pt-BR 4.56% 5.41% 5.8% 72%
ja 3.64% 3.61% 3.68% 57%
pl 2.56% 2.54% 2.44% 83%
it 2.5% 2.44% 2.45% 95%
nl 1.03% 0.99% 0.98% 98%

Top 5 localization contributors in the last 90 days:

  1. Ihor_ck
  2. Artist
  3. Markh2
  4. JimSp472
  5. Goudron

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jan 2020 3936 68.50% 15.52% 70.21%
Feb 2020 3582 65.33% 14.38% 77.50%
Mar 2020 3639 66.34% 14.70% 81.82%

Top 5 forum contributors in the last 90 days:

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Sfhowes
  5. Seburo

Social Support

Channel          Jan 2020                Feb 2020                Mar 2020
                 Total conv / Handled    Total conv / Handled    Total conv / Handled
@firefox         3,675 / 668             3,403 / 136             2,998 / 496
@FirefoxSupport  274 / 239               188 / 55                290 / 206

Top 5 contributors in Q1 2021

  1. Md Monirul Alom
  2. Andrew Truong
  3. Matt C
  4. Devin E
  5. Christophe Villeneuve

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

  • What’s new in Firefox for Android
  • Additional messaging to set Firefox as a default app was added in Firefox for iOS 32.
  • An additional widget for iOS, as well as improvements to bookmarking, were also introduced in v32.

Other products / Experiments

  • VPN MacOS and Linux Release.
  • VPN Feature Updates Release.
  • Firefox Accounts Settings Updates.
  • Mozilla ION → Rally name change
  • Add-ons project – restoring search engine defaults.
  • Sunset of Amazon Fire TV.


If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

The Mozilla BlogReflections on One Year as the CEO of Mozilla

If we want the internet to be different we can’t keep following the same roadmap.

I am celebrating a one-year anniversary at Mozilla this week, which is funny in a way, since I have been part of Mozilla since before it had a name. Mozilla is in my DNA–and some of my DNA is in Mozilla. Twenty-two years ago I wrote the open-source software licenses that still enable our vision, and throughout my years here I’ve worn many hats. But one year ago I became CEO for the second time, and I have to say up front that being CEO this time around is the hardest role I’ve held here. And perhaps the most rewarding.

On this anniversary, I want to open up about what it means to be the CEO of a mission-driven organization in 2021, with all the complications and potential that this era of the internet brings with it. Those of you who know me, know I am generally a private person. However, in a time of rapid change and tumult for our industry and the world, it feels right to share some of what this year has taught me.

Six lessons from my first year as CEO:

1 AS CEO I STRADDLE TWO WORLDS: There has always been a tension at Mozilla, between creating products that reflect our values as completely as we can imagine, and products that fit consumers’ needs and what is possible in the current environment. At Mozilla, we feel the push and pull of competing in the market, while always seeking results from a mission perspective. As CEO, I find myself embodying this central tension.

It’s a tension that excites and energizes me. As co-founder and Chair, and Chief Lizard Wrangler of the Mozilla project before that, I have been the flag-bearer for Mozilla’s value system for many years. I see this as a role that extends beyond Mozilla’s employees. The CEO is responsible for all the employees, volunteers, products and launches and success of the company, while also being responsible for living up to the values that are at Mozilla’s core. Now, I once again wear both of these hats.

I have leaned on the open-source playbook to help me fulfill both of these obligations, attempting to wear one hat at a time, sometimes taking one off and donning the other in the middle of the same meeting. But I also find I am becoming more adept at seamlessly switching between the two, and I find that I can be intensely product oriented, while maintaining our mission as my true north.

2 MOZILLA’S MISSION IS UNCHANGED BUT HOW WE GET THERE MUST: This extremely abnormal year, filled with violence, illness, and struggle, has also confirmed something I already knew: that even amid so much flux, the DNA of Mozilla has not changed since we first formed the foundation out of the Netscape offices so many years ago. Yes, we expanded our mission statement once to be more explicit about the human experience, as a more complete statement of our values.

What has changed is the world around us. And — to stick with the DNA metaphor for a second here — that has changed the epigenetics of Mozilla. In other words, it has changed the way our DNA is expressed.

3 CHANGE REQUIRES FOLLOWING A NEW PATH: We want the internet to be different. We feel an urgency to create a new and better infrastructure for the digital world, to help people get the value of data in a privacy-forward way, and to connect entrepreneurs who also want a better internet.

By definition, if you’re trying to end up in a different place, you can’t keep following the same path. This is my working philosophy. Let me tell a quick story to illustrate what I mean.

Lately we’ve been thinking a lot about data, and what it means to be a privacy-focused company that brings the benefits of data to our users. This balancing act between privacy and convenience is, of course, not a new problem, but as I was thinking about the current ways it manifests, I was reminded of the early days of Firefox.

When we first launched Firefox, we took the view that data was bad — even performance metrics about Firefox that could help us understand how Firefox performs outside of our own test environments, we viewed as private data we didn’t want. Well, you see where this is going, don’t you? We quickly learned that without such data (which we call telemetry), we couldn’t make a well functioning browser. We needed information about when or why a site crashed, how long load times were, etc. And so we took one huge step with launching Firefox, and then we had to take a step sideways, to add in the sufficient — but no more than that! — data that would allow the product to be what users wanted.

In this story you can see how we approach the dual goals of Mozilla: to be true to our values, and to create products that enable people to have a healthier experience on the internet. We find ourselves taking a step sideways to reach a new path to meet the needs of our values, our community and our product.

4 THE SUM OF OUR PARTS: Mozilla’s superpower is that our mission and our structure allow us to benefit from the aggregate strength that’s created by all our employees and volunteers and friends and users and supporters and customers.

We are more than the sum of our parts. This is my worldview, and one of the cornerstones of open-source philosophy. As CEO, one of my goals is to find new ways for Mozilla to connect with people who want to build a better internet. I know there are many people out there who share this vision, and a key goal of the coming era is finding ways to join or help communities that are also working toward a better internet.

5 BRING ME AMBITIOUS IDEAS: I am always looking for good ideas, for big ideas, and I have found that as CEO, more people are willing to come to me with their huge ambitions. I relish it. These ideas don’t always come from employees, though many do. They also come from volunteers, from people outside the company entirely, from academics, friends, all sorts of people. They honor me and Mozilla by sharing these visions, and it’s important to me to keep that dialogue open.

I am learning that it can be jarring to have your CEO randomly stop by your desk for a chat — or in remote working land, to Slack someone unexpectedly — so there need to be boundaries in place, but having a group of people who I can trust to be real with me, to think creatively with me, is essential.

The pandemic has made this part of my year harder, since it has removed the serendipity of conversations in the break room or even chance encounters at conferences that sometimes lead to the next great adventure. But Mozilla has been better poised than most businesses to have an entirely remote year, given that our workforce was already between 40 and 50 percent distributed to begin with.

6 WE SEEK TO BE AN EXAMPLE: One organization can’t change everything. At Mozilla, we dream of an internet and software ecosystem that is diverse and distributed, that uplifts and connects and enables visions for all, not just those companies or people with bottomless bank accounts. We can’t bring about this change single handedly, but we can try to change ourselves where we think we need improvement, and we can stand as an example of a different way to do things. That has always been what we wanted to do, and it remains one of our highest goals.

Above all, this year has reinforced for me that sometimes a deeply held mission requires massive wrenching change in order to be realized. I said last year that Mozilla was entering a new era that would require shifts. Our growing ambition for mission impact brings the will to make these changes, which are well underway. From the earliest days of our organization, people have been drawn to us because Mozilla captures an aspiration for something better and the drive to actually make that something happen. I cannot overstate how inspiring it is to see the dedication of the Mozilla community. I see it in our employees, I see it in our builders, I see it in our board members and our volunteers. I see it in all those who think of Mozilla and support our efforts to be more effective and have more impact. I wouldn’t be here without it. It’s the honor of my life to be in the thick of it with the Mozilla community.

– Mitchell

The post Reflections on One Year as the CEO of Mozilla appeared first on The Mozilla Blog.

Open Policy & AdvocacyMozilla weighs in on political advertising for Commission consultation

Later this year, the European Commission is set to propose new rules to govern political advertising. This is an important step towards increasing the resilience of European democracies, and to respond to the changes wrought by digital campaigning. As the Commission’s public consultation on the matter has come to a close, Mozilla stresses the important role of a healthy internet and reiterates its calls for systemic online advertising transparency globally. 

In recent years political campaigns have increasingly shifted to the digital realm – even more so during the pandemic. This allows campaigners to engage different constituencies in novel ways and enables them to campaign at all when canvassing in the streets is impossible due to public health reasons. However, it has also given rise to new risks. For instance, online political advertising can serve as an important and hidden vector for disinformation, defamation, voter suppression, and evading pushback from political opponents or fact checkers. The ways in which platforms’ design and practices facilitate this and the lack of transparency in this regard have therefore become subject to ever greater scrutiny. This reached a high point around the U.S. presidential elections last year, but it is important to continue to pay close attention to the issue as other countries go to the polls for major elections – in Europe and beyond.

At Mozilla, we have been working to hold platforms more accountable, particularly with regard to advertising transparency and disinformation (see, for example, here, here, here, and here). Pushing for wide-ranging transparency is critical in this context: it enables communities to uncover and protect from harms that platforms alone cannot or fail to avert. We therefore welcome the Commission’s initiative to develop new rules to this end, which Member States can expand upon depending on the country-specific context. The EU Code of Practice on Disinformation, launched in 2019 and which Mozilla is a signatory of, was a first step in the right direction to improve the scrutiny of and transparency around online advertisements. In recent years, large online platforms have made significant improvements in this regard – but they still fall short in various ways. This is why we continue to advocate the mandatory disclosure of all online advertisements, as reflected in our recommendations for the Digital Services Act (DSA) and the European Democracy Action Plan.

As the Commission prepares its proposal, we recommend that lawmakers in the EU and elsewhere consider the following measures, which we believe can enhance transparency and accountability with respect to online political advertising and ultimately increase the resilience of democracies everywhere:

  • Develop a clear definition of political advertising: Defining political advertising is a complicated exercise, forcing regulators to draw sharp lines over fuzzy boundaries. Nonetheless, in order to ensure heightened oversight, we need a functional definition of what does and does not constitute political advertising. In coming up with a definition, regulators should engage with experts from civil society, academia, and industry and draw inspiration from “offline” definitions of political advertising.
  • Address novel forms of political advertising online: When defining political advertising, regulators should also include political content that users are paid by political actors to create and promote (i.e. paid influencer content). Platforms should provide self-disclosure mechanisms for users to indicate these partnerships when they upload content (as Instagram and YouTube have done). This self-disclosed political advertising should be labeled as such to end-users and be included in the ad archives maintained by platforms.
  • Ramp up disclosure obligations for ‘political’ advertising: As part of its proposal for the DSA, the Commission already foresees a mandate for large platforms to publicly disclose all advertisements through ad archive APIs in order to facilitate increased scrutiny and study of online advertising. These disclosure obligations closely resemble those previously advocated by Mozilla. Importantly, this would apply to all advertising so as to prevent under-disclosure should some relevant advertisements elude an eventual definition of political advertising. With this baseline, enhanced disclosure obligations should be required for advertisements that are considered political given their special role in and potentially harmful effects on the democratic process and public discourse. Amongst others, Stiftung Neue Verantwortung, the European Partnership for Democracy, and ourselves have offered ideas on the specifics of such an augmented disclosure regime. For example, this should include more fine-grained information on targeting parameters and methods used by advertisers, audience engagement, ad spend, and other versions of the ad in question that were used for a/b testing.
  • Enhance user-facing transparency: Information on political advertising should not only be available via ad archive APIs, but also directly to users as they encounter an advertisement. Such ads should be labeled in a way that clearly distinguishes them from organic content. Additional information, for example on the sponsor or on why a person was targeted, should be presented in an intelligible manner and either be included in the label or easily accessible from the specific content display. Further, platforms could be obliged to allow third parties to build tools providing users with new insights about, for instance, how and by whom they are being targeted.

Finally, we recommend that the Commission explore the following approaches should it seek to put limits on microtargeting of political advertisements in its upcoming proposal:

  • Restrict targeting parameters: Parameters for micro-targeting of political advertising could exclude sensitive, behavioral, and inferred information as well as information from external datasets uploaded by advertisers, as others have argued. In line with Mozilla’s commitment to Lean Data Practices, this would discourage large-scale data collection by political advertisers and level the playing field for those who lack large amounts of data – so that political campaigns remain competitions of ideas, not of who collects the most data.

While online political advertising and our understanding of the accompanying challenges will continue to evolve, the recommended measures would help make great strides towards protecting the integrity of elections and civic discourse. We look forward to working with lawmakers and the policy community to advance this shared objective and ensure that the EU’s new rules will hit the mark.

The post Mozilla weighs in on political advertising for Commission consultation appeared first on Open Policy & Advocacy.

Blog of DataThis Week in Glean: Publishing Glean.js or “How I configured an npm package that has multiple entry points”

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

A few weeks ago, the time came for us to publish the first version of Glean.js on npm. (Yes, it has been published. Go take a look). In order to publish a package on npm, it is important to define the package entry points in the project’s package.json file. The entry point is the path to the file that should be loaded when users import a package through import Package from "package-name" or const Package = require("package-name").

My knowledge in this area went as far as “Hm, I think the main field in the package.json is where we define the entry point, right?”. Yes, I was right about that, but it turns out that was not enough for Glean.js.

The case of Glean.js

Glean.js is an implementation of Glean for Javascript environments. “Javascript environments” can mean multiple things: Node.js servers, Electron apps, websites, webextensions… The list goes on. To complicate things, Glean.js needs to access a bunch of platform specific APIs such as client side storage. We designed Glean.js in such a way that platform specific code is abstracted away under the Platform module, but when users import Glean.js all of this should be opaque.

So, we decided to provide a different package entry point per environment. This way, users can import the correct Glean for their environment without caring about internal architecture details, e.g. import Glean from "glean/webext" imports the version of Glean that uses the web extensions implementation of the Platform module.

The main field I mentioned above works when the package has one single entry point. What do you do when the package has multiple entry points?

The exports field

Lucky for us, starting from Node v12.7.0, Node recognizes the exports field in the package.json. This field accepts objects, so you can define mappings for all your package entry points.

{
  "name": "glean",
  "exports": {
    "./webext": "./path/to/entry/point/webext.js",
    "./node": "./path/to/entry/point/node.js"
  }
}

Another nice thing about the exports field is that it denies access to any entry point that is not defined in the exports map. Users can’t just import any file in your package anymore. Neat.

We must also define entry points for the type declarations of our package. Type declarations are necessary for users attempting to import the package in Typescript code. Glean.js is in Typescript, so it is easy enough for us to generate the type definitions, but we hit a wall when we wanted to expose the generated definitions. From the “Publishing” page of Typescript’s documentation, this is the example provided:

{
  "name": "awesome",
  "author": "Vandelay Industries",
  "version": "1.0.0",
  "main": "./lib/main.js",
  "types": "./lib/main.d.ts"
}

Notice the types property. It works just like the main property. It does not accept an object, only a single entry point. And here we go again, what do you do when the package has multiple entry points?

The typesVersions workaround

This time I won’t say “Lucky for us Typescript has this other property starting from version…”. Turns out Typescript, as I am writing this blog post, doesn’t yet provide a way for packages to define multiple entry points for their types declarations.

Typescript lets packages define different types declarations per Typescript version, through the typesVersions property. This property does accept mappings of entry points to files. Smart people on the internet figured out that we can use this property to define different types declarations for each of our package entry points. For more discussion on the topic, follow issue #33079.

Back to our previous example, type definitions mappings would look like this in our package.json:

{
  "name": "glean",
  "exports": {
    "./webext": "./path/to/entry/point/webext.js",
    "./node": "./path/to/entry/point/node.js"
  },
  "typesVersions": {
    "*": {
      "./webext": [ "./path/to/types/definitions/webext.d.ts" ],
      "./node": [ "./path/to/types/definitions/node.d.ts" ]
    }
  }
}

Alright, this is great. So we are done, right? Not yet.

Conditional exports

Our users can finally import our package in Javascript and Typescript and they have well defined entry points to choose from depending on the platform they are building for.

If they are building for Node.js though, they still might encounter issues. The default module system used by Node.js is CommonJS. This is the one where we import packages using the const Package = require("package") syntax and export modules using the module.exports = Package syntax.

Newer versions of Node also support the ECMAScript module system, also known as ESM. This is the official Javascript module system, and is the one where we import packages using the import Package from "package" syntax and export modules using the export default Package syntax.

Packages can provide different builds using each module system. In the exports field, Node.js allows packages to define different export paths to be imported depending on the module system a user is relying on. This feature is called “conditional exports”.

Assuming you have gone through all the setup involved in building a hybrid NPM module for both ESM and CommonJS (to learn more about how to do that, refer to this great blog post), this is how our example can be changed to use conditional exports:

{
  "name": "glean",
  "exports": {
    "./webext": "./path/to/entry/point/webext.js",
    "./node": {
      "import": "./path/to/entry/point/node.js",
      "require": "./path/to/entry/point/node.cjs"
    }
  },
  "typesVersions": {
    "*": {
      "./webext": [ "./path/to/types/definitions/webext.d.ts" ],
      "./node": [ "./path/to/types/definitions/node.d.ts" ]
    }
  }
}

The same change is not necessary for the ./webext entry point, because users building for browsers will need to use bundlers such as Webpack and Rollup, which have their own implementation of import/require statement resolutions and are able to import both ESM and CommonJS modules either out-of-the-box or through plugins.

Note that there is also no need to change the typesVersions value for ./node after this change.

Final considerations

Although the steps in this post look straightforward enough, it took me quite a while to figure out the correct way to configure the Glean.js’ entry points. I encountered many caveats along the way, such as the typesVersions workaround I mentioned above, but also:

  • In order to support ES6 modules, it is necessary to include the filename and extension in all internal package import statements. CommonJS infers the extension and the filename when it is not provided, but ES6 doesn’t. This gets extra weird in Glean.js’ codebase, because Glean.js is in Typescript and all our import statements still have the .js extension. See more discussion about this on this issue and our commit with this change.
  • Webpack, below version 5, does not support the exports field and is not able to import a package that defines entry points only using this feature. See the Webpack 5 release notes.
  • Other exports conditions such as browser, production or development are mentioned in the Node.js documentation, but are ultimately ignored by Node.js. They are used by bundlers such as Webpack and Rollup. The Webpack documentation has a comprehensive list of all the conditions you can possibly include in that list, which bundler supports each, and whether Node.js supports it too.

Hope this guide is helpful to other people on the internet. Bye! 👋

hacks.mozilla.orgEliminating Data Races in Firefox – A Technical Report

We successfully deployed ThreadSanitizer in the Firefox project to eliminate data races in our remaining C/C++ components. In the process, we found several impactful bugs and can safely say that data races are often underestimated in terms of their impact on program correctness. We recommend that all multithreaded C/C++ projects adopt the ThreadSanitizer tool to enhance code quality.

What is ThreadSanitizer?

ThreadSanitizer (TSan) is compile-time instrumentation to detect data races according to the C/C++ memory model on Linux. It is important to note that these data races are considered undefined behavior within the C/C++ specification. As such, the compiler is free to assume that data races do not happen and perform optimizations under that assumption. Detecting bugs resulting from such optimizations can be hard, and data races often have an intermittent nature due to thread scheduling.

Without a tool like ThreadSanitizer, even the most experienced developers can spend hours on locating such a bug. With ThreadSanitizer, you get a comprehensive data race report that often contains all of the information needed to fix the problem.

An example ThreadSanitizer report, showing where each thread reads and writes, the memory location they both access, and where the threads were created (output shortened for the article).

One important property of TSan is that, when properly deployed, the data race detection does not produce false positives. This is incredibly important for tool adoption, as developers quickly lose faith in tools that produce uncertain results.

Like other sanitizers, TSan is built into Clang and can be used with any recent Clang/LLVM toolchain. If your C/C++ project already uses e.g. AddressSanitizer (which we also highly recommend), deploying ThreadSanitizer will be very straightforward from a toolchain perspective.

Challenges in Deployment

Benign vs. Impactful Bugs

Despite ThreadSanitizer being a very well designed tool, we had to overcome a variety of challenges at Mozilla during the deployment phase. The most significant issue we faced was that it is really difficult to prove that data races are actually harmful at all and that they impact the everyday use of Firefox. In particular, the term “benign” came up often. Benign data races acknowledge that a particular data race is actually a race, but assume that it does not have any negative side effects.

While benign data races do exist, we found (in agreement with previous work on this subject [1] [2]) that data races are very easily misclassified as benign. The reasons for this are clear: It is hard to reason about what compilers can and will optimize, and confirmation for certain “benign” data races requires you to look at the assembler code that the compiler finally produces.

Needless to say, this procedure is often much more time consuming than fixing the actual data race and also not future-proof. As a result, we decided that the ultimate goal should be a “no data races” policy that declares even benign data races as undesirable due to their risk of misclassification, the required time for investigation and the potential risk from future compilers (with better optimizations) or future platforms (e.g. ARM).

However, it was clear that establishing such a policy would require a lot of work, both on the technical side as well as in convincing developers and management. In particular, we could not expect a large amount of resources to be dedicated to fixing data races with no clear product impact. This is where TSan’s suppression list came in handy:

We knew we had to stop the influx of new data races but at the same time get the tool usable without fixing all legacy issues. The suppression list (in particular the version compiled into Firefox) allowed us to temporarily ignore data races once we had them on file and ultimately bring up a TSan build of Firefox in CI that would automatically avoid further regressions. Of course, security bugs required specialized handling, but were usually easy to recognize (e.g. racing on non-thread safe pointers) and were fixed quickly without suppressions.

To help us understand the impact of our work, we maintained an internal list of all the most serious races that TSan detected (ones that had side-effects or could cause crashes). This data helped convince developers that the tool was making their lives easier while also clearly justifying the work to management.

In addition to this qualitative data, we also decided for a more quantitative approach: We looked at all the bugs we found over a year and how they were classified. Of the 64 bugs we looked at, 34% were classified as “benign” and 22% were “impactful” (the rest hadn’t been classified).

We knew there was a certain amount of misclassified benign issues to be expected, but what we really wanted to know was: Do benign issues pose a risk to the project? Assuming that all of these issues truly had no impact on the product, are we wasting a lot of resources on fixing them? Thankfully, we found that the majority of these fixes were trivial and/or improved code quality.

The trivial fixes were mostly turning non-atomic variables into atomics (20%), adding permanent suppressions for upstream issues that we couldn’t address immediately (15%), or removing overly complicated code (20%). Only 45% of the benign fixes actually required some sort of more elaborate patch (as in, the diff was larger than just a few lines of code and did not just remove code).

We concluded that the risk of benign issues becoming a major resource sink was low, and well worth the overall gains that the project provided.

False Positives?

As mentioned in the beginning, TSan does not produce false positive data race reports when properly deployed, which includes instrumenting all code that is loaded into the process and avoiding primitives that TSan doesn’t understand (such as atomic fences). For most projects these conditions are trivial, but larger projects like Firefox require a bit more work. Thankfully this work largely amounted to a few lines in TSan’s robust suppression system.

Instrumenting all code in Firefox isn’t currently possible because it needs to use shared system libraries like GTK and X11. Fortunately, TSan offers the “called_from_lib” feature that can be used in the suppression list to ignore any calls originating from those shared libraries. Our other major source of uninstrumented code was build flags not being properly passed around, which was especially problematic for Rust code (see the Rust section below).

As for unsupported primitives, the only issue we ran into was the lack of support for fences. Most fences were the result of a standard atomic reference counting idiom which could be trivially replaced with an atomic load in TSan builds. Unfortunately, fences are fundamental to the design of the crossbeam crate (a foundational concurrency library in Rust), and the only solution for this was a suppression.

We also found a (well-known) false positive in deadlock detection; it is, however, very easy to spot and does not affect data race detection or reporting at all. In a nutshell, any deadlock report that involves only a single thread is likely this false positive.

The only true false positive we found so far turned out to be a rare bug in TSan and was fixed in the tool itself. However, developers claimed on various occasions that a particular report must be a false positive. In all of these cases, it turned out that TSan was indeed right and the problem was just very subtle and hard to understand. This again confirms that we need tools like TSan to help us eliminate this class of bugs.

Interesting Bugs

Currently, the TSan bug-o-rama contains around 20 bugs. We’re still working on fixes for some of these bugs and would like to point out several particularly interesting/impactful ones.

Beware Bitfields

Bitfields are a handy little convenience to save space for storing lots of different small values. For instance, rather than having 30 bools taking up 240 bytes, they can all be packed into 4 bytes. For the most part this works fine, but it has one nasty consequence: different pieces of data now alias. This means that accessing “neighboring” bitfields is actually accessing the same memory, and therefore a potential data race.

In practical terms, this means that if two threads are writing to two neighboring bitfields, one of the writes can get lost, as both of those writes are actually read-modify-write operations on all of the bitfields.
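A minimal sketch of why this happens (hypothetical flag names):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flags packed into a single 32-bit word: each flag costs
// one bit instead of one byte, but all of them now alias the same
// memory location.
struct PackedFlags {
  uint32_t isInitialized : 1;
  uint32_t isPinned : 1;
  // ...more flags sharing the same word...
};

// `flags.isInitialized = true` compiles to a read-modify-write of the
// whole word: load it, set one bit, store everything back. If another
// thread concurrently updates `isPinned` the same way, one of the two
// stores can overwrite the other and an update is silently lost.
void SetInitialized(PackedFlags& flags) { flags.isInitialized = true; }
```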

If you’re familiar with bitfields and actively thinking about them, this might be obvious, but when you’re just saying myVal.isInitialized = true you may not think about or even realize that you’re accessing a bitfield.

We have had many instances of this problem, but let’s look at bug 1601940 and its (trimmed) race report:

When we first saw this report, it was puzzling because the two threads in question touch different fields (mAsyncTransformAppliedToContent vs. mTestAttributeAppliers). However, as it turns out, these two fields are both adjacent bitfields in the class.

This was causing intermittent failures in our CI and cost a maintainer of this code valuable time. We find this bug particularly interesting because it demonstrates how hard it is to diagnose data races without appropriate tooling and we found more instances of this type of bug (racy bitfield write/write) in our codebase. One of the other instances even had the potential to cause network loads to supply invalid cache content, another hard-to-debug situation, especially when it is intermittent and therefore not easily reproducible.

We encountered this enough that we eventually introduced a MOZ_ATOMIC_BITFIELDS macro that generates bitfields with atomic load/store methods. This allowed us to quickly fix problematic bitfields for the maintainers of each component without having to redesign their types.
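We don’t reproduce the actual macro here, but the underlying idea can be sketched as a single atomic word with per-flag masked accessors (a simplified illustration, not the real MOZ_ATOMIC_BITFIELDS code):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Keep the packed flags in one atomic word and go through masked
// load/store helpers, so concurrent writes to neighboring flags become
// atomic read-modify-writes instead of racy ones.
class AtomicFlags {
  std::atomic<uint32_t> mBits{0};

 public:
  void Set(uint32_t aMask, bool aValue) {
    if (aValue) {
      mBits.fetch_or(aMask, std::memory_order_relaxed);
    } else {
      mBits.fetch_and(~aMask, std::memory_order_relaxed);
    }
  }
  bool Get(uint32_t aMask) const {
    return (mBits.load(std::memory_order_relaxed) & aMask) != 0;
  }
};

constexpr uint32_t kInitialized = 1u << 0;
constexpr uint32_t kPinned = 1u << 1;
```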

Oops That Wasn’t Supposed To Be Multithreaded

We also found several instances of components which were explicitly designed to be single-threaded accidentally being used by multiple threads, such as bug 1681950:

The race itself here is rather simple: we are racing on the same file through stat64, and understanding the report was not the problem this time. However, as can be seen from frame 10, this call originates from the PreferencesWriter, which is responsible for writing changes to the prefs.js file, the central storage for Firefox preferences.

It was never intended for this to be called on multiple threads at the same time and we believe that this had the potential to corrupt the prefs.js file. As a result, during the next startup the file would fail to load and be discarded (reset to default prefs). Over the years, we’ve had quite a few bug reports related to this file magically losing its custom preferences but we were never able to find the root cause. We now believe that this bug is at least partially responsible for these losses.

We think this is a particularly good example of a failure for two reasons: it was a race that had more harmful effects than just a crash, and it caught a larger logic error of something being used outside of its original design parameters.

Late-Validated Races

On several occasions we encountered a pattern that lies on the boundary of benign that we think merits some extra attention: intentionally racily reading a value, but then later doing checks that properly validate it. For instance, code like:
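The original snippet is not reproduced here, but the pattern looks roughly like this (hypothetical names), together with the atomic version that should replace it:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical reconstruction of the pattern: a fast path reads shared
// state with no synchronization, relying on a later, properly locked
// re-check to catch a stale value. Even when the re-check is correct,
// the unsynchronized read is a data race and thus undefined behavior.
int gInitStage = 0;  // written concurrently by another thread

bool FastPathReady_Racy() {
  if (gInitStage != 2) {  // racy read, "validated" later
    return false;
  }
  // ...later: take a lock and re-check gInitStage properly...
  return true;
}

// The fix is simply making the shared state atomic; a relaxed load is
// essentially free on mainstream hardware.
std::atomic<int> gInitStageAtomic{0};

bool FastPathReady() {
  return gInitStageAtomic.load(std::memory_order_relaxed) == 2;
}
```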

See for example, this instance we encountered in SQLite.

Please Don’t Do This. These patterns are really fragile and they’re ultimately undefined behavior, even if they generally work right. Just write proper atomic code — you’ll usually find that the performance is perfectly fine.

What about Rust?

Another difficulty that we had to solve during TSan deployment was due to part of our codebase now being written in Rust, which has much less mature support for sanitizers. This meant that we spent a significant portion of our bringup with all Rust code suppressed while that tooling was still being developed.

We weren’t particularly concerned with our Rust code having a lot of races, but rather races in C++ code being obfuscated by passing through Rust. In fact, we strongly recommend writing new projects entirely in Rust to avoid data races altogether.

The hardest part was the need to rebuild the Rust standard library with TSan instrumentation. On nightly there is an unstable feature, -Zbuild-std, that lets us do exactly that, but it still has a lot of rough edges.

Our biggest hurdle with build-std was that it’s currently incompatible with vendored build environments, which Firefox uses. Fixing this isn’t simple because cargo’s tools for patching in dependencies aren’t designed for affecting only a subgraph (i.e. just std and not your own code). So far, we have mitigated this by maintaining a small set of patches on top of rustc/cargo which implement this well-enough for Firefox but need further work to go upstream.
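For illustration, enabling an instrumented standard library on nightly can be sketched in a `.cargo/config.toml` like this (the feature is unstable, so the exact spelling of these options may change):

```toml
# .cargo/config.toml: nightly-only sketch; -Zbuild-std is unstable.
[unstable]
build-std = ["std", "panic_abort"]

[build]
# build-std requires an explicit target triple.
target = "x86_64-unknown-linux-gnu"
rustflags = ["-Zsanitizer=thread"]
```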

But with build-std hacked into working for us, we were able to instrument our Rust code and were happy to find that there were very few problems! Most of the things we discovered were C++ races that happened to pass through some Rust code and had therefore been hidden by our blanket suppressions.

We did however find two pure Rust races:

The first was bug 1674770, which was a bug in the parking_lot library. This Rust library provides synchronization primitives and other concurrency tools and is written and maintained by experts. We did not investigate the impact, but the issue was a couple of atomic orderings being too weak, and it was fixed quickly by the authors. This is yet another example of how difficult it is to write bug-free concurrent code.

The second was bug 1686158, which was some code in WebRender’s software OpenGL shim. They were maintaining some hand-rolled shared-mutable state using raw atomics for part of the implementation but forgot to make one of the fields atomic. This was easy enough to fix.

Overall Rust appears to be fulfilling one of its original design goals: allowing us to write more concurrent code safely. Both WebRender and Stylo are very large and pervasively multi-threaded, but have had minimal threading issues. What issues we did find were mistakes in the implementations of low-level and explicitly unsafe multithreading abstractions — and those mistakes were simple to fix.

This is in contrast to many of our C++ races, which often involved things being randomly accessed on different threads with unclear semantics, necessitating non-trivial refactorings of the code.


Data races are an underestimated problem. Due to their complexity and intermittency, we often struggle to identify them, locate their cause and judge their impact correctly. In many cases, this is also a time-consuming process, wasting valuable resources. ThreadSanitizer has proven to be not just effective in locating data races and providing adequate debug information, but also to be practical even on a project as large as Firefox.


We would like to thank the authors of ThreadSanitizer for providing the tool and in particular Dmitry Vyukov (Google) for helping us with some complex, Firefox-specific edge cases during deployment.

The post Eliminating Data Races in Firefox – A Technical Report appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogSoftware Innovation Prevails in Landmark Supreme Court Ruling in Google v. Oracle

In an important victory for software developers, the Supreme Court ruled today that reimplementing an API is fair use under US copyright law. The Court’s reasoning should apply to all cases where developers reimplement an API, to enable interoperability, or to allow developers to use familiar commands. This resolves years of uncertainty, and will enable more competition and follow-on innovation in software.

Yes you would – Credit: Parker Higgins (https://twitter.com/XOR)

This ruling arrives after more than ten years of litigation, including two trials and two appellate rulings from the Federal Circuit. Mozilla, together with other amici, filed several briefs throughout this time because we believed the rulings were at odds with how software is developed, and could hinder the industry. Fortunately, in a 6-2 decision authored by Justice Breyer, the Supreme Court overturned the Federal Circuit’s error.

When the case reached the Supreme Court, Mozilla filed an amicus brief arguing that APIs should not be copyrightable or, alternatively, reimplementation of APIs should be covered by fair use. The Court took the second of these options:

We reach the conclusion that in this case, where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.

In reaching his conclusion, Justice Breyer noted that reimplementing an API “can further the development of computer programs.” This is because it enables programmers to use their knowledge and skills to build new software. The value of APIs is not so much in the creative content of the API itself (e.g. whether a particular API is “Java.lang.Math.max” or, as the Federal Circuit once suggested as an alternative, “Java.lang.Arith.Larger”) but in the acquired experience of the developer community that uses it.

We are pleased that the Supreme Court has reached this decision and that copyright will no longer stand in the way of software developers reimplementing APIs in socially, technologically, and economically beneficial ways.

The post Software Innovation Prevails in Landmark Supreme Court Ruling in Google v. Oracle appeared first on The Mozilla Blog.

hacks.mozilla.orgA web testing deep dive: The MDN web testing report

For the last couple of years, we’ve run the MDN Web Developer Needs Assessment (DNA) Report, which aims to highlight the key issues faced by developers building web sites and applications. This has proved to be an invaluable source of data for browser vendors and other organizations to prioritize improvements to the web platform. This year we did a deep dive into web testing, and we are delighted to be able to announce the publication of this follow-on work, available at our insights.developer.mozilla.org site along with our other Web DNA publications.

Why web testing?

In the Web DNA studies for 2019 and 2020, developers ranked the need “Having to support specific browsers, (e.g., IE11)” as the most frustrating aspect of web development, among 28 needs. The 2nd and 3rd rankings were also related to browser compatibility:

  1. Avoiding or removing a feature that doesn’t work across browsers
  2. Making a design look/work the same across browsers

In 2020, we released our browser compatibility research results — a deeper dive into identifying specific issues around browser compatibility and pinpointing what can be done to mitigate these issues.

This year we decided to follow up with another deep dive focused on the 4th most frustrating aspect of developing for the web, “Testing across browsers.” It follows on nicely from the previous deep dive, and also concerns much-sought-after information.

You can download this report directly — see the Web Testing Report (PDF, 0.6MB).

A new question for 2020

Based on the 2019 ranking of “testing across browsers”, we introduced a new question to the DNA survey in 2020: “What are the biggest pain points for you when it comes to web testing?” We wanted to understand more about this need and what some of the underlying issues are.

Respondents could choose one or more of the following answers:

  • Time spent on manual testing (e.g. due to lack of automation).
  • Slow-running tests.
  • Running tests across multiple browsers.
  • Test failures are hard to debug or reproduce.
  • Lack of debug tooling support (browser dev tools or IDE integration).
  • Difficulty diagnosing performance issues.
  • Tests are difficult to write.
  • Difficult to set up an adequate test environment.
  • No pain points.
  • Other.

Results summary

7.5% of respondents (out of 6,645) said they don’t have pain points with web testing. For those who did, the biggest pain point is the time spent on manual testing.

To better understand the nuances behind these results, we ran a qualitative study on web testing. The study consisted of twenty one-hour interviews with web developers who took the 2020 DNA survey and agreed to participate in follow-up research.

The results will help browser vendors understand whether to accelerate work on WebDriver Bidirectional Protocol (BiDi) or if the unmet needs lie elsewhere. Our analysis on WebDriver BiDi is based on the assumption that the feature gap between single-browser test tooling and cross-browser test tooling is a source of pain. Future research on the struggles developers have will be able to focus the priorities and technical design of that specification to address the pain points.

Key Takeaways

  • In the 2020 Web DNA report, we included the results of a segmentation study. One of the seven segments that emerged was “Testing Technicians”. The name implies that the segment does testing and therefore finds frustration while doing tests. This is correct, but what’s also true is that developers commonly see a high entry barrier to testing, which contributes to their frustration.
  • Defining a testing workflow, choosing tools, writing tests, and running tests all take time. Many developers face pressure to develop and launch products under tight deadlines. Testing or not testing is a tradeoff between the perceived value that testing adds compared to the time it will take to implement.
  • Some developers are aware of testing but limited by their lack of knowledge in the area. This lack of knowledge is a barrier to successfully implementing a testing strategy. Other developers are aware of what testing is and how to do it, but they still consider it frustrating. Rather than lacking knowledge, this second group lacks the time and resources to run tests to the degree that they’d ideally like.
  • For some developers, what constitutes a test type is unclear. Additionally, the line between coding and testing can be blurry.
  • For developers who have established a testing workflow, the best way to describe how it came to be is evolutionary: the workflow is generally being continuously improved.
  • Browser vendors assumed unit testing to be a common type of testing and that it’s a well-developed space without a lot of pain points. However, what we learned is that there are more challenges with unit testing code that runs in the browser than anticipated, and there’s the same time pressure as elsewhere, meaning it doesn’t happen as frequently as expected.
  • In the most general of summaries, one could conclude that testing should take less time than it does.
  • Stakeholders had assumed that developers want to test their code in as many browsers as they can and that they’re just limited by the browsers their tools support. What we learned is that the decision of which browsers to support does not depend on the tools they use. Conversely, which browsers they support drives the decisions about which tools they use.

The post A web testing deep dive: The MDN web testing report appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey Support newsgroups and mailing lists…


Another quick (albeit old) notice.

Both the SeaMonkey-dev, and Seamonkey-support newsgroups on news.mozilla.org (as well as their associated mailing lists) are going to go the way of the Dodo.  They’re going to be removed.  Gone.  Hog-tied and sent to the farm (no clue where this came from).


News.mozilla.org is going to be decommissioned.  Removed from service…  etc.


Coz the Rock says so!

Well.. Not really.  The Rock didn’t say anything… nor would he care about this.  Or maybe?  Hey, Dwayne,  if you’re reading this..  Hi!  If you’re not reading this…   *shrugs*.

Anyway, the rationale for the decommissioning of news.mozilla.org is…

Nevermind.   No point in crying over spilt milk.  Mozilla says it’s going.

So.. My point.

We are in the midst of finding a suitable replacement.  Please check this blog regularly as I will most likely post an update on this soon.  Or on irc.   Freenode.  #seamonkey.   Little point in checking up on the newsgroups unless I get a post in before the closure…. but then again, no one would be able to read it.  Would it?  *sigh*

As you probably have guessed.  I’m feeling snarky today.  Snark Day.  ;P

Oh.  Perhaps Cynic day?   Since this blog is also hosted by Mozilla….  *sigh*.

Anyways, will keep you all posted here.

Cynically and Snarkily yours,


PS: Or maybe sighday?

SeaMonkeySeaMonkey 2.53.7 has been released!

Hi All,

Just want to (albeit belatedly) announce that SeaMonkey 2.53.7 has been released…  Yesterday…  In the afternoon…

Yes.  I hardly call this news then.  Well… what can I say.

Absolutely nothing…   [Doesn’t this trigger you thinking about Edwin Starr’s song “War (What is it good for)”? ]  Hm.  I think this is going to be stuck in my brain for the rest of the day.   Oh yay.

I digress.

Kudos to those involved.  You know who you are.  *wink* *wink* *nudge* *nudge* Say no more! Say No More!

Back to your regularly scheduled program..



Blog of DataMaking your Data Work for you with Mozilla Rally

Every week brings new reports of data leaks, privacy violations, rampant misinformation, or discriminatory AIs. It’s frustrating, because we have so little insight into how major technology companies shape our online experiences.  We also don’t understand the extent of data that online companies collect from us. Without meaningful transparency, we will never address the roots of these problems. 

We are exploring ways to change the dynamics of who controls our data and how we understand our everyday online experiences. In the coming weeks we will launch Mozilla Rally, a participatory data science platform for the Mozilla community.  Rally will invite people to put their data to work, not only for themselves, but for a better society.  

Working alongside other mission-aligned partners, we’ll shine a light on the Internet’s big problems.  We’ll explore ideas for new data products that tip the balance back to consumers. And we’ll do all of this out in the open, sharing and documenting every part of our journey together. You can sign up for the Rally waitlist to be notified when we launch.

Stay tuned!

The Mozilla BlogLatest Mozilla VPN features keep your data safe

It’s been less than a year since we launched Mozilla VPN, our fast and easy-to-use Virtual Private Network service brought to you by a trusted name in online consumer security and privacy services. Since then we added our Mozilla VPN service to Mac and Linux platforms, joining our VPN service offerings on Windows, Android and iOS platforms. As restrictions are slowly easing up and people are becoming more comfortable leaving their homes, one of the ways to keep your information safe when you go online is our Mozilla VPN service. Our Mozilla VPN provides encryption and device-level protection of your connection and information when you are on the Web.

Today, we’re launching two new features to give you an added layer of protection with our trusted Mozilla VPN service. Mozilla has a reputation for building products that help you keep your information safe. These new features will help users do the following:

For those who watch out for unsecured networks

If you’re someone who keeps our Mozilla VPN service off and prefers to turn it on manually, this feature will help you out. We’ll notify you when you’ve joined a network that is not password protected or uses weak encryption. By just clicking on the notification you can turn the Mozilla VPN service on, giving you an added layer of protection and ensuring every conversation you have is encrypted over the network. This feature is available on Windows, Linux, Mac, Android and iOS platforms.

For those at home, who want to keep all your devices connected

Occasionally, you might need to print out forms for an upcoming doctor visit or your kid’s worksheets to keep them busy. Now, we’ve added Local Area Network Access, so your devices can talk with each other without having to turn off your VPN. Just make sure that the box is checked in Network Settings when you are on your home network.  This feature is available on Windows, Linux, Mac and Android platforms.

Why use our trusted Mozilla VPN service?

Since our launch last year, we’ve had thousands of people sign up to use our trusted Mozilla VPN service. Mozilla has built a reputation for building products that respect your privacy and keep your information safe. With Mozilla VPN service you can be sure your activity is encrypted across all applications and websites, whatever device you are on.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand. We have plans to expand to other countries this Spring.

We know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business. Check out the Mozilla VPN and subscribe today from our website.


The post Latest Mozilla VPN features keep your data safe appeared first on The Mozilla Blog.

SUMO BlogIntroducing Daryl Alexsy

Hey everybody,

Please join us to welcome Daryl Alexsy to the Customer Experience team! Daryl is a Senior User Experience Designer who will be helping SUMO as well as the MDN team. Please, say hi to Daryl!


Here’s a short introduction from her:

Hi everyone! I’m Daryl, and I’ll be joining the SUMO team as a UX designer. I am looking forward to working together with you all to create a better experience for both readers and contributors of the platform, so please don’t hesitate to reach out with any observations or suggestions for how we can make that happen.


Welcome Daryl!

hacks.mozilla.orgMDN localization in March — Tier 1 locales unfrozen, and future plans

Since we last talked about MDN localization, a lot of progress has been made. In this post we’ll talk you through the unfreezing of Tier 1 locales, and the next steps in our plans to stop displaying non-active and unmaintained locales.

Tier 1 locales unfrozen!

It has been a long time coming, but we’ve finally achieved our goal of unfreezing our Tier 1 locales. The fr, ja, ru, zh-CN, and zh-TW locales can now be edited, and we have active teams working on each of these locales. We added Russian (ru) to the list very recently, after great interest from the community helped us rapidly assemble a team to maintain those docs — we are really excited about making progress here!

If you are interested in helping out with these locales, or asking questions, you can find all the information you need at our all-new translated-content README. This includes:

  • How to contribute
  • The policies in place to govern the work
  • Who is in the active localization teams
  • How the structure is kept in sync with the en-US version.

We’d like to thank everyone who helped us get to this stage, especially the localization team members who have stepped up to help us maintain our localized content:

Stopping the display of unmaintained locales on MDN

Previously we said that we were planning to stop the display of all locales except for en-US, and our Tier 1 locales.

We’ve revised this plan a little since then — we looked at the readership figures of each locale, as a percentage of the total MDN traffic, and decided that we should keep a few more than just the 5 we previously mentioned. Some of the viewing figures for non-active locales are quite high, so we thought it would be wise to keep them and try to encourage teams to start maintaining them.

In the end, we decided to keep the following locales:

  • en-US
  • es
  • ru (already unfrozen)
  • fr (already unfrozen)
  • zh-CN (already unfrozen)
  • ja (already unfrozen)
  • pt-BR
  • ko
  • de
  • pl
  • zh-TW (already unfrozen)

We are planning to stop displaying the other 21 locales. Many of them have very few pages, a high percentage of which are out-of-date or otherwise flawed, and we estimate that the total traffic we will lose by removing all these locales is less than 2%.

So what does this mean?

We are intending to stop displaying all locales outside the top ten by a certain date. The date we have chosen is April 30th.

We will remove all the source content for those locales from the translated-content repo, and put it in a new retired translated content repo, so that anyone who still wants to use this content in some way is welcome to do so. We highly respect the work that so many people have done on translating MDN content over the years, and want to preserve it in some way.

We will redirect the URLs for all removed articles to their en-US equivalents — this solves an often-mentioned issue whereby people would rather view the up-to-date English article than the low-quality or out-of-date version in their own language, but find it difficult to do so because of the way MDN works.

We are also intending to create a new tool whereby if you see a really outdated page, you can press a button saying “retire content” to open up a pull request that, when merged, will move the page to the retired content repo.

After this point, we won’t revive anything — the journey to retirement is one way. This may sound harsh, but we are taking determined steps to clean up MDN and get rid of out-of-date and out-of-remit content that has been around for years in some cases.

The post MDN localization in March — Tier 1 locales unfrozen, and future plans appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogFriend of Add-ons: Mélanie Chauvel

I’m pleased to announce our newest Friend of Add-ons, Mélanie Chauvel! After becoming interested in free and open source software in 2012, Mélanie started contributing code to Tab Center Redux, a Firefox extension that displays tabs vertically on the sidebar. When the developer stopped maintaining it, she forked a version and released it as Tab Center Reborn.

As she worked on Tab Center Reborn, Mélanie became thoroughly acquainted with the tabs API. After running into a number of issues where the API didn’t behave as expected, or didn’t provide the functionality her extension needed, she started filing bugs and proposing new features for the WebExtensions API.

Changing code in Firefox can be scary to new contributors because of the size and complexity of the codebase. As she started looking into her pain points, Mélanie realized that she could make some of the changes she wanted to see. “WebExtensions APIs are implemented in JavaScript and are relatively isolated from the rest of the codebase,” she says. “I saw that I could fix some of the issues that bothered me and took a stab at it.”

Mélanie added two new APIs: sidebarAction.toggle, which can toggle the visibility of the sidebar if it belongs to an extension, and tabs.warmup, which can reduce the amount of time it takes for an inactive tab to load. She also made several improvements to the tabs.duplicate API. Thanks to her contributions, new duplicated tabs are activated as soon as they are opened, extensions can choose where a duplicate tab should be opened, and duplicating a pinned tab no longer causes unexpected visual glitches.

Mélanie is also excited to see and help others contribute to open source projects. One of her most meaningful experiences at Mozilla has been filing an issue and seeing a new contributor fix it a few weeks later. “It made me happy to be part of the path of someone else contributing to important projects like Firefox. We often feel powerless in our lives, and I’m glad I was able to help others participate in something bigger than them,” Mélanie says.

These days, Mélanie is working on translating Tab Center Reborn into French and Esperanto and contributing code to other open-source projects including Mastodon, Tusky, Rust, Exa, and KDE. She also enjoys playing puzzle games, exploring vegan cooking and baking, and watching TV shows and movies with friends.

Thank you for all of your contributions, Mélanie! If you’re a fan of Mélanie’s work and wish to offer support, you can buy her a coffee or contribute on Liberapay.

If you are interested in contributing to the add-ons ecosystem, please visit our Contribution wiki.

The post Friend of Add-ons: Mélanie Chauvel appeared first on Mozilla Add-ons Blog.

The Mozilla Thunderbird BlogMailfence Encrypted Email Suite in Thunderbird

Mailfence Encrypted Email Suite in Thunderbird

Today, the Thunderbird team is happy to announce that we have partnered with Mailfence to offer their encrypted email service in Thunderbird’s account setup. To check this out, you click on “Get a new email address…” when you are setting up an account. We are excited that those using Thunderbird will have this easily accessible option to get a new email address from a privacy-focused provider with just a few clicks.

Why partner with Mailfence?

It comes down to two important shared values: a commitment to privacy and open standards. Mailfence has built a private and secure email experience, whilst using open standards that ensure its users can use clients like Thunderbird with no extra hoops to jump through – which respects their freedom. Also, Mailfence has been doing this for longer than most providers have been around and this shows real commitment to their cause.

We’ve known we wanted to work with the Mailfence team for well over a year, and this is just the beginning of our collaboration. We’ve made it easy to get an email address from Mailfence, and their team has created many great guides on how to get the most out of their service in Thunderbird. The goal is that, in the near future, Mailfence users will also benefit from the automatic sync of their contacts and calendars – as well as their email.

Why is this important?

If we’ve learned anything about the tech landscape these last few years it’s that big tech doesn’t always have your best interests in mind. Big tech has based its business model on the harvesting and exploitation of data. Your data that the companies gobble up is used for discrimination and manipulation – not to mention the damage done when this data is sold to or stolen by really bad actors.

We wanted to give our users an alternative, and we want to continue to show our users that you can communicate online and leverage the power of the Internet without giving up your right to privacy. Mailfence is a great service that we want to share with our community and users, to show there are good options out there.

Patrick De-Schutter, Co-Founder of Mailfence, makes an excellent case for why this partnership is important:

“Thunderbird’s mission and values completely align with ours. We live in times of ever growing Internet domination by big tech companies. These have repeatedly shown a total disrespect of online privacy and oblige their users to sign away their privacy through unreadable Terms of Service. We believe this is wrong and dangerous. Privacy is a fundamental human right. With this partnership, we create a user-friendly privacy-respecting alternative to the Big Tech offerings that are centered around the commodification of personal data.”

How to try out Mailfence

If you want to give Mailfence a try right now (and are already using Thunderbird), just open Thunderbird’s account settings, click “Account Actions” and then “Add Mail Account”; there you will see the option to “Get a new email address”. Select Mailfence as your provider and choose your desired username, and you will be prompted to set up your account. Once you have done this, your account will be set up in Thunderbird and you can start your Mailfence trial.

It is our sincere hope that our users will give Mailfence a try because using services that respect your freedom and privacy is better for you, and better for society at large. We look forward to deepening our relationship with Mailfence and working hand-in-hand with them to improve the Thunderbird experience for those using their service.

We’ll share more about our partnership with Mailfence, as well as our other efforts to promote privacy and open standards as the year progresses. We’re so grateful to get to work with great people who share our values, and to then share that work with the world.

SUMO BlogPlay Store Support Program Updates

TL;DR: By the end of March 2021, the Play Store Support program will be moving from the Respond Tool to Conversocial. If you want to keep helping Firefox for Android users by responding to their reviews in the Google Play Store, please fill out this form to request a Conversocial account. You can learn more about the program here.


In late August last year, to support the transition of Firefox for Android from the old engine (Fennec) to the new one (Fenix), we officially introduced a tool we built in-house, called the Respond Tool, to support the Play Store Support campaign. The Respond Tool lets contributors and staff reply to reviews under 3 stars on the Google Play Store. That program was known as Play Store Support.

We learned a lot from the campaign and identified a number of improvements to functionality and user experience that were necessary. In the end, we decided to migrate the program from the Respond Tool to Conversocial, a third-party tool that we are already using with our community to support users on Twitter. This change will enable us to:

  • Segment reviews and set priorities.
  • Filter out reviews with profanity.
  • See when users change their ratings.
  • Track trends with a powerful reporting dashboard.
  • Save costs and engineering resources.

As a consequence of this change, we’re going to decommission the Respond Tool by March 31, 2021. You’re encouraged to request an account in Conversocial if you want to keep supporting Firefox for Android users. You can read more about the decommission plan in the Contributor Forum.

We have also updated the guidelines to reflect this change that you can learn more from the following article: Getting started with Play Store Support.

This would not be possible without your help

None of this would be possible without contributors like you, who have been helping us provide great support for Firefox for Android users through the Respond Tool. From the Play Store Support campaign last year until today, 99 contributors have helped reply to a total of 14,484 reviews on the Google Play Store.

I’d like to extend my gratitude to Paul W, Christophe V, Andrew Truong, Danny Colin, and Ankit Kumar who have been very supportive and accommodating by giving us feedback throughout the transition process.

We’re excited about this change and hope that you can help us to spread the word and share this announcement to your fellow contributors.

Let’s keep on rocking the helpful web!


On behalf of the SUMO team,


hacks.mozilla.orgIn March, we see Firefox 87

Nearing the end of March now, and we have a new version of Firefox ready to deliver some interesting new features to your door. This month, we’ve got some rather nice DevTools additions in the form of prefers-color-scheme media query emulation and toggling :target pseudo-classes, some very useful additions to editable DOM elements: the beforeinput event and getTargetRanges() method, and some nice security, privacy, and macOS screenreader support updates.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tools

In developer tools this time around, we’ve first of all updated the Page Inspector to allow simulation of prefers-color-scheme media queries, without having to change the operating system to trigger light or dark mode.

Open the DevTools, and you’ll see a new set of buttons in the top right corner:

Two buttons marked with sun and moon icons

When pressed, these enable the light and dark preference, respectively. Selecting either button deselects the other. If neither button is selected then the simulator does not set a preference, and the browser renders using the default feature value set by the operating system.

And another nice addition to mention is that the Page Inspector’s CSS pane can now be used to toggle the :target pseudo-class for the currently selected element, in addition to a number of others that were already available (:hover, :active, etc.)

Firefox devtools CSS rules pane, showing a body selector with a number of following declarations, and a bar up the top with several pseudo classes written inside it

Find more out about this at Viewing common pseudo-classes.

Better control over user input: beforeinput and getTargetRanges()

The beforeinput event and getTargetRanges() method are now enabled by default. They allow web apps to override text edit behavior before the browser modifies the DOM tree, providing more control over text input to improve performance.

The global beforeinput event is sent to an <input> element — or any element whose contenteditable attribute is set to true — immediately before the element’s value changes. The getTargetRanges() method of the InputEvent interface returns an array of static ranges that will be affected by a change to the DOM if the input event is not canceled.

As an example, say we have a simple comment system where users can edit their comments live using a contenteditable container, but we don’t want them to edit the commenter’s name or other valuable metadata. Some sample markup might look like this:

<p contenteditable>
  <span>Mr Bungle:</span>
  This is my comment; isn't it good!
  <em>-- 09/16/21, 09.24</em>
</p>

Using beforeinput and getTargetRanges(), this is now really simple:

const editable = document.querySelector('[contenteditable]');

editable.addEventListener('beforeinput', e => {
  const targetRanges = e.getTargetRanges();
  if (targetRanges[0].startContainer.parentElement.tagName === 'SPAN' ||
      targetRanges[0].startContainer.parentElement.tagName === 'EM') {
    e.preventDefault();
  }
});
Here we respond to the beforeinput event so that each time a change to the text is attempted, we get the target range that would be affected by the change, find out if it is inside a <span> or <em> element, and if so, run preventDefault() to stop the edit happening. Voila — non-editable text regions inside editable text. Granted, this could be handled in other ways, but think beyond this trivial example — there is a lot of power to unlock here in terms of the control you’ve now got over text input.

Security and privacy

Firefox 87 sees some valuable security and privacy changes.

Referrer-Policy changes

First of all, the default Referrer-Policy has been changed to strict-origin-when-cross-origin (from no-referrer-when-downgrade), reducing the risk of leaking sensitive information in cross-origin requests. Essentially this means that by default, path and query string information are no longer included in HTTP Referrers.

You can find out more about this change at Firefox 87 trims HTTP Referrers by default to protect user privacy.


We also wanted to bring our new SmartBlock feature to the attention of our readers. SmartBlock provides stand-ins for tracking scripts blocked by Firefox (e.g. when in private browsing mode), getting round the often-experienced problem of sites failing to load or not working properly when those tracking scripts are blocked and therefore not present.

The provided stand-in scripts behave close enough to the original ones that they allow sites that rely on them to load and behave normally. And best of all, these stand-ins are bundled with Firefox. No communication needs to happen with the third-party at all, so the potential for any tracking to occur is greatly diminished, and the affected sites may even load quicker than before.

Learn more about SmartBlock at Introducing SmartBlock.

VoiceOver support on macOS

Firefox 87 sees us shipping our VoiceOver screen reader support on macOS! No longer will you have to switch over to Chrome or Safari to do significant parts of your accessibility testing.

Check it out now, and let us know what you think.

The post In March, we see Firefox 87 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 87 introduces SmartBlock for Private Browsing

Today, with the launch of Firefox 87, we are excited to introduce SmartBlock, a new intelligent tracker blocking mechanism for Firefox Private Browsing and Strict Mode. SmartBlock ensures that strong privacy protections in Firefox are accompanied by a great web browsing experience.

Privacy is hard

At Mozilla, we believe that privacy is a fundamental right and that everyone deserves to have their privacy protected while they browse the web. Since 2015, as part of the effort to provide a strong privacy option, Firefox has included the built-in Content Blocking feature that operates in Private Browsing windows and Strict Tracking Protection Mode. This feature automatically blocks third-party scripts, images, and other content from being loaded from cross-site tracking companies reported by Disconnect. By blocking these tracking components, Firefox Private Browsing windows prevent them from watching you as you browse.

In building these extra-strong privacy protections in Private Browsing windows and Strict Mode, we have been confronted with a fundamental problem: introducing a policy that outright blocks trackers on the web inevitably risks blocking components that are essential for some websites to function properly. This can result in images not appearing, features not working, poor performance, or even the entire page not loading at all.

New Feature: SmartBlock

To reduce this breakage, Firefox 87 is now introducing a new privacy feature we are calling SmartBlock. SmartBlock intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy.

SmartBlock does this by providing local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly. They allow broken sites relying on the original scripts to load with their functionality intact.

The SmartBlock stand-ins are bundled with Firefox: no actual third-party content from the trackers is loaded at all, so there is no chance for them to track you this way. And, of course, the stand-ins themselves do not contain any code that would support tracking functionality.

In Firefox 87, SmartBlock will silently stand in for a number of common scripts classified as trackers on the Disconnect Tracking Protection List. Here’s an example of a performance improvement:

Side by side comparison: before and after SmartBlock.

An example of SmartBlock in action. Previously (left), the website tiny.cloud had poor loading performance in Private Browsing windows in Firefox because of an incompatibility with strong Tracking Protection. With SmartBlock (right), the website loads properly again, while you are still fully protected from trackers found on the page.

We believe the SmartBlock approach provides the best of both worlds: strong protection of your privacy with a great browsing experience as well.

These new protections in Firefox 87 are just the start! Stay tuned for more SmartBlock innovations in upcoming versions of Firefox.

The team

This work was carried out in a collaboration between the Firefox webcompat and anti-tracking teams, including Thomas Wisniewski, Paul Zühlcke and Dimi Lee with support from many Mozillians including Johann Hofmann, Rob Wu, Wennie Leung, Mikal Lewis, Tim Huang, Ethan Tseng, Selena Deckelmann, Prangya Basu, Arturo Marmol, Tanvi Vyas, Karl Dubost, Oana Arbuzov, Sergiu Logigan, Cipriani Ciocan, Mike Taylor, Arthur Edelstein, and Steven Englehardt.

We also want to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach.


The post Firefox 87 introduces SmartBlock for Private Browsing appeared first on Mozilla Security Blog.

about:communityContributors To Firefox 87

With the release of Firefox 87 we are delighted to introduce the contributors who’ve shipped their first code changes to Firefox in this release, all of whom were brand new volunteers! Please join us in thanking each of these diligent, committed individuals, and take a look at their contributions:

hacks.mozilla.orgHow MDN’s site-search works

tl;dr: Periodically, the whole of MDN is built, by our Node code, in a GitHub Action. A Python script bulk-publishes this to Elasticsearch. Our Django server queries the same Elasticsearch via /api/v1/search. The site-search page is a static single-page app that sends XHR requests to the /api/v1/search endpoint. Search results’ sort-order is determined by match and “popularity”.


The challenge with “Jamstack” websites is data that is so vast and dynamic that it doesn’t make sense to build it statically. Search is one of those cases. For the record, as of February 2021, MDN consists of 11,619 documents (aka. articles) in English, plus roughly another 40,000 translated documents. In English alone, there are 5.3 million words. So to build a good search experience we need to, as a static site build side-effect, index all of this in a full-text search database. Elasticsearch is one such database, and it’s good. In particular, Elasticsearch is something MDN is already quite familiar with, because it’s what was used from within the Django app when MDN was a wiki.

Note: MDN gets about 20k site-searches per day from within the site.


When we build the whole site, a script basically loops over all the raw content, applies macros and fixes, and dumps one index.html (via React server-side rendering) and one index.json. The index.json contains all the fully rendered text (as HTML!) in blocks of “prose”. It looks something like this:

{
  "doc": {
    "title": "DOCUMENT TITLE",
    "summary": "DOCUMENT SUMMARY",
    "body": [
      {
        "type": "prose",
        "value": {
          "id": "introduction",
          "title": "INTRODUCTION",
          "content": "<p>FIRST BLOCK OF TEXTS</p>"
        }
      }
    ],
    "popularity": 0.12345
  }
}

You can see one here: /en-US/docs/Web/index.json


Next, after all the index.json files have been produced, a Python script takes over. It traverses all the index.json files and, based on that structure, figures out the title, the summary, and the whole body (as HTML).

Next up, before sending this into the bulk-publisher in Elasticsearch it strips the HTML. It’s a bit more than just turning <p>Some <em>cool</em> text.</p> to Some cool text. because it also cleans up things like <div class="hidden"> and certain <div class="notecard warning"> blocks.
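That stripping step can be sketched with Python’s standard-library html.parser. This is a simplified, hypothetical version: the real script handles more cases, and the set of classes to skip here is illustrative only.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, skipping subtrees whose class marks them as hidden."""

    SKIP_CLASSES = {"hidden"}  # illustrative; the real script also handles notecards etc.

    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip_depth = 0  # > 0 while inside a skipped subtree

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.skip_depth or any(c in self.SKIP_CLASSES for c in classes):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.parts.append(data)

def strip_html(html):
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.parts).strip()

print(strip_html('<p>Some <em>cool</em> text.</p>'))          # Some cool text.
print(strip_html('<div class="hidden">x</div><p>kept</p>'))   # kept
```

The depth counter is what lets a whole `<div class="hidden">` subtree (including its children) drop out, rather than just the one tag.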

One thing worth noting is that this whole thing runs roughly every 24 hours, and it rebuilds everything each time. But what if, between two runs, a certain page has been removed (or moved)? How do you remove what was previously added to Elasticsearch? The solution is simple: we delete and re-create the index from scratch every day. The whole bulk-publish takes a while, so right after the index has been deleted, searches won’t be that great. Someone could be unlucky enough to search MDN a couple of seconds after the index was deleted and have to wait for it to build up again.
It’s an unfortunate reality, but it’s a risk worth taking for the sake of simplicity. Also, most people search for things in English, and specifically within the Web/ tree, so the bulk-publishing is done in a way that the most popular content is published first and the rest after. Here’s what the build outputs in its logs:

Found 50,461 (potential) documents to index
Deleting any possible existing index and creating a new one called mdn_docs
Took 3m 35s to index 50,362 documents. Approximately 234.1 docs/second
Counts per priority prefixes:
    en-us/docs/web                 9,056
    *rest*                         41,306

So, yes, for 3m 35s there’s stuff missing from the index and some unlucky few will get fewer search results than they should. But we can optimize this in the future.


The way you connect to Elasticsearch is simply by a URL; it looks something like this:


It’s an Elasticsearch cluster managed by Elastic running inside AWS. Our job is to make sure that we put the exact same URL in our GitHub Action (“the writer”) as we put it into our Django server (“the reader”).
In fact, we have 3 Elastic clusters: Prod, Stage, Dev.
And we have 2 Django servers: Prod, Stage.
So we just need to carefully make sure the secrets are set correctly to match the right environment.

Now, in the Django server, we just need to convert a request like GET /api/v1/search?q=foo&locale=fr (for example) to a query to send to Elasticsearch. We have a simple Django view function that validates the query string parameters, does some rate-limiting, creates a query (using elasticsearch-dsl) and packages the Elasticsearch results back to JSON.

How we make that query is important. Herein lies the most important feature of the search: how it sorts results.

In one simple explanation, the sort order is a combination of popularity and “matchness”. The assumption is that most people want the popular content. I.e. when they search for foreach they mean to go to /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach, not /en-US/docs/Web/API/NodeList/forEach, both of which contain forEach in the title. The “popularity” is based on Google Analytics pageviews, which we download periodically and normalize into a floating-point number between 0 and 1. At the time of writing, the scoring function does something like this:

rank = doc.popularity * 10 + search.score

This seems to produce pretty reasonable results.
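As a toy sketch of that combination (the real scoring happens inside Elasticsearch; the documents and the match score here are made up for illustration):

```python
def rank(doc, search_score):
    """Combine the normalized popularity (0..1) with Elasticsearch's match score."""
    return doc["popularity"] * 10 + search_score

# Two documents that match "foreach" about equally well; popularity breaks the tie.
docs = [
    {"slug": "Web/API/NodeList/forEach", "popularity": 0.2},
    {"slug": "Web/JavaScript/Reference/Global_Objects/Array/forEach", "popularity": 0.9},
]
best = max(docs, key=lambda d: rank(d, search_score=5.0))
print(best["slug"])  # the more popular Array.prototype.forEach page wins
```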

But there’s more to the “matchness” too. Elasticsearch has its own API for defining boosting, and the way we apply it is:

  • match phrase in the title: Boost = 10.0
  • match phrase in the body: Boost = 5.0
  • match in title: Boost = 2.0
  • match in body: Boost = 1.0

This is then applied on top of whatever else Elasticsearch does, such as “Term Frequency” and “Inverse Document Frequency” (tf and idf). This article is a helpful introduction.
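Expressed as a raw Elasticsearch query body, those boosts amount to roughly the following. This is a sketch: the production code builds the query with elasticsearch-dsl, and the field names here are assumptions.

```python
def build_query(q):
    """Sketch of a boosted bool/should query mirroring the boost list above."""
    return {
        "query": {
            "bool": {
                "should": [
                    {"match_phrase": {"title": {"query": q, "boost": 10.0}}},
                    {"match_phrase": {"body": {"query": q, "boost": 5.0}}},
                    {"match": {"title": {"query": q, "boost": 2.0}}},
                    {"match": {"body": {"query": q, "boost": 1.0}}},
                ]
            }
        }
    }

print(build_query("foreach")["query"]["bool"]["should"][0])
```

In a bool/should query, every matching clause contributes to the score, so a phrase match in the title stacks on top of plain term matches.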

We’re most likely not done with this. There’s probably a lot more we can do to tune this myriad of knobs and sliders to get the best possible ranking of documents that match.

Web UI

The last piece of the puzzle is how we display all of this to the user. The way it works is that developer.mozilla.org/$locale/search returns a static page that is blank. As soon as the page has loaded, it lazy-loads JavaScript that can actually issue the XHR request to get and display search results. The code looks something like this:

function SearchResults() {
  const [searchParams] = useSearchParams();
  const sp = createSearchParams(searchParams);
  // add defaults and stuff here
  const fetchURL = `/api/v1/search?${sp.toString()}`;

  const { data, error } = useSWR(fetchURL, async (url) => {
    const response = await fetch(url);
    // various checks on the response.status here
    return await response.json();
  });

  // render 'data' or 'error' accordingly here
}
A lot of interesting details are omitted from this code snippet. You have to check it out for yourself to get a more up-to-date insight into how it actually works. But basically, the window.location (and pushState) query string drives the fetch() call and then all the component has to do is display the search results with some highlighting.

The /api/v1/search endpoint also runs a suggestion query as part of the main search query. This extracts interesting alternative search queries, which are filtered and scored, and we issue “sub-queries” just to get a count for each. That lets us offer one of those “Did you mean…” suggestions. For example: search for intersections.

In conclusion

There are a lot of interesting, important, and subtle details that are glossed over in this blog post. It’s a constantly evolving system, and we’re always trying to improve it so that it fits what users expect.

A lot of people reach MDN via a Google search (e.g. mdn array foreach), but despite that, nearly 5% of all traffic on MDN is the site-search functionality. The /$locale/search?... endpoint is the most frequently viewed page of all of MDN. So having a good, reliable search engine matters. Owning and controlling the whole pipeline allows us to do specific things that are unique to MDN and that other websites don’t need. For example, we index a lot of raw HTML (e.g. <video>) and we have code snippets that need to be searchable.

Hopefully, the MDN site-search will go from being known as very limited to something that can genuinely help people get to the exact page better than Google can. Yes, it’s worth aiming high!

(Originally posted on personal blog)

The post How MDN’s site-search works appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 87 trims HTTP Referrers by default to protect user privacy


We are pleased to announce that Firefox 87 will introduce a stricter, more privacy-preserving default Referrer Policy. From now on, by default, Firefox will trim path and query string information from referrer headers to prevent sites from accidentally leaking sensitive user data.


Referrer headers and Referrer Policy

Browsers send the HTTP Referrer header (note: original specification name is ‘HTTP Referer’) to signal to a website which location “referred” the user to that website’s server. More precisely, browsers have traditionally sent the full URL of the referring document (typically the URL in the address bar) in the HTTP Referrer header with virtually every navigation or subresource (image, style, script) request. Websites can use referrer information for many fairly innocent uses, including analytics, logging, or for optimizing caching.

Unfortunately, the HTTP Referrer header often contains private user data: it can reveal which articles a user is reading on the referring website, or even include information on a user’s account on a website.

The introduction of the Referrer Policy in browsers in 2016-2018 allowed websites to gain more control over the referrer values on their site, and hence provided a mechanism to protect the privacy of their users. However, if a website does not set any kind of referrer policy, then web browsers have traditionally defaulted to using a policy of ‘no-referrer-when-downgrade’, which trims the referrer when navigating to a less secure destination (e.g., navigating from https: to http:) but otherwise sends the full URL including path, and query information of the originating document as the referrer.


A new Policy for an evolving Web

The ‘no-referrer-when-downgrade’ policy is a relic of the past web, when sensitive web browsing was thought to occur over HTTPS connections and as such should not leak information in HTTP requests. Today’s web looks much different: the web is on a path to becoming HTTPS-only, and browsers are taking steps to curtail information leakage across websites. It is time we change our default Referrer Policy in line with these new goals.


Firefox 87 new default Referrer Policy ‘strict-origin-when-cross-origin’ trimming user sensitive information like path and query string to protect privacy.


Starting with Firefox 87, we set the default Referrer Policy to ‘strict-origin-when-cross-origin’ which will trim user sensitive information accessible in the URL. As illustrated in the example above, this new stricter referrer policy will not only trim information for requests going from HTTPS to HTTP, but will also trim path and query information for all cross-origin requests. With that update Firefox will apply the new default Referrer Policy to all navigational requests, redirected requests, and subresource (image, style, script) requests, thereby providing a significantly more private browsing experience.
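The behavior of the new default can be modeled in a few lines of Python. This is a simplified sketch that ignores edge cases such as non-HTTP schemes and sites that set an explicit Referrer-Policy of their own.

```python
from urllib.parse import urlsplit

def referrer(from_url, to_url):
    """Model of 'strict-origin-when-cross-origin', Firefox 87's new default."""
    src, dst = urlsplit(from_url), urlsplit(to_url)
    if src.scheme == "https" and dst.scheme == "http":
        return None                         # downgrade: send no referrer at all
    if (src.scheme, src.netloc) == (dst.scheme, dst.netloc):
        return from_url                     # same-origin: full URL, path and query intact
    return f"{src.scheme}://{src.netloc}/"  # cross-origin: origin only

print(referrer("https://example.com/account?id=42", "https://other.example/"))
# https://example.com/ — the path and query string are trimmed
```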

If you are a Firefox user, you don’t have to do anything to benefit from this change. As soon as your Firefox auto-updates to version 87, the new default policy will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download it here to start taking advantage of all the ways Firefox works to improve your privacy step by step with every new release.

The post Firefox 87 trims HTTP Referrers by default to protect user privacy appeared first on Mozilla Security Blog.

The Mozilla BlogReinstating net neutrality in the US

Today, Mozilla, together with other internet companies (ADT, Dropbox, Eventbrite, Reddit, Vimeo, and Wikimedia), sent a letter to the FCC asking the agency to reinstate net neutrality as a matter of urgency.

For almost a decade, Mozilla has defended user access to the internet, in the US and around the world. Our work to preserve net neutrality has been a critical part of that effort, including our lawsuit against the Federal Communications Commission (FCC) to keep these protections in place for users in the US.

With the recent appointment of Acting Chairwoman Jessica Rosenworcel to lead the agency, there will be a new opportunity to establish net neutrality rules at the federal level in the near future, ensuring that families and businesses across the country can enjoy these fundamental rights.

Net neutrality preserves the environment that allowed the internet to become an engine for economic growth. In a marketplace where users frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. More specifically, net neutrality prevents ISPs from leveraging their market power to slow, block, or prioritize content–ensuring that users can freely access ideas and services without unnecessary roadblocks. Without these rules in place, ISPs can make it more difficult for new ideas or applications to succeed, potentially stifling innovation across the internet.

The need for net neutrality protections has become even more apparent during the pandemic. In a moment where classrooms and offices have moved online by necessity, it is critically important to have rules paired with strong government oversight and enforcement to protect families and businesses from predatory practices. In California, residents will have the benefit of these fundamental safeguards as a result of a recent court decision that will allow the state to enforce its state net neutrality law. However, we believe that users nationwide deserve the same ability to control their own online experiences.

While there are many challenges that need to be resolved to fix the internet, reinstating net neutrality is a crucial down payment on the much broader internet reform that we need. Net neutrality is good for people and for personal expression. It is good for business, for innovation, for our economic recovery. It is good for the internet. It has long enjoyed bipartisan support among the American public. There is no reason to further delay its reinstatement once the FCC is in working order.

The post Reinstating net neutrality in the US appeared first on The Mozilla Blog.

Blog of DataThis Week in Glean: Reducing Release Friction

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)


One thing that I feel less confident about is the build and release process behind the software components I have been working on recently.  That’s why I was excited to take on prototyping a “Better Build” in order to get a better understanding of how the build and release process works and hopefully make it a little better in the process.  What is a “Better Build”?  Well, that’s what we have been calling the investigation into how to reduce the overall pain of releasing our Rust-based components to consumers on Android, iOS, and desktop platforms.


Getting changes out the door from a Rust component like Glean all the way into Firefox for iOS is somewhat non-trivial right now and requires multiple steps in multiple repositories, each of which has its own different procedures and ownership.  Glean in Firefox for iOS currently ships via the Application Services iOS megazord, mostly because that allows us to compile most of the Rust code together to make a smaller impact on the app.  That means, if we need to ship a bug fix in Glean on iOS we need to:

  • Create a PR that fixes the bug in the Glean repo, get it reviewed, and land it.  This requires a Glean team member’s review to land.
  • Cut a release of Glean with the update which requires a Glean team member’s review to accomplish.
  • Open a PR in the Application Services repository, updating Glean (which is pulled in as a git submodule), get it reviewed, and land it.  This requires an Application Services team member for review, so now we have another team pulled into the mix.
  • Now we need a new release of the appservices megazord, which means a new release must be cut for the Application Services repo, again requiring the involvement of 1-2 members of that team.
  • Not done yet, now we go to Firefox for iOS and we can finally update the dependencies on Glean to get the fix!  This PR will require product team review since it is in their repo.
  • Oh wait…  there were other breaking changes in Application Services that we found as a side effect of shipping this update that we have to fix…   *sigh*


That’s a process that can take multiple days to accomplish and requires the involvement of multiple members of multiple teams.  Even then, we can run into unexpected hiccups that slow the process down, like breaking changes in the other components that we have bundled together.  Getting things into Fenix isn’t much easier, especially because there is yet another repository and release process involved with Android Components in the mix.


This creates a situation where we hold back on the frequency of releases and try to bundle as many fixes and changes as possible to reduce the number of times we have to subject ourselves to the release process.  This same situation makes errors and bugs harder to find, because, once they have been introduced into a component it may be days or weeks before they show up.  Once the errors do show up, we hope that they register as test failures and get caught before going live, but sometimes we see the results in crash reports or in data analysis.  It is then not a simple task to determine what you are looking for when there is a Glean release that’s in an Application Services release that’s in an Android Components release that’s in a Fenix release…  all of which have different versions.


It might be easier if each of our components were a stand-alone dependency of the consuming application, but our Rust components want and need to call each other.  So there is some interdependence between them which requires us to build them together if we want to take full advantage of calling across crates in Rust.  Building things together also helps to minimize the size impact of the library on consuming applications, which is especially important for mobile.


So how was I going to make any of this part of a “Better Build”?  The first thing I needed to do was to create a new git repository that combined Application Services, Glean, Nimbus, and Uniffi.  There were a couple of different ways to accomplish this and I chose to go with git submodules as that seemed to be the simplest path to getting everything in one place so I could start trying to build things together.  The first thing that complicated this approach was that Application Services already pulls in Glean and Nimbus as submodules, so I spent some time hacking around removing those so that all I had was the versions in the submodules I had added.  Upon reflecting on this later, I probably should have just worked off of a fork of Application Services since it basically already had everything I needed in it, just lacking all the things in the Android and iOS builds.  Git submodules didn’t seem to make things too terribly difficult to update, and should be possible to automate as part of a build script.  I do foresee each component repository needing something like a release branch that would always track the latest release so that we don’t have to go in and update the tag that the submodule in the Better Builds repo points at.  The idea being that the combined repo wouldn’t need to know about the releases or release schedule of the submodules, pushing that responsibility to the submodule’s original repo to advertise releases in a standardized way like with a release branch.  This would allow us to have a regular release schedule for the Better Build that could in turn be picked up by automation in downstream consumers.


Now that I had everything in one place, the next step was to build the Rusty parts together so there was something to link the Android and iOS builds to, because some of the platform bindings of the components we build have Kotlin and Swift stuff that needs to be packaged on top of the Rust stuff, or at least needs to be repackaged in a format suitable for consumers on the platform.  Let me just say right here, Cargo made this very easy for me to figure out.  It took only a little while to set up the build dependencies.  With each project already having a “root” workspace Cargo.toml, I learned that I couldn’t nest workspaces.  Not to fear, I just needed to exclude those directories from my root workspace Cargo.toml and it just worked.  Finally, a few patch directives were needed to ensure that everyone was using the correct local copies of things like viaduct, Uniffi, and Glean.  After a few tweaks, I was able to build all the Rust components in under 2 minutes from cargo build to done.
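The workspace arrangement looked roughly like the following Cargo.toml sketch. The member, exclude, and patch paths are illustrative guesses at the layout, not the actual file:

```toml
# Hypothetical root Cargo.toml for the combined repo. Each submodule keeps
# its own workspace, so those directories are excluded here rather than
# nested, and [patch] points shared dependencies at the local checkouts.
[workspace]
members = ["build-support"]          # illustrative local helper crate
exclude = [
    "application-services",          # has its own workspace Cargo.toml
    "glean",
    "uniffi-rs",
]

[patch.crates-io]
glean = { path = "glean/glean-core/rlb" }                       # assumed path
uniffi = { path = "uniffi-rs/uniffi" }                          # assumed path
viaduct = { path = "application-services/components/viaduct" }  # assumed path
```

The `[patch.crates-io]` section is what guarantees that every crate in the tree resolves those shared dependencies to the same local copies instead of published versions.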


Armed with these newly built libs, I next set off to tackle an Android build using Gradle.  I had the most prior art to see how to do this, so I figured it wouldn’t be too terrible.  In fact, it was here that I ran into a bit of a brick wall.  My first approach was to try to build everything as subprojects of the new repo, but unfortunately, there were a lot of references to rootProject that meant “the root project I think I am in, not this new root project”, and so I found myself changing more and more build.gradle files embedded in the components.  After struggling with this for a day or so, I switched to trying a composite build of the Android bits of all the components.  This allowed me to at least build, once I had everything set up right.  It was also at this point that I realized that having the embedded submodules for Nimbus and Glean inside of Application Services was causing me some problems, so I ended up dropping Nimbus from the base Better Build repo and just using the one pulled into Application Services.  Once I had done this, the Gradle composite build was just a matter of including the Glean build and the Application Services build in the settings.gradle file.  Along with a simple build.gradle file, I was able to build a JAR file which appeared to have all the right things in it and was approximately the size I would expect when combining everything.  I was now definitely at the end of my Gradle knowledge, and I wasn’t sure how to set up the publishing to get the AAR file that would be consumed by downstream applications.
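The composite-build arrangement can be sketched with a settings.gradle along these lines (the included paths are illustrative, not the real repository layout):

```groovy
// Hypothetical settings.gradle for the combined repo. A composite build
// pulls in each component's existing Gradle build unchanged, so their
// build.gradle files keep seeing the rootProject they expect.
rootProject.name = 'better-builds'

includeBuild('glean')
includeBuild('application-services')
```

The appeal of `includeBuild` over subprojects is dependency substitution: the combined build resolves the components' published coordinates to the included builds instead of remote artifacts, without touching the components' own build files.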


I was starting to run out of time in my timebox, so I decided to tinker around with the iOS side of things and see how unforgiving Xcode might be.  Part of the challenge here was that Nimbus didn’t really have iOS bindings yet, and we have already shown that this can be done with Application Services and Glean via the iOS megazord, so I started by trying to get Xcode to generate the Uniffi bindings in Swift for Nimbus.  Knowing that a build phase was probably the best bet, I started by writing a script that would invoke the call to uniffi-bindgen with the proper flags to give me the Swift bindings, and then added the output file.  But, no matter what I tried, I couldn’t get Xcode to invoke Cargo within a build phase script execution to run uniffi-bindgen.  Since I was now out of time in my investigation, I couldn’t dig any deeper into this and I hope that it’s just some configuration problem in my local environment or something.


I took some time to consolidate and share my notes about what I had learned, and I did learn a lot, especially about Cargo and Gradle.  At least I know that learning more about Gradle would be useful, but I was still disappointed that I couldn’t have made it a little further along to try and answer more of the questions about automation, which is ultimately the real key to solving the pain I mentioned earlier.  I was hoping to have a couple of prototype GitHub actions that I could demo, but I didn’t quite get there without being able to generate the proper artifacts.


The final lesson I learned was that this was definitely something that was outside of my comfort zone.  And you know what?  That was okay.  I identified an area of my knowledge that I wanted to and could improve.  While it was a little scary to go out and dive into something that was both important to the project and the team as well as something that I wasn’t really sure I could do, there were a lot of people who helped me through answering the questions I had.

Mozilla Add-ons Blog: Two-factor authentication required for extension developers

At the end of 2019, we announced an upcoming requirement for extension developers to enable two-factor authentication (2FA) for their Firefox Accounts, which are used to log into addons.mozilla.org (AMO). This requirement is intended to protect add-on developers and users from malicious actors who somehow get hold of their login credentials, and it will go into effect starting March 15, 2021.

If you are an extension developer and have not enabled 2FA by this date, you will be directed to your Firefox Account settings to turn it on the next time you log into AMO.

Instructions for enabling 2FA for your Firefox Account can be found on support.mozilla.org. Once you’ve finished the set-up process, be sure to download or print your recovery codes and keep them in a safe place. If you ever lose access to your 2FA devices and get locked out of your account, you will need to provide one of your recovery codes to regain access. Misplacing these codes can lead to permanent loss of access to your account and your add-ons on AMO. Mozilla cannot restore your account if you have lost access to it.

If you only upload using the AMO external API, you can continue using your API keys and you will not be asked to provide the second factor.

March 24, 2021 update: If your authenticator offers you an 8 character token, check its settings to see if it can provide a 6 character token. Firefox Accounts will not accept 8 character tokens.

The post Two-factor authentication required for extension developers appeared first on Mozilla Add-ons Blog.

SeaMonkey: SeaMonkey 2.53.7 Beta 1 has been released!

Hi All,

The SeaMonkey Project is happy to announce the release of SeaMonkey 2.53.7b1.

Please try it out.

Best Regards, keep safe and healthy!



Mozilla Gfx Team: WebGPU progress

WebGPU is a new standard for graphics and computing on the Web. Our team is actively involved in the design and specification process, while developing an implementation in Gecko. We’ve made a lot of progress since the last public update on the Mozilla Hacks blog, and we’d like to share!

WebGPU textured+lit cube in Firefox Nightly, rendered with WGSL shaders. See full code in the fork.

API Tracing

Trouble-shooting graphics issues can be tough without proper tools. In WebRender, we have the capture infrastructure that allows us to save the state of the rendering pipeline at any given moment to disk and replay it independently in a standalone environment. In WebGPU, we integrated something similar, called API tracing. Instead of slicing through the state at a given time, it records every command executed by the WebGPU implementation from the start. The produced traces are fully portable: they can be replayed in a standalone environment on a different system. This infrastructure helps us breeze through the issues, fixing them quickly and not letting them stall the progress.

Rust Serialization

The Gecko implementation of WebGPU has to talk in multiple languages: WebIDL, in which the specification is written; C++, the main language of Gecko; IPDL, the description of inter-process communication (IPC); and Rust, in which the wgpu library (the core of WebGPU) is implemented. This variety caused a lot of friction when updating the WebIDL API to the latest version: it was easy to introduce bugs, which were hard to find later. This architectural problem has been mostly solved by making our IPC rely on Rust serde+bincode. This allows Rust logic on the content process side to communicate with Rust logic on the GPU process side with minimal friction. It was made possible by changing the Rust structures to use Cow types aggressively, which are flexible and efficient, even though we don’t use the “write” part of the copy-on-write semantics.

API Coverage


The API on the Web is required to be safe and portable, which is enforced by the validation logic. We’ve made a lot of progress in this area: wgpu now has a first-class concept of “error” objects, which is what normal objects become if their creation fails on the server side (the GPU process). We allow these error objects to be used by the content side, while wgpu returns the errors to the GPU process C++ code, which routes them back to the content side. There, we are now properly triggering the “uncaptured error” events with actual error messages:

In a draw command, indexed:false indirect:false, caused by: vertex buffer 0 must be set


What this means for us, as well as the brave experimental users, is better robustness and safety, less annoying panics/crashes, and less time wasted on investigating issues. The validation logic is not yet comprehensive, there is a lot yet to be done, but the basic infrastructure is mostly in place. We validate the creation of buffers, textures, bind group layouts, pipelines, and we validate the encoded commands, including the compute and render pass operations. We also validate the shader interface, and we validate the basic properties of the shader (e.g. the types making sense). We even implement the logic to check the uniformity requirements of the control flow, ahead of the specification, although it’s new and fragile at the moment.

Shading Language

WebGPU Shading Language, or WGSL for short, is a new secure shading language for the Web, targeting SPIR-V, HLSL, and MSL on the native platforms. It’s exceptionally hard to support right now because of how young it is. The screenshot above was rendered with WGSL shaders in Firefox Nightly, you can get a feel of it by looking at the code.

Our recent update carried basic support for WGSL, using Naga library. The main code path in Gecko right now involves the following stages:

  • When a shader module is created:
    1. Parsing WGSL with Naga and building an intermediate representation (IR).
    2. Validating and analyzing the IR (with Naga) for the usage of global handles (such as texture-sample pairs) and the uniformity of control flow.
    3. Producing a SPIR-V module by Naga.
  • When a pipeline is created:
    1. Naga reflection information for the entry points is validated against the pipeline and against each other’s input/output requirements.
    2. The SPIR-V module is then passed down to gfx-hal, where the backends invoke SPIRV-Cross to generate the platform-specific shaders.

Next Steps

One of the areas of improvement here is related to SPIR-V. In the future, we don’t want to unconditionally route the shader translation through SPIR-V, and we don’t want to rely on SPIRV-Cross, which is currently a giant C++ dependency that is hard to secure. Instead, we want to generate the platform-specific shaders straight from Naga IR ourselves. This will drastically reduce the amount of code involved, cut down the dependencies, and make the shader generation faster and more robust, but it requires more work.

Another missing bit is shader sanitation. In order to allow shaders to execute safely on the GPU, it’s not enough to enable the safety features of the underlying APIs. We also need to insert bounds checks in the shader code where we aren’t sure about resource bounds being respected. These changes will be very sensitive to the performance of some of the heaviest GPU users, such as TFjs.

Most importantly, we need to start testing Gecko’s implementation on the conformance test suite (CTS) that is developed by the WebGPU group. This would uncover most of the missing bits in the implementation and make it easier to avoid regressions in the near future. Hopefully, the API has stabilized enough today that we can all use the same tests.


There is a large community around the Rust projects involved in our implementation. We welcome anyone to join the fun and are willing to mentor newcomers. Please hop into a relevant Matrix room to chat.

Rumbling Edge - Thunderbird: Biggest Casino Technology Innovations

Technology has influenced every activity we do and has enabled us to do the unimaginable. It is no surprise that technology has changed the gambling industry too, including online casinos like Vera & John.

With so many innovations, we might think we are in a golden age, but technology keeps raising the bar: every time we feel we have seen the best, it turns out to be only the best so far.

Time flies

We all are aware of how there are no clocks inside a casino, and when you are gambling, you tend to lose track of time. The idea of making customers stay and gamble more is fascinating and has been practised for a long time now.

Technology has made it possible for us to have casino applications on our watches. When we say watches, we are talking about smartwatches, so gamblers can now get a casino experience on their wrists. From not knowing how time flies while gambling to gambling on a timekeeping device, we have come full circle.


Virtual reality

Until now, we were lauding online gambling as a cutting-edge innovation, but then came virtual reality, which topped online gambling and gave gamblers an even more realistic experience. With virtual reality, you can sit in your living room and feel like you are at the casino.

While playing online on your computer or mobile device, you can be so immersed in the game that you forget your surroundings, but when you look away from the screen even for a second, you are back in your own reality. Virtual reality changes this: when you put on your VR goggles and the other gadgets, there is no looking away from the screen. It feels like you are in the casino, no matter where you are.



With changing times, not everyone wants to go to a physical casino, and since gamblers usually have just one application or website that they trust, they can be monitored closely: their behaviour and habits are tracked, including how often they play, how much they bet and the time spent on each game. First-time visitors are especially valued, as they can be lured into becoming frequent gamblers, so they are given offers that they can’t refuse.

Chip tracking

Chips are the currency of the casino, and some fraudulent people can manufacture fake chips to fool the casino and exchange the fakes for money. There might also be cheaters and thieves who steal higher denomination chips and come back later to exchange them. With tracking devices in every chip, the casino can keep track of where the chips are, and if it suspects that some have been stolen, it can easily declare those chips invalid.

The trends in technology have helped casinos protect their customers so they can have a safe experience.

The post Biggest Casino Technology Innovations first appeared on Rumbling Edge.

Mozilla L10N: Concordance search lands in Pontoon

Having the ability to search through existing translations is a crucial tool for assuring translation consistency. It allows you to see how the same expression was translated in the past or verify that the translation you intend to use is consistent with the rest of the corpus.

Pontoon tries to automate that as much as possible with Machinery suggestions by querying Translation Memory as soon as the string is opened for translation. That being said, Translation Memory is only queried for strings as a whole. If you’re interested in the translation of the word “file”, but your corpus only uses that word as part of longer strings like “Select a file…” or “Source file uploaded”, Machinery won’t find any matches.

Enter concordance search

Concordance search allows you to search Translation Memory for a specific word or phrase. Thanks to April and Jotes, it’s now available as a standalone feature in Pontoon. No more searching in the All Projects view in a separate tab!

Simply type some search text into the Concordance Search field in the Machinery tab and hit Enter. Every search keyword is searched separately, unless it’s part of a phrase within double quotes – in that case the entire quoted phrase is searched as a whole. Each search result is accompanied by a list of projects it belongs to.


Open Policy & Advocacy: India’s new intermediary liability and digital media regulations will harm the open internet

Last week, in a sudden move that will have disastrous consequences for the open internet, the Indian government notified a new regime for intermediary liability and digital media regulation. Intermediary liability (or “safe harbor”) protections have been fundamental to growth and innovation on the internet as an open and secure medium of communication and commerce. By expanding the “due diligence” obligations that intermediaries will have to follow to avail safe harbor, these rules will harm end-to-end encryption, substantially increase surveillance, promote automated filtering and prompt a fragmentation of the internet that would harm users while failing to empower Indians. While many of the most onerous provisions only apply to “significant social media intermediaries” (a new classification scheme), the ripple effects of these provisions will have a devastating impact on freedom of expression, privacy and security.

As we explain below, the current rules are not fit-for-purpose and will have a series of unintended consequences on the health of the internet as a whole:

  • Traceability of Encrypted Content: Under the new rules, law enforcement agencies can demand that companies trace the ‘first originator’ of any message. Many popular services today deploy end-to-end encryption and do not store source information, so as to enhance the security of their systems and the privacy they guarantee users. When the first originator is outside India, the significant intermediary must identify the first originator within the country, making an already impossible task more difficult. This would essentially be a mandate requiring encrypted services to either store additional sensitive information and/or break end-to-end encryption, which would weaken overall security, harm privacy and contradict the principles of data minimization endorsed in the Ministry of Electronics and Information Technology’s (MeitY) draft of the data protection bill.
  • Harsh Content Take Down and Data Sharing Timelines: Short timelines of 36 hours for content take downs and 72 hours for the sharing of user data for all intermediaries pose significant implementation and freedom of expression challenges. Intermediaries, especially small and medium service providers, would not have sufficient time to analyze the requests or seek any further clarifications or other remedies under the current rules. This would likely create a perverse incentive to take down content and share user data without sufficient due process safeguards, with the fundamental right to privacy and freedom of expression (as we’ve said before) suffering as a result.
  • User Directed Take Downs of Non-Consensual Sexually Explicit Content and Morphed/Impersonated Content: All intermediaries have to remove or disable access to information within 24 hours of being notified by users or their representatives (not necessarily government agencies or courts) when it comes to non-consensual sexually explicit content (revenge pornography, etc.) and impersonation in an electronic form (deep fakes, etc.). While it attempts to solve a legitimate and concerning issue, this solution is overbroad and goes against the landmark Shreya Singhal judgment, by the Indian Supreme Court, which had clarified in 2015 that companies would only be expected to remove content when directed by a court order or a government agency to do so.
  • Social Media User Verification: In a move that could be dangerous for the privacy and anonymity of internet users, the law contains a provision requiring significant intermediaries to provide the option for users to voluntarily verify their identities. This would likely entail users sharing phone numbers or sending photos of government issued IDs to the companies. This provision will incentivize the collection of sensitive personal data that are submitted for this verification, which can then also be used to profile and target users (the law does seem to require explicit consent to do so). This is not hypothetical conjecture – we have already seen phone numbers collected for security purposes being used for profiling. This provision will also increase the risk from data breaches and entrench power in the hands of large players in the social media and messaging space who can afford to build and maintain such verification systems. There is no evidence to prove that this measure will help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers.
  • Automated Filtering: While improved from its earlier iteration in the 2018 draft, the provisions to “endeavor” to carry out automated filtering for child sexual abuse materials (CSAM), non-consensual sexual acts and previously removed content apply to all significant social media intermediaries (including end-to-end encrypted messaging applications). These are likely fundamentally incompatible with end-to-end encryption and will weaken protections that millions of users have come to rely on in their daily lives by requiring companies to embed monitoring infrastructure in order to continuously surveil the activities of users, with disastrous implications for freedom of expression and privacy.
  • Digital Media Regulation: In a surprising expansion of scope, the new rules also contain government registration and content take down provisions for online news websites, online news aggregators and curated audio-visual platforms. After some self-regulatory stages, it essentially gives government agencies the ability to order the take down of news and current affairs content online by publishers (which are not intermediaries), with very few meaningful checks and balances against overreach.

The final rules do contain some improvements from the 2011 original law and the 2018 draft such as limiting the scope of some provisions to significant social media intermediaries, user and public transparency requirements, due process checks and balances around traceability requests, limiting the automated filtering provision and an explicit recognition of the “good samaritan” principle for voluntary enforcement of platform guidelines. In their overall scope, however, they are a dangerous precedent for internet regulation and need urgent reform.

Ultimately, illegal and harmful content on the web, the lack of sufficient accountability and substandard responses to it undermine the overall health of the internet and as such, are a core concern for Mozilla. We have been at the forefront of these conversations globally (such as the UK, EU and even the 2018 version of this draft in India), pushing for approaches that manage the harms of illegal content online within a rights-protective framework. The regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, right to privacy, due process, etc), as well as crucial technical considerations (‘does the architecture of the internet render this type of measure possible or not’, etc). This is a delicate and critical balance, and not one that should be approached with blunt policy proposals.

These rules are already binding law, with the provisions for significant social media intermediaries coming into force 3 months from now (approximately late May 2021). Given the many new provisions in these rules, we recommend that they should be withdrawn and be accompanied by wide ranging and participatory consultations with all relevant stakeholders prior to notification.

The post India’s new intermediary liability and digital media regulations will harm the open internet appeared first on Open Policy & Advocacy.

The Mozilla Blog: Notes on Addressing Supply Chain Vulnerabilities

Addressing Supply Chain Vulnerabilities

One of the unsung achievements of modern software development is the degree to which it has become componentized: not that long ago, when you wanted to write a piece of software you had to write pretty much the whole thing using whatever tools were provided by the language you were writing in, maybe with a few specialized libraries like OpenSSL. No longer. The combination of newer languages, Open Source development and easy-to-use package management systems like JavaScript’s npm or Rust’s Cargo/crates.io has revolutionized how people write software, making it standard practice to pull in third party libraries even for the simplest tasks; it’s not at all uncommon for programs to depend on hundreds or thousands of third party packages.

Supply Chain Attacks

While this new paradigm has revolutionized software development, it has also greatly increased the risk of supply chain attacks, in which an attacker compromises one of your dependencies and through that your software.[1] A famous example of this is provided by the 2018 compromise of the event-stream package to steal Bitcoin from people’s computers. The Register’s brief history provides a sense of the scale of the problem:

Ayrton Sparling, a computer science student at California State University, Fullerton (FallingSnow on GitHub), flagged the problem last week in a GitHub issues post. According to Sparling, a commit to the event-stream module added flatmap-stream as a dependency, which then included injection code targeting another package, ps-tree.

There are a number of ways in which an attacker might manage to inject malware into a package. In this case, what seems to have happened is that the original maintainer of event-stream was no longer working on it and someone else volunteered to take it over. Normally, that would be great, but here it seems that volunteer was malicious, so it’s not great.

Standards for Critical Packages

Recently, Eric Brewer, Rob Pike, Abhishek Arya, Anne Bertucio and Kim Lewandowski posted a proposal on the Google security blog for addressing vulnerabilities in Open Source software. They cover a number of issues including vulnerability management and security of compilation, and there’s a lot of good stuff here, but the part that has received the most attention is the suggestion that certain packages should be designated “critical”[2]:

For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.

These are good development practices, and ones we follow here at Mozilla, so I certainly encourage people to adopt them. However, trying to require them for critical software seems like it will have some problems.

It creates friction for the package developer

One of the real benefits of this new model of software development is that it’s low friction: it’s easy to develop a library and make it available — you just write it and put it up on a package repository like crates.io — and it’s easy to use those packages — you just add them to your build configuration. But then you’re successful and suddenly your package is widely used and gets deemed “critical” and now you have to put in place all kinds of new practices. It probably would be better if you did this, but what if you don’t? At this point your package is widely used — or it wouldn’t be critical — so what now?

It’s not enough

Even packages which are well maintained and have good development practices routinely have vulnerabilities. For example, Firefox recently released a new version that fixed a vulnerability in the popular ANGLE graphics engine, which is maintained by Google. Both Mozilla and Google follow the practices that this blog post recommends, but it’s just the case that people make mistakes. To (possibly mis)quote Steve Bellovin, “Software has bugs. Security-relevant software has security-relevant bugs”. So, while these practices are important to reduce the risk of vulnerabilities, we know they can’t eliminate them.

Of course this applies to inadvertent vulnerabilities, but what about malicious actors (though note that Brewer et al. observe that “Taking a step back, although supply-chain attacks are a risk, the vast majority of vulnerabilities are mundane and unintentional—honest errors made by well-intentioned developers.”)? It’s possible that some of their proposed changes (in particular forbidding anonymous authors) might have an impact here, but it’s really hard to see how this is actionable. What’s the standard for not being anonymous? That you have an e-mail address? A Web page? A DUNS number?[3] None of these seem particularly difficult for a dedicated attacker to fake, and of course the more strict you make the requirements the more of a burden it is for the (vast majority of) legitimate developers.

I do want to acknowledge at this point that Brewer et al. clearly state that multiple layers of protection are needed and that it’s necessary to have robust mechanisms for handling vulnerabilities. I agree with all that, I’m just less certain about this particular piece.

Redefining Critical

Part of the difficulty here is that there are two ways in which a piece of software can be “critical”:

  • It can do something which is inherently security sensitive (e.g., the OpenSSL SSL/TLS stack which is responsible for securing a huge fraction of Internet traffic).
  • It can be widely used (e.g., the Rust log crate), but not inherently that sensitive.

The vast majority of packages — widely used or not — fall into the second category: they do something important but that isn’t security critical. Unfortunately, because of the way that software is generally built, this doesn’t matter: even when software is built out of a pile of small components, when they’re packaged up into a single program, each component has all the privileges that that program has. So, for instance, suppose you include a component for doing statistical calculations: if that component is compromised, nothing stops it from opening up files on your disk and stealing your passwords or Bitcoins or whatever. This is true whether the compromise is due to an inadvertent vulnerability or to malware injected into the package: a problem in any component compromises the whole system.[4] Indeed, minor non-security components make attractive targets because they may not have had as much scrutiny as high-profile security components.

Least Privilege in Practice: Better Sandboxing

When looked at from this perspective, it’s clear that we have a technology problem: There’s no good reason for individual components to have this much power. Rather, they should only have the capabilities they need to do the job they are intended to do (the technical term is least privilege); it’s just that the software tools we have don’t do a good job of providing this property. This is a situation which has long been recognized in complicated pieces of software like Web browsers, which employ a technique called “process sandboxing” (pioneered by Chrome) in which the code that interacts with the Web site is run in its own “sandbox” and has limited abilities to interact with your computer. When it wants to do something that it’s not allowed to do, it talks to the main Web browser code and asks it to do it for it, thus allowing that code to enforce the rules without being exposed to vulnerabilities in the rest of the browser.

Process sandboxing is an important and powerful tool, but it’s a heavyweight one; it’s not practical to separate out every subcomponent of a large program into its own process. The good news is that there are two recent technologies which do allow this kind of fine-grained sandboxing, both based on WebAssembly. For WebAssembly programs, nanoprocesses allow individual components to run in their own sandbox with component-specific access control lists. More recently, we have been experimenting with a technology called RLBox, developed by researchers at UCSD, UT Austin, and Stanford, which allows regular programs such as Firefox to run sandboxed components. The basic idea behind both of these is the same: use static compilation techniques to ensure that the component is memory-safe (i.e., cannot reach outside of itself to touch other parts of the program) and then give it only the capabilities it needs to do its job.

Techniques like this point the way to a scalable technical approach for protecting yourself from third party components: each component is isolated in its own sandbox and comes with a list of the capabilities that it needs (often called a manifest), with the compiler enforcing that it has no other capabilities (this is not too dissimilar from — but much more granular than — the permissions that mobile applications request). This makes the problem of including a new component much simpler because you can just look at the capabilities it requests, without needing to verify that the code itself is behaving correctly.
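As a rough sketch of what such a manifest-plus-enforcement scheme might look like (the manifest format and capability strings here are invented, not any real tool’s API):

```javascript
// A component declares up front which capabilities it needs...
const manifest = {
  name: "csv-parser",
  capabilities: ["read:./data"],
};

// ...and the host grants exactly those capabilities, denying all others.
function makeCapabilityChecker(manifest) {
  const granted = new Set(manifest.capabilities);
  return (capability) => granted.has(capability);
}

const allowed = makeCapabilityChecker(manifest);
allowed("read:./data"); // declared in the manifest, so granted
allowed("read:~/.ssh"); // never declared, so the host refuses
```

Reviewing the component then reduces to reviewing the short capability list rather than every line of its code.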

Making Auditing Easier

While powerful, sandboxing itself — whether of the traditional process or WebAssembly variety — isn’t enough, for two reasons. First, the APIs that we have to work with aren’t sufficiently fine-grained. Consider the case of a component which is designed to let you open and process files on the disk; this necessarily needs to be able to open files, but what stops it from reading your Bitcoins instead of the files that the programmer wanted it to read? It might be possible to create a capability list that includes just reading certain files, but that’s not the API the operating system gives you, so now we need to invent something. There are a lot of cases like this, so things get complicated.

The second reason is that some components are critical because they perform critical functions. For instance, no matter how much you sandbox OpenSSL, you still have to worry about the fact that it’s handling your sensitive data, and so if compromised it might leak that. Fortunately, this class of critical components is smaller, but it’s non-zero.

This isn’t to say that sandboxing isn’t useful, merely that it’s insufficient. What we need is multiple layers of protection[5], with the first layer being procedural mechanisms to defend against code being compromised and the second layer being fine-grained sandboxing to contain the impact of compromise. As noted earlier, it seems problematic to put the burden of better processes on the developer of the component, especially when there are a large number of dependent projects, many of them very well funded.

Something we have been looking at internally at Mozilla is a way for projects to tag the dependencies they use. Each project would be tagged with the set of other projects which use it (e.g., “Firefox uses this crate”). Then, when you are considering using a component, you could look to see who else uses it, which gives you some measure of confidence. Of course, you don’t know what sort of auditing those organizations do, but if you know that Project X is very security conscious and they use component Y, that should give you some level of confidence. This is really just automating something that already happens informally: people judge components by who else uses them. There are some obvious extensions here: labelling specific versions, indicating what kind of auditing the depending project did, maintaining a database of insecure versions (something the Brewer et al. proposal suggests too), or allowing people to configure their build systems to automatically trust projects vouched for by some set of other projects and refuse to include unvouched ones. The advantage of this kind of approach is that it puts the burden on the people benefitting from a project, rather than suddenly subjecting some widely used project to a whole pile of new requirements which its maintainers may not be interested in meeting. This work is still in the exploratory stages, so reach out to me if you’re interested.
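A minimal sketch of how such a vouching check might look inside a build tool (all project and package names here are made up):

```javascript
// A shared database mapping package versions to the projects that have
// vouched for them (in practice this would be a hosted service).
const vouches = new Map([
  ["widely-used-lib@1.2.3", new Set(["firefox", "some-big-project"])],
  ["obscure-lib@0.1.0", new Set()],
]);

// Accept a dependency only if at least one project we trust vouches for it.
function isVouchedFor(pkg, trustedProjects) {
  const vouchers = vouches.get(pkg) || new Set();
  return trustedProjects.some((project) => vouchers.has(project));
}

isVouchedFor("widely-used-lib@1.2.3", ["firefox"]); // vouched: accept
isVouchedFor("obscure-lib@0.1.0", ["firefox"]);     // unvouched: refuse
```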

Obviously, this only works if people actually do some kind of due diligence prior to depending on a component. Here at Mozilla, we do that to some extent, though it’s not really practical to review every line of code in a giant package like WebRTC. There is some hope here as well: because modern languages such as Rust or Go are memory safe, it’s much easier to convince yourself that certain behaviors are impossible — even if the program has a defect — which makes it easier to audit.[6] Here too it’s possible to have clear manifests that describe what capabilities the program needs and to verify (after some work) that those are accurate.


As I said at the beginning, Brewer et al. are definitely right to be worried about this kind of attack. It’s very convenient to be able to build on other people’s work, but the difficulty of ascertaining the quality of that work is an enormous problem[7]. Fortunately, we’re seeing a whole series of technological advancements that point the way to a solution without having to go back to the bad old days of writing everything yourself.

  1. Supply chain attacks can be mounted via a number of other mechanisms, but in this post, we are going to focus on this threat vector. ↩︎
  2. Where “critical” is defined by a somewhat complicated formula based roughly on the age of the project, how actively maintained it seems to be, how many other projects seem to use it, etc. It’s actually not clear to me that this metric is that good a predictor of criticality; it seems mostly to have the advantage that it can be evaluated purely by looking at the code repository, but presumably one could develop a better metric. ↩︎
  3. Experience with TLS Extended Validation certificates, which attempt to verify company identity, suggests that this level of identity is straightforward to fake. ↩︎
  4. Allan Schiffman used to call this phenomenon a “distributed single point of failure”. ↩︎
  5. The technical term here is defense in depth. ↩︎
  6. Even better are verifiable systems such as the HaCl* cryptographic library that Firefox depends on. HaCl* comes with a machine-checkable proof of correctness, which significantly reduces the need to audit all the code. Right now it’s only practical to do this kind of verification for relatively small programs, in large part because describing the specification that you are proving the program conforms to is hard, but the technology is rapidly getting better. ↩︎
  7. This is true even for basic quality reasons. Which of the two thousand ORMs for node is the best one to use? ↩︎

The post Notes on Addressing Supply Chain Vulnerabilities appeared first on The Mozilla Blog.

hacks.mozilla.orgHere’s what’s happening with the Firefox Nightly logo

Fox Gate

The internet was set on fire (pun intended) this week, by what I’m calling ‘fox gate’, and chances are you might have seen a meme or two about the Firefox logo. Many people were pulling up for a battle royale because they thought we had scrubbed fox imagery from our browser.

This is definitely not happening.

The logo causing all the stir is one we created a while ago with input from our users. Back in 2019, we updated the Firefox browser logo and added the parent brand logo. 

What we learned throughout this is that many of the people sharing the memes aren’t actually using the browser, because then they’d know (no shade) that the beloved fox icon is alive and well in Firefox on your desktop.

Shameless plug – you can download the browser here

Firefox logo OSX Dock

You can read more about how all this spiralled in the mini-case study on how the ‘fox gate’ misinformation spread online here.

screenshot of tweet with firefox logos, including parent logo

Long story short, the fox is here to stay and for our Firefox Nightly users out there, we’re bringing back a very special version of an older logo, as a treat.

Firefox Browser Nightly

Our commitment to privacy and a safe and open web remains the same. We hope you enjoy the nightly version of the logo and take some time to read up on spotting misinformation and fake news.


The post Here’s what’s happening with the Firefox Nightly logo appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogBehind-the-Scenes at Hubs Hack Week

Behind-the-Scenes at Hubs Hack Week

Earlier this month, the Hubs team spent a week working on an internal hackathon. We figured that the start of a new year is a great time to get our roadmap in order, do some investigations about possible new features to explore this year, and bring in some fresh perspectives on what we could accomplish. Plus, we figured that it wouldn’t hurt to have a little fun doing it! Our first hack week was a huge success, and today we’re sharing what we worked on this month so you can get a “behind the scenes” peek at what it’s like to work on Hubs.

Try on a new look

As part of our work on Hubs, we think a lot about expression and identity. From the beginning, we've made it a priority to allow creators to develop their own avatar styles, which is why you might find yourself in a Hubs room with robots, humans, parrots, carrots, and everything in between.

We don’t make assumptions about how you want to look on a given day with a specific group of people. That's why when an artist on the team built a series of components for a modular avatar system, we built a standalone editor instead of integrating one directly into Hubs itself.

Over the past year, we’ve been delighted to see avatar editors popping up for Hubs, like Rhiannan’s editor and Ready Player Me. For hack week, we added one more to the collection for you, the community, to play with, tinker on, and modify to your liking!

Behind-the-Scenes at Hubs Hack Week

To get started, head to the hack week avatar maker website. The avatar you see when you first arrive is made out of a random combination of kit components. Use the drop-down menus on the left-hand side of the screen to pick your favorite features and accessories.

To import your avatar into Hubs, click the “Export Avatar” button to save it to your local computer, and follow these steps to upload it into Hubs.

Can you see me now? Experimenting with video feeds in Hubs

Social distancing can be tough! While Hubs is built for avatar-based communication, sometimes it’s nice to see a friendly face. We’ve gotten lots of feedback from community members asking for new ways to share their webcams in Hubs. We took that feedback to heart, and set off to see what we could do.

Our team philosophy tends to fall on the side of giving people different options, so we took on two different projects: one that would explore having camera feeds as part of the 2D user interface, and one that put them onto avatars.

Behind-the-Scenes at Hubs Hack Week

Avatar-based chat apps can be a lot to take in if you’re new to 3D, so we experimented with a video feed layer that would sit on top of the Hubs world. While this is still just in a prototype stage, there’s a lot of potential here. We’re looking at doing a deeper dive into this type of feature later in the year when we can devote some time to figuring out how this could tie into the spatial audio in Hubs and our upcoming explorations into navigation in general.

Behind-the-Scenes at Hubs Hack Week

While we’re probably still a ways off from having true holograms, we did figure out that we could get some virtual holograms in a Hubs room using some new billboard techniques and a couple of filters. These video avatars allow you to use your webcam to represent you in a Hubs space, and shares your video with the room such that it sticks to you as you move. When you’re wearing a video avatar, a new option will appear to share your webcam onto the video texture component specified. We are extremely excited to see what the community comes up with using these avatar screens.

Our first iteration of the video avatars will ship with the webcam feed as-is in the near future, but as you can see in the photo above, we’re experimenting with filters (hello, green screen!) to make video avatars even more personalized. Filters remain in the prototype stage, and we will continue to explore how we can incorporate them into Hubs over the coming year. Keep an eye out for featured video avatars landing on hubs.mozilla.com soon!

Web content in a room? Calling in to Hubs from Zoom? Finally adding lighting bloom?

A long-term goal of ours with Hubs is to make the platform easily extensible, so that more types of content can be shared in a 3D space. We had two hack week projects explore this in more detail, one specifically focused on the experience of calling into a Zoom room from Hubs and one that focused more on providing a general-purpose solution for sharing 2D web content. Extensibility is also the reason we’re able to build features like bloom (thanks, glTF!) so you can get an effect like the one pictured on the robot’s eyes below.

Behind-the-Scenes at Hubs Hack Week

Video conferencing + Hubs

Behind-the-Scenes at Hubs Hack Week

One of our goals with the recent redesign was to make it easier to use Hubs from different devices - to meet people where they are. We took this a step further by extending that mindset to meeting people where they were meeting: Zoom! While some of our team members have played around with using external tools to make a Hubs window show up as a virtual camera feed for video calls, one engineer took it a step further and brought Zoom directly into Hubs by implementing a 2-way portal between apps. Pretty cool! While we probably won’t get around to shipping this project in the next few weeks, we’d love to hear your feedback about it to get a better sense of how you might use it with Hubs.

Embedding 2D Web Content with 3D CSS iFrames

We have a lot of interest in supporting general web content in Hubs, but (for good reason!) there are a lot of security considerations that limit what that can look like. One avenue that we’re exploring is using CSS to draw an iFrame window “in” the 3D space, and it shows a lot of promise. Using iFrames means that each client resolves the web content on its own, which has security benefits but comes with some tradeoffs. Applications that are already networked (like Google Docs, or collaboration tools such as Figma or Miro) work great with this method, but some additional work has to be done to synchronize non-networked content.

Behind the scenes of the behind the scenes: prototyping new server features related to streams

Sometimes progress isn’t always visible, but that doesn’t mean that it’s not worth celebrating! In addition to some of the features that are easy to see, we also had the opportunity to dig into some server-side developments. One of these projects explored the possibility of streaming from Hubs directly to a third-party service by adding a new server-side feature to encode from a camera stream, and another tested out some alternative audio processing codecs. Both of these projects provided some great insight into the capabilities of the code base, and prompted some good ideas for future projects down the line.

Blog of DataThis Week in Glean: Boring Monitoring

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

Every Monday the Glean team has its weekly Glean SDK meeting. The meeting has two main parts: first, discussing the features and bugs the team is currently investigating or that were requested by outside stakeholders; and second, bug triage and monitoring of the data that Glean reports in the wild.

Most of the time looking at our monitoring is boring and that’s a good thing.

From the beginning the Glean SDK supported extensive error reporting on data collected by the framework inside end-user applications. Errors are produced when the application tries to record invalid values. That could be a negative value for a counter that should only ever go up or stopping a timer that was never started. Sometimes this comes down to a simple bug in the code logic and should be fixed in the implementation. But often this is due to unexpected and surprising behavior of the application the developers definitely didn’t think about. Do you know all the ways that your Android application can be started? There’s a whole lot of events that can launch it, even in the background, and you might miss instrumenting all the right parts sometimes. Of course this should then also be fixed in the implementation.
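The error-recording idea can be sketched like this (a simplified illustration of the concept, not the actual Glean SDK API):

```javascript
// A counter metric that refuses invalid recordings: instead of corrupting
// the data, it increments an error count that is itself reported like any
// other metric and can be monitored later.
class CounterMetric {
  constructor(name) {
    this.name = name;
    this.value = 0;
    this.errors = { invalid_value: 0 };
  }

  add(amount = 1) {
    if (amount <= 0) {
      // A counter may only ever go up; record an error instead of the value.
      this.errors.invalid_value += 1;
      return;
    }
    this.value += amount;
  }
}

const tabsOpened = new CounterMetric("tabs_opened");
tabsOpened.add(3);
tabsOpened.add(-1); // a bug in the app: captured as an error, not as data
```

Because the errors travel in the same pipeline as the data, they can be queried and graphed just like any other metric.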

Monitoring Firefox for Android

For our weekly monitoring we look at one application in particular: Firefox for Android. Because errors are reported in the same way as other metrics we are able to query our database, aggregate the data by specific metrics and errors, generate graphs from it and create dashboards on our instance of Redash.

Graph of the error counts for different metrics in Firefox for Android

The above graph displays error counts for different metrics. Each line is a specific metric and error (such as Invalid Value or Invalid State). The exact numbers are not important. What we’re interested in is the general trend. Are the errors per metrics stable or are there sudden jumps? Upward jumps indicate a problem, downward jumps probably means the underlying bug got fixed and is finally rolled out in an update to users.

Rate of affected clients in Firefox for Android

We have another graph that doesn’t take the raw number of errors, but averages it across the entire population. A sharp increase in error counts sometimes comes from a small number of clients, whereas the errors for others stay at the same low-level. That’s still a concern for us, but knowing that a potential bug is limited to a small number of clients may help with finding and fixing it. And sometimes it’s really just bogus client data we get and can dismiss fully.

Most of the time these graphs stay rather flat and boring and we can quickly continue with other work. Sometimes though we can catch potential issues in the first days after a rollout.

Sudden jump upwards in errors for two metrics in Firefox for Android Nightly

In this graph from the nightly release of Firefox for Android two metrics started reporting a number of errors that’s far above any other error we see. We can then quickly find the implementation of these metrics and report that to the responsible team (Filed bug, and the remediation PR).

But can’t that be automated?

It probably can! But it requires more work than throwing together a dashboard with graphs. It’s also not as easy as it sounds to define thresholds on these changes and decide when to report them. There’s work underway that will hopefully enable us to more quickly build up these dashboards for any product using the Glean SDK, which we can then also extend to do more automated reporting. The final goal should be that the product teams themselves are responsible for monitoring their data.
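One naive form such automation could take (the threshold, metric names, and counts here are all invented) is flagging any metric whose latest error count jumps well above its recent average:

```javascript
// Flag metrics whose latest daily error count exceeds `factor` times
// the average of the preceding days.
function findSuspiciousJumps(dailyCounts, factor = 3) {
  // dailyCounts: { metricName: [count_day1, count_day2, ...] }
  const flagged = [];
  for (const [metric, counts] of Object.entries(dailyCounts)) {
    const history = counts.slice(0, -1);
    const latest = counts[counts.length - 1];
    const avg = history.reduce((a, b) => a + b, 0) / history.length;
    if (latest > factor * Math.max(avg, 1)) flagged.push(metric);
  }
  return flagged;
}

findSuspiciousJumps({
  "tabs_opened/invalid_value": [2, 3, 2, 40], // sudden jump: flagged
  "search_count/invalid_state": [5, 6, 5, 6], // stable: not flagged
});
// → ["tabs_opened/invalid_value"]
```

Real monitoring would have to cope with seasonality, rollout ramps, and tiny populations, which is part of why defining good thresholds is the hard bit.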

SUMO BlogIntroducing Fabiola Lopez

Hi everyone,

Please join us in welcoming Fabiola Lopez (Fabi) to the team. Fabi will be helping us with support content in English and Spanish, so you’ll see her in both locales. Here’s a little more about Fabi:


Hi, everyone! I’m Fabi, and I am a content writer and a translator. I will be working with you to create content for all our users. You will surely find me writing, proofreading, editing and localizing articles. If you have any ideas to help make our content more user-friendly, please reach out to me. Thanks to your help, we make this possible.


Also, Angela’s contract ended last week. We’d like to thank Angela for her support over the past year.

hacks.mozilla.orgA Fabulous February Firefox — 86!

Looking into the near distance, we can see the end of February loitering on the horizon, threatening to give way to March at any moment. To keep you engaged until then, we’d like to introduce you to Firefox 86. The new version features some interesting and fun new goodies including support for the Intl.DisplayNames object, the :autofill pseudo-class, and a much better <iframe> inspection feature in DevTools.

This blog post provides merely a set of highlights; for all the details, check out the following:

Better <iframe> inspection

The Firefox web console used to include a cd() helper command that enabled developers to change the DevTools’ context to inspect a specific <iframe> present on the page. This helper has been removed in favor of the iframe context picker, which is much easier to use.

When inspecting a page with <iframe>s present, the DevTools will show the iframe context picker button.

Firefox devtools, showing the select iframe dropdown menu, a list of the iframes on the page that can be selected from

When pressed, it will display a drop-down menu listing all the URLs of content embedded in the page inside <iframe>s. Choose one of these, and the inspector, console, debugger, and all other developer tools will then target that <iframe>, essentially behaving as if the rest of the page does not exist.


The :autofill pseudo-class

The :autofill CSS pseudo-class matches when an <input> element has had its value auto-filled by the browser. The class stops matching as soon as the user edits the field.

For example:

input:-webkit-autofill {
  border: 3px solid blue;
}

input:autofill {
  border: 3px solid blue;
}
Firefox 86 supports the unprefixed version, with the -webkit-prefixed version also supported as an alias. Most other browsers only support the prefixed version, so you should provide both for maximum browser support.


Intl.DisplayNames

The Intl.DisplayNames built-in object has been enabled by default in Firefox 86. This enables the consistent translation of language, region, and script display names. A simple example looks like so:

// Get English currency code display names
let currencyNames = new Intl.DisplayNames(['en'], {type: 'currency'});

// Get currency names
currencyNames.of('USD'); // "US Dollar"
currencyNames.of('EUR'); // "Euro"
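The same constructor covers region and language names as well, for example:

```javascript
// Get English display names for regions and languages
const regionNames = new Intl.DisplayNames(['en'], { type: 'region' });
regionNames.of('US'); // "United States"

const languageNames = new Intl.DisplayNames(['en'], { type: 'language' });
languageNames.of('fr'); // "French"
```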

Nightly preview — image-set()

The image-set() CSS function lets the browser pick the most appropriate CSS image from a provided set. This is useful for implementing responsive images in CSS, respecting the fact that resolution and bandwidth differ by device and network access.

The syntax looks like so:

background-image: image-set("cat.png" 1x,
                            "cat-2x.png" 2x,
                            "cat-print.png" 600dpi);

Given the set of options, the browser will choose the most appropriate one for the current device’s resolution — users of lower-resolution devices will appreciate not having to download a large hi-res image that they don’t need, while users of more modern devices will be happy to receive a sharper, crisper image that looks better on their device.


As part of our work on Manifest V3, we have landed an experimental base content security policy (CSP) behind a preference in Firefox 86. The new CSP disallows remote code execution. This restriction only applies to extensions using manifest_version 3, which is not currently supported in Firefox (currently, only manifest_version 2 is supported).

If you would like to test the new CSP for extension pages and content scripts, you must change your extension’s manifest_version to 3 and set extensions.manifestv3.enabled to true in about:config. Because this is a highly experimental and evolving feature, we want developers to be aware that extensions that work with the new CSP may break as more changes are implemented.

The post A Fabulous February Firefox — 86! appeared first on Mozilla Hacks - the Web developer blog.

about:communityNew Contributors In Firefox 86

With the release of Firefox 86, we are pleased to welcome many new friends of the Fox, developers who’ve contributed their first code changes to Firefox in version 86. 25 were brand new volunteers! Please join us in congratulating, thanking and welcoming all of these diligent and enthusiastic contributors, and take a look at their excellent work:

The Mozilla BlogLatest Firefox release includes Multiple Picture-in-Picture and Total Cookie Protection

Beginning last year, the internet began playing a bigger role in our lives than ever before. In the US, we went from only three percent of workers to more than forty percent working from home in 2020, all powered by the web. We also relied on it to stay informed, and connect with friends and family when we couldn’t meet in-person.

And despite the many difficulties we all have faced online and offline, we’re proud to keep making Firefox an essential part of what makes the web work.

Today I’m sharing two new features: multiple picture-in-picture (multi-PiP) and our latest privacy protection combo. Multi-PiP allows multiple videos to play at the same time — all the adorable animal videos or NCAA Tournament anyone? And our latest privacy protection, the dynamic duo of Total Cookie Protection (technically known as State Partitioning or Dynamic First-Party Isolation) and Supercookie Protections (launched in last month’s release) are here to combat cross-site cookie tracking once and for all.

Today’s Firefox features:

Multiple Picture-in-Picture to help multi-task

Our Picture-in-Picture feature topped our Best of Firefox 2020 features list, and we heard from people who wanted more than just one picture-in-picture view. In today’s release, we added multiple picture-in-picture views, available on Mac, Linux and Windows, including keyboard controls for fast forward and rewind. Haven’t been to a zoo in a while? Now you can visit your favorite animal at the zoo, along with any other animals around the world, with multiple views. Also, we can’t help that it coincides with one of the biggest sports events this year in March. 🏀 😉

New privacy protections to stop cookie tracking

Today, we are announcing Total Cookie Protection for Firefox, a major new milestone in our work to protect your privacy. Total Cookie Protection stops cookies from tracking you around the web by creating a separate cookie jar for every website. Total Cookie Protection joins our suite of privacy protections called ETP (Enhanced Tracking Protection). In combining Total Cookie Protection with last month’s supercookie protections, Firefox is now armed with very strong, comprehensive protection against cookie tracking. This will be available in ETP Strict Mode in both the desktop and Android version. Here’s how it works:

Total Cookie Protection confines all cookies from each website in a separate cookie jar

In our ongoing commitment to bring the best innovations in privacy, we are working tirelessly to improve how Firefox protects our users from tracking. In 2019, Firefox introduced Enhanced Tracking Protection (ETP) which blocks cookies from known, identified trackers, based on the Disconnect list. To bring even more comprehensive protection, Total Cookie Protection confines all cookies from each website in a separate cookie jar so that cookies can no longer be used to track you across the web as you browse from site to site. For a technical look at how this works, you can dig into the details in our post on our Security Blog. You can turn on Total Cookie Protection by setting your Firefox privacy controls to Strict mode.

Join our journey to evolve Firefox

If it’s been a while since you’ve used Firefox, now is the time to try Firefox again and see today’s features. You can download the latest version of Firefox for your desktop and mobile devices and get ready for an exciting year ahead.

The post Latest Firefox release includes Multiple Picture-in-Picture and Total Cookie Protection appeared first on The Mozilla Blog.

Web Application SecurityFirefox 86 Introduces Total Cookie Protection

Today we are pleased to announce Total Cookie Protection, a major privacy advance in Firefox built into ETP Strict Mode. Total Cookie Protection confines cookies to the site where they were created, which prevents tracking companies from using these cookies to track your browsing from site to site.

Cookies, those well-known morsels of data that web browsers store on a website’s behalf, are a useful technology, but also a serious privacy vulnerability. That’s because the prevailing behavior of web browsers allows cookies to be shared between websites, thereby enabling those who would spy on you to “tag” your browser and track you as you browse. This type of cookie-based tracking has long been the most prevalent method for gathering intelligence on users. It’s a key component of the mass commercial tracking that allows advertising companies to quietly build a detailed personal profile of you.

In 2019, Firefox introduced Enhanced Tracking Protection by default, blocking cookies from companies that have been identified as trackers by our partners at Disconnect. But we wanted to take protections to the next level and create even more comprehensive protections against cookie-based tracking to ensure that no cookies can be used to track you from site to site as you browse the web.

Our new feature, Total Cookie Protection, works by maintaining a separate “cookie jar” for each website you visit. Any time a website, or third-party content embedded in a website, deposits a cookie in your browser, that cookie is confined to the cookie jar assigned to that website, such that it is not allowed to be shared with any other website.

Total Cookie Protection creates a separate cookie jar for each website you visit. (Illustration: Meghan Newell)

In addition, Total Cookie Protection makes a limited exception for cross-site cookies when they are needed for non-tracking purposes, such as those used by popular third-party login providers. Only when Total Cookie Protection detects that you intend to use a provider, will it give that provider permission to use a cross-site cookie specifically for the site you’re currently visiting. Such momentary exceptions allow for strong privacy protection without affecting your browsing experience.

In combination with the Supercookie Protections we announced last month, Total Cookie Protection provides comprehensive partitioning of cookies and other site data between websites in Firefox. Together these features prevent websites from being able to “tag” your browser, thereby eliminating the most pervasive cross-site tracking technique.

To learn more technical details about how Total Cookie Protection works under the hood, you can read the MDN page on State Partitioning and our blog post on Mozilla Hacks.

Thank you

Total Cookie Protection touches many parts of Firefox, and was the work of many members of our engineering team: Andrea Marchesini, Gary Chen, Nihanth Subramanya, Paul Zühlcke, Steven Englehardt, Tanvi Vyas, Anne van Kesteren, Ethan Tseng, Prangya Basu, Wennie Leung, Ehsan Akhgari, and Dimi Lee.

We wish to express our gratitude to the many Mozillians who contributed to and supported this work, including: Selena Deckelmann, Mikal Lewis, Tom Ritter, Eric Rescorla, Olli Pettay, Kim Moir, Gregory Mierzwinski, Doug Thayer, and Vicky Chin.

Total Cookie Protection is an evolution of the First-Party-Isolation feature, a privacy protection that is shipped in Tor Browser. We are thankful to the Tor Project for that close collaboration.

We also want to acknowledge past and ongoing work by colleagues in the Brave, Chrome, and Safari teams to develop state partitioning in their own browsers.

The post Firefox 86 Introduces Total Cookie Protection appeared first on Mozilla Security Blog.

hacks.mozilla.orgIntroducing State Partitioning

State Partitioning is the technical term for a new privacy feature in Firefox called Total Cookie Protection, which will be available in ETP Strict Mode in Firefox 86. This article shows how State Partitioning works inside of Firefox and explains what developers of third-party integrations can do to stay compatible with the latest changes.

Websites use a variety of APIs to store data in the browser. The most famous are cookies, which are commonly used to build login sessions and provide a customized user experience. We call these stateful APIs, because they are able to establish state that persists through reloads, navigations, and browser restarts. While these APIs allow developers to enrich a user’s web experience, they also enable nefarious web tracking, which jeopardizes user privacy. To fight abuse of these APIs, Mozilla is introducing State Partitioning in Firefox 86.

Stateful Web APIs in Firefox are:

  • Storage: Cookies, Local Storage, Session Storage, Cache Storage, and IndexedDB
  • Workers: SharedWorkers and ServiceWorkers
  • Communication channel: Broadcast channel

To fight against web tracking, Firefox currently relies on Enhanced Tracking Protection (ETP) which blocks cookies and other shared state from known trackers, based on the Disconnect list. This form of cookie blocking is an effective approach to stop tracking, but it has its limitations. ETP protects users from the 3000 most common and pervasive identified trackers, but its protection relies on the fact that the list is complete and always up-to-date. Ensuring completeness is difficult, and trackers can try to circumvent the list by registering new domain names. Additionally, identifying trackers is a time-consuming task and commonly adds a delay on a scale of months before a new tracking domain is added to the list.

To address the limitations of ETP and provide comprehensive protection against trackers, we introduce a technique called State Partitioning, which will prevent cookie-based tracking universally, without the need for a list.

State Partitioning is complemented by our efforts to eliminate the usage of non-traditional storage mechanisms (“supercookies”) as a tracking vector, for example through the partitioning of network state, which was recently rolled out in Firefox 85.

State Partitioning – How it works in Firefox

To explain State Partitioning, we should first take a look at how stateful Web APIs enable tracking on the Web. While these APIs were not designed for tracking, their state is shared with a website regardless of whether it is loaded as a first party or embedded as a third party, for example in an iframe or as a simple image (“tracking pixel”). This shared state allows trackers embedded in other websites to track you across the Web, most commonly by setting cookies.

For example, a cookie of www.tracker.com will be shared on foo.com and bar.com if they both embed www.tracker.com as a third-party. So, www.tracker.com can connect your activities on both sites by using the cookie as an identifier.

ETP will prevent this by simply blocking access to shared state for embedded instances of www.tracker.com. Without the ability to set cookies, the tracker cannot easily re-identify you.

Cookie-based tracking without protections: both instances of www.tracker.com share the same cookie.


In comparison, State Partitioning will also prevent shared third-party state, but it does so without blocking cookie access entirely. With State Partitioning, shared state such as cookies, localStorage, etc. will be partitioned (isolated) by the top-level website you’re visiting. In other words, every first party and its embedded third-party contexts will be put into a self-contained bucket.

Firefox is using double-keying to implement State Partitioning, which adds an additional key to the origin of the website that is accessing these states. We use the scheme and registrable domain (also known as eTLD+1) of the top-level site as the additional key. Following the above example, cookies for www.tracker.com will be keyed differently under foo.com and bar.com. Instead of looking up the cookie jar for www.tracker.com, State Partitioning will use www.tracker.com^http://foo.com and www.tracker.com^http://bar.com, respectively.

Cookie-based tracking prevented by State Partitioning, by double-keying both instances of www.tracker.com.


Thus, there will be two distinct cookie jars for www.tracker.com under these two top-level websites.

This takes away the tracker’s ability to use cookies and other previously shared state to identify individuals across sites. Now the state is separate (“partitioned”) instead of shared across different first-party domains.
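As a rough illustration of the double-keying described above (a sketch only; the real implementation lives in Firefox’s storage internals, and computing eTLD+1 correctly requires the Public Suffix List, which the naive helper below does not consult):

```javascript
// Hypothetical stand-in for eTLD+1 computation. Real code must use the
// Public Suffix List (e.g. "example.co.uk" keeps three labels, not two).
function getSiteForOrigin(origin) {
  const url = new URL(origin);
  // Naive eTLD+1: keep only the last two host labels.
  const site = url.hostname.split(".").slice(-2).join(".");
  return `${url.protocol}//${site}`;
}

// Double-keying: the storage key combines the embedded resource's host
// with the scheme + eTLD+1 of the top-level site it is embedded in.
function partitionedStorageKey(resourceHost, topLevelOrigin) {
  return `${resourceHost}^${getSiteForOrigin(topLevelOrigin)}`;
}

partitionedStorageKey("www.tracker.com", "http://www.foo.com");
// → "www.tracker.com^http://foo.com"
partitionedStorageKey("www.tracker.com", "http://bar.com");
// → "www.tracker.com^http://bar.com"
```

Because the two keys differ, the tracker’s cookie jar under foo.com is a different jar from the one under bar.com.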

It is important to understand that State Partitioning will apply to every embedded third-party resource, regardless of whether it is a tracker or not.

This brings great privacy benefits: it lets us extend protections beyond the Disconnect list, and it allows embedded websites to continue to use their cookies and storage as they normally would, as long as they don’t need cross-site access. In the next section we will examine what embedded websites can do if they have a legitimate need for cross-site shared state.

State Partitioning – Web Compatibility

Given that State Partitioning brings a fundamental change to Firefox, ensuring web compatibility and an unbroken user and developer experience is a top concern. Inevitably, State Partitioning will break websites by preventing legitimate uses of third-party state. For example, Single Sign-On (SSO) services rely on third-party cookies to sign in users across multiple websites. State Partitioning will break SSO because the SSO provider cannot access its first-party state when embedded in another top-level website, and is therefore unable to recognize a logged-in user.

Third-party SSO cookies partitioned by State Partitioning: the SSO iframe cannot get first-party cookie access.


In order to resolve these compatibility issues, we allow the state to be unpartitioned in certain cases. When unpartitioning takes effect, we stop using double-keying and revert to the ordinary (first-party) key.

Given the above example, after unpartitioning, the top-level SSO site and the embedded SSO service’s iframe will start to use the same storage key, meaning that they will both access the same cookie jar. So, the iframe can get the login credentials via a third-party cookie.

The SSO site has been unpartitioned: the SSO iframe gets first-party cookie access.


State Partitioning – Getting Cross-Site Cookie Access

There are two scenarios in which Firefox might unpartition states for websites to allow for access to first-party (cross-site) cookies and other state:

  1. When an embedded iframe calls the Storage Access API.
  2. Based on a set of automated heuristics.

Storage Access API

The Storage Access API is a newly proposed JavaScript API to handle legitimate exceptions from privacy protections in modern browsers, such as ETP in Firefox or Intelligent Tracking Prevention in Safari. It allows the restricted third-party context to request first-party storage access (access to cookies and other shared state) for itself. In some cases, browsers will show a permission prompt to decide whether they trust the third party enough to allow this access.

The Firefox user prompt of the Storage Access API.


A partitioned third-party context can use the Storage Access API to gain a storage permission which grants unpartitioned access to its first-party state.

This functionality is expressed through the document.requestStorageAccess method. Another method, document.hasStorageAccess, can be used to find out whether your current browsing context has access to its first-party storage. As outlined on MDN, document.requestStorageAccess is subject to a number of restrictions, standardized as well as browser-specific, that protect users from abuse. Developers should take note of these and adjust their site’s user experience accordingly.

As an example, Zendesk shows a message with a call-to-action element to handle the standard requirement of transient user activation (e.g. a button click). In Safari, it also spawns a popup that the user has to activate to ensure that the browser-specific requirements of WebKit are satisfied.

Zendesk notification to allow the user to trigger the request for storage access.


After the user has granted access, Firefox will remember the storage permission for 30 days.

Note that the third-party context will only be unpartitioned under the top-level domain for which storage access has been requested; for other top-level domains, it will remain partitioned. Let’s say a cross-origin iframe from example.com is embedded in foo.com, and example.com uses the Storage Access API to request first-party access on foo.com, which the user allows. In this case, example.com will have unpartitioned access to its own first-party cookies on foo.com. Later, the user loads another page, bar.com, which also embeds example.com. But this time, example.com will remain partitioned under bar.com, because there is no storage permission there.

document.hasStorageAccess().then(hasAccess => {
  if (!hasAccess) {
    return document.requestStorageAccess();
  }
}).then(_ => {
  // Now we have unpartitioned storage access for the next 30 days!
  // …
}).catch(_ => {
  // Error obtaining storage access.
});

JavaScript example of using the Storage Access API to request storage access from users.

Currently, the Storage Access API is supported by Safari, Edge, and Firefox. In Chrome, it is available behind a feature flag.
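Because support varies across browsers, it’s prudent to feature-detect before relying on the API. A minimal sketch (the helper name is ours; it takes the document object as an argument so the check can be exercised outside a browser):

```javascript
// Returns true if the given document exposes both Storage Access API
// entry points. Pass the real `document` in a browser context.
function supportsStorageAccessAPI(doc) {
  return !!doc &&
    typeof doc.hasStorageAccess === "function" &&
    typeof doc.requestStorageAccess === "function";
}

// In a browser you would call:
//   if (supportsStorageAccessAPI(document)) { /* request access */ }
//   else { /* fall back to a same-site-only experience */ }
```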

Automatic unpartitioning through heuristics

In the Firefox storage access policy, we have defined several heuristics to address Web compatibility issues. The heuristics are designed to catch the most common scenarios of using third-party storage on the web (outside of tracking) and allow storage access so that websites continue to work normally. For example, in Single Sign-On flows it is common to open a popup that lets the user sign in and then transmits that sign-in information back to the website that opened the popup. Firefox detects this case and automatically grants storage access.

Note that these heuristics are not designed for the long term. Using the Storage Access API is the recommended solution for websites that need unpartitioned access. We will continually evaluate the necessity of these heuristics and remove them as appropriate. Therefore, developers should not rely on them now or in the future.

State Partitioning – User controls for Cross-Site Cookie Access

We have introduced a new UI for State Partitioning which lets users see which third parties have acquired unpartitioned storage and provides fine-grained control of storage access. Firefox shows the unpartitioned domains in the permissions section of the site identity panel. The “Cross-site Cookies” permission indicates an unpartitioned domain, and users can revoke the permission directly from the UI by clicking the cancel button alongside the permission entries.

The post Introducing State Partitioning appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogExpanding Mozilla’s Boards

I’m delighted to share that the Mozilla Foundation and Corporation Boards are each welcoming a new member.

Wambui Kinya is Vice President of Partner Engineering at Andela, a Lagos-based global talent network that connects companies with vetted, remote engineers from Africa and other emerging markets. Andela’s vision is a world where the most talented people can build a career commensurate with their ability – not their race, gender, or geography. Wambui joins the Mozilla Foundation Board and you can read more from her, here, on why she is joining. Motivated by the intersection of Africa, technology and social impact, Wambui has led business development and technology delivery, digital technology implementation, and marketing enablement across Africa, the United States, Europe and South America. In 2020 she was selected as one of the “Top 30 Most Influential Women” by CIO East Africa.

Laura Chambers is Chief Executive Officer of Willow Innovations, which addresses one of the biggest challenges for mothers, with the world’s first quiet, all-in-one, in-bra, wearable breast pump. She joins the Mozilla Corporation Board. Laura holds a wealth of knowledge in internet product, marketplace, payment, and community engagement from her time at AirBnB, eBay, PayPal, and Skype, as well as her current role at Willow. Her experience also includes business operations, marketing, shipping, global customer trust and community engagement. Laura brings a clear understanding of the challenges we face in building a better internet, coupled with strong business acumen, and an acute ability to hone in on key issues and potential solutions. You can read more from Laura about why she is joining here.

At Mozilla, we invite our Board members to be more involved with management, employees and volunteers than is generally the case, as I’ve written about in the past. To ready them for this, Wambui and Laura met with existing Board members, members of the management team, individual contributors and volunteers.

We know that the challenges of the modern internet are so big, and that expanding our capacity will help us develop solutions to those challenges. I am sure that Laura and Wambui’s insights and strategic thinking will be a great addition to our boards.

The post Expanding Mozilla’s Boards appeared first on The Mozilla Blog.

Open Policy & AdvocacyMozilla Mornings: Unpacking the DSA’s risk-based approach

On 25 February, Mozilla will host the next installment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This installment of Mozilla Mornings will focus on the DSA’s risk-based approach, specifically the draft law’s provisions on risk assessment, risk mitigation, and auditing for very large online platforms. We’ll be looking at what these provisions seek to solve for; how they’re likely to work in practice; and what we can learn from related proposals in other jurisdictions.


Carly Kind
Director, Ada Lovelace Institute

Ben Scott
Executive Director, Reset

Owen Bennett
Senior Policy Manager, Mozilla Corporation

Moderated by Brian Maguire
EU journalist and broadcaster


Logistical information

25 February, 2021

11:00-12:00 CET

Zoom Webinar (conferencing details to be provided on morning of event)

Register your attendance here

The post Mozilla Mornings: Unpacking the DSA’s risk-based approach appeared first on Open Policy & Advocacy.

Project TofinoIntroducing Project Mentat, a flexible embedded knowledge store

Edit, January 2017: to avoid confusion and to better follow Mozilla’s early-stage project naming guidelines, we’ve renamed Datomish to Project Mentat. This post has been altered to match.

For several months now, a small team at Mozilla has been exploring new ways of building a browser. We called that effort Tofino, and it’s now morphed into the Browser Futures Group.

As part of that, Nick Alexander and I have been working on a persistent embedded knowledge store called Project Mentat. Mentat is designed to ship in client applications, storing relational data on disk with a flexible schema.

It’s a little different from most of the storage systems you’re used to, so let’s start at the beginning and explain why. If you’re only interested in the what, skip down to just above the example code.

As we began building Tofino’s data layer, we observed a few things:

  • We knew we’d need to store new types of data as our product goals shifted: page metadata, saved content, browsing activity, location. The set of actions the user can take, and the data they generate, is bound to grow over time. We didn’t (don’t!) know what these were in advance.
  • We wanted to support front-end innovation without being gated on some storage developer writing a complicated migration. We’ve seen database evolution become a locus of significant complexity and risk — “here be dragons” — in several other applications. Ultimately it becomes easier to store data elsewhere (a new database, simple prefs files, a key-value table, or JSON on disk) than to properly integrate it into the existing database schema.
  • As part of that front-end innovation, sometimes we’d have two different ‘forks’ both growing the data model in two directions at once. That’s a difficult problem to address with a tool like SQLite.
  • Front-end developers were interested in looser approaches to accessing stored data than specialized query endpoints: e.g., Lin Clark suggested that GraphQL might be a better fit. Only a month or two into building Tofino we already saw the number of API endpoints, parameters, and fields growing as we added features. Specialized API endpoints turn into ad hoc query languages.
  • Syncability was a constant specter hovering at the back of our minds: getting the data model right for future syncing (or partial hosting on a service) was important.

Many of these concerns happen to be shared across other projects at Mozilla: Activity Stream, for example, also needs to store a growing set of page summary attributes for visited pages, and join those attributes against your main browsing history.

Nick and I started out supporting Tofino with a simple store in SQLite. We knew it had to adapt to an unknown set of use cases, so we decided to follow the principles of CQRS.

CQRS — Command Query Responsibility Segregation — recognizes that it’s hard to pick a single data storage model that works for all of your readers and writers… particularly the ones you don’t know about yet.

As you begin building an application, it’s easy to dive head-first into storing data to directly support your first user experience. As the experience changes, and new experiences are added, your single data model is pulled in diverging directions.

A common second-system syndrome here is to reactively aim for maximum generality. You build a single normalized, super-flexible data model (or key-value store, or document store)… and soon you find that it’s expensive to query, complex to maintain, has designed-in capabilities that will never be used, and you still have tensions between different consumers.

The CQRS approach, at its root, is to separate the ‘command’ from the ‘query’: store a data model that’s very close to what the writer knows (typically a stream of events), and then materialize as many query-side data stores as you need to support your readers. When you need to support a new kind of fast read, you only need to do two things: figure out how to materialize a view from history, and figure out how to incrementally update it as new events arrive. You shouldn’t need to touch the base storage schema at all. When a consumer is ripped out of the product, you just throw away their materialized views.

Viewed through that lens, everything you do in a browser is an event with a context and a timestamp: “the user bookmarked page X at time T in session S”, “the user visited URL X at time T in session S for reason R, coming from visit V1”. Store everything you know, materialize everything you need.
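A minimal sketch of that idea in JavaScript (all names and shapes hypothetical; this is the pattern, not Tofino’s actual code):

```javascript
// Command side: an append-only log of events, stored close to what
// the writer knows.
const eventLog = [];

function recordVisit(url, timestamp, session) {
  eventLog.push({ type: "visit", url, timestamp, session });
}

// Query side: one materialized view answering one fast read --
// "when did we last visit each URL?" -- updated incrementally.
const lastVisitByUrl = new Map();

function applyToViews(event) {
  if (event.type === "visit") {
    const prev = lastVisitByUrl.get(event.url) ?? 0;
    lastVisitByUrl.set(event.url, Math.max(prev, event.timestamp));
  }
}

// Supporting a new reader later just means materializing a new view
// from history; the base event log never changes shape.
function rebuildViews() {
  lastVisitByUrl.clear();
  eventLog.forEach(applyToViews);
}

recordVisit("https://mozilla.org/", 1000, "s1");
applyToViews(eventLog[eventLog.length - 1]);
recordVisit("https://mozilla.org/", 2000, "s1");
applyToViews(eventLog[eventLog.length - 1]);
lastVisitByUrl.get("https://mozilla.org/"); // → 2000
```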

We built that with SQLite.

This was a clear and flexible concept, and it allowed us to adapt, but the implementation in JS involved lots of boilerplate and was cumbersome to maintain manually: the programmer does the work of defining how events are stored, how they map to more efficient views for querying, and how tables are migrated when the schema changes. You can see this starting to get painful early in Tofino’s evolution, even without data migrations.

Quite soon it became clear that a conventional embedded SQL database wasn’t a direct fit for a problem in which the schema grows organically — particularly not one in which multiple experimental interfaces might be sharing a database. Furthermore, being elbow-deep in SQL wasn’t second-nature for Tofino’s webby team, so the work of evolving storage fell to just a few of us. (Does any project ever have enough people to work on storage?) We began to look for alternatives.

We explored a range of existing solutions: key-value stores, graph databases, and document stores, as well as the usual relational databases. Each seemed to be missing some key feature.

Most good storage systems simply aren’t suitable for embedding in a client application. There are lots of great storage systems that run on the JVM and scale across clusters, but we need to run on your Windows tablet! At the other end of the spectrum, most webby storage libraries aren’t intended to scale to the amount of data we need to store. Most graph and key-value stores are missing one or more of full-text indexing (crucial for the content we handle), expressive querying, defined schemas, or the kinds of indexing we need (e.g., fast range queries over visit timestamps). ‘Easy’ storage systems of all stripes often neglect concurrency, or transactionality, or multiple consumers. And most don’t give much thought to how materialized views and caches would be built on top to address the tension between flexibility and speed.

We found a couple of solutions that seemed to have the right shape (which I’ll discuss below), but weren’t quite something we could ship. Datomic is a production-grade JVM-based clustered relational knowledge store. It’s great, as you’d expect from Cognitect, but it’s not open-source and we couldn’t feasibly embed it in a Mozilla product. DataScript is a ClojureScript implementation of Datomic’s ideas, but it’s intended for in-memory use, and we need persistent storage for our datoms.

Nick and I try to be responsible engineers, so we explored the cheap solution first: adding persistence to DataScript. We thought we might be able to leverage all of the work that went into DataScript, and just flush data to disk. It soon became apparent that we couldn’t resolve the impedance mismatch between a synchronous in-memory store and asynchronous persistence, and we had concerns about memory usage with large datasets. Project Mentat was born.

Mentat is built on top of SQLite, so it gets all of SQLite’s reliability and features: full-text search, transactionality, durable storage, and a small memory footprint.

On top of that we’ve layered ideas from DataScript and Datomic: a transaction log with first-class transactions so we can see and annotate a history of events without boilerplate; a first-class mutable schema, so we can easily grow the knowledge store in new directions and introspect it at runtime; Datalog for storage-agnostic querying; and an expressive strongly typed schema language.

Datalog queries are translated into SQL for execution, taking full advantage of both the application’s rich schema and SQLite’s fast indices and mature SQL query planner.
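For intuition, here is roughly what that translation looks like for a simple query (the SQL below is a hand-written approximation; Mentat’s actual generated SQL and table layout are internal details):

```javascript
// A Datalog query over hypothetical "page/url" / "page/visitedAt"
// attributes: find the most recent visit date for a given URL.
const datalog = `
[:find (max ?date) .
 :in $ ?url
 :where
 [?page :page/url ?url]
 [?page :page/visitedAt ?date]]`;

// Approximate translation: each :where pattern becomes a self-join
// against a datoms (entity/attribute/value) table, and the shared
// variable ?page becomes the join condition between them.
const approximateSQL = `
SELECT max(d2.value)
FROM datoms d1
JOIN datoms d2 ON d2.entity = d1.entity
WHERE d1.attribute = 'page/url'
  AND d1.value = ?
  AND d2.attribute = 'page/visitedAt'`;
```

SQLite’s query planner then chooses indices for the joins, which is how the Datalog layer gets relational-database performance without its own execution engine.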

You can see more comparisons between Project Mentat and those storage systems in the README.

A proper tutorial will take more space than this blog post allows, but you can see a brief example in JS. It looks a little like this:

// Open a database.
let db = await datomish.open("/tmp/testing.db");

// Make sure we have our current schema.
await db.ensureSchema(schema);

// Add some data. Note that we use a temporary ID (the real ID
// will be assigned by Mentat).
let txResult = await db.transact([
  {"db/id": datomish.tempid(),
   "page/url": "https://mozilla.org/",
   "page/title": "Mozilla"}
]);

// Let's extend our schema. In the real world this would
// typically happen across releases.
schema.attributes.push({"name": "page/visitedAt",
                        "type": "instant",
                        "cardinality": "many",
                        "doc": "A visit to the page."});
await db.ensureSchema(schema);

// Now we can make assertions with the new vocabulary
// about existing entities.
// Note that we simply let Mentat find which page
// we're talking about by URL -- the URL is a unique property
// -- so we just use a tempid again.
await db.transact([
  {"db/id": datomish.tempid(),
   "page/url": "https://mozilla.org/",
   "page/visitedAt": (new Date())}
]);

// When did we most recently visit this page?
let date = (await db.q(
  `[:find (max ?date) .
    :in $ ?url
    :where
    [?page :page/url ?url]
    [?page :page/visitedAt ?date]]`,
  {"inputs": {"url": "https://mozilla.org/"}}));
console.log("Most recent visit: " + date);

Project Mentat is implemented in ClojureScript, and currently runs on three platforms: Node, Firefox (using Sqlite.jsm), and the JVM. We use DataScript’s excellent parser (thanks to Nikita Prokopov, principal author of DataScript!).

Addition, January 2017: we are in the process of rewriting Mentat in Rust. More blog posts to follow!

Nick has just finished porting Tofino’s User Agent Service to use Mentat for storage, which is an important milestone for us, and a bigger example of Mentat in use if you’re looking for one.

What’s next?

We’re hoping to learn some lessons. We think we’ve built a system that makes good tradeoffs: Mentat delivers schema flexibility with minimal boilerplate, and achieves similar query speeds to an application-specific normalized schema. Even the storage space overhead is acceptable.

I’m sure Tofino will push our performance boundaries, and we have a few ideas about how to exploit Mentat’s schema flexibility to help the rest of the Tofino team continue to move quickly. It’s exciting to have a solution that we feel strikes a good balance between storage rigor and real-world flexibility, and I can’t wait to see where else it’ll be a good fit.

If you’d like to come along on this journey with us, feel free to take a look at the GitHub repo, come find us on Slack in #mentat, or drop me an email with any questions. Mentat isn’t yet complete, but the API is quite stable. If you’re adventurous, consider using it for your next Electron app or Firefox add-on (there’s an example in the GitHub repository)… and please do send us feedback and file issues!


Many thanks to Lina Cambridge, Grisha Kruglov, Joe Walker, Erik Rose, and Nicholas Alexander for reviewing drafts of this post.

Introducing Project Mentat, a flexible embedded knowledge store was originally published in Project Tofino on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla L10NL10n Report: February 2021 Edition


New localizers

  • Ibrahim of Hausa (ha) drove the Common Voice web part to completion shortly after he joined the community.
  • Crowdsource Kurdish, and Amed of Kurmanji Kurdish (kmr) teamed up to finish the Common Voice site localization.
  • Saltykimchi of Malay (ms) joins us from the Common Voice community.
  • Ibrahimi of Pashto (ps) completed the Common Voice site localization in a few days!
  • Reem of Swahili (sw) has been laser focused on the Terminology project.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Mossi (mos)
  • Pashto (ps)

New content and projects

What’s new or coming up in Firefox desktop

First of all, let’s all congratulate the Silesian (szl) team for making their way into the official builds of Firefox. After spending several months in Nightly, they’re now ready for a general audience and will ride the trains to Beta and Release with Firefox 87.

Upcoming deadlines:

  • Firefox 86 is currently in Beta and will be released on February 23. The deadline to update localizations is on February 14.
  • Firefox 87 is in Nightly and will move to Beta on February 22.

This means that, as of February 23, we’ll be only two cycles away from the next big release of Firefox (89), which will include the UI redesign internally called Proton. Several strings have already been exposed for localization, and you can start testing them – always in a new profile! – by manually setting these preferences to true in about:config:

  • browser.proton.appmenu.enabled
  • browser.proton.enabled
  • browser.proton.tabs.enabled

It’s a constant work in progress, so expect the UI to change frequently, as new elements are added every few days.

One important thing to note: English will change several elements of the UI from Title Case to Sentence case. These changes will not require locales to retranslate all the strings, but each locale is expected to have clearly defined rules in its style guide about the correct capitalization to use for each part of the UI. If your locale follows the same capitalization rules as en-US, you’ll need to manually change these strings to match the updated version.

We’ll have more detailed follow-ups in the coming week about Proton, highlighting the key areas to test. In the meantime, make sure that your style guides are in good shape, and get in touch if you don’t know how to work on them in GitHub.

What’s new or coming up in mobile

You may have noticed some changes to the Firefox for Android (“Fenix”) release schedule – that affects in turn our l10n schedule for the project.

In fact, Firefox for Android is now mirroring the Firefox Desktop schedule (as much as possible). While you will notice that the Pontoon l10n deadlines are not quite the same between Firefox Android and Firefox Desktop, their release cadence will be the same, and this will help streamline our main products.

Firefox for iOS remains unchanged for now – although the team is aiming to streamline the release process as well. However, this also depends on Apple, so this may take more time to implement.

Concerning the Proton redesign (see section above about Desktop), we still do not know to what extent it will affect mobile. Stay tuned!

What’s new or coming up in web projects

Firefox Accounts:

The payment settings feature is going to be updated later this month through a Beta release. It will be open for localization at a later date. Stay tuned!


Migration to the Fluent format continues, and the webdev team aims to wrap up the migration by the end of February. We kindly remind all communities to check the migrated files for warnings and fix them right away. Otherwise, the strings will appear in English on an activated page in production, or the page may fall back to English entirely because it can’t meet the activation threshold of 80% completion. Please follow the priority order of the pages and work through them one at a time.

Common Voice

The project will be moved to Mozilla Foundation later this year. More details will be shared as soon as they become available.

This is a fairly small release, as the transition details are being finalized.

  • Fixed bug where “Voices Online” wasn’t tracking activity anymore
  • Redirected language request modal to Github issue template
  • Updated average seconds based on corpus 6.1
  • Increased leaderboards “load more” function from 5 additional records to 20
  • Localization/sentence updates

What’s new or coming up in SuMo

Since the beginning of 2021, SUMO has been supporting Firefox 85. You can see the full list of articles that we added and updated for Firefox 85 in the SUMO Sprint wiki page.

We also have good news from the Dutch team, which has been reshaping its team formation and has finally managed to localize 100% of the support articles in SUMO. This is a huge milestone for a team that had been a little behind in the past couple of years.

There are a lot more interesting changes coming up in our pipeline. Feel free to join SUMO Matrix room to discuss or just say hi.

Friends of the Lion

Image by Elio Qoshi

  • The Frisian (fy-NL) community hit the national news with the Voice Challenge, thanks to Wim for leading the effort. It was a competition between the Frisian and Dutch languages, a campaign to encourage more people to donate their voices through different platforms and capture the broadest demographics. The ultimate goal is to collect about 300 hours of Frisian speech.
  • Dutch team (nl) in SUMO, especially Tim Maks, Wim Benes, Onno Ekker, and Mark Heijl for completing 100% localization of the support articles in SUMO.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

hacks.mozilla.orgMDN localization update, February 2021

In our previous post, An update on MDN Web Docs’ localization strategy, we explained our broad strategy for moving forward with allowing translation edits on MDN again. The MDN localization communities are waiting for news of our progress on unfreezing the top-tier locales, and here we are. In this post we’ll look at where we’ve got to so far in 2021, and what you can expect moving forward.

Normalizing slugs between locales

Previously on MDN, we allowed translators to localize document URL slugs as well as the document title and body content. This sounds good in principle, but has created a bunch of problems. It has resulted in situations where it is very difficult to keep document structures consistent.

If you want to change the structure or location of a set of documentation, it can be nearly impossible to verify that you’ve moved all of the localized versions along with the en-US versions — some of them will be under differently-named slugs both in the original and new locations, meaning that you’d have to spend time tracking them down, and time creating new parent pages with the correct slugs, etc.

As a knock-on effect, this has also resulted in a number of localized pages being orphaned (not being attached to any parent en-US pages), and a number of en-US pages being translated more than once (e.g. localized once under the existing en-US slug, and then again under a localized slug).

For example, the following table shows the top-level directories in the en-US locale as of Feb 1, 2021, compared to that of the fr locale.

[Table: top-level directories under the en-US locale vs. the fr locale, as of Feb 1, 2021]

To make the non-en-US locales consistent and manageable, we are going to move to having en-US slugs only — all localized pages will be moved under their equivalent location in the en-US tree. In cases where that location cannot be reliably determined — e.g. where the documents are orphans or duplicates — we will put those documents into a specific storage directory, give them an appropriate prefix, and ask the maintenance communities for each unfrozen locale to sort out what to do with them.

  • Every localized document will be kept in a separate repo from the en-US content, but will have a corresponding en-US document with the same slug (folder path).
  • At first this will be enforced during deployment — we will move all the localized documents so that their locations are synchronized with their en-US equivalents. Every document that does not have a corresponding en-US document will be prefixed with orphaned during deployment. We plan to further automate this to check whenever a PR is created against the repo. We will also funnel back changes from the main en-US content repo, i.e. if an en-US page is moved, the localized equivalents will be automatically moved too.
  • All locales will be migrated. Unfortunately, some documents will be marked as orphaned and some others will be marked as conflicting (that is, a conflicting prefix is added to their slug). Conflicting documents have a corresponding en-US document with multiple translations in the same locale.
  • We plan to delete, archive, or move out orphaned/conflicting content.
  • Nothing will be lost since everything is in a git repo (even if something is deleted, it can still be recovered from the git history).
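As a rough, hypothetical sketch of the slug check (not MDN's actual tooling), classifying localized documents against the en-US tree could look like this, with slugs standing in for folder paths:

```python
def classify_localized(en_us_slugs, localized):
    """Classify each localized slug as ok, orphaned, or conflicting.

    localized maps a slug to the list of documents translated under it.
    """
    result = {}
    for slug, docs in localized.items():
        if slug not in en_us_slugs:
            result[slug] = "orphaned"      # no corresponding en-US document
        elif len(docs) > 1:
            result[slug] = "conflicting"   # several translations of one page
        else:
            result[slug] = "ok"
    return result

# Hypothetical example data for illustration.
en_us = {"Web/API", "Web/CSS"}
fr = {"Web/API": ["doc1"], "Web/CSS": ["doc2", "doc3"], "Web/Apprendre": ["doc4"]}
print(classify_localized(en_us, fr))
# {'Web/API': 'ok', 'Web/CSS': 'conflicting', 'Web/Apprendre': 'orphaned'}
```

In the real migration the orphaned and conflicting documents additionally get moved and prefixed, as described above.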

Processes for identifying unmaintained content

The other problem we have been wrestling with is how to identify what localized content is worth keeping, and what isn’t. Since many locales have been largely unmaintained for a long time, they contain a lot of content that is very out-of-date and getting further out-of-date as time goes on. Many of these documents are either not relevant any more at all, incomplete, or simply too much work to bring up to date (it would be better to just start from nothing).

It would be better for everyone involved to just delete this unmaintained content, so we can concentrate on higher-value content.

The criteria we have identified so far to indicate unmaintained content is as follows:

  • Pages that should have compat tables, which are missing them.
  • Pages that should have interactive examples and/or embedded examples, which are missing them.
  • Pages that should have a sidebar, but don’t.
  • Pages where the KumaScript is breaking so much that it’s not really renderable in a usable way.

These criteria are largely measurable; we ran some scripts on the translated pages to calculate which ones could be marked as unmaintained (they match one or more of the above). The results are as follows:

If you look for compat, interactive examples, live samples, orphans, and all sidebars:

  • Unmaintained: 30.3%
  • Disconnected (orphaned): 3.1%

If you look for compat, interactive examples, live samples, orphans, but not sidebars:

  • Unmaintained: 27.5%
  • Disconnected (orphaned):  3.1%

This would allow us to get rid of a large number of low-quality pages, and make dealing with localizations easier.
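The scan described above could be sketched roughly as follows; the page attribute names are assumptions for illustration, not the actual fields used by the MDN scripts:

```python
# Flag a translated page as unmaintained when it matches any of the
# measurable criteria listed above (missing compat table, missing
# interactive/live examples, missing sidebar).
CRITERIA = ("missing_compat", "missing_interactive",
            "missing_live_sample", "missing_sidebar")

def is_unmaintained(page, check_sidebars=True):
    checks = CRITERIA if check_sidebars else CRITERIA[:-1]
    return any(page.get(c, False) for c in checks)

# Hypothetical example pages.
pages = [
    {"slug": "a", "missing_compat": True},
    {"slug": "b", "missing_sidebar": True},
    {"slug": "c"},
]
with_sidebars = sum(is_unmaintained(p) for p in pages)
without_sidebars = sum(is_unmaintained(p, check_sidebars=False) for p in pages)
print(with_sidebars, without_sidebars)  # 2 1
```

This also shows why the two percentages above differ: dropping the sidebar criterion shrinks the unmaintained set.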

We created a spreadsheet that lists all the pages that would be put in the unmaintained category under the above rules, in case you are interested in checking them out.

Stopping the display of non-tier 1 locales

After we have unfrozen the “tier 1” locales (fr, ja, zh-CN, zh-TW), we are planning to stop displaying other locales. If no-one has the time to maintain a locale, and it is getting more out-of-date all the time, it is better to just not show it rather than have potentially harmful unmaintained content available to mislead people.

This makes sense considering how the system currently works. If someone has their browser language set to say fr, we will automatically serve them the fr version of a page, if it exists, rather than the en-US version — even if the fr version is old and really out-of-date, and the en-US version is high-quality and up-to-date.

Going forward, we will show en-US and the tier 1 locales that have active maintenance communities, but we will not display the other locales. To get a locale displayed again, we require an active community to step up and agree to have responsibility for maintaining that locale (which means reviewing pull requests, fixing issues filed against that locale, and doing a reasonable job of keeping the content up to date as new content is added to the en-US docs).

If you are interested in maintaining an unmaintained locale, we are more than happy to talk to you. We just need a plan. Please get in touch!

Note: Not showing the non-tier 1 locales doesn’t mean that we will delete all the content. We are intending to keep it available in our archived-content repo in case anyone needs to access it.

Next steps

The immediate next step is to get the tier 1 locales unfrozen, so we can start to get those communities active again and make that content better. We are hoping to get this done by the start of March. The normalizing slugs work will happen as part of this.

After that we will start to look at stopping the display of non-tier 1 localized content — that will follow soon after.

Identifying and removing unmaintained content will be a longer game to play — we want to involve our active localization communities in this work for the tier 1 locales, so this will be done after the other two items.

The post MDN localization update, February 2021 appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Backfilling rejected GPUActive Telemetry data

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla uses to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as Glean inspires it. You can find an index of all TWiG posts online.)

Data ingestion is a process that involves decompressing, validating, and transforming millions of documents every hour. The schemas of data coming into our systems are ever-evolving, sometimes causing partial outages of data availability when the conditions are ripe. Once the outage has been resolved, we run a backfill to fill in the gaps for all the missing data. In this post, I’ll discuss the error discovery and recovery processes through a recent bug.

Catching and fixing the error


Every Monday, a group of data engineers pores over a set of dashboards and plots indicating data ingestion health. On 2020-08-04, we filed a bug where we observed an elevated rate of schema validation errors coming from environment/system/gfx/adapters/N/GPUActive. For errors like these, which make up a small fraction of our overall volume, partial outages are typically not urgent (as in not “we need to drop everything right now and resolve this, stat!” critical). We contacted the subject-matter experts and found out that the code responsible for reporting multiple GPUs in the environment had changed.

A few weeks after we filed the GPUActive bug, an intern reached out to me about a DNS study he was running. I helped figure out that his external monitor setup with his MacBook was causing rejections like the ones we had seen weeks before. One PR and one deploy later, I watched the error rates for the GPUActive field abruptly drop to zero.

Figure: Error counts for environment/system/gfx/adapters/N/GPUActive

The schema’s misspecification resulted in 4.1 million documents between 2020-07-04 and 2020-08-20 being sent to our error stream, awaiting reprocessing.

Running a backfill

In January of 2021, we ran the backfill of the GPUActive rejects. First, we determined the backfill range by querying the relevant error table:

SELECT
  DATE(submission_timestamp) AS dt,
  COUNT(*) AS n
FROM
  `moz-fx-data-shared-prod.payload_bytes_error.telemetry`
WHERE
  submission_timestamp < '2020-08-21'
  AND submission_timestamp > '2020-07-03'
  AND exception_class = 'org.everit.json.schema.ValidationException'
  AND error_message LIKE '%GPUActive%'
GROUP BY
  dt
ORDER BY
  dt

The query helped verify the date range of the errors and their counts: 2020-07-04 through 2020-08-20. The following tables were affected:


We isolated the error documents into a backfill project named moz-fx-data-backfill-7 and mirrored our production BigQuery datasets and tables into it.

SELECT
  *
FROM
  `moz-fx-data-shared-prod.payload_bytes_error.telemetry`
WHERE
  DATE(submission_timestamp) BETWEEN "2020-07-04"
  AND "2020-08-20"
  AND exception_class = 'org.everit.json.schema.ValidationException'
  AND error_message LIKE '%GPUActive%'

Then we ran a suitable Dataflow job to populate our tables using the same ingestion code as the production jobs. It took about 31 minutes to run to completion. We copied and deduplicated the data into a dataset that mirrored our production environment.

gcloud config set project moz-fx-data-backfill-7
dates=$(python3 -c 'from datetime import datetime as dt, timedelta
start = dt.fromisoformat("2020-07-04")
days = (dt.fromisoformat("2020-08-20") - start).days + 1
print(" ".join([(start + timedelta(i)).isoformat()[:10] for i in range(days)]))')
./script/copy_deduplicate --project-id moz-fx-data-backfill-7 --dates $(echo $dates)

This step took hours because it iterated over all tables for ~50 days of data, regardless of whether they contained any. Future backfills should probably remove empty tables before kicking off this script.

Now that tables were populated, we handled data deletion requests since the time of the initial error. A module named Shredder serves the self-service deletion requests in BigQuery ETL. We ran Shredder from the bigquery-etl root.

  --billing-projects moz-fx-data-backfill-7
  --source-project moz-fx-data-shared-prod
  --target-project moz-fx-data-backfill-7
  --start_date 2020-06-01
  --only 'telemetry_stable.*'

This removed relevant rows from our final tables.

INFO:root:Scanned 515495784 bytes and deleted 1280 rows from moz-fx-data-backfill-7.telemetry_stable.crash_v4
INFO:root:Scanned 35301644397 bytes and deleted 45159 rows from moz-fx-data-backfill-7.telemetry_stable.event_v4
INFO:root:Scanned 1059770786 bytes and deleted 169 rows from moz-fx-data-backfill-7.telemetry_stable.first_shutdown_v4
INFO:root:Scanned 286322673 bytes and deleted 2 rows from moz-fx-data-backfill-7.telemetry_stable.heartbeat_v4
INFO:root:Scanned 134028021311 bytes and deleted 13872 rows from moz-fx-data-backfill-7.telemetry_stable.main_v4
INFO:root:Scanned 2795691020 bytes and deleted 1071 rows from moz-fx-data-backfill-7.telemetry_stable.modules_v4
INFO:root:Scanned 302643221 bytes and deleted 163 rows from moz-fx-data-backfill-7.telemetry_stable.new_profile_v4
INFO:root:Scanned 1245911143 bytes and deleted 6477 rows from moz-fx-data-backfill-7.telemetry_stable.update_v4
INFO:root:Scanned 286924248 bytes and deleted 10 rows from moz-fx-data-backfill-7.telemetry_stable.voice_v4
INFO:root:Scanned 175822424583 bytes and deleted 68203 rows in total
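As a quick sanity check, the per-table figures in the Shredder log above do add up to the reported totals:

```python
import re

# The "Scanned N bytes and deleted M rows" lines from the log above.
log = """\
Scanned 515495784 bytes and deleted 1280 rows from crash_v4
Scanned 35301644397 bytes and deleted 45159 rows from event_v4
Scanned 1059770786 bytes and deleted 169 rows from first_shutdown_v4
Scanned 286322673 bytes and deleted 2 rows from heartbeat_v4
Scanned 134028021311 bytes and deleted 13872 rows from main_v4
Scanned 2795691020 bytes and deleted 1071 rows from modules_v4
Scanned 302643221 bytes and deleted 163 rows from new_profile_v4
Scanned 1245911143 bytes and deleted 6477 rows from update_v4
Scanned 286924248 bytes and deleted 10 rows from voice_v4
"""
pairs = re.findall(r"Scanned (\d+) bytes and deleted (\d+) rows", log)
total_bytes = sum(int(b) for b, _ in pairs)
total_rows = sum(int(r) for _, r in pairs)
print(total_bytes, total_rows)  # 175822424583 68203
```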

After this was all done, we appended each of these tables to the production tables. Appending requires superuser permissions, so it was handed off to another engineer to finalize the deed. Afterward, we deleted the rows in the error table corresponding to the backfilled pings from the backfill-7 project.

DELETE FROM
  `moz-fx-data-shared-prod.payload_bytes_error.telemetry`
WHERE
  DATE(submission_timestamp) BETWEEN "2020-07-04"
  AND "2020-08-20"
  AND exception_class = 'org.everit.json.schema.ValidationException'
  AND error_message LIKE '%GPUActive%'

Finally, we updated the production errors with new errors generated from the backfill process.

bq cp --append_table   moz-fx-data-backfill-7:payload_bytes_error.telemetry   moz-fx-data-shared-prod:payload_bytes_error.telemetry

Now those rejected pings are available for analysis down the line. For the unadulterated backfill logs, see this PR to bigquery-backfill.


No system is perfect, but the processes we have in place allow us to systematically understand the surface area of issues and address failures. Our health check meeting improves our situational awareness of changes upstream in applications like Firefox, while our backfill logs in bigquery-backfill allow us to practice dealing with the complexities of recovering from partial outages. These underlying processes and systems are the same ones that facilitate the broader Glean ecosystem at Mozilla and will continue to exist as long as the data flows.

Mozilla Add-ons BlogExtensions in Firefox 86

Firefox 86 will be released on February 23, 2021. We’d like to call out two highlights and several bug fixes for the WebExtensions API that will ship with this release.


  • Extensions that have host permissions for tabs no longer need to request the broader “tabs” permission to have access to the tab URL, title, and favicon URL.
  • As part of our work on Manifest V3, we have landed an experimental base content security policy (CSP) behind a preference in Firefox 86.  The new CSP disallows remote code execution. This restriction only applies to extensions using manifest_version 3, which is not currently supported in Firefox (currently, only manifest_version 2 is supported). If you would like to test the new CSP for extension pages and content scripts, you must change your extension’s manifest_version to 3 and set extensions.manifestv3.enabled to true in about:config. Because this is a highly experimental and evolving feature, we want developers to be aware that extensions that work with the new CSP may break tomorrow as more changes are implemented.

Bug fixes

  • Redirected URIs can now be set to a loopback address in the identity.launchWebAuthFlow API. This fix makes it possible for extensions to successfully integrate with OAuth authentication for some common web services like Google and Facebook. This will also be uplifted to Firefox Extended Support Release (ESR) 78.
  • Firefox 76 introduced a regression where webRequest.StreamFilter did not disconnect after an API call, causing the loading icon on tabs to spin persistently. We’ve also fixed a bug that caused crashes when using view-source requests.
  • The zoom levels for the extensions options pages embedded in the Firefox Add-ons Manager (about:addons) tabs now work as expected.
  • Now that the tabs hiding API is enabled by default, the extensions.webextensions.tabhide.enabled preference is no longer displayed and references to it have been removed.

As a quick note, going forward we’ll be publishing release updates in the Firefox developer release notes on MDN. We will still announce major changes to the WebExtensions API, like new APIs, significant enhancements, and deprecation notices, on this blog as they become available.


Many thanks to community members Sonia Singla, Tilden Windsor, robbendebiene, and Brenda Natalia for their contributions to this release!

The post Extensions in Firefox 86 appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgBrowser fuzzing at Mozilla


Mozilla has been fuzzing Firefox and its underlying components for a while. It has proven to be one of the most efficient ways to identify quality and security issues. In general, we apply fuzzing on different levels: there is fuzzing the browser as a whole, but a significant amount of time is also spent on fuzzing isolated code (e.g. with libFuzzer) or whole components such as the JS engine using separate shells. In this blog post, we will talk specifically about browser fuzzing only, and go into detail on the pipeline we’ve developed. This single pipeline is the result of years of work that the fuzzing team has put into aggregating our browser fuzzing efforts to provide consistently actionable issues to developers and to ease integration of internal and external fuzzing tools as they become available.

Diagram showing interaction of systems used in Mozilla's browser fuzzing workflow

Build instrumentation

To be as effective as possible we make use of different methods of detecting errors. These include sanitizers such as AddressSanitizer (with LeakSanitizer), ThreadSanitizer, and UndefinedBehaviorSanitizer, as well as using debug builds that enable assertions and other runtime checks. We also make use of debuggers such as rr and Valgrind. Each of these tools provides a different lens to help uncover specific bug types, but many are incompatible with each other or require their own custom build to function or provide optimal results. Beyond debugging and error detection, some tools, such as code coverage and libFuzzer, cannot work at all without build instrumentation. Each operating system and architecture combination requires a unique build and may only support a subset of these tools.

Lastly, each variation has multiple active branches including Release, Beta, Nightly, and Extended Support Release (ESR). The Firefox CI Taskcluster instance builds each of these periodically.

Downloading builds

Taskcluster makes it easy to find and download the latest build to test. We discussed above the number of variants created by different instrumentation types, and we need to fuzz them in automation. Because of the large number of combinations of builds, artifacts, architectures, and operating systems, plus the need to unpack each one, downloading is a non-trivial task.

To help reduce the complexity of build management, we developed a tool called fuzzfetch. Fuzzfetch makes it easy to specify the required build parameters and it will download and unpack the build. It also supports downloading specified revisions to make it useful with bisection tools.

How we generate the test cases

As the goal of this blog post is to explain the whole pipeline, we won’t spend much time explaining fuzzers. If you are interested, please read “Fuzzing Firefox with WebIDL” and the in-tree documentation. We use a combination of publicly available and custom-built fuzzers to generate test cases.

How we execute, report, and scale

For fuzzers that target the browser, Grizzly manages and runs test cases and monitors for results. Creating an adapter allows us to easily run existing fuzzers in Grizzly.

Simplified Python code for a Grizzly adaptor using an external fuzzer.
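The code screenshot is not reproduced here. As a purely illustrative sketch of the concept (the class and method names below are assumptions, not Grizzly's actual API), an adapter is a thin bridge that asks an external fuzzer for data and writes it out as the test case the harness serves to the browser:

```python
import random


class ExternalFuzzer:
    """Stand-in for an external fuzzer that produces HTML test cases."""

    def generate(self):
        tags = random.sample(
            ["<div>fuzz</div>", "<span>fuzz</span>", "<table></table>"], 2)
        return "<!DOCTYPE html>" + "".join(tags)


class FuzzerAdapter:
    """Hypothetical adapter bridging an external fuzzer and the harness."""

    name = "external-fuzzer"

    def setup(self):
        self.fuzzer = ExternalFuzzer()

    def generate(self, testcase_path):
        # Ask the external fuzzer for data and write it to the file that
        # the harness will serve to the browser under test.
        with open(testcase_path, "w") as fp:
            fp.write(self.fuzzer.generate())


adapter = FuzzerAdapter()
adapter.setup()
adapter.generate("testcase.html")
```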

To make full use of available resources on any given machine, we run multiple instances of Grizzly in parallel.

For each fuzzer, we create containers to encapsulate the configuration required to run it. These exist in the Orion monorepo. Each fuzzer has a configuration with deployment specifics and resource allocation depending on the priority of the fuzzer. Taskcluster continuously deploys these configurations to distribute work and manage fuzzing nodes.

Grizzly Target handles the detection of issues such as hangs, crashes, and other defects. Target is an interface between Grizzly and the browser. Detected issues are automatically packaged and reported to a FuzzManager server. The FuzzManager server provides automation and a UI for triaging the results.

Other, more targeted fuzzers use the JS shell, and libFuzzer-based targets use the fuzzing interface. Many third-party libraries are also fuzzed in OSS-Fuzz. These deserve mention but are outside of the scope of this post.

Managing results

Running multiple fuzzers against various targets at scale generates a large amount of data. These crashes are not suitable for direct entry into a bug tracking system like Bugzilla. We have tools to manage this data and get it ready to report.

The FuzzManager client library filters out crash variations and duplicate results before they leave the fuzzing node. Unique results are reported to a FuzzManager server. The FuzzManager web interface allows for the creation of signatures that help group reports together in buckets to aid the client in detecting duplicate results.

Fuzzers commonly generate test cases that are hundreds or even thousands of lines long. FuzzManager buckets are automatically scanned to queue reduction tasks in Taskcluster. These reduction tasks use Grizzly Reduce and Lithium to apply different reduction strategies, often removing the majority of the unnecessary data. Each bucket is continually processed until a successful reduction is complete. Then an engineer can do a final inspection of the minimized test case and attach it to a bug report. The final result is often used as a crash test in the Firefox test suite.
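The general idea behind such reduction strategies can be sketched as follows (this is an illustration of the concept, not Grizzly Reduce's or Lithium's actual code): repeatedly try removing chunks of the test case, keeping each removal only if the failure still reproduces.

```python
def reduce_testcase(lines, still_crashes):
    """Minimal ddmin-style sketch: remove chunks of lines, keeping a
    removal whenever the failure still reproduces, halving the chunk
    size until single lines have been tried."""
    chunk = len(lines) // 2 or 1
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if still_crashes(candidate):
                lines = candidate  # keep the smaller test case
            else:
                i += chunk         # this chunk is needed; move on
        chunk //= 2
    return lines

# Toy predicate: the "crash" needs lines "b" and "d" to be present.
crash = lambda ls: "b" in ls and "d" in ls
print(reduce_testcase(list("abcdef"), crash))  # ['b', 'd']
```

Real reducers combine several such strategies (removing attributes, DOM subtrees, etc.), but the keep-if-it-still-crashes loop is the core of all of them.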

Animation showing an example testcase reduction using Grizzly

Code coverage of the fuzzer is also measured periodically. FuzzManager is used again to collect code coverage data and generate coverage reports.

Creating optimal bug reports

Our goal is to create actionable bug reports to get issues fixed as soon as possible while minimizing overhead for developers.

We do this by providing:

  • crash information such as logs and a stack trace
  • build and environment information
  • reduced test case
  • Pernosco session
  • regression range (bisections via Bugmon)
  • verification via Bugmon

Grizzly Replay is a tool that forms the basic execution engine for Bugmon and Grizzly Reduce, and makes it easy to collect rr traces to submit to Pernosco. It makes re-running browser test cases easy both in automation and for manual use. It simplifies working with stubborn test cases and test cases that trigger multiple results.

As mentioned, we have also been making use of Pernosco. Pernosco is a tool that provides a web interface for rr traces and makes them available to developers without the need for direct access to the execution environment. It is an amazing tool developed by a company of the same name which significantly helps to debug massively parallel applications. It is also very helpful when test cases are too unreliable to reduce or attach to bug reports. Creating an rr trace and uploading it can make stalled bug reports actionable.

The combination of Grizzly and Pernosco has had the added benefit of making infrequent, hard-to-reproduce issues actionable. A test case for a very inconsistent issue can be run hundreds or thousands of times until the desired crash occurs under rr. The trace is automatically collected and ready to be submitted to Pernosco and fixed by a developer, instead of being passed over because it was not actionable.

How we interact with developers

To request that new features get a proper assessment, the fuzzing team can be reached at fuzzing@mozilla.com or on Matrix. This is also a great way to get in touch for any reason. We are happy to help you with any fuzzing related questions or ideas. We will also reach out when we receive information about new initiatives and features that we think will require attention. Once fuzzing of a component begins, we communicate mainly via Bugzilla. As mentioned, we strive to open actionable issues or enhance existing issues logged by others.

Bugmon is used to automatically bisect regression ranges. This notifies the appropriate people as quickly as possible and verifies bugs once they are marked as FIXED. Closing a bug automatically removes it from FuzzManager, so if a similar bug finds its way into the code base, it can be identified again.

Some issues found during fuzzing will prevent us from effectively fuzzing a feature or build variant. These are known as fuzz-blockers, and they come in a few different forms. These issues may seem benign from a product perspective, but they can block fuzzers from targeting important code paths or even prevent fuzzing a target altogether. Prioritizing these issues appropriately and getting them fixed quickly is very helpful and much appreciated by the fuzzing team.

PrefPicker manages the set of Firefox preferences used for fuzzing. When adding features behind a pref, consider adding it to the PrefPicker fuzzing template to have it enabled during fuzzing. Periodic audits of the PrefPicker fuzzing template can help ensure areas are not missed and resources are used as effectively as possible.

Measuring success

As in other fields, measurement is a key part of evaluating success. We leverage the meta bug feature of Bugzilla to help us keep track of the issues identified by fuzzers. We strive to have a meta bug per fuzzer and for each new component fuzzed.

For example, the meta bug for Domino lists all the issues (over 1100!) identified by this tool. Using this Bugzilla data, we are able to show the impact over the years of our various fuzzers.

Bar graph showing number of bugs reported by Domino over time

Number of bugs reported by Domino over time

These dashboards help evaluate the return on investment of a fuzzer.


There are many components in the fuzzing pipeline. These components are constantly evolving to keep up with changes in debugging tools, execution environments, and browser internals. Developers are always adding, removing, and updating browser features. Bugs are being detected, triaged, and logged. Keeping everything running continuously and targeting as much code as possible requires constant and ongoing efforts.

If you work on Firefox, you can help by keeping us informed of new features and initiatives that may affect or require fuzzing, by prioritizing fuzz-blockers, and by curating fuzzing preferences in PrefPicker. If fuzzing interests you, please take part in the bug bounty program. Our tools are available publicly, and we encourage bug hunting.

The post Browser fuzzing at Mozilla appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Gfx TeamImproving texture atlas allocation in WebRender

This is going to be a rather technical dive into a recent improvement that went into WebRender.

Texture atlas allocation

In order to submit work to the GPU efficiently, WebRender groups as many drawing primitives as it can into what we call batches. A batch is submitted to the GPU as a single drawing command and has a few constraints. For example, a batch can only reference a fixed set of resources (such as GPU buffers and textures). So in order to group as many drawing primitives as possible in a single batch, we need to place as many drawing parameters as possible in as few resources as possible. When rendering text, WebRender pre-renders the glyphs before compositing them on the screen, so this means packing as many pre-rendered glyphs as possible into a single texture, and the same applies to rendering images and various other things.

For a moment let’s simplify the case of images and text and assume that it is the same problem: input images (rectangles) of various rectangular sizes that we need to pack into larger textures. This is the job of the texture atlas allocator. Another common name for this is rectangle bin packing.

Many in game and web development are used to packing many images into fewer assets. In most cases this can be achieved at build time, which means that the texture atlas allocator isn’t constrained by allocation performance and only needs to find a good layout for a fixed set of rectangles, without supporting dynamic allocation/deallocation within the atlas at run time. I call this “static” atlas allocation as opposed to “dynamic” atlas allocation.

There’s a lot more literature out there about static than dynamic atlas allocation. I recommend reading A thousand ways to pack the bin which is a very good survey of various static packing algorithms. Dynamic atlas allocation is unfortunately more difficult to implement while keeping good run-time performance. WebRender needs to maintain texture atlases into which items are added and removed over time. In other words we don’t have a way around needing dynamic atlas allocation.

A while back

A while back, WebRender used a simple implementation of the guillotine algorithm (explained in A thousand ways to pack the bin). This algorithm strikes a good compromise between packing quality and implementation complexity.
The main idea behind it can be explained simply: “Maintain a list of free rectangles, find one that can hold your allocation, then split the requested allocation size out of it, creating up to two additional rectangles that are added back to the free list.” There is subtlety in which free rectangle to choose and how to split it, but overall the algorithm is built upon reassuringly understandable concepts.
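As a rough sketch (not WebRender’s actual code), the split step of a guillotine allocation could look like the following, with the split-axis heuristic chosen arbitrarily for illustration:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rect { x: u32, y: u32, w: u32, h: u32 }

/// Carve a `w × h` allocation out of `free`, returning the allocated
/// rectangle and the (up to two) leftover free rectangles.
fn guillotine_split(free: Rect, w: u32, h: u32) -> Option<(Rect, Vec<Rect>)> {
    if w > free.w || h > free.h {
        return None; // doesn't fit
    }
    let allocated = Rect { x: free.x, y: free.y, w, h };
    let (dw, dh) = (free.w - w, free.h - h);
    let mut leftovers = Vec::new();
    if dw > dh {
        // Vertical cut: a full-height rectangle to the right,
        // plus the strip left under the allocation.
        if dw > 0 { leftovers.push(Rect { x: free.x + w, y: free.y, w: dw, h: free.h }); }
        if dh > 0 { leftovers.push(Rect { x: free.x, y: free.y + h, w, h: dh }); }
    } else {
        // Horizontal cut: a full-width rectangle below,
        // plus the strip left to the right of the allocation.
        if dh > 0 { leftovers.push(Rect { x: free.x, y: free.y + h, w: free.w, h: dh }); }
        if dw > 0 { leftovers.push(Rect { x: free.x + w, y: free.y, w: dw, h }); }
    }
    Some((allocated, leftovers))
}

fn main() {
    let atlas = Rect { x: 0, y: 0, w: 128, h: 128 };
    let (alloc, free_list) = guillotine_split(atlas, 32, 16).unwrap();
    assert_eq!(alloc, Rect { x: 0, y: 0, w: 32, h: 16 });
    // Horizontal cut here: a 128×112 strip below and a 96×16 rectangle
    // to the right of the allocation go back on the free list.
    assert_eq!(free_list[0], Rect { x: 0, y: 16, w: 128, h: 112 });
    assert_eq!(free_list[1], Rect { x: 32, y: 0, w: 96, h: 16 });
    println!("ok");
}
```

Real implementations differ mainly in the choice heuristics (best-fit vs. first-fit, which axis to cut) and in how they track the free list.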

Deallocation could simply consist of adding the deallocated rectangle back to the free list, but without some way to merge neighboring free rectangles, the atlas would quickly get into a fragmented state, with a lot of small free rectangles and no ability to allocate larger ones anymore.

<figcaption>Lots of free space, but too fragmented to host large-ish allocations.</figcaption>

To address that, WebRender’s implementation would regularly do an O(n²) search to find and merge neighboring free rectangles, which was very slow when dealing with thousands of items. Eventually we stopped using the guillotine allocator in systems that needed support for deallocation, replacing it with a very simple slab allocator which I’ll get back to further down this post.

Moving to a worse allocator because of the run-time defragmentation issue was rubbing me the wrong way, so as a side project I wrote a guillotine allocator that tracks rectangle splits in a tree in order to find and merge neighboring free rectangles in constant instead of quadratic time. I published it in the guillotiere crate. I wrote about how it works in detail in the documentation, so I won’t go over it here. I’m quite happy with how it turned out, although I hadn’t pushed to use it in WebRender, mostly because I wanted to first see evidence that this type of change was needed, and I already had evidence for many other things that needed to be worked on.

The slab allocator

What replaced WebRender’s guillotine allocator in the texture cache was a very simple allocator based on fixed power-of-two square slabs, with a few special-cased rectangular slab sizes for tall and narrow items to avoid wasting too much space. The texture is split into 512×512 regions, and each region is split into a grid of slabs with a fixed slab size per region.

<figcaption>The slab allocator in action. This is a debugging view generated from a real browsing session.</figcaption>

This is a very simple scheme with very fast allocation and deallocation, but it tends to waste a lot of texture memory. For example, allocating an 8×10 pixel glyph occupies a 16×16 slot, wasting more than twice the requested space. Ouch!
In addition, since each region can only hold a single slab size, space can be wasted when a region has few allocations because its slab size happens to be uncommon.
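To illustrate the waste, here is a hypothetical sketch of square power-of-two slot selection for a glyph; the exact slab sizes, minimums, and special cases in WebRender differ:

```rust
/// Pick the side of the square power-of-two slab that can hold
/// a `w × h` item (illustrative; real slab allocators clamp to a
/// minimum size and special-case some rectangular slabs).
fn square_slab_side(w: u32, h: u32) -> u32 {
    w.max(h).next_power_of_two()
}

fn main() {
    // The 8×10 glyph from the text lands in a 16×16 slot.
    let side = square_slab_side(8, 10);
    assert_eq!(side, 16);
    // 256 pixels occupied for 80 requested: the 176 wasted pixels
    // are more than twice the area the glyph actually needed.
    assert!(side * side - 8 * 10 > 2 * 8 * 10);
    println!("slot: {0}x{0}", side);
}
```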

Improvements to the slab allocator

Images and glyphs used to be cached in the same textures. However, we render images and glyphs with different shaders, so they can never be used in the same rendering batches. I changed images and glyphs to be cached in separate sets of textures, which provided a few opportunities.

Not mixing images and glyphs means the glyph textures get more room for glyphs, which reduces the overall number of textures containing glyphs. In other words, fewer chances to break batches. The same naturally applies to images. This comes at the expense of allocating more textures on average, but it is a good trade-off for us, and we compensate for the memory increase by using tighter packing.

In addition, glyphs and images are different types of workloads: we usually have a few hundred images of all sizes in the cache, while we have thousands of glyphs most of which have similar small sizes. Separating them allows us to introduce some simple workload-specific optimizations.

The first optimization came from noticing that glyphs are almost never larger than 128px. Having more, smaller regions reduces the amount of atlas space wasted by partially empty regions and lets us hold more slab sizes at a given time, so I reduced the region size from 512×512 to 128×128 in the glyph atlases. In the unlikely event that a glyph is larger than 128×128, it goes into the image atlas.

Next, I recorded the allocations and deallocations while browsing different pages, gathered statistics about the most common glyph sizes, and noticed that on a low-dpi screen a quarter of the glyphs would land in a 16×16 slab but would have fit in an 8×16 slab. In Latin scripts at least, glyphs are usually taller than they are wide. Adding 8×16 and 16×32 slab sizes that take advantage of this helps a lot.
I could have further optimized specific slab sizes by looking at the data I had collected, but the more slab sizes I added, the higher the risk of regressing different workloads. This problem is called over-fitting. I don’t know enough about the many non-Latin scripts used around the world to trust that my testing workloads were representative enough, so I decided to stick to safe bets (such as “glyphs are usually small”) and avoid piling up optimizations that might penalize some languages. Adding two slab sizes was fine (and worth it!) but I wouldn’t add ten more of them.

<figcaption>The original slab allocator needed two textures to store a workload that the improved allocator can fit into a single one.</figcaption>

At this point, I had nice improvements to glyph allocation using the slab allocator, but I had a clear picture of the ceiling I would hit from the fixed slab allocation approach.

Shelf packing allocators

I already had guillotiere in my toolbelt, in addition to which I experimented with two algorithms derived from the shelf packing allocation strategy, both of them released in the Rust crate etagere. The general idea behind shelf packing is to separate the 2-dimensional allocation problem into a 1D vertical allocator for the shelves and within each shelf, 1D horizontal allocation for the items.

The atlas is initialized with no shelves. When allocating an item, we first find the shelf that is the best vertical fit for the item; if there is none, or the best fit would waste too much vertical space, we add a new shelf. Once we have found or added a suitable shelf, a horizontal slice of it is used to host the allocation.

At a glance we can see that this scheme is likely to provide much better packing than the slab allocator. For one, items are tightly packed horizontally within the shelves. That alone saves a lot of space compared to the power-of-two slab widths. A bit of waste happens vertically, between an item and the top of its shelf. How much the shelf allocator wastes vertically depends on how the shelf heights are chosen. Since we aren’t constrained to power-of-two sizes, we can also do much better than the slab allocator vertically.
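The scheme above boils down to a pair of 1D allocators. Here is a minimal, allocation-only sketch; this is illustrative and not the etagere implementation (deallocation, shelf merging, and bucketing are omitted):

```rust
struct Shelf { y: u32, height: u32, cursor_x: u32 }

struct Atlas { width: u32, height: u32, shelves: Vec<Shelf>, next_y: u32 }

impl Atlas {
    fn new(width: u32, height: u32) -> Self {
        Atlas { width, height, shelves: Vec::new(), next_y: 0 }
    }

    /// Returns the (x, y) position of the allocated item, or None if full.
    fn allocate(&mut self, w: u32, h: u32) -> Option<(u32, u32)> {
        // Pick the existing shelf that wastes the least vertical space.
        let mut best: Option<usize> = None;
        for (i, shelf) in self.shelves.iter().enumerate() {
            if shelf.height >= h && shelf.cursor_x + w <= self.width {
                if best.map_or(true, |b| shelf.height < self.shelves[b].height) {
                    best = Some(i);
                }
            }
        }
        // No suitable shelf: open a new one below the last.
        let idx = match best {
            Some(i) => i,
            None => {
                if w > self.width || self.next_y + h > self.height {
                    return None;
                }
                self.shelves.push(Shelf { y: self.next_y, height: h, cursor_x: 0 });
                self.next_y += h;
                self.shelves.len() - 1
            }
        };
        // Bump-allocate a horizontal slice of the chosen shelf.
        let shelf = &mut self.shelves[idx];
        let pos = (shelf.cursor_x, shelf.y);
        shelf.cursor_x += w;
        Some(pos)
    }
}

fn main() {
    let mut atlas = Atlas::new(256, 256);
    assert_eq!(atlas.allocate(16, 16), Some((0, 0)));  // opens a 16px-tall shelf
    assert_eq!(atlas.allocate(8, 16), Some((16, 0)));  // fits on the same shelf
    assert_eq!(atlas.allocate(32, 32), Some((0, 16))); // opens a 32px-tall shelf
    println!("ok");
}
```

A real implementation also needs a "waste too much vertical space" threshold when reusing a shelf; the best-fit scan above always reuses any shelf that is tall enough.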

The bucketed shelf allocator

The first shelf allocator I implemented was inspired by Mapbox’s shelf-pack allocator, written in JavaScript. It has an interesting bucketing strategy: items are accumulated into fixed-size “buckets” that behave like small bump allocators. Shelves are divided into a certain number of buckets, and a bucket is only freed when all of its elements are freed. The trade-off here is to keep atlas space occupied for longer in order to reduce the CPU cost of allocating and deallocating. Only the top-most shelf is removed when empty, so consecutive empty shelves in the middle aren’t merged until they become the top-most shelves, which can cause a bit of vertical fragmentation in long-running sessions. When the atlas is full of (potentially empty) shelves, the chance that a new item is too tall to fit into one of the existing shelves depends on how common the item’s height is. Glyphs tend to be of similar (small) heights, so it works out well enough.

I added very limited support for merging neighboring empty shelves. When an allocation fails, the atlas iterates over the shelves and checks whether there is a sequence of empty shelves that, in total, would be able to fit the requested allocation. If so, the first shelf of the sequence takes on the size of the sum, and the other shelves are squashed to zero height. It sounds like a band-aid (it is), but the code is simple and it works within the constraints that keep the rest of the allocator very simple and fast. It’s only a limited form of support for merging empty shelves, but it was an improvement for workloads that contain both small and large items.

<figcaption>Image generated from the glyph cache in a real browsing session via a debugging tool. We see fewer, wider boxes rather than many thin boxes because the allocator internally doesn’t keep track of each item rectangle individually, so we only see buckets filling up instead.</figcaption>

This allocator worked quite well for the glyph texture (unsurprisingly, as Mapbox’s implementation, which inspired it, is used with their glyph cache). The bucketing strategy was problematic, however, with large images: the relative cost of keeping allocated space alive longer was higher for larger items. Especially for long-running sessions, this allocator was a good candidate for the glyph cache but not for the image cache.

The simple shelf allocator

The guillotine allocator was working rather well with images. I was close to just using it for the image cache and moving on. However, having spent a lot of time looking at various allocation patterns, my intuition was that we could do better. This was largely thanks to being able to visualize the algorithms via our integrated debugging tool, which can generate nice SVG visualizations.

It motivated experimenting with a second shelf allocator. This one is conceptually even simpler: a basic vertical 1D allocator for shelves, with a basic horizontal 1D allocator per shelf. Since all items are managed individually, they are deallocated eagerly, which is the main advantage over the bucketed implementation. It is also why it is slower than the bucketed allocator, especially when the number of items is high. This allocator also has full support for merging and splitting empty shelves wherever they are in the atlas.

<figcaption>This was generated from the same glyph cache workload as the previous image.</figcaption>

Unlike the bucketed allocator, this one worked very well for the image workloads. For short sessions (visiting only a handful of web pages) it did not pack as tightly as the guillotine allocator, but after browsing for longer periods of time it had a tendency to deal better with fragmentation.

<figcaption>The simple shelf allocator used on the image cache. Notice how different the image workloads look (using the same texture size), with far fewer items and a mix of large and small item sizes.</figcaption>

The implementation is very simple: a linear scan over the shelves, then another linear scan within the selected shelf to find a spot for the allocation. I expected performance to scale somewhat poorly with a high number of glyphs (we are dealing with thousands of glyphs, which arguably isn’t that high), but the performance hit wasn’t as bad as I had anticipated, probably helped by the mostly cache-friendly underlying data structure.

A few other experiments

For both allocators I implemented the ability to split the atlas into a fixed number of columns. Adding columns means more (smaller) shelves in the atlas, which further reduces vertical fragmentation issues, at the cost of wasting some space at the end of the shelves. Good results were obtained on 2048×2048 atlases with two columns. You can see in the previous two images that the shelf allocator was configured to use two columns.

The shelf allocators support arranging items in vertical shelves instead of horizontal ones. It can have an impact depending on the type of workload, for example if there is more variation in width than height for the requested allocations. As far as my testing went, it did not make a significant difference with workloads recorded in Firefox so I kept the default horizontal shelves.

The allocators also support enforcing specific alignments in x and y (effectively, rounding up the size of allocated items to a multiple of the x and y alignment). This introduces a bit of wasted space but avoids leaving tiny holes in some cases. Some platforms also require a certain alignment for various texture transfer operations, so it is a useful knob to have at our disposal. In the Firefox integration, we use different alignments for each type of atlas, favoring small alignments for atlases that mostly contain small items, to keep the relative wasted space small.
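The alignment knob boils down to rounding each requested dimension up to the nearest multiple of the alignment. A trivial sketch:

```rust
/// Round `size` up to the nearest multiple of `alignment`.
/// Assumes alignment > 0; for power-of-two alignments this could
/// also be written with bit masking.
fn align_up(size: u32, alignment: u32) -> u32 {
    (size + alignment - 1) / alignment * alignment
}

fn main() {
    assert_eq!(align_up(13, 8), 16); // a 13px dimension occupies 16px
    assert_eq!(align_up(16, 8), 16); // already aligned: unchanged
    assert_eq!(align_up(9, 4), 12);
    println!("ok");
}
```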


<figcaption>Various visualizations generated while I was working on this. It’s been really fun to be able to “look” at the algorithms at each step of the process.</figcaption>

The guillotine allocator is the best at keeping track of all available space and provides the best packing of the algorithms I tried. The shelf allocators waste a bit of space by simplifying the arrangement into shelves, and the slab allocator wastes a lot of space for the sake of simplicity. On the other hand, the guillotine allocator is the slowest when dealing with many thousands of items and can suffer from fragmentation in some of the workloads I recorded. Overall, the best compromise was the simple shelf allocator, which I ended up integrating into Firefox for both glyph and image cache textures (in both cases configured with two columns per texture). The bucketed allocator is still a very reasonable option for glyphs, and we could switch to it in the future if we feel we should trade some packing efficiency for allocation/deallocation performance. In other parts of WebRender, for short-lived (single-frame) atlases, the guillotine allocation algorithm is used.

These observations are mostly workload-dependent, though. Workloads are rarely completely random so results may vary.

There are other algorithms I could have explored (and maybe will someday, who knows), but I had found a satisfying compromise between simplicity, packing efficiency, and performance. I wasn’t aiming for state-of-the-art packing efficiency. Simplicity was a very important parameter, and whatever solution I came up with would have to be simple enough to ship in a web browser without risk.

To recap, my goals were to:

  • allow packing more texture cache items into fewer textures,
  • reduce the amount of texture allocation/deallocation churn,
  • avoid increasing GPU memory usage, and if possible reduce it.

This was achieved by improving atlas packing to the point that we rarely have to allocate multiple textures for each item type. The results look pretty good so far. Before these changes, glyphs in Firefox would often be spread over a number of textures after visiting only a couple of websites. Currently, cache eviction is tuned so that we rarely need more than one or two textures with the new allocator, and I am planning to crank it up so we only use a single texture. For images, the shelf allocator is a pretty big win as well: what used to fit into five textures now fits into two or three. Today this translates into fewer draw calls and fewer CPU-to-GPU transfers, which has a noticeable performance impact on low-end Intel GPUs, in addition to reducing GPU memory usage.

The slab allocator improvements landed in bug 1674443 and shipped in Firefox 85, while the shelf allocator integration work went in bug 1679751 and will hit the release channel in Firefox 86. The interesting parts of this work are packaged in a couple of Rust crates under the permissive MIT OR Apache-2.0 license:

Mozilla Add-ons Blog: addons.mozilla.org API v3 Deprecation

The addons.mozilla.org (AMO) external API can be used by users and developers to get information about add-ons available on AMO, and to submit new add-on versions for signing. It’s also used by Firefox for recommendations, by the web-ext tool, and internally within the addons.mozilla.org website, among other things.

We plan to shut down Version 3 (v3) of the AMO API on December 31, 2021. If you have any personal scripts that rely on v3 of the API, or if you interact with the API through other means, we recommend that you switch to the stable v4. You don’t need to take any action if you don’t use the AMO API directly. The AMO API v3 is entirely unconnected to manifest v3 for the WebExtensions API, which is the umbrella project for major changes to the extensions platform itself.

Roughly five years ago, we introduced v3 of the AMO API for add-on signing. Since then, we have continued developing additional versions of the API to fulfill new requirements, but have maintained v3 to preserve backwards compatibility. However, having to maintain multiple different versions has become a burden. This is why we’re planning to update dependent projects to use v4 of the API soon and shut down v3 at the end of the year.

You can find more information about v3 and v4 on our API documentation site. When updating your scripts, we suggest just making the change from “/v3/” to “/v4/” and seeing if everything still works – in most cases it will.

Feel free to contact us if you have any difficulties.

The post addons.mozilla.org API v3 Deprecation appeared first on Mozilla Add-ons Blog.

Blog of Data: This Week in Glean: The Glean Dictionary

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

On behalf of Mozilla’s Data group, I’m happy to announce the availability of the first milestone of the Glean Dictionary, a project to provide a comprehensive “data dictionary” of the data Mozilla collects inside its products and how it makes use of it. You can access it via this development URL:


The goal of this first milestone was to provide an equivalent to the popular “probe” dictionary for newer applications which use the Glean SDK, such as Firefox for Android. As Firefox on Glean (FoG) comes together, this will also serve as an index of what data is available for Firefox and how to access it.

Part of the vision of this project is to act as a showcase for Mozilla’s practices around lean data and data governance: you’ll note that every metric and ping in the Glean Dictionary has a data review associated with it — giving the general public a window into what we’re collecting and why.

In addition to displaying a browsable inventory of the low-level metrics which these applications collect, the Glean Dictionary also provides:

  • Code search functionality (via Searchfox) to see where any given data collection is defined and used.
  • Information on how this information is represented inside Mozilla’s BigQuery data store.
  • Where available, links to browse / view this information using the Glean Aggregated Metrics Dashboard (GLAM).

Over the next few months, we’ll be expanding the Glean Dictionary to include derived datasets and dashboards / reports built using this data, as well as allow users to add their own annotations on metric behaviour via a GitHub-based documentation system. For more information, see the project proposal.

The Glean Dictionary is the result of the efforts of many contributors, both inside and outside Mozilla Data. Special shout-out to Linh Nguyen, who has been moving mountains inside the codebase as part of an Outreachy internship with us. We welcome your feedback and involvement! For more information, see our project repository and Matrix channel (#glean-dictionary on chat.mozilla.org).

Open Policy & Advocacy: Five issues shaping data, tech and privacy in the African region in 2021

The COVID-19 crisis increased our reliance on technology and accelerated tech disruption and innovation, as we innovated to fight the virus and cushion its impact. Nowhere was this felt more keenly than in the African region, where the number of people with internet access continued to increase and the corresponding risks to their privacy and data protection rose in tandem. On the eve of 2021 Data Privacy Day, we take stock of the key issues that will shape data and privacy in the African region in the coming year.

  • Data Protection: Africa is often used as a testing ground for technologies produced in other countries. As a result, people’s data is increasingly stored in hundreds of databases globally. While many in the region are still excluded from enjoying basic rights, their personal information is a valuable commodity in the global market, even when no safeguard mechanisms exist. Where safeguards exist, they are still woefully inadequate. One of the reasons Cambridge Analytica could amass large databases of personal information was the lack of data protection mechanisms in countries where they operated. This 2017 global scandal served as a wakeup call for stronger data protection for many African states. Many countries are therefore strengthening their legal provisions regarding access to personal data, with over 30 countries having enacted Data Protection legislation. Legislators have the African Union Convention on Cybersecurity and Personal Data Protection (Malabo Convention) 2014 to draw upon and we are likely to see more countries enacting privacy laws and setting up Data Protection Authorities in 2021, in a region that would otherwise have taken a decade or more to enact similar protections.
  • Digital ID: The UN’s Sustainable Development Goal 16.9 aims to provide legal identity for all, including birth registration. But this goal is often conflated with and used to justify high-tech biometric ID schemes. These national level identification projects are frequently funded through development aid agencies or development loans from multilaterals and are often duplicative of existing schemes. They are set up as unique, centralised single sources of information on people and meant to replace existing sectoral databases, and guarantee access to a series of critical public and private services. However, they risk abusing privacy and amplifying patterns of discrimination and exclusion. Given the impending rollout of COVID-19 vaccination procedures, we can expect digital ID to remain a key issue across the region. It will be vital that the discrimination risks inherent within digital IDs are not amplified to deny basic health benefits.
  • Behavioural Biometrics: In addition to government-issued digital IDs, our online and offline lives now require either identifying ourselves or being identified. Social login services which let us log in with Facebook, ad IDs that are used to target ads to us, or an Apple ID that connects our text messages, music preferences, app purchases, and payments to a single identifier, are all examples of private companies using identity systems to amass vast databases about us. Today’s behavioural biometrics technologies go further and use hundreds of unique parameters to analyse how someone uses their digital devices, their browsing history, how they hold their phone, and even how quickly they text, providing a mechanism to passively authenticate people without their knowledge. For example, mobile lending services are “commodifying the routine habits of Kenyans, transforming their behaviour into reputational data” to be monitored, assessed, and shared, adding another worryingly sophisticated layer to identity verification and leading to invasion of privacy and exclusion.
  • FinTech: An estimated 1.7 billion people lack access to financial services. FinTech solutions, e.g. mobile money, have assumed a role in improving financial inclusion, while also serving as a catalyst for innovation in sectors like health, agriculture, etc. These solutions are becoming a way of life in many African countries, attracting significant investments in new transaction technologies. FinTech products collect significant amounts of personal data, including users’ names, location records, bank account details, email addresses, and sensitive data relating to religious practices, ethnicity, race, credit information, etc. The sheer volume of information increases its sensitivity, and over time a FinTech company may generate a very detailed and complete picture of an individual while also collecting data that may have nothing to do with financial scope, for example, text messages, call logs, and address books. Credit analytics firms like Cignifi are extracting data from unbanked users to develop predictive algorithms. As FinTech continues to grow exponentially across the region in 2021, we can expect a lot of focus on ensuring companies adopt responsible, secure, and privacy-protective practices.
  • Surveillance and facial recognition technologies: Increased state and corporate surveillance through foreign-sourced technologies raises questions of how best to safeguard privacy on the continent. Governments in the region are likely to use surveillance technologies more to curb freedom of expression and freedom of assembly in contexts of political control. The increasing use of facial recognition technologies without accompanying legislation to mitigate privacy, security and discrimination risks, is of great concern. The effort to call out and rein in bad practices and ensure legislative safeguards will continue in 2021. Fortunately, we already have some successes to build on. For instance, in South Africa, certain provisions of the Regulation of Interception of Communications and Provision of Communication Related Information Act have been declared unconstitutional.

As we move through 2021, the African region will continue to see Big Tech’s unencumbered rise, with vulnerable peoples’ data being used to enhance companies’ innovations, entrench their economic and political power, while impacting the social lives of billions of people. Ahead of Data Privacy Day, we must remember that our work to ensure data protection and data privacy will not be complete until all individuals, no matter where they are located in the world, enjoy the same rights and protections.

The post Five issues shaping data, tech and privacy in the African region in 2021 appeared first on Open Policy & Advocacy.

about:community: New contributors to Firefox 85

With Firefox 85 fresh out of the oven, we are delighted to welcome the developers who contributed their first code change to Firefox in this release, 13 of whom are new volunteers! Please join us in thanking each of them, and take a look at their contributions:

hacks.mozilla.org: January brings us Firefox 85

To wrap up January, we are proud to bring you the release of Firefox 85. In this version we are bringing you support for the :focus-visible pseudo-class in CSS and associated devtools, <link rel="preload">, and the complete removal of Flash support from Firefox. We’d also like to invite you to preview two exciting new JavaScript features in the current Firefox Nightly — top-level await and relative indexing via the .at() method. Have fun!

This blog post provides merely a set of highlights; for all the details, check out the following:


The :focus-visible pseudo-class, previously supported in Firefox via the proprietary :-moz-focusring pseudo-class, allows the developer to apply styling to elements in cases where browsers use heuristics to determine that focus should be made evident on the element.

The most obvious case is when you use the keyboard to focus an element such as a button or link. There are often cases where designers will want to get rid of the ugly focus-ring, commonly achieved using something like :focus { outline: none }, but this causes problems for keyboard users, for whom the focus-ring is an essential accessibility aid.

:focus-visible allows you to apply a focus-ring alternative style only when the element is focused using the keyboard, and not when it is clicked.

For example, this HTML:

<p><button>Test button</button></p>
<p><input type="text" value="Test input"></p>
<p><a href="#">Test link</a></p>

Could be styled like this:

/* Remove the default focus outline only on browsers that support :focus-visible */
a:not(:focus-visible), button:not(:focus-visible), input:not(:focus-visible) {
  outline: none;
}

/* Add a strong indication on browsers that support :focus-visible */
a:focus-visible, button:focus-visible, input:focus-visible {
  outline: 4px dashed orange;
}
And as another nice addition, the Firefox DevTools’ Page Inspector now allows you to toggle :focus-visible styles in its Rules View. See Viewing common pseudo-classes for more details.


After a couple of false starts in previous versions, we are now proud to announce support for <link rel="preload">, which allows developers to instruct the browser to preemptively fetch and cache high-importance resources ahead of time. This ensures they are available earlier and are less likely to block page rendering, improving performance.

This is done by including rel="preload" on your link element, and an as attribute containing the type of resource that is being preloaded, for example:

<link rel="preload" href="style.css" as="style">
<link rel="preload" href="main.js" as="script">

You can also include a type attribute containing the MIME type of the resource, so a browser can quickly see what resources are on offer, and ignore ones that it doesn’t support:

<link rel="preload" href="video.mp4" as="video" type="video/mp4">
<link rel="preload" href="image.webp" as="image" type="image/webp">

See Preloading content with rel=”preload” for more information.

The Flash is dead, long live the Flash

Firefox 85 sees the complete removal of Flash support from the browser, with no means to turn it back on. This is a coordinated effort across browsers, and as our plugin roadmap shows, it has been on the cards for a long time.

For some like myself — who have many nostalgic memories of the early days of the web, and all the creativity, innovation, and just plain fun that Flash brought us — this is a bittersweet day. It is sad to say goodbye to it, but at the same time the advantages of doing so are clear. Rest well, dear Flash.

Nightly previews

There are a couple of upcoming additions to Gecko that are currently available only in our Nightly Preview. We thought you’d like to get a chance to test them early and give us feedback, so please let us know what you think in the comments below!

Top-level await

async/await has been around for a while now, and is proving popular with JavaScript developers because it allows us to write promise-based async code more cleanly and logically. The following trivial example illustrates the idea of using the await keyword inside an async function to turn a returned value into a resolved promise.

async function hello() {
  return await Promise.resolve("Hello");
}


The trouble here is that await was originally only allowed inside async functions, and not in the global scope. The experimental top-level await proposal addresses this, by allowing global awaits. This has many advantages in situations like wanting to await the loading of modules in your JS application. Check out the proposal for some useful examples.

What’re you pointing at()?

Currently an ECMAScript stage 3 draft proposal, the relative indexing method .at() has been added to Array, String, and TypedArray instances to provide an easy way of returning specific index values in a relative manner. You can use a positive index to count forwards from position 0, or a negative value to count backwards from the highest index position.

Try these, for example:

let myString = 'Hello, how are you?';
myString.at(4);  // "o"

let myArray = [0, 10, 35, 70, 100, 300];
myArray.at(-1); // 300

Last but not least, let’s look at what has changed in our WebExtensions implementation in Fx 85.

And finally, we want to remind you about upcoming site isolation changes with Project Fission. As we previously mentioned, the drawWindow() method is being deprecated as part of this work. If you use this API, we recommend that you switch to using the captureTab() method instead.
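As a rough sketch of that migration, tabs.captureTab() takes a tab ID and resolves with a data: URL of the rendered tab. The `browser` object below is a stand-in stub so the sketch is self-contained and runnable outside an extension; in a real extension you would use the global WebExtensions API (with the appropriate permissions) instead.

```javascript
// Stub standing in for the WebExtensions `browser` global, so this sketch
// runs outside an extension. The real tabs.captureTab(tabId, options)
// resolves with a data: URL of the captured tab.
const browser = {
  tabs: {
    captureTab: async (tabId, options = { format: "png" }) =>
      `data:image/${options.format};base64,iVBORw0KGgo=`,
  },
};

// Replacement for drawWindow()-based capture: ask the browser to render
// the tab for us rather than drawing it to a canvas ourselves.
async function snapshotTab(tabId) {
  return browser.tabs.captureTab(tabId, { format: "png" });
}

snapshotTab(1).then((dataUrl) => console.log(dataUrl.slice(0, 15))); // "data:image/png;"
```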

The post January brings us Firefox 85 appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 85 Cracks Down on Supercookies

Trackers and adtech companies have long abused browser features to follow people around the web. Since 2018, we have been dedicated to reducing the number of ways our users can be tracked. As a first line of defense, we’ve blocked cookies from known trackers and scripts from known fingerprinting companies.

In Firefox 85, we’re introducing a fundamental change in the browser’s network architecture to make all of our users safer: we now partition network connections and caches by the website being visited. Trackers can abuse caches to create supercookies and can use connection identifiers to track users. But by isolating caches and network connections to the website they were created on, we make them useless for cross-site tracking.

What are supercookies?

In short, supercookies can be used in place of ordinary cookies to store user identifiers, but they are much more difficult to delete and block. This makes it nearly impossible for users to protect their privacy as they browse the web. Over the years, trackers have been found storing user identifiers as supercookies in increasingly obscure parts of the browser, including in Flash storage, ETags, and HSTS flags.

The changes we’re making in Firefox 85 greatly reduce the effectiveness of cache-based supercookies by eliminating a tracker’s ability to use them across websites.

How does partitioning network state prevent cross-site tracking?

Like all web browsers, Firefox shares some internal resources between websites to reduce overhead. Firefox’s image cache is a good example: if the same image is embedded on multiple websites, Firefox will load the image from the network during a visit to the first website and on subsequent websites would traditionally load the image from the browser’s local image cache (rather than reloading from the network). Similarly, Firefox would reuse a single network connection when loading resources from the same party embedded on multiple websites. These techniques are intended to save a user bandwidth and time.

Unfortunately, some trackers have found ways to abuse these shared resources to follow users around the web. In the case of Firefox’s image cache, a tracker can create a supercookie by “encoding” an identifier for the user in a cached image on one website, and then “retrieving” that identifier on a different website by embedding the same image. To prevent this possibility, Firefox 85 uses a different image cache for every website a user visits. That means we still load cached images when a user revisits the same site, but we don’t share those caches across sites.
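Conceptually (and this is a toy simplification, not Firefox’s actual cache code), partitioning amounts to adding the top-level site to the cache key, so identical resource URLs loaded from different sites hit different cache entries:

```javascript
// Toy partitioned cache: entries are keyed by (top-level site, resource URL)
// instead of resource URL alone, so nothing cached on one site is visible
// to another.
const cache = new Map();

const cacheKey = (topLevelSite, resourceUrl) => `${topLevelSite}|${resourceUrl}`;

function store(topLevelSite, resourceUrl, bytes) {
  cache.set(cacheKey(topLevelSite, resourceUrl), bytes);
}

function lookup(topLevelSite, resourceUrl) {
  return cache.get(cacheKey(topLevelSite, resourceUrl));
}

// A tracking pixel cached while visiting news.example...
store("news.example", "https://tracker.example/pixel.png", "user-id-123");

// ...is still a cache hit on a revisit to news.example:
console.log(lookup("news.example", "https://tracker.example/pixel.png")); // "user-id-123"

// ...but a cache miss when shop.example embeds the very same URL:
console.log(lookup("shop.example", "https://tracker.example/pixel.png")); // undefined
```

Without the top-level-site half of the key, the second lookup would return the stored identifier on any site that embeds the image, which is exactly the cross-site channel a supercookie exploits.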

In fact, there are many different caches trackers can abuse to build supercookies. Firefox 85 partitions all of the following caches by the top-level site being visited: HTTP cache, image cache, favicon cache, HSTS cache, OCSP cache, style sheet cache, font cache, DNS cache, HTTP Authentication cache, Alt-Svc cache, and TLS certificate cache.

To further protect users from connection-based tracking, Firefox 85 also partitions pooled connections, prefetch connections, preconnect connections, speculative connections, and TLS session identifiers.

This partitioning applies to all third-party resources embedded on a website, regardless of whether Firefox considers that resource to have loaded from a tracking domain. Our metrics show a very modest impact on page load time: between a 0.09% and 0.75% increase at the 80th percentile and below, and a maximum increase of 1.32% at the 85th percentile. These impacts are similar to those reported by the Chrome team for similar cache protections they are planning to roll out.

Systematic network partitioning makes it harder for trackers to circumvent Firefox’s anti-tracking features, but we still have more work to do to continue to strengthen our protections. Stay tuned for more privacy protections in the coming months!

Thank you

Re-architecting how Firefox handles network connections and caches was no small task, and would not have been possible without the tireless work of our engineering team: Andrea Marchesini, Tim Huang, Gary Chen, Johann Hofmann, Tanvi Vyas, Anne van Kesteren, Ethan Tseng, Prangya Basu, Wennie Leung, Ehsan Akhgari, and Dimi Lee.

We wish to express our gratitude to the many Mozillians who contributed to and supported this work, including: Selena Deckelmann, Mikal Lewis, Tom Ritter, Eric Rescorla, Olli Pettay, Kim Moir, Gregory Mierzwinski, Doug Thayer, and Vicky Chin.

We also want to acknowledge past and ongoing efforts carried out by colleagues in the Brave, Chrome, Safari and Tor Browser teams to combat supercookies in their own browsers.

The post Firefox 85 Cracks Down on Supercookies appeared first on Mozilla Security Blog.

hacks.mozilla.orgWelcoming Open Web Docs to the MDN family

Collaborating with the community has always been at the heart of MDN Web Docs content work — individual community members constantly make small (and not so small) fixes to help incrementally improve the content, and our partner orgs regularly come on board to help with strategy and documenting web platform features that they have an interest in.

At the end of 2020, we launched our new Yari platform, which exposes our content in a GitHub repo and therefore opens up many more valuable contribution opportunities than before.

And today, we wanted to spread the word about another fantastic event for enabling more collaboration on MDN — the launch of the Open Web Docs organization.

Open Web Docs

Open Web Docs (OWD) is an open collective, created in collaboration between several key MDN partner organizations to ensure the long-term health of open web platform documentation on de facto standard resources like MDN Web Docs, independently of any single vendor or organization. It will do this by collecting funding to finance writing staff and helping manage the communities and processes that will deliver on present and future documentation needs.

You will hear more about OWD, MDN, and opportunities to collaborate on web standards documentation very soon — a future post will outline exactly how the MDN collaborative content process will work going forward.

Until then, we are proud to join our partners in welcoming OWD into the world.

The post Welcoming Open Web Docs to the MDN family appeared first on Mozilla Hacks - the Web developer blog.

Mozilla VR BlogA New Year, A New Hubs

An updated look & feel for Hubs, with an all-new user interface, is now live.

Just over two years ago, we introduced a preview release of Hubs. Our hope was to bring people together to create, socialize and collaborate around the world in a new and fun way. Since then, we’ve watched our community grow and use Hubs in ways we could only imagine. We’ve seen students use Hubs to celebrate their graduations last May, educational organizations use Hubs to help educators adapt to this new world we’re in, and heck, even NASA has used Hubs to feature new ways of working. In today’s world where we’re spending more time online, Hubs has been the go-to online place to have fun and try new experiences.

Today’s update brings new features including a chat sidebar, a new streamlined design for desktop and mobile devices, and a support forum to help our community get the most out of their Hubs experience.

The New Hubs Experience

We’re excited to announce a new update to Hubs that makes it easier than ever to connect with the people you care about remotely. The update includes:

Stay in the Conversation with new Chat sidebar

Chat scrollback has been a highly requested feature in Hubs. Before today’s update, messages sent in Hubs were ephemeral and disappeared after just a few seconds. Chat messages were also drawn over the room UI, which could obscure scene content. With the new chat sidebar, you’ll be able to see chat from the moment you join the lobby, and choose when to show or hide the panel. On desktop, if the chat panel is closed, you’ll still get quick text notifications, which have moved from the center of the screen to the bottom-left.

[Figure: A preview of the new chat feature in the lobby of a Hubs room]

Streamlined experience for desktop and mobile

In the past, our team took a design approach that kept the desktop, mobile, and virtual reality interfaces tightly coupled. This often meant that the application’s interactions were tailored primarily to virtual reality devices, but in practice, the vast majority of Hubs users visit rooms on non-VR devices. This update separates the desktop and mobile interfaces to align with industry-standard best practices, and makes the experience of being in a Hubs room more tailored to the device you’re using at any given time. We’ve improved menu navigation by making menus full-screen on mobile devices, and by consolidating options and preferences for personalizing your experience.

[Figure: The preferences menu on mobile (left) and desktop (right)]

For our Hubs Cloud customers, we’re planning to release the UI changes after March 25th, 2021. If you’re running Hubs Cloud out of the box on AWS, no manual updates will be required. If you have a custom fork, you will need to pull the changes into your client manually. We’ve created a guide to explain what changes need to be made. For help with updates to Hubs Cloud or custom clients, you can connect with us on GitHub. We will be releasing an update to Hubs Cloud next week that does not include the UI redesign.

Helping you get the most out of your Hubs experience through our community

We’re excited to share that you can now get answers to questions about Hubs using support.mozilla.org. In addition to articles to help with basic Hubs setup and troubleshooting, the ‘Ask a Question’ forum is now available. This is a new place for the community and team to help answer questions about Hubs. If you’re an active Hubs user, you can contribute by answering questions and flagging information for the team. If you’re new to Hubs and find yourself needing some help getting up and running, pop over and let us know how we can help.

In the coming months, we’ll have additional detail to share about accessibility and localization in the new Hubs client. In the meantime, we invite you to check out the new Hubs experience on either your mobile or desktop device and let us know what you think!

Thank you to the following community members for letting us include clips of their scenes and events in our promo video: Paradowski, XP Portal, Narratify, REM5 For Good, Innovación Educativa del Tecnológico de Monterrey, Jordan Elevons, and Brendan Bradley. For more information, see the video description on Vimeo.

SeaMonkeyComments back..

Hi All,

Just want to mention that the comments (from what I see, and requiring a browser window reload) seem to be back.  I had to change to a different theme.


SeaMonkeyComments on this blog…

.. are ignored.

Hahha..   ;/  Just kidding.

Seriously, the comments *are* there.  It’s just that for some reason, the comments aren’t showing.  I’m in the middle of figuring this out.  Please stand by.