
Ordering Browser Tabs Chronologically to Support Task Continuity

Product teams working on Firefox at Mozilla have long been interested in helping people get things done, whether that’s completing homework for school, shopping for a pair of shoes, or doing one’s taxes. We are deeply invested in supporting task continuity, the various steps people take in getting things done, in our browser products. And we know that in our browsers, tabs play an important role for people carrying out tasks.

Task continuity model

In 2015, Firefox researchers Gemma Petrie and Bill Selman developed a model to explain different types of task continuity strategies, which are represented in the middle of the diagram below.

Passive strategies include behaviors like leaving a tab open, such as a page for a product that one is considering purchasing. Active strategies include actions like emailing a link, for example a link to a recipe to cook at a later time, to oneself. Share strategies might involve using social media to share content, such as a news article, with other people.

Fast forward to this year and the team working on Firefox for iOS was interested in how we might support task continuity involving leaving tabs open. We continued to see in user research the important role that tabs play in task continuity, and we wanted to explore how to make tab retrieval and overall tab management easier.

In most web browsers on smartphones, tabs are ordered based on when a person first opened them, with the oldest tabs on one end of the interface (top, bottom, left, or right) and the newest tabs stacking to the opposite end of the interface. This ordering logic gets more complex if a new tab is prompted to open when someone taps on a link in an existing tab. A site may be designed to launch links in new tabs or a person may choose to open new tabs for links. The new tab, in that case, typically will open immediately next to the tab where the link was tapped, pushing all other later tabs toward the other end of the interface. All of this gets even trickier when managing more than just a few tabs.
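
To make the ordering logic above concrete, here is a minimal Python sketch. The `Tab` class and `insert_tab` function are hypothetical illustrations of the conventional open-order behavior, not Firefox code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tab:
    """A minimal stand-in for a browser tab (illustrative names only)."""
    title: str
    opened_from: Optional["Tab"] = None  # parent tab, if opened via a tapped link

def insert_tab(tabs: list, new_tab: Tab) -> None:
    """Insert a tab using the open-order logic described above:
    a tab opened from a link lands immediately next to its parent,
    pushing later tabs toward the other end; all other new tabs append."""
    if new_tab.opened_from is not None and new_tab.opened_from in tabs:
        tabs.insert(tabs.index(new_tab.opened_from) + 1, new_tab)
    else:
        tabs.append(new_tab)

# Example: tapping a link in the "news" tab opens "article" right after it,
# not at the end of the tab list.
tabs = []
insert_tab(tabs, Tab("recipes"))
news = Tab("news")
insert_tab(tabs, news)
insert_tab(tabs, Tab("article", opened_from=news))
# tabs are now ordered: recipes, news, article
```

Chronological ordering replaces this parent-adjacent insertion with a single rule, most recently used tabs first, which is what makes the resulting list predictable even with many tabs open.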

Based on a trove of user research, the iOS team raised the following question:

Would ordering tabs chronologically in Firefox for iOS make it easier for people to stay organized and feel more in control of their tabs?

The team conducted user research, led by Elisabeth Klann, in April of this year to understand current tab behaviors and to evaluate a basic prototype of the concept of chronological tabs.

<figcaption>A screenshot of the prototype used for the concept evaluation in April 2020, showing a fictional set of open tabs in Firefox for iOS</figcaption>

We recruited 10 adult participants in the US, half of whom were already using Firefox for iOS and half of whom used either Safari or Chrome as their main browser on their iPhone.

What we learned from the first round of user research

From asking participants about their existing behaviors with browser tabs on their phones, the Firefox for iOS team was pleasantly surprised to hear participants describe the order of their tabs in terms of time. Participants fell into three categories in terms of their tab habits:

  • “I keep it clean”: the participant generally tried to avoid clutter and closed individual tabs often
  • “I keep forgetting”: the participant was not conscious of accumulating tabs and typically closed tabs in batches when the experience became cumbersome
  • “I keep tabs open for reference…short term”: the participant was more strategic, leaving tabs open for a few sessions until a task was complete

All participants were able to discern the chronological ordering of tabs in the prototype and reported that the ordering was helpful, particularly the chronological ordering of the most recent tabs. It was important to participants that they be able to delete single tabs and batches of tabs, and we identified an opportunity for making batch deletion more discoverable in the UI. Following this round of user research, the team made numerous changes to the tab design, led by Nicole Weber, which were incorporated into a beta build of Firefox for iOS.

Tab date stamps before and after the concept evaluation

<figcaption>One change made after the concept evaluation was to attach dates to the “Today” and “Yesterday” categories of open tabs and to change the “Older” label to the more specific “Last Week.”</figcaption>

Delete tab functionality before and after the concept evaluation

<figcaption>Another change made after the concept evaluation was to make the functionality for deleting a tab easier to access.</figcaption>
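
The “Today”/“Yesterday”/“Last Week” grouping above can be sketched as a simple bucketing function. The label formats and cutoffs here are assumptions for illustration, not Firefox’s implementation:

```python
from datetime import date, timedelta

def tab_section(last_used: date, today: date) -> str:
    """Return the date-stamp section label for a tab.

    "Today" and "Yesterday" carry explicit dates in the UI; anything
    earlier falls under "Last Week" (cutoffs here are assumptions).
    """
    if last_used == today:
        return f"Today, {today.strftime('%B %d')}"
    yesterday = today - timedelta(days=1)
    if last_used == yesterday:
        return f"Yesterday, {yesterday.strftime('%B %d')}"
    return "Last Week"

# Example grouping against a fixed "today" of April 15, 2020:
today = date(2020, 4, 15)
print(tab_section(date(2020, 4, 15), today))  # Today, April 15
print(tab_section(date(2020, 4, 14), today))  # Yesterday, April 14
print(tab_section(date(2020, 4, 9), today))   # Last Week
```

Attaching the explicit date to the relative label is what lets people connect a tab back to the session in which they opened it.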

Continuing to learn with a beta build in a diary study

With the beta build, an early version of Firefox for iOS with chronological tabs that was available only to research participants, the Mozilla team wanted to do another round of user research to understand the perceptions and utility of chronological tabs, this time in a real-world context, with participants using their own devices rather than the pre-designed tabs of a prototype. We recruited 10 new participants: adults in Canada and the US, again a mix of people already using Firefox on their iPhones and people using other browsers.

Participants used the beta build of Firefox for iOS with chronological tabs as their primary iPhone browser for three days and answered a brief survey at the end of each day about their experience moving between web pages and their experience of Firefox overall. Survey questions included:

  • Did you use Firefox Beta to visit more than one web page today?
  • Thinking about when you moved between tabs you were using today, was there anything particularly easy, difficult, and/or confusing about that?
  • Did you revisit any tabs from yesterday or before yesterday? If so, can you please describe what you were doing and what it was like to revisit that older tab?

After three days, we interviewed participants to discuss their survey responses and overall experience with chronological tabs.

From the second round of user research, we learned that while the chronological order of tabs did not seem to break any workflows, it was the overall design of the tabs themselves — the thumbnail image, page title and/or URL, and date stamp in a list-like format — that made tabs more helpful than existing designs such as the undated, untitled, deck-like tabs in Safari on iPhone. One participant explained that the formatting of the tabs reminded her of tasks she wanted to complete. She said:

“So is it was this layout that kind of nudged me because I was going back to a page. And I was like, oh yeah, I went to that one, too. That’s right. And then I went back and did that task.”

Another participant said that, on returning to the view of all of his open tabs with the small images, he remembered the shoes he had been shopping for the day before and his desire to return to that shopping. He returned to the tab with the shoes during our interview.

<figcaption>Participant C1’s open tabs in Firefox Beta, including a tab with a thumbnail of a shoe</figcaption>

There were instances, however, when the proposed design broke. A bug rendered some tabs unintelligible because thumbnail images did not populate. Also, several participants used enlarged text on their devices, a setting we had not anticipated, which truncated tab titles and URLs. Participants whose thumbnails were not populating and whose tab titles were truncated had a particularly difficult time discerning tabs. We also identified an opportunity, which also exists in the desktop browser, to make tabs more discernible when a person has multiple tabs that look similar, particularly at thumbnail scale, like several Amazon pages or pages from different retailers all for the same product.

<figcaption>Participant C3’s open tabs in Firefox Beta with blank thumbnail images and truncated tab titles and URLs</figcaption>

While we are actively working to fix the thumbnail-image bug, it was nevertheless helpful to learn about situations where the design fell short. The key takeaway: the different parts of the design (the date stamps, the thumbnail image, the page title, and the URL) work in concert to help people remember pages they have visited and the context for those visits.

Next: Setting out to understand if iOS findings carry over to other platforms

The team, led by Ashley Thomas, plans to continue work on chronological tabs, such as investigating how we can make tab metadata populate more reliably and planning user research to evaluate the proposed design on Android, tablets, and desktop. Some of the questions the team is excited to pursue in coming weeks include:

  • Are there ways to further improve the accessibility of the proposed design?
  • Will complex workflows common to larger form factors help us uncover new insights about chronological tabs?
  • Is tab functionality most helpful when it is the same across platforms or might platform-specific designs better support task continuity?
  • Are there other ways of sorting tabs that would support people’s workflows?

Thank you to the Firefox for iOS team and the many Mozillians, including people outside of the iOS team, who reviewed and provided valuable feedback on an early draft of this post.

Also published on the Firefox UX blog

Ordering browser tabs chronologically to support task continuity was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Simplifying the Complex: Crafting Content for Meaningful Privacy Experiences

How content strategy simplified the language and improved content design around a core Firefox feature.

Image of a shield on purple background with the words “Enhanced Tracking Protection.”

<figcaption>Enhanced Tracking Protection is a feature of the Firefox browser that automatically protects your privacy behind the scenes.</figcaption>

Firefox protects your privacy behind the scenes while you browse. These protections work invisibly, blocking advertising trackers, data collectors, and other annoyances that lurk quietly on websites. This core feature of the Firefox browser is called Enhanced Tracking Protection (ETP). When the user experience team redesigned the front-end interface, content strategy led efforts to simplify the language and improve content design.

Aligning around new nomenclature

The feature had been previously named Content Blocking. Content blockers are extensions that block trackers placed by ads, analytics companies, and social media. It’s a standardized term among tech-savvy, privacy-conscious users. However, it wasn’t well understood in user testing. Some participants perceived the protections to be too broad, assuming it blocked adult content (it doesn’t). Others thought the feature only blocked pop-ups.

This was an opportunity to improve the clarity and comprehension of the feature name itself. The content strategy team renamed the feature to Enhanced Tracking Protection.

A chart outlining the differences between Content Blocking and Enhanced Tracking Protection.

<figcaption>The feature was previously called Content Blocking, which was a reflection of its technical functionality. To bring focus to the feature’s end benefits to users, we renamed it to Enhanced Tracking Protection.</figcaption>

Renaming a core browser feature is 5 percent coming up with a name, 95 percent getting everyone on the same page. Content strategy led the communication and coordination of the name change. This included alignment efforts with marketing, localization, support, engineering, design, and legal.

Revisiting content hierarchy and information architecture

To see what Firefox blocks on a particular website, select the shield to the left of the address bar.

Image of shield in the Firefox address bar highlighted.

<figcaption>To access the Enhanced Tracking Protection panel in Firefox, select the shield to the left of the address bar.</figcaption>

Previously, this opened a panel jam-packed with an overwhelming amount of information — only a portion of which pertained to the actual feature. User testing participants struggled to parse everything on the panel.

The new Enhanced Tracking Protection panel needed to do a few things well:

  • Communicate if the feature was on and working
  • Make it easy to turn the feature off and back on
  • Provide just-enough detail about which trackers Firefox blocks
  • Offer a path to adjust your settings and visit your Protections Dashboard

Image of the previous Content Blocking panel alongside the new Enhanced Tracking Protection Panel.

<figcaption>The panel previously included information about a site’s connection, the Content Blocking feature, and site-level permissions. The redesigned panel focuses only on Enhanced Tracking Protection.</figcaption>

We made Enhanced Tracking Protection the panel’s only focus. Information pertaining to a site’s security and permissions was moved to a different part of the UI. We made design and content changes to reinforce the feature’s on/off state, and we adjusted the content hierarchy to improve scannability.

Solving problems without words

A chart outlining variations of button copy and the recommendation to change the design element entirely.

<figcaption>The function of the button to turn the feature off wasn’t clear to users. Clearer copy alone couldn’t solve the problem.</figcaption>

Words alone can’t solve certain problems. The enable/disable button is a perfect example. Users can go to their settings and manually opt in to a stricter level of Enhanced Tracking Protection. Websites occasionally don’t work as expected in strict ETP. There’s an easy fix: Turn it off right from the panel.

User testing participants couldn’t figure out how to do it. Though there was a button on the panel, its function was far from obvious. A slight improvement was made by updating the copy from ‘enable/disable’ to ‘turn on/turn off.’ Ultimately, the best solution was not better button copy. It was removing the button entirely and replacing it with a different design element.

A chart outlining the differences between a button and a switch design element.

<figcaption>A switch better communicated the on/off state of the feature.</figcaption>

We also moved this element to the top of the panel for easier access.

Image of the previous button, which read “Disable Blocking for this Site” beside the new switch element.

<figcaption>User testing participants struggled to understand how to turn the feature off. The solution was not better button copy, but replacing the button with an on/off switch and moving it higher up on the panel for better visibility.</figcaption>

Lastly, we added a sub-panel to inform users how turning off ETP might fix their problem. We used one of the best-kept secrets of the content strategy trade: a bulleted list to make this sub-panel scannable.

Image of Enhanced Tracking Protection panel and its sub-panel, which explains reasons why a site might not be working.

<figcaption>A sub-panel outlines reasons why you might want to turn Enhanced Tracking Protection off.</figcaption>

Improving the clarity of language on Protections Dashboard

An image of the previous Protections Dashboard beside the revised content and design.

<figcaption>Adding clarifying language to Protections Dashboard provides an overview of the page, offers users a path to adjust their settings, and reinforces that the Enhanced Tracking Protection feature is always on.</figcaption>

Firefox also launched a Protections Dashboard to give users more visibility into their privacy and security protections. After user research was conducted, we made further changes to the content design and copy. All credit goes to my fellow content strategist Meridel Walkington for making these improvements.

A chart outlining issues identified in user research and changes that were made.

<figcaption>Content strategy recommended improvements to the content design and language of the Protections Dashboard to improve comprehension.</figcaption>

Explaining jargon in clear and simple terms

Jargon can’t always be avoided. Terms like ‘cryptominers,’ ‘fingerprinters,’ and other trackers Firefox blocks are technical by nature. Most users aren’t familiar with these terms, so the Protections Dashboard offers a short definition to break down each in clear, simple language. We also offer a path for users to explore these terms in more detail. The goal was to provide just-enough information without overwhelming users when they landed on their dashboard.

Descriptions for types of trackers that Firefox blocks.

<figcaption>Descriptions of each type of tracker help explain terms that are inherently technical.</figcaption>

“Mozilla doesn’t just throw stats at users. The dashboard has a minimalist design and uses color-coded bar graphs to provide a simple overview of the different types of trackers blocked. It also features explainers clearly describing what the different types of trackers do.” — Fast Company: Firefox at 15: its rise, fall, and privacy-first renaissance

Wrapping up

Our goal in creating meaningful privacy experiences is to educate and empower users without overwhelming and paralyzing them. It’s an often delicate dance that requires deep partnership between product management, engineering, design, research, and content strategy. Enhanced Tracking Protection is just one example of this type of collaboration. For any product to be successful, it’s important that our cross-functional teams align early on the user problems so we can design the best experience to meet people where they are.


Thank you to Michelle Heubusch for your ongoing support in this work and to Meridel Walkington for making it even better. All user research was conducted by Alice Rhee. Design by Bryan Bell and Eric Pang.

Simplifying the Complex: Crafting Content for Meaningful Privacy Experiences was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.


Designing a Content-First Experience on Firefox Monitor

Six methods a UX content strategist at Firefox used during the product’s redesign.

Image of Firefox Monitor logo with text.

<figcaption>Firefox Monitor is a data breach notification service created by Mozilla.</figcaption>

As a UX content strategist at Mozilla, I support not only our browsers, but also standalone products like Firefox Monitor. It’s a service that notifies you when you’ve been in a data breach and tells you what steps to take to protect your personal info.

Designing content for a product that delivers bad news presents unique challenges. Data breaches are personal. They can be stressful. Sometimes the information exposed can be sensitive. How we organize, structure, and surface content in Firefox Monitor matters.

When we had the opportunity to redesign the end-to-end experience, content strategy played a key role. I identified ways our users were struggling with the current site, then worked in collaboration with interaction designer Ryan Gaddis to re-architect and design a new experience. These are a few of the tools and methods I leveraged.

About Firefox Monitor

<figcaption>Enter your email address to see if you’ve been part of an online data breach, then sign up for ongoing alerts about new breaches.</figcaption>

The goal of Firefox Monitor is to help people take back control of their personal data so they can protect their digital identity and feel safer online. When we first introduced Monitor to the world, our goal was 10,000 subscribers. Within six weeks, over 200,000 people had signed up.

As subscriptions continued climbing, the constraints of the MVP experience challenged our users and our ability to serve them. The one-page website offered the same generic recommendations to all visitors, regardless of the breach. It was complicated to add new functionality. We sought to create a more flexible platform that would better serve our users and our own business goals.

A year after the redesign launched, Monitor has 8 million subscribers and counting. The website is localized into 34 languages, making it free and accessible to as many people globally as possible.

1. Listen intently to user problems to identify areas for improvement

A side-by-side comparison of before and after of breach alert emails users receive.

<figcaption>Users are notified via email when their information appears in a new data breach. Culling through the replies to these emails helped us understand user pain points.</figcaption>

If you’re tasked with improving an existing experience, listening to current users’ pain points is a great place to start. You can do this even if you don’t have direct access to those users. I culled through hundreds of user replies to automated emails.

When a data breach becomes public, Monitor alerts affected users via email. People replied to those email alerts with questions, concerns, and comments.

I reviewed these emails and grouped them into themes to identify trends.

  • Users weren’t sure what to do to resolve a breach.
  • They didn’t recognize the name of the breached site.
  • They were confused by the length of time it took to be notified.
  • They found language too jargony or confusing.
  • They weren’t sure who was responsible for the breach.

These were all helpful inputs to drive the information architecture and content decisions for improving the Firefox Monitor experience. We kept these pain points top of mind, working to ensure that the answers to these questions were proactively provided.

2. Run a usability test to learn what’s tripping people up

<figcaption>In the usability study, participants misunderstood what Monitor did. They thought the service might protect against a variety of threats, such as against viruses, phishing, or unsafe websites.</figcaption>

When you’re immersed in the redesign of an experience, it’s impossible to know how it will be perceived by someone encountering it for the first time. You simply know too much about how all the plumbing works. Usability testing provides valuable insight to uncover those blind spots.

Yixin Zou, a PhD student at the University of Michigan School of Information, led a usability study on the current experience. I observed all these sessions, taking notes on what could inform content and design improvements.

What mental models about data breaches and privacy did participants have? What were they confused about? In their own words, how did they describe what was happening and how they felt?

For example, we learned that the homepage copy was a bit too broad. Some users wondered if Monitor could protect them from phishing attempts, viruses, or visiting unsafe sites. In response, I tightened the focus on data breaches so people understood exactly what Monitor could do for them.

<figcaption>The redesign reduced the amount of content and clarified the purpose of Firefox Monitor.</figcaption>

3. Re-architect content hierarchy to reduce cognitive load

Firefox Monitor delivers stressful news. You may learn that your personal data has been exposed in many breaches. The compromised account might be one you signed up for a decade ago. Or, you may have signed up for an account that has since been sold or acquired by another company you don’t recognize.

As user experience practitioners, it’s our job to consider these stress cases. We can’t eliminate the anxiety of the situation, but we can make the information about it as easy to parse as possible.

Usability testing helped us realize how overwhelming it could be to see a full list of past breaches at once. People didn’t know where to start. There was simply too much information on one screen.

A side-by-side comparison of breach results before and after the redesign.<figcaption>The new experience reduces the cognitive load for users through improved content hierarchy. We also removed less critical content on first view so it would be easier for users to identify which breaches were most concerning to them.</figcaption>

Lead with the most relevant information

Getting to the point is the kindest thing we can do. Since many users check multiple email addresses for breaches, we now lead with the email and number of reported breaches.

Remove total number of compromised accounts

Some breaches affect millions of accounts. Those large numbers added complexity without being relevant to the individual. Users are concerned about the impact on their own data, not what happened to millions of other people.

House more detailed information one layer deeper

Some breaches expose dozens of different types of data. When all compromised data for all breaches was displayed on a single page, it was too much to take in. Creating more detailed breach pages allowed us to simplify the report view, making it more scannable and digestible.

Screenshot of a breach detail page, localized into French.<figcaption>To reduce cognitive load and help users parse information about each breach, we introduced a dedicated page for every single breach.</figcaption>

4. Collaborate with design and engineering to increase impact

Mobile wireframe of the Firefox Monitor homepage.<figcaption>We avoided lorem ipsum and built out mobile-first wireframes with real copy.</figcaption>

Improving the user experience was a collaborative effort. Interaction designer Ryan Gaddis and I worked together to define the messaging goals and user needs. Though each of us brought different skills to the table, we welcomed each other’s feedback to make the end product better. We were able to avoid lorem ipsum throughout the entire design process.

Together, we designed the new mobile-first experience, which encouraged us to keep our design and content focused. Through multiple rounds of wireframes, I iterated on the copy.

Co-designing in this way also allowed us to execute more quickly. By the time the website was ready for visual application and front-end development, most of the hard content problems had already been solved.

As the wireframes came together, we brought engineering into the conversation. This helped us understand what was feasible from a technical perspective. We were able to make adjustments based on engineering’s early input. The development process moved more quickly because engineering had early visibility into the design and content thinking.

5. Validate the appropriate tone to engage users

Table displaying neutral, positive, and negative variants of tone.<figcaption>Examples of neutral, positive, and negative variants of tone we tested.</figcaption>

At Firefox, we’re committed to making the web open and accessible to all. We want people to engage with the web, not be scared off by it.

In our privacy and security products, we must call out risks and inform users of threats. But we are thoughtful with our tone. Scare tactics are cheap shots. We don’t take them. Instead, we provide measured facts. We want to earn users’ trust while educating and empowering them to better protect themselves online.

To validate that our tone was appropriate, I worked again with PhD student Yixin Zou to design a tone variants study. I wrote neutral, positive, and negative variants of protective actions Firefox Monitor recommends.


Neutral
→ Shows what the action is about and the high-level steps required to take it.
→ Does not emphasize benefits or risks.

Positive
→ Empowering, encouraging, friendly.
→ Emphasizes the benefits of taking action. Tries to make these actions feel simple and easy.

Negative
→ Forbidding, warning.
→ Emphasizes the risks of not taking action. Tries to make these actions feel urgent and necessary.

We discovered that the negative tone did little to improve comprehension or willingness to take action. This validated our current course of action: to be direct, positive, and encouraging.

Here’s an example of what that content looks like in practice on the Monitor site.

Screenshot of passwords recommendations on the Firefox Monitor website.<figcaption>These are the steps we recommend users take to keep their personal info safe and to protect their digital identity.</figcaption>

6. Bring legal and localization into the process

Examples of English, French, and German alerts that appear on sites where breaches have occurred.<figcaption>If you visit a site that’s been breached, Firefox will direct you to Monitor to check to see if your information was compromised.</figcaption>

Legal and localization are key partners in the content strategy process. They need to have visibility into the work we do and have the opportunity to provide feedback so we can course-correct if needed.

The privacy and security space presents legal risk. Even though these data breaches are publicly known, companies aren’t always happy to get extra attention from Monitor. We also need to be careful that Monitor doesn’t over-promise its capabilities. Michael Feldman, our in-house product counsel, helps us ensure our content considers legal implications without reading like confusing legalese jargon.

Firefox Monitor is also localized into 34 languages. Our global community of localizers helps us extend its reach. We strive to avoid US-centric jargon, phrasing, and figures of speech. We work with our localization project manager, Francesco Lodolo, to identify problematic language and sentence construction before content is sent to localizers.

Wrapping up

Content strategy never works alone. We collaborate cross-functionally to deliver experiences that are useful and appropriate. The words you see on the page are one tangible output, but most of the work happens behind the scenes to drive the UX strategy.

The product continues to evolve as we look for more opportunities to help our users protect their digital selves. To monitor known data breaches for your personal info, head to Firefox Monitor to sign up.


Thank you to Tony Cinotto, Luke Crouch, Jennifer Davidson, Peter DeHaan, Michael Feldman, Ryan Gaddis, Michelle Heubusch, Wennie Leung, Francesco Lodolo, Bob Micheletto, Lesley Norton, Cherry Park, Sandy Sage, Joni Savage, Nihanth Subramanya, Philip Walmsley, Yixin Zou, and the entire Mozilla localization community for your contributions to Firefox Monitor. And thanks to Meridel Walkington for your editing help.

This post was also published on the Firefox UX blog

Designing a content-first experience on Firefox Monitor was originally published in Firefox User Experience on Medium.

Firefox UXUX Book Club Recap: Writing is Designing, in Conversation with the Authors

Michael Metts and Andy Welfle joined the Firefox UX team to discuss their book Writing is Designing: Words and the User Experience.

Firefox UXRemote UX collaboration across different time zones (yes, it can be done!)

Even in the “before” times, the Firefox UX team was distributed across many different time zones. Some of us already worked remotely from…

Mozilla Add-ons BlogOpenness and security: a balancing act for the add-ons ecosystem

Add-ons offer a powerful way for people to customize their web experience in Firefox. From content blocking and media enhancement to productivity tooling, add-ons allow third-party developers to create, remix, and share new products and experiences for the web. The same extensibility that allows developers to create utility and delight in Firefox, however, can also be used by malicious actors to harvest and sell user data.

With an ecosystem of 20,000+ extensions hosted on addons.mozilla.org (AMO), hundreds of thousands of self-distributed extensions, and millions of users around the world, finding the right balance between openness and security is a key challenge for our small team. Developers need to feel supported on our platform, and users need to feel safe installing add-ons, so we continually make adjustments to balance these interests.

Adapting our review model

Prior to the adoption of a new extensions API in 2017, buggy or malicious add-ons could take nearly full control of Firefox, and in some cases, a user’s device. Because these extensions could do so much potential damage, all add-ons hosted on addons.mozilla.org (AMO) had to pass human review before they could be released to users. This led to long delays: developers sometimes waited weeks, if not months, for their submissions to be reviewed, only to have them rejected in some cases.

The transition to the new extensions API greatly limited the potential for add-ons to cause damage. Reducing the attack surface enabled us to move to a post-submission review model, where extensions undergo automated checks and are prioritized for human review based on certain risk factors before becoming available, usually within a few hours. All add-ons are subject to human review at any time after publication.

Human reviews are still necessary

Since the transition to a post-submission review model, we have continued to make adjustments to our products, systems, and processes to maintain a balance between user safety and developer support. While we’ve made gains in new mechanisms to combat malicious activity, human review remains the most reliable method for verifying the safety of an add-on because of the complex and contextual nature of add-on code written in JavaScript.

However, human code review is a resource-intensive activity. As we weighed our options for how to keep add-ons safe for users in 2019, it became clear that we only possessed the resources to guarantee human reviews for a small number of extensions. Because we already had an editorial program in place for identifying and featuring add-ons, it made sense to build a trusted add-on program off past curatorial efforts. This became the Recommended Extensions program.

Currently, we human-review every version of each of our 100+ Recommended Extensions before publication. Beyond that, our limited review resources are focused on monitoring and stamping out malicious activity that may be lurking in our ecosystem. For a sense of scale, AMO receives 20,000+ new version submissions per month.

Since we can only guarantee human review for all versions of Recommended Extensions, AMO applies a warning message to the listing pages of all non-Recommended extensions. This message lets users know that because a non-Recommended extension may not have been reviewed by a human, we can’t guarantee it’s safe.

Developer feedback and future plans

We’ve heard feedback from developers whose add-ons are not in the Recommended program that they are concerned the warning message can discourage users from installing their add-ons. Some have asked whether it’s possible to request human reviews for their add-ons so they can be badged as safe to install. We are exploring ways to better support these developers and provide more discovery opportunities for them.

During the remainder of 2020, we will experiment with new programs to address these issues and help more extensions become successful. Please stay tuned to this blog for updates on the upcoming experiments and opportunities for participation, and head to our community forum with any questions or feedback.

The post Openness and security: a balancing act for the add-ons ecosystem appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgFirefox 79: The safe return of shared memory, new tooling, and platform updates

A new stable version of Firefox brings July to a close with the return of shared memory! Firefox 79 also offers a new Promise method, more secure target=_blank links, logical assignment operators, and other updates of interest to web developers.

This blog post provides merely a set of highlights; for all the details, check out the following:

New in Developer Tools

First, we look at the new additions to the Firefox DevTools in version 79.

JavaScript logging and debugging capabilities

Async stack traces everywhere

Modern JavaScript depends on promises, async/await, events, and timeouts to orchestrate complex scheduling between your code, libraries, and the browser. And yet, it can be challenging to debug async code to understand control and data flow. Operations are broken up over time. Async stack traces solve this by combining the live synchronous part of the stack with the part that is captured and asynchronous.

Now you can enjoy detailed async execution chains in the Firefox JavaScript Debugger’s call stack, Console errors, and Network initiators.

Async stacks in Console & Debugger

To make this work, the JavaScript engine captures the stack when a promise is allocated or when some async operation begins. Then the captured stack is appended to any new stacks captured.

Better debugging for erroneous network responses

Failing server requests can lead to a cascade of errors. Previously, you had to switch between the Console and Network panels to debug, or enable the XHR/Requests filters in the Console. With Firefox 79, the Console shows network requests with 4xx/5xx error status codes by default. In addition, the request/response details can be expanded to inspect the full details. These are also available in the Network Inspector.

Console showing details for erroneous responses

Tip: To further debug, retry, or verify server-side changes, use the “Resend Request” context-menu option. It’s available in both the Console and Network panels. You can send a new request with the same parameters and headers. The additional “Edit and Resend” option is only available in the Network panel. It opens an editor to tweak the request before sending it.

Debugger highlights errors in code

Many debugging sessions start by jumping from a logged JavaScript error to the Debugger. To make this flow easier, errors are now highlighted in their corresponding source location in the Debugger, with relevant details shown on hover, in the context of the code and the paused variable state.

Error highlighted in the Debugger

We’d like to say thanks to core contributor Stepan Stava, who is already building this feature out, further blurring the line between logging and debugging.

Restart frame in Call Stack

When you restart frames from the Debugger, the call stack moves the execution pointer to the top of the function. With the caveat that the state of variables is not reset, this allows time-traveling within the current call stack.

Restarting frames in Debugger

“Restart Frame” is now available as a context-menu option in the Debugger’s call stack. Again, we have Stepan Stava to thank for this addition, which Debugger users will recognize from Chrome and VS Code.

Faster JavaScript debugging

Performance improvements in this release speed up debugging, particularly for projects with large files. We also fixed a bottleneck that affected eval-heavy code patterns, which will now just work.

Inspector updates

Better source map references for SCSS and CSS-in-JS

We’ve improved source map handling across all panels, so that opening SCSS and CSS-in-JS sources from the Inspector now works more reliably. You can quickly jump from the rules definitions in the Inspector side panel to the original file in the Style Editor.

New Inspect accessibility properties context menu

The Accessibility Inspector is now always available in the browser context menu. This lets you open any element in the Accessibility panel directly to inspect its ARIA properties and run audits.

More tooling updates

  • The “Disable Cache” option in the Network panel now also deactivates CORS preflight request caching. This makes it easier to iterate on your web security settings.
  • Contributor KC aligned the styling for blocked requests shown in Console with their appearance in the Network panel.
  • Richard Sherman extended the reach of tooltips, which now describe the type and value for previewed object values across Console and Debugger.
  • To consolidate sidebar tabs, Farooq AR moved Network’s WebSocket “Messages” tab into the “Response” tab.
  • Debugger’s references to “blackbox” were renamed “ignore”, to align wording with other tools and make it more inclusive. Thanks to Richard Sherman for this update too!

Web platform updates

Implicit rel=noopener with target=_blank links

To prevent the DOM property window.opener from being abused by untrusted third-party sites, Firefox 79 now automatically sets rel=noopener for all links that contain target=_blank. Previously, you had to set rel=noopener manually to make window.opener = null for every link that uses target=_blank. In case you need window.opener, explicitly enable it using rel=opener.

SharedArrayBuffer returns

At the start of 2018, Shared Memory and high-resolution timers were effectively disabled in light of Spectre. In 2020, a new, more secure approach has been standardized to re-enable shared memory. As a baseline requirement, your document needs to be in a secure context. For top-level documents, you must set two headers to cross-origin isolate your document:
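For reference, the two headers in question are the standard cross-origin isolation headers (the values below are the ones defined by the web platform, not taken from this post):

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```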

To check if cross-origin isolation has been successful, you can test against the crossOriginIsolated property available to window and worker contexts:

if (crossOriginIsolated) {
  // Use postMessage and SharedArrayBuffer
} else {
  // Do something else
}
Read more in the post Safely reviving shared memory.

Promise.any support

The new Promise.any() method takes an iterable of Promise objects and, as soon as one of those promises fulfills, returns a single promise that resolves to that promise’s value. Essentially, this method is the opposite of Promise.all(). Promise.any() also differs from Promise.race(): it waits for the first promise to fulfill, whereas Promise.race() settles as soon as any promise settles, whether it fulfills or rejects.

If all of the given promises reject, the returned promise rejects with an AggregateError, a new error class that groups together the individual rejection reasons.
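A minimal sketch of the all-rejected case (the error messages are illustrative):

```javascript
const rejections = [
  Promise.reject(new Error('first failure')),
  Promise.reject(new Error('second failure')),
];

// When every input rejects, Promise.any rejects with an AggregateError
// whose `errors` array collects the individual rejection reasons.
Promise.any(rejections).catch((err) => {
  console.log(err instanceof AggregateError); // true
  console.log(err.errors.length);             // 2
});
```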

const promise1 = Promise.reject(0);
const promise2 = new Promise((resolve) => setTimeout(resolve, 100, 'quick'));
const promise3 = new Promise((resolve) => setTimeout(resolve, 500, 'slow'));
const promises = [promise1, promise2, promise3];

Promise.any(promises).then((value) => console.log(value));
// quick wins

Logical assignment operators

JavaScript already supports a variety of assignment operators. The Logical Assignment Operators proposal specifies three new ones, now enabled by default in Firefox:

  • Logical OR assignment (||=)
  • Logical AND assignment (&&=)
  • Nullish coalescing assignment (??=)

These new logical assignment operators have the same short-circuit behavior that the existing logical operations implement already. Assignment only happens if the logical operation would evaluate the right-hand side.

For example, if the “lyrics” element is empty, set the innerHTML to a default value:

document.getElementById('lyrics').innerHTML ||= '<i>No lyrics.</i>'

Here the short-circuit is especially beneficial, since the element will not be updated unnecessarily. Moreover, it won’t cause unwanted side-effects such as additional parsing or rendering work, or loss of focus.
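The proposal’s other operators, &&= and ??=, follow the same short-circuit pattern. A quick sketch of where ??= differs from ||= (the semantics shown are standard JavaScript, not specific to Firefox):

```javascript
const config = { retries: 0, timeout: null };

// ||= would overwrite the falsy-but-valid 0; ??= only assigns when the
// current value is null or undefined, so `retries` is preserved.
config.retries ??= 3;
config.timeout ??= 5000;

console.log(config.retries); // 0
console.log(config.timeout); // 5000

// &&= assigns only when the current value is truthy.
let greeting = 'hello';
greeting &&= greeting.toUpperCase();
console.log(greeting); // HELLO
```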

Weakly held references

In JavaScript, references between objects are generally strong: if you have a reference to an object that keeps it from being garbage collected, then none of the objects it references can be collected either. This changed with the addition of WeakMap and WeakSet in ES2015, where you need a reference to both the WeakMap and a key in order to prevent the corresponding value from being collected.

Since that time, JavaScript has not provided a more advanced API for creating weakly held references, until now. The WeakRef proposal adds this capability. Now Firefox supports the WeakRef and FinalizationRegistry objects.

Hop over to the MDN docs for example usage of WeakRef. Garbage collectors are complicated, so make sure you also read this note of caution before using WeakRefs.
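As a brief sketch of the API shape (the object names here are illustrative):

```javascript
// A WeakRef holds its target without keeping it alive; deref() returns
// the target, or undefined once it has been garbage collected.
const target = { payload: 'expensive data' };
const ref = new WeakRef(target);

const value = ref.deref();
console.log(value === target); // true

// A FinalizationRegistry runs a callback some time after a registered
// object is collected; exact timing is up to the garbage collector.
const registry = new FinalizationRegistry((heldValue) => {
  console.log(`cleaned up: ${heldValue}`);
});
registry.register(target, 'target');
```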


Firefox 79 includes new WebAssembly functionality:

  • First off, seven new built-in operations are provided for bulk memory operations. For example, copy and initialize operations let WebAssembly model native functions such as memcpy and memmove more efficiently.
  • The reference-types proposal is now supported. It provides a new type, externref, which can hold any JavaScript value, for example strings, DOM references, or objects. The wasm-bindgen documentation includes guidance for taking advantage of externref from Rust.
  • With the return of SharedArrayBuffer objects, we’re now also able to support WebAssembly threads. Thus, it is now possible for WebAssembly Memory objects to be shared across multiple WebAssembly instances running in separate Web Workers. The outcome? Very fast communication between Workers, as well as significant performance gains in web applications.

WebExtensions updates

Starting with Firefox 79, developers of tab management extensions can improve perceived performance when users switch tabs. The new tabs.warmup() function prepares a tab to be displayed. Developers can use it when they anticipate a tab switch, e.g. when a user hovers over a button or link.

If you’re an extension developer and your extension syncs items across multiple devices, be aware that we ported the storage.sync area to a Rust-based implementation. Extension data that had been stored locally in existing profiles will automatically migrate the first time an installed extension tries to access storage.sync data in Firefox 79. As a quick note, the new implementation enforces client-side quota limits. You should estimate how much data your extension stores locally and test how your extension behaves once the data limit is exceeded. Check out this post for testing instructions and more information about this change.

Take a look at the Add-ons Blog for more updates to the WebExtensions API in Firefox 79!


As always, feel free to share constructive feedback and ask questions in the comments. And thanks for keeping your Firefox up to date!

The post Firefox 79: The safe return of shared memory, new tooling, and platform updates appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyAustralian watchdog recommends major changes to exceptional access law TOLA

Australia’s Independent National Security Legislation Monitor (INSLM) earlier this month released a 316-page report calling for significant, and much needed, reforms to the nation’s 2018 Telecommunications and Other Legislation Amendment (TOLA) law. The Parliamentary Joint Committee on Intelligence and Security (PJCIS) will meet later this month to consider the INSLM’s recommendations. While we still believe this dangerous law should be repealed, if enacted, these recommendations would go a long way in reducing the risk of this flawed piece of legislation.

This legislation – which Mozilla has continually opposed – allows Australian authorities to force nearly all actors in the digital ecosystem (Designated Communications Providers or DCPs) to do “acts or things” with an explicit goal of weakening security safeguards. For example, under this law, using a Technical Assistance Notice (TAN), Australian authorities could force a company to turn over sensitive security information, or using a Technical Capability Notice (TCN), they could force a company to redesign its software.

In his report, the INSLM offered a wide range of critiques and recommendations to limit the scope of TOLA. Of particular note, the INSLM offered the following key proposals:

  • Judicial review – The INSLM noted that all non-government stakeholders, including Mozilla, raised concerns that expansive new powers granted by TOLA could be used without judicial review or authorization. The most important recommendation in the INSLM’s report is to require TANs and TCNs to be reviewed and approved by the Administrative Appeals Tribunal (AAT). The AAT is a well-respected, quasi-judicial body with the power to conduct classified hearings and adjudications which the INSLM proposes would be led by a new Investigatory Powers Commissioner (IPC). As in the UK, the IPC would be a retired high ranking judge with access to its own independent technical advisors. While implementation of these recommendations will help in limiting the harm of TANs and TCNs, we still do not think Australian authorities should have these powers.
  • Definitions of systemic weakness and target technology – While there is a safeguard in TOLA that orders under this law cannot be used to force the creation of a systemic weakness or vulnerability, these terms are worryingly, vaguely defined: “a systemic vulnerability means a vulnerability that affects a whole class of technology, but does not include a vulnerability that is selectively introduced to one or more target technologies that are connected with a particular person.” The INSLM’s report recommends helpful amendments to the definition of systemic weakness, and recommends the removal of the term “systemic vulnerability” entirely. Furthermore, we’ve previously noted that TOLA is unclear on what constitutes a “class of technology.” Is the Firefox browser a class of technology unto itself? It seems contrary to the spirit of this limitation to allow Australian authorities to compromise the security of the hundreds of millions of Firefox users who have never been under suspicion of any wrongdoing. Crucially, the INSLM clarifies that target technology should refer to “the specific instance used by the intended target,” which would narrow the scope so that targeting is more likely to affect the target alone.
  • Employee protection – Mozilla, among many other DCPs, has been concerned by the risk that the definition of DCP in the law could be read to allow Australian authorities to serve an order on any employee of a DCP. The INSLM recommended that a natural person should only be considered to be a DCP where that natural person is a sole trader. We agree with the INSLM that “it is necessary to put this issue beyond doubt” and urge the PJCIS to amend TOLA to reflect this interpretation.

While the INSLM has suggested a number of positive changes, we were disappointed by his recommendations regarding restrictions on disclosure. As it stands, TOLA limits companies from disclosing the fact that they have been served with these orders. The INSLM’s report suggests that Commonwealth officials be authorized to disclose TAN/TCN info (as well as that of TARs, which are voluntary Technical Assistance Requests) to the public and to government officials when disclosure is in the national or public interest. In our view this is inadequate to address the underlying concern. Companies can’t be transparent with their users nor can there be a robust public debate about the wisdom of certain technical capabilities when companies are still restricted from disclosure. Moreover, such a lack of transparency is at odds with basic open source and security engineering principles.

TOLA also presently lacks crucial restrictions on the ability of foreign authorities to exercise the powers the law grants. The INSLM notes that a large overhaul of the procedural safeguards around mutual legal assistance in criminal matters is likely forthcoming in the International Production Orders (IPO) Bill, which Australia is expected to enact later this year as it pursues acceptance under the U.S. Cloud Act. We continue to advocate for strict limitations on how and when foreign countries can request the assistance of Australian authorities through TOLA.

Mozilla has been involved throughout the legislative process and the development of the INSLM’s report. We filed comments to the PJCIS in late 2018 and early 2019 warning of TOLA’s dangerous effects. Martin Thomson, Mozilla Distinguished Engineer, testified at a hearing held by the INSLM – which ultimately proceeded to quote a portion of Martin’s testimony in his final report. Moreover, our team has provided comments to the Australian Ministry of Communications, Cyber Safety & the Arts relating specifically to the significant security risks posed by TCNs. Our December 2019 cover letter to the INSLM contributing input to his report can be found here. A detailed list of Mozilla’s recommendations alongside related INSLM recommendations can be found here.

The PJCIS will hold a hearing later this month to discuss the recommendations and likely begin the process of discussing amendments to TOLA. This presents the PJCIS with a unique opportunity to demonstrate leadership in defending individuals’ online privacy and security while enabling effective access to justice. The implementation of TOLA continues to pose serious privacy, security, and due process issues for both users and developers, and Mozilla will continue to oppose this law. In the event that the bill is not repealed, we strongly urge the involved MPs and Senators to adopt the INSLM’s recommendations which may help soften the blow of some of the law’s most damaging provisions.

The post Australian watchdog recommends major changes to exceptional access law TOLA appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyThe Open Technology Fund’s vital role for democracy worldwide should not be undermined

The Open Technology Fund plays a vital role for democracy worldwide. That’s why Mozilla on Friday joined a friend of the court brief in support of the Open Technology Fund’s independence from government control as OTF’s case moves forward to the D.C. Circuit Court of Appeals.

The Open Technology Fund is a U.S. government funded, independent nonprofit corporation with a mission to support development of open-source technologies that “increase free expression, circumvent censorship, and obstruct repressive surveillance as a way to promote human rights and open societies.” One such OTF-supported project is Tor Browser, which is built on the Firefox codebase and enables encrypted access to the web for anonymous browsing. Another is Let’s Encrypt, a free certificate authority enabling more secure web connections that began as a project of Mozilla, EFF, and the University of Michigan. These are invaluable tools not only to citizens of authoritarian regimes, but more broadly to internet users everywhere who rely on them to protect the privacy of their personal associations, communications, and interests.

OTF’s vital role in promoting internet freedom worldwide was severely threatened last month when Michael Pack, the newly installed CEO of the U.S. Agency for Global Media (USAGM), fired the head of OTF and appointed a new acting director, a move that we do not believe he has the legal authority to take. Originally a project of Radio Free Asia, which is supervised by USAGM along with Voice of America and other government-funded media outlets, OTF in 2019 spun off into its own independent nonprofit corporation while continuing to receive federal funding. In response to Mr. Pack’s recent actions, OTF filed suit, challenging his authority to dictate the leadership of the organization under the new structure.

OTF’s independence from any government is critical to its mission. Digital tools to make the internet more secure and safer for speech will be less effective if they are perceived to be influenced by government interests. At a time when surveillance and censorship are increasing worldwide, this consequence would be particularly troubling. Moreover, the first amendment implications of USAGM’s actions are significant; as the brief notes: “the independence of private entities and civil society from the government is a hallmark of our democracy.” It is Mozilla’s hope that the Court will recognize these concerns and deliver an opinion that preserves OTF’s ability to serve as an indispensable resource for digital privacy and security, and for democracy.

The post The Open Technology Fund’s vital role for democracy worldwide should not be undermined appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 79

We have a little more news this release: a new API method, a reminder about a recently announced change, a preview of some things to come, and a few interesting improvements. Let’s get started!

Warming up tabs

To optimize resource usage, rendering information for inactive tabs is discarded. When Firefox anticipates that a tab will be activated, the tab is “warmed up”, so switching to it feels nearly instantaneous. With the new tabs.warmup function, tab manager extensions can benefit from the same perceived performance improvement. Note that this API does not work on discarded tabs and does not need to be called immediately before switching tabs. It is merely a performance improvement when the tab switch can be anticipated, such as when the user hovers over a button that, when clicked, would switch to the tab.

Changes to storage.sync

We’ve blogged about this recently, but given this is part of Firefox 79 I wanted to make sure to remind you about the storage.sync changes we’ve been working on. Storage quotas for the storage.sync API are now being enforced as part of backend changes we’ve introduced for better scalability and performance.

There is no immediate action required if you don’t use the storage.sync API or only store small amounts of data. Still, we encourage you to make your code resilient as your storage needs grow by checking for quota errors. Also, if you get support requests from users related to stored preferences, keep this change in mind and support them in filing a bug as necessary.

For more information and how to file a bug in case you come across issues with this change, please see the blog post.

Firefox site isolation coming later this year

The Firefox platform team has been working on a new security architecture that isolates sites from each other, down to separating cross-origin iframes from the tab’s process. This new model, nicknamed Fission, is currently available for opt-in testing in Nightly. The platform team is planning to begin roll-out to Nightly and Beta users later this year.

So far, we have identified two changes with Fission enabled that will impact extensions:

  • Content scripts injecting extension iframes (from a moz-extension:// url) and accessing them directly via the contentWindow property will be incompatible with Fission, since that iframe will run in a different process. The recommended pattern, as always, is to use postMessage and extension messaging instead.
  • The synchronous canvas drawWindow API will be deprecated, since it’s unable to draw out-of-process iframes. You should switch to the captureTab method, which we are looking to extend with more functionality to provide a sufficient replacement.

If you are the developer of an extension that uses one of these features, we recommend that you update your extension in the coming months to avoid potential breakages.

We’re working to make the transition to Fission as smooth as possible for users and extension developers, so we need your help: please test your extensions with Fission enabled, and report any issues on Bugzilla as blocking the fission-webext meta bug. If you need help or have any questions, come find us on our community forum or Matrix.

We will continue to monitor changes that will require add-ons to be updated. We encourage you to subscribe to our blog to stay up to date on the latest developments. If more changes to add-ons are necessary we will reach out to developers individually or announce the changes here.


  • Extensions can use webRequest listeners to observe their own requests initiated by the downloads API.
  • The tabs.duplicate API now makes the tab active before resolving the promise, for parity with Chrome.
  • Disabling and re-enabling a WebExtension which provides a default search engine now correctly sets the engine as default again.

Special thanks in this release goes to community members Myeongjun Go, Sonia Singla, Deepika Karanji, Harsh Arora, and my friends at Mozilla who have put a lot of effort into making Firefox 79 successful. Also a special thanks to the Fission team for supporting us through the changes to the extension architecture. Stay tuned for next time!

The post Extensions in Firefox 79 appeared first on Mozilla Add-ons Blog.

Mozilla VR BlogA browser plugin for Unity


Unity's development tools and engine are far and away the most common way to build applications for VR and AR today. Previously, we've made it possible to export web-based experiences from Unity. Today, we're excited to show some early work addressing the other way that Unity developers want to use the web: as a component in their Unity-based virtual environments.

Building on our work porting a browser engine to many platforms and embedding scenarios, including as Firefox Reality AR for HoloLens 2, we have built a new Unity component based on Servo, a modern web engine written in the Rust language.

The Unity engine has a very adaptable multi-platform plugin system with a healthy ecosystem of third-party plugins, both open-source and proprietary. The plugin system allows us to run OS-native modules and connect them directly to components executing in the Unity scripting environment.

The goals of the experiments were to build a Unity native plugin and a set of Unity C# script components that would allow third parties to incorporate Servo browser windows into Unity scenes, and optionally, provide support for using the browser surface in VR and AR apps built in Unity.

Today, we’re releasing a fully-functional prototype of the Servo web browser running inside a Unity plugin. This is an early-stage look into our work, but we know excitement is high for this kind of solution, so we hope you’ll try out this prototype, provide your feedback, and join us in building things with it. The version released today targets the macOS platform, but we will add some of the other platforms supported by Servo very soon.

Getting started

We’ve open-sourced the plugin. Head on over, click the star and fork the code, check it out to your local machine, and then open the project inside Unity.


Developer instructions are in the README file in the repository.

What it does

You can work directly with the browser window and controls inside the Unity Editor. Top-level config is on the ServoUnityController object. Other important objects in the scene include the ServoUnityWindow, ServoUnityNavbarController, and ServoUnityMousePointer.

The ServoUnityWindow can be positioned anywhere in a Unity scene. Here, we’ve dropped it into the Mozilla mushroom cave (familiar to users of Firefox Reality, by the amazing artist Jasmin Habezai-Fekri), and provided a camera manipulator that allows us to move around the scene and see that it is a 3D view of the browser content.

Servo has high-quality media playback via the GStreamer framework, including audio support. Here we’re viewing sample MPEG4 video, running inside a deployed Unity player build.

Customizable search is included in the plugin. A wide variety of web content is viewable with the current version of Servo, with greater web compatibility being actively worked on (more on that below). WebGL content works too.

How it works


Development in Unity uses a component-based architecture, where Unity executes user code attached to GameObjects, organised into scenes. Users customise GameObjects by attaching scripts which execute in a C# environment, either using the Mono runtime or the IL2CPP ahead-of-time compiler. The Unity event lifecycle is accessible to user scripts inheriting from the Unity C# class MonoBehaviour. User scripts can invoke native code in plugins (which are just OS-native dynamic shared objects) via the C# runtime’s P/Invoke mechanism. In fact, Unity’s core itself is implemented in C++ and provides native code in plugins with a second set of C/C++-accessible interfaces to assist in some low-level plugin tasks.

Servo is itself a complex piece of software. By design, most of its non user-facing functionality is compiled into a Rust library, libservo. For this first phase of the project, we make use of a simplified C-compatible interface in another Rust library named libsimpleservo2. This library exposes C-callable functions and callback hooks to control the browser and view its output. Around libsimpleservo2, we put in place native C++ abstractions that encapsulate the Unity model of threads and rendering, and expose a Unity-callable set of interfaces that are in turn operated by our C# script components.


Getting the browser content into Unity

We create an object in Unity, an instance of ServoUnityWindow, to wrap an instance of Unity’s Texture2D class and treat it as a browser content pane. When using Unity’s OpenGL renderer, the Texture2D class is backed by a native OpenGL texture, and we pass the OpenGL texture “name” (i.e. an ID) to the plugin, which binds the texture to a framebuffer object which receives the final composited texture from Servo.

As we do not have control over the binding of the texture and the Unity context, the current design for updating this texture uses a blit (copy) via Servo’s surfman-chains API. Essentially, Servo’s WebRender writes to an OS-specific surface buffer on one thread, and then this surface buffer is bound read-only to Unity’s render thread and a texture copy is made using OpenGL APIs. In the initial macOS implementation for example, the surface buffer is an IOSurface which can be zero-cost moved between threads, allowing an efficient implementation where the browser compositor can write in a different thread to the thread displaying the texture in Unity.

Control and page meta-data is communicated separately, via a set of APIs that allow search and navigation to URLs, updating of page titles, and the usual back/forward/stop/home button set.

Because the browser content and controls are all ultimately Unity objects, the Unity application you’re building can position, style, or programmatically control these in any way you like.


Getting the project to this stage has not been without its challenges, some of which we are still addressing. Unity’s scripting environment runs largely single-threaded, with the exception of rendering operations which take place on a separate thread on a different cadence. Servo, however, spawns potentially dozens of lightweight threads for a variety of tasks. We have taken care to marshal returning work items from Servo back to the correct threads in Unity. There are some remaining optimizations to be made in deciding when to refresh the Unity texture. Currently, it just refreshes every frame, but we are adding an API to the embedding interface to allow finer-grained control.

As an incubator for browser technology, Servo is focused on developing new technologies. Notable tech that has moved from Servo to the Gecko engine powering Firefox includes the GPU-based rendering engine WebRender and the CSS engine Stylo. Those successes aside, full web compatibility is still an area where Servo has a significant gap, as we have focused primarily on big improvements for the user and specific experiences over the long tail of the web. A recent effort by the Servo community has seen great advances in Servo’s webcompat, so we expect the subset of the web browsable by Servo to continue to grow rapidly.

Development plans

Supporting the full range of platforms currently supported by Servo is our first follow-up development priority, with Windows Win32 and Windows UWP support at the top of the list. Many of you have seen our Firefox Reality AR for HoloLens 2 app, and UWP support will allow you to build Servo into your own AR apps for the HoloLens platform using the same underlying browser engine.

We’d also like to support a greater subset of the full browser capability. High on the list is multiple-window support. We’re currently working on graduating the plugin from the libsimpleservo2 interface to a new interface that will allow applications to instantiate multiple windows, tabs, and implement features like history, bookmarks and more.

This first release is focused on the web browsable through 2D web pages. Servo also supports the immersive web through the WebXR API, and we’re exploring connecting WebXR to Unity’s XR hardware support through the plugin interface. We’ll be starting with support for viewing 360° video, which we know from our Firefox Reality user base is a prime use case for the browser.


Whether it’s a media player, an in-game interface to the open web, browser-as-UI, bringing in specific web experiences, or the myriad of other possibilities, we can’t wait to see some of the imaginative ways developers will exploit Servo’s power and performance inside Unity-built apps.

Blog of DataThis Week in Glean: Automated end-to-end tests for Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member Raphael Pierzina! 👨🏻‍💻


Mozilla All-Hands

Last year at the Mozilla All-Hands in Whistler, Canada I went for a walk with my colleague Mark Reid who manages our Data Platform team. We caught up on personal stuff and discussed ongoing projects as well as shared objectives for the next half-year. These in-person conversations with colleagues are my favorite activity at our semi-annual gatherings and are helpful in ensuring that my team is working on the most impactful projects and that our tests create value for the teams we support. 📈


For Mozilla, getting reliable data from our products is critical to inform our decision making. Glean is a new product analytics and telemetry solution that provides a consistent experience and behavior across all of our products. Mark and I agreed that it would be fantastic if we had automated end-to-end tests to complement existing test suites and alert us of potential issues with the system as quickly as possible.


We wrote a project proposal, consulted with the various stakeholders and presented it to the Data Engineering and Data Operations teams, before scoping out the work for the different teams and getting started on the project. Fast forward to today, I’m excited to share that we have recently reached a major milestone and successfully completed a proof of concept!

The burnham project is an end-to-end test suite that aims to automatically verify that Glean-based products correctly measure, collect, and submit non-personal information to the GCP-based Data Platform and that the received telemetry data is then correctly processed, stored to the respective tables and made available in BigQuery. 👩‍🚀📈🤖

The project theme is inspired by Michael Burnham, the fictional protagonist of the web television series Star Trek: Discovery, portrayed by Sonequa Martin-Green. Burnham is a science specialist on the Discovery. She and her crew research spore drive technology and complete missions in outer space, and these themes of scientific exploration and space travel are a perfect fit for this project.



We have developed a command-line application based on the Glean SDK Python bindings for producing test data as part of the automated end-to-end test suite. The burnham application submits custom discovery Glean pings to the Data Platform, which validates and stores these pings in the burnham_live.discovery_v1 BigQuery table.

Every mission identifier that is passed as an argument to the burnham CLI corresponds to a class, which defines a series of actions for the space ship in its complete method:

class MissionF:
    """Warp two times and jump one time."""

    identifier: ClassVar[str] = "MISSION F: TWO WARPS, ONE JUMP"

    def complete(self, space_ship: Discovery) -> None:
        # Body sketched from the docstring above; the coordinates are illustrative.
        space_ship.warp_drive("abc123")
        space_ship.warp_drive("abc456")
        space_ship.spore_drive("abc789")
The actions record to a custom labeled_counter metric, as you can see in the following snippet. After completing a mission burnham submits a discovery ping that contains the recorded metrics.

class WarpDrive:
    """Space-travel technology."""

    def __call__(self, coordinates: str) -> str:
        """Warp to the given coordinates."""
        # Metric name inferred from the metrics.labeled_counter.technology_space_travel
        # column used in the BigQuery queries below.
        metrics.technology.space_travel["warp_drive"].add(1)
        logger.debug("Warp to %s using space-travel technology", coordinates)

        return coordinates

The example test scenario in a following section shows how we access this data. 📊

You can find the code for the burnham application in the application directory of the burnham repository. 👩‍🚀
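Putting the two snippets above together, here is a self-contained sketch of the mission/drive pattern, with a plain in-memory Counter standing in for the Glean labeled_counter metric (the Discovery class and coordinates below are illustrative, not burnham’s actual implementation):

```python
from collections import Counter


class Discovery:
    """Minimal stand-in for burnham's space ship: drive usage is tallied
    in an in-memory Counter instead of a Glean labeled_counter metric."""

    def __init__(self) -> None:
        self.technology_space_travel: Counter = Counter()

    def warp_drive(self, coordinates: str) -> str:
        """Warp to the given coordinates, recording one warp-drive use."""
        self.technology_space_travel["warp_drive"] += 1
        return coordinates

    def spore_drive(self, coordinates: str) -> str:
        """Jump to the given coordinates, recording one spore-drive use."""
        self.technology_space_travel["spore_drive"] += 1
        return coordinates


# Completing MISSION F: two warps, one jump.
ship = Discovery()
for coordinates in ("abc123", "abc456"):
    ship.warp_drive(coordinates)
ship.spore_drive("abc789")

assert ship.technology_space_travel == {"warp_drive": 2, "spore_drive": 1}
```

In the real application the counters live in Glean metrics, and the recorded values leave the process only when burnham submits a discovery ping.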


We also developed a test suite based on the pytest framework that dynamically generates tests. Each test runs a specific query on BigQuery to verify a certain test scenario.

The following snippet shows how we generate the tests in a pytest hook:

def pytest_generate_tests(metafunc):
    """Generate tests from test run information."""

    ids = []
    argvalues = []

    for scenario in metafunc.config.burnham_run.scenarios:
        # The SQL query is expected to contain a @burnham_test_run parameter
        # and the value is passed in for the --run-id CLI option.
        query_job_config = bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter(
                    "burnham_test_run", "STRING", metafunc.config.option.run_id
                )
            ]
        )
        ids.append(scenario.name)  # human-readable test ID for each scenario
        argvalues.append([query_job_config, scenario.query, scenario.want])

    metafunc.parametrize(["query_job_config", "query", "want"], argvalues, ids=ids)

The test suite code is located in the bigquery directory of the burnham repository. 📊


We build and push Docker images for both burnham and burnham-bigquery on CI for pushes to the main branch of the burnham repository. The end-to-end test suite is configured as a DAG on telemetry-airflow on the Data Platform and scheduled to run daily (this is the same infrastructure we use to generate all the derived datasets). It runs several instances of a burnham Docker container to produce Glean telemetry, uses an Airflow sensor to wait for the data to be available in the burnham live tables, and then runs burnham-bigquery to verify the results.

The following snippet shows how we call a helper function that returns a GKEPodOperator which runs the burnham Docker container in a Kubernetes pod. We pass in information about the current test run which we later use in the SQL queries to filter out rows from other test runs. We also specify the missions burnham runs and whether the spore-drive experiment is active for this client:

# Parameter names below are illustrative; see the burnham DAG for the real helper.
client2 = burnham_run(
    task_id="client2",
    burnham_test_run=burnham_test_run,
    burnham_missions=[
        "MISSION A: ONE WARP",
        # ... further missions for this client
    ],
    burnham_spore_drive=None,  # spore-drive experiment inactive for this client
)
Please see the burnham DAG for more information. 📋

Test scenarios

The burnham Docker image and the burnham-bigquery Docker image support parameters which control their behavior. This means we can modify the client automation and the test scenarios from the burnham DAG and the DAG effectively becomes the test runner.

We currently test four different scenarios. Every scenario consists of a BigQuery SQL query and a corresponding list of expected result rows.

Example test scenario

The test_labeled_counter_metrics test verifies that labeled_counter metric values reported by the Glean SDK across several documents from three different clients are correct.

The UNNEST operator in the following SQL query takes metrics.labeled_counter.technology_space_travel and returns a table, with one row for each element in the ARRAY. The CROSS JOIN adds the values for the other columns to the table, which we check in the WHERE clause:

SELECT
  technology_space_travel.key,
  SUM(technology_space_travel.value) AS value_sum
FROM
  burnham_live.discovery_v1
CROSS JOIN
  UNNEST(metrics.labeled_counter.technology_space_travel) AS technology_space_travel
WHERE
  metrics.uuid.test_run = @burnham_test_run
GROUP BY
  technology_space_travel.key
ORDER BY
  technology_space_travel.key

The burnham DAG instructs client1 to perform 5 warps and 5 jumps, client2 to perform 10 warps and 8 jumps, and client3 to perform 3 warps. We expect a total number of 18 warps with a warp drive and a total number of 13 jumps with a spore drive across the three clients:

want = [
    {"key": "spore_drive", "value_sum": 13},
    {"key": "warp_drive", "value_sum": 18},
]
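The expected totals can be double-checked with a few lines of Python that mirror the query’s SUM … GROUP BY over the per-client counts described above:

```python
from collections import Counter

# Per-client labeled_counter values, as driven by the burnham DAG.
clients = {
    "client1": {"warp_drive": 5, "spore_drive": 5},
    "client2": {"warp_drive": 10, "spore_drive": 8},
    "client3": {"warp_drive": 3},
}

# Equivalent of SUM(technology_space_travel.value) ... GROUP BY key.
totals = Counter()
for counters in clients.values():
    totals.update(counters)

got = [{"key": key, "value_sum": value} for key, value in sorted(totals.items())]
assert got == [
    {"key": "spore_drive", "value_sum": 13},
    {"key": "warp_drive", "value_sum": 18},
]
```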

Next steps

We currently monitor the test run results via the Airflow dashboard and have set up email alerts for when the burnham DAG fails. Airflow stores logs for every task allowing us to diagnose failures.

We are now working on storing the test results, along with the test report from burnham-bigquery, in a new BigQuery table. This will enable us to create dashboards and monitor test results over time. We also plan to add more test scenarios to the suite, for example a test to verify that different naming schemes for pings work as designed in the Glean SDK and on the Data Platform.

It has been amazing to collaborate with folks from various teams at Mozilla on the Glean end-to-end tests project and I’m excited to continue this work with my fellow Mozillians in the next half-year. 👨‍🚀

This is a syndicated copy of the original post at

about:communityFirefox 79 new contributors

With the release of Firefox 79, we are pleased to welcome the 21 developers who contributed their first code change to Firefox in this release, 18 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

The Mozilla BlogMozilla Joins New Partners to Fund Open Source Digital Infrastructure Research

Today, Mozilla is pleased to announce that we’re joining the Ford Foundation, the Sloan Foundation, and the Open Society Foundations to launch a request for proposals (RFP) for research on open source digital infrastructure. To kick off this RFP, we’re joining with our philanthropic partners to host a webinar today at 9:30 AM Pacific. The Mozilla Open Source Support Program (MOSS) is contributing $25,000 to this effort.

Nearly everything in our modern society, from hospitals and banks to universities and social media platforms, runs on “digital infrastructure” – a foundation of open source code that is designed to solve common challenges. The benefits of digital infrastructure are numerous: it can reduce the cost of setting up new businesses, support data-driven discovery across research disciplines, enable complex technologies such as smartphones to talk to each other, and allow everyone to have access to important innovations like encryption that would otherwise be too expensive.

In joining with these partners for this funding effort, Mozilla hopes to propel further investigation into the sustainability of open source digital infrastructure. Selected researchers will help determine the role companies and other private institutions should play in maintaining a stable ecosystem of open source technology, the policy and regulatory considerations for the long-term sustainability of digital infrastructure, and much more. These aims align with Mozilla’s pledge for a healthy internet, and we’re confident that these projects will go a long way towards deepening a crucial collective understanding of the industrial maintenance of digital infrastructure.

We’re pleased to invite interested researchers to apply to the RFP, using the application found here. The application opened on July 20, 2020, and will close on September 4, 2020. Finalists will be notified in October, at which point full proposals will be requested. Final proposals will be selected in November.

More information about the RFP is available here.

The post Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research appeared first on The Mozilla Blog.

hacks.mozilla.orgMDN Web Docs: 15 years young

On July 23, MDN Web Docs turned 15 years old. From humble beginnings, rising out of the ashes of Netscape DevEdge, MDN has grown to be one of the best-respected web platform documentation sites out there. Our popularity is growing, and new content and features arrive just about every day.

When we turned 10, we had a similar celebration, talking about MDN Web Docs’ origins, history, and what we’d achieved up until then. Refer to MDN at ten if you want to go further back!

In the last five years, we’ve broken much more ground. These days, we can boast roughly 15 million views per month, a comprehensive browser compatibility database, an active beginner’s learning community, editable interactive examples, and many other exciting features that didn’t exist in 2015. An anniversary to be proud of!

In this article, we present 15 sections highlighting our most significant achievements over the last five years. Read on and enjoy, and please let us know what MDN means to you in the comments section.

1. We’ve got an MDN Web Docs Swag Store

Launched earlier this year, the MDN Web Docs Store is the place to go to show your support for web standards documentation and get your MDN Web Docs merchandise. Whether it’s clothing, bags, or other accessories featuring your favorite dino head or MDN Web Docs logos, we’ve got something for you.

And, for a limited time only, you can pick up special 15th anniversary designs.

2. MDN’s audience has grown like never before

In 2015, MDN served 4.5 million users on a monthly basis. A year later, we launched a product strategy designed to better serve Web Developers and increase MDN’s reach. We improved the site’s performance significantly: in the last two years alone, page load time has gone down from 5s to 3.5s for the slowest 90th percentile on MDN.

We fixed many issues that made it harder to surface MDN results in search engines, from removing spam to removing hundreds of thousands of pages from indexing. We listened to users to address an under-served audience on MDN: action-oriented developers, those who like actionable information right away. You can read below about some of the specific changes we made to better serve this audience.

With over 3,000 new articles in the last 3 years, 260,000 article edits, and all the other goodness you can read about here, MDN has grown in double-digit percentages, year over year, every year — since 2015. Today MDN is serving more than 15 million web developers on a monthly basis. And, it’s serving them better than ever before.

3. Satisfaction guaranteed

When we first started tracking task completion and satisfaction on MDN Web Docs 4 years ago, we were thrilled to see that more than 78% of MDN users were either satisfied or very satisfied with MDN, and 87% of MDN users reported that they were able to complete the task that brought them to the site.

Since then it has been our goal to address a larger share of the developer audience while still maintaining these levels of satisfaction and task completion. Today, even though we have tripled our audience size, the share of people satisfied or very satisfied with MDN has gone up to 80%. Task completion has increased to a phenomenal 92%.

4. The learning area: MDN becomes beginner-friendly

Around the middle of 2015, the writers’ team began to act on user feedback that MDN wasn’t very beginner-friendly. We heard from novice web developers that MDN had been recommended as a good source of documentation. However, when they went to check out the site, they found it too advanced for their needs.


In response to this feedback, we started the Learn Web Development section, informally known as the learning area. This area initially covered a variety of beginner’s topics ranging from what tools you need and how to get content on the web, to the very basics of web languages like HTML, CSS, and JavaScript. Getting started with the web was the first fully-fledged learning module to be published. It paved the way nicely for what was to come.

From simple beginnings, Learn Web Development has grown to over 330 articles covering all the essentials for aspiring web developers. We serve over 3 million page views per month (a little under 10% of all monthly MDN views). And you’ll find an active learner community over on our discourse forums.

5. The Front-end developer learning pathway

By 2019, the learning area was doing well, but we felt that something was still missing. There is a huge demand for training material on client-side JavaScript frameworks, and structured learning pathways. Serious students tend to learn with a goal in mind, such as becoming a front-end developer.

Here’s what happened next:

  1. To figure out exactly what to cover, we did some research. This culminated in the publication of the Introduction to client-side frameworks and Understanding client-side web development tools modules, which have already been very well-received. We now provide introductory material on React, Ember, and Vue, with more framework documentation to come in the future. And in general, we provide beginners with an overview of available tools, how to apply them, and how they relate to what they already know.
  2. We organized the content we’ve published so far into the Front-end developer learning pathway — an opinionated pathway containing all the knowledge you’ll need to become a front-end web developer, along with time estimates, suggested order of learning, etc.

Some folks have expressed concern over MDN’s framework-oriented content. MDN is supposed to be the neutral docs site, and focus purely on the standards! We understand this concern. And yet, the learning area has been created from a very pragmatic standpoint. Today’s web development jobs demand knowledge of frameworks and other modern tooling, and to pretend that these don’t exist would be bad for the resource (and its users).

Instead, we aim to strike a balance, providing framework coverage as a neutral observer, offering opinions on when to use frameworks and when not to, and introducing them atop a solid grounding of standards and best practices. We show you how to use frameworks while adhering to essential best practices like accessibility.

6. Bringing interactive examples to reference pages

The 2016 MDN product strategy highlighted an opportunity to add interactive examples to our reference docs. From user feedback, we knew that users value easy availability of simple code examples to copy, paste, and experiment with. It is a feature of documentation resources they care deeply about, and we certainly weren’t going to disappoint.

So between 2017 and 2019, a small team of writers and developers designed and refined editors for interactive examples. They wrote hundreds of examples for our JavaScript, CSS, and HTML reference pages, which you can now find at the top of most of the reference pages in those areas. See Bringing interactive examples to MDN for more details.

an interactive code example on MDN

The most significant recent change to this system was a contribution from @ikarasz. We now run ESLint on our JavaScript examples, so we can guarantee a consistent code style.

In the future, we’d love to add interactive examples for some of the Web API reference documentation.

7. MDN revolutionizes browser compat data

In 2017, the team started a project to completely redo MDN Web Docs’ compatibility data tables. The wiki had hand-maintained compat sections on about 6,000 pages, and these differed greatly in terms of quality, style, and completeness.

Given that the biggest web developer pain point is dealing with browser compatibility and interoperability, our compat sections needed to become a lot more reliable.

Throughout 2017 and 2018, the MDN community cleaned up the data. Over the course of many sprints, such as Hack on MDN: Building useful tools with browser compatibility data, compatibility information moved from the wiki tables into a structured JSON format in a GitHub repository.
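To give a feel for that structured format, here is a simplified, illustrative sketch of a BCD-style entry. The real schema in the mdn/browser-compat-data repository has many more fields, and the version numbers below are invented for the example:

```javascript
// Simplified, illustrative sketch of a BCD-style entry. The real schema
// (mdn/browser-compat-data) has more fields; the version numbers here
// are invented for the example.
const bcd = {
  css: {
    properties: {
      gap: {
        __compat: {
          support: {
            chrome: { version_added: "66" },
            firefox: { version_added: "61" },
          },
        },
      },
    },
  },
};

// A compat table renderer just walks this tree:
const support = bcd.css.properties.gap.__compat.support;
console.log(support.firefox.version_added); // "61"
```

Because the data is plain JSON in a repository rather than wiki markup, it can be validated, linted, and consumed by tools, which is what made the reuse described below possible.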

About half-way through the project we saw the first fruits of this work. Read MDN browser compatibility data: Taking the guesswork out of web compatibility for more details of what we’d achieved by early 2018.

It took until the end of 2018 to finish the migration. Today more than 8,000 English pages show compat data from our BCD repo – a place where all major browser vendors come together to maintain compatibility information.

A browser compat data table on MDN, showing that forEach has good cross-browser support

Over time, other projects have become interested in using the data as well. MDN compat data is now shown in VS Code, webhint, and other tools besides. And even the premier site about compat info, caniuse, has switched to using MDN compat data, as announced in 2019. (Read Caniuse and MDN compatibility data collaboration.)

Soon compat info about CSS will also ship in Firefox Devtools, giving web developers even more insights into potential compatibility breakages. This feature is currently in beta in Firefox Developer Edition.

8. WebExtensions docs

In 2015 Mozilla announced plans to introduce a new browser extension system that would eventually replace the existing ones. This system is based on, and largely compatible with, Chrome’s extension APIs. Over the next couple of years, as the Add-ons team worked on the WebExtensions APIs, we documented their work, writing hundreds of pages of API reference documentation, guides, tutorials, and how-to pages. (See the Browser Extensions docs landing page to start exploring.)

We also wrote dozens of example extensions, illustrating how to use the APIs. Then we prototyped a new way to represent browser compatibility data as JSON, which enabled us to publish a single page showing the complete compat status of the APIs. In fact, this work helped inspire and form the basis of what became the browser compat data project (see above).

9. The MDN Product Advisory Board

On MDN Web Docs, we’ve always collaborated and shared goals with standards bodies, browser vendors, and other interested parties. Around three years ago, we started making such collaborations more official with the MDN Product Advisory Board (PAB), a group of individuals and representatives from various organizations that meet regularly to discuss MDN-related issues and projects. This helps us recognise problems earlier, prioritize content creation, and find collaborators to speed up our work.

The PAB as it existed in early 2019. From left to right — Chris Mills (Mozilla), Kadir Topal (Mozilla), Patrick Kettner (Microsoft), Dominique Hazael-Massieux (W3C), Meggin Kearney (Google), Dan Applequist (Samsung), Jory Burson (Bocoup), Ali Spivak (Mozilla), and Robert Nyman (Google).


Under normal circumstances, we tend to have around 4 meetings per year — a combination of face-to-face and virtual meetups. This year, since the 2020 pandemic, we’ve started to have shorter, more regular virtual meetups. You can find the PAB meeting minutes on GitHub, if you are interested in seeing our discussions.

10. JavaScript error messages

Usually MDN Web Docs is there for you when you search for an API or a problem you need help solving. Most of MDN’s traffic is from search engines. In 2016, we thought about ways in which our content could come closer to you. When a JavaScript error appears in the console, we know that you need help, so we created [Learn more] links in the console that point to JavaScript error docs on MDN. These provide more information to help you debug your code. You can read more about this effort in Helping web developers with JavaScript errors.

We’ve also provided error documentation for other error types, such as CORS errors.

11. Our fabulous new mobile layout

For some time, MDN Web Docs’ layout had a basic level of responsiveness, but the experience on mobile was not very satisfying. The jump menu and breadcrumb trail took up too much space, and the result just wasn’t very readable.

In 2020, our dev team decided to do something about this, and the result is much nicer. The jump menu is now collapsed by default, and expands only when you need it. And the breadcrumb trail only shows the “previous page” breadcrumb, not the entire trail.

the mobile view of MDN web docs, showing a much cleaner UI than it had previously

Please have a look at MDN on your mobile device, and let us know what you think! And please be aware that this represents the first step towards MDN Web Docs rolling out a fully-fledged design system to enforce consistency and quality of its UI elements.

12. HTTP docs

In 2016, we drafted a plan to create HTTP docs. Traditionally, MDN had been very much focused on the client side, but developers were increasingly being called upon to understand new network APIs like Fetch, and more and more HTTP headers. HTTP is, after all, a key building block of the web, so we decided to create an entirely new docs section to cover it.

Today, MDN documents more than 100 HTTP headers, provides in-depth information about CSP and CORS, and helps web developers to secure their sites — together with the Mozilla Observatory.
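Those header docs pair naturally with the Fetch API’s Headers interface, which is how developers read and write header values in code. A minimal sketch (this works in browsers and in Node.js 18+, where Headers is a global; the CSP value is just an example):

```javascript
// Sketch: working with HTTP headers via the Fetch API's Headers class
// (available in browsers and in Node.js 18+). The CSP value is an example.
const headers = new Headers();
headers.set('Content-Security-Policy', "default-src 'self'");

// Header names are case-insensitive:
console.log(headers.get('content-security-policy')); // "default-src 'self'"
```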

13. A hat tip to our community

We would be remiss not to mention our wonderful contributor community in this post. Our community of volunteers has made us what we are. They have created significantly more content on MDN over the years than our paid staff, jumped into action to help with new initiatives (such as interactive examples and browser compat data), evangelised MDN Web Docs far and wide, and generally made the site a more diverse, more fun, and brighter place to be around.

To give you an idea of our community’s significance, have a look at the Mozilla & the Rebel Alliance report 2020, in which MDN is shown to be the largest community cluster in Mozilla, after Firefox.

A graphical representation of the size of Mozilla community contributions. MDN is on the bottom-right.


And we’d also like to give the browser compat data repo an honourable mention as one of the most active GitHub repos in the overall Mozilla GitHub presence.

Our community members are too numerous to thank individually, but we’d like to extend our warmest regards and heartfelt thanks to you all! You know who you are.

14. MDN Web Docs infrastructure modernization

It’s hard to believe that at the beginning of 2016, MDN Web Docs was served from a fixed set of servers in our old Santa Clara data center. Those servers were managed by a separate team, and modifications had to be coordinated far in advance. There was no way to quickly grow our server capacity to meet increasing demand. Deployments of new code always generated undue anxiety, and infrastructure-related problems were often difficult to diagnose, involving engineers from multiple teams.

Fast-forward to today, and so much has changed for the better. We’re serving MDN via a CDN backed by multiple services running within an AWS EKS Kubernetes cluster — with both cluster and pod auto-scaling. This is a huge step forward. We can not only grow our capacity automatically to meet increasing demand and deploy new code more reliably, but we can manage the infrastructure ourselves, in the open. You can visit the MDN Infra repo today. You’ll see that the infrastructure itself is defined as a set of files, and evolves in the open just like any other public repository on GitHub.

This transition was a huge, complex effort, involving many collaborators, and it was all accomplished without any significant disruption in service. We’ve placed MDN on a solid foundation, but we’re not resting. We’ll continue to evolve the foundation to meet the demands of an even brighter future!

15. Make way for Web DNA

In 2019, we were thinking about how to gain more insight into web developer problems, in order to make our content better address their needs. In the end, we decided to invest in an in-depth survey to highlight web developer pain points, in collaboration with the other members of the MDN PAB (see above). This effort was termed the Web Developer Needs Assessment (or Web DNA).

The survey results were widely publicized (download the 2019 Web DNA report (PDF)), and proved popular and influential. MDN Web Docs and many other projects and organizations used the results to help shape their future strategies.

And the good news is that we have secured funding to run a new Web DNA in 2020! Later on this year we’ll have updated findings to publish, so watch this space.

What’s next

That’s the story up to now, but it doesn’t end here. MDN Web Docs will carry on improving. Our next major move is a significant platform and content update to simplify our architecture and make MDN usage and contribution quicker and more effective. This also includes reinventing our content storage as structured data on GitHub. This approach has many advantages over our current storage in a MySQL database — including easier mass updates and linting, better consistency, improved community and contribution workflow, and more besides.

We hope you enjoyed reading. Stay tuned for more Web Docs excitement. And please don’t forget to share your thoughts and feedback below.

The post MDN Web Docs: 15 years young appeared first on Mozilla Hacks - the Web developer blog.

Mozilla KoreaMozilla Launches a VPN Service You Can Trust

Starting today, there is a VPN on the market from a company you trust: Mozilla VPN (Virtual Private Network), available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the maker of Firefox and a trusted name in online consumer security and privacy services.

See for yourself how Mozilla VPN works:

The first thing you’ll notice after installing Mozilla VPN is how fast it is. That’s because Mozilla VPN is based on modern, lean technology: the 4,000 lines of code implementing the WireGuard protocol are a mere fraction of the size of the legacy protocols used by other VPN providers.

You’ll also notice the interface, which is designed to be simple and straightforward, whether you are brand new to VPNs or just want to get set up and on the web.

With no long-term contracts or commitments required, Mozilla VPN is available for $4.99 a month, initially in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this fall.

In an industry crowded with companies making privacy and security promises, it can be very hard to know whom to trust. Mozilla is known for building products that help keep your information safe. We follow easy-to-read privacy principles that keep us focused on only the information we need to provide the service, and we do not keep logs of user data.

We don’t partner with third-party analytics platforms that want to build profiles of what you do online. As a user, the money you spend on this product not only gets you a best-in-class VPN, it also goes toward making the internet better for everyone.

<figcaption>Simple and very easy to use</figcaption>

Last year we ran a beta test of this VPN service, which encrypts the connections you make on the web and provides device-level protection. Many users shared their thoughts on why they need a service like this.

The top reasons people cited for using a VPN:

Security on every device – Users are flocking to VPNs for added protection online. With Mozilla VPN, your activity is encrypted across all apps and websites, whatever device you are using.
Added protection for your personal information – More than 50% of VPN users in the US and UK said that protecting their personal information on public Wi-Fi was their top reason for choosing a VPN.
Browsing the web anonymously – Users care deeply about anonymity. A VPN is a key component, encrypting all traffic and protecting your IP address and location.
More secure communication – With a VPN adding a layer of protection, you can be sure that every conversation on your network is encrypted.

In a world where unpredictability has become the “new normal,” we know it is more important than ever for you to feel safe, and to know that what you do online is your own business.

Learn more about Mozilla VPN on our website, or download it from the Google Play store.

This post is a Korean translation of Mozilla Puts Its Trusted Stamp on VPN.

Mozilla L10NL10n Report: July 2020 Edition


New localizers

Welcome Prasanta Hembram, Cloud-Prakash and Chakulu Hembram, from the newly created Santali community! They are currently localizing Firefox for iOS in Santali Ol Chiki script.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

Santali (Ol-Chiki script “sat-Olck”) has been added to Pontoon.

New content and projects

What’s new or coming up in Firefox desktop


Upcoming deadlines:

  • Firefox 79 is currently in beta and will be released on July 28. The deadline to update localization was on July 14.
  • The deadline to update localizations for Firefox 80, currently in Nightly, will be August 11 (4 weeks after the previous deadline).
Using a more inclusive language

As explained in a recent email to dev-l10n, we’re in the process of removing English terms that make direct or indirect references to racial oppression and discrimination.

In terms of impact on localization, that mainly involves the Master Password feature, which is now called Primary Password, starting from Firefox 80.

A Primary Password is a password that unlocks the other passwords saved locally in Firefox. Primary passwords are not synced between profiles or devices.

We ask all localizers to keep these implications in mind when translating, and to evaluate the translations previously used for “Master Password” in this light. If you identify other terms in your localizations or in the en-US version of our products that you feel are racially-charged, please raise the issue in Bugzilla and CC any of the l10n-drivers.

Most string changes regarding this update already landed in the last few days, and are available for translation in Pontoon. There is also going to be an alert in Firefox 80, to warn the users about the change:

If your translations for “Master Password” and “Primary Password” are identical, you can leave that string empty, otherwise you should translate “Formerly known as Master Password” accordingly, so that the warning is displayed. The string should be exposed in Pontoon shortly after this l10n report is published.

New Onboarding

Make sure to test the new about:welcome in Nightly. As usual, it’s a good idea to test this type of change in a new profile.

Note that a few more string updates and changes are expected to land this week, before Firefox 80 moves to beta.

New Experiments Section

Firefox 80 has a new Experiments section in Preferences (about:preferences#experimental). By the end of this Nightly cycle, there should be about 20 experiments listed there, generating a sizable amount of content to translate, much of it quite technical.

These are experiments that existed in Firefox for a while (since Firefox 70), but could only be manually enabled in about:config before this UI existed. Once the initial landing is complete, this feature will not require such a large amount of translation on a regular basis.

Most of these experiments will be available only in Nightly, and will be hidden in more stable versions, so it’s important – as always – to test your translations in Nightly. Given this, you should also prioritize translation for these strings accordingly, and focus on more visible parts first (always check the priority assigned to files in Pontoon).

What’s new or coming up in mobile

As many are already aware, the l10n deadline for getting strings into the Fenix release version was this past Saturday, July 18th. Out of the 90 locales working on Fenix in Pontoon, 85 made it to release! Congratulations to everyone for their hard work and dedication in bringing the same mobile experience to all our global users! This was a very critical step in the adventure of our new Android mobile browser.

Since we are now past string freeze, we have exposed new strings for the upcoming release. More details on the l10n timeline will come soon, so stay tuned.

There will also be a new version of Firefox for iOS (v28) soon: the l10n deadline to complete strings – as well as testing – is today, Wednesday July 22nd (PDT, end of day).

We now have screenshots in Taskcluster so that you can test your work (instead of via Google Drive): feel free to send me feedback about those. A big thank you to the iOS team (especially Isabel Rios and Johan Lorenzo from RelEng) for getting these ready and updated regularly!

What’s new or coming up in web projects

The web team continues making progress in migrating files to Fluent. Please take some time to review the files. Here are a few things to pay attention to:

Strings with warnings: It is important to check strings with warnings first. Warnings are usually caused by brand and product names that were not converted correctly because the names were translated. As long as a string has a warning, it can’t be activated at the string level on production, and the localized string will fall back to English. Since the page activation threshold is 80% completion, a page that was fully localized in the old format but contains warnings will appear mixed with English text.

Strings with errors but no warnings: All migrated pages need a thorough review. Even when a page doesn’t have warnings, it may contain errors that a script can’t detect. Here is an example:

  • .lang format:
    • en-US: Firefox browser – MSI installer
    • es-AR: Navegador Firefox – instalador MSI
  • .ftl format:
    • en: { -brand-name-firefox-browser } – MSI installer
    • es-AR after migration: Navegador { -brand-name-firefox } – instalador MSI
    • es-AR corrected: { -brand-name-firefox-browser } – instalador MSI

Testing on staging: Other than a few files that are “shared” or for forms, meaning the content in the file is not page specific, most files have a page specific URL for review. Here is an example to figure out how to test Firefox/enterprise.ftl:

  • Staging server:{locale_code}
  • File path in Pontoon:{locale_code}/mozillaorg/en/firefox/enterprise.ftl/
  • Staging for the page:{locale_code}/firefox/enterprise/
  • Example for es-AR:

What’s new or coming up in Pontoon

Keeping track of Machinery translations.

Pontoon now stores the Machinery source(s) of translations copied from the Machinery panel. This feature will help us evaluate the performance of each Machinery source and make improvements in the future.

It should also help reviewers, who can instantly see if and which Machinery source was used while suggesting a translation. If it was, a “Copy” icon will appear over the author’s avatar and the Machinery sources will be revealed on hover.

Newly published localizer-facing documentation


We have a new locale available in Nightly for Firefox desktop: Silesian (szl). In less than 5 months, they managed to get over 60% completion, with most of the high priority parts close to 100%. Kudos to Rafał and Grzegorz for the great work.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

hacks.mozilla.orgSafely reviving shared memory

At Mozilla, we want the web to be capable of running high-performance applications so that users and content authors can choose the safety, agency, and openness of the web platform. One essential low-level building block for many high-performance applications is shared-memory multi-threading. That’s why it was so exciting to deliver shared memory to JavaScript and WebAssembly in 2016. This provided extremely fast communication between threads.

However, we also want the web to be secure from attackers. Keeping users safe is paramount, which is why shared memory and high-resolution timers were effectively disabled at the start of 2018, in light of Spectre. Unfortunately, Spectre attacks are made significantly more effective with high-resolution timers, and such timers can be created with shared memory: one thread increments a shared memory location in a tight loop, and another thread samples that location as a nanosecond-resolution timer.
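The shared-memory “clock” can be sketched in a few lines. In a real attack the increment loop runs on a separate worker thread; both halves are shown inline here purely to illustrate the mechanism:

```javascript
// Sketch of the shared-memory "clock" described above. In a real attack the
// increment loop runs in a second thread; both halves are inlined here
// purely for illustration.
const sab = new SharedArrayBuffer(4);
const ticks = new Int32Array(sab);

// Thread A: the clock -- increment a shared counter in a tight loop.
const tick = () => Atomics.add(ticks, 0, 1);

// Thread B: the sampler -- read the counter before and after an operation.
const sample = () => Atomics.load(ticks, 0);

const t0 = sample();
for (let i = 0; i < 500; i++) tick(); // stand-in for the code being timed
const t1 = sample();
console.log(t1 - t0); // 500 "ticks" elapsed
```

The resolution of this clock is bounded only by how fast a core can execute the increment loop, which is why it defeats the coarsened timers browsers shipped as a mitigation.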

Back to the drawing board

Fundamentally, for a Spectre attack to work, an attacker and victim need to reside in the same process. Like most applications on your computer, browsers used to use a single process. This would allow two open sites, say attacker.example and victim.example, to Spectre-attack each other’s data as well as other data the browser might keep such as bookmarks or history. Browsers have long since become multi-process. With Chrome’s Site Isolation and Firefox’s Project Fission, browsers will isolate each site into its own process. This is possible due to the web platform’s retrofitted same-origin policy.

Unfortunately, isolating each site into its own process is still not sufficient for these reasons:

  1. The same-origin policy has a number of holes, two of which strongly informed our thinking during the design process:
    1. attacker.example can fetch arbitrary victim.example resources into attacker.example’s process, e.g., through the <img> element.
    2. Due to the existence of document.domain, the minimal isolation boundary is a site (roughly the scheme and registrable domain of a website’s host) and not an origin (roughly a website’s scheme, host, and port).
  2. At this point, we don’t know if it’s feasible to isolate each site into its own process across all platforms. It is still a challenging endeavor on mobile. While possibly not a long-term problem, we would prefer a solution that allows reviving shared memory on mobile soon.

Distilling requirements

We need to address the issues above to revive shared memory and high-resolution timers. As such, we have been working on a system that meets the following requirements:

  1. It allows a website to process-isolate itself from attackers and thereby shield itself from intra-process high-resolution timer attacks.
  2. If a website wants to use these high-performance features, it also needs to process-isolate itself from victims. In particular, this means that it has to give up the ability to fetch arbitrary subresources from any site (e.g., through an <img> element) because these end up in the same process. Instead, it can only fetch cross-origin resources from consenting origins.
  3. It allows a browser to run the entire website, including all of its frames and popups, in a single process. This is important to keep the web platform a consistent system across devices.
  4. It allows a browser to run each participating origin (i.e., not site) in its own process. This is the ideal end state across devices and it is important for the design to not prevent this.
  5. The system maintains backwards compatibility. We cannot ask billions of websites to rewrite their code.

Due to these requirements, the system must provide an opt-in mechanism. We cannot forbid websites from fetching cross-origin subresources, as this would not be backwards compatible. Sadly, restricting document.domain is not backwards compatible either. More importantly, it would be unsafe to allow a website to embed cross-origin documents via an <iframe> element and have those cross-origin resources end up in the same process without opting in.

Cross-origin isolated

New headers

Together with others in the WHATWG community, we designed a set of headers that meet these requirements.

The Cross-Origin-Opener-Policy header allows you to process-isolate yourself from attackers. It also has the desirable effect that attackers cannot have access to your global object if they were to open you in a popup. This prevents XS-Leaks and various navigation attacks. Adopt this header even if you have no intention of using shared memory!

The Cross-Origin-Embedder-Policy header with value require-corp tells the browser to only allow this document to fetch cross-origin subresources from consenting websites. Technically, the way that this works is that those cross-origin resources need to specify the Cross-Origin-Resource-Policy header with value cross-origin to indicate consent.

Impact on documents

If the Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers are set for a top-level document with the same-origin and require-corp values respectively, then:

  1. That document will be cross-origin isolated.
  2. Any descendant documents that also set Cross-Origin-Embedder-Policy to require-corp will be cross-origin isolated. (Not setting it results in a network error.)
  3. Any popups these documents open will either be cross-origin isolated or will not have a direct relationship with these documents. This is to say that there is no direct access through window.opener or equivalent (i.e., it’s as if they were created using rel="noopener").

A document that is cross-origin isolated will have access to shared memory, both in JavaScript and WebAssembly. It will only be able to share memory with same-origin documents and dedicated workers in the same “tab” and its popups (technically, same-origin agents in a single browsing context group). It will also have access to the highest-resolution timers available. Evidently, it will not have access to a functional document.domain.

The way these headers ensure mutual consent between origins gives browsers the freedom to put an entire website into a single process or put each of the origins into their own process, or something in between. While process-per-origin would be ideal, this is not always feasible on all devices. So requiring consent from everything that is pulled into these one or more processes is a decent middle ground.

Safety backstop

We created a safety backstop to be able to deal with novel cross-process attacks, using an approach that avoids having to disable shared memory entirely in order to remain web compatible.

The result is Firefox’s JSExecutionManager. This allows us to regulate the execution of different JavaScript contexts with relation to each other. The JSExecutionManager can be used to throttle CPU and power usage by background tabs. Using the JSExecutionManager, we created a dynamic switch (dom.workers.serialized-sab-access in about:config) that prevents all JavaScript threads that share memory from ever running code concurrently, effectively executing these threads as if on a single-core machine. Because creating a high-resolution timer using shared memory requires two threads to run simultaneously, this switch effectively prevents the creation of a high-resolution timer without breaking websites.

By default, this switch is off, but in the case of a novel cross-process attack, we could quickly flip it on. With this switch as a backstop, we can feel confident enabling shared memory in cross-origin isolated websites even when considering unlikely future worst-case scenarios.


Many thanks to Bas Schouten and Luke Wagner for their contributions to this post. And also, in no particular order, many thanks to Nika Layzell, Tom Tung, Valentin Gosu, Eden Chuang, Jens Manuel Stutte, Luke Wagner, Bas Schouten, Neha Kochar, Andrew Sutherland, Andrew Overholt, 蔡欣宜 (Hsin-Yi Tsai), Perry Jiang, Steve Fink, Mike Conca, Lars Thomas Hansen, Jeff Walden, Junior Hsu, Selena Deckelmann, and Eric Rescorla for their help getting this done in Firefox!

The post Safely reviving shared memory appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogA look at password security, Part III: More secure login protocols

In part II, we looked at the problem of Web authentication and covered the twin problems of phishing and password database compromise. In this post, I’ll be covering some of the technologies that have been developed to address these issues.

This is mostly a story of failure, though with a sort of hopeful note at the end. The ironic thing here is that we’ve known for decades how to build authentication technologies which are much more secure than the kind of passwords we use on the Web. In fact, we use one of these technologies — public key authentication via digital certificates — to authenticate the server side of every HTTPS transaction before you send your password over. HTTPS supports certificate-based client authentication as well, and while it’s commonly used in other settings, such as SSH, it’s rarely used on the Web. Even if we restrict ourselves to passwords, we have long had technologies for password authentication which completely resist phishing, but they are not integrated into the Web technology stack at all. The problem, unfortunately, is less about cryptography than about deployability, as we’ll see below.

Two Factor Authentication and One-Time Passwords

The most widely deployed technology for improving password security goes by the name one-time passwords (OTP) or (more recently) two-factor authentication (2FA). OTP actually goes back to well before the widespread use of encrypted communications or even the Web to the days when people would log in to servers in the clear using Telnet. It was of course well known that Telnet was insecure and that anyone who shared the network with you could just sniff your password off the wire1 and then login with it [Technical note: this is called a replay attack.] One partial fix for this attack was to supplement the user password with another secret which wasn’t static but rather changed every time you logged in (hence a “one-time” password).

OTP systems came in a variety of forms but the most common was a token about the size of a car key fob but with an LCD display, like this:

The token would produce a new pseudorandom numeric code every 30 seconds or so and when you went to log in to the server you would provide both your password and the current code. That way, even if the attacker got the code they still couldn’t log in as you for more than a brief period2 unless they also stole your token. If all of this looks familiar, it’s because this is more or less the same as modern OTP systems such as Google Authenticator, except that instead of a hardware token, these systems tend to use an app on your phone and have you log into some Web form rather than over Telnet. The reason this is called “two-factor authentication” is that authenticating requires both a value you know (the password) and something you have (the device). Some other systems use a code that is sent over SMS but the basic idea is the same.

OTP systems don’t provide perfect security, but they do significantly improve the security of a password-only system in two respects:

  1. They guarantee a strong, non-reused secret. Even if you reuse passwords and your password on site A is compromised, the attacker still won’t have the right code for site B.3
  2. They mitigate the effect of phishing. If you are successfully phished the attacker will get the current code for the site and can log in as you, but they won’t be able to log in in the future because knowing the current code doesn’t let you predict a future code. This isn’t great but it’s better than nothing.

The nice thing about a 2FA system is that it’s comparatively easy to deploy: it’s a phone app you download plus another code that the site prompts you for. As a result, phone-based 2FA systems are very popular (and if that’s all you have, I advise you to use it, but really you want WebAuthn, which I’ll be describing in my next post).

Password Authenticated Key Agreement

One of the nice properties of 2FA systems is that they do not require modifying the client at all, which is obviously convenient for deployment. That way you don’t care if users are running Firefox or Safari or Chrome, you just tell them to get the second factor app and you’re good to go. However, if you can modify the client you can protect your password rather than just limiting the impact of having it stolen. The technology to do this is called a Password Authenticated Key Agreement (PAKE) protocol.

The way a PAKE would work on the Web is that it would be integrated into the TLS connection that already secures your data on its way to the Web server. On the client side when you enter your password the browser feeds it into TLS and on the other side, the server feeds in a verifier (effectively a password hash). If the password matches the verifier, then the connection succeeds, otherwise it fails. PAKEs aren’t easy to design — the tricky part is ensuring that the attacker has to reconnect to the server for each guess at the password — but it’s a reasonably well understood problem at this point and there are several PAKEs which can be integrated with TLS.

What a PAKE gets you is security against phishing: even if you connect to the wrong server, it doesn’t learn anything about your password that it doesn’t already know because you just get a cryptographic failure. PAKEs don’t help against password file compromise because the server still has to store the verifier, so the attacker can perform a password cracking attack on the verifier just as they would on the password hash. But phishing is a big deal, so why doesn’t everyone use PAKEs? The answer here seems to be surprisingly mundane but also critically important: user interface.
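The verifier-cracking point is easy to demonstrate. In this sketch the verifier is modeled as a plain salted hash (a real PAKE verifier is a group element such as g^H(salt, password), but the offline-guessing property is the same): anyone who steals the stored verifier can test candidate passwords at full speed, with no server round-trips.

```python
import hashlib

def make_verifier(password, salt):
    # Stand-in for a PAKE verifier. A real one is e.g. g^H(salt || password) mod p,
    # but either way the server-stored value lets an attacker check guesses offline.
    return hashlib.sha256(salt + password.encode()).hexdigest()

def offline_crack(stolen_verifier, salt, wordlist):
    """Dictionary attack against a leaked verifier: no interaction with the server."""
    for guess in wordlist:
        if make_verifier(guess, salt) == stolen_verifier:
            return guess
    return None

salt = b"per-user-salt"
verifier = make_verifier("hunter2", salt)   # what the server stores
print(offline_crack(verifier, salt, ["password", "hunter2", "qwerty"]))  # → hunter2
```

This is precisely why a PAKE helps against phishing but not against password file compromise.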

The way that most Web sites authenticate is by showing you a Web page with a field where you can enter your password, as shown below:

Firefox accounts login box

When you click the “Sign In” button, your password gets sent to the server which checks it against the hash as described in part I. The browser doesn’t have to do anything special here (though often the password field will be specially labelled so that the browser can automatically mask out your password when you type); it just sends the contents of the field to the server.

In order to use a PAKE, you would need to replace this with a mechanism where you gave the browser your password directly. Browsers actually have something for this, dating back to the earliest days of the Web. On Firefox it looks like this:

Basic auth login

Hideous, right? And I haven’t even mentioned the part where it’s a modal dialog that takes over your experience. In principle, of course, this might be fixable, but it would take a lot of work and would still leave the site with a lot less control over their login experience than they have now; understandably they’re not that excited about that. Additionally, while a PAKE is secure from phishing if you use it, it’s not secure if you don’t, and nothing stops the phishing site from skipping the PAKE step and just giving you an ordinary login page, hoping you’ll type in your password as usual.

None of this is to say that PAKEs aren’t cool tech, and they make a lot of sense in systems that have less flexible authentication experiences; for instance, your email client probably already requires you to enter your authentication credentials into a dialog box, and so that could use a PAKE. They’re also useful for things like device pairing or account access where you want to start with a small secret and bootstrap into a secure connection. Apple is known to use SRP, a particular PAKE, for exactly this reason. But because the Web already offers a flexible experience, it’s hard to ask sites to take a step backwards and PAKEs have never really taken off for the Web.

Public Key Authentication

From a security perspective, the strongest option would be to have the user authenticate with a public/private key pair, just like the Web server does. As I said above, this is a feature of TLS that browsers actually have supported (sort of) for a really long time, but the user experience is even more appalling than for built-in passwords.4 In principle, some of these technical issues could have been fixed, but even if the interface had been better, many sites would probably still have wanted to control the experience themselves. In any case, public key authentication saw very little usage.

It’s worth mentioning that public key authentication actually is reasonably common in dedicated applications, especially in software development settings. For instance, the popular SSH remote login tool (replacing the unencrypted Telnet) is commonly used with public key authentication. In the consumer setting, Apple AirDrop uses iCloud-issued certificates with TLS to authenticate your contacts.

Up Next: FIDO/WebAuthn

This was the situation for about 20 years: in theory public key authentication was great, but in practice it was nearly unusable on the Web. Everyone used passwords, some with 2FA and some without, and nobody was really happy. There had been a few attempts to try to fix things but nothing really stuck. However, in the past few years a new technology called WebAuthn has been developed. At heart, WebAuthn is just public key authentication but it’s integrated into the Web in a novel way which seems to be a lot more deployable than what has come before. I’ll be covering WebAuthn in the next post.

  1. And by “wire” I mean a literal wire, though such sniffing attacks are prevalent in wireless networks such as those protected by WPA2 
  2. Note that to really make this work well, you also need to require a new code in order to change your password, otherwise the attacker can change your password for you in that window. 
  3. Interestingly, OTP systems are still subject to server-side compromise attacks. The way that most of the common systems work is to have a per-user secret which is then used to generate a series of codes, e.g., truncated HMAC(Secret, time) (see RFC6238). If an attacker compromises the secret, then they can generate the codes themselves. One might ask whether it’s possible to design a system which didn’t store a secret on the server but rather some public verifier (e.g., a public key) but this does not appear to be secure if you also want to have short (e.g., six digits) codes. The reason is that if the information that is used to verify is public, the attacker can just iterate through every possible 6 digit code and try to verify it themselves. This is easily possible during the 30 second or so lifetime of the codes. Thanks to Dan Boneh for this insight. 
  4. The details are kind of complicated here, but just some of the problems (1) TLS client authentication is mostly tied to certificates and the process of getting a certificate into the browser was just terrible (2) The certificate selection interface is clunky (3) Until TLS 1.3, the certificate was actually sent in the clear unless you did TLS renegotiation, which had its own problems, particularly around privacy.

Update: 2020-07-21: Fixed up a sentence.

The post A look at password security, Part III: More secure login protocols appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogWhat’s New in Thunderbird 78

Thunderbird 78 is our newest ESR (extended-support release), which comes out yearly and is considered the latest stable release. Right now you can download the newest version from our website, and existing users will be automatically updated in the near future. We encourage those who rely on the popular add-on Enigmail to wait to update until the automatic update rolls out to them to ensure their encrypted email settings are properly imported into Thunderbird’s new built-in OpenPGP encrypted email feature.

Last year’s release focused on ensuring Thunderbird has a stable foundation on which to build. The new Thunderbird 78 aims to improve the experience of using Thunderbird, adding many quality-of-life features to the application and making it easier to use.

Compose Window Redesign

Compose Window Comparison, 68 and 78

The compose window has been reworked to help users find features more easily and to make composing a message faster and more straightforward. The compose window now also takes up less space with recipients listed in “pills” instead of an entire line for every address.

Dark Mode

Dark Mode

Thunderbird’s new Dark Mode is easier on the eyes for those working in the dark, and it has the added benefit of looking really cool! The Dark Mode even works when writing and reading emails – so you are not suddenly blinded while you work. Thunderbird will look at your operating system settings to see if you have enabled dark mode OS-wide and respect those settings. Here are the instructions for setting dark mode in Mac, and setting dark mode in Windows.

Calendar and Tasks Integrated

Thunderbird’s Lightning calendar and tasks add-on is now a part of the application itself, which means everyone now has access to these features the moment they install Thunderbird. This change also sets the stage for a number of future improvements the Thunderbird team will make in the calendar. Much of this will be focused on improved interoperability with the mail part of Thunderbird, as well as improving the user experience of the calendar.

Account Setup & Account Central Updated

Account Setup and Account Central Updated, comparison between 68 and 78

The Account Setup window and the Account Central tab, which appears when you do not have an account set up or when you select an existing account in the folder tree, have both been updated. The layout and dialogues have been improved in order to make it easier to understand the information displayed and to find relevant settings. The Account Central tab also has new information about the Thunderbird project and displays the version you are using.

Folder Icons and Colors Update

New Folder Icons and Colors for Thunderbird 78

Folder icons have been replaced and modernized with a new vector style. This will ensure better compatibility with HiDPI monitors and dark mode. Vector icons also mean you will be able to customize their default colors to better distinguish and categorize your folders list.

Minimize to Tray

Windows users have reason to rejoice, as Thunderbird 78 can now be minimized to tray. This has been a repeatedly requested feature that has been available through many popular add-ons, but it is now part of Thunderbird core – no add-on needed! This feature has been a long time coming and we hope to bring more operating-system specific features for each platform to Thunderbird in the coming releases.

End-to-End Encrypted Email Support

New end-to-end encryption preferences tab.

Thunderbird 78.2, due out in the coming months, will offer a new feature that allows you to end-to-end encrypt your email messages via OpenPGP. In the past this feature was achieved in Thunderbird primarily with the Enigmail add-on, however, in this release we have brought this functionality into core Thunderbird. We’d like to offer a special thanks to Patrick Brunschwig for his years of work on Enigmail, which laid the groundwork for this integrated feature, and for his assistance throughout its development. The new feature is also enabled by the RNP library, and we’d like to thank the project’s developers for their close collaboration and hard work addressing our needs.

End-to-end encryption for email can be used to ensure that only the sender and the recipients of a message can read the contents. Without this protection it is easy for network administrators, email providers and government agencies to read your messages. If you would like to learn more about how end-to-end encryption in Thunderbird works, check out our article on Introduction to End-to-end encryption in Thunderbird. If you would like to learn more about the development of this feature or participate in testing, check out the OpenPGP Thunderbird wiki page.

About Add-ons

As with previous major releases, it may take time for authors of legacy extensions to update their add-ons to support the new release. So if you are using add-ons we recommend you not update manually to 78.0, and instead wait for Thunderbird to automatically update to 78. We encourage users to reach out to their add-on’s author to let them know that you are interested in using it in 78.

Learn More

If we listed all the improvements in Thunderbird 78 in this blog post, you’d be stuck reading this for the whole day. So we will save you from that, and let you know that if you want to see a longer list of changes for the new release – check the release notes on our website.

Great Release, Bright Future

The past year has been an amazing year for Thunderbird. We had an incredible release in version 68 that was popular with our users, and laid the groundwork for much of what we did in 78. On top of great improvements in the product, we moved into a new financial and legal home, and we grew our team to thirteen people (soon to be even more)!

We’re so grateful to all our users and contributors who have stuck with us all these years, and we hope to earn your dedication for the years to come. Thunderbird 78 is the beginning of a new era for the project, as we attempt to bring our users the features that they want and need to be productive in the 2020s – while also maintaining what has made Thunderbird so great all these years.

Thank you to our wonderful community, please enjoy Thunderbird 78.

Download the newest release from our website.

Blog of DataMozilla Telemetry in 2020: From “Just Firefox” to a “Galaxy of Data”

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member William Lachance!

In the last year or so, there’s been a significant shift in the way we (Data Engineering) think about application-submitted data @ Mozilla, but although we have a new application-based SDK based on these principles (the Glean SDK), most of our data tools and documentation have not yet been updated to reflect this new state of affairs.

Much of this story is known inside Mozilla Data Engineering, but I thought it might be worth jotting it down in a blog post as a point of reference for people outside the immediate team. Knowing this may provide some context for some of our activities and efforts over the next year or two, at least until our tools, documentation, and tribal knowledge evolve.

In sum, the key differences are:

  • Instead of just one application we care about, there are many.
  • Instead of just caring about (mostly1) one type of ping (the Firefox main ping), an individual application may submit many different types of pings in the course of their use.
  • Instead of having both probes (histogram, scalar, or other data type) and bespoke parametric values in a JSON schema like the telemetry environment, there are now only metric types which are explicitly defined as part of each ping.

The new world is pretty exciting and freeing, but there is some new domain complexity that we need to figure out how to navigate. I’ll discuss that in my last section.

The Old World: Firefox is king

Up until roughly mid-2019, Firefox was the centre of Mozilla’s data world (with the occasional nod to Firefox for Android, which uses the same source repository). The Data Platform (often called “Telemetry”) was explicitly designed to cater to the needs of Firefox developers (and to a lesser extent, product/program managers) and a set of bespoke tooling was built on top of our data pipeline architecture – this blog post from 2017 describes much of it.

In outline, the model is simple: on the client side, assuming a given user had not turned off Telemetry, during the course of a day’s operation Firefox would keep track of various measures, called “probes”. At the end of that duration, it would submit a JSON-encoded “main ping” to our servers with the probe information and a bunch of other mostly hand-specified junk, which would then find its way to a “data lake” (read: an Amazon S3 bucket). On top of this, we provided a python API (built on top of PySpark) which enabled people inside Mozilla to query all submitted pings across our usage population.

The only type of low-level object that was hard to keep track of was the list of probes: Firefox is a complex piece of software and there are many aspects of it we wanted to instrument to validate performance and quality of the product – especially on the more-experimental Nightly and Beta channels. To solve this problem, a probe dictionary was created to help developers find measures that corresponded to the product area that they were interested in.

On a higher level, accessing this type of data using the python API quickly became slow and frustrating: the aggregation of years of Firefox ping data was hundreds of terabytes big, and even taking advantage of PySpark’s impressive capabilities, querying the data across any reasonably large timescale was slow and expensive. Here, the solution was to create derived datasets which enabled fast(er) access to pings and other derived measures, document them, and then allow access to them through tools like the Measurement Dashboard.

The New World: More of everything

Even in the old world, other products that submitted telemetry existed (e.g. Firefox for Android, Firefox for iOS, the venerable FirefoxOS) but I would not call them first-class citizens. Most of our documentation treated them as (at best) weird edge cases. At the time of this writing, you can see this distinction clearly in our documentation, where there is one (fairly detailed) tutorial called “Choosing a Desktop Dataset” while essentially all other products are lumped into “Choosing a Mobile Dataset”.

While the new universe of mobile products are probably the most notable addition to our list of things we want to keep track of, they’re only one piece of the puzzle. Really we’re interested in measuring all the things (in accordance with our lean data practices, of course) including tools we use to build our products like mozphab and mozregression.

In expanding our scope, we’ve found that mobile (and other products) have different requirements that influence what data we would want to send and when. For example, sending one blob of JSON multiple times per day might make sense for performance metrics on a desktop product (which is usually on a fast, unmetered network) but is much less acceptable on mobile (where every byte counts). For this reason, it makes sense to have different ping types for the same product, not just one. For example, Fenix (the new Firefox for Android) sends a tiny baseline ping2 on every run to (roughly) measure daily active users and a larger metrics ping sent on a (roughly) daily interval to measure (for example) a distribution of page load times.

Finally, we found that naively collecting certain types of data as raw histograms or inside the schema didn’t always work well. For example, encoding session lengths as plain integers would often produce weird results in the case of clock skew. For this reason, we decided to standardize on a set of well-defined metrics using Glean, which tries to minimize footguns. We explicitly no longer allow clients to submit arbitrary JSON or values as part of a telemetry ping: if you have a use case not covered by the existing metrics, make a case for it and add it to the list!
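To make the “minimize footguns” idea concrete, here is a toy sketch of a Glean-style counter metric (a hypothetical class, not the actual Glean SDK API): instead of accepting arbitrary values, it only allows positive increments, so a clock-skew-style bug can’t silently corrupt the stored value.

```python
class CounterMetric:
    """Toy sketch of a well-defined metric type: a counter that only goes up."""

    def __init__(self):
        self.value = 0
        self.errors = 0

    def add(self, amount=1):
        if amount <= 0:
            # A principled metric type records the misuse instead of storing junk.
            self.errors += 1
            return
        self.value += amount

c = CounterMetric()
c.add(5)
c.add(-3)   # rejected: counted as an error, value untouched
print(c.value, c.errors)  # → 5 1
```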

To illustrate this, let’s take a (subset) of what we might be looking at in terms of what the Fenix application sends:

mermaid source

At the top level we segment based on the “application” (just Fenix in this example). Just below that, there are the pings that this application might submit (I listed three: the baseline and metrics pings described above, along with a “migration” ping, which tracks metrics when a user migrates from Fennec to Fenix). And below that there are different types of metrics included in the pings: I listed a few that came out of a quick scan of the Fenix BigQuery tables using my prototype schema dictionary.

This is actually only the surface-level: at the time of this writing, Fenix has no fewer than 12 different ping types and many different metrics inside each of them.3 On a client level, the new Glean SDK provides easy-to-use primitives to help developers collect this type of information in a principled, privacy-preserving way: for example, data review is built into every metric type. But what about after it hits our ingestion endpoints?

Hand-crafting schemas, data ingestion pipelines, and individualized ETL scripts for such a large matrix of applications, ping types, and measurements would quickly become intractable. Instead, we (Mozilla Data Engineering) refactored our data pipeline to parse out the information from the Glean schemas and then create tables in our BigQuery datastore corresponding to what’s in them – this has proceeded as an extension to our (now somewhat misnamed) probe-scraper tool.
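The schema-to-table step can be pictured as a small transformation: take the metric definitions declared for a ping and emit BigQuery column types. A heavily simplified sketch (the type mapping here is an assumption for illustration, not probe-scraper’s actual logic):

```python
# Illustrative mapping from Glean-style metric types to BigQuery column types.
TYPE_MAP = {
    "counter": "INT64",
    "boolean": "BOOL",
    "string": "STRING",
    "timespan": "INT64",
}

def columns_for_ping(metric_definitions):
    """Turn {metric_name: metric_type} into (column, bigquery_type) pairs."""
    return [
        (name, TYPE_MAP.get(metric_type, "STRING"))
        for name, metric_type in metric_definitions.items()
    ]

print(columns_for_ping({"events_total": "counter", "os_version": "string"}))
# → [('events_total', 'INT64'), ('os_version', 'STRING')]
```

The point is that once metrics are declared in a machine-readable schema, table creation can be generated rather than hand-crafted per application.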

You can then query this data directly (see accessing glean data) or build up a derived dataset using our SQL-based ETL system, BigQuery-ETL. This part of the equation has been working fairly well, I’d say: we now have a diverse set of products producing Glean telemetry and submitting it to our servers, and the amount of manual effort required to add each application was minimal (aside from adding new capabilities to the platform as we went along).

What hasn’t quite kept pace is our tooling to make navigating and using this new collection of data tractable.

What could bring this all together?

As mentioned before, this new world is quite powerful and gives Mozilla a bunch of new capabilities but it isn’t yet well documented and we lack the tools to easily connect the dots from “I have a product question” to “I know how to write an SQL query / Spark Job to answer it” or (better yet) “this product dashboard will answer it”.

Up until now, our de facto answer has been some combination of “use the probe dictionary” and/or “refer to the documentation”. I submit that we’re at the point where these approaches break down: as mentioned above, there are many more types of data we now need to care about than just “probes” (or “metrics”, in Glean parlance). When we just cared about the main ping, we could write dataset documentation for its recommended access point (main_summary) and the raw number of derived datasets was manageable. But in this new world, where we have N applications times M ping types, there are now so many canonical ping tables that documenting them all by hand no longer makes sense.

A few months ago, I thought that Google’s Data Catalog (billed as offering “a unified view of all your datasets”) might provide a solution, but on further examination it only solves part of the problem: it provides only a view on your BigQuery tables and it isn’t designed to provide detailed information on the domain objects we care about (products, pings, measures, and tools). You can map some of the properties from these objects onto the tables (e.g. adding a probe’s description field to the column representing it in the BigQuery table), but Data Catalog’s interface for surfacing and filtering through this information is rather slow and clumsy and requires detailed knowledge of how these higher-level concepts relate to BigQuery primitives.

Instead, what I think we need is a new system which allows a data practitioner (Data Scientist, Firefox Engineer, Data Engineer, Product Manager, whoever) to quickly visualize the set of domain objects relevant to their product/feature of interest, then map them to specific BigQuery tables and other resources (e.g. visualizations using tools like GLAM) which allow people to quickly answer questions so we can make better products. Basically, I am thinking of some combination of:

  • The existing probe dictionary (derived from existing product metadata)
  • A new “application” dictionary (derived from some simple to-be-defined application metadata description)
  • A new “ping” dictionary (derived from existing product metadata)
  • A BigQuery schema dictionary (I wrote up a prototype of this a couple weeks ago) to map between these higher-level objects and what’s in our low-level data store
  • Documentation for derived datasets generated by BigQuery-ETL (ideally stored alongside the ETL code itself, so it’s easy to keep up to date)
  • A data tool dictionary describing how to easily access the above data in various ways (e.g. SQL query, dashboard plot, etc.)
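Sketched as a data model, the pieces above might hang together like this (the names and table identifiers are hypothetical, just to show the application → ping → metric hierarchy and its link down to the low-level data store):

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    metric_type: str
    description: str = ""

@dataclass
class Ping:
    name: str
    bigquery_table: str                 # link to the low-level data store
    metrics: list = field(default_factory=list)

@dataclass
class Application:
    name: str
    pings: list = field(default_factory=list)

fenix = Application("fenix", pings=[
    Ping("baseline", "org_mozilla_fenix.baseline",
         metrics=[Metric("duration", "timespan", "How long the app ran")]),
])
print(fenix.pings[0].bigquery_table)  # → org_mozilla_fenix.baseline
```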

This might sound ambitious, but it’s basically just a system for collecting and visualizing various types of documentation, something we have proven we know how to do. And I think a product like this could be incredibly empowering, not only for the internal audience at Mozilla but also for the external audience who wants to support us but has valid concerns about what we’re collecting and why: since this system is based entirely on systems which are already open (inside GitHub or Mercurial repositories), there is no reason we can’t make it available to the public.

  1. Technically, there are various other types of pings submitted by Firefox, but the main ping is the one 99% of people care about. 
  2. This is actually a capability that the Glean SDK provides, so other products (e.g. Lockwise, Firefox for iOS) also benefit from this capability. 
  3. The scope of this data collection comes from the fact that Fenix is a very large and complex application, rather than a desire to collect everything just because we can; smaller efforts like mozregression collect a much more limited set of data

SUMO BlogIntroducing Mozilla VPN

Hi everyone,

You might remember that we first introduced the Firefox Private Network (FPN) back in December 2019. At that time, we had two types of offerings, available only in the U.S.: FPN Browser Level Protection (by using an extension) and FPN Device Protection (which is available for Windows 10, iOS, and Android).

Today marks another milestone for FPN: we’re changing the name of the full-device VPN from FPN to simply the Mozilla VPN. For now, this change will only include the Windows 10 version as well as the Android version. Currently, the iOS version is still called FPN on the App Store, although our team is working hard to change it to Mozilla VPN as well. Meanwhile, FPN Browser Level Protection will remain the same until we make further decisions.

On top of that, we will start offering Mozilla VPN in more countries outside of the US. The new countries will be Canada, the UK, New Zealand, Singapore, and Malaysia.

What does this mean for the community?

We’ve changed the product name in Kitsune (although the URL is still the same). Since most of the new countries are English-speaking countries, we will not require the support articles to be translated for this release.

And as usual, support requests will be handled through Zendesk and the forum will continue to be managed by our designated staff members, Brady and Eve. However, we also welcome everyone who wants to help.

We are enthusiastic about this new opportunity and hope that you’ll support us along the way. If you have any questions or concerns, please let me/Giulia know.

Mozilla VR BlogRecording inside of Hubs

Recording inside of Hubs

(This Post is for recording inside of Hubs via a laptop or desktop computer. Recording Hubs experiences from inside Virtual Reality is for another post.)

Minimum Requirements


  • A computer
  • Internet connection
  • Storage space to save recordings
  • Screen capture software
  • Browser that supports Hubs

Hubs by Mozilla: the future of remote collaboration.

Accessible from a web browser and on a range of devices, Hubs allows users to meet in a virtual space and share ideas, images and files. The global pandemic is keeping us distant socially but Hubs is helping us to bridge that gap!

We’re often asked how to record your time inside of Hubs, either for a personal record or to share with others. Here I will share what has worked for me to capture usable footage from inside of a Hubs environment.

Firstly, like traditional videography, you’re going to need the appropriate hardware and software.


I have a need to capture footage at the highest manageable resolution and frames per second, so I use a high-powered desktop PC with a good graphics card and 32GB of RAM. When I’m wearing my editor’s hat, I like to have the freedom to make precise cuts and the ability to zoom in on a particular part of the frame and still maintain image quality. However, my needs for quality are probably higher than those of most people looking to capture film inside of Hubs, and an average laptop is generally going to be perfectly fine to get decent shot quality.

A strong internet connection is going to be essential to ensure the avatar animations and videos in-world function smoothly on your screen. Also, a good amount of bandwidth is required if you plan on live streaming video content out of your Hubs space to platforms such as Zoom or Twitch.

This may be especially relevant during the pandemic lockdown, with increased usage/burden on home internet.

Next up is storage to record your video. I use a 5TB external hard drive as my main storage device to ensure I never run out of space. A ten-minute video at 1280x720 and 30fps is roughly 1GB of data, so it can add up pretty quickly!
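The arithmetic behind that estimate is just bitrate times duration. A quick sketch (the 13 Mbps bitrate is an assumption that roughly matches 1GB per ten minutes at 720p30):

```python
def recording_size_gb(bitrate_mbps, minutes):
    """Approximate file size: bits per second × seconds, converted to gigabytes."""
    bits = bitrate_mbps * 1_000_000 * minutes * 60
    return bits / 8 / 1_000_000_000

print(recording_size_gb(13, 10))  # → 0.975
```

Stepping up to 1920x1080 at 60fps means several times as many pixels per second, and correspondingly higher bitrates, which is why a large external drive fills up faster than you might expect.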

One last piece of hardware I use but is not essential is a good gaming mouse. This offers better tracking and response time, allowing for more accurate cursor control and ultimately smoother camera movement inside of Hubs.

Another benefit I gain is customizability. Adjusting tracking sensitivity and adding macro commands to the additional buttons has greatly improved my experience recording inside of Hubs.

Now that we have the hardware, let’s talk about software.


OBS (Open Broadcast Software) is open source and also free! This application allows video recording and live streaming and is a popular choice for capturing and sharing your streams. This is a great piece of software and allows you full control over both your incoming and outgoing video streams.

My need for the highest available capture quality has led me to use Nvidia’s GeForce Experience software. This is an application that complements my GeForce GTX graphics card and gives me the ability to optimize my settings.

So now that we’re up to speed with the hardware and software, it’s time to set up for recording.

As I mentioned earlier, I set up my software to get the best results possible from my hardware. The settings you choose will be dependent on your hardware and may take some experimentation to perfect. I tend to run my settings at 1920x1080 and 60fps. It’s good practice to run with commonly used resolution scales and frames per second to make editing, exporting and sharing as painless as possible. 1280x720 @ 30fps is a common and respectable setting.

These frame sizes have a 16:9 aspect ratio which is a widely used scale.

Audio is pretty straightforward: 44.1kHz is a good enough sample rate to get a usable recording. The main things to note are the spatial audio properties from avatars speaking and objects with audio attached inside of Hubs. Finding a position that allows for clean and balanced sound is important. It can also be handy to turn off sound effects from the preferences menu. That way if it’s a chat-heavy environment, the bubble sounds don’t interrupt the speaker in the recording. Another option to isolate the speaker’s audio is to have the camera avatar mute everyone else manually before recording.

Before I hit record there are a few other things I like to set up. One is maximizing my window in the browser settings (not surprisingly, I use Firefox) and another is choosing which user interface graphics are showing. Personally, I prefer to disable all my U.I. so all that is showing is the scene inside of Hubs. I do this by using the tilde (~) hotkey or hitting camera mode in the options menu and then selecting hide all in the bottom right corner of the screen. The second option here is only available to people who have been promoted to room moderator, so be sure to check that before you begin!

Additionally, there is an option under the misc tab to turn avatars’ name tags on or off, which can be helpful depending on your needs. It’s a good rule of thumb to get the permission of (or at least notify) those who will be in the space that you will be recording, so they can adjust their own settings or name tags accordingly. If it’s not practicable to get individuals’ permission, you may want to consider turning name tags off, just in case.

Once you get to this point it pretty closely resembles the role of a traditional camera operator. You’ll need to consider close-ups, wide shots, scenes and avatars, while maintaining a balanced audio feed.

Depending on the scene creator’s settings, you may have the option to fly in Hubs. This can open up some options for more creative or cinematic camera work. Another possibility is to have multiple computers recording different angles, enabling the editor to switch between perspectives.

And that's a basic introduction on how to record inside of Hubs from a computer! Stay tuned for how to set up recording your Hubs experience from inside of virtual reality.

Stay safe, stay healthy and keep on rockin’ the free web!

The Mozilla BlogMozilla Puts Its Trusted Stamp on VPN

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows, Android and iOS devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

See for yourself how the Mozilla VPN works:


The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is built on modern, lean technology: the WireGuard protocol’s roughly 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see an easy-to-use and simple interface, whether you are new to VPNs or just want to set it and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

In a market crowded by companies making promises about privacy and security, it can be hard to know who to trust. Mozilla has a reputation for building products that help you keep your information safe. We follow our easy to read, no-nonsense Data Privacy Principles which allow us to focus only on the information we need to provide a service. We don’t keep user data logs.

We don’t partner with third-party analytics platforms who want to build a profile of what you do online. And since the makers of this VPN are backed by a mission-driven company you can trust that the dollars you spend for this product will not only ensure you have a top-notch VPN, but also are making the internet better for everyone.

Simple and easy-to-use switch

Last year, we beta tested our VPN service which provided encryption and device-level protection of your connection and information on the Web. Many users shared their thoughts on why they needed this service.

Some of the top reasons users cited for using a VPN:

  • Security for all your devices – Users are flocking to VPNs for added protection online. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.
  • Added protection for your private information – Over 50 percent of VPN users in the US and UK said that seeking protection when using public wi-fi was a top reason for choosing a VPN service.
  • Browse more anonymously – Users care immensely about being anonymous when they choose to. A VPN is a key component as it encrypts all your traffic and protects your IP address and location.
  • Communicate more securely – Using a VPN can give an added layer of protection, ensuring every conversation you have is encrypted over the network.

In a world where unpredictability has become the “new normal,” we know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business.

Check out the Mozilla VPN and download it from our website, the Google Play store, or the Apple App store.

*Updated July 27, 2020 to reflect the availability of Mozilla VPN on iOS devices

The post Mozilla Puts Its Trusted Stamp on VPN appeared first on The Mozilla Blog.

The Mozilla BlogA look at password security, Part II: Web Sites

In part I, we took a look at the design of password authentication systems for old-school multiuser systems. While timesharing is mostly gone, most of us continue to use multiuser systems; we just call them Web sites. In this post, I’ll be covering some of the problems of Web authentication using passwords.

As I discussed previously, the strength of passwords depends to a great extent on how fast the attacker can try candidate passwords. The nature of a Web application inherently limits the velocity at which you can try passwords quite a bit. Even ignoring limits on the rate which you can transmit stuff over the network, real systems — at least well managed ones — have all kinds of monitoring software which is designed to detect large numbers of login attempts, so just trying millions of candidate passwords is not very effective. This doesn’t mean that remote attacks aren’t possible: you can of course try to log in with some of the obvious passwords and hope you get lucky, and if you have a good idea of a candidate password, you can try that (see below), but this kind of attack is inherently somewhat limited.
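To see why attempt velocity matters so much, here is a rough back-of-the-envelope comparison. The guess rates are illustrative assumptions, not measurements: a monitored web login might tolerate on the order of ten guesses per second before tripping alarms, while an offline cracker running against a fast hash can try billions per second.

```python
def years_to_exhaust(keyspace, guesses_per_second):
    """Rough time to try every candidate password at a given guess rate."""
    seconds = keyspace / guesses_per_second
    return seconds / (3600 * 24 * 365)

# An 8-character all-lowercase password has 26**8 (about 2.1e11) candidates.
keyspace = 26 ** 8

# Illustrative rates only: ~10 guesses/second online vs. 1e10/second offline.
online_years = years_to_exhaust(keyspace, 10)
offline_seconds = years_to_exhaust(keyspace, 10 ** 10) * 365 * 24 * 3600

print(f"online: ~{online_years:.0f} years, offline: ~{offline_seconds:.0f} seconds")
```

The six-hundred-odd-year figure versus about twenty seconds is why attackers overwhelmingly prefer to steal the password database and crack it offline.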

Remote compromise and password cracking

Of course, this kind of limitation in the number of login attempts you could make also applied to the old multiuser systems and the way you attack Web sites is the same: get a copy of the password file and remotely crack it.

The way this plays out is that somehow the attacker exploits a vulnerability in the server’s system to compromise the password database.1 They can then crack it offline and try to recover people’s passwords. Once they’ve done that, they can then use those passwords to log into the site themselves. If a site’s password database is stolen, its strongest defense is to reset everyone’s password, but this is obviously really inconvenient, harms the site’s brand, and runs the risk of user attrition, so it doesn’t always happen.

To make matters worse, many users use the same password on multiple sites, so once you have broken someone’s password on one site, you can then try to login as them on other sites with the same password, even if the user’s password was reset on the site which was originally compromised. Even though this is an online attack, it’s still very effective, because password reuse is so common (this is one reason why it’s a bad idea to reuse passwords).

Password database disclosure is unfortunately quite a common occurrence, so much so that there are services such as Firefox Monitor and Have I been pwned? devoted to letting users know when some service they have an account on has been compromised.

Assuming a site is already following best practices (long passwords, slow password hashing algorithms, salting, etc.) then the next step is to either make it harder to steal the password hash or to make the password hash less useful. A good example here is the Facebook system described in this talk by Alec Muffett (famous for, among other things, the Crack password cracker). The system uses multiple layers of hashing, one of which is a keyed hash [technically, HMAC-SHA256] performed on a separate, hardened, machine. Even if you compromise the password hash database, it’s not useful without the key, which means you would also have to compromise that machine as well.2
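A minimal sketch of the keyed-hash idea (a simplification, not Facebook’s actual pipeline): a slow, salted hash computed where passwords are verified, wrapped in an HMAC whose secret key would live on a separate hardened machine. PBKDF2 is used here only because it is in the Python standard library.

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    # Slow, salted hash (PBKDF2 for a stdlib example; scrypt or argon2
    # are common choices in practice).
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)

def keyed_wrap(inner_hash: bytes, secret_key: bytes) -> bytes:
    # In a hardened deployment, this HMAC runs on a separate machine holding
    # secret_key, so a stolen password database alone cannot be cracked.
    return hmac.new(secret_key, inner_hash, hashlib.sha256).digest()

salt, key = os.urandom(16), os.urandom(32)
stored = keyed_wrap(hash_password(b"hunter2", salt), key)

def verify(candidate: bytes) -> bool:
    return hmac.compare_digest(stored, keyed_wrap(hash_password(candidate, salt), key))

print(verify(b"hunter2"), verify(b"wrong"))  # True False
```

An attacker who steals only `stored` and `salt` still needs `key` before any offline cracking can begin.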

Another defense is to use one-time password systems (often also called two-factor authentication systems). I’ll cover those in a future post.

Phishing

Leaked passwords aren’t the only threat to password authentication on Web sites. The other big issue is what’s called phishing. In the basic phishing attack, the attacker sends you an e-mail inviting you to log into your account. Often this will be phrased in some scary way, like telling you your account will be deleted if you don’t log in immediately. The e-mail will helpfully contain a link to use to log in, but of course this link will go not to the real site but to the attacker’s site, which will usually look just like the real site and may even have a similar-looking domain name. When the user clicks on the link and logs in, the attacker captures their username and password and can then log into the real site. Note that having users use good passwords totally doesn’t help here, because the user gives the site their whole password.

Preventing phishing has proven to be a really stubborn challenge because, well, people are not as suspicious as they should be and it’s actually fairly hard on casual examination to determine whether you are on the right site. Most modern browsers try to warn users if they are going to known phishing sites (Firefox uses the Google Safe Browsing service for this). In addition, if you use a password manager, then it shouldn’t automatically fill in your password on a phishing site because password managers key off of the domain name and just looking similar isn’t good enough. Of course, both of these defenses are imperfect: the lists of phishing sites can be incomplete and if users don’t use password managers or are willing to manually cut and paste their passwords, then phishing attacks are still possible.3
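The domain check a password manager performs can be sketched as follows. This is a simplification (real managers also handle subdomains and registrable-domain rules), and the look-alike domain is a made-up example:

```python
from urllib.parse import urlparse

def should_autofill(saved_url: str, current_url: str) -> bool:
    # Keyed off the exact host name: "looks similar" is not good enough.
    return urlparse(saved_url).hostname == urlparse(current_url).hostname

print(should_autofill("https://accounts.example.com/login",
                      "https://accounts.example.com/login?next=/"))  # True
print(should_autofill("https://accounts.example.com/login",
                      "https://accounts.examp1e.com/login"))         # False
```

Because the comparison is exact rather than visual, the manager stays silent on the look-alike site, which is itself a useful warning sign to the user.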

Beyond Passwords

The good news is that we now have standards and technologies which are better than simple passwords and are more resistant to these kinds of attacks. I’ll be talking about them in the next post.

  1. An even more serious security issue occurs when application developers mistakenly write plaintext user passwords to debug logs. This allows the attacker to target the logging system and get immediate access to passwords without having to do any sort of computational work. 
  2. The Facebook system is actually pretty ornate. At least as of 2014 they had four separate layers: MD5, HMAC-SHA1 (with a public salt), HMAC-SHA256 (with a secret key), Scrypt, and then HMAC-SHA256 (with a public salt) again. Muffett’s talk and this post do a good job of providing the detail, but this design is due to a combination of technical requirements. In particular, the reason for the MD5 stage is that an older system just had MD5-hashed passwords, and because Facebook doesn’t know the original passwords they can’t convert them to some other algorithm; it’s easiest to just layer another hash on. 
  3. This is an example of a situation in which the difficulty of implementing a good password manager makes the problem much worse. Sites vary a lot in how they present their password dialogs and so password managers have trouble finding the right place to fill in the password. This means that users sometimes have to type the password in themselves even if there is actually a stored password, teaching them bad habits which phishers can then exploit. 

The post A look at password security, Part II: Web Sites appeared first on The Mozilla Blog.

The Mozilla BlogSustainability needs culture change. Introducing Environmental Champions.

Sustainability is not just about ticking a few boxes by getting your greenhouse gas (GHG) emissions inventory, goals for reduction and mitigation, and accounting in shape. Any transformation towards sustainability also needs culture change.

In launching Mozilla‘s Sustainability Programme, our Environmental Champions are a key part of driving this organisational culture change.

Recruiting, training, and working with a first cohort of Environmental Champions has been a highlight of my job in the last couple of months. I can’t wait to see their initiatives taking root across all parts of Mozilla.

We have 14 passionate and driven individuals in this first cohort. They are critical amplifiers who will nudge each and every one of us to incorporate sustainability into everything we do.


What makes people Champions?

“We don’t need hope, we need courage: The courage to change and impact our own decisions.”

This was among the top take-aways of our initial level-setting workshop on climate change science. In kicking off conversations around how to adjust our everyday work at Mozilla to a more sustainability-focused mindset, it was clear that hope won’t get us to where we need to be. This will require boldness and dedication.

Our champions volunteer their time for this effort. All of them have full-time roles and it was important to structure this process so that it is inviting, empowering, and impactful. To me this meant ensuring manager buy-in and securing executive sponsorship to make sure that our champions have the support to grow professionally in their sustainability work.

In the selection of this cohort, we captured the whole breadth of Mozilla: representatives from all departments, spread across regions, including office as well as remote workers, people with different tenure and job levels, and a diversity in roles. Some are involved with our GHG assessment, others are design thinkers, engineers, or programme managers, and yet others will focus on external awareness raising.


Responsibilities and benefits

In a nutshell, we agreed on these conditions:

Environmental Champions are:

  • Engaged through a peer learning platform with monthly meetings for all champions, including occasional conversations with sustainability experts. We currently alternate the 8am start time between four time zones (CEST/UTC+2, CST/UTC+8, EDT/UTC-4, and PDT/UTC-7) to spread the burden of global working hours equally.
  • Committed to spend about 2-5h each month supporting sustainability efforts at Mozilla.
  • Committed to participate in at least 1 initiative a year.
  • Committed to regularly share initiatives they are driving or participating in.
  • Dedicated to set positive examples and highlight sustainability as a catalyst of innovation.
  • Set up to provide feedback in their teams/departments, raise questions and draw attention to sustainability considerations.

The Sustainability team:

  • Provides introductory training on climate science and how to incorporate it into our everyday work at Mozilla. Introductory training will be provided at least once a year or as soon as we have a critical mass of new champions joining us on this journey.
  • Commits to inviting champions for initial feedback on new projects, e.g. sustainability policy, input on reports, research to be commissioned.
  • Regularly amplifies progress and successes of champions’ initiatives to wider staff.
  • May offer occasional access to consultants, support for evangelism (speaking, visibility, support for professional development) or other resources, where necessary and to the extent possible.


Curious about their initiatives?

We are just setting out and we already have a range of ambitious, inspiring projects lined up.

Sharmili, our Global Space Planner, is not only gathering necessary information around the impact of our global office spaces, she will also be leading on our reduction targets for real estate and office supplies. She puts it like this: “Reducing our Real Estate Footprint and promoting the 3 R’s (reduce, reuse, recycle) is as straight-forward as it can be tough in practice. We’ll make it happen either way.”

Ian, a machine learning engineer, is looking at Pocket recommendation guidelines and is keen to see more collections like this Earth Day 2020 one in the future.

Daria, Head of Product Design in Emerging Technologies, says: “There are many opportunities for designers to develop responsible technologies and to bring experiences that prioritize sustainability principles. It’s time we unlocked them.” She is planning to develop and apply a Sustainability Impact Assessment Tool that will be used in decision-making around product design and development.

We’ll also be looking at Firefox performance and web power usage, starting with explorations for how to better measure the impact of our products. DOM engineer, Olli will be stewarding these.

And the behind the scenes editorial support thinking through content, timing, and outreach? That’s Daniel for you.

We’ll be sharing more initiatives and the progress they are all making as we move forward. In the meantime, do join us on our Matrix channel to continue the conversation.

The post Sustainability needs culture change. Introducing Environmental Champions. appeared first on The Mozilla Blog.

Mozilla Gfx Teammoz://gfx newsletter #53

Bonjour à tous et à toutes, this is episode 53 of your favorite and only Firefox graphics newsletter. From now on, instead of peeling through commit logs, I will simply be gathering notes sent to me by the rest of the team. This means the newsletter will be shorter, and hopefully a bit less overwhelming, with only the juicier bits. It will also give yours truly more time to fix bugs instead of writing about them.

Lately we have been enabling WebRender for a lot more users. For the first time, WebRender is enabled by default in Nightly for Windows 7 and macOS users with modern GPUs. Today 78% of Nightly users have WebRender enabled, 40% on beta, and 22% on release. Not all of these configurations are ready to ride the trains yet, but the numbers are going to keep going up over the next few releases.


WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Firefox’s rendering as well as Mozilla’s research web browser Servo.

Ongoing work

  • Part of the team is now focusing on shipping WebRender on some flavors of Linux as well.
  • Worth highlighting also is the ongoing work by Martin Stránský and Robert Mader to switch Firefox on Linux from GLX to EGL. EGL is a more modern and better-supported API, and it will also let us share more code between Linux and Android.
  • Lee and Jim continue work on WebRender’s software backend. It has had a bunch of correctness improvements, works properly on Windows now, and has more performance improvements in the pipeline. It works on all desktop platforms and can be enabled via a pref.


One of the projects that we have worked on over the last little while has been improving performance on lower-end/older Intel GPUs.

  • Glenn fixed a picture caching issue while scrolling gmail
  • Glenn fixed some over-invalidation on small screen resolutions.
  • Glenn reduced extra invalidation some more.
  • Dzmitry switched WebRender to a different CPU-to-GPU transfer strategy on Intel hardware on Windows. This avoids stalls during rendering.

Some other performance improvements that we made are:

  • Nical reduced CPU usage by re-building the scene a lot less often during scrolling.
  • Nical removed a lot of costly vector reallocation during scene building.
  • Nical reduced the amount of synchronous queries submitted to the X server on Linux, removing a lot of stalls when the GPU is busy.
  • Nical landed a series of frame building optimizations.
  • Glenn improved texture cache eviction handling. This means lower memory usage and better performance.
  • Jeff enabled GPU switching for WebRender on Mac in Nightly. Previously WebRender only used the GPU that Firefox was started with. If the GPU was switched Firefox would have very bad performance because we would be drawing with the wrong GPU.
  • Markus finished and preffed on the OS compositor configuration of WR on macOS, which uses CoreAnimation for efficient scrolling.

Driver bugs

  • Dzmitry worked around a driver bug causing visual artifacts in Firefox’s toolbar on Intel Skylake and re-enabled direct composition on these configurations.

Desktop zooming

  • Botond announced on dev-platform that desktop zooming is ready for dogfooding by Nightly users who would like to try it out by flipping the pref.
  • Botond landed a series of patches that re-works how main-thread hit testing accounts for differences between the visual and layout viewports. This fixes a number of scenarios involving the experimental desktop zooming feature (enabled using apz.allow_zooming=true), including allowing scrollbars to be dragged with desktop zooming enabled.
  • Timothy landed support for DirectManipulation, preffed off. It allows users to pinch-zoom on touchpads on Windows and can be enabled via a pref.

The Mozilla BlogThank you, Julie Hanna

Over the last three plus years, Julie Hanna has brought extensive experience on innovation processes, global business operations, and mission-driven organizations to her role as a board member of Mozilla Corporation. We have deeply appreciated her contributions to Mozilla throughout this period, and thank her for her time and her work with the board.

Julie is now stepping back from her board commitment at Mozilla Corporation to focus more fully on her longstanding passion and mission to help pioneer and bring to market technologies that meaningfully advance social, economic and ecological justice, as evidenced by her work with Kiva, Obvious Ventures and X (formerly Google X), Alphabet’s Moonshot Factory. We look forward to continuing to see her play a key role in shaping and evolving purpose-driven technology companies across industries.

We are actively looking for a new member to join the board and seeking candidates with a range of backgrounds and experiences.

The post Thank you, Julie Hanna appeared first on The Mozilla Blog.

Open Policy & AdvocacyLaws designed to protect online security should not undermine it

Mozilla, Atlassian, and Shopify yesterday filed a friend-of-the-court brief in Van Buren v. U.S. asking the U.S. Supreme Court to consider implications of the Computer Fraud and Abuse Act for online security and privacy.

Mozilla’s involvement in this case comes from our interest in making sure that the law doesn’t stand in the way of effective online security. The Computer Fraud and Abuse Act (CFAA) was passed as a tool to combat online hacking through civil and criminal liability. However, over the years various federal circuit courts have interpreted the law so broadly as to threaten important practices for managing computer security used by Mozilla and many others. Contrary to the purpose of the statute, the lower court’s decision in this case would take a law meant to increase security and interpret it in a way that undermines that goal.

System vulnerabilities are common among even the most security conscious platforms. Finding and addressing as many of these vulnerabilities as possible relies on reporting from independent security researchers who probe and test our network. In fact, Mozilla was one of the first to offer a bug bounty program with financial rewards specifically for the purpose of encouraging external researchers to report vulnerabilities to us so we can fix them before they become widely known. By sweeping in pro-security research activities, overbroad readings of the CFAA discourage independent investigation and reporting of security flaws. The possibility of criminal liability as well as civil intensifies that chilling effect.

We encourage the Supreme Court to protect strong cybersecurity by striking the lower court’s overbroad statutory interpretation.

The post Laws designed to protect online security should not undermine it appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogChanges to storage.sync in Firefox 79

Firefox 79, which will be released on July 28, includes changes to the storage.sync area. Items that extensions store in this area are automatically synced to all devices signed in to the same Firefox Account, similar to how Firefox Sync handles bookmarks and passwords. The storage.sync area has been ported to a new Rust-based implementation, allowing extension storage to share the same infrastructure and backend used by Firefox Sync.

Extension data that had been stored locally in existing profiles will automatically migrate the first time an installed extension tries to access storage.sync data in Firefox 79. After the migration, the data will be stored locally in a new storage-sync2.sqlite file in the profile directory.

If you are the developer of an extension that syncs extension storage, you should be aware that the new implementation now enforces client-side quota limits. This means that:

  • You can make a call using storage.sync.getBytesInUse to estimate how much data your extension is storing locally, and whether it is over the limit.
  • If your extension previously stored data above quota limits, all that data will be migrated and available to your extension, and will be synced. However, attempting to add new data will fail.
  • If your extension tries to store data above quota limits, the storage.sync API call will raise an error. However, the extension should still successfully retrieve existing data.
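The enforcement behavior described above can be modeled roughly like this. This is an illustrative Python sketch of the logic, not the actual Rust implementation, and the 100 KB figure is assumed to match the documented storage.sync quota:

```python
class QuotaExceededError(Exception):
    pass

class SyncStorage:
    """Toy model of the client-side quota enforcement described above."""
    QUOTA_BYTES = 102_400  # 100 KB; assumed storage.sync quota

    def __init__(self, migrated=None):
        # Previously stored (migrated) data is kept even if it is over quota.
        self.data = dict(migrated or {})

    def bytes_in_use(self):
        return sum(len(k) + len(v) for k, v in self.data.items())

    def set(self, key, value):
        projected = (self.bytes_in_use()
                     - len(self.data.get(key, b""))
                     + len(key) + len(value))
        if projected > self.QUOTA_BYTES:
            # New writes above quota fail...
            raise QuotaExceededError(key)
        self.data[key] = value

    def get(self, key):
        # ...but existing data remains readable.
        return self.data[key]

# Migrated data above quota stays readable; new writes raise an error.
store = SyncStorage(migrated={"big": b"x" * 150_000})
print(store.get("big") == b"x" * 150_000)  # True
try:
    store.set("new", b"y")
except QuotaExceededError:
    print("quota exceeded")
```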

We encourage you to use the Firefox Beta channel to test all extension features that use the storage.sync API, to see how they behave if the client-side storage quota is exceeded before Firefox 79 is released. If you notice any regressions, please check your about:config preferences to ensure that the migration preference is still set to false, and then file a bug. We do not recommend flipping this preference to true, as doing so may result in data loss.

If your users report that their extension data does not sync after they upgrade to Firefox 79, please also file a bug. This is likely related to the storage.sync data migration.

Please let us know if there are any questions on our developer community forum.

The post Changes to storage.sync in Firefox 79 appeared first on Mozilla Add-ons Blog.

Web Application SecurityReducing TLS Certificate Lifespans to 398 Days


We intend to update Mozilla’s Root Store Policy to reduce the maximum lifetime of TLS certificates from 825 days to 398 days, with the aim of protecting our users’ HTTPS connections. Many reasons for reducing the lifetime of certificates have been provided and summarized in the CA/Browser Forum’s Ballot SC22. Here are Mozilla’s top three reasons for supporting this change.

1. Agility

Certificates with lifetimes longer than 398 days delay responding to major incidents and upgrading to more secure technology. Certificate revocation is highly disruptive and difficult to plan for. Certificate expiration and renewal is the least disruptive way to replace an obsolete certificate, because it happens at a pre-scheduled time, whereas revocation suddenly causes a site to stop working. Certificates with lifetimes of no more than 398 days help mitigate the threat across the entire ecosystem when a major incident requires certificate or key replacements. Additionally, phasing out certificates with MD5-based signatures took five years, because TLS certificates were valid for up to five years. Phasing out certificates with SHA-1-based signatures took three years, because the maximum lifetime of TLS certificates was three years. Weakness in hash algorithms can lead to situations in which attackers can forge certificates, so users were at risk for years after collision attacks against these algorithms were proven feasible.
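The 398-day cap is easy to check mechanically. A sketch using dates only, rather than parsing a real certificate:

```python
from datetime import date

MAX_LIFETIME_DAYS = 398

def lifetime_ok(not_before: date, not_after: date) -> bool:
    """Check a certificate's validity window against the 398-day cap."""
    return (not_after - not_before).days <= MAX_LIFETIME_DAYS

print(lifetime_ok(date(2020, 9, 1), date(2021, 10, 4)))  # True (398 days)
print(lifetime_ok(date(2020, 9, 1), date(2022, 12, 4)))  # False (old 825-day scale)
```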

2. Limit exposure to compromise

Keys valid for longer than one year have greater exposure to compromise, and a compromised key could enable an attacker to intercept secure communications and/or impersonate a website until the TLS certificate expires. A good security practice is to change key pairs frequently, which should happen when you obtain a new certificate. Thus, one-year certificates will lead to more frequent generation of new keys.

3. TLS Certificates Outliving Domain Ownership

TLS certificates provide authentication, meaning that you can be sure that you are sending information to the correct server and not to an imposter trying to steal your information. If the owner of the domain changes or the cloud service provider changes, the holder of the TLS certificate’s private key (e.g. the previous owner of the domain or the previous cloud service provider) can impersonate the website until that TLS certificate expires. The Insecure Design Demo site describes two problems with TLS certificates outliving their domain ownership:

  • “If a company acquires a previously owned domain, the previous owner could still have a valid certificate, which could allow them to MitM the SSL connection with their prior certificate.”
  • “If a certificate has a subject alt-name for a domain no longer owned by the certificate user, it is possible to revoke the certificate that has both the vulnerable alt-name and other domains. You can DoS the service if the shared certificate is still in use!”

The change to reduce the maximum validity period of TLS certificates to 398 days is being discussed in the CA/Browser Forum’s Ballot SC31 and can have two possible outcomes:

     a) If that ballot passes, then the requirement will automatically apply to Mozilla’s Root Store Policy by reference.

     b) If that ballot does not pass, then we intend to proceed with our regular process for updating Mozilla’s Root Store Policy, which will involve public discussion.

In preparation for updating our root store policy, we surveyed all of the certificate authorities (CAs) in our program and found that they all intend to limit TLS certificate validity periods to 398 days or less by September 1, 2020.

We believe that the best approach to safeguarding secure browsing is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to keep our users safe.

The post Reducing TLS Certificate Lifespans to 398 Days appeared first on Mozilla Security Blog.

hacks.mozilla.orgTesting Firefox more efficiently with machine learning

Author’s note: July 13th 9:02am PT – Corrected number of test files and related calculations.

A browser is an incredibly complex piece of software. With such enormous complexity, the only way to maintain a rapid pace of development is through an extensive CI system that can give developers confidence that their changes won’t introduce bugs. Given the scale of our CI, we’re always looking for ways to reduce load while maintaining a high standard of product quality. We wondered if we could use machine learning to reach a higher degree of efficiency.

Continuous integration at scale

At Mozilla we have around 85,000 unique test files. Each contains many test functions. These tests need to run on all our supported platforms (Windows, Mac, Linux, Android) against a variety of build configurations (PGO, debug, ASan, etc.), with a range of runtime parameters (site isolation, WebRender, multi-process, etc.).

While we don’t test against every possible combination of the above, there are still over 90 unique configurations that we do test against. In other words, for each change that developers push to the repository, we could potentially run all 85k tests 90 different times. On an average work day we see nearly 300 pushes (including our testing branch). If we simply ran every test on every configuration on every push, we’d run approximately 2.3 billion test files per day! While we do throw money at this problem to some extent, as an independent non-profit organization, our budget is finite.
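The back-of-the-envelope arithmetic behind that 2.3 billion figure:

```python
test_files = 85_000
configurations = 90
pushes_per_day = 300

# Worst case: every test file, on every configuration, on every push.
runs_per_day = test_files * configurations * pushes_per_day
print(f"{runs_per_day:,}")  # 2,295,000,000 test file runs per day
```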

So how do we keep our CI load manageable? First, we recognize that some of those ninety unique configurations are more important than others. Many of the less important ones only run a small subset of the tests, or only run on a handful of pushes per day, or both. Second, in the case of our testing branch, we rely on our developers to specify which configurations and tests are most relevant to their changes. Third, we use an integration branch.

Basically, when a patch is pushed to the integration branch, we only run a small subset of tests against it. We then periodically run everything and employ code sheriffs to figure out if we missed any regressions. If so, they back out the offending patch. The integration branch is periodically merged to the main branch once everything looks good.

A subset of the tasks we run on a single mozilla-central push, as seen on Treeherder. The full set of tasks was too hard to distinguish when scaled to fit in a single image.

A new approach to efficient testing

These methods have served us well for many years, but it turns out they’re still very expensive. Even with all of these optimizations our CI still runs around 10 compute years per day! Part of the problem is that we have been using a naive heuristic to choose which tasks to run on the integration branch. The heuristic ranks tasks based on how frequently they have failed in the past. The ranking is unrelated to the contents of the patch. So a push that modifies a README file would run the same tasks as a push that turns on site isolation. Additionally, the responsibility for determining which tests and configurations to run on the testing branch has shifted over to the developers themselves. This wastes their valuable time and tends towards over-selection of tests.

About a year ago, we started asking ourselves: how can we do better? We realized that the current implementation of our CI relies heavily on human intervention. What if we could instead correlate patches to tests using historical regression data? Could we use a machine learning algorithm to figure out the optimal set of tests to run? We hypothesized that we could simultaneously save money by running fewer tests, get results faster, and reduce the cognitive burden on developers. In the process, we would build out the infrastructure necessary to keep our CI pipeline running efficiently.

Having fun with historical failures

The main prerequisite to a machine-learning-based solution is collecting a large and precise enough regression dataset. On the surface this appears easy. We already store the status of all test executions in a data warehouse called ActiveData. But in reality, it’s very hard to do for the reasons below.

Since we only run a subset of tests on any given push (and then periodically run all of them), it’s not always obvious when a regression was introduced. Consider the following scenario:

         Patch 1    Patch 2    Patch 3
Test A   PASS       FAIL       FAIL
Test B   PASS       NOT RUN    FAIL

It is easy to see that the “Test A” failure was regressed by Patch 2, as that’s where it first started failing. However with the “Test B” failure, we can’t really be sure. Was it caused by Patch 2 or 3? Now imagine there are 8 patches in between the last PASS and the first FAIL. That adds a lot of uncertainty!
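That reasoning can be sketched as a small helper (not Mozilla's actual code): everything after the last known PASS, up to and including the first known FAIL, is a candidate regressor.

```python
# Sketch: given per-push results for one test, in push order, return the
# pushes that could plausibly have introduced the failure.
def candidate_regressors(results):
    """results: list of (push_id, status), status in {"PASS", "FAIL", "NOT RUN"}."""
    last_pass = -1
    first_fail = None
    for i, (_, status) in enumerate(results):
        if status == "FAIL":
            first_fail = i
            break
        if status == "PASS":
            last_pass = i
    if first_fail is None:
        return []  # no failure observed yet
    return [push for push, _ in results[last_pass + 1 : first_fail + 1]]

# The "Test B" scenario above: Patch 2 was not run, so both 2 and 3 are suspects.
print(candidate_regressors(
    [("Patch 1", "PASS"), ("Patch 2", "NOT RUN"), ("Patch 3", "FAIL")]))
# -> ['Patch 2', 'Patch 3']
```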

Intermittent (aka flaky) failures also make it hard to collect regression data. Sometimes tests can both pass and fail on the same codebase for all sorts of different reasons. It turns out we can’t be sure that Patch 2 regressed “Test A” in the table above after all! That is unless we re-run the failure enough times to be statistically confident. Even worse, the patch itself could have introduced the intermittent failure in the first place. We can’t assume that just because a failure is intermittent that it’s not a regression.

The writers of this post having a hard time. (Futurama “not sure if” meme.)

Our heuristics

In order to solve these problems, we have built quite a large and complicated set of heuristics to predict which regressions are caused by which patch. For example, if a patch is later backed out, we check the status of the tests on the backout push. If they’re still failing, we can be pretty sure the failures were not due to the patch. Conversely, if they start passing we can be pretty sure that the patch was at fault.
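A simplified, set-based sketch of that backout heuristic (hypothetical code, not Mozilla's actual implementation):

```python
# If a failure persists after the patch is backed out, the patch was probably
# not at fault; failures that disappear with the backout are attributed to it.
def blamed_by_backout(failures_on_push, failures_on_backout):
    """Return the subset of failures attributable to the backed-out patch."""
    return failures_on_push - failures_on_backout

print(blamed_by_backout({"test_a", "test_b"}, {"test_b"}))  # {'test_a'}
```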

Some failures are classified by humans. This can work to our advantage. Part of the code sheriff’s job is annotating failures (e.g. “intermittent” or “fixed by commit” for failures fixed at some later point). These classifications are a huge help finding regressions in the face of missing or intermittent tests. Unfortunately, due to the sheer number of patches and failures happening continuously, 100% accuracy is not attainable. So we even have heuristics to evaluate the accuracy of the classifications!

Tweet from @MozSherifMemes, “Today’s menu: Intermittent code linting failures based on the same revision,” showing sheriffs complaining about intermittent failures.

Another trick for handling missing data is to backfill missing tests. We select tests to run on older pushes where they didn’t initially run, for the purpose of finding which push caused a regression. Currently, sheriffs do this manually. However, there are plans to automate it in certain circumstances in the future.

Collecting data about patches

We also need to collect data about the patches themselves, including the files modified and the diff. This allows us to correlate patch data with the test failure data, so that the machine learning model can determine the set of tests most likely to fail for a given patch.

Collecting data about patches is way easier, as it is totally deterministic. We iterate through all the commits in our Mercurial repository, parsing patches with our rust-parsepatch project and analyzing source code with our rust-code-analysis project.

Designing the training set

Now that we have a dataset of patches and associated tests (both passes and failures), we can build a training set and a validation set to teach our machines how to select tests for us.

90% of the dataset is used as a training set, 10% is used as a validation set. The split must be done carefully: all patches in the validation set must come after those in the training set. If we split randomly, we’d leak information from the future into the training set, biasing the resulting model and artificially making its results look better than they actually are.

For example, consider a test which had never failed until last week and has failed a few times since then. If we train the model with a randomly picked training set, we might find ourselves in the situation where a few failures are in the training set and a few in the validation set. The model might be able to correctly predict the failures in the validation set, since it saw some examples in the training set.

In a real-world scenario though, we can’t look into the future. The model can’t know what will happen in the next week, but only what has happened so far. To evaluate properly, we need to pretend we are in the past, and future data (relative to the training set) must be inaccessible.
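A minimal sketch of such a time-ordered split (the data shapes here are hypothetical):

```python
# Sort by push date, then cut at 90%: nothing from the validation window
# can leak into the training window.
def chronological_split(examples, train_fraction=0.9):
    """examples: list of (push_date, features, label) tuples."""
    ordered = sorted(examples, key=lambda ex: ex[0])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]

data = [(day, None, None) for day in range(100)]
train, valid = chronological_split(data)
print(len(train), len(valid))  # 90 10
# Every training example strictly predates every validation example.
print(max(ex[0] for ex in train) < min(ex[0] for ex in valid))  # True
```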

Visualization of our split between the training set (90%) and the validation set (10%).

Building the model

We train an XGBoost model, using features of the test, the patch, and the links between them, e.g.:

  • In the past, how often did this test fail when the same files were touched?
  • How far in the directory tree are the source files from the test files?
  • How often in the VCS history were the source files modified together with the test files?
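Two of the features above can be sketched in a few lines (hypothetical simplifications; the real feature extraction lives in Mozilla's infrastructure and is more involved):

```python
# Feature sketch 1: how far apart in the directory tree are a source file
# and a test file? (Number of directory hops between their parent folders.)
def directory_distance(source_path, test_path):
    src = source_path.split("/")[:-1]
    tst = test_path.split("/")[:-1]
    common = 0
    for a, b in zip(src, tst):
        if a != b:
            break
        common += 1
    return (len(src) - common) + (len(tst) - common)

# Feature sketch 2: how often were the two files touched in the same commit?
def co_modification_count(history, source_file, test_file):
    """history: iterable of sets of files modified per commit."""
    return sum(1 for files in history
               if source_file in files and test_file in files)

print(directory_distance("dom/media/AudioStream.cpp",
                         "dom/media/test/test_audio.html"))  # 1
```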

Full view of the model training infrastructure.

The input to the model is a tuple (TEST, PATCH), and the label is a binary FAIL or NOT FAIL. This means we have a single model that is able to take care of all tests. This architecture allows us to exploit the commonalities between test selection decisions in an easy way. A normal multi-label model, where each test is a completely separate label, would not be able to extrapolate the information about a given test and apply it to another completely unrelated test.

Given that we have tens of thousands of tests, even if our model was 99.9% accurate (which is pretty accurate, just one error every 1000 evaluations), we’d still be making mistakes for pretty much every patch! Luckily the cost associated with false positives (tests which are selected by the model for a given patch but do not fail) is not as high in our domain, as it would be if say, we were trying to recognize faces for policing purposes. The only price we pay is running some useless tests. At the same time we avoided running hundreds of them, so the net result is a huge savings!

As developers periodically switch what they are working on, the dataset we train on evolves. So we currently retrain the model every two weeks.

Optimizing configurations

After we have chosen which tests to run, we can further improve the selection by choosing where the tests should run. In other words, the set of configurations they should run on. We use the dataset we’ve collected to identify redundant configurations for any given test. For instance, is it really worth running a test on both Windows 7 and Windows 10? To identify these redundancies, we use a solution similar to frequent itemset mining:

  1. Collect failure statistics for groups of tests and configurations
  2. Calculate the “support” as the number of pushes in which both X and Y failed, divided by the number of pushes in which they both ran
  3. Calculate the “confidence” as the number of pushes in which both X and Y failed, divided by the number of pushes in which they both ran and only one of the two failed.

We only select configuration groups where the support is high (low support would mean we don’t have enough proof) and the confidence is high (low confidence would mean we had many cases where the redundancy did not apply).
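In rough Python, for a single pair of configurations X and Y, this might look like the following (one plausible reading of the definitions above; the real computation runs over many groups and applies the thresholds described):

```python
# Sketch of the support/confidence computation for one pair of
# configurations. Each element is (failed_on_x, failed_on_y) for a push
# where the test ran on both configurations.
def redundancy_stats(pushes):
    pushes = list(pushes)
    both_failed = sum(1 for fx, fy in pushes if fx and fy)
    one_failed = sum(1 for fx, fy in pushes if fx != fy)
    support = both_failed / len(pushes)
    # Of the pushes where anything failed, how often did both fail together?
    confidence = both_failed / (both_failed + one_failed)
    return support, confidence

# 8 pushes where both configurations failed, 2 where only one did.
stats = redundancy_stats([(True, True)] * 8 + [(True, False)] * 2)
print(stats)  # (0.8, 0.8)
```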

Once we have the set of tests to run, information on whether their results are configuration-dependent or not, and a set of machines (with their associated cost) on which to run them, we can formulate a mathematical optimization problem, which we solve with a mixed-integer programming solver. This way, we can easily change the optimization objective we want to achieve without invasive changes to the optimization algorithm. At the moment, the optimization objective is to select the cheapest configurations on which to run the tests.

For the mathematically inclined among you: an instance of the optimization problem for a theoretical situation with three tests and three configurations. Test 1 and Test 3 are fully platform-independent. Test 2 must run on configuration 3 and on one of configuration 1 or configuration 2.
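A tiny brute-force stand-in for the mixed-integer program can illustrate the idea (the data below is hypothetical, and each test is simplified to needing just one acceptable configuration; the real solver handles far larger instances and richer constraints):

```python
from itertools import combinations

# Pick the cheapest set of configurations such that every test can run on
# at least one configuration it is allowed to run on.
def cheapest_configs(costs, allowed):
    """costs: {config: cost}; allowed: {test: set of acceptable configs}."""
    configs = list(costs)
    best = None
    for r in range(1, len(configs) + 1):
        for subset in combinations(configs, r):
            chosen = set(subset)
            if all(chosen & ok for ok in allowed.values()):
                total = sum(costs[c] for c in subset)
                if best is None or total < best[0]:
                    best = (total, chosen)
    return best

print(cheapest_configs(
    {"win7": 3, "win10": 2, "linux": 1},
    {"test1": {"win7", "win10", "linux"},   # platform-independent
     "test2": {"win10"},                     # must run on win10
     "test3": {"win7", "win10", "linux"}},
))  # (2, {'win10'})
```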

Using the model

A machine learning model is only as useful as a consumer’s ability to use it. To that end, we decided to host a service on Heroku using dedicated worker dynos to service requests and Redis Queues to bridge between the backend and frontend. The frontend exposes a simple REST API, so consumers need only specify the push they are interested in (identified by the branch and topmost revision). The backend will automatically determine the files changed and their contents using a clone of mozilla-central.

Depending on the size of the push and the number of pushes in the queue to be analyzed, the service can take several minutes to compute the results. We therefore ensure that we never queue up more than a single job for any given push. We cache results once computed. This allows consumers to kick off a query asynchronously, and periodically poll to see if the results are ready.
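From a consumer's point of view, that kick-off-and-poll pattern might look like this (the response shape and the injected `fetch` callable are hypothetical stand-ins for real HTTP requests, so the example stays self-contained):

```python
import time

# Kick off a query, then poll until the cached results are ready.
def poll_for_results(fetch, interval=0.0, max_attempts=10):
    for _ in range(max_attempts):
        response = fetch()
        if response.get("ready"):
            return response["results"]
        time.sleep(interval)
    raise TimeoutError("results not ready")

# Stub standing in for the service: not ready twice, then ready.
replies = iter([{"ready": False}, {"ready": False},
                {"ready": True, "results": ["test_a", "test_b"]}])
print(poll_for_results(lambda: next(replies)))  # ['test_a', 'test_b']
```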

We currently use the service when scheduling tasks on our integration branch. It’s also used when developers run the special mach try auto command to test their changes on the testing branch. In the future, we may also use it to determine which tests a developer should run locally.

Sequence diagram depicting the communication between the various actors in our infrastructure.

Measuring and comparing results

From the outset of this project, we felt it was crucial that we be able to run and compare experiments, measure our success and be confident that the changes to our algorithms were actually an improvement on the status quo. There are effectively two variables that we care about in a scheduling algorithm:

  1. The amount of resources used (measured in hours or dollars).
  2. The regression detection rate. That is, the percentage of introduced regressions that were caught directly on the push that caused them. In other words, we didn’t have to rely on a human to backfill the failure to figure out which push was the culprit.

We defined our metric:

scheduler effectiveness = 1000 * regression detection rate / hours per push
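The metric translates directly into code; for example, a scheduler that catches 90% of regressions at 10 compute-hours per push scores 90:

```python
# Scheduler effectiveness, exactly as defined above: higher is better.
def scheduler_effectiveness(regression_detection_rate, hours_per_push):
    return 1000 * regression_detection_rate / hours_per_push

print(scheduler_effectiveness(0.9, 10))  # 90.0
```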

The higher this metric, the more effective a scheduling algorithm is. Now that we had our metric, we invented the concept of a “shadow scheduler”. Shadow schedulers are tasks that run on every push, which shadow the actual scheduling algorithm. Only rather than actually scheduling things, they output what they would have scheduled had they been the default. Each shadow scheduler may interpret the data returned by our machine learning service a bit differently. Or they may run additional optimizations on top of what the machine learning model recommends.

Finally we wrote an ETL to query the results of all these shadow schedulers, compute the scheduler effectiveness metric of each, and plot them all in a dashboard. At the moment, there are about a dozen different shadow schedulers that we’re monitoring and fine-tuning to find the best possible outcome. Once we’ve identified a winner, we make it the default algorithm. And then we start the process over again, creating further experiments.


The early results of this project have been very promising. Compared to our previous solution, we’ve reduced the number of test tasks on our integration branch by 70%! Compared to a CI system with no test selection, by almost 99%! We’ve also seen pretty fast adoption of our mach try auto tool, suggesting a usability improvement (since developers no longer need to think about what to select). But there is still a long way to go!

We need to improve the model’s ability to select configurations and default to that. Our regression detection heuristics and the quality of our dataset needs to improve. We have yet to implement usability and stability fixes to mach try auto.

And while we can’t make any promises, we’d love to package the model and service up in a way that is useful to organizations outside of Mozilla. Currently, this effort is part of a larger project that contains other machine learning infrastructure originally created to help manage Mozilla’s Bugzilla instance. Stay tuned!

If you’d like to learn more about this project or Firefox’s CI system in general, feel free to ask on our Matrix channel.

The post Testing Firefox more efficiently with machine learning appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyCriminal proceedings against Malaysiakini will harm free expression in Malaysia

The Malaysian government’s decision to initiate criminal contempt proceedings against Malaysiakini for third party comments on the news portal’s website is deeply concerning. The move sets a dangerous precedent against intermediary liability and freedom of expression. It ignores the internationally accepted norm that holding publishers responsible for third party comments has a chilling effect on democratic discourse. The legal outcome the Malaysian government is seeking would upend the careful balance which places liability on the bad actors who engage in illegal activities, and only holds companies accountable when they know of such acts.

Intermediary liability safe harbour protections have been fundamental to the growth of the internet. They have enabled hosting and media platforms to innovate and flourish without the fear that they would be crushed by a failure to police every action of their users. Imposing the risk of criminal liability for such content would place a tremendous, and in many cases fatal, burden on many online intermediaries while negatively impacting international confidence in Malaysia as a digital destination.

We urge the Malaysian government to drop the proceedings and hope the Federal Court of Malaysia will meaningfully uphold the right to freedom of expression guaranteed by Malaysia’s Federal Constitution.


The post Criminal proceedings against Malaysiakini will harm free expression in Malaysia appeared first on Open Policy & Advocacy.

The Mozilla BlogA look at password security, Part I: history and background

Today I’d like to talk about passwords. Yes, I know, passwords are the worst, but why? This is the first of a series of posts about passwords, with this one focusing on the origins of our current password systems, starting with login for multi-user systems.

The conventional story for what’s wrong with passwords goes something like this: Passwords are simultaneously too long for users to memorize and too short to be secure.

It’s easy to see how to get to this conclusion. If we restrict ourselves to just letters and numbers, then there are about 2^6 one-character passwords, 2^12 two-character passwords, etc. The fastest password cracking systems can check about 2^36 passwords/second, so if you want a password which takes a year to crack, you need a password 10 characters long or longer.
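Spelling the arithmetic out (letters plus digits give 62 symbols per character, about 2^6 per character, so each extra character multiplies the search space by 62):

```python
# How long does an exhaustive search take at roughly 2**36 guesses/second?
guesses_per_second = 2 ** 36
alphabet = 26 + 26 + 10  # upper, lower, digits = 62 symbols

for length in (8, 10, 11):
    days = alphabet ** length / guesses_per_second / 86_400
    print(f"{length} chars: about {days:,.0f} day(s) to search the full keyspace")
```

An 8-character random password falls in under an hour, while each added character buys a factor of 62; that is why length matters so much more than clever substitutions.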

The situation is actually far worse than this; most people don’t use randomly generated passwords because they are hard to generate and hard to remember. Instead they tend to use words, sometimes adding a number, punctuation, or capitalization here and there. The result is passwords that are easy to crack, hence the need for password managers and the like.

This analysis isn’t wrong, precisely; but if you’ve ever watched a movie where someone tries to break into a computer by typing passwords over and over, you’re probably thinking “nobody is a fast enough typist to try billions of passwords a second”. This is obviously true, so where does password cracking come into it?

How to design a password system

The design of password systems dates back to the UNIX operating system, designed back in the 1970s. This is before personal computers and so most computers were shared, with multiple people having accounts and the operating system being responsible for protecting one user’s data from another. Passwords were used to prevent someone else from logging into your account.

The obvious way to implement a password system is just to store all the passwords on the disk and then when someone types in their password, you just compare what they typed in to what was stored. This has the obvious problem that if the password file is compromised, then every password in the system is also compromised. This means that any operating system vulnerability that allows a user to read the password file can be used to log in as other users. To make matters worse, multiuser systems like UNIX would usually have administrator accounts that had special privileges (the UNIX account is called “root”). Thus, if a user could compromise the password file they could gain root access (this is known as a “privilege escalation” attack).

The UNIX designers realized that a better approach is to use what’s now called password hashing: instead of storing the password itself, you store what’s called a one-way function of the password. A one-way function is just a function H that’s easy to compute in one direction but not the other.1 This is conventionally done with what’s called a hash function, and so the technique is known as “password hashing” and the stored values as “password hashes”.

In this case, what that means is you store the pair: (Username, H(Password)). [Technical note: I’m omitting salt which is used to mitigate offline pre-computation attacks against the password file.] When the user tries to log in, you take the password they enter P and compute H(P). If H(P) is the same as the stored hash, then you know their password is right (with overwhelming probability) and you allow them to log in, otherwise you return an error. The cool thing about this design is that even if the password file is leaked, the attacker learns only the password hashes.2
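A minimal sketch of this scheme (salt deliberately omitted, as in the text; SHA-256 stands in for H, whereas real systems use slow, salted password hashes):

```python
import hashlib

# The "password file": maps usernames to H(password), never the password itself.
users = {}

def register(username, password):
    users[username] = hashlib.sha256(password.encode()).hexdigest()

def login(username, password):
    # Hash the attempt and compare against the stored hash.
    return users.get(username) == hashlib.sha256(password.encode()).hexdigest()

register("alice", "hunter2")
print(login("alice", "hunter2"), login("alice", "wrong"))  # True False
```

Even if `users` leaks, an attacker holds only hashes, which is exactly what makes the offline guessing attacks described next worthwhile for them.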

Problems and countermeasures

This design is a huge improvement over just having a file with cleartext passwords, and it might seem at this point like you don’t need to stop people from reading the password file at all. In fact, on the original UNIX systems where this design was used, the /etc/passwd file was publicly readable. However, upon further reflection, it has the drawback that it’s cheap to verify a guess for a given password: just compute H(guess) and compare it to what’s been stored. This wouldn’t be much of an issue if people used strong passwords, but because people generally choose bad passwords, it is possible to write password cracking programs which would try out candidate passwords (typically starting with a list of common passwords and then trying variants) to see if any of these matched. Programs to do this task quickly emerged.

The key thing to realize is that the computation of H(guess) can be done offline. Once you have a copy of the password file, you can compare your pre-computed hashes of candidate passwords against the password file without interacting with the system at all. By contrast, in an online attack you have to interact with the system for each guess, which gives it an opportunity to rate limit you in various ways (for instance by taking a long time to return an answer or by locking out the account after some number of failures). In an offline attack, this kind of countermeasure is ineffective.

There are three obvious defenses to this kind of attack:

  • Make the password file unreadable: If the attacker can’t read the password file, they can’t attack it. It took a while to do this on UNIX systems, because the password file also held a lot of other user information that you didn’t want kept secret, but eventually that got split out into another file in what’s called “shadow passwords” (the passwords themselves are stored in /etc/shadow). Of course, this is just the natural design for Web-type applications where people log into a server.
  • Make the password hash slower: The cost of cracking is linear in the cost of checking a single password, so if you make the password hash slower, then you make cracking slower. Of course, you also make logging in slower, but as long as you keep that time reasonably short (below a second or so) then users don’t notice. The tricky part here is that attackers can build specialized hardware that is much faster than the commodity hardware running on your machine, and designing hashes which are thought to be slow even on specialized hardware is a whole subfield of cryptography.
  • Get people to choose better passwords: In theory this sounds good, but in practice it’s resulted in enormous numbers of conflicting rules about password construction. When you create an account and are told you need to have a password between 8 and 12 characters with one lowercase letter, one capital letter, a number and one special character from this set — but not from this other set — what they’re hoping you will do is create a strong password. Experience suggests you are pretty likely to use Passw0rd!, so the situation here has not improved that much unless people use password managers which generate passwords for them.
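The “make the hash slower” defense above can be illustrated with PBKDF2, which repeats an underlying hash a configurable number of times; raising the iteration count multiplies the attacker’s per-guess cost by the same factor while a single login stays fast. (The salt and iteration counts here are illustrative only, not recommendations.)

```python
import hashlib

# PBKDF2: deliberately slow password hashing via repeated hashing.
def slow_hash(password, salt, iterations):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = b"fixed-demo-salt"
# Going from 1,000 to 100,000 iterations makes each guess ~100x more
# expensive for an attacker running an offline cracking campaign.
digest = slow_hash("hunter2", salt, 100_000)
print(len(digest))  # 32 (a 256-bit derived key)
```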

The modern setting

At this point you’re probably wondering what this has to do with you: almost nobody uses multiuser timesharing systems any more (although a huge fraction of the devices people use are effectively UNIX: macOS is a straight-up descendant of UNIX, and Linux and Android are UNIX clones). The multiuser systems that people do use are mostly Web sites, which of course use usernames and passwords. In future posts I will cover password security for Web sites and personal devices.

  1. Strictly speaking we need the function not just to be one-way but also to be preimage resistant, meaning that given H(P) it’s hard to find any input p such that H(p) == H(P)
  2. For more information on this, see Morris and Thompson for a quite readable history of the UNIX design. One very interesting feature is that at the time this system was designed generic hash functions didn’t exist, and so they instead used a variant of DES. The password was converted into a DES key and then used to encrypt a fixed value. This is actually a pretty good design and even included a feature designed to prevent attacks using custom DES hardware. However, it had the unfortunate property that passwords were limited to 8 characters, necessitating new algorithms that would accept a longer password.

The post A look at password security, Part I: history and background appeared first on The Mozilla Blog.

Mozilla Add-ons BlogAdditional JavaScript syntax support in add-on developer tools

When an add-on is submitted to Firefox for validation, the add-ons linter checks its code and displays relevant errors, warnings, or friendly messages for the developer to review. JavaScript is constantly evolving, and when the linter lags behind the language, developers may see syntax errors for code that is generally considered acceptable. These errors block developers from getting their add-on signed or listed.

Example of JavaScript syntax error

On July 2, the linter was updated from ESLint 5.16 to ESLint 7.3 for JavaScript validation. This upgrade brings linter support to most ECMAScript 2020 syntax, including features like optional chaining, BigInt, and dynamic imports. As a quick note, the linter is still slightly behind what Firefox allows. We will post again on this blog the next time we make an update.

Want to help us keep the linter up-to-date? We welcome code contributions and encourage developers to report bugs found in our validation process.

The post Additional JavaScript syntax support in add-on developer tools appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogNew Extensions in Firefox for Android Nightly (Previously Firefox Preview)

Firefox for Android Nightly (formerly known as Firefox Preview) is a sneak peek of the new Firefox for Android experience. The browser is being rebuilt based on GeckoView, an embeddable component for Android, and we are continuing to gradually roll out extension support.

Including the add-ons from our last announcement, there are currently nine Recommended Extensions available to users. The latest three additions are in Firefox for Android Nightly and will be available on Firefox for Android Beta soon:

Decentraleyes prevents your mobile device from making requests to content delivery networks (i.e. advertisers), and instead provides local copies of common libraries. In addition to the benefit of increased privacy, Decentraleyes also reduces bandwidth usage—a huge benefit in the mobile space.

Privacy Possum has a unique approach to dealing with trackers. Instead of playing along with the cat-and-mouse game of removing trackers, it falsifies the information trackers use to create a profile of you, in addition to other anti-tracking techniques.

Youtube High Definition gives you more control over how videos are displayed on Youtube. You have the opportunity to set your preferred visual quality option and have it shine on your high-DPI device, or use a lower quality to save bandwidth.

If you have more questions about extensions in Firefox for Android Nightly, please check out our FAQ. We will be posting further updates about our future plans on this blog.

The post New Extensions in Firefox for Android Nightly (Previously Firefox Preview) appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgAdding prefers-contrast to Firefox

In this article, we’ll walk through the design and implementation of the prefers-contrast media query in Firefox. We’ll start by defining high contrast mode, then we’ll cover the importance of prefers-contrast. Finally, we’ll walk through the media query implementation in Firefox. By the end, you’ll have a greater understanding of how media queries work in Firefox, and why the prefers-contrast query is important and exciting.

When we talk about the contrast of a page, we’re assessing how the web author’s color choices impact readability. For visitors with low vision, web pages with low or insufficient contrast can be hard to use. The lack of distinction between text and its background can cause them to “bleed” together.

The What of prefers-contrast

Though the WCAG (Web Content Accessibility Guidelines) set standards for contrast that authors should abide by, not all sites do. To keep the web accessible, many browsers and OSes offer high-contrast settings to change how web pages and content looks. When these settings are enabled we say that a website visitor has high contrast mode enabled.

High contrast mode increases the contrast of the screen so that users with low vision have an easier time getting around. Depending on what operating system is being used, high contrast mode can make a wide variety of changes. It can reduce the visual complexity of the screen, force high contrast colors between text and backgrounds, apply filters to the screen, and more. Doing this all automatically and in a way that works for every application and website is hard.

For example, how should high contrast mode handle images? Photos taken in high or low light may lack contrast, and their subjects may be hard to distinguish. What about text that is set on top of images? If the image isn’t a single color, some parts may have high contrast, but others may not. At the moment, Firefox deals with text on images by drawing a backplate on the text. All this is great, but it’s still not quite ideal. Ideally, webpages could detect when high contrast mode is enabled and then make themselves more accessible. To do that we need to know how different operating systems implement high contrast mode.

OS-level high-contrast settings

Most operating systems offer high-contrast settings. On macOS, users can indicate that they’d prefer high contrast in System Preferences → Accessibility → Display. To honor this preference, macOS applies a high contrast filter to the screen. However, it won’t do anything to inform applications that high contrast is enabled or adjust the layout of the screen. This makes it hard for apps running on macOS to adjust themselves for high-contrast mode users. Furthermore, it means that users are completely dependent on the operating system to make the right modifications.

Windows takes a very different approach. When high contrast mode is enabled, Windows exposes this information to applications. Rather than apply a filter to the screen, it forces applications to use certain high contrast (or user-defined) colors. Unlike macOS, Windows also tells applications when high-contrast settings are enabled. In this way, applications can adjust themselves to be more high-contrast friendly.

Similarly, Firefox lets users customize high contrast colors or apply different colors to web content. This option can be enabled via the colors option under “Language and Appearance” in Firefox’s “Preferences” settings on all operating systems. When we talk about colors set by the user instead of by the page or application, we describe them as forced.

Forced colors in Firefox

a screenshot of Firefox Forced Colors Menu on a dark background

As we can see, different operating systems handle high-contrast settings in different ways. This impacts how prefers-contrast works on these platforms. On Windows, because Firefox is told when a high-contrast theme is in use, prefers-contrast can detect both high contrast from Windows and forced colors from within Firefox. On macOS, because Firefox isn’t told when a high-contrast theme is in use, prefers-contrast can only detect when colors are being forced from within the browser.

Want to see what something with forced colors looks like? Here is the Google homepage on Firefox with the default Windows high-contrast theme enabled:

google homepage with windows high contrast mode enabled

Notice how Firefox overrides the background colors (forced) to black and overrides outlines to yellow.

Some things are left to be desired by this forced colors approach. On the Google homepage above, you’ll notice that the profile image no longer appears next to the sign-in button. Here’s the Amazon homepage, also in Firefox, with the same Windows high-contrast theme enabled:

screenshot of high-contrast Amazon homepage with dark background

The images under “Ride electric” and “Current customer favorites” have disappeared, and the text in the “Father’s Day deals” section has not increased in contrast.

The Why of prefers-contrast

We can’t fault Google and Amazon for the missing images and other issues in the appearance of these high-contrast homepages. Without the prefers-contrast media query, there is no standardized way to detect a visitor’s contrast preferences. Even if Google and Amazon wanted to change their webpages to make them more accessible for different contrast preferences, they couldn’t. They have no way of knowing when a user has high-contrast mode enabled, even though the browser can tell.

That’s why prefers-contrast is so important. The prefers-contrast media query allows website authors to determine a visitor’s contrast preferences and update the website accordingly. Using prefers-contrast, a website author can differentiate between low and high contrast and detect when colors are being forced like this:

@media (prefers-contrast: forced) {
    /* some awesome, accessible, high contrast css */
}

This is great because well-informed website designers are much better at making their webpages accessible than automatic high contrast settings.

The How of prefers-contrast

This section covers how something like prefers-contrast actually gets implemented in Firefox. It’s an interesting dive into the internals of a browser, but if you’re just interested in the what and why of prefers-contrast, then you’re welcome to move on to the conclusion.


We’ll start our media query implementation journey with parsing. Parsing handles turning CSS and HTML into an internal representation that the browser understands. Firefox uses a browser engine called Servo to handle this. Luckily for us, Servo makes things pretty straightforward. To hook up parsing for our media query, we’ll head over to the Servo codebase and add an enum to represent our media query.

/// Possible values for prefers-contrast media query.
#[derive(Clone, Copy, Debug, FromPrimitive, PartialEq, Parse, ToCss)]
enum PrefersContrast {
    NoPreference,
    Low,
    High,
    Forced,
}

Because we use #[derive(Parse)], Stylo will take care of generating the parsing code for us using the name of our enum and its options. It is seriously that easy. :-)
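To get a feel for what the derived parser does, here’s a hypothetical, standalone sketch (the function name and structure are mine, not Stylo’s generated code): it simply maps the kebab-case spelling of each variant name back to the variant.

```rust
// Simplified, hand-written sketch of what a derived keyword parser
// amounts to. The real generated code works on Servo's parser types;
// here we just map kebab-case keywords to enum variants directly.
#[derive(Clone, Copy, Debug, PartialEq)]
enum PrefersContrast {
    NoPreference,
    Low,
    High,
    Forced,
}

// Hypothetical helper, for illustration only.
fn parse_prefers_contrast(keyword: &str) -> Option<PrefersContrast> {
    // CSS keywords are the kebab-case spellings of the variant names.
    match keyword {
        "no-preference" => Some(PrefersContrast::NoPreference),
        "low" => Some(PrefersContrast::Low),
        "high" => Some(PrefersContrast::High),
        "forced" => Some(PrefersContrast::Forced),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_prefers_contrast("forced"), Some(PrefersContrast::Forced));
    assert_eq!(parse_prefers_contrast("bogus"), None);
}
```

The derive macro spares us from writing (and maintaining) this kind of boilerplate by hand for every media query.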

Evaluating the media query

Now that we’ve got our parsing logic hooked up, we’ll add some logic for evaluating our media query. If prefers-contrast only exposed low, no-preference, and high, then this would be as simple as creating some function that returns an instance of our enum above.

That said, the addition of a forced option adds some interesting gotchas to our media query. It’s not possible to simultaneously prefer low and high contrast. However, it’s quite common for website visitors to prefer high contrast and have forced colors. As we discussed earlier, if a visitor is on Windows, enabling high contrast also forces colors on webpages. Because enums can only be in one of their states at a time (i.e., the prefers-contrast enum can’t be high and forced simultaneously), we’ll need to move away from a single-function design.

To properly represent prefers-contrast, we’ll split our logic in half. The first half will determine if colors are being forced, and the second will determine the website visitor’s contrast preference. We can represent the presence or absence of forced colors with a boolean, but we’ll need a new enum for contrast preference. Let’s go ahead and add that now:

/// Represents the parts of prefers-contrast that explicitly deal with
/// contrast. Used in combination with information about whether or not
/// forced colors are active, this allows for evaluation of the
/// prefers-contrast media query.
#[derive(Clone, Copy, Debug, FromPrimitive, PartialEq)]
pub enum ContrastPref {
    /// High contrast is preferred. Corresponds to an accessibility theme
    /// being enabled or Firefox forcing high contrast colors.
    High,
    /// Low contrast is preferred. Corresponds to the
    /// browser.display.prefers_low_contrast pref being true.
    Low,
    /// The default value if neither high nor low contrast is enabled.
    NoPreference,
}
Voila! We now have parsing done, along with enums representing the possible states of the prefers-contrast media query and a website visitor’s contrast preference.
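To make the two-part state concrete, here’s a small illustrative sketch (the helper functions are hypothetical, purely for demonstration) of why a boolean plus a contrast-preference enum can express states that a single four-way enum can’t:

```rust
// Sketch of why the state is a pair rather than one four-way enum:
// "forced" can co-occur with a contrast preference, so we model the
// browser's state as (forced_colors: bool, pref: ContrastPref).
#[derive(Clone, Copy, Debug, PartialEq)]
enum ContrastPref {
    NoPreference,
    Low,
    High,
}

// Hypothetical helper: true when `prefers-contrast: forced` should match.
fn matches_forced(forced_colors: bool, _pref: ContrastPref) -> bool {
    forced_colors
}

// Hypothetical helper: true when `prefers-contrast: high` should match.
fn matches_high(_forced_colors: bool, pref: ContrastPref) -> bool {
    pref == ContrastPref::High
}

fn main() {
    // A Windows high-contrast theme forces colors AND signals high
    // contrast, so both queries match at once -- something a single
    // four-way enum cannot express.
    assert!(matches_forced(true, ContrastPref::High));
    assert!(matches_high(true, ContrastPref::High));
    // Forcing colors from Firefox's preferences on macOS forces colors
    // without any OS-level contrast signal.
    assert!(matches_forced(true, ContrastPref::NoPreference));
    assert!(!matches_high(true, ContrastPref::NoPreference));
}
```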

Adding functions in C++ and Rust

Now we add some logic to make prefers-contrast tick. We’ll do that in two steps. First, we’ll add a C++ function to determine contrast preferences, and then we’ll add a Rust function to call it and evaluate the media query.

Our C++ function will live in Gecko, Firefox’s layout engine. Information about high contrast settings is also collected in Gecko. This is quite handy for us. We’d like our C++ function to return our ContrastPref enum from earlier. Let’s start by generating bindings from Rust to C++ for that.

Starting in ServoBindings.toml we’ll add a mapping from our Stylo type to a Gecko type:

cbindgen-types = [
    # ...
    { gecko = "StyleContrastPref", servo = "gecko::media_features::ContrastPref" },
    # ...
]

Then, we’ll add a similar thing to Servo’s cbindgen.toml:

include = [
    # ...
    # ...
]

And with that, we’ve done it! cbindgen will generate the bindings so we have an enum to use and return from C++ code.

Our C++ function is relatively straightforward; we’ll move over to nsMediaFeatures.cpp and add it there. If the browser is resisting fingerprinting, we’ll return no-preference. Otherwise, we’ll return high or no-preference based on whether or not a high-contrast theme is enabled (UseAccessibilityTheme).

StyleContrastPref Gecko_MediaFeatures_PrefersContrast(const Document* aDocument,
                                                      const bool aForcedColors) {
    if (nsContentUtils::ShouldResistFingerprinting(aDocument)) {
        return StyleContrastPref::NoPreference;
    }
    // Neither Linux, Windows, nor Mac has a way to indicate that low
    // contrast is preferred, so the presence of an accessibility theme
    // implies that high contrast is preferred.
    // Note that macOS does not expose whether or not high contrast is
    // enabled, so for macOS users this will always evaluate to false.
    if (!!LookAndFeel::GetInt(LookAndFeel::IntID::UseAccessibilityTheme, 0)) {
        return StyleContrastPref::High;
    }
    return StyleContrastPref::NoPreference;
}

Aside: This implementation doesn’t have a way to detect a preference for low contrast. As we discussed earlier, neither Windows, macOS, nor Linux has a standard way to indicate that low contrast is preferred. Thus, for our initial implementation, we opted to keep things simple and leave the low value impossible to trigger. That’s not to say that there isn’t room for improvement here: there are various less standard ways for users to indicate that they prefer low contrast, like forcing low contrast colors on Windows, Linux, or in Firefox.

Determining contrast preferences in Firefox

Finally, we’ll add the function definition to GeckoBindings.h so that our Rust code can call it.

mozilla::StyleContrastPref Gecko_MediaFeatures_PrefersContrast(
    const mozilla::dom::Document*, const bool aForcedColors);

Now that parsing, logic, and C++ bindings are set up, we’re ready to add our Rust function for evaluating the media query. Moving back over to Servo, we’ll go ahead and add a function to do that.

Our function takes a device with information about where the media query is being evaluated. It also takes an optional query value, representing the value that the media query is being evaluated against. The query value is optional because sometimes the media query can be evaluated without a query. In this case, we evaluate the truthiness of the contrast preference that we would normally compare to the query. This is called evaluating the media query in the “boolean context”. If the contrast preference is anything other than no-preference, we go ahead and apply the CSS inside of the media query.

Contrast preference examples

That’s a lot of information, so here are some examples:

@media (prefers-contrast: high) { } /* query_value: Some(high) */
@media (prefers-contrast: low) { } /* query_value: Some(low) */
@media (prefers-contrast) { } /* query_value: None | "eval in boolean context" */

In the boolean context (the third example above) we first determine the actual contrast preference. Then, if it’s not no-preference the media query will evaluate to true and apply the CSS inside. On the other hand, if it is no-preference, the media query evaluates to false and we don’t apply the CSS.

With that in mind, let’s put together the logic for our media query!

fn eval_prefers_contrast(device: &Device, query_value: Option<PrefersContrast>) -> bool {
    let forced_colors = !device.use_document_colors();
    let contrast_pref =
        unsafe { bindings::Gecko_MediaFeatures_PrefersContrast(device.document(), forced_colors) };
    if let Some(query_value) = query_value {
        match query_value {
            PrefersContrast::Forced => forced_colors,
            PrefersContrast::High => contrast_pref == ContrastPref::High,
            PrefersContrast::Low => contrast_pref == ContrastPref::Low,
            PrefersContrast::NoPreference => contrast_pref == ContrastPref::NoPreference,
        }
    } else {
        // Only prefers-contrast: no-preference evaluates to false.
        forced_colors || (contrast_pref != ContrastPref::NoPreference)
    }
}
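To see how this behaves end to end, here’s a self-contained sketch of the same logic with Device and the Gecko binding mocked out (everything here is illustrative, not the real Stylo types):

```rust
// Standalone sketch of the evaluation logic, with the Device and the
// Gecko FFI call replaced by plain parameters so it can run on its own.
#[derive(Clone, Copy, Debug, PartialEq)]
enum PrefersContrast {
    NoPreference,
    Low,
    High,
    Forced,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum ContrastPref {
    NoPreference,
    Low,
    High,
}

fn eval_prefers_contrast(
    forced_colors: bool,
    contrast_pref: ContrastPref,
    query_value: Option<PrefersContrast>,
) -> bool {
    match query_value {
        Some(PrefersContrast::Forced) => forced_colors,
        Some(PrefersContrast::High) => contrast_pref == ContrastPref::High,
        Some(PrefersContrast::Low) => contrast_pref == ContrastPref::Low,
        Some(PrefersContrast::NoPreference) => contrast_pref == ContrastPref::NoPreference,
        // Boolean context: true unless colors aren't forced and no
        // contrast preference is set.
        None => forced_colors || contrast_pref != ContrastPref::NoPreference,
    }
}

fn main() {
    // Windows high-contrast theme: colors forced, high contrast preferred.
    assert!(eval_prefers_contrast(true, ContrastPref::High, Some(PrefersContrast::Forced)));
    assert!(eval_prefers_contrast(true, ContrastPref::High, None));
    // No preferences at all: only an explicit no-preference query matches.
    assert!(!eval_prefers_contrast(false, ContrastPref::NoPreference, None));
    assert!(eval_prefers_contrast(
        false,
        ContrastPref::NoPreference,
        Some(PrefersContrast::NoPreference)
    ));
}
```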

The last step is to register our media query so that Stylo knows we’re done. We can add our function and enum to the media features list:

pub static MEDIA_FEATURES: [MediaFeatureDescription; 54] = [
    // ...
        keyword_evaluator!(eval_prefers_contrast, PrefersContrast),
        // Note: by default this is only enabled in browser chrome and
        // ua. It can be enabled on the web via the
        // layout.css.prefers-contrast.enabled preference. See
        // disabled_by_pref for how that is done.
    // ...
];

In conclusion

And with that, we’ve finished! With some care, we’ve walked through a near-complete implementation of prefers-contrast in Firefox. Triggered updates and tests are not covered, but are relatively small details. If you’d like to see all of the code and tests for prefers-contrast take a look at the Phabricator patch here.

prefers-contrast is a powerful and important media query that makes it easier for web authors to create accessible web pages. Using prefers-contrast, websites can adjust to high-contrast and forced-colors preferences in ways they were entirely unable to before. To try prefers-contrast, grab a copy of Firefox Nightly and set layout.css.prefers-contrast.enabled to true in about:config. Now, go forth and build a more accessible web! 🎉

Mozilla works to make the internet a global public resource that is open and accessible to all. The prefers-contrast media query, and other work by our accessibility team, ensures we uphold that commitment to our low-vision users and other users with disabilities. If you’re interested in learning more about Mozilla’s accessibility work you can check out the accessibility blog or the accessibility wiki page.

The post Adding prefers-contrast to Firefox appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.3 has been released!

Hi everyone,

Just a quick note to notify all that SeaMonkey 2.53.3 has been released.

Keep safe and healthy, everyone!


Open Policy & AdvocacyNext Steps for Net Neutrality

Two years ago we first brought Mozilla v. FCC in federal court, in an effort to save the net neutrality rules protecting American consumers. Mozilla has long fought for net neutrality because we believe that the internet works best when people control their own online experiences.

Today is the deadline to petition the Supreme Court for review of the D.C. Circuit decision in Mozilla v. FCC. After careful consideration, Mozilla—as well as its partners in this litigation—are not seeking Supreme Court review of the D.C. Circuit decision. Even though we did not achieve all that we hoped for in the lower court, the court recognized the flaws of the FCC’s action and sent parts of it back to the agency for reconsideration. And the court cleared a path for net neutrality to move forward at the state level. We believe the fight is best pursued there, as well as on other fronts including Congress or a future FCC.

Net neutrality is more than a legal construct. It is a reflection of the fundamental belief that ISPs have tremendous power over our online experiences and that power should not be further concentrated in actors that have often demonstrated a disregard for consumers and their digital rights. The global pandemic has moved even more of our daily lives—our work, school, conversations with friends and family—online. Internet videos and social media debates are fueling an essential conversation about systemic racism in America. At this moment, net neutrality protections ensuring equal treatment of online traffic are critical. Recent moves by ISPs to favor their own content channels or impose data caps and usage-based pricing make concerns about the need for protections all the more real.

The fight for net neutrality will continue on. The D.C. Circuit decision positions the net neutrality movement to continue on many fronts, starting with a defense of California’s strong new law to protect consumers online—a law that was on hold pending resolution of this case.

Other states have followed suit and we expect more to take up the mantle. We will look to a future Congress or future FCC to take up the issue in the coming months and years. Mozilla is committed to continuing our work, with our broad community of allies, in this movement to defend the web and consumers and ensure the internet remains open and accessible to all.

The post Next Steps for Net Neutrality appeared first on Open Policy & Advocacy.

Web Application SecurityPerformance Improvements via Formally-Verified Cryptography in Firefox

Cryptographic primitives, while extremely complex and difficult to implement, audit, and validate, are critical for security on the web. To ensure that NSS (Network Security Services, the cryptography library behind Firefox) abides by Mozilla’s principle of user security being fundamental, we’ve been working with Project Everest and the HACL* team to bring formally-verified cryptography into Firefox.

In Firefox 57, we introduced formally-verified Curve25519, which is a mechanism used for key establishment in TLS and other protocols. In Firefox 60, we added ChaCha20 and Poly1305, providing high-assurance authenticated encryption. Firefox 69, 77, and 79 improve and expand these implementations, providing increased performance while retaining the assurance granted by formal verification.

Performance & Specifics

For key establishment, we recently replaced the 32-bit implementation of Curve25519 with one from the Fiat-Crypto project. The arbitrary-precision arithmetic functions of this implementation are proven to be functionally correct, and it improves performance by nearly 10x over the previous code. Firefox 77 updates the 64-bit implementation with new HACL* code, benefitting from a ~27% speedup. Most recently, Firefox 79 also brings this update to Windows. These improvements are significant: Telemetry shows Curve25519 to be the most widely used elliptic curve for ECDH(E) key establishment in Firefox, and increased throughput reduces energy consumption, which is particularly important for mobile devices.

64-bit Curve25519 with HACL*

32-bit Curve25519 with Fiat-Crypto

For encryption and decryption, we improved the performance of ChaCha20-Poly1305 in Firefox 77. Throughput is doubled by taking advantage of vectorization with 128-bit and 256-bit integer arithmetic (via the AVX2 instruction set on x86-64 CPUs). When these features are unavailable, NSS will fall back to an AVX or scalar implementation, both of which have been further optimized.

ChaCha20-Poly1305 with HACL* and AVX2

The HACL* project has introduced new techniques and libraries to improve efficiency in writing verified primitives for both scalar and vectorized variants. This allows aggressive code sharing and reduces the verification effort across many different platforms.

What’s Next?

For Firefox 81, we intend to incorporate a formally-verified implementation of the P256 elliptic curve for ECDSA and ECDH. Middle-term targets for verified implementations include GCM, the P384 and P521 elliptic curves, and the ECDSA signature scheme itself. While there remains work to be done, these updates provide an improved user experience and ease the implementation burden for future inclusion of platform-optimized primitives.

The post Performance Improvements via Formally-Verified Cryptography in Firefox appeared first on Mozilla Security Blog.

Mozilla UXUX Book Club Recap: Writing is Designing, in Conversation with the Authors

The Firefox UX book club comes together a few times a year to discuss books related to the user experience practice. We recently welcomed authors Michael Metts and Andy Welfle to discuss their book Writing is Designing: Words and the User Experience (Rosenfeld Media, Jan. 2020).

Photo of Writing is Designing with notebook, coffee cup, and computer mouse on a table.

To make the most of our time, we collected questions from the group beforehand, organized them into themes, and asked people to upvote the ones they were most interested in.

An overview of Writing is Designing

“In many product teams, the words are an afterthought, and come after the “design,” or the visual and experiential system. It shouldn’t be like that: the writer should be creating words as the rest of the experience is developed. They should be iterative, validated with research, and highly collaborative. Writing is part of the design process, and writers are designers.” — Writing is Designing

Andy and Michael kicked things off with a brief overview of Writing is Designing. They highlighted how writing is about fitting words together and design is about solving problems. Writing as design brings the two together. These activities — writing and designing — need to be done together to create a cohesive user experience.

They reiterated that effective product content must be:

  • Usable: It makes it easier to do something. Writing should be clear, simple, and easy.
  • Useful: It supports user goals. Writers need to understand a product’s purpose and their audience’s needs to create useful experiences.
  • Responsible: What we write can be misused by people or even algorithms. We must take care in the language we use.

We then moved onto Q&A which covered these themes and ideas.

On writing a book that’s not just for UX writers

“Even if you only do this type of writing occasionally, you’ll learn from this book. If you’re a designer, product manager, developer, or anyone else who writes for your users, you’ll benefit from it. This book will also help people who manage or collaborate with writers, since you’ll get to see what goes into this type of writing, and how it fits into the product design and development process.” — Writing is Designing

You don’t have to be a UX writer or content strategist to benefit from Writing is Designing. The book includes guidance for anyone involved in creating content for a user experience, including designers, researchers, engineers, and product managers. Writing is just as much of a design tool as Sketch or Figma; it’s just that the material is words, not pixels.

When language perpetuates racism

“The more you learn and the more you are able to engage in discussions about racial justice, the more you are able to see how it impacts everything we do. Not questioning systems can lead to perpetuating injustice. It starts with our workplaces. People are having important conversations and questioning things that already should have been questioned.” — Michael Metts

Given the global focus on racial justice issues, it wasn’t surprising that we spent a good part of our time discussing how the conversation intersects with our day-to-day work.

Andy talked about the effort at Adobe, where he is the UX Content Strategy Manager, to expose racist terminology in its products, such as ‘master-slave’ and ‘whitelist-blacklist’ pairings. It’s not just about finding a neutral replacement term that appears to users in the interface, but rethinking how we’ve defined these terms and underlying structures entirely in our code.

Moving beyond anti-racist language

“We need to focus on who we are doing this for. We worry what we look like and that we’re doing the right thing. And that’s not the priority. The goal is to dismantle harmful systems. It’s important for white people to get away from your own feelings of wanting to look good. And focus on who you are doing it for and making it a better world for those people.” — Michael Metts

Beyond the language that appears in our products, Michael encouraged the group to educate themselves, follow Black writers and designers, and be open and willing to change. Any effective UX practitioner needs to approach their work with a sense of humility and openness to being wrong.

Supporting racial justice and the Black Lives Matter movement must also include raising long-needed conversations in the workplace, asking tough questions, and sitting with discomfort. Michael recommended reading How To Be An Antiracist by Ibram X. Kendi and So You Want to Talk About Race by Ijeoma Oluo.

Re-examining and revisiting norms in design systems

“In design systems, those who document and write are the ones who are codifying information for long term. It’s how terms like whitelist and blacklist, and master/slave keep showing up, decade after decade, in our stuff. We have a responsibility not to be complicit in codifying and continuing racist systems.” — Andy Welfle

Part of our jobs as UX practitioners is to codify and frame decisions. Design systems, for example, document content standards and design patterns. Andy reminded us that our own biases and assumptions can be built-in to these systems. Not questioning the systems we build and contribute to can perpetuate injustice.

It’s important to keep revisiting our own systems and asking questions about them. Why did we frame it this way? Could we frame it in another way?

Driving towards clarity early on in the design process

“It’s hard to write about something without understanding it. While you need clarity if you want to do your job well, your team and your users will benefit from it, too.” — Writing is Designing

Helping teams align and get clear on goals and user problems is a big part of a product writer’s job. While writers are often the ones to ask these clarifying questions, every member of the design team can and should participate in this clarification work—it’s the deep strategy work we must do before we can write and visualize the surface manifestation in products.

Before you open your favorite design tool (be it Sketch, Figma, or Adobe XD) Andy and Michael recommend writers and visual designers start with the simplest tool of all: a text editor. There you can do the foundational design work of figuring out what you’re trying to accomplish.

The longevity of good content

A book club member asked, “How long does good content last?” Andy’s response: “As long as it needs to.”

Software work is never ‘done.’ Products and the technology that supports them continue to evolve. With that in mind, there are key touch points to revisit copy. For example, when a piece of desktop software becomes available on a different platform like tablet or mobile, it’s a good time to revisit your context (and entire experience, in fact) to see if it still works.

Final thoughts—an ‘everything first’ approach

In the grand scheme of tech things, UX writing is still a relatively new discipline. Books like Writing is Designing are helping to define and shape the practice.

When asked (at another meet-up, not our own) if he’s advocating for a ‘content-first approach,’ Michael’s response was that we need an ‘everything first approach’ — meaning, all parties involved in the design and development of a product should come to the planning table together, early on in the process. By making the case for writing as a strategic design practice, this book helps solidify a spot at that table for UX writers.

Prior texts read by Mozilla’s UX book club

SUMO BlogLet’s meet online: Virtual All Hands 2020

Hi folks,

Here I am again sharing with you the amazing experience of another All Hands.

This time no traveling was involved, and every meeting, coffee, and chat happened online.

Virtual events seem to be the theme of 2020, and if on one side we strongly missed being together with colleagues and contributors, on the other hand we were grateful for the possibility of connecting anyway.

The Virtual All Hands ran from the 15th of June to the 18th and was full of events and meetups.

As the SUMO team, we had three events running on Tuesday, Wednesday, and Thursday, along with the plenaries and demos that were presented on Hubs. Floating in a virtual reality space while listening to the new products and features that will be introduced in the second half of the year was a super exciting and really enjoyable experience.

Let’s talk about our schedule, shall we?

On Tuesday we ran our community update meeting, in which we focused on what happened in the last six months, the projects that we successfully completed, and the ones left for the next half of the year.

We talked a lot about the community plan and the next steps we need to take to complete everything and release the new onboarding experience before the end of the year.

We did not forget to mention everything that has happened to the platform. The new responsive redesign and the ask-a-question flow have greatly changed the face of the support forum, and everything was implemented while the team was working on a solution for the wave of spam we have been experiencing in the last month.

If you want to read more about this, here are some forum posts we wrote in the last few weeks you can go through regarding these topics:

On Wednesday we focused on presenting the campaign for the Respond Tool. For those of you who don’t know what I am talking about, we shared some resources regarding the tool here. The campaign runs until today, but we still need your input on many aspects, so join us on the tool!

The main points we went through during the meeting were:

  • Introduction about the tool and the announcement on the forum
  • Updates on Mozilla Firefox Browser
  • Update about the Respond Tool
  • Demo (how to reply, moderate, or use canned response) – Teachable course
  • Bugs. If you use the Respond Tool, please file bugs here
  • German and Spanish speakers needed: we have a high volume of review in Spanish and German that need your help!

On Thursday we focused on Conversocial, the new tool that replaces Buffer from now on. Some contributors have already joined us on the tool, and we are really happy with everyone’s excitement about using it and finally having a Twitter account fully dedicated to SUMO. @firefoxsupport is here: please go, share, and follow!

The agenda of the meeting was the following:

  • Introduction about the tool
  • Contributor roles
  • Escalation process
  • Demo on Conversocial
  • @FirefoxSupport overview

If you were invited to the All Hands or you have NDA access, you can access the meetings at this link:

Thank you for your participation and your enthusiasm, as always. We miss live interaction, but we have the opportunity to use some great tools as well, and we are happy that so many people could enjoy these opportunities and helped create such a nice environment during the few days of the All Hands.

See you really soon!

The SUMO Team

hacks.mozilla.orgSecuring Gamepad API

Firefox release dates for Gamepad API updates

As part of Mozilla’s ongoing commitment to improve the privacy and security of the web platform, over the next few months we will be making some changes to how the Gamepad_API works.

Here are the important dates to keep in mind:

25 of August 2020 (Firefox 81 Beta/Developer Edition):
The .getGamepads() method will only return gamepads if called in a “secure context” (e.g., an https:// page).
22 of September 2020 (Firefox 82 Beta/Developer Edition):
Switch to requiring a permission policy for third-party contexts/iframes.

We are collaborating on making these changes with folks from the Chrome team and other browser vendors. We will update this post with links to their announcements as they become available.

Restricting gamepads to secure contexts

Starting with Firefox 81, the Gamepad API will be restricted to what are known as “secure contexts” (bug 1591329). Basically, this means that Gamepad API will only work on sites served as “https://”.

For the next few months, we will show a developer console warning whenever the .getGamepads() method is called from an insecure context.

From Firefox 81, we plan to require a secure context for .getGamepads() by default. To avoid significant code breakage, calling .getGamepads() from an insecure context will return an empty array instead of throwing an error. We will display this console warning indefinitely:

Firefox developer console

The developer console now shows a warning when the .getGamepads() method is called from insecure contexts
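If you want your code to keep working across these changes, a small defensive wrapper helps. This is a hypothetical helper, not part of any API: the `win` parameter stands in for the real window object so the logic can be exercised outside a browser.

```javascript
// Hypothetical defensive wrapper around the Gamepad API.
// `win` stands in for the global window object.
function getGamepadsSafely(win) {
  // Insecure contexts return an empty array in Firefox 81+, and older
  // browsers may lack the API entirely, so guard both cases.
  if (!win.isSecureContext || typeof win.navigator.getGamepads !== "function") {
    return [];
  }
  // getGamepads() returns an array-like list; normalize it to an array.
  return Array.from(win.navigator.getGamepads());
}
```

Calling this from an insecure page simply yields an empty list, matching the behavior Firefox will adopt, rather than surprising your game loop with an exception.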

Permission Policy integration

From Firefox 82, third-party contexts (i.e., <iframe>s that are not same origin) that require access to the Gamepad API will have to be explicitly granted access by the hosting website via a Permissions Policy.

In order for a third-party context to be able to use the Gamepad API, you will need to add an “allow” attribute to your HTML like so:

  <iframe allow="gamepad" src="">

Once this ships, calling .getGamepads() from a disallowed third-party context will throw a JavaScript security error.

You can track our implementation progress in bug 1640086.


As WebVR and WebXR already require a secure context to work, these changes shouldn’t affect any sites relying on .getGamepads(). In fact, everything should continue to work as it does today.

Future improvements to privacy and security

When we ship APIs we often find that sites use them in unintended ways – mostly creatively, sometimes maliciously. As new privacy and security capabilities are added to the web platform, we retrofit those solutions to better protect users from malicious sites and third-party trackers.

Adding “secure contexts” and “permission policy” to the Gamepad API is part of this ongoing effort to improve the overall privacy and security of the web. Although we know these changes can be a short-term inconvenience to developers, we believe it’s important to constantly evolve the web to be as secure and privacy-preserving as it can be for all users.

The post Securing Gamepad API appeared first on Mozilla Hacks - the Web developer blog.

Mozilla L10NL10n Report: June 2020 Edition


New community/locales added

New content and projects

What’s new or coming up in Firefox desktop


Upcoming deadlines:

  • Firefox 78 is currently in beta and will be released on June 30. The deadline to update localization was on Jun 16.
  • The deadline to update localizations for Firefox 79, currently in Nightly, will be July 14 (4 weeks after the previous deadline).
Fluent and migration wizard

Going back to the topic of how to use Fluent’s flexibility to your advantage, we recently ported the Migration Wizard to Fluent. That’s the dialog displayed to users when they import content from other browsers.

Before Fluent, this is how the messages for “Bookmarks” looked:


That’s one string for each supported browser, even if they’re all identical. This is how the same message looks in Fluent:

browser-data-bookmarks-checkbox =
  .label = { $browser ->
     [ie] Favorites
     [edge] Favorites
    *[other] Bookmarks
  }

If all browsers use the same translations in a specific language, this can take advantage of the asymmetric localization concept available in Fluent, and be simplified (“flattened”) to just:

browser-data-bookmarks-checkbox =
  .label = Translated_bookmarks

The same is true the other way around. The section comment associated with this group of strings says:

## Browser data types
## All of these strings get a $browser variable passed in.
## You can use the browser variable to differentiate the name of items,
## which may have different labels in different browsers.
## The supported values for the $browser variable are:
##   360se
##   chrome
##   edge
##   firefox
##   ie
##   safari
## The various beta and development versions of edge and chrome all get
## normalized to just "edge" and "chrome" for these strings.

So, if English has a flat string without selectors:

browser-data-cookies-checkbox =
    .label = Cookies

A localization can still provide variants if, for example, Firefox is using a different term for cookies than other browsers:

browser-data-cookies-checkbox =
    .label = { $browser ->
        [firefox] Macarons
       *[other] Cookies
    }
HTTPS-Only Error page

There’s a new mode, called “HTTPS-Only”, currently being tested in Nightly: when users visit a page not available over a secure connection, Firefox displays a warning.

In order to test this page, you can change the value of the preference in about:config, then visit this website. Make sure to test the page with the window at different sizes, to make sure all elements fit.
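
For reference, the preference to flip in about:config should be the following (this name is our best understanding; double-check it in a current Nightly):

```
dom.security.https_only_mode = true
```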

What’s new or coming up in mobile

Concerning mobile right now, we just got updated screenshots for the latest v27 of Firefox for iOS:

We are trying out several options for screenshots going forward, so stay tuned so you can tell us which one you prefer.

Otherwise, our Fenix launch is still in progress. We are now string frozen, so if you’d like to catch up and test your work, now is the time.

You should have until July 18th to finish all l10n work on this project, before the cut-off date.

What’s new or coming up in web projects

Firefox Accounts

A third file called main.ftl was added to Pontoon a couple of weeks ago in preparation for supporting subscription-based products. This component contains payment strings for the subscription platform, which will be rolled out to a few countries initially. The staging server will be opened up for localization testing in the coming days. An email with testing instructions and information on supported markets will be sent out as soon as all the information is gathered and confirmed. Stay tuned.

In the past month, several dozen files were added to Pontoon, including new pages. Many of the migrated pages include updates. To help prioritize, please focus on:

  • resolving the orange warnings first. This usually means that a brand or product name was not converted in the process, so the placeables no longer match those in English.
  • completing translation one page at a time. Coordinate with other community members to split up the work page by page, and conduct peer reviews.

Speaking of brands, the browser comparison pages are laden with brand and product names of well-known companies. Not all of these names went into brands.ftl, because some are mentioned only once or twice, or are limited to a single file, and we do not want to overload brands.ftl with rarely used names. The general rule for these third-party brand and product names is to keep them unchanged whenever possible.

We skipped WNP#78 but we will have WNP#79 ready for localization in the coming weeks.

Transvision now supports the Fluent format. You can leverage the tool the same way you did before.

What’s new or coming up in Foundation projects

Donate websites

Back in November last year, we mentioned we were working on making the remaining content (the content stored in a CMS) from the new donate website localizable. The site was launched in February, but the CMS localization systems still need some work before the CMS-based content can be properly localized.

Over the next few weeks, Théo will be working closely with the makers of the CMS the site uses to fix the remaining issues, develop new localization capabilities, and enable CMS content localization.

Once the systems are operational, and if you’re already translating the Donate website UI project, we will add two new projects to your dashboard with the remaining content: one for the Thunderbird instance and another for the Mozilla instance. The vast majority of this content has already been translated, so you should be able to leverage previous translations using the translation memory feature in Pontoon. But because some longer strings may have been split differently by the system, they may not show up in translation memory. For this reason, we will re-enable the old “Fundraising” project in Pontoon, in read-only mode, so that you can easily search and access those translations if you need to.

What’s new or coming up in Pontoon

  • Translate Terminology. We’ve added a new Terminology project to Pontoon, which contains all terms from Mozilla’s termbase and lets you translate them. As new terms are added, they will instantly appear in the project, ready for translation. There’s also a “Translate” link next to each term in the Terms tab and panel, which makes it easy to translate terms as they are used.
  • More relevant API results. Thanks to Vishnudas, system projects (e.g. Tutorial) are now excluded from the default list of projects returned by the API. You can still include system projects in the response if you set the includeSystem flag to true.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver, and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

  • Robb P., who has become not only a top contributor for the Romanian community, but also a reliable and proactive localizer.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

hacks.mozilla.orgNew in Firefox 78: DevTools improvements, new regex engine, and abundant web platform updates

A new stable Firefox version rolls out today, providing new features for web developers. A new regex engine, updates to the ECMAScript Intl API, new CSS selectors, enhanced support for WebAssembly, and many improvements to the Firefox Developer Tools await you.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tool improvements

Source-mapped variables, now also in Logpoints

With our improvements over the recent releases, debugging your projects with source maps will feel more reliable and faster than ever. But there are more capabilities that we can squeeze out of source maps. Did you know that Firefox’s Debugger also maps variables back to their original name? This especially helps babel-compiled code with changed variable names and added helper variables. To use this feature, pause execution and enable the “Map” option in the Debugger’s “Scopes” pane.

As a hybrid between the worlds of the DevTools Console and Debugger, Logpoints make it easy to add console logs to live code–or any code, once you’ve added them to your toolbelt. New in Firefox 75, original variable names in Logpoints are mapped to the compiled scopes, so references will always work as expected.

Using variable mapping and logpoints in Debugger

To make mapping scopes work, ensure that your source maps are correctly generated and include enough data. In webpack, this means avoiding the “cheap” and “nosources” options for the devtool configuration.

Promises and frameworks error logs get more detailed

Uncaught promise errors are critical in modern asynchronous JavaScript, and even more so in frameworks like Angular. In Firefox 78, you can expect to see all details for thrown errors show up properly, including their name and stack:

Before/after comparison for improved error logs

The implementation of this functionality was only possible through the close collaboration between the SpiderMonkey engineering team and a contributor, Tom Schuster. We are investigating how to improve error logging further, so please let us know if you have suggestions.

Monitoring failed request issues

Failed or blocked network requests come in many varieties. Resources may be blocked by tracking protection, add-ons, CSP/CORS security configurations, or flaky connectivity, for example. A resilient web tries to gracefully recover from as many of these cases as possible automatically, and an improved Network monitor can help you with debugging them.

Failed and blocked requests are annotated with additional reasons

Firefox 78 provides detailed reports in the Network panel for requests blocked by Enhanced Tracking Protection, add-ons, and CORS.

Quality improvements

Faster DOM navigation in Inspector

Inspector now opens and navigates a lot faster than before, particularly on sites with many CSS custom properties. Some modern CSS frameworks were especially affected by slowdowns in the past. If you see other cases where Inspector isn’t as fast as expected, please report a performance issue. We really appreciate your help in reporting performance issues so that we can keep improving.

Remotely navigate your Firefox for Android for debugging

Remote debugging’s new navigation elements make it more seamless to test your content for mobile with the forthcoming new edition of Firefox for Android. After hooking up the phone via USB and connecting remote debugging to a tab, you can navigate and refresh pages from your desktop.

Early-access DevTools features in Developer Edition

Developer Edition is Firefox’s pre-release channel. You get early access to tooling and platform features. Its settings enable more functionality for developers by default. We like to bring new features quickly to Developer Edition to gather your feedback, including the following highlights.

Async stacks in Console & Debugger

We’ve built new functionality to better support async stacks in the Console and Debugger, extending stacks with information about the events, timers, and promises that lead to the execution of a specific line of code. We have been improving asynchronous stacks for a while now, based on early feedback from developers using Firefox DevEdition. In Firefox 79, we expect to enable this feature across all release channels.

Async stacks add promise execution for both Console and Debugger

Console shows failed requests

Network requests with 4xx/5xx status codes now log as errors in the Console by default. To make them easier to understand, each entry can be expanded to view embedded network details.

Server responses with 4xx/5xx status responses logged in the Console

Web platform updates

New CSS selectors :is and :where

Version 78 sees Firefox add support for the :is() and :where() pseudo-classes, which allow you to present a list of selectors to the browser. The browser will then apply the rule to any element that matches one of those selectors. This can be useful for reducing repetition when writing a selector that matches a large number of different elements. For example:

header p, main p, footer p,
header ul, main ul, footer ul { … }

can be cut down to:

:is(header, main, footer) :is(p, ul) { … }

Note that :is() is not a particularly new thing: it has been supported for a while in various browsers, sometimes with a prefix and the name any() (e.g. :-moz-any()). Other browsers have used the name :matches(). :is() is the final standard name that the CSSWG agreed on.

:is() and :where() basically do the same thing, but what is the difference? Well, :is() counts towards the specificity of the overall selector, taking the specificity of its most specific argument. However, :where() has a specificity value of 0 — it was introduced to provide a solution to the problems found with :is() affecting specificity.

What if you want to add styling to a bunch of elements with :is(), but then later on want to override those styles using a simple selector? You won’t be able to because class selectors have a higher specificity. This is a situation in which :where() can help. See our :where() example for a good illustration.
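
For illustration, a quick sketch of the difference (the class names here are made up):

```css
/* :is() adopts the specificity of its most specific argument, so this
   selector has class-level specificity (0,1,1): */
:is(.sidebar, .footer) p { color: gray; }

/* A later plain element selector (0,0,1) cannot override it: */
p { color: black; } /* paragraphs inside .sidebar stay gray */

/* :where() always contributes zero specificity (0,0,1 overall), so the
   same plain `p` rule above would now win the cascade: */
:where(.sidebar, .footer) p { color: gray; }
```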

Styling forms with CSS :read-only and :read-write

At this point, HTML forms have a large number of pseudo-classes available to style inputs based on different states related to their validity — whether they are required or optional, whether their data is valid or invalid, and so on. You can find a lot more information in our UI pseudo-classes article.

In this version, Firefox has enabled support for the non-prefixed versions of :read-only and :read-write. As their names suggest, they style elements based on whether their content is editable:

input:read-only, textarea:read-only {
  border: 0;
  box-shadow: none;
  background-color: white;
}

textarea:read-write {
  box-shadow: inset 1px 1px 3px #ccc;
  border-radius: 5px;
}
(Note: Firefox has supported these pseudo-classes with a -moz- prefix for a long time now.)

You should be aware that these pseudo-classes are not limited to form elements. You can use them to style any element based on whether it is editable or not, for example a <p> element with or without contenteditable set:

p:read-only {
  background-color: red;
  color: white;
}

p:read-write {
  background-color: lime;
}

New regex engine

Thanks to the RegExp engine in SpiderMonkey, Firefox now supports all new regular expression features introduced in ECMAScript 2018, including lookbehinds (positive and negative), the dotAll flag, Unicode property escapes, and named capture groups.

Lookbehind and negative lookbehind assertions make it possible to match patterns that are (or are not) preceded by another pattern. In this example, a negative lookbehind matches a number only if it is not preceded by a minus sign; a positive lookbehind matches only the numbers that are preceded by one.

'1 2 -3 0 -5'.match(/(?<!-)\d+/g);
// → Array [ "1", "2", "0" ]

'1 2 -3 0 -5'.match(/(?<=-)\d+/g);
// → Array [ "3", "5" ]
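
The dotAll flag (s), also part of ES2018 but not shown above, deserves a quick illustration:

```javascript
// Without the s flag, "." does not match line terminators:
console.log(/foo.bar/.test('foo\nbar'));  // → false

// With the s (dotAll) flag, "." matches any character, including "\n":
console.log(/foo.bar/s.test('foo\nbar')); // → true
```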

Unicode property escapes are written in the form \p{…} (and the negated form \P{…}). They can be used to match any decimal number in Unicode, for example. Here’s a Unicode-aware version of \d that matches any Unicode decimal number instead of just the ASCII digits 0-9.

const regex = /^\p{Decimal_Number}+$/u;
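
For example, the pattern above accepts digits from any script, while plain \d does not:

```javascript
const regex = /^\p{Decimal_Number}+$/u;

// Arabic-Indic digits (U+0663, U+0665) are in the Decimal_Number category:
console.log(regex.test('٣٥'));    // → true

// Plain \d only matches the ASCII digits 0-9:
console.log(/^\d+$/.test('٣٥'));  // → false
```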

Named capture groups allow you to refer to a certain portion of a string that a regular expression matches, as in:

let re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/u;
let result = re.exec('2020-06-30');
result.groups;
// → { year: "2020", month: "06", day: "30" }

ECMAScript Intl API updates

Rules for formatting lists vary from language to language. Implementing your own proper list formatting is neither straightforward nor fast. Thanks to the new Intl.ListFormat API, the JavaScript engine can now format lists for you:

const lf = new Intl.ListFormat('en');
lf.format(["apples", "pears", "bananas"]);
// → "apples, pears, and bananas"

const lfdis = new Intl.ListFormat('en', { type: 'disjunction' });
lfdis.format(["apples", "pears", "bananas"]);
// → "apples, pears, or bananas"

Enhanced language-sensitive number formatting as defined in the Unified NumberFormat proposal is now fully implemented in Firefox. See the NumberFormat constructor documentation for the new options available.
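
A few of the newly supported options, sketched here for the 'en' locale (exact output can vary slightly across ICU versions):

```javascript
// Compact notation, from the Unified NumberFormat proposal:
const compact = new Intl.NumberFormat('en', { notation: 'compact' });
console.log(compact.format(1234567)); // → "1.2M"

// Unit formatting:
const speed = new Intl.NumberFormat('en', {
  style: 'unit',
  unit: 'kilometer-per-hour',
});
console.log(speed.format(50)); // → "50 km/h"

// Sign display:
const delta = new Intl.NumberFormat('en', { signDisplay: 'always' });
console.log(delta.format(3)); // → "+3"
```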


Firefox now supports ParentNode.replaceChildren(), which replaces the existing children of a Node with a specified new set of children. This is typically represented as a NodeList, such as that returned by Document.querySelectorAll().

This method provides an elegant way to empty a node of children, if you call replaceChildren() with no arguments. It also is a nice way to shift nodes from one element to another. For example, in this case, we use two buttons to transfer selected options from one <select> box to another:

const noSelect = document.getElementById('no');
const yesSelect = document.getElementById('yes');
const noBtn = document.getElementById('to-no');
const yesBtn = document.getElementById('to-yes');

yesBtn.addEventListener('click', () => {
  const selectedTransferOptions = document.querySelectorAll('#no option:checked');
  const existingYesOptions = document.querySelectorAll('#yes option');
  yesSelect.replaceChildren(...selectedTransferOptions, ...existingYesOptions);
});

noBtn.addEventListener('click', () => {
  const selectedTransferOptions = document.querySelectorAll('#yes option:checked');
  const existingNoOptions = document.querySelectorAll('#no option');
  noSelect.replaceChildren(...selectedTransferOptions, ...existingNoOptions);
});

You can see the full example at ParentNode.replaceChildren().

WebAssembly multi-value support

Multi-value is a proposed extension to core WebAssembly that enables functions to return many values, and enables instruction sequences to consume and produce multiple stack values. The article Multi-Value All The Wasm! explains what this means in greater detail.

WebAssembly large integer support

WebAssembly now supports import and export of 64-bit integer function parameters (i64) using BigInt from JavaScript.
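
A self-contained sketch of what this looks like from JavaScript: the module below is hand-assembled from raw bytes (equivalent to the text format shown in the comment), exports a function returning an i64, and with BigInt integration the result surfaces in JavaScript as a BigInt. The byte layout was written by hand for this example; verify it before reuse.

```javascript
// A minimal WebAssembly module, equivalent to the text format:
//   (module (func (export "f") (result i64) i64.const 42))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7e,       // type section: () -> i64
  0x03, 0x02, 0x01, 0x00,                         // function section
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export "f" (func 0)
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x42, 0x2a, 0x0b, // body: i64.const 42; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// The i64 return value arrives in JavaScript as a BigInt:
console.log(instance.exports.f()); // → 42n
```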


We’d like to highlight three changes to the WebExtensions API for this release:

  • When using proxy.onRequest, a filter that limits based on tab id or window id is now correctly applied. This is useful for add-ons that want to provide proxy functionality in just one window.
  • Clicking within the context menu from the “all tabs” dropdown now passes the appropriate tab object. In the past, the active tab was erroneously passed.
  • When using the downloads API with the saveAs option, the most recently used directory is now remembered. While this data is not available to developers, it is very convenient for users.

TLS 1.0 and 1.1 removal

Support for the Transport Layer Security (TLS) protocol’s versions 1.0 and 1.1 has been dropped from all browsers as of Firefox 78 and Chrome 84. Read TLS 1.0 and 1.1 Removal Update for the previous announcement and what actions to take if you are affected.

Firefox 78 is an ESR release

Firefox follows a rapid release schedule: every four weeks we release a new version of Firefox.

In addition to that, we provide a new Extended Support Release (ESR) for enterprise users once a year. Firefox 78 ESR includes all of the enhancements since the last ESR (Firefox 68), along with many new features to make your enterprise deployment easier.

A noteworthy feature: In previous ESR versions, Service Workers (and the Push API) were disabled. Firefox 78 is the first ESR release to support them. If your enterprise web application uses AppCache to provide offline support, you should migrate to these new APIs as soon as possible, as AppCache will not be available in the next major ESR in 2021.

Firefox 78 is the last supported Firefox version for macOS users of OS X 10.9 Mavericks, OS X 10.10 Yosemite and OS X 10.11 El Capitan. These users will be moved to the Firefox ESR channel by an application update. For more details, see the Mozilla support page.

See also the release notes for Firefox for Enterprise 78.

The post New in Firefox 78: DevTools improvements, new regex engine, and abundant web platform updates appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla’s analysis: Brazil’s proposed fake news law harms privacy, security, and free expression

UPDATE: On 30 June 2020, the Brazilian Senate approved “PL 2630/2020” (the fake news bill) with some key amendments that made government identity verification of accounts optional, excluded social networks from the mandatory traceability provision (while keeping this requirement for messaging services such as Signal and WhatsApp), and made other related changes. All the other concerns highlighted below remain part of the bill approved by the Senate. Additionally, Article 37 of the bill mandates that social networks and private messaging apps appoint legal representatives in Brazil with the power to remotely access user databases/logs. This pseudo data localization measure raises enormous privacy concerns and undermines the due process protections provided by US laws such as the CLOUD Act and the Electronic Communications Privacy Act, both of which require US providers to satisfy certain procedural safeguards before turning over private data to foreign law enforcement agents.

The text now moves to the Chamber of Deputies for debate and approval. The changes made to the bill since its initial version was introduced on June 25 show that, while there have been some improvements (in the face of widespread criticism), many dangerous provisions remain. We remain committed to working with Brazilian lawmakers to resolve the underlying issues while protecting privacy, security, and freedom of expression. The local civil society coalition Coalizão Direitos na Rede has been very influential in the debate so far, should be consulted as the bill moves to the Chamber of Deputies, and is also a good source of information about what is happening.

Original post, 29 June 2020

While fake news is a real problem, the Brazilian Law of Freedom, Liability, and Transparency on the Internet (known as the “fake news bill”) is not a solution. This hastily written bill, which could be approved by the Senate as soon as today, represents a serious threat to privacy, security, and free expression. It is a major step backwards for a country that has been hailed around the world for its pioneering Internet Civil Rights Law (Marco Civil da Internet) and, more recently, its General Data Protection Law (Lei Geral de Proteção de Dados Pessoais).

Substantive concerns

While this bill poses many threats to internet health, we are particularly concerned by the following provisions:

Breaking end-to-end encryption: According to the latest informal congressional report, the bill would mandate that all communication providers retain records of forwards and other forms of bulk communication, including their origin, for a period of three months. As companies are required to hand much of this information over to the government, in essence this provision would create a centralized, perpetually updated log of the digital interactions of nearly every user in Brazil. Apart from the privacy and security risks such a sweeping data retention mandate entails, the bill appears infeasible to implement in end-to-end encrypted services such as Signal and WhatsApp. It would force companies to leave the country or weaken the technical protections that Brazilians rely on to keep their messages, medical records, banking details, and other private information secure.

Mandating real identities for account creation: The bill also broadly attacks anonymity and pseudonymity. If approved, in order to use social media, Brazilian users would have to verify their identity with a phone number (which itself requires official ID in Brazil), and foreigners would have to provide a passport. The bill also requires telecommunications companies to share a list of active users (with their cellphone numbers) with social media companies to prevent fraud. At a time when many are concerned about the surveillance economy, this massive expansion of data collection and identification seems particularly significant. Just weeks ago, the Brazilian Supreme Court ruled that mandatory sharing of subscriber data by telecom companies was illegal, making this provision legally tenuous.

As we have said before, this move would be disastrous for the privacy and anonymity of internet users, and would harm inclusion as well, because people coming online for the first time (often from households with a single shared cellphone) would be unable to create an email or social media account without a unique mobile number.

This provision would also increase the risk of data breaches and consolidate power in the hands of large social media players, who can afford to build and maintain such large verification systems. There is no evidence that this measure would help fight disinformation (its motivating factor), and it ignores the benefits anonymity can bring to the internet, such as whistleblowing and protection from harassment.

Vague criminal provisions: As of last week, the draft bill contained additional criminal provisions that would make it illegal to:

  • create or share content that poses a serious risk to Brazil’s “social peace or economic order”, with neither term clearly defined, OR
  • be a member of an online group knowing that its primary activity is sharing defamatory messages.

These provisions, which may be modified in subsequent versions in the face of widespread opposition, would clearly place subjective and untenable restrictions on Brazilians’ rights to free expression and have a chilling effect on their ability to engage in online discourse. The bill also contains other provisions concerning content moderation, judicial review, and online transparency that pose significant challenges to freedom of expression.

Procedural concerns, history, and next steps

This bill was officially introduced in the Brazilian Congress in April 2020. However, on June 25, a radically different and substantially more dangerous version of the bill was presented to senators just hours before it was put to a vote. This led senators to push back and ask for more time to evaluate the changes, and drew broad international condemnation from civil society groups.

Thanks to substantial opposition from civil society groups such as the Coalizão Direitos na Rede, some of the most drastic changes in the June 25 draft (such as data localization and the blocking of non-compliant services) have now been informally abandoned by the rapporteur, who continues to push for the bill to be approved as quickly as possible. Despite these improvements, the most concerning proposals remain, and the bill could be approved by the Senate tomorrow, 30 June 2020.

Next steps

We call on Senator Angelo Coronel and the Brazilian Senate to immediately withdraw this bill from the agenda and hold a rigorous public consultation on the issues of misinformation and disinformation before proceeding with any legislation. The Senate’s Constitution, Justice and Citizenship Commission remains one of the best venues for this review, which should seek input from all stakeholders, especially civil society. We remain committed to working with the government to resolve these important issues, but not at the expense of Brazilians’ privacy, security, and free expression.


This blog post is a translated version of the original English version available here.

The post Mozilla’s analysis: Brazil’s proposed fake news law harms privacy, security, and free expression appeared first on Open Policy & Advocacy.

Open Policy & AdvocacyBrazil’s fake news law will harm users

The “fake news” law being rushed through Brazil’s Senate will massively harm privacy and freedom of expression online. Among other dangerous provisions, this bill would force traceability of forwarded messages, which will require breaking end-to-end encryption. This legislation will substantially harm online security, while entrenching state surveillance.

Brazil currently enjoys some of the most comprehensive digital protections in the world, via its Internet Bill of Rights and the upcoming data protection law is poised to add even more protections. In order to preserve these rights, the ‘fake news’ law should be immediately withdrawn from consideration and be subject to rigorous congressional review with input from all affected parties.

The post Brazil’s fake news law will harm users appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 78

In Firefox 78, we’ve made a lot of changes under the hood. This includes preparation for changes coming in Firefox 79, improvements to our tests, and improvements to make our code more resilient. There are three things I’d like to highlight for this release:

  • When using proxy.onRequest, a filter that limits based on tab ID or window ID is now correctly applied. We’ve also greatly improved the performance of these filters. This could be useful for add-ons that want to provide proxy functionality in just one window.
  • Clicking within the context menu from the “all tabs” dropdown now passes the appropriate tab object. In the past, the active tab was erroneously passed.
  • When using the downloads API with the saveAs option set to true, the most recently used directory is now remembered on a per-extension basis. For example, a user of a video downloader would benefit from not having to navigate to their videos folder every time the extension offers a file to download.

These and other changes were brought to you by Atique Ahmed Ziad, Tom Schuster, Mark Smith, as well as various teams at Mozilla. A big thanks to everyone involved in the subtle but important changes to WebExtensions in Firefox.

The post Extensions in Firefox 78 appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla’s analysis: Brazil’s fake news law harms privacy, security, and free expression

UPDATE: On 30 June 2020, the Brazilian Senate passed “PLS 2630/2020” (the fake news law) with some key amendments that made government identity verification for accounts optional, excluded social media networks from the mandatory traceability provision (while keeping this requirement in place for messaging services like Signal and WhatsApp), and made some other scope-related changes. All the other concerns highlighted below remain a part of the bill passed by the Senate. Additionally, Article 37 of the law mandates that social networks and private messaging apps must appoint legal representatives in Brazil with the power to remotely access user databases/logs. This pseudo data localization measure poses massive privacy concerns while undermining the due process protections provided by US laws such as the CLOUD Act and Electronic Communications Privacy Act. Both of these laws require US providers to satisfy certain procedural safeguards before turning over private data to foreign law enforcement agents.

The law will now move to the Chamber of Deputies, the lower house of the National Congress in Brazil, for debate and passage. The changes made to the law since the introduction of its most regressive version on June 25 show that while there have been some improvements (in the face of widespread criticism), many dangerous provisions remain. We remain committed to engaging with Brazilian policymakers to resolve the underlying issues while protecting privacy, security, and freedom of expression. The local civil society coalition Coalizão Direitos na Rede has been very influential in the debate so far, should be consulted as the bill moves to the Chamber of Deputies, and is a good source of information about what’s happening.

Original Post from 29 June 2020

While fake news is a real problem, the Brazilian Law of Freedom, Liability, and Transparency on the Internet (colloquially referred to as the “fake news law”) is not a solution. This hastily written legislation — which could be approved by the Senate as soon as today — represents a serious threat to privacy, security, and free expression. The legislation is a major step backwards for a country that has been hailed around the world for its landmark Internet Civil Rights Law (Marco Civil) and its more recent data protection law.

Substantive concerns

While this bill poses many threats to internet health, we are particularly concerned by the following provisions:

Breaking end-to-end encryption: According to the latest informal congressional report, the law would mandate all communication providers to retain records of forwards and other forms of bulk communications, including origination, for a period of three months. As companies are required to report much of this information to the government, in essence, this provision would create a perpetually updating, centralized log of digital interactions of nearly every user within Brazil. Apart from the privacy and security risks such a vast data retention mandate entails, the law seems to be infeasible to implement in end-to-end encrypted services such as Signal and WhatsApp. This bill would force companies to leave the country or weaken the technical protections that Brazilians rely on to keep their messages, health records, banking details, and other private information secure.

Mandating real identities for account creation: The bill also broadly attacks anonymity and pseudonymity. If passed, in order to use social media, Brazilian users would have to verify their identity with a phone number (which itself requires government ID in Brazil), and foreigners would have to provide a passport. The bill also requires telecommunication companies to share a list of active users (with their cellphone numbers) with social media companies to prevent fraud. At a time when many are rightly concerned about the surveillance economy, this massive expansion of data collection and identification seems particularly egregious. Just weeks ago, the Brazilian Supreme Court held that mandatory sharing of subscriber data by telecom companies was illegal, making such a provision legally tenuous.

As we have stated before, such a move would be disastrous for the privacy and anonymity of internet users while also harming inclusion. This is because people coming online for the first time (often from households with just one shared phone) would not be able to create an email or social media account without a unique mobile phone number.

This provision would also increase the risk from data breaches and entrench power in the hands of large players in the social media space who can afford to build and maintain such large verification systems. There is no evidence to prove that this measure would help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers.

Vague Criminal Provisions: Draft versions of the law over the past week have included additional criminal provisions that make it illegal to:

  • create or share content that poses a serious risk to “social peace or to the economic order” of Brazil, with neither term clearly defined, OR
  • be a member of an online group knowing that its primary activity is sharing defamatory messages.

These provisions, which might be modified in the subsequent drafts based on widespread opposition, would clearly place untenable, subjective restrictions on the free expression rights of Brazilians and have a chilling effect on their ability to engage in discourse online. The draft law also contains other concerning provisions surrounding content moderation, judicial review, and online transparency that pose significant challenges for freedom of expression.

Procedural concerns, history, and next steps

This legislation was nominally first introduced into the Brazilian Congress in April 2020. However, on June 25, a radically different and substantially more dangerous version of the bill was sprung on Senators mere hours ahead of being put to a vote. This led to pushback from Senators, who asked for more time to review the changes, accompanied by widespread international condemnation from civil society groups.

Thanks to concentrated pushback from civil society groups such as the Coalizão Direitos na Rede, some of the most drastic changes in the June 25 draft (such as data localisation and the blocking of non-compliant services) have now been informally dropped by the Rapporteur, who is still pushing for the law to be passed as soon as possible. Despite these improvements, the most worrying proposals remain, and this legislation could pass the Senate as soon as tomorrow, 30 June 2020.

Next steps

We urge Senator Angelo Coronel and the Brazilian Senate to immediately withdraw this bill, and hold a rigorous public consultation on the issues of misinformation and disinformation before proceeding with any legislation. The Commission on Constitution, Justice, and Citizenship in the Senate remains one of the best avenues for such a review to take place, and should seek the input of all affected stakeholders, especially civil society. We remain committed to working with the government to address these important issues, but not at the cost of Brazilians’ privacy, security, and free expression.

The post Mozilla’s analysis: Brazil’s fake news law harms privacy, security, and free expression appeared first on Open Policy & Advocacy.

Mozilla UXThe Poetics of Product Copy: What UX Writers Can Learn From Poetry

Two excerpts appear side-by-side to create a comparison. On the left, an excerpt of the poem “This Is Just To Say” by William Carlos Williams: "Forgive me / they were delicious / so sweet/ and so cold." On the right, an excerpt of a Firefox error message that reads, "Sorry. We're having trouble getting your pages back. We are having trouble restoring your last browsing session. Select Restore Session to try again."

Excerpts: “This Is Just To Say” by William Carlos Williams and a Firefox error message


Word nerds make their way into user experience (UX) writing from a variety of professional backgrounds. Some of the more common inroads are journalism and copywriting. Another, perhaps less expected path is poetry.

I’m a UX content strategist, but I spent many of my academic years studying and writing poetry. As it turns out, those years weren’t just enjoyable — they were useful preparation for designing product copy.

Poetry and product copy wrestle with similar constraints and considerations. They are each often limited to a small amount of space and thus require an especially thoughtful handling of language that results in a particular kind of grace.

While the high art of poetry and the practical, business-oriented work of UX are certainly not synonymous, there are some key parallels to learn from as a practicing content designer.

1. Both consider the human experience closely

Poets look closely at the human experience. We use the details of the personal to communicate a universal truth. And how that truth is communicated — the context, style, and tone — reflects the culture and moment in time. When a poem makes its mark, it hits a collective nerve.

The poem “Tired” by Langston Hughes floats in a white box: "I am so tired of waiting. Aren’t you, for the world to become good and beautiful and kind? Let us take a knife and cut the world in two — and see what worms are eating at the rind.”

“Tired” by Langston Hughes


Like poetry, product copy looks closely at the human experience, and its language reflects the culture from which it was born. As technology has become omnipresent in our lives, the language of the interface has, in turn, become more conversational. “404 Not Found” messages are (ideally) replaced with plain language. Emojis and Hmms are sprinkled throughout the digital experience, riding the tide of memes and tweets that signify an increasingly informal culture. You can read more about the relationship between technology and communication in Erika Hall’s seminal work, Conversational Design.

While the topic at hand is often considerably less exalted than that of poetry, a UX writer similarly considers the details of a moment in time. Good copy is informed by what the user is experiencing and feeling — the frustration of a failed page load or the success of a saved login — and crafts content sensitive to that context.

Product copy strikes the wrong note when it fails to be empathetic to that moment. For example, it’s unhelpful to use technical jargon or make a clever joke when a user encounters a dead end. This insensitivity is made more acute if the person is using the interface to navigate a stressful life event, like filing for leave when a loved one is ill. What they need in that moment is plain language and clear instructions on a path forward.

2. They make sense of complexity with language

Poetry helps us make sense of complexity through language. We turn to poetry to feel our way through dark times — the loss of a loved one or a major illness — and to commemorate happy times — new love, the beauty of the natural world. Poetry finds the words to help us understand an experience and (hopefully) move forward.

Excerpt of the poem, "Toad," by Diane Seuss floats in a white box: "The grief, when I finally contacted it decades later, was black, tarry, hot, like the yarrow-edged side roads we walked barefoot in the summer."

Excerpt: “Toad” by Diane Seuss



UX writers also use the building blocks of language to help a user move forward and through an experience. UX writing requires a variety of skills, including the ability to ask good questions, to listen well, to collaborate, and to conduct research. The foundational skill, however, is using language to bring clarity to an experience. Words are the material UX writers use to co-create experiences with designers, researchers, and developers.

Screenshot of the modal which allows a user to identify the issue they are having with an extension. Clipped image displays three possible reasons with examples, including "It claims to be something it's not," "I never wanted it and don’t know how to get rid of it," and "It contains hateful, violent, or illegal content."

Excerpt of a screen for Firefox users to report an issue with a browser extension. The flow enables the user to report an extension, troubleshoot issues, and remove the extension. Co-created with designer Philip Walmsley.

3. Words are selected carefully within a small canvas

“Poetry is your best source of deliberate intentional language that has nothing to do with your actual work. Reading it will descale your mind, like vinegar in a coffee maker.” — Conversational Design, Erika Hall

Poetry considers word choice carefully. And, while poetry takes many forms and lengths, its hallmark is brevity. Unlike a novel, a poem can begin and end on one page, or even in a few words. The poet often uses language to get the reader to pause and reflect.

Product copy should help users complete tasks. Clarity trumps conciseness, but we often find that fewer words — or no words at all — are what the user needs to get things done. While we will include additional language and actions to add friction to an experience when necessary, our goal in UX writing is often to get out of the user’s way. In this way, while poetry has a slowing function, product copy can have a streamlining function.

Working within these constraints requires UX writers to also consider each word very carefully. A button that says “Okay!” can mean something very different, and has a different tone, than a button that says, “Submit.” Seemingly subtle changes in word choice or phrasing can have a big impact, as they do in poetry.

Two screenshots, side-by-side, of a doorhanger in Firefox that promotes the "Pin Tab" feature. Includes header, body copy, illustration of the feature, and primary and secondary buttons. Layout is consistent between the two but there are slight changes in body copy.

Left: Early draft of a recommendation panel for the Firefox Pin Tab feature. Right: final copy, which does not include the descriptors “tap strip” or “tab bar” because users might not be familiar with these terms. A small copy change like using “open in a tab” instead of “tab strip” can have a big impact on user comprehension. Co-created with designer Amy Lee.

4. Moment and movement

Reading a poem can feel like you are walking into the middle of a conversation. And you have — the poet invites you to reflect on a moment in time, a feeling, a place. And yet, even as you pause, poetry has a sense of movement — metaphor and imagery connect and build quickly in a small amount of space. You tumble over one line break on to the next.

Excerpt of the poem, "Bedtime Story," by Franny Choi, floats in a white box. Every other line is heavily indented to give it a sense of movement: "Outside, cicadas threw their jagged whines into the dark. Inside, three children, tucked in our mattresses flat as rice cakes against the floor. Pink quilts, Mickey Mouse cotton – why is it that all my childhood comforts turn out to be imperialism’s drippings?"

Excerpt: “Bedtime Story” by Franny Choi



Product copy captures a series of moments in time. But, rather than walking into a conversation, you are initiating it and participating in it. One of the hallmarks of product copy, in contrast to other types of professional writing, is its movement — you aren’t writing for a billboard, but for an interface that is responsive and conditional.

A video clip shows the installation process for an extension, which includes a button to add it to Firefox, then a doorhanger that asks the user to confirm they want to add it, a message confirming it has been added, and then another message notifying user when the browser takes action (in this case changing a new tab to an image of a cat).

The installation flow for the browser extension, Tabby Cat, demonstrates the changing nature of UX copy. Co-created with designer Emanuela Damiani.

5. Form is considered

Poetry communicates through language, but also through visual presentation. Unlike a novel, where words can run from page to page like water, a poem conducts its flow more tightly within its physical space. Line breaks are chosen with intention. A poem can sit squat, crisp and contained as a haiku, or expand like Allen Ginsberg’s Howl across the page, mirroring the wild discontent of the counterculture movement it captures.

Product copy is also conscious of space, and uses it to communicate a message. We parse and prioritize UX copy into headers and subheadings. We chunk explanatory content into paragraphs and bullet points to make the content more consumable.

Screenshot of the Firefox Notes extension welcome screen. Includes larger welcome text and instructions in bullet points on how to use the app.

The introductory note for the Firefox Notes extension uses type size, bold text, and bullet points to organize the instructions and increase scannability.

6. Meaning can trump grammar

Poetry often plays with the rules of grammar. Words can be untethered from sentences, floating off across the page. Sentences are uncontained with no periods, frequently enjambed.

 Excerpt of "i carry your heart with me(i carry it in" by E. E. Cummings floats in a white box: "i carry your heart with me(i carry it in my heart)i am never without it(anywhere i go you go,my dear;and whatever is done by only me is your doing,my darling)"

Excerpt: “[i carry your heart with me(i carry it in]” by E. E. Cummings


In product writing, we also play with grammar. We assign different rules to text elements for purposes of clarity — for example, allowing fragments for form labels and radio buttons. While poetry employs these devices to make meaning, product writing bends or breaks grammar rules so content doesn’t get in the way of meaning — excessive punctuation and title case can slow a reader down, for example.

“While mechanics and sentence structure are important, it’s more important that your writing is clear, helpful, and appropriate for each situation.” — Michael Metts and Andy Welfle, Writing is Designing

Closing thoughts, topped with truffle foam

While people come to this growing profession from different fields, there’s no “right” one that makes you a good UX writer.

As we continue to define and professionalize the practice, it’s useful to reflect on what we can incorporate from our origin fields. In the case of poetry, key points are constraint and consideration. Both poet and product writer often have a small amount of space to move the audience — emotionally, as is the case for poetry, and literally as is the case for product copy.

If we consider the metaphor of cooking, a novel would be more like a Thanksgiving meal. You have many hours and dishes to choreograph an experience. Many opportunities to get something wrong or right. A poem, and a piece of product copy, have just one chance to make an impression and do their work.

In this way, poetry and product copy are more like a single scallop served at a Michelin restaurant — but one that has been marinated in carefully chosen spices, and artfully arranged with a puff of lemon truffle foam and Timut pepper reduction. Each element in this tiny concert of flavors carefully, painstakingly composed.



Thank you to Michelle Heubusch and Betsy Mikel for your review.

The Mozilla BlogMore details on Comcast as a Trusted Recursive Resolver

Yesterday Mozilla and Comcast announced that Comcast was the latest member of Mozilla’s Trusted Recursive Resolver program, joining current partners Cloudflare and NextDNS. Comcast is the first Internet Service Provider (ISP) to become a TRR and this represents a new phase in our DoH/TRR deployment.

What does this mean?

When Mozilla first started looking at how to deploy DoH we quickly realized that it wasn’t enough to just encrypt the data; we had to ensure that Firefox used a resolver which users could trust. To do this, we created the Trusted Recursive Resolver (TRR) program which allowed us to partner with specific resolvers committed to strong policies for protecting user data. We selected Cloudflare as our first TRR (and the current default) because they shared our commitment to user privacy and security and because we knew that they were able to handle as much traffic as we could send them. This allowed us to provide secure DNS resolution to as many users as possible but also meant changing people’s resolver to Cloudflare. We know that there have been some concerns about this. In particular:

  • It may result in less optimal traffic routing. Some ISP resolvers cooperate with CDNs and other big services to steer traffic to local servers. This is harder (though not impossible) for Cloudflare to do because they have less knowledge of the local network. Our measurements haven’t shown this to be a problem but it’s still a possible concern.
  • If the ISP is providing value added services (e.g., malware blocking or parental controls) via DNS, then these stop working. Firefox tries to avoid enabling DoH in these cases because we don’t want to break services we know people have opted into, but we know those mechanisms are imperfect.

If we were able to verify that the ISP had strong privacy policies then we could use their resolver instead of a public resolver like Cloudflare. Verifying this would of course require that the ISP deploy DoH — which more and more ISPs are doing — and join our TRR program, which is exactly what Comcast has done. Over the next few months we’ll be experimenting with using Comcast’s DoH resolver when we detect that we are on a Comcast network.

How does it work?

Jason Livingood from Comcast and I have published an Internet-Draft describing how resolver selection works, but here’s the short version of what we’re going to be experimenting with. Note: this is all written in the present tense, but we haven’t rolled the experiment out just yet, so this isn’t what’s happening now. It’s also US only, because this is the only place where we have DoH on by default.

First, Comcast inserts a new DNS record on their own recursive resolver for a “special use” domain called doh.test. The meaning of this record is just “this network supports DoH and here is the name of the resolver.”

When Firefox joins a network, it uses the ordinary system resolver to look up doh.test. If there’s nothing there, then it just uses the default TRR (currently Cloudflare). However, if there is a record there, Firefox looks it up in an internal list of TRRs. If there is a match to Comcast (or a future ISP TRR) then we use that TRR instead. Otherwise, we fall back to the default.

What’s special about the “doh.test” name is that nobody owns “.test”; it’s specifically reserved for local use, so it’s fine for Comcast to put its own data there. If another ISP were to want to do the same thing, they would populate doh.test with their own resolver name. This means that Firefox can do the same check on every network.

The end result is that if we’re on a network whose resolver is part of our TRR program then we use that resolver. Otherwise we use the default resolver.
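The selection logic described above can be sketched in a few lines. This is a simplified illustration, not Firefox’s actual implementation: the `TRR_LIST` entries are made-up examples, and `lookup` stands in for a query through the ordinary system resolver.

```python
# Default TRR used when the network does not advertise a trusted resolver.
DEFAULT_TRR = "https://mozilla.cloudflare-dns.com/dns-query"

# Hypothetical list of trusted resolvers, keyed by advertised resolver name.
# These names and URLs are illustrative, not real Firefox internals.
TRR_LIST = {
    "doh.example-isp.net": "https://doh.example-isp.net/dns-query",
}

def select_trr(lookup) -> str:
    """Return the DoH resolver URL to use on the current network.

    `lookup` resolves a domain name via the ordinary system resolver
    and raises LookupError when no record exists.
    """
    try:
        # Query the special-use name the local network may have populated.
        resolver_name = lookup("doh.test")
    except LookupError:
        # No record: the network does not advertise a DoH resolver.
        return DEFAULT_TRR
    # Use the advertised resolver only if it is in the trusted list;
    # anything unrecognized falls back to the default TRR.
    return TRR_LIST.get(resolver_name, DEFAULT_TRR)
```

The key property is the final fallback: an attacker who controls the local network can advertise any name it likes in doh.test, but Firefox will only switch away from the default if that name matches an entry in its internal trusted list.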

What is the privacy impact?

One natural question to ask is how this impacts user privacy. We need to analyze this in two parts.

First, let’s examine the case of someone who only uses their computer on a Comcast network (if you never use a Comcast network, then this has no impact on you). Right now, we would send your DNS traffic to Cloudflare, but the mechanism above would send it to Comcast instead. As I mentioned above, both Comcast and Cloudflare have committed to strong privacy policies, and so the choice between trusted resolvers is less important than it otherwise might be. Put differently: every resolver in the TRR list is trusted, so choosing between them is not a problem.

With that said, we should also look at the technical situation (see here for more thoughts on technical versus policy controls). In the current setting, using your ISP resolver probably results in somewhat less exposure of your data to third parties because the ISP has a number of other — albeit less convenient — mechanisms for learning about your browsing history, such as the IP addresses you are going to and the TLS Server Name Indication field. However, once TLS Encrypted Client Hello starts being deployed, the Server Name Indication will be less useful and so there will be less difference between the cases.

The situation is somewhat more complicated for someone who uses both a Comcast and non-Comcast network. In that case, both Comcast and Cloudflare will see pieces of their browsing history, which isn’t totally ideal and is something we otherwise try to avoid. Our current view is that the advantages of using a trusted local resolver when available outweigh the disadvantages of using multiple trusted resolvers, but we’re still analyzing the situation and our thinking may change as we get more data.

One thing I want to emphasize here is that if you have a DoH resolver you prefer to use, you can set it yourself in Firefox Network Settings and that will override the automatic selection mechanisms.

Bottom Line

As we said when we started working on DoH/TRR deployment two years ago, you can’t practically negotiate with your resolver, but Firefox can do it for you, so we’re really pleased to have Comcast join us as a TRR partner.

The post More details on Comcast as a Trusted Recursive Resolver appeared first on The Mozilla Blog.

The Mozilla BlogComcast’s Xfinity Internet Service Joins Firefox’s Trusted Recursive Resolver Program

Committing to Data Retention and Transparency Requirements That Protect Customer Privacy

Today, Mozilla, the maker of Firefox, and Comcast have announced Comcast as the first Internet Service Provider (ISP) to provide Firefox users with private and secure encrypted Domain Name System (DNS) services through Mozilla’s Trusted Recursive Resolver (TRR) Program. Comcast has taken major steps to protect customer privacy as it works to evolve DNS resolution.

“Comcast has moved quickly to adopt DNS encryption technology and we’re excited to have them join the TRR program,” said Eric Rescorla, Firefox CTO. “Bringing ISPs into the TRR program helps us protect user privacy online without disrupting existing user experiences. We hope this sets a precedent for further cooperation between browsers and ISPs.”

For more than 35 years, DNS has served as a key mechanism for accessing sites and services on the internet. Functioning as the internet’s address book, DNS translates website names into the internet addresses that a computer understands so that the browser can load the correct website.

Over the last few years, Mozilla, Comcast, and other industry stakeholders have been working to develop, standardize, and deploy a technology called DNS over HTTPS (DoH). DoH helps to protect browsing activity from interception, manipulation, and collection in the middle of the network by encrypting the DNS data.

Encrypting DNS data with DoH is the first step. A necessary second step is to require that the companies handling this data have appropriate rules in place – like the ones outlined in Mozilla’s TRR Program. This program aims to standardize requirements in three areas: limiting data collection and retention from the resolver, ensuring transparency for any data retention that does occur, and limiting any potential use of the resolver to block access or modify content. By combining the technology, DoH, with strict operational requirements for those implementing it, participants take an important step toward improving user privacy.

Comcast launched public beta testing of DoH in October 2019. Since then, the company has continued to improve the service and has collaborated with others in the industry via the Internet Engineering Task Force, the Encrypted DNS Deployment Initiative, and other industry organizations around the world. This collaboration also helps to ensure that users’ security and parental control functions that depend on DNS are not disrupted in the upgrade to encryption whenever possible. Also in October, Comcast announced a series of key privacy commitments, including reaffirming its longstanding commitment not to track the websites that customers visit or the apps they use through their broadband connections. Comcast also introduced a new Xfinity Privacy Center to help customers manage and control their privacy settings and learn about its privacy policy in detail.

“We’re proud to be the first ISP to join with Mozilla to support this important evolution of DNS privacy. Engaging with the global technology community gives us better tools to protect our customers, and partnerships like this advance our mission to make our customers’ internet experience more private and secure,” said Jason Livingood, Vice President, Technology Policy and Standards at Comcast Cable.

Comcast is the latest resolver, and the first ISP, to join Firefox’s TRR Program, joining Cloudflare and NextDNS. Mozilla began the rollout of encrypted DNS over HTTPS (DoH) by default for US-based Firefox users in February 2020, but began testing the protocol in 2018.

Adding ISPs in the TRR Program paves the way for providing customers with the security of trusted DNS resolution, while also offering the benefits of a resolver provided by their ISP such as parental control services and better optimized, localized results. Mozilla and Comcast will be jointly running tests to inform how Firefox can assign the best available TRR to each user.

The post Comcast’s Xfinity Internet Service Joins Firefox’s Trusted Recursive Resolver Program appeared first on The Mozilla Blog.

The Mozilla BlogImmigrants Remain Core to the U.S.’ Strength

By its very design the internet has accelerated the sharing of ideas and information across borders, languages, cultures and time zones. Despite the awesome reach and power of what the web has enabled, there is still no substitute for the chemistry that happens when human beings of different backgrounds and experiences come together to live and work in the same community.

Immigration brings a wealth of diverse viewpoints, drives innovation and creative thinking, and is central to building the internet into a global public resource that is open and accessible to all.

This is why the current U.S. administration’s recent actions are so troubling. On June 22, 2020 President Donald Trump issued an Executive Order suspending entry of immigrants under the premise that they present a risk to the United States’ labor market recovery from the COVID-19 pandemic. This decision will likely have far-reaching and unintended consequences for industries like Mozilla’s and throughout the country.

Technology companies, including Mozilla, rely on brilliant minds from around the globe. This mix of people and ideas has generated significant technological advances that currently fuel our global economy and will undoubtedly be essential for future economic recovery and growth.

This is also why we’re eager to see lawmakers create a permanent solution for DACA (Deferred Action for Childhood Arrivals). We hope that in light of the recent U.S. Supreme Court ruling, the White House does not continue to pursue plans to end the program that currently protects about 700,000 young immigrants known as Dreamers from deportation. These young people were brought to the U.S. as minors, and raised and educated here. We’ve made this point before, but it bears repeating: Breaking the promise made to these young people and preventing these future leaders from having a legal pathway to citizenship is short-sighted and morally wrong. We owe it to them and to the country to give them every opportunity to succeed here in the U.S.

Immigrants have been a core part of the United States’ strength since its inception. A global pandemic hasn’t changed that. At a time when the United States is grappling with how to make right so many of the wrongs of its past, the country can’t afford to double down on policies that shut out diverse voices and contributions of people from around the world. As they have throughout the country’s history, our immigrant family members, friends, neighbors and colleagues must be allowed to continue playing a vital role in moving the U.S. forward.

The post Immigrants Remain Core to the U.S.’ Strength appeared first on The Mozilla Blog.

Mozilla UXDesigning for voice

In the future people will use their voice to access the internet as often as they use a screen. We’re already in the early stages of this trend: as of 2016, Google reported that 20% of searches on mobile devices used voice; last year smart speaker sales topped 146 million units — a 70% jump from 2018 — and I’m willing to bet your mom or dad have adopted voice to make a phone call or dictate a text message.

I’ve been exploring voice interactions as the design lead for Mozilla’s Emerging Technologies team for the past two years. In that time we’ve developed Pocket Listen (a Text-to-Speech platform, capable of converting any published web article into audio) and Firefox Voice (an experiment accessing the internet with voice in the browser). This blog post is an introduction to designing for voice, based on the lessons our team learned researching and developing these projects. Luckily, if you’re a designer transitioning to working with voice, and you already have a solid design process in place, you’ll find many of your skills transfer seamlessly. But, some things are very different, so let’s dive in.

The benefits of voice

As with any design it’s best to ground the work in the value it can bring people.

The accessibility benefits to a person with a physical impairment should be clear, but voice has the opportunity to aid an even larger population. Small screens are hard to read with aging eyes, typing on a virtual keyboard can be difficult, and understanding complex technology is always a challenge. Voice is emerging as a tool to overcome these limitations, turning cumbersome tasks into simple verbal interactions.

How can voice technology improve the user experience?

As designers, we’re often tasked with creating efficient and effortless interactions. Watch someone play music on a smart speaker and you’ll see how quickly thought turns to action when friction is removed. They don’t have to find and unlock their phone, launch an app, scroll through a list of songs and tap. Requesting a song happens in an instant with voice. A quote from one of our survey respondents summed it up perfectly:

“Being able to talk without thinking. It’s essentially effortless information ingestion.”

When is voice valuable?

When and where voice is likely to be used

Talking out loud to a device isn’t always appropriate or socially acceptable. We see this over and over again in research and real world usage. People are generally uncomfortable talking to devices in public. The more private, the better.

Graph showing Home, In the car, and At a friend’s house being the top 3 places people are comfortable using voice.

Hands-free and multi-tasking also drive voice usage — cooking, washing the dishes, or driving in a car. These situations present opportunities to use voice because our hands or eyes are otherwise occupied.

But, voice isn’t just used for giving commands. Text-to-Speech can generate content from anything written, including articles. It’s a technology we successfully used to build and deploy Pocket Listen, which allows you to listen to articles you’d saved for later.

Pocket Listen usage Feb 2020, United Kingdom

In the graph above you’ll see that people primarily use Pocket Listen while commuting. By creating a new format to deliver the content, we’ve expanded when and where the product provides value.
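Under the hood, Text-to-Speech systems generally work best on sentence-sized utterances rather than whole articles. As a purely illustrative sketch (this is not Pocket Listen’s actual code), here is how article text might be chunked before being handed to a speech engine:

```javascript
// Split article text into sentence-sized chunks. In a browser, each chunk
// could then be passed to speechSynthesis.speak(new SpeechSynthesisUtterance(chunk)).
// Illustrative sketch only; not the actual Pocket Listen implementation.
function toUtterances(text) {
  const sentences = text.match(/[^.!?]+[.!?]+/g);
  return sentences ? sentences.map(s => s.trim()) : [text.trim()];
}
```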

Why is designing for voice hard?

Now that you know ‘why’ and ‘when’ voice is valuable, let’s talk about what makes it hard. These are the pitfalls to watch for when building a voice product.

What’s hard about designing for voice?

Voice is still a new technology, and, as such, it can feel open ended. There’s a wide variety of uses and devices it works well with. It can be incorporated using input (Speech-to-Text) or output (Text-to-Speech), with a screen or without a screen. You may be designing with a “Voice first mindset” as Amazon recommends for the Echo Show, or the entire experience might unfold while the phone is buried in someone’s pocket.

In many ways, this kind of divergence is familiar if you’ve worked with mobile apps or responsive design. Personally, the biggest adjustment for me has been the infinite nature of voice. The limited real estate of a screen imposes constraints on the number and types of interactions available. With voice, there’s often no interface to guide an action, and because it’s more personal than a screen, requests and utterances vary greatly by personality and culture.

In a voice user interface, a person can ask anything and they can ask it a hundred different ways. A list is a great example: on a screen it’s easy to display a handful of options. In a voice interface, listing more than two options quickly breaks down. The user can’t remember the first choice or the exact phrasing they should say if they want to make a selection.

Which brings us to discovery — often cited as the biggest challenge facing voice designers and developers. It’s difficult for a user to know what features are available, what they can say, and how they have to say it. Teaching a system’s capabilities becomes essential but is difficult in practice. Even when you teach a few key phrases early in the experience, human recall of proper voice commands and syntax is limited. People rarely remember more than a few phrases.
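To make the discovery problem concrete, here is a deliberately naive sketch (hypothetical code, not from Firefox Voice) of why exact-phrase matching fails users:

```javascript
// A naive voice command matcher: it only recognizes exact registered
// phrasings. Anything that deviates from a registered phrase falls
// through to UNKNOWN — which is exactly the recall problem described
// above. (Hypothetical example; intent names are invented.)
const intents = new Map([
  ["play music", "PLAY_MUSIC"],
  ["stop", "STOP"],
]);

function matchIntent(utterance) {
  return intents.get(utterance.toLowerCase().trim()) ?? "UNKNOWN";
}
```

A real system layers fuzzy matching and natural-language understanding on top, but users still have to discover which intents exist at all.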

The exciting future of voice

It’s still early days for voice interactions and while the challenges are real, so are the opportunities. Voice brings the potential to deliver valuable new experiences that improve our connections to each other and the vast knowledge available on the internet. These are just a few examples of what I look forward to seeing more of:

“I like that my voice is the interface. When the assistant works well, it lets me do what I wanted to do quickly, without unlocking my phone, opening an app / going on my computer, loading a site, etc.”

As you can see, we’re at the beginning of an exciting journey into voice. Hopefully this intro has motivated you to dig deeper and ask how voice can play a role in one of your projects. If you want to explore more, have questions or just want to chat feel free to get in touch.

hacks.mozilla.orgMozilla WebThings Gateway Kit by OKdo

We’re excited about this week’s news from OKdo, highlighting a new kit built around Mozilla’s WebThings Gateway. OKdo is a UK-based global technology company focused on IoT offerings for hobbyists, educators, and entrepreneurs. Their idea is to make it easy to get a private and secure “web of things” environment up and running in either home or classroom. OKdo chose to build this kit around the Mozilla WebThings Gateway, and we’ve been delighted to work with them on it.

The WebThings Gateway is an open source software distribution focused on privacy, security, and interoperability. It provides a web-based user interface to monitor and control smart home devices, along with a rules engine to automate them. In addition, a data logging subsystem monitors device changes over time. Thanks to extensive contributions from our open source community, you’ll find an add-on system to extend the gateway with support for a wide range of existing smart home products.
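For a sense of what “web-based” means in practice, the gateway exposes devices over a local REST API. The sketch below shows how a client might read a device property; the gateway URL, access token, thing ID, and property name are all hypothetical placeholders for illustration.

```javascript
// Illustrative sketch: read one property of a device from a local
// WebThings Gateway over its REST API. All identifiers here
// (URL, token, thing ID, property name) are assumed placeholders.
async function readProperty(gatewayUrl, token, thingId, property) {
  const res = await fetch(
    `${gatewayUrl}/things/${thingId}/properties/${property}`,
    { headers: { Authorization: `Bearer ${token}`, Accept: "application/json" } }
  );
  if (!res.ok) throw new Error(`Gateway returned ${res.status}`);
  return res.json();
}
```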

With the WebThings Gateway, users always have complete control. You can directly monitor and control your home and devices over the web. In fact, you’ll never have to share data with a cloud service or vendor. This diagram of our architecture shows how it works:

A diagram comparing the features of Mozilla IoT privacy with a more typical cloud-based IoT approach

Mozilla WebThings Gateway Kit details

The Mozilla WebThings Gateway Kit, available now from OKdo, includes:

  • Raspberry Pi 4 and case
  • MicroSD card pre-flashed with Mozilla WebThings Gateway software
  • Power supply
  • “Getting Started Guide” to help you easily get your project up and running

an image of the OKdo Mozilla WebThings Kit

You can find out more about the OKdo kit and how to purchase it for either home or classroom from their website.

To learn more about WebThings, visit Mozilla’s IoT website or join in the discussion on Discourse. WebThings is completely open source. All of our code is freely available on GitHub. We would love to have you join the community by filing issues, fixing bugs, implementing new features, or adding support for new devices. Also, you can help spread the word about WebThings by giving talks at conferences or local maker groups.

The post Mozilla WebThings Gateway Kit by OKdo appeared first on Mozilla Hacks - the Web developer blog.

about:communityFirefox 78 new contributors

With the release of Firefox 78, we are pleased to welcome the 34 developers who contributed their first code change to Firefox in this release, 28 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

hacks.mozilla.orgWelcoming Safari to the WebExtensions Community

Browser extensions provide a convenient and powerful way for people to take control of how they experience the web. From blocking ads to organizing tabs, extensions let people solve everyday problems and add whimsy to their online lives.

At yesterday’s WWDC event, Apple announced that Safari is adopting a web-based API for browser extensions similar to Firefox’s WebExtensions API. Built using familiar web technologies such as JavaScript, HTML, and CSS, the API makes it easy for developers to write one code base that will work in Firefox, Chrome, Opera, and Edge with minimal browser-specific changes. We’re excited to see expanded support for this common set of browser extension APIs.

What this means for you

Interested in porting your browser extension to Safari? Visit MDN to see which APIs are currently supported. Developers can start testing the new API in Safari 14 using the seed build for macOS Big Sur. The API will be available in Safari 14 on macOS Mojave and macOS Catalina in the future.

Or, maybe you’re new to browser extension development. Check out our guides and tutorials to learn more about the WebExtensions API. Then, visit Firefox Extension Workshop to find information about development tools, security best practices, and tips for creating a great user experience. Be sure to take a look at our guide for how to build a cross-browser extension.
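One common cross-browser pattern, shown here as a minimal sketch, is to resolve whichever extension API namespace the host browser provides (Firefox and Safari expose a promise-based `browser` object, while Chrome historically exposes a callback-based `chrome` object):

```javascript
// Pick whichever WebExtensions namespace the host browser provides.
// Passing the global object in explicitly keeps the helper testable.
function getExtensionAPI(globalObj = globalThis) {
  if (typeof globalObj.browser !== "undefined") return globalObj.browser; // Firefox, Safari
  if (typeof globalObj.chrome !== "undefined") return globalObj.chrome;   // Chrome, Edge
  throw new Error("Not running inside a browser extension");
}
```

The webextension-polyfill library offers a more complete version of this idea, wrapping Chrome’s callback APIs in promises.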

Ready to share your extension with the world (or even just a few friends!)? Our documentation will guide you through the process of making your extension available for Firefox users.

Happy developing!

The post Welcoming Safari to the WebExtensions Community appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla’s response to EU Commission Public Consultation on AI

In Q4 2020 the EU will propose what’s likely to be the world’s first general AI regulation. While there is still much to be defined, the EU looks set to establish rules and obligations around what it’s proposing to define as ‘high-risk’ AI applications. In advance of that initiative, we’ve filed comments with the European Commission, providing guidance and recommendations on how it should develop the new law. Our filing brings together insights from our work in Open Innovation and Emerging Technologies, as well as the Mozilla Foundation’s work to advance trustworthy AI in Europe.

We are in alignment with the Commission’s objective outlined in its strategy to develop a human-centric approach to AI in the EU. The new and cutting-edge technologies that we often collectively refer to as “AI” hold the promise and potential to provide immense benefits and advancements to our societies, for instance through medicine and food production. At the same time, we have seen some harmful uses of AI amplify discrimination and bias, undermine privacy, and violate trust online. Thus the challenge before the EU institutions is to create the space for AI innovation, while remaining cognisant of, and protecting against, the risks.

We have advised that the EC’s approach should be built around four key pillars:

  • Accountability: ensuring the regulatory framework will protect against the harms that may arise from certain applications of AI. That will likely involve developing new regulatory tools (such as the ‘risk-based approach’) as well as enhancing the enforcement of existing relevant rules (such as consumer protection laws).
  • Scrutiny: ensuring that individuals, researchers, and governments are empowered to understand and evaluate AI applications, and AI-enabled decisions – through for instance algorithmic inspection, auditing, and user-facing transparency.
  • Documentation: striving to ensure better awareness of AI deployment (especially in the public sector), and to ensure that applications allow for documentation where necessary – such as human rights impact assessments in the product design phase, or government registries that map public sector AI deployment.
  • Contestability: ensuring that individuals and groups who are negatively impacted by specific AI applications have the ability to contest those impacts and seek redress e.g. through collective action.

The Commission’s consultation focuses heavily on issues related to AI accountability. Our submission therefore provides specific recommendations on how the Commission could better realise the principle of accountability in its upcoming work. Building on the consultation questions, we provide further insight on:

  • Assessment of applicable legislation: In addition to ensuring the enforcement of the GDPR, we underline the need to take account of existing rights and protections afforded by EU law concerning discrimination, such as the Racial Equality directive and the Employment Equality directive.
  • Assessing and mitigating “high risk” applications: We encourage the Commission to further develop (and/or clarify) its risk mitigation strategy, in particular how, by whom, and when risk is being assessed. There are a range of points we have highlighted here, from the importance of context and use being critical components of risk assessment, to the need for comprehensive safeguards, the importance of diversity in the risk assessment process, and that “risk” should not be the only tool in the mitigation toolbox (e.g. consider moratoriums).
  • Use of biometric data: the collection and use of biometric data comes with significant privacy risks and should be carefully considered where possible in an open, consultative, and evidence-based process. Any AI applications harnessing biometric data should conform to existing legal standards governing the collection and processing of biometric data in the GDPR. Besides questions of enforcement and risk-mitigation, we also encourage the Commission to explore edge-cases around biometric data that are likely to come to prominence in the AI sphere, such as voice recognition.

A special thanks goes to the Mozilla Fellows 2020 cohort, who contributed to the development of our submission, in particular Frederike Kaltheuner, Fieke Jansen, Harriet Kingaby, Karolina Iwanska, Daniel Leufer, Richard Whitt, Petra Molnar, and Julia Reinhardt.

This public consultation is one of the first steps in the Commission’s lawmaking process. Consultations in various forms will continue through the end of the year when the draft legislation is planned to be proposed. We’ll continue to build out our thinking on these recommendations, and look forward to collaborating further with the EU institutions and key partners to develop a strong framework for the development of a trusted AI ecosystem. You can find our full submission here.

The post Mozilla’s response to EU Commission Public Consultation on AI appeared first on Open Policy & Advocacy.

SUMO BlogSocial Support program updates

TL;DR: The Social Support Program is moving from Buffer Reply to Conversocial as of June 1st, 2020. We’re also going to reply from @FirefoxSupport now instead of the official brand account. If you’re interested in joining us in Conversocial, please fill out this form (make sure you meet the requirements before you fill out the form). 


We have very exciting news from the Social Support Program. In the past, we invited a few trusted contributors to Buffer Reply in order to let them reply to Twitter conversations from the official account. However, since Buffer sunset their Reply service on the 1st of June, we have now officially moved to Conversocial to replace Buffer Reply.

Conversocial is one of a few tools that stood out from the search process that began at the beginning of the year because it focuses on support rather than social media management. We like the pricing model as well since it doesn’t restrict us from adding more contributors because it’s volume-based instead of seat-based.

If you’re interested in joining us on Conversocial, please fill out this form. However, please be advised that we have a few requirements before we can let you into the tool.

Here are a few resources that we’ve updated to reflect the changes in the Social Support program:

We also just acquired the @FirefoxSupport account on Twitter with the help of the Marketing team. Moving forward, contributors from the social support program will continue to reply from this account instead of the official brand account. This will allow the official brand account to focus on brand engagement and will also give us an opportunity to utilize the greater functionality of a full account.

We’re happy about the change and excited to see how we can scale the program moving forward. I hope you all share the same excitement and will continue to support us and rock the helpful web!

hacks.mozilla.orgCompiler Compiler: A Twitch series about working on a JavaScript engine

Last week, I finished a three-part pilot for a new stream called Compiler Compiler, which looks at how the JavaScript Specification, ECMA-262, is implemented in SpiderMonkey.

JavaScript …is a programming language. Some people love it, others don’t.  JavaScript might be a bit messy, but it’s easy to get started with. It’s the programming language that taught me how to program and introduced me to the wider world of programming languages. So, it has a special place in my heart. As I taught myself, I realized that other people were probably facing a lot of the same struggles as I was. And really that is what Compiler Compiler is about.

The first bug of the stream was a test failure around increment/decrement. If you want to catch up on the series so far, the pilot episodes have been posted and you can watch those in the playlist here:
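For context on what an increment/decrement bug can involve: ECMA-262 specifies that `++` and `--` first convert their operand with ToNumeric, so the result is always a Number (or a BigInt), never a string. A quick sketch of the specified behavior:

```javascript
// Per ECMA-262, postfix ++ applies ToNumeric to its operand before
// adding 1, so a numeric string becomes a Number (no string concatenation).
let s = "5";
s++;        // s is now the Number 6
// BigInt operands stay BigInts:
let b = 1n;
b++;        // b is now 2n
```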

Future episodes will be scheduled here with descriptions, in case there is a specific topic you are interested in. Look for blog posts here to wrap up each bug as we go.

What is SpiderMonkey?

SpiderMonkey is the JavaScript engine for Firefox. Along with V8, JSC, and other implementations, it is what makes JavaScript run. Contributing to an engine might be daunting due to the sheer amount of underlying knowledge associated with it.

  • Compilers are well studied, but the materials available to learn about them (such as the Dragon book, and other texts on compilers) are usually oriented to university-setting study — with large dedicated periods of time to understanding and practicing. This dedicated time isn’t available for everyone.
  • SpiderMonkey is written in C++. If you come from an interpreted language, there are a number of tools to learn in order to really get comfortable with it.
  • It is an implementation of the ECMA-262 standard, the standard that defines JavaScript. If you have never read programming language grammars or a standard text, this can be difficult to read.

The Compiler Compiler stream is about making contributing easier. If you are not sure how to get started, this is for you!

The Goals and the Structure

I have two goals for this series. The first, and more important one, is to introduce people to the world of language specification and implementation through SpiderMonkey. The second is to make SpiderMonkey as conformant to the ECMA-262 specification as possible, which luckily is a great framing device for the first goal.

I have organized the stream as a series of segments with repeating elements, every segment consisting of about 5 episodes. A segment will start from the ECMA-262 conformance test suite (Test262) with a test that is failing on SpiderMonkey. We will take some time to understand what the failing test is telling us about the language and the SpiderMonkey implementation. From there we will read and understand the behavior specified in the ECMA-262 text. We will implement the fix, step by step, in the engine, and explore any other issues that arise.

Each episode in a segment will be 1 hour long, followed by free chat for 30 minutes afterwards. If you have questions, feel free to ask them at any time. I will try to post materials ahead of time for you to read about before the stream.

If you missed part of the series, you can join at the beginning of any segment. If you have watched previous segments, then new segments will uncover new parts of the specification for you, and the repetition will make it easier to learn. A blog post summarizing the information in the stream will follow each completed segment.


Last but not least, a few thank yous


I have been fortunate enough to have my colleagues from the SpiderMonkey team and TC39 join the chat. Thank you to Iain Ireland, Jason Orendorff and Gus Caplan for joining the streams and answering questions for people. Thank you to Jan de Mooij and André Bargull for reviews and comments. Also a huge thank you to Sandra Persing, Rainer Cvillink, Val Grimm and Melissa Thermidor for the support in production and in getting the stream going, and to Mike Conley for the streaming tips.

The post Compiler Compiler: A Twitch series about working on a JavaScript engine appeared first on Mozilla Hacks - the Web developer blog.

Mozilla UXRemote UX Collaboration Across Different Time Zones (Yes, It Can Be Done!)

Even in the “before” times, the Firefox UX team was distributed across many different time zones. Some of us already worked remotely from our home offices or co-working spaces. Other team members worked from one of the Mozilla offices around the world.

Map with pins of Firefox UX team members around the world.

The content strategists, designers, and researchers on the Firefox UX team span many time zones.

That said, remote collaboration still has its challenges. When you’re not in the same room with your teammates — or even the same time zone — problem solving and iterating together might not come as naturally. Don’t get discouraged. It can be done, and done well.

We recently built a prototype for the Firefox Private Network extension. Fast iteration on the current experience was needed to introduce new functionality. The challenge? Our content strategist was based in Chicago, and our interaction designer was 7 hours ahead in Berlin. Here’s how we came together to co-design the prototype despite the time zone challenges.

Align on your goals and working process.

You have a deadline. You have a general idea of what you need to do to get there. You’re enthusiastic and ready to go. But wait! Don’t get started just yet.

First, schedule time with your teammate. Grab a cup of coffee (if you’re Betsy) or espresso (if you’re Emanuela). Talk about how you plan to work together. By building a set of shared expectations at the outset, you can minimize confusion and frustration later.

Screenshot of Zoom with Betsy and Emanuela, each holding coffee cups.

Betsy is just starting her day in Chicago while it’s afternoon for Emanuela in Berlin.

Here are a few questions to help guide this conversation.

  • What tools will you use? Is there a shared workspace where you can both contribute?
  • When will you converge and diverge? Find times when your working hours overlap so you can have in-person conversations.
  • How will you communicate asynchronously? Will you keep a running conversation in Slack? Or leave comments for each other on documents or Miro boards?
  • Who will own certain aspects of the work?

Even though we had already worked together, we came up with a set of guidelines to best approach this specific challenge. The important part is that you align as a team about your approach. You can always fine-tune as you go.

The process we defined for our asynchronous collaboration.

Our task was to make significant functionality changes to an existing experience for the Firefox Private Network extension. We agreed on these tools and process.

  1. We’d use Miro, a collaborative whiteboarding tool.
  2. Emanuela would place her first iteration of screens on the board, starting with existing copy.
  3. Betsy would post questions as comments. She would post copy changes as sticky notes.
  4. If the question could be easily answered, Emanuela would reply and resolve it. If it wasn’t a quick answer, we would discuss it at our next in-person check-in.
  5. Emanuela would incorporate copy changes and move the sticky notes off the board to signal this task was done.

Now we were ready to get to work!

Share your ideas early and often to build trust.

It’s easy to quickly build trust when you work together in the same physical space. You can better read how the other person is reacting to your ideas. You ideate, sketch, and design together at the same time.

To best replicate this trust building when working independently in different time zones, we used Miro as a shared collaboration workspace. We agreed to put concepts on the board before they truly felt “done.” This allowed us to share our thinking as it evolved and prevented our efforts from becoming too siloed. When each of us started our work day, the board looked a lot different than it had our previous evening.

We could then bounce ideas back and forth, just as we would have done when sitting in the same room sketching together. We each added stickies, product screenshots, and wireframes to visualize our ideas. Content and design were so closely knit throughout that we found ourselves contributing ideas in both domains.

Ask clarifying questions and over-communicate your thinking.

Because there are 7 hours between us, we weren’t always able to work together at the same time. We both did quite a bit of work on our own time. Then we’d come together and sync up. We converged and diverged frequently throughout the project.

Inevitably, you might be a little confused by a design or copy decision the other person made during their solo work time, then placed on your shared board. This is totally natural. If you don’t understand something, just ask for clarification. Nicely. The next time you meet on Zoom, you can talk it out.

If you decide to take things in a somewhat different direction, take the extra few minutes to write down your rationale and share it with your teammate. It’s the equivalent of explaining your thinking out loud.

Yes, over-communicating takes a bit of extra work, but in our experience it doesn’t slow you down. This exercise actually helps you crystallize your thinking and will come in handy later if you need to explain the “why” behind your decisions to stakeholders.

Allow space for spontaneous chats and check-ins.

We would do a lot of quick check-ins on Zoom, but these were never pre-scheduled or official meetings. During the few hours our time zones overlapped, our chats were mostly spontaneous, quick, and informal. We would often drum up a Zoom meeting directly from Slack.

We usually checked in with each other when Betsy came online in the morning, then again before Emanuela signed off for the day. Even with a tight deadline, we stayed in sync every step of the way and laid the foundation for a strong, trusting partnership.

Mozilla Add-ons BlogFriend of Add-ons: Juraj Mäsiar

Our newest Friend of Add-ons is Juraj Mäsiar! Juraj is the developer of several extensions for Firefox, including Scroll Anywhere, which is part of our Recommended Extensions program. He is also a frequent contributor on our community forums, where he offers friendly advice and input for extension developers looking for help.

Juraj first started building extensions for Firefox in 2016 during a quiet weekend trip to his hometown. The transition to the WebExtensions API was less than a year away, and developers were starting to discuss their migration plans. After discovering many of his favorite extensions weren’t going to port to the new API, Juraj decided to try the migration process himself to give a few extensions a second life.  “I was surprised to see it’s just normal JavaScript, HTML and CSS — things I already knew,” he says. “I put some code together and just a few moments later I had a working prototype of my ScrollAnywhere add-on. It was amazing!”

Juraj immersed himself in exploring the WebExtensions API and developing extensions for Firefox. It wasn’t always a smooth process, and he’s eager to share some tips and tricks to make the development experience easier and more efficient. “Split your code to ES6 modules. Share common code between your add-ons — you can use `git submodule` for that. Automate whatever can be automated. If you don’t know how, spend the time learning how to automate it instead of doing it manually,” he advises. Developers can also save energy by not reinventing the wheel. “If you need a build script, use webpack. Don’t build your own DOM handling library. If you need complex UI, use existing libraries like Vue.js.”

Juraj recommends staying active, saying, “Doing enough sport every day will keep your mind fresh and ready for new challenges.” He stays active by playing VR games and rollerblading.

Currently, Juraj is experimenting with the CryptoAPI and testing it with a new extension that will encrypt user notes and synchronize them with Firefox Sync. The goal is to create a secure extension that can be used to store sensitive material, like a server configuration or a home wifi password.

On behalf of the Add-ons Team, thank you for all of your wonderful contributions to our community, Juraj!

If you are interested in getting involved with the add-ons community, please take a look at our current contribution opportunities.

The post Friend of Add-ons: Juraj Mäsiar appeared first on Mozilla Add-ons Blog.

SeaMonkeySeaMonkey 2.53.3b1 has been released!

Hi everyone,

Just want to make a quick post (which I should’ve done about four hours ago but got sidetracked) to mention that SeaMonkey 2.53.3b1 is released!

Please do check out this new release.  Why?  “Coz the Rock says so.” (some obscure 90’s WWE quote).

Incidentally, the automation is still being worked on, which means that the builds were generated by IanN and frg. That said, I am closer to getting something running. At least the automation builds *are* green. Now it’s the process of releasing…


Blog of DataThis Week in Glean: Project FOG Update, end of H12020

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

It’s been a while since last I wrote on Project FOG, so I figure I should update all of you on the progress we’ve made.

A reminder: Project FOG (Firefox on Glean) is the year-long effort to bring the Glean SDK to Firefox. This means answering such varied questions as “Where are the docs going to live?” (here) “How do we update the SDK when we need to?” (this way) “How are tests gonna work?” (with difficulty) and so forth. In a project this long you can expect updates from time-to-time. So where are we?

First, we’ve added the Glean SDK to Firefox Desktop and include it in Firefox Nightly. This is only a partial integration, though, so the only builtin ping it sends is the “deletion-request” ping when the user opts out of data collection in the Preferences. We don’t actually collect any data, so the ping doesn’t do anything, but we’re sending it and soon we’ll have a test ensuring that we keep sending it. So that’s nice.

Second, we’ve written a lot of Design Proposals. The Glean Team and all the other teams our work impacts are widely distributed across a non-trivial fragment of the globe. To work together and not step on each others’ toes we have a culture of putting most things larger than a bugfix into Proposal Documents which we then pass around asynchronously for ideation, feedback, review, and signoff. For something the size and scope of adding a data collection library to Firefox Desktop, we’ve needed more than one. These design proposals are Google Docs for now, but will evolve to in-tree documentation (like this) as the proposals become code. This way the docs live with the code and hopefully remain up-to-date for our users (product developers, data engineers, data scientists, and other data consumers), and are made open to anyone in the community who’s interested in learning how it all works.

Third, we have a Glean SDK Rust API! Sorta. To limit scope creep we haven’t added the Rust API to mozilla/glean and are testing its suitability in FOG itself. This allows us to move a little faster by mixing our IPC implementation directly into the API, at the expense of needing to extract the common foundation later. But when we do extract it, it will be fully-formed and ready for consumers since it’ll already have been serving the demanding needs of FOG.

Fourth, we have tests. This was a bit of a struggle as the build order of Firefox means that any Rust code we write that touches Firefox internals can’t be tested in Rust tests (they must be tested by higher-level integration tests instead). By damming off the Firefox-adjacent pieces of the code we’ve been able to write and run Rust tests of the metrics API after all. Our code coverage is still a little low, but it’s better than it was.

Fifth, we are using Firefox’s own network stack to send pings. In a stroke of good fortune the application-services team (responsible for fan-favourite Firefox features “Sync”, “Send Tab”, and “Firefox Accounts”) was bringing a straightforward Rust networking API called Viaduct to Firefox Desktop almost exactly when we found ourselves in need of one. Plugging into Viaduct was a breeze, and now our “deletion-request” pings can correctly work their way through all the various proxies and protocols to get to Mozilla’s servers.

Sixth, we have firm designs on how to implement both the C++ and JS APIs in Firefox. They won’t be fully-fledged language bindings the way that Kotlin, Python, and Swift are (( they’ll be built atop the Rust language binding so they’re really more like shims )), but they need to have every metric type and every metric instance that a full language binding would have, so it’s no small amount of work.

But where does that leave our data consumers? For now, sadly, there’s little to report on both the input and output sides: We have no way for product engineers to collect data in Firefox Desktop (and no pings to send the data on), and we have no support in the pipeline for receiving data, not that we have any to analyse. These will be coming soon, and when they do we’ll start cautiously reaching out to potential first customers to see whether their needs can be satisfied by the pieces we’ve built so far.

And after that? Well, we need to do some validation work to ensure we’re doing things properly. We need to implement the designs we proposed. We need to establish how tasks accomplished in Telemetry can now be accomplished in the Glean SDK. We need to start building and shipping FOG and the Glean SDK beyond Nightly to Beta and Release. We need to implement the builtin Glean SDK pings. We need to document the designs so others can understand them, best practices so our users can follow them, APIs so engineers can use them, test guarantees so QA can validate them, and grand processes for migration from Telemetry to Glean so that organizations can start roadmapping their conversions.

In short: plenty has been done, and there’s still plenty to do.

I guess we’d better be about it, then.


(( this is a syndicated copy of the original post ))

Mozilla Add-ons BlogRecommended extensions — recent additions

When the Recommended Extensions program debuted last year, it listed about 60 extensions. Today the program has grown to just over a hundred as we continue to evaluate new nominations and carefully grow the list. The curated collection grows slowly because one of the program’s goals is to cultivate a fairly fixed list of content so users can feel confident the Recommended extensions they install will be monitored for safety and security for the foreseeable future.

Here are some of the more exciting recent additions to the program…

DuckDuckGo Privacy Essentials provides a slew of great privacy features, like advanced ad tracker and search protection, encryption enforcement, and more.

Read Aloud: Text to Speech converts any web page text (even PDFs) to audio. This can be a very useful extension for everyone from folks with eyesight or reading issues to someone who just wants their web content narrated to them while their eyes roam elsewhere.

SponsorBlock for YouTube is one of the more original content blockers we’ve seen in a while. Leveraging crowdsourced data, the extension skips those interruptive sponsored content segments of YouTube clips, addressing the nuisance of this newer, more intrusive type of video advertising.

Metastream Remote has been extremely valuable to many of us during pandemic-related home confinement. It allows you to host streaming video watch parties with friends. Metastream will work with any video streaming platform, so long as the video has a URL (in the case of paid platforms like Netflix, Hulu, or Disney+, they too will work provided all watch party participants have their own accounts).

Cookie AutoDelete summarizes its utility right in the title. This simple but powerful extension will automatically delete your cookies from closed tabs. Customization features include whitelist support and informative visibility into the number of cookies used on any given site.

AdGuard AdBlocker is a popular and highly respected content blocker that works to block all ads—banner, video, pop-ups, text ads—all of it. You may also notice the nice side benefit of faster page loads, since AdGuard prohibits so much content you didn’t want anyway.

If you’re the creator of an extension you feel would make a strong candidate for the Recommended program, or even if you’re just a huge fan of an extension you think merits consideration, please submit nominations to amo-featured [at] mozilla [dot] org. Due to the high volume of submissions we receive, please understand we’re unable to respond to every inquiry.

The post Recommended extensions — recent additions appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgIntroducing the MDN Web Docs Front-end developer learning pathway

The MDN Web Docs Learning Area (LA) was first launched in 2015, with the aim of providing a useful counterpart to the regular MDN reference and guide material. MDN had traditionally been aimed at web professionals, but we were getting regular feedback that a lot of our audience found MDN too difficult to understand, and that it lacked coverage of basic topics.

Fast forward 5 years, and the Learning Area material is well-received. It boasts around 3.5–4 million page views per month, a little under 10% of MDN Web Docs’ monthly web traffic.

At this point, the Learning Area does its job pretty well. A lot of people use it to study client-side web technologies, and its loosely-structured, unopinionated, modular nature makes it easy to pick and choose subjects at your own pace. Teachers like it because it is easy to include in their own courses.

However, at the beginning of the year, this area had two shortcomings that we wanted to improve upon:

  1. We’d gotten significant feedback that our users wanted a more opinionated, structured approach to learning web development.
  2. We didn’t include any information on client-side tooling, such as JavaScript frameworks, transformation tools, and deployment tools widely used in the web developer’s workplace.

To remedy these issues, we created the Front-end developer learning pathway (FED learning pathway).

Structured learning

Take a look at the Front-end developer pathway linked above  — you’ll see that it provides a clear structure for learning front-end web development. This is our opinion on how you should get started if you want to become a front-end developer. For example, you should really learn vanilla HTML, CSS, and JavaScript before jumping into frameworks and other such tooling. Accessibility should be front and center in all you do. (All Learning Area sections try to follow accessibility best practices as much as possible).

While the included content isn’t completely exhaustive, it delivers the essentials you need, along with the confidence to look up other information on your own.

The pathway starts by clearly stating the subjects taught, prerequisite knowledge, and where to get help. After that, we provide some useful background reading on how to set up a minimal coding environment. This will allow you to work through all the examples you’ll encounter. We explain what web standards are and how web technologies work together, as well as how to learn and get help effectively.

The bulk of the pathway is dedicated to detailed guides covering:

  • HTML
  • CSS
  • JavaScript
  • Web forms
  • Testing and accessibility
  • Modern client-side tooling (which includes client-side JavaScript frameworks)

Throughout the pathway we aim to provide clear direction — where you are now, what you are learning next, and why. We offer enough assessments to provide you with a challenge, and an acknowledgement that you are ready to go on to the next section.


MDN’s aim is to document native web technologies — those supported in browsers. We don’t tend to document tooling built on top of native web technologies because:

  • The creators of that tooling tend to produce their own documentation resources.  To repeat such content would be a waste of effort, and confusing for the community.
  • Libraries and frameworks tend to change much more often than native web technologies. Keeping the documentation up to date would require a lot of effort. Alas, we don’t have the bandwidth to perform regular large-scale testing and updates.
  • MDN is seen as a neutral documentation provider. Documenting tooling is seen by many as a departure from neutrality, especially for tooling created by major players such as Facebook or Google.

Therefore, it came as a surprise to some that we were looking to document such tooling. So why did we do it? Well, the word here is pragmatism. We want to provide the information people need to build sites and apps on the web. Client-side frameworks and other tools are an unmistakable part of that. It would look foolish to leave out that entire part of the ecosystem. So we opted to provide coverage of a subset of tooling “essentials” — enough information to understand the tools, and use them at a basic level. We aim to provide the confidence to look up more advanced information on your own.

New Tools and testing modules

In the Tools and testing Learning Area topic, we’ve provided the following new modules:

  1. Understanding client-side web development tools: An introduction to the different types of client-side tools that are available, how to use the command line to install and use tools. This section delivers a crash course in package managers. It includes a walkthrough of how to set up and use a typical toolchain, from enhancing your code writing experience to deploying your app.
  2. Understanding client-side JavaScript frameworks: A useful grounding in client-side frameworks, in which we aim to answer questions such as “why use a framework?”, “what problems do they solve?”, and “how do they relate to vanilla JavaScript?” We give the reader a basic tutorial series in some of the most popular frameworks. At the time of writing, this includes React, Ember, and Vue.
  3. Git and GitHub: Using links to GitHub’s guides, we’ve assembled a quickfire guide to Git and GitHub basics, with the intention of writing our own set of guides sometime later on.

Further work

The intention is not just to stop here and call the FED learning pathway done. We are always interested in improving our material to keep it up to date and make it as useful as possible to aspiring developers. And we are interested in expanding our coverage, if that is what our audience wants. For example, our frameworks tutorials are fairly generic to begin with, to allow us to use them as a test bed, while providing some immediate value to readers.


We don’t want to just copy the material provided by tooling vendors, for reasons given above. Instead we want to listen, to find out what the biggest pain points are in learning front-end web development. We’d like to see where you need more coverage, and expand our material to suit. We would like to cover more client-side JavaScript frameworks (we have already got a Svelte tutorial on the way), provide deeper coverage of other tool types (such as transformation tools, testing frameworks, and static site generators), and other things besides.

Your feedback please!

To enable us to make more intelligent choices, we would love your help. If you’ve got a strong idea about tools or web technologies we should cover on MDN Web Docs, or you think some existing learning material needs improvement, please let us know the details! The best ways to do this are:

  1. Leave a comment on this article.
  2. Fill in our questionnaire (it should only take 5–10 minutes).

So that draws us to a close. Thank you for reading, and for any feedback you choose to share.

We will use it to help improve our education resources, helping the next generation of web devs learn the skills they need to create a better web of tomorrow.

The post Introducing the MDN Web Docs Front-end developer learning pathway appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogImprovements to Statistics Processing on AMO

We’re revamping the statistics we make available to add-on developers on addons.mozilla.org (AMO).

These stats are aggregated from add-on update logs and don’t include any personally identifiable user data. They give developers information about user adoption, general demographics, and other insights that might help them make changes and improvements.

The current system is costly to run, and glitches in the data have been a long-standing recurring issue. We are addressing these issues by changing the data source, which will improve reliability and reduce processing costs.

Usage Statistics

Until now, add-on usage statistics have been based on add-on updates. Firefox checks AMO daily for updates for add-ons that are hosted there (self-distributed add-ons generally check for updates on a server specified by the developer). The server logs for these update requests are aggregated and used to calculate the user counts shown on add-on pages on AMO. They also power a statistics dashboard for developers that breaks down the usage data by language, platform, application, etc.

Stats dashboard showing new version adoption for uBlock Origin

In a few weeks, we will stop using the daily pings as the data source for usage statistics. The new statistics will be based on Firefox telemetry data. As with the current stats, all data is aggregated and no personally identifiable user data is shared with developers.

The data shown on AMO and shared with developers will be essentially the same, but the move to telemetry means that the numbers will change a little. Firefox users can opt out of sending telemetry data, and the way users are counted is different. Our current stats system counts distinct users by IP address, while telemetry uses a per-profile ID. For most add-ons you should expect usage totals to be lower, but usage trends and fluctuations should be nearly identical.

Telemetry data will enable us to show data for add-on versions that are not listed on AMO, so all developers will now be able to analyze their add-on usage stats, regardless of how the add-on is distributed. This also means some add-ons will have higher usage numbers, since the average will be calculated including both AMO-hosted and self-hosted versions.

Other changes that will happen due to this update:

  • The dashboards will only show data for enabled installs. There won’t be a breakdown of usage by add-on status anymore.
  • A breakdown of usage by country will be added.
  • Usage data for our current Firefox for Android browser (also known as Fennec) isn’t included. We’re working on adding data for our next mobile browser (Fenix), currently in development.
  • It won’t be possible to make your statistics dashboard publicly available anymore. Dashboards will only be accessible to add-on developers and admins, starting on June 11. If you are a member of a team that maintains an add-on and you need to access its stats dashboard, please ask your team to add you as an author in the Manage Authors & License page on AMO. The Listed property can be checked off so you don’t show up in the add-on’s public listing page.

We will begin gradually rolling out the new dashboard on June 11. During the rollout, a fraction of add-on dashboards will default to show the new data, but they will also have a link to access the old data. We expect to complete the rollout and discontinue the old dashboards on July 9. If you want to export any of your old stats, make sure you do it before then.

Download Statistics

We plan to make a similar overhaul to download statistics in the coming months. For now they will remain the same. You should expect an announcement around August, when we are closer to switching over to the new download data.

The post Improvements to Statistics Processing on AMO appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyMozilla releases recommendations on EU Data Strategy

Mozilla recently submitted our response to the European Commission’s public consultation on its European Strategy for Data.  The Commission’s data strategy is one of the pillars of its tech strategy, which was published in early 2020 (more on that here). To European policymakers, promoting proper use and management of data can play a key role in a modern industrial policy, particularly as it can provide a general basis for insights and innovations that advance the public interest.

Our recommendations provide insights on how to manage data in a way that protects the rights of individuals, maintains trust, and allows for innovation. In addition to highlighting some of Mozilla’s practices and policies which underscore our commitment to ethical data and working in the open – such as our Lean Data Practices Toolkit, the Data Stewardship Program, and the Firefox Public Data Report – our key recommendations for the European Commission are the following:

  • Address collective harms: In order to foster the development of data ecosystems where data can be leveraged to serve collective benefits, legal and policy frameworks must also reflect an understanding of potential collective harms arising from abusive data practices and how to mitigate them.
  • Empower users: While enhancing data literacy is a laudable objective, data literacy is not a silver bullet in mitigating the risks and harms that would emerge in an unbridled data economy. Data literacy – i.e. the ability to understand, assess, and ultimately choose between certain data-driven market offerings – is effective only if there is actually meaningful choice of privacy-respecting goods and services for consumers. Creating the conditions for privacy-respecting goods and services to thrive should be a key objective of the strategy.
  • Explore data stewardship models (with caution): We welcome the Commission’s exploration of novel means of data governance and management. We believe data trusts and other models and structures of data governance may hold promise. However, there are a range of challenges and complexities associated with the concept that will require careful navigation in order for new data governance structures to meaningfully improve the state of data management and to serve as the foundation for a truly ethical and trustworthy data ecosystem.

We’ll continue to build out our thinking on these recommendations, and will work with the European Commission and other stakeholders to make them a reality in the EU data strategy. For now, you can find our full submission here.


The post Mozilla releases recommendations on EU Data Strategy appeared first on Open Policy & Advocacy.

hacks.mozilla.orgA New RegExp Engine in SpiderMonkey

Background: RegExps in SpiderMonkey

Regular expressions – commonly known as RegExps – are a powerful tool in JavaScript for manipulating strings. They provide a rich syntax to describe and capture character information. They’re also heavily used, so it’s important for SpiderMonkey (the JavaScript engine in Firefox) to optimize them well.

Over the years, we’ve had several approaches to RegExps. Conveniently, there’s a fairly clear dividing line between the RegExp engine and the rest of SpiderMonkey. It’s still not easy to replace the RegExp engine, but it can be done without too much impact on the rest of SpiderMonkey.

In 2014, we took advantage of this flexibility to replace YARR (our previous RegExp engine) with a forked copy of Irregexp, the engine used in V8. This raised a tricky question: how do you make code designed for one engine work inside another? Irregexp uses a number of V8 APIs, including core concepts like the representation of strings, the object model, and the garbage collector.

At the time, we chose to heavily rewrite Irregexp to use our own internal APIs. This made it easier for us to work with, but much harder to import new changes from upstream. RegExps were changing relatively infrequently, so this seemed like a good trade-off. At first, it worked out well for us. When new features like the ‘u’ (Unicode) flag were introduced, we added them to Irregexp. Over time, though, we began to fall behind. ES2018 added four new RegExp features: the dotAll flag, named capture groups, Unicode property escapes, and look-behind assertions. The V8 team added Irregexp support for those features, but the SpiderMonkey copy of Irregexp had diverged enough to make it difficult to apply the same changes.

We began to rethink our approach. Was there a way for us to support modern RegExp features, with less of an ongoing maintenance burden? What would our RegExp engine look like if we prioritized keeping it up to date? How close could we stay to upstream Irregexp?

Solution: Building a shim layer for Irregexp

The answer, it turns out, is very close indeed. As of the writing of this post, SpiderMonkey is using the very latest version of Irregexp, imported from the V8 repository, with no changes other than mechanically rewritten #include statements. Refreshing the import requires minimal work beyond running an update script. We are actively contributing bug reports and patches upstream.

How did we get to this point? Our approach was to build a shim layer between SpiderMonkey and Irregexp. This shim provides Irregexp with access to all the functionality that it normally gets from V8: everything from memory allocation, to code generation, to a variety of utility functions and data structures.

A diagram showing the architecture of Irregexp inside SpiderMonkey. SpiderMonkey calls through the shim layer into Irregexp, providing a RegExp pattern. The Irregexp parser converts the pattern into an internal representation. The Irregexp compiler uses the MacroAssembler API to call either the SpiderMonkey macro-assembler, or the Irregexp bytecode generator. The SpiderMonkey macro-assembler produces native code which can be executed directly. The bytecode generator produces bytecode, which is interpreted by the Irregexp interpreter. In both cases, this produces a match result, which is returned to SpiderMonkey.

This took some work. A lot of it was a straightforward matter of hooking things together. For example, the Irregexp parser and compiler use V8’s Zone, an arena-style memory allocator, to allocate temporary objects and discard them efficiently. SpiderMonkey’s equivalent is called a LifoAlloc, but it has a very similar interface. Our shim was able to implement calls to Zone methods by forwarding them directly to their LifoAlloc equivalents.

Other areas had more interesting solutions. A few examples:

Code Generation

Irregexp has two strategies for executing RegExps: a bytecode interpreter, and a just-in-time compiler. The former generates denser code (using less memory), and can be used on systems where native code generation is not available. The latter generates code that runs faster, which is important for RegExps that are executed repeatedly. Both SpiderMonkey and V8 interpret RegExps on first use, then tier up to compiling them later.

Tools for generating native code are very engine-specific. Fortunately, Irregexp has a well-designed API for code generation, called RegExpMacroAssembler. After parsing and optimizing the RegExp, the RegExpCompiler will make a series of calls to a RegExpMacroAssembler to generate code. For example, to determine whether the next character in the string matches a particular character, the compiler will call CheckCharacter. To backtrack if a back-reference fails to match, the compiler will call CheckNotBackReference.

Overall, there are roughly 40 available operations. Together, these operations can represent any JavaScript RegExp. The macro-assembler is responsible for converting these abstract operations into a final executable form. V8 contains no less than nine separate implementations of RegExpMacroAssembler: one for each of the eight architectures it supports, and a final implementation that generates bytecode for the interpreter. SpiderMonkey can reuse the bytecode generator and the interpreter, but we needed our own macro-assembler. Fortunately, a couple of things were working in our favour.

First, SpiderMonkey’s native code generation tools work at a higher level than V8’s. Instead of having to implement a macro-assembler for each architecture, we only needed one, which could target any supported machine. Second, much of the work to implement RegExpMacroAssembler using SpiderMonkey’s code generator had already been done for our first import of Irregexp. We had to make quite a few changes to support new features (especially look-behind references), but the existing code gave us an excellent starting point.

Garbage Collection

Memory in JavaScript is automatically managed. When memory runs short, the garbage collector (GC) walks through the program and cleans up any memory that is no longer in use. If you’re writing JavaScript, this happens behind the scenes. If you’re implementing JavaScript, though, it means you have to be careful. When you’re working with something that might be garbage-collected – a string, say, that you’re matching against a RegExp – you need to inform the GC. Otherwise, if you call a function that triggers a garbage collection, the GC might move your string somewhere else (or even get rid of it entirely, if you were the only remaining reference). For obvious reasons, this is a bad thing. The process of telling the GC about the objects you’re using is called rooting. One of the most interesting challenges for our shim implementation was the difference between the way SpiderMonkey and V8 root things.

SpiderMonkey creates its roots right on the C++ stack. For example, if you want to root a string, you create a Rooted<JSString*> that lives in your local stack frame. When your function returns, the root disappears and the GC is free to collect your JSString. In V8, you create a Handle. Under the hood, V8 creates a root and stores it in a parallel stack. The lifetime of roots in V8 is controlled by HandleScope objects, which mark a point on the root stack when they are created, and clear out every root newer than the marked point when they are destroyed.

To make our shim work, we implemented our own miniature version of V8’s HandleScopes. As an extra complication, some types of objects are garbage-collected in V8, but are regular non-GC objects in SpiderMonkey. To handle those objects (no pun intended), we added a parallel stack of “PseudoHandles”, which look like normal Handles to Irregexp, but are backed by (non-GC) unique pointers.


None of this would have been possible without the support and advice of the V8 team. In particular, Jakob Gruber has been exceptionally helpful. It turns out that this project aligns nicely with a pre-existing desire on the V8 team to make Irregexp more independent of V8. While we tried to make our shim as complete as possible, there were some circumstances where upstream changes were the best solution. Many of those changes were quite minor. Some were more interesting.

Some code at the interface between V8 and Irregexp turned out to be too hard to use in SpiderMonkey. For example, to execute a compiled RegExp, Irregexp calls NativeRegExpMacroAssembler::Match. That function was tightly entangled with V8’s string representation. The string implementations in the two engines are surprisingly close, but not so close that we could share the code. Our solution was to move that code out of Irregexp entirely, and to hide other unusable code behind an embedder-specific #ifdef. These changes are not particularly interesting from a technical perspective, but from a software engineering perspective they give us a clearer sense of where the API boundary might be drawn in a future project to separate Irregexp from V8.

As our prototype implementation neared completion, we realized that one of the remaining failures in SpiderMonkey’s test suite was also failing in V8. Upon investigation, we determined that there was a subtle mismatch between Irregexp and the JavaScript specification when it came to case-insensitive, non-unicode RegExps. We contributed a patch upstream to rewrite Irregexp’s handling of characters with non-standard case-folding behaviour (like ‘ß’, LATIN SMALL LETTER SHARP S, which gives “SS” when upper-cased).

Our opportunities to help improve Irregexp didn’t stop there. Shortly after we landed the new version of Irregexp in Firefox Nightly, our intrepid fuzzing team discovered a convoluted RegExp that crashed in debug builds of both SpiderMonkey and V8. Fortunately, upon further investigation, it turned out to be an overly strict assertion. It did, however, inspire some additional code quality improvements in the RegExp interpreter.

Conclusion: Up to date and ready to go


What did we get for all this work, aside from some improved subscores on the JetStream2 benchmark?

Most importantly, we got full support for all the new RegExp features. Unicode property escapes and look-behind references only affect RegExp matching, so they worked as soon as the shim was complete. The dotAll flag only required a small amount of additional work to support. Named captures involved slightly more support from the rest of SpiderMonkey, but a couple of weeks after the new engine was enabled, named captures landed too. (While testing them, we turned up one last bug in the equivalent V8 code.) This brings Firefox fully up to date with the latest ECMAScript standards for JavaScript.

We also have a stronger foundation for future RegExp support. More collaboration on Irregexp is mutually beneficial. SpiderMonkey can add new RegExp syntax much more quickly. V8 gains an extra set of eyes and hands to find and fix bugs. Hypothetical future embedders of Irregexp have a proven starting point.

The new engine is available in Firefox 78, which is currently in our Developer Edition browser release. Hopefully, this work will be the basis for RegExps in Firefox for years to come.


The post A New RegExp Engine in SpiderMonkey appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: The Glean SDK and iOS Application Extensions, or A Tale of Two Sandboxes

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

Recently, I had the pleasure of working with our wonderful iOS developers here at Mozilla in instrumenting Lockwise, one of our iOS applications, with the Glean SDK.  At this point, I’ve already helped integrate it with several other applications, all of which went pretty smoothly, and Lockwise for iOS held true to that.  It wasn’t until later, when unexpected things started happening, that I realized something was amiss…

Integrating the Glean SDK with a new product is a fairly straightforward process.  On iOS, it amounts to adding the dependency via Carthage and adding a couple of build steps to get it to do its thing.  After this is done, we generally smoke test the data using the built-in debugging tools.  If everything looks good, we submit a request for data review for collecting the new metrics.  Once a data steward has signed off on our request to collect new data, we can then release a new version of the application with its Glean SDK powered telemetry.  Finally, we collect a few weeks of data to validate that everything looks good (user counts, distribution of locales, and so on) and look for anything that might indicate that the data isn't getting collected as expected, such as holes in sequence numbers or missing fields.  In Lockwise for iOS's case, all of this went just as expected.

One part of the Glean SDK integration that I haven’t mentioned yet is enabling the application in our data ingestion pipeline via the probe-scraper so that we can accept data from it.  On iOS, the Glean SDK makes use of the application bundle identifier to uniquely identify the app to our pipeline, so enabling the app means letting the pipeline know about this id so that it won’t turn away the data.  This identifier also determines the table that the data ultimately ends up in, so it’s a key identifier in the process.

So, here’s where I learned something new about iOS architecture, especially as it relates to embedded application extensions.  Application extensions are a cool and handy way of adding additional features and functionality to your application in the Apple ecosystem.  In the case of Lockwise, they are using a form of extension that provides credentials to other applications.  This allows the credentials stored in Lockwise to be used to authenticate in websites and other apps installed on the device.  I knew about extensions but hadn’t really worked with them much until now, so it was pretty interesting to see how it all worked in Lockwise.

Here’s where a brick smacks into the story.  Remember that bundle identifier that I said was used to uniquely identify the app?  Well, it turns out that application extensions in iOS modify this a bit, adding to it to uniquely identify themselves!  We realized this when our pipeline started to reject this new identifier, because it wasn’t an exact match for the identifier that we expected and had allowed through.  The id we expected was org-mozilla-ios-lockbox, but the extension was reporting org-mozilla-ios-Lockbox-CredentialProvider.  Using a different bundle identifier totally makes sense, since extensions run as separate processes within their own application sandbox containers.  The OS needs to see them differently because an extension can run even if the base application isn’t running.  Unfortunately, the Glean SDK is purposefully built to not care about, or even know about, different processes, so we had a bit of a blind spot in the application extension.  Not only that, but remember I mentioned that the extension’s storage container is a separate sandbox from the base application?  Well, since the extension runs in a different process from the base application and has separate storage, the Glean SDK running in the extension acted as if the extension were a completely separate application.  With separate storage, it happily generated a different unique identifier for the client, which did not match the id generated for the base application.  So there was no way to attribute the information in the extension to the base application that contained it: the ingestion pipeline saw these as separate applications, with no way to associate the client ids between the two.  These were two sandboxes that just couldn’t interact with each other.
To be fair, Apple does provide a way to share data between extensions and applications, but it requires creating a completely separate shared sandbox, and this doesn’t solve the problem that the same Glean SDK instance just shouldn’t be used directly by multiple processes at the same time.
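The rejection comes down to a strict equality check on the application id. A minimal sketch of that behavior (the names and structure here are illustrative, not the actual probe-scraper or pipeline code):

```javascript
// Hypothetical sketch of the pipeline's allow-list check: only exact
// matches of known bundle identifiers are accepted.
const allowedAppIds = new Set(['org-mozilla-ios-lockbox']);

function acceptsPing(appId) {
  return allowedAppIds.has(appId);
}

acceptsPing('org-mozilla-ios-lockbox');                    // true
acceptsPing('org-mozilla-ios-Lockbox-CredentialProvider'); // false: rejected
```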

Well, that wasn’t ideal, to say the least, so we began an investigation to determine what course of action we should (or could) take.  We went back and forth over the details, but ultimately we determined that the Glean SDK shouldn’t know about processes, and that there wasn’t much we could do aside from blocking it from running in extensions and documenting that it is up to the Glean SDK-using application to ensure that metrics are only collected by the main application process.  I was a bit sad that there wasn’t much we could do to make the user experience better for Glean SDK consumers, but sometimes you just can’t predict the challenges you will face when implementing a truly cross-platform thing.  I still hold out hope that a way will open up to make this easier, but the lesson I learned from all of this is that sometimes you can’t win, and it’s important to stick to the design and do the best you can.

hacks.mozilla.orgNew in Firefox 77: DevTool improvements and web platform updates

Note: This post is also available in: 简体中文 (Chinese (Simplified)), 繁體中文 (Chinese (Traditional)), and Español (Spanish).

A new stable Firefox version is rolling out. Version 77 comes with a few new features for web developers.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tools improvements

Let’s start by reviewing the most interesting Developer Tools improvements and additions for 77. If you’d like to see more of the work in progress and give feedback, get Firefox DevEdition for early access.

Faster, leaner JavaScript debugging

Large web apps can pose a challenge for DevTools, as bundling, live reloading, and dependencies all need to be handled quickly and correctly. With 77, Firefox’s Debugger learned a few more tricks, so you can focus on debugging.

After improving debugging performance over many releases, we ran out of actionable, high-impact bugs. So to find the last remaining bottlenecks, we have been actively reaching out to our community. Thanks to the many detailed reports we received, we were able to land performance improvements that not only speed up pausing and stepping but also cut down on memory usage over time.

JavaScript & CSS Source Maps that just work

Source maps were part of this outreach and saw their own share of performance boosts. Some cases of inline source maps improved 10x in load time. More importantly though, we improved reliability for many more source map configurations. Thanks to your reports about specific cases of slightly-incorrect generated source maps, we were able to tweak the fallbacks for parsing and mapping. Overall, projects that previously failed to load your original CSS and JavaScript/TypeScript code should now just work.

Step JavaScript in the selected stack frame

Stepping is a big part of debugging, but it’s not always intuitive. You can easily lose your way and overstep when moving in and out of functions, and between libraries and your own code.

The debugger will now respect the currently selected stack when stepping. This is useful when you’ve stepped into a function call or paused in a library method further down in the stack. Just select the right function in the Call Stack to jump to its currently paused line and continue stepping from there.

Navigating the call stack and continuing stepping further in that function

We hope that this makes stepping through code execution more intuitive and less likely for you to miss an important line.

Overflow settings for Network and Debugger

To make for a leaner toolbar, Network and Debugger follow Console’s example in combining existing and new checkboxes into a new settings menu. This puts options like “Disable JavaScript” right at your fingertips and leaves room for more powerful options in the future.

Overflow settings menus in both Network and Debugger toolbar.

Pause on property read & write

Understanding state changes is a problem that is often investigated by console logging or debugging. Watchpoints, which landed in Firefox 72, can pause execution when a script reads or writes a property. Right-click a property in the Scopes panel while paused to attach them.

Right-click on object properties in Debugger's Scopes to break on get/set

Contributor Janelle deMent made watchpoints easier to use with a new option that combines get/set, so any script reference will trigger a pause.

Improved Network data preview

Step by step over each release, the Network details panels have been rearchitected. The old interface had event handling bugs that made selecting and copying text too flaky. While we were at it, we also improved performance for larger data entries.

This is part of a larger interface cleanup in the Network panel, which we have been surveying our community about via @FirefoxDevTools Twitter and Mozilla’s Matrix community. Join us there to have your voice heard. More parts of the Network-panel sidebar redesign are also available in Firefox DevEdition for early access.

Web platform updates

Firefox 77 supports a couple of new web platform features.


Firefox 67 introduced String#matchAll, a more convenient way to iterate over regex result matches. In Firefox 77 we’re adding more comfort: String#replaceAll helps with replacing all occurrences of a string – an operation that’s probably one of those things you have searched for a thousand times in the past already (thanks StackOverflow for being so helpful!).

Previously, when trying to replace all cats with dogs, you had to use a global regular expression:

'cats chase cats'.replace(/cats/g, 'dogs'); // "dogs chase dogs"

Or, you could use split and join:

'cats chase cats'.split('cats').join('dogs'); // "dogs chase dogs"

Now, thanks to String#replaceAll, this becomes much more readable:

'cats chase cats'.replaceAll('cats', 'dogs'); // "dogs chase dogs"

IndexedDB cursor requests

Firefox 77 exposes the request that an IDBCursor originated from as an attribute on that cursor. This is a nice improvement that makes it easier to write things like wrapper functions that “upgrade” database features. Previously, to do such an upgrade on a cursor you’d have to pass in the cursor object and the request object that it originated from, as the former is reliant on the latter. With this change, you now only need to pass in the cursor object, as the request is available on the cursor.
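A sketch of the one-argument wrapper pattern this enables. The cursor and request objects below are plain mocks standing in for a real IDBCursor and IDBRequest (a real page would get a cursor from IDBObjectStore.openCursor() inside an onsuccess handler); the point is only the new signature:

```javascript
// Before Firefox 77, an "upgrade" helper needed both the cursor and the
// request it originated from. With cursor.request, one argument suffices.
function originatingRequest(cursor) {
  return cursor.request; // the IDBRequest the cursor came from
}

// Illustrative mocks, not real IndexedDB objects:
const mockRequest = { readyState: 'done' };
const mockCursor = { request: mockRequest };

originatingRequest(mockCursor) === mockRequest; // true
```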

Extensions in Firefox 77: Fewer permission requests and more

Since Firefox 57, users see the permissions an extension wants to access during installation or when any new permissions are added during an update. The frequency of these prompts can be overwhelming, and failure to accept a new permission request during an extension’s update can leave users stranded on an old version. We’re making it easier for extension developers to avoid triggering as many prompts by making more permissions available as optional permissions. Optional permissions don’t trigger a permission request upon installation or when they are added to an extension update, and can also be requested at runtime so users see what permissions are being requested in context.
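In a WebExtension, an optional permission (listed under `optional_permissions` in manifest.json) can be requested in context along these lines. This is a hedged sketch: the `permissions` parameter defaults to the real `browser.permissions` API and is injectable only so the sketch can run outside a browser.

```javascript
// Ask for the "tabs" permission at the moment it is needed, e.g. from a
// click handler (runtime permission requests must come from a user action),
// so the user sees the prompt in context rather than at install/update time.
async function ensureTabsPermission(permissions = browser.permissions) {
  const granted = await permissions.request({ permissions: ['tabs'] });
  return granted; // true if the user accepted the in-context prompt
}
```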

Visit the Add-ons Blog to see more updates for extensions in Firefox 77!


These are the highlights of Firefox 77! Check out the new features and have fun playing! As always, feel free to give feedback and ask questions in the comments.

The post New in Firefox 77: DevTool improvements and web platform updates appeared first on Mozilla Hacks - the Web developer blog.

about:communityFirefox 77 new contributors

With the release of Firefox 77, we are pleased to welcome the 38 developers who contributed their first code change to Firefox in this release, 36 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla VR BlogWebXR Viewer 2.0 Released

We are happy to announce that version 2.0 of WebXR Viewer, released today, is the first web browser on iOS to implement the new WebXR Device API, enabling high-performance AR experiences on the web that don't share pictures of your private spaces with third-party Javascript libraries and websites.

It's been almost a year since the previous release (version 1.17) of our experimental WebXR platform for iOS, and over the past year we've been working on two major changes to the app:  (1) we updated the Javascript API to implement the official WebXR Device API specification, and (2) we ported our ARKit-based WebXR implementation from our minimal single-page web browser to the full-featured Firefox for iOS code-base.

WebXR Device API: Past, Present, and Future

The first goal of this release is to update the browser's Javascript API for WebXR to support the official WebXR Device API, including an assortment of approved and proposed AR features. The original goal of the WebXR Viewer was to give us an iOS-based platform to experiment with AR features for WebXR, and we've written previous posts about experimenting with privacy and world structure, computer vision, and progressive and responsive WebXR design. We would like to continue those explorations in the context of the emerging standard.

We developed the API used in the first version of the WebXR Viewer more than 3 years ago (as a proposal for how WebXR might combine AR and VR; see the WebXR API proposal here, if you are interested), and then updated it a year ago to match the evolving standard. While very similar to the official API, this early version is incompatible with the final standard in some substantial ways. Now that WebXR is appearing in mainstream browsers, it's confusing to developers to have an old, incompatible API out in the world.

Over the past year, we rebuilt the API to conform to the official spec, and either updated our old API features to match current proposals (e.g., anchors, hit testing, and DOM overlay), marked them more explicitly as non-standard (e.g., by adding a nonStandard_ prefix to the method names), or removed them from the new version (e.g., camera access). Most WebXR AR examples on the web now work with the WebXR Viewer, such as the "galaxy" example from the WebXR Samples repository shown in the banner above.
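A minimal sketch of what those examples do to start an AR session with the official WebXR Device API. In a real page, `xr` would simply be `navigator.xr`; it is a parameter here only so the sketch can run and be tested outside a browser, and the 'hit-test' feature is one of the proposed AR-module features mentioned above:

```javascript
// Request an immersive AR session via the official WebXR Device API.
async function startAR(xr) {
  if (!xr || !(await xr.isSessionSupported('immersive-ar'))) {
    return null; // WebXR AR is not available
  }
  return xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });
}
```

The returned session drives rendering via `session.requestAnimationFrame`, where each frame's viewer pose is queried against a reference space.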

(The WebXR Viewer's Javascript API is entirely defined in the Javascript library in the webxr-ios-js repository linked above, and the examples are there as well; the library is loaded on demand from the github master branch when a page calls one of the WebXR API calls. You can build the API yourself, and change the URL in the iOS app settings to load your version instead of ours, if you want to experiment with changes to the API.  We'd be happy to receive PRs and issues at that github repository.)

Standards-based version of our old "Peoples" demo (left); the Three.js WebXR AR Paint demo (center); Brandon Jones' XR Dinosaurs demo (right)

In the near future, we're interested in continuing to experiment with more advanced AR capabilities for WebXR, and seeing what kinds of experimentation developers do with those capabilities. Most AR use cases need to integrate virtual content with meaningful things in the world;  putting cute dinosaurs or superheroes on flat surfaces in the world makes for fun demos that run anywhere, but genuinely useful consumer and business applications need to sense, track, and augment "people, places, and things" and have content that persists over time. Enhancing the Immersive Web with these abilities, especially in a "webby" manner that offers privacy and security to users, is a key area Mozilla will be working on next. We need to ensure that there is a standards-based solution that is secure and private, unlike the proprietary solutions currently in the market that are siloed to create new, closed markets controlled by single companies.

While purely vision-based AR applications (implemented inside web pages using direct access to camera video) are showing great engagement, failing to use the underlying platform technology limits their capabilities and consumes so much CPU and GPU that they can only run for a few seconds to minutes before thermal throttling renders them unusable (or drains your battery). WebXR offers the possibility for the underlying vision-based sensing techniques to be implemented natively, so they can take advantage of the underlying platform APIs (both to maximize performance and to minimize CPU, GPU, and battery use).

It is too early to standardize some of these capabilities and implement them in an open, cross-platform way (e.g., persistent anchors), but others could be implemented now (e.g., the face and image tracking examples shown below). In the newly announced Firefox Reality for Hololens2, we're experimenting with exposing hand tracking into WebXR for input, a key sort of sensing that will be vital for head-worn-display-based AR (Oculus is also experimenting with exposing hand tracking into VR in the Oculus Browser on Quest). APIs like ARKit and Vuforia let you detect and track faces, images, and objects in the world, capabilities that we explored early on with the WebXR Viewer. We've kept versions of the APIs we developed in the current WebXR Viewer, and are keen to see these capabilities standardized in the future.

Leveraging ARKit's face-tracking to augment the author with a pair of sunglasses (left); using ARKit's image tracker to put a model of a duck on a printed image of the Hubs homepage (right)

Integrating with a Full-featured Web Browser

The second change will be immediately noticeable: when you launch the app, you'll be greeted by the familiar Firefox for iOS interface, and be able to take advantage of many of the features of its namesake (tabs, history, private browsing, and using your Firefox account to sync between devices, to name a few).  While not all Firefox features work, such as send-to-device from Firefox, the experience of using the WebXR Viewer should be more enjoyable and productive.

The familiar Firefox for iOS new page in the WebXR Viewer app (left); the WebXR Viewer samples page containing the examples from our Javascript API page (center); and the new "..." menu options for WebXR pages (right).

Our goal for moving this code to the Firefox code-base wasn't just to create a better browsing experience for the WebXR Viewer, though.  This is an experimental app, after all, aimed at developers hoping to explore web-based AR on iOS, and we don't plan on supporting it as a separate product over the long term.  But Apple hasn't shown any sign of implementing WebXR, and it's critically important for the success of the immersive web that an implementation exists on all major platforms. Toward this end, we moved this implementation into the Firefox for iOS code-base to see how this approach to implementing WebXR would behave inside Firefox, with an eye towards (possibly) integrating these features into Firefox for iOS in the future.  Would the WebXR implementation work at all? (Yes.) Would it perform better or worse than in the old app? (Better, it turns out!)  What UI and usability issues would arise? (Plenty.)  While there is still plenty of UI work to do before moving this to a mainstream browser, we're quite happy with the performance; WebXR demos run better in this version of the app than they did in the previous one, and the impact on non-WebXR web pages seems minimal.

We'd love for you to download the new version of the app and try out your WebXR content on it. If you do, please let us know what your experience is.