Blog of Data: This Week in Glean: Proposals for Asynchronous Design

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

At last count there are 14 proposals for Firefox on Glean, the effort that, last year, brought the Glean SDK to Firefox Desktop. What in the world is a small, scrappy team in a small, scrappy company like Mozilla doing wasting so much time with old-school Waterfall Model overhead?!

Because it’s cheaper than the alternative.

Design is crucial before tackling difficult technological problems that affect multiple teams. At the very least you’re writing an API and you need to know what people want to do with it. So how do you get agreement? How do you reach the least bad design in the shortest time?

We in the Data Org use a Proposal Process. It’s a very lightweight thing. You write down in a (sigh) Google Doc what it is you’re proposing (we have a snazzy template), attach it to a bug, then needinfo folks who should look at it. They use Google Docs’ commenting and suggested changes features to improve the proposal in small ways and discuss it, and use Bugzilla’s comments and flags to provide overall feedback on the proposal itself (like, should it even exist) and to ensure they keep getting reminded to look at the proposal until the reviewer’s done reviewing. All in all, it’ll take a week or two of part-time effort to write the proposal, find the right people to review it, and then incorporate the feedback and consider it approved.

(( Full disclosure, the parts involving Bugzilla are my spin on the Proposal Process. It just says you should get feedback, not how. ))

Proposals vs Meetings

Why not use a meeting? Wouldn’t that be faster?

Think about who gets to review things in a meeting as a series of filters. First and foremost, only those who attend can review. I’ve talked before about how distributed across the globe my org is, and a lot of the proposals in Project FOG also needed feedback from subject matter experts across Mozilla as a whole (we are not jumping into the XPIDL swamp without a guide). No way could I find a space in all those calendars, assuming that any of them even overlap due to time zones.

Secondly, with a defensive Proposer, feedback will be limited to those reviewers they can’t overpower in a meeting. So if someone wants to voice a subtle flaw in the C++ Metrics API Design (like how I forgot to include any details about how to handle Labeled Metrics), they have to first get me to stop talking. And even though I’m getting better at that (still a ways to go), if you are someone who doesn’t feel comfortable providing feedback in a meeting (perhaps you’re new and hesitant, or you only kinda know about the topic and are worried about looking foolish, or you are generally averse to speaking in front of others) it won’t matter how quiet I am. The proposal won’t be able to benefit from your input.

Thirdly, some feedback can’t be thought of in a meeting. There’s a rough-and-readiness, an immediacy, to feedback in a meeting setting. You’re thinking on your feet, even if the Proposal and meeting agenda are set well in advance. Some critiques need time to percolate, or additional critical voices to bounce off of. Meetings aren’t great for that unless you can get everyone in a room for a day. Pandemic aside, when was the last time you all had that much time?

Proposal documents are just so much more inclusive than design meetings. You probably still want to have a meeting for early prototyping with a small group of insiders, and another at the end to coax out any lingering doubts… but having the main review stages be done asynchronously to your reviewers’ schedules allows you to include a wider variety of voices. You wouldn’t feel comfortable asking a VP to an hour-long design meeting, but you might feel comfortable sending the doc in an email for visibility.

Asynchronicity For You and Me

On top of being more inclusive, proposals are also more respectful. I don’t know what your schedule is today. I don’t know what life you’re living. But I can safely assume that, unless you’re on vacation, you’ll have enough time between now and, say, next Friday to skim a doc and see if there’s anything foolish in it you need to stop me from doing. Or think of someone else who I didn’t think of who should really take a look.

And by setting a feedback deadline, you the Proposer are setting yourself free. You’ll be getting emails as feedback comes in. You’ll be responding to questions, accepting and rejecting changes, and having short little chats. But you can handle that in bite sized chunks on your own schedule, asynchronously, and give yourself the freedom to schedule synchronous work and meetings in the meantime.

Proposal Evolution

Name a Design that was implemented exactly as written. Go on, I’ll wait.

No? Can’t think of one? Neither can I.

Designs (and thus Proposals) are always incomplete. They can’t take into consideration everything. They’re necessarily at a higher level than the implementation. So in some way, the implementation is the evolution of the Design. But implementations lose the valuable information about Why and How that was so important to set down in the Design. When someone new comes to the project and asks you why we implemented it this way, will you have to rely on the foggy remembrance of oral organizational history? Or will you find some way of keeping an objective record?

Only now have we started to develop the habit of indexing and archiving Proposals internally. That’s how I know there have been fourteen Project FOG proposals (so far). But I don’t think a dusty wiki is the correct place for them.

I think, once accepted, Proposals should evolve into Documentation. Documentation is a Design adjusted by the realities encountered during implementation and maintained by users asking questions. Documentation is a living document explaining Why and How, kept in sync with the implementation’s explanation of What.

But Documentation is a discussion for another time. Reference Documentation vs User Guides vs Design Documentation vs Marketing Copy vs… so much variety, so little time. And I’ve already written too much.


(( This post is a syndicated version of the original. ))

The Mozilla Blog: Reimagine Open: Building a Healthier Internet

Does the “openness” that made the internet so successful also inevitably lead to harms online? Is an open internet inherently a haven for illegal speech, for eroding privacy and security, or for inequitable access? Is “open” still a useful concept as we chart a future path for the internet?

A new paper from Mozilla seeks to answer these questions. Reimagine Open: Building Better Internet Experiences explores the evolution of the open internet and the challenges it faces today. The report catalogs findings from a year-long project of outreach led by Mozilla’s Chairwoman and CEO, Mitchell Baker. Its conclusion: We need not break faith with the values embedded in the open internet. But we do need to return to the original conceptions of openness, now eroded online. And we do need to reimagine the open internet, to address today’s need for accountability and online health.

As the paper outlines, the internet’s success is often attributed to a set of technical design choices commonly labelled as “the open internet.” These features – such as decentralized architectures, end-to-end networks, open standards, and open source software – powered the internet’s growth. They also supported values of access, opportunity, and empowerment for the network’s users. And they were aided by accountability mechanisms that checked bad behavior online.

Today’s internet has moved away from these values. The term “open” itself has been watered down, with open standards and open source software now supplanted by closed platforms and proprietary systems. Companies pursuing centralization and walled gardens claim to support “openness.” And tools for online accountability have failed to scale with the incredible diversity of online life. The result is an internet that we know can be better.

Reimagine Open concludes with a set of ideas about how society can take on the challenges of today’s internet, while retaining the best of openness. These include new technical designs and a recommitment to open standards and open software; stronger user demand for healthier open products online; tougher, smarter government regulation; and better online governance mechanisms. Short case studies demonstrate how reimagine open can offer practical insights into tough policy problems.

Our hope is that Reimagine Open is a jumping-off point for the continuing conversation about the internet’s future. Open values still offer powerful insights to address policy challenges, like platform accountability, or digital identity. Openness can be an essential tool in building a new conception of local, open innovation to better serve the Global South. For a deeper look at these ideas and more, please visit the Reimagine Open Project Wiki, and send us your thoughts. Together we can build a reimagined open internet that will act as a powerful force for human progress online.

The post Reimagine Open: Building a Healthier Internet appeared first on The Mozilla Blog.

The Mozilla Blog: Why getting voting right is hard, Part IV: Absentee Voting and Vote By Mail

This is the fourth post in my series on voting systems. Part I covered requirements and then Part II and Part III covered in-person voting using paper ballots. However, paper ballots don’t need to be voted in person; it’s also possible to have people mail in their ballots, in which case they can be counted the same way as if they had been voted in person.

Mail-in ballots get used in two main ways:

  • Absentee Ballots: Inevitably, some voters will be unavailable on election day. Even with early voting, some voters (e.g., students, people living overseas, members of the military, people on travel, etc.) might be out of town for weeks or months. In many cases, some or all these voters are still eligible to vote in the jurisdiction in which they are nominally residents even if they aren’t physically present. The usual procedure is to mail them a ballot and let them mail it back in.
  • Vote By Mail (VBM): Some jurisdictions (e.g., Oregon) have abandoned in-person voting entirely; they mail every registered voter a ballot and have them mail it back.

From a technical perspective, absentee ballots and vote-by-mail work the same way; it’s just a matter of which sets of voters vote in person and which don’t. These lines also blur some in that some jurisdictions require a reason to vote absentee whereas some just allow anyone to request an absentee ballot (“no-excuse absentee”). Of course, in a vote-by-mail only jurisdiction then voters don’t need to take any action to get mailed a ballot. For convenience, I’ll mostly be referring to all of these procedures as mail-in ballots.

As mentioned above, counting mail-in ballots is the same as counting in-person ballots. In fact, in many cases jurisdictions will use the same ballots in each case, so they can just hand count them or run them through the same optical scanner as they would with in-person voted ballots, which simplifies logistics considerably. The major difference between in-person and mail-in voting is the need for different mechanisms to ensure that only authorized voters vote (and that they only vote once). In an in-person system, this is ensured by determining eligibility when voters enter the polling place and then giving each voter a single ballot, but this obviously doesn’t work in the case of mailed-in ballots — it’s way too easy for an attacker to make a pile of fake ballots and just mail them in — so something else is needed.

Authenticating Ballots

As with in-person voting, the basic idea behind securing mail-in ballots is to tie each ballot to a specific registered voter and ensure that every voter votes once.

If we didn’t care about the secrecy of the ballot, the easy solution would be to give every voter a unique identifier (Operationally, it’s somewhat easier to instead give each ballot a unique serial number and then keep a record of which serial numbers correspond to each voter, but these are largely equivalent). Then when the ballots come in, we check that (1) the voter exists and (2) the voter hasn’t voted already. When put together, these checks make it very difficult for an attacker to make their own ballots: if they use non-existent serial numbers, then the ballots will be rejected, and if they use serial numbers that correspond to some other voter’s ballot then they risk being caught if that voter voted. So, from a security perspective, this works reasonably well, but it’s a privacy disaster because it permanently associates a voter’s identity with the contents of their ballots: anyone who has access to the serial number database and the ballots can determine how individual voters voted.

The solution turns out to be to authenticate the envelopes, not the ballots. The way that this works is that each voter is sent a non-unique ballot (i.e., one without a serial number) and then an envelope with a unique serial number. The voter marks their ballot, puts it in the envelope and mails it back. Back at election headquarters, election officials perform the two checks described above. If they fail, then the envelope is set aside for further processing. If they succeed, then the envelope is emptied — checking that it only contains one ballot — and put into the pile for counting.
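As a concrete illustration, the two checks on a returned envelope can be sketched as follows. This is a deliberately simplified model, not any real election system's code; the function and variable names are hypothetical:

```python
# Hypothetical sketch of validating returned mail-in envelopes.
# serial_to_voter maps each issued envelope serial number to a voter ID.

def process_envelope(serial, serial_to_voter, already_voted, counting_pile, rejected):
    voter = serial_to_voter.get(serial)
    if voter is None:
        # Check 1: the serial must correspond to a real issued envelope.
        rejected.append(serial)
    elif voter in already_voted:
        # Check 2: that voter must not have voted already.
        rejected.append(serial)
    else:
        already_voted.add(voter)
        # The ballot is removed from the envelope and anonymized before counting,
        # so the counting pile carries no link back to the voter.
        counting_pile.append("anonymous ballot")

serial_to_voter = {"S-001": "alice", "S-002": "bob"}
already_voted, counting_pile, rejected = set(), [], []

for serial in ["S-001", "S-001", "S-999", "S-002"]:
    process_envelope(serial, serial_to_voter, already_voted, counting_pile, rejected)

print(len(counting_pile))  # 2 valid ballots
print(rejected)            # duplicate "S-001" and unknown "S-999" set aside
```

Note how both attack paths from the text fail here: a made-up serial fails check 1, and reusing a legitimate voter's serial risks tripping check 2 if that voter votes.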

This procedure provides some level of privacy protection: there’s no single piece of paper that has both the voter’s identity and their vote, which is good, but at the time when election officials open the ballot they can see both the voter’s identity and the ballot, which is bad. With some procedural safeguards it’s hard to mount a large-scale privacy violation: you’re going to be opening a lot of ballots very quickly and so keeping track of a lot of people is impractical, but an official could, for instance, notice a particular person’s name and see how they voted.[1] Some jurisdictions address this with a two-envelope system: the voter marks their ballot and puts it in an unmarked “secrecy envelope” which then goes into the marked envelope that has their identity on it. At election headquarters officials check the outer envelope, then open it and put the sealed secrecy envelope in the pile for counting. Later, all of the secrecy envelopes are opened and counted; this procedure breaks the connection between the voter’s identity and their ballot.[2]

Signature Matching

The basic idea behind the system described above is to match ballots mailed out (which are tied to voter registration) to ballots mailed in. This works as long as there’s no opportunity for attackers to substitute their own ballots for those of a legitimate voter. There are a number of ways that might happen, including:

  • Stealing the ballot in the mail, either on the way out to the voter or when it is sent back to election headquarters. Stealing the ballot on the way back works a lot better because if voters don’t receive their ballots they might ask for another one, in which case you have duplicates.
  • Inserting fake ballots for people who you don’t expect to vote. This is obviously somewhat risky, as they might decide to vote and then you would have a duplicate, but many people vote infrequently and therefore have a reduced risk of creating a duplicate ballot.

Again, I’m assuming that the attacker can make their own ballots and envelopes. This isn’t trivial, but neither is it impossible, especially for a state-level actor.

Some jurisdictions attempt to address this form of attack by requiring voters to sign their ballot envelopes. Those envelopes can then be compared to the voter’s known signature (for instance, on their voter registration card). Some jurisdictions go further, requiring a witness to sign the envelope as well (affirming the identity of the person signing the ballot), requiring the voter to include a copy of their ID, or even requiring the ballot envelope to be notarized. The requirements vary radically between jurisdictions (see here for a table of how this works in each state). To the best of my knowledge, there’s no real evidence that this kind of signature validation provides significantly more defense against fraud. From an analytic perspective, the level of protection depends on the capabilities of the attacker and the detection methods used by election officials. For instance, an attacker who steals your ballot on the way back could potentially try to duplicate your signature (after all, it’s on the envelope!), which seems reasonably likely to work, but an attacker who is just trying to impersonate people who didn’t vote might have some trouble because they wouldn’t know what your signature looked like.

Ballots with Errors

It’s not uncommon for the returned ballots to have some kind of error, for instance:

  • Voter used their own envelope instead of the official envelope
  • Voter didn’t use the secrecy envelope
  • Voter didn’t sign the envelope
  • Voter signature doesn’t match
  • Envelope not notarized
  • Overvotes
  • Damaged ballots (torn ballots, ballots with stains, etc.)

Each of these can potentially lead to a voter’s ballot being rejected. Moreover, the more requirements a voter’s ballot has to meet, the greater chance that it will be rejected, so there is a need to balance the additional security and privacy provided by extra requirements against the additional risk of rejecting ballots which are actually legitimate, but just nonconformant. Different jurisdictions have made different tradeoffs here.

Just because a ballot has a problem doesn’t mean that the voter is necessarily out of luck: some jurisdictions have what’s called a cure process in which the election officials reach out to the voter whose name is on the ballot and offer them an opportunity to fix their ballot, with the fix depending on the jurisdiction and the precise problem. Some jurisdictions just discard the ballot, for example in the case of “naked ballots” — ballots where voters did not use the inner secrecy envelope.

Of course, not all problems can be cured. In particular, once the ballot has been disassociated from the envelope, then there’s no way to go back to the voter and get them to fix an error such as an overvote. This issue isn’t unique to vote-by-mail, however: it also occurs with voting systems using central-count optical scanners (see Part III). In general, if the ballots are anonymized before processing, then it’s not really possible to fix any errors in them; you just need to process them the best you can.

Ballot rejection is an opportunity for some level of insider attack: although voting officials do not know how individuals voted, they might be able to know which voters are likely to vote a certain way, perhaps by looking at their address or party affiliation (this is easier if the voter’s name is on the ballot, not just a serial number) and more strictly enforce whatever security checks are required for ballots they think will go the wrong way. Having external observers who are able to ensure uniform standards can significantly reduce the risk here.

Voting Twice

There are a number of situations in which multiple ballots might have been or will be cast for the same voter. A number of these are legitimate, such as a voter changing their mind after they voted by mail and deciding to vote in person — perhaps because they changed their mind about candidates or because they are worried their absentee ballot will not be processed in time — but of course they could also be the result of error or fraud. There are two basic ways in which double voting shows up:

  • Two mail-in ballots
  • One mail-in ballot and one in-person ballot

In the case of two mail-in ballots, it’s most likely that the first ballot has already been taken out of the envelope, so there’s no real way not to count it. All you can do is not count the second ballot. Note that this means that if an attacker manages to successfully submit a ballot for you and gets it in before you, then their vote will count and yours will not. Fortunately, this kind of fraud is rare and detectable and once detected can be investigated. I’m not aware of any election where fake mail-in ballots have materially impacted the results.

The more complicated case is when a voter has had a mail-in ballot sent to them but then decides to vote in person, which can happen for a number of reasons. For instance, the ballot might have been lost in the mail (in either direction). This situation is different because we need to prevent double voting, but poll workers don’t know whether the voter also submitted their ballot by mail. If the voter were allowed to vote as usual, you could end up in a situation where the mail-in ballot had already been processed (at least as far as removing it from the envelope) and there was no way to remove either ballot, because both are unidentified ballots mixed in with other ballots. Instead, the standard process is to require the voter to fill in what’s called a provisional ballot, which is physically like a mail-in ballot except that it has a statement about what happened. Provisional ballots are segregated from regular ballots, so once the rest of the ballots have been processed you can go through the provisionals and process those for voters whose ordinary mail-in ballots have not been received/counted.[3]
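The ordering constraint this implies — count regular mail-in ballots first, then admit only those provisionals whose voter has no counted mail-in ballot — can be sketched like this (an illustrative model with hypothetical names, not real election software):

```python
# Hypothetical sketch: provisional ballots are segregated and processed last.

def count_provisionals(provisionals, mail_in_counted_voters):
    accepted = []
    for voter, ballot in provisionals:
        # Only count the provisional if no mail-in ballot was already
        # received and counted for this voter.
        if voter not in mail_in_counted_voters:
            accepted.append(ballot)
    return accepted

mail_in_counted_voters = {"alice"}           # alice's mailed ballot arrived
provisionals = [("alice", "ballot-A"),       # rejected: would be a double vote
                ("bob", "ballot-B")]         # accepted: no mail-in ballot found

print(count_provisionals(provisionals, mail_in_counted_voters))  # ['ballot-B']
```

Because provisional envelopes keep the voter's identity attached until this step, the double-vote check remains possible even after the regular mail-in ballots have been anonymized.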

Returned Ballot Theft

Another source of attack on mail-in ballots (as well as ballot drop-boxes) is theft of the ballots en route to election headquarters. In-person voting has a number of accounting mechanisms designed to ensure that the number of voters matches the number of cast ballots, which then matches the number of recorded votes, but these don’t work for mail-in ballots because many people who are sent ballots will fail to return them. In many jurisdictions, voters are able to track their ballots and see if they have been processed, and could cast them in person if they are lost. However, as a practical matter, many voters will not do this. The major defense against this kind of attack is good processes around mail delivery and drop-box security, as well as post-hoc investigation of reports of missing ballots.

Secrecy of the Ballot

With proper processes at election headquarters, the ballot secrecy properties of mail-in ballots are comparable to in-person voting, with one major exception: with mail-in ballots it is much easier for a voter to demonstrate to a third party how they voted. All they have to do is give the ballot to that third party and let them fill it out and mail it (perhaps signing the envelope first). This allows for vote buying/coercion type attacks. This isn’t ideal, but it’s a difficult attack to mount at a large scale because the attacker needs to physically engage with each voter.

The cost of security

As noted above, many states have fairly extensive verification mechanisms for mail-in ballots. These mechanisms are not free, either to voters or election officials. In particular, requirements such as notarization increase the cost of voting and thus may deter some voters from voting. Even apparently lightweight requirements such as signature matching have the potential to cause valid ballots to be rejected: some people will forget to sign, people do not sign their name the same way every time, and election officials are not handwriting experts, so we should expect that they will reject some number of valid ballots. Cottrell, Herron and Smith report about 1% of ballots being rejected for some kind of signature issue, with Black and Hispanic voters seemingly having higher rates of rejection than White voters. Because real fraud is rare and errors are common, the vast majority of rejected ballots will actually be legitimate.[4]
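To see why the rejected pile is dominated by legitimate ballots, a quick back-of-the-envelope calculation helps. The numbers below are illustrative assumptions, not measured rates: suppose 1 in 10,000 ballots is fraudulent, the check catches every fraudulent ballot, and it also falsely rejects 1% of legitimate ones.

```python
# Base-rate arithmetic: even a check that catches all fraud mostly
# rejects legitimate ballots when fraud is rare. Numbers are illustrative.
ballots = 1_000_000
fraud_rate = 1 / 10_000        # assumed: 0.01% of ballots are fraudulent
false_reject_rate = 0.01       # assumed: 1% of legitimate ballots rejected

fraudulent = ballots * fraud_rate                             # 100 caught
legit_rejected = (ballots - fraudulent) * false_reject_rate   # 9,999 rejected

total_rejected = fraudulent + legit_rejected
print(legit_rejected / total_rejected)  # ≈ 0.99: about 99% of rejections hit legitimate voters
```

The asymmetry only grows as fraud gets rarer or the false-reject rate rises, which is the tradeoff the cure process is meant to soften.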

There is a more general point here: although mail-in ballots seem insecure (and this has been a point of concern in the voting security community), real studies of mail-in ballots show that they have extremely low fraud rates. This means that policy makers have to weigh potential security issues with mail-in voting against their impact on legitimate voters. The current evidence suggests that mail-in voting modestly increases voting rates (experience from Oregon suggests by about 2-5 percentage points).[5] The implication is that making mail-in voting more difficult — whether by restricting it or by adding hard-to-follow security requirements — is likely to decrease the number of accepted ballots while only having a small impact on voting fraud.

Up Next: Direct Recording Electronic systems and Ballot Marking Devices

OK. Three posts on paper ballots seems like enough for now, so it’s time to turn to more computerized voting methods. The other major form of voting in the United States uses what’s called the “Direct Recording Electronic” (DRE) voting system which just means that you vote directly on a computer which internally keeps track of the votes. DRE machines are very popular but have been the focus of a lot of concern from a security perspective. We’ll be covering them next, along with a similar seeming but much better system called a “Ballot Marking Device” (BMD). BMDs are like DREs but they print out paper ballots that can then be counted either by hand or with optical scanners.

  1. In this version, the ballots can just have numbers and not names, but as we’ll see below, many jurisdictions require names.
  2. People familiar with computer privacy will recognize this technique from technologies such as proxies, VPNs, or mixnets. 
  3. Provisional ballots are also used for a number of other exception cases such as voters who go to the wrong polling place (here again, it’s hard to tell if they tried to vote at multiple polling places) or voters who claim to be registered but can’t be found on the voters list (this often looks the same to precinct-level officials because each precinct usually just has their own list of voters).
  4. This dynamic is quite common when adding new security checks: any check you add will generally have false positives. In environments where most behavior is innocent, that means that most of the behavior you catch will also be innocent. Bruce Schneier has written extensively about this point.
  5. While mail-in voting generally seems to increase turnout by reducing barriers to voting, there are a number of populations that find mail-in ballots difficult. One obvious example is people with disabilities, who may find filling in paper ballots difficult. Less well-known is that Native Americans experience special challenges that make exclusive vote-by-mail difficult. Thanks to Joseph Lorenzo Hall for informing me on this point. 

The post Why getting voting right is hard, Part IV: Absentee Voting and Vote By Mail appeared first on The Mozilla Blog.

The Mozilla Blog: We need more than deplatforming

There is no question that social media played a role in the siege and take-over of the US Capitol on January 6.

Since then there has been significant focus on the deplatforming of President Donald Trump. By all means the question of when to deplatform a head of state is a critical one, among many that must be addressed. When should platforms make these decisions? Is that decision-making power theirs alone?

But as reprehensible as the actions of Donald Trump are, the rampant use of the internet to foment violence and hate, and reinforce white supremacy is about more than any one personality. Donald Trump is certainly not the first politician to exploit the architecture of the internet in this way, and he won’t be the last. We need solutions that don’t start after untold damage has been done.

Changing these dangerous dynamics requires more than just the temporary silencing or permanent removal of bad actors from social media platforms.

Additional precise and specific actions must also be taken:

  • Reveal who is paying for advertisements, how much they are paying and who is being targeted.
  • Commit to meaningful transparency of platform algorithms so we know how and what content is being amplified, to whom, and the associated impact.
  • Turn on by default the tools to amplify factual voices over disinformation.
  • Work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and what we can do to improve things.

These are actions the platforms can and should commit to today. The answer is not to do away with the internet, but to build a better one that can withstand and gird against these types of challenges. This is how we can begin to do that.

Photo by Cameron Smith on Unsplash

The post We need more than deplatforming appeared first on The Mozilla Blog.

hacks.mozilla.org: Improving Cross-Browser Testing, Part 2: New Automation Features in Firefox Nightly

In our previous blog post about the web testing ecosystem, we described the tradeoffs involved in automating the browser via the HTTP-based WebDriver standard versus DevTools protocols such as Chrome DevTools Protocol (CDP). Although there are benefits to WebDriver’s HTTP-based approach, we know there are many developers who find the additional functionality and ergonomics of CDP-based test tools compelling.

It’s clear that WebDriver needs to grow to meet the capabilities of DevTools-based automation. However, that process will take time, and we want more developers to be able to run their automated tests in Firefox today.

To that end, we have shipped an experimental implementation of parts of CDP in Firefox Nightly, specifically targeting the use cases of end-to-end testing using Google’s Puppeteer, and the CDP-based features of Selenium 4.

For users looking to use CDP tooling with stable releases of Firefox, we are currently going through the process to enable the feature on release channels and we hope to make this available as soon as possible.

The remainder of this post will look at the details of how to use Firefox with CDP-based tools.

Puppeteer Automation

Puppeteer is a Node.js library that provides an intuitive async browser-automation API on top of CDP.

Puppeteer itself now offers experimental support for Firefox, based on our CDP implementation. This change was made in collaboration with the Puppeteer maintainers, and allows many existing Puppeteer tests to run in Firefox with only minimal configuration changes.

To use Puppeteer with Firefox, install the puppeteer package and set its product option to “firefox”. As of version 3.0, Puppeteer’s npm install script can automatically fetch the appropriate Firefox Nightly binary for you, making it easier to get up and running.

PUPPETEER_PRODUCT=firefox npm install puppeteer

The following example shows how to launch Firefox in headless mode using Puppeteer:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    product: 'firefox',
  });
  // ... drive the browser here ...
  await browser.close();
})();
That’s all there is to it! Adding that one launch option is all that’s required to run a Puppeteer script against Firefox.

You can find a longer example script in the Puppeteer repository that also demonstrates troubleshooting tips such as printing internal browser logs.

Expanding the script from above, let’s navigate to a page and test an element property. You can see a similar example using WebDriver in this series’ first blog post.

const page = await browser.newPage();
await page.goto('http://localhost:8000');
const element = await page.$('.test');
expect(await element.evaluate(node => node.tagName)).toBe('DIV');

Although this has the same functionality as the WebDriver script in the first post, there’s considerably more going on under the hood. With WebDriver this kind of script maps pretty directly onto the protocol, with one remote call per line. For Puppeteer, browser initialization alone depends on fifteen different CDP methods and three kinds of events.

The call to page.goto checks multiple CDP events to ensure that navigation has succeeded, while the calls to page.$ and element.evaluate are both high-level abstractions on top of remote script evaluation.

This additional complexity presents an implementation challenge; making even a simple script work requires a browser to implement many commands and events, and even apparently small deviations from the behaviour of Blink can break assumptions made in the client.

That fragility is not just a result of CDP offering lower-level control than WebDriver, but a consequence of implementing a proprietary protocol which wasn’t designed with cross-browser support in mind.

The CDP support available in Firefox Nightly today enables core Puppeteer features such as navigation, script evaluation, element interaction and screen capture. We understand that many users will depend on APIs we don’t yet support. For example, we know that network request interception is a compelling feature that isn’t yet supported by the Firefox CDP implementation.

Nevertheless, we are interested in feedback when Puppeteer scripts don’t work as expected in Firefox; see the end of this post for how to get in touch.

Selenium 4

As well as fully CDP-based clients like Puppeteer, WebDriver-based clients are starting to add additional functionality based on CDP. For example, Selenium 4 will use CDP to offer new APIs for logging, network request interception, and responding to DOM mutation. Firefox Nightly already has support for the CDP features needed to support access to console log messages.

This represents a longstanding feature request from test authors who want to assert that their test completes without unanticipated error messages, and to collect any logged messages to help debug in the case of a failure.

For example, given a page that logs a message:

<title>Test page</title>
<script>
console.log('A log message')
</script>

and the following script using the latest trunk Selenium Python bindings:

import trio
from selenium import webdriver
from selenium.webdriver.common.bidi.console import Console

async def get_console_errors():
    driver = webdriver.Firefox()

    async with driver.add_listener(Console.ALL) as messages:
        # Navigate to the page above; the listener yields console
        # messages as they arrive (the trunk API is experimental
        # and may change)
        driver.get('http://localhost:8000')
        message = await messages.get()
        print(message['message'])

    driver.quit()

trio.run(get_console_errors)

The script will output “A log message”.

We are working to enable more Selenium 4 features for Firefox users, and collaborating with the Selenium authors to ensure that they are supported in all the provided language bindings.

Accessing the CDP Connection Directly

For users who want to experiment with the underlying CDP protocol in Firefox without relying on an existing client, the mechanism to enable CDP support is very similar to that for Chrome. To start the CDP server, launch Firefox Nightly with the --remote-debugging-port command-line option. By default, this starts a server on port 9222. The browser process will print a message like the following to stderr:

DevTools listening on ws://localhost:9222/devtools/browser/9fa78d94-9133-4460-a4f2-f8ffa149b354

This provides the WebSocket URL that is used for interacting with CDP. The server also exposes a couple of useful HTTP endpoints. For example, you can get a list of all available WebSocket targets from http://localhost:9222/json/list.
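Under the hood, each CDP message is a JSON object with an `id`, a `method`, and optional `params`; the response echoes the same `id`. A minimal sketch of driving the protocol by hand (the helper name is ours; the commented usage assumes the third-party `websocket-client` package and the WebSocket URL printed by your own browser instance):

```python
import json

def cdp_message(msg_id, method, params=None):
    """Build the JSON payload for a single CDP command."""
    message = {"id": msg_id, "method": method}
    if params is not None:
        message["params"] = params
    return json.dumps(message)

# Hypothetical usage against a live browser (requires websocket-client,
# and the browser WebSocket URL from your own Firefox instance):
#   import websocket
#   ws = websocket.create_connection("ws://localhost:9222/devtools/browser/<id>")
#   ws.send(cdp_message(1, "Browser.getVersion"))
#   print(ws.recv())  # the response JSON carries the same id
```

Clients like Puppeteer are essentially layers of convenience and state tracking on top of this request/response (plus event) exchange.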

Bringing advanced automation to all browsers

Our experimentation with CDP in Firefox is an early step toward developing a new version of the WebDriver protocol called WebDriver BiDi. While we participate in the standardization process, our team is interested in feedback around cross-browser end-to-end testing workflows. We invite developers to try running their Puppeteer, or other CDP-based, tests against Firefox Nightly.

If you encounter unexpected behaviour or if there are features that you are missing, there are several ways to reach out to us:

  • We are looking out for Firefox-specific reports on Puppeteer’s issue tracker.
  • If you’re accessing Firefox’s CDP connection directly, without a client library, the best place to report issues is Mozilla’s Bugzilla.
  • Feel free to ask questions in our Matrix channel, #remote-protocol.

Wherever you send your feedback, we love to receive protocol-level logs.

An automation solution based on a proprietary protocol will always be limited in the range of browsers it can support. The success of the web is built on multi-vendor standards. It’s important that test tooling builds on standards as well so that tests work across all the browsers and devices where the web works.

In the future, we may publish more posts to introduce the work we’re exploring with other vendors on WebDriver-BiDi, a standardization project to specify a bidirectional, automation-focused, protocol for the future.


Thanks to Tantek Çelik, Karl Dubost, Jan Odvarko, Devin Reams, and Maire Reavy for their valuable feedback and suggestions.

The post Improving Cross-Browser Testing, Part 2: New Automation Features in Firefox Nightly appeared first on Mozilla Hacks - the Web developer blog.

Web Application Security – Encrypted Client Hello: the future of ESNI in Firefox


Two years ago, we announced experimental support for the privacy-protecting Encrypted Server Name Indication (ESNI) extension in Firefox Nightly. The Server Name Indication (SNI) TLS extension enables server and certificate selection by transmitting a cleartext copy of the server hostname in the TLS Client Hello message. This represents a privacy leak similar to that of DNS, and just as DNS-over-HTTPS prevents DNS queries from exposing the hostname to on-path observers, ESNI attempts to prevent hostname leaks from the TLS handshake itself.

Since publication of the ESNI draft specification at the IETF, analysis has shown that encrypting only the SNI extension provides incomplete protection. As just one example: during session resumption, the Pre-Shared Key extension could, legally, contain a cleartext copy of exactly the same server name that is encrypted by ESNI. The ESNI approach would require an encrypted variant of every extension with potential privacy implications, and even that exposes the set of extensions advertised. Lastly, real-world use of ESNI has exposed interoperability and deployment challenges that prevented it from being enabled at a wider scale.

Enter Encrypted Client Hello (ECH)

To address the shortcomings of ESNI, recent versions of the specification no longer encrypt only the SNI extension and instead encrypt an entire Client Hello message (thus the name change from “ESNI” to “ECH”). Any extensions with privacy implications can now be relegated to an encrypted “ClientHelloInner”, which is itself advertised as an extension to an unencrypted “ClientHelloOuter”. Should a server support ECH and successfully decrypt, the “Inner” Client Hello is then used as the basis for the TLS connection. This is explained in more detail in Cloudflare’s excellent blog post on ECH.

ECH also changes the key distribution and encryption stories: A TLS server supporting ECH now advertises its public key via an HTTPSSVC DNS record, whereas ESNI used TXT records for this purpose. Key derivation and encryption are made more robust, as ECH employs the Hybrid Public Key Encryption specification rather than defining its own scheme. Importantly, ECH also adds a retry mechanism to increase reliability with respect to server key rotation and DNS caching. Where ESNI may currently fail after receiving stale keys from DNS, ECH can securely recover, as the client receives updated keys directly from the server.

ECH in Firefox 85

In keeping with our mission of protecting your privacy online, Mozilla is actively working with Cloudflare and others on standardizing the Encrypted Client Hello specification at the IETF. Firefox 85 replaces ESNI with ECH draft-08, and another update to draft-09 (which is targeted for wider interoperability testing and deployment) is forthcoming.

Users that have previously enabled ESNI in Firefox may notice that the about:config option for ESNI is no longer present. Though we recommend that users wait for ECH to be enabled by default, some may want to enable this functionality earlier. This can be done in about:config by setting network.dns.echconfig.enabled and network.dns.use_https_rr_as_altsvc to true, which will allow Firefox to use ECH with servers that support it. While ECH is under active development, its availability may be intermittent as it requires both the client and server to support the same version. As always, settings exposed only under about:config are considered experimental and subject to change. For now, Firefox ESR will continue to support the previous ESNI functionality.

In conclusion, ECH is an exciting and robust evolution of ESNI, and support for the protocol is coming to Firefox. We’re working hard to make sure that it is interoperable and deployable at scale, and we’re eager for users to realize the privacy benefits of this feature.

The post Encrypted Client Hello: the future of ESNI in Firefox appeared first on Mozilla Security Blog.

The Mozilla Blog – Why getting voting right is hard, Part III: Optical Scan

This is the third post in my series on voting systems. For background see part I. As described in part II, hand-counted paper ballots have a number of attractive security and privacy properties but scale badly to large elections. Fortunately, we can count paper ballots efficiently using optical scanners (opscan). This will be familiar to anyone who has taken a paper-based standardized test: instead of just checking a box, the voter fills in a region (typically an oval) next to each choice, as shown in the examples below. These ballots can then be machine-read by an optical scanner, which reports the result totals.

[Figures: Marking optical scan ballots]

Optical scan systems come in two basic flavors: “precinct count” and “central count”. In a precinct count system, the optical scanner is located at the precinct (or polling place) and the voters can feed their ballots directly into it. Sometimes the scanner will be mounted on a ballot box which collects the ballots after they are scanned. When the polls close, the scanner produces a total count, typically recorded on a memory card, printed on a paper receipt, or both. These can be sent back to election headquarters, together with the ballots, where they are aggregated.

[Figure: A precinct-count optical scanner (Hart eScan)]

In a central count system, the optical scanner is located at election headquarters. These scanners are typically quite a bit larger and faster because they need to process a large number of ballots quickly. Ballots are collected at the precinct as with hand counting and then sent back to election headquarters for scanning. Some scanners are self-contained units that do all the tabulating, and some just connect to software on a commodity computer which does a lot of the work, but of course this is all invisible to the voter. It’s of course possible to have scanners at both the precinct and election central — this could help detect tampering with the ballots in transit — but I’m not aware of any jurisdiction which does that.

[Figure: A central count optical scanner (ES&S M650)]

Because optical scan ballots are just paper ballots counted via a different method, the voter experience is basically the same, both in good ways (secrecy of the ballot, easy scaling at the polling place) and in bad ways (accessibility). In fact, in case of equipment breakdown or concerns about fraud you can just hand count the ballots without negatively impacting the voter experience (or in fact without voters noticing). The two important ways in which optical scanning differs from hand counting is (1) it’s much faster (2) it’s less verifiable.

Speed and Scalability

The big advantage of optical scanning is that it’s more efficient than hand counting. A hand-counting team can process on the order of 6-15 contests per minute, which is much slower than even the slowest optical scanners. To pick a vendor whose technical specs were easy to find, ES&S sells a central count scanner that can scan up to 300 double-sided ballots per minute. This is quite an improvement over hand counting when we consider that each ballot will likely have several contests. As an example, the first sheet of a recent Santa Clara sample ballot has 3 contests on one side and 4 on the other, so we’re talking about being able to count about 2000 contests a minute on the high end.
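As a quick sanity check of that figure (the ballot layout numbers come from the Santa Clara sample ballot mentioned above):

```python
ballots_per_minute = 300      # high-end ES&S central count scanner
contests_per_ballot = 3 + 4   # 3 contests on one side, 4 on the other
contests_per_minute = ballots_per_minute * contests_per_ballot
print(contests_per_minute)    # 2100, i.e. "about 2000" on the high end
```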

Precinct count scanners typically aren’t particularly fast; they’re comparable to typical consumer-grade scanning hardware and just need to be fast enough that they mostly keep up with the rate at which voters fill in their ballots. Even low-end desktop scanners can scan 10s of pages a minute, so it’s not generally a problem to have one or two scanners handling even a modest sized precinct, given that it typically takes voters more than a minute to fill in their ballot and that you can’t check-in more than a few voters a minute. Additionally, because voters scan their ballots as they vote, you get results as soon as the polls close without having to have extra staff to count the ballots; the poll workers just need to supervise the scanning process (as well as the rest of the tasks they would have to do with hand-counted ballots such as maintain custody of the materials, check-in voters, etc.).

Optical scanning is also a lot cheaper than hand counting. In the Washington recount studied by Pew, the cost of optical scanning was $290,000 as opposed to $900,000 for the hand count. This actually understates the advantage of optical scanning because, as noted above, that was just the cost to hand count a single contest, whereas the scanning process counts multiple contests at once.

Security and Verifiability

Optical scanning introduces a new security threat: the scanner is a computer, and computers can be compromised. If compromised, the computer can produce any answer the attacker wants, which is obviously an undesirable property, but one we risk whenever we put computers in the critical path of the voting process. This isn’t just a theoretical risk: there have been numerous studies of the security of voting machines and in general the results are extremely discouraging: in past studies, if an attacker was able to get physical access to a machine, they were usually able to compromise the software.1 Most of the work here was done in the early 2000s, so it’s possible that things have improved, but the available evidence suggests otherwise. Moreover, there are limits to how good a job it seems possible to do here, which I hope to get to in a future post.

The impact of an attack depends on the machine type. In the case of precinct-count machines, this means that voters might be able to attack the machines in their precinct, and potentially through them the entire jurisdiction2. This is a somewhat difficult attack to mount because you need unsupervised access to the machine for long enough to mount the attack. It’s not uncommon for these devices to have some sort of management port (you need some way to load the ballot definitions for each election, update the software, etc.) though how accessible that is to voters depends on the device and how it’s deployed in practice.

In the case of central count machines, attack might be limited to voting officials, but as noted in Part I, it’s important that a voting system be immune even to this kind of insider attack. Precinct count machines are susceptible to insider attack too: anyone who has access to the warehouse where the machines are stored could potentially tamper with them. In addition, it’s not uncommon for voting machines to be stored overnight at polling places before the election, where you’re mostly relying on whatever lock the church or school or whatever has on its doors, which generally won’t be very good (the machines may also have tamper-evident seals, but those can often be circumvented).

The general consensus in the voting security community is that our goal should be what’s called software independence. Rivest and Wack describe this as follows:

A voting system is software-independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome.

What this means in practice is that if you are going to use optical scan voting then you need some way to verify that the scanner is counting the votes correctly. Fortunately, once you’ve scanned the ballots, you still have them available to you, with the exception of any which have been folded, spindled or mutilated by the scanner. This means you can do as much double checking as you want.

Naively, of course, you could just recount the ballots by hand. This often happens in close races, but obviously doing it all the time would obviate the point of using optical scanners. What’s needed is some way to check the scanner without counting every ballot by hand. What’s emerging as the consensus approach here is what’s called a Risk Limiting Audit. I’ll cover this in more detail later, but the basic idea is that you randomly sample ballots and hand count them. You can then use statistics to estimate the chance that the election was decided incorrectly. You keep counting until you either (1) have high confidence that the election was counted correctly or (2) you have counted all the ballots by hand.3
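The sampling idea can be illustrated with a toy, seeded ballot-polling audit in the style of BRAVO (a sketch, not any jurisdiction’s actual procedure; the 2× multipliers weigh the reported outcome against the null hypothesis of a tied race):

```python
import random

def ballot_polling_audit(ballots, winner, reported_share, risk_limit=0.05, seed=42):
    """Sample ballots in random order; stop once the evidence that the
    reported winner really won reaches 1/risk_limit, otherwise fall
    back to a full hand count."""
    rng = random.Random(seed)
    order = rng.sample(range(len(ballots)), len(ballots))
    ratio = 1.0  # likelihood ratio: reported outcome vs. a tied election
    for count, i in enumerate(order, start=1):
        if ballots[i] == winner:
            ratio *= 2 * reported_share
        else:
            ratio *= 2 * (1 - reported_share)
        if ratio >= 1 / risk_limit:
            return "confirmed", count      # outcome stands at this risk limit
    return "full hand count", len(ballots)

# A 60/40 race is typically confirmed after only a few hundred samples:
election = ["A"] * 6000 + ["B"] * 4000
result, sampled = ballot_polling_audit(election, "A", reported_share=0.6)
```

The closer the race, the longer the ratio hovers near 1 and the more ballots you sample, which is exactly the behaviour described above: close races degrade gracefully into a full hand count.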

In really close races, you basically have to do a full recount by hand. The reason for this isn’t so much that the machines might have been tampered with but that they might have made mistakes. Even the best optical scanners sometimes mis-scan and it’s not reasonable to expect them to do a good job with the kind of ambiguous ballots that you see in the wild. Ideally, of course, the scanner would kick those ballots back for manual processing, but you don’t want to kick back too many and so there’s ambiguity about which ballots are ambiguous and so on. In most elections this stuff doesn’t matter, but in a really close one it does, and so if you’re working with hand-marked ballots there eventually comes a point where you need to fall back to hand counting. The main value of optical scanning is to reduce the need for routine hand-counting when elections aren’t close, which is fortunately most of the time.

Write-Ins, Scanning Errors, Overvotes, and Other Edge Cases

Of course, unlike humans, optical scanners aren’t very smart — and for security reasons, you don’t really want them doing smart stuff — so there are a number of situations that they handle badly.

For instance, it’s common to allow “write-in” votes in which the candidate’s name does not appear on the ballot but instead the voter writes in a new name. Write-in candidates don’t usually win — although Lisa Murkowski famously won as a write-in candidate in 2010 — but you still need to process those ballots. As shown in the example at the top, the natural way to handle this is to have a choice for each contest which has a blank name: the voter fills in the bubble associated with the space and then writes the name in the space.4

It’s also common to have ballots which can’t be read for one reason or another. For instance, the voter might have used the wrong color pen or not completely marked the bubble. Voters also sometimes vote for more than one candidate in a given contest (“overvoting”). The general way to handle these cases is to have the machine reject these ballots and set them aside for further processing by hand.5 If the number of rejected ballots is less than the margin of victory then you know they can’t affect the result, and while you do eventually want to process them for complete results, you don’t need to for purposes of determining the winner. If there are more rejected ballots than the margin of victory you of course need to process them immediately, but as rejected ballots are typically a small fraction of the total this is much more feasible than a full hand count.
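That margin-of-victory reasoning is simple enough to state as code (a sketch; the names are ours):

```python
def must_adjudicate_before_calling(margin_of_victory, rejected_ballots):
    """Rejected ballots can change the outcome only if there are at least
    as many of them as the winner's margin, so only then must they be
    hand-processed before the winner is determined."""
    return rejected_ballots >= margin_of_victory

# 120 rejected ballots cannot overturn a 500-vote margin:
print(must_adjudicate_before_calling(500, 120))   # False
print(must_adjudicate_before_calling(500, 800))   # True
```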

There are of course some edge cases that optical scanners aren’t able to even reject reliably. A good example here is “undervoting”, in which a voter doesn’t vote in certain contests. This could be a sign of a marking error or it could be intentional; it’s actually quite common for voters in the US to just vote the presidential contest and then skip the downballot races. Because this is common, you don’t really want the scanner rejecting all undervoted ballots. Instead you keep a tally of the number of undervotes in a given contest, and if it’s large enough to potentially affect the election you can go back and hand count the whole election.

It’s important to understand that a risk limiting audit ensures that none of these anomalies can affect the election result, so at some level it doesn’t matter how the scanner handles them; it’s just a matter of setting the right tradeoff in terms of efficiency between the automated and manual counting stages. However, if — as is far too common — you are not doing a risk limiting audit, it’s important to be fairly conservative about having the scanner note ambiguous cases rather than arbitrarily deciding them for one candidate or another.

Up Next: Vote By Mail

So far in this series I’ve talked about paper ballots as if they are cast at the polling place, but that doesn’t have to be the case. They can just as easily be sent to voters who return them by mail. Depending on the situation this is referred to as “vote by mail” (VBM) or “absentee ballots”. VBM brings some special challenges which I’ll be covering in my next post.

  1. See, for instance, the reports of the 2007 California Top-to-Bottom Review. 
  2. A number of studies have found “viral” attacks in which you compromised one machine and then used that to attack the election management systems, which were then used to infect all the machines in the jurisdiction. 
  3. You might be wondering if this is really the best we can do. RLAs are the best known method that is totally software-independent, but if you’re willing to rely on your own software that is independent of the voting machine software, then one option would be to arrange to video-record the ballots during counting and then use computer vision techniques to independently do a recount. I collaborated on a system to do this about 10 years back. It worked reasonably well — and would surely work far better with modern computer vision techniques — but never got much interest. 
  4. Actually, the whole idea of having pre-printed ballots is less universal than many Americans think. The Wikipedia article on the so-called Australian Ballot makes fascinating reading. 
  5. One advantage of precinct-level counting is that you can detect this kind of error and give the voter an opportunity to correct it. 

The post Why getting voting right is hard, Part III: Optical Scan appeared first on The Mozilla Blog.

Mozilla L10N – L10n Report: December 2020 Edition


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 85 is currently in beta and will be released on January 26. The deadline to update localization is on January 17 (see this older l10n report to understand why it moved closer to the release date).

As anticipated in the last report, this release cycle is longer than usual (6 weeks instead of 4), to accommodate for the end of year holidays in Europe and the US.

The number of new strings remains pretty low, but expect this to change during the first half of 2021, when we should have new content, thanks to a mix of new features and old interfaces revisited. There will also be changes to improve consistency around the use of Title Case and Sentence case for English. This won’t result in new strings to translate for other locales, but it’s a good reminder that each locale should set and follow its own rules, and they should be documented in style guides.

Since we’re at the end of year, here are a few numbers for Firefox:

  • We currently ship Firefox in 96 languages. You should be proud of that accomplishment, since it makes Firefox the most widely localized browser on the market [1].
  • Nightly ships with 10 additional locales. Some of them are very close to shipping in stable builds, hopefully that will happen in 2021.

[1] Disclaimer: other browsers make it quite difficult to understand which languages are effectively supported (there’s no way to switch language, and they don’t necessarily work in the open). Other vendors seem to also have a low entry barrier when it comes to adding a new language, and listing it as available. On the other hand, at Mozilla we require high priority parts to be completely translated, or very close, and a sustainable community before shipping.

What’s new or coming up in mobile

In many regards, you can surely say about 2020 “What a year…” – and that also applies to mobile at Mozilla.

We shipped products, dropped some… Let’s take a closer look at what’s happened over the year.

In 2020, we shipped the all new Firefox for Android browser (“Fenix”) in 94 languages, which is a great accomplishment. Thank you again to all the localizers who have contributed to this project and its global launch. Your work has ensured that Firefox for Android remains a successful product around the world. We are humbled and grateful for that.

As for the latest release that comes out in December, we will be able to try out a few new features, such as a tab grid view and the ability to delete downloads.

2020 also brought improvements and cool new features to Firefox for iOS – especially since the iOS update to version 14. To only list a few:

But 2020 has also been a year when we have had to drop some mobile projects, such as Scryer, Firefox Lite and Firefox for Fire TV. We thank you all for the hard work on these products.

Firefox Reality should be available until at least 2021, but not much l10n work is to be expected. We are still figuring things out in regards to Lockwise, we will keep you posted once we know more.

We are looking forwards to 2021 to continue shipping great localized mobile projects. Thank you all for your ongoing work and support!

What’s new or coming up in web projects

This year saw the long-awaited migration from the .lang to the .ftl format. The change gives localizers greater flexibility to localize the site in their language more naturally, and localized content is now pushed to production automatically several times an hour, instead of manually once a day. The new file structure ensures consistent usage of brand and product names across the site. The threshold to activate a locale has been lowered, with the hope that it will attract more localizers to participate.

In the past seven months or so, 90+ files have been migrated, added or updated in the new format, and more are still in the works. New pages will be more content-heavy and more informational.

Another major change is, instead of creating a new What’s New Page (or WNP) with every Firefox release, the team has decided to promote the evergreen WNP page with stable content for an extended period of time. If Firefox desktop is offered in your locale, please make sure this page is fully localized.

The team would like to take the opportunity to express their deep gratitude to all of our community localizers, all over the world. Your work is critical to Mozilla’s global impact and essential for making Mozilla’s products available to the widest possible audience. 61% of 2020 visits were in locales served primarily by community localizers. In those locales, non-Firefox visits to our website grew by 10% this year and downloads increased by 13%! Mozilla’s audience is all over the world, and we couldn’t reach it without you, the localizers who bring Mozilla’s products to your communities. Thank you!

Firefox Accounts

This year, a limited payment feature was added to a few select markets. Next year, the payment feature will be expanded to support PayPal and in more European regions. The team is working on the details and we will learn more soon.

Common Voice

Despite a challenging year, the project saw significant growth in dataset collection in the past six months: an additional 2,000 hours added, 6 more languages (Hindi, Lithuanian, Luganda, Thai, Finnish, Hungarian), and over 7 million clips total! Check out which languages have the most hours on Discourse.

Project WebThings

Given the year 2020 has turned out to be, you may not recall that Mozilla’s strategy for Project WebThings this year was to successfully establish it as an independent, community-led open source project.  The important work needed to complete that transition took place in November and December.  Through the efforts of community members the project streamlined its name to just “WebThings”, established a new home at, relocated all the project’s code and assets on GitHub, created a new set of backend services to support and maintain the global network of users’ WebThings Gateways, and provided a simple path for transitioning Gateways to the new community infrastructure through the release of WebThings Gateway 1.0.  Though a lot changed, Pontoon continues to be our platform for localization and the WebThings team has been continually delighted by the contributions there, with teams supporting thirty-four languages.  You’ll still find WebThings discussion on Discourse, including several more detailed announcements about the community’s newfound independence and plans for 2021.

What’s new or coming up in Foundation projects

Wagtail has become the first CMS to be fully integrated & automated with Pontoon — and managing translations doesn’t require writing or manually deploying a single line of code. This will dramatically reduce the amount of time required to make some content localizable and get it published.

Mozilla Foundation worked directly with Torchbox, the agency behind Wagtail, and sponsored the development of the Wagtail Localize plugin, adding localization support to Wagtail. This solution will not only work for us, but for any organization using this free and open source CMS!

Wagtail is currently used on the Foundation website, the Mozilla Festival website and on the Donate websites. A lot of effort went into designing a system to manage content at a granular level for each locale, so that you will only translate content that is relevant for your locale. For instance, you won’t see in Pontoon content that is used for performing A/B tests in English, or custom content that is only relevant to other locales. Another nice feature is that your work gets automatically published to production within a few minutes. You can read more about the changes for the Mozilla donate websites here, and you can expect even more content to be localized via Wagtail Localize in 2021!

What’s new or coming up in SuMo

We’ve had a few releases, including Firefox 83 in mid-November and Firefox 84 just this week. Some of the following articles are completely new, but most are only updated. You can also keep track of SUMO’s new articles on our sprint wiki page.

Here are the recent articles that have been translated:

What’s new or coming up in Pontoon

Team insights

We’ve just landed a new feature on team dashboards – Insights – which shows an overview of the translation and review activity of each team. Stay tuned for more details in a blog post that will be published soon.

Editor refactor

In order to simplify adding new editor implementations to Pontoon, Adrian refactored the frontend editor code using React hooks. All features should work exactly the same as before, but perform better.

Upgraded to Django 3

Thanks to Philipp, Pontoon has been upgraded from Django 2.2 to Django 3.1. The process also brought several related library upgrades and new dependency management using pip-compile.

New test automation

We have moved our test automation from Travis CI to GitHub Actions. Thanks to Axel and Flod for taking care of it! We've also split automation into multiple tasks, which comes with several benefits.

Front-end bugfixes

Several frontend bugs have been fixed by our new contributor Mitch. Perhaps the most interesting one: changing text to uppercase directly in JS files instead of using the CSS text-transform property. The reason? text-transform is not reliable for some locales. Welcome to the team, Mitch!
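The same locale pitfall exists outside the browser. As a quick illustration (a Python sketch, not Pontoon's actual code), default Unicode case mapping ignores the locale, just like CSS `text-transform` effectively does:

```python
# Python's str.upper() applies the Unicode default case mapping with no
# locale awareness -- the same limitation that makes CSS text-transform
# unreliable for some locales.
assert "istanbul".upper() == "ISTANBUL"
# Turkish orthography uppercases "i" to a dotted capital "İ", so Turkish
# readers would expect "İSTANBUL" here. Locale-correct casing needs a
# locale-aware library (e.g. PyICU) rather than the default mapping.
print("istanbul".upper())
```

This is why doing the transformation in application code, where the locale is known, can beat a blanket `text-transform` rule.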

Project config improvements

Thanks to Jotes we’ve landed several improvements and bugfixes for the project config support. Jotes also migrated several unit tests from using the Django test framework to pytest.

Upcoming changes to Machinery

April and Jotes have made good progress on the implementation of the Concordance search. It will be added to the Machinery panel and will allow you to search all your past translations without leaving the translate view. Two related changes have already landed – you can now reset a custom search with the click of a button, and when you copy a custom search result, it gets added to the editor instead of replacing its content.

Friends of the Lion

Image by Elio Qoshi


  • Huge kudos to Alejandro, Manuel, and Sergio, who only joined the Galician community recently but have accomplished a lot! All three are studying for a Master's degree in Translation Technologies and would like to get some hands-on experience in the field, which an open-source organization like Mozilla can help provide. In a short month, they studied all the onboarding documents, familiarized themselves with Pontoon and the localization process, then single-handedly brought the site to the best shape it has seen in recent years: 36k+ words localized. While they finish up their studies, they want to share with us some of the fruits of this project that might benefit Pontoon and future projects. Thank you all so much, we can't wait!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

SUMO BlogSUMO Updates – Looking back on 2020

A lot happened in 2020, even as the world we live in changed dramatically in just a year. Amidst all that, I feel even more grateful that the passion in our community remains despite all the internal changes in the organization, as we take our time to rearrange the pieces and refocus our lenses to welcome 2021.

I want to take this opportunity to reflect back and celebrate what we have accomplished together in 2020:

H1: Transition to Conversocial and get the community strategy project off the ground

We began our journey this year in Berlin, at our last all hands before the world went into the state we are in right now. The discussions there also helped us shape the community strategy project, which we had been working on since the end of 2019. We moved our main communication to Matrix, and the positive feedback we received during the transition period made us confident about the full transition. We also managed to move our Social Support platform from Buffer Reply to Conversocial in March.

We've made a lot of progress on the onboarding project too, as part of the community strategy project. We now have the design and the copy ready for implementation, which still needs to be scheduled.

H2: Introducing Play Store Support officially and getting the base metrics agreed

In Q3, we focused our efforts on helping the mobile team transition from Fennec to Fenix. We used the rest of the year to work on the remaining areas of the community strategy project that we had re-evaluated. One of the most important pieces is the base metrics for the community, which I can't wait to share with you at the beginning of next year. On top of that, I'm also putting together a plan to leverage this page as a guidelines center for contributors moving forward.

Recently, we’ve also managed to do an experiment on tagging on the support forum. This is a small experiment that serves as a stepping stone for the larger tagging strategy project that we will be working on as a team for the next year.

I also want to acknowledge how difficult 2020 was. “Difficult” is probably not even the right word. Despite the combination of uncertainty, turmoil, and frustration that we experienced, I'm grateful that we could remain focused and accomplish all of these things together.

“Thank you” is barely enough to express how grateful we are for all the contributions, discussions, ideas, and feedback shared throughout 2020. Nevertheless, thank you for always believing in and being part of Mozilla's mission.

Let’s keep on rocking the free web through 2021 and beyond!


Blog of DataThis Week in Glean: Glean in 2021

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

A year ago the Glean project was different. We had just released Glean v22.1.0, Fenix (aka Firefox for Android aka Firefox Daylight) was not released yet, and Project FOG was just an idea for the year to come.

2020 changed that, but 2020 changed a lot. What didn’t change was my main assignment: I kept working all throughout the year on the Glean SDK, fixing bugs, expanding its capabilities, enabling more platforms and integrating it into more products. Of course this was only possible because the whole team did that as well.

In September I took over the tech lead role for the SDK from Alessio. One part of this role includes thinking bigger and sketching out the future for the Glean SDK.

Let’s look at this future and what ideas we have for 2021.

(One note: right now these are ideas more than a plan. The list includes neither a timeline nor all the other work that maintenance involves.)

The vision

The Glean SDK is a fully self-servable telemetry SDK, usable across different platforms. It enables product owners and engineers to instrument their products and rely on their data collection, while following Mozilla policies & privacy standards.

The ideas

In the past weeks I started a list of things I want the Glean SDK to do next. This is a short and incomplete list of not-yet-proposed or accepted ideas. For 2021 we will need to fit this in with the larger Glean project, including plans and ideas for the pipeline and tooling. Oh, and I need to talk with my team to actually decide on the plan and allocate who does what when.

Metric types

When we set out to revamp our telemetry system, we built it with the idea of offering higher-level metric types that give more meaning to individual data points, allowing more correct data collection and better aggregation, analysis and visualisation of this data. We're not there yet. We currently support more than a dozen metric types across all platforms equally, with the same user-friendly APIs in all supported languages. We also know that this still does not cover all intended use cases that, for example, Firefox Desktop wants. In 2021 we will probably need to work on a few more types, better ergonomics and especially documentation on when each metric type is appropriate.

Revamped testing APIs

Engineers should instrument their code where appropriate and use the collected data to analyze behavior and performance in the wild. The Glean SDK ensures their data is reliably collected and sent to our pipeline. But we cannot ensure that the way the data is collected is correct or whether the metric even makes sense. That's why we encourage accompanying each metric recording with tests to ensure data is collected under the right circumstances and that it's the right data for the given test case. Only that way will folks be able to verify and analyse the incoming data later and rely on their results.

The available testing APIs in the Glean SDK provide a bare minimum. For each metric type one can check which data it currently holds. That's easy enough to validate for simpler metrics such as strings, booleans and counters, but as soon as you have a more complex one, like any of the distributions or a timespan, it gets more involved. Additionally, we offer a way to check if any errors occurred during recording, which are usually also reported in the data. But here again developers need to know which errors can happen under what circumstances for which metric type.
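As a rough sketch of the kind of testing API in question, here is a toy Python counter with test helpers. This is purely illustrative: the real Glean SDK's metric types and test-method names differ in detail.

```python
class CounterMetric:
    """Toy stand-in for a Glean-style counter metric with a testing API.

    Purely illustrative: the real Glean SDK's metric types and test
    methods differ in naming and detail."""

    def __init__(self):
        self._value = 0
        self._errors = 0

    def add(self, amount=1):
        if amount <= 0:
            # Invalid input is recorded as an error rather than crashing,
            # mirroring how recording errors end up reported in the data.
            self._errors += 1
            return
        self._value += amount

    # -- testing API: only meant for use in tests --
    def test_has_value(self):
        return self._value > 0

    def test_get_value(self):
        return self._value

    def test_get_num_recorded_errors(self):
        return self._errors


pages_visited = CounterMetric()
pages_visited.add(2)
pages_visited.add(-1)  # rejected and counted as a recording error
assert pages_visited.test_get_value() == 2
assert pages_visited.test_get_num_recorded_errors() == 1
```

Even in this toy version you can see the gap: checking the held value is easy, but knowing that a negative `add` produces an error (rather than silently doing nothing) is exactly the kind of per-type knowledge developers currently have to carry in their heads.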

And lastly, we want developers to reach for custom pings when the default pings are not sufficient. Testing these is currently near impossible.

Testing these different usages of the SDK can and should be improved. We already have the first proposals and bugs for this:

(Note: as of writing these documents might be inaccessible to the wider public, but they will be made public once we gather wider feedback.)

For sure we will have more ideas on this in 2021.

UniFFI – generate all the code

The Glean SDK is cross-platform by shipping a core library written in Rust with language bindings on top that connect it with the target platform. This has the advantage that most of the logic is written once and works across all the targets we have. But this also has the downside that each language binding sort of duplicates the user-visible API in their target language again. Currently all metric type implementations need to happen in Rust first, followed by implementations in the language bindings.

Most of the implementation work today is manual, but it usually follows the same approach. The language binding implementations should not hold additional state, but currently some still do.

Implementing new metric types, fixing bugs, and improving the recording APIs of existing ones results in a lot of busy work replicating the same code patterns in all the languages (of which we now have 7: Kotlin, Swift, Python, C#, Rust, JavaScript and C++). If we also come up with new metric types next year, this only gets worse.

A while ago some of my colleagues started the UniFFI project, a multi-language bindings generator for Rust. For a couple of reasons the Glean SDK cannot rely on that yet.

In 2021 I want us to work towards the goal of using UniFFI (or something like it) to reduce the manual work required on each language binding. This should reduce our general workload supporting several languages and help us avoid accidental bugs on implementing (and maintaining) metric types.
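The core idea can be sketched in a few lines of Python: describe an API entry point once, then emit each language's boilerplate mechanically instead of hand-writing it seven times. (Purely illustrative; UniFFI itself consumes an interface definition file and generates real FFI code, and the templates and names below are made up.)

```python
# Toy sketch of the bindings-generator idea: one description, many stubs.
# The function names and templates here are hypothetical.

TEMPLATES = {
    "kotlin": "fun {name}(amount: Int) = glean_{name}(amount)",
    "swift": "func {name}(amount: Int32) {{ glean_{name}(amount) }}",
    "python": "def {name}(amount): _ffi.glean_{name}(amount)",
}

def generate_bindings(name):
    """Render the per-language stub for one API entry point."""
    return {lang: tpl.format(name=name) for lang, tpl in TEMPLATES.items()}

stubs = generate_bindings("counter_add")
print(stubs["python"])
# -> def counter_add(amount): _ffi.glean_counter_add(amount)
```

With a generator in the loop, adding a metric type or changing a recording API becomes a single change to the shared description, rather than seven parallel hand edits.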

This is a rather short list of things that only focuses on the SDK. Let's see what we manage to tackle in 2021.

End of 2020

The This Week in Glean series is going into winter hiatus until the second week of January 2021. Thanks to everyone who contributed to 25 This Week in Glean blog posts in 2020:

Alessio, Travis, Mike, Chutten, Bea, Raphael, Anthony, Will & Daosheng

Open Policy & AdvocacyContinuing to Protect our Users in Kazakhstan

In a troubling rehash of events from July 2019, Mozilla was recently informed that Internet Service Providers (ISPs) in Kazakhstan have begun telling their customers that they must install a government-issued root certificate on their devices to access internet services. When a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a Certificate Authority (CA) that enables the interception and decryption of network communications between Firefox and the website.

As we stated in 2019, we believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

As a result, Mozilla, as well as Apple, Google and Microsoft will block the use of the Kazakhstan root CA certificate within their browsers. Following Mozilla’s established precedent, this means that it will not be trusted by Firefox even if the user has installed it. When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts. The Password Manager built into Firefox can be used to do this quite easily and across devices.

This statement is also available in Kazakh and Russian.

The post Continuing to Protect our Users in Kazakhstan appeared first on Open Policy & Advocacy.

Mozilla Gfx Teammoz://gfx newsletter #54

Hey all, Jim Mathies here, the new Mozilla Graphics Team manager. We haven’t had a Graphics Newsletter since July, so there’s lots to catch up on. TL/DR – We’re shipping our Rust based WebRender backend to a very wide audience as of Firefox 84. Read on for more detail on our progress.

WebRender Current Status

The release audience for WebRender has expanded quite a bit over the last six months. As a result we expect to achieve nearly 80% desktop coverage by the end of the year and have a goal of shipping to 100% of our user base by next summer. 

Operating System Support

MacOS – As of Firefox 84, we are shipping to all versions of MacOS including the latest Big Sur release.

Windows 10 – We are currently shipping to all versions of Windows 10.

Windows 8/8.1 – We are currently shipping to all versions of Windows 8.

Windows 7 – We are currently shipping to a subset of Windows 7 users. Users currently excluded are running versions of the operating system which have not received the first major platform update.  Prior to this update Windows 7 lacked features WebRender relies on for painting to a window managed by a separate process. We are working on adding a fallback mechanism that moves composition into the parent browser process to work around this. We hope to ship support for this fallback mechanism in Firefox 85.

Android – We are currently shipping to devices that leverage the Mali-G chipset, Pixel devices, and a majority of Adreno 5 and 6 devices. Mali-T GPUs are our next big release target. Once we get Mali-T support out the door, we’ll have achieved 70% coverage for our mobile user base.

Linux – We have a little announcement to make here: in Firefox 84, we will ship an accelerated WebRender backend for the first time ever to a subset of Linux users. The target cohort leverages X11, Gnome, and recent Mesa library versions. We plan to expand this rollout to more desktop configurations over time, stay tuned!

Qualified Hardware

A note about qualified hardware – we have the ability to restrict who receives the new pipeline based on a combination of hardware parameters – GPU manufacturer, generation, driver versions, battery power, video refresh rate, dual monitor configurations, and even screen size. We leveraged these filters heavily during our initial rollout to target specific cohorts. Thankfully most of these filters are no longer in use, as our target audience has expanded greatly over the last six months.

As of Firefox 83, we are shipping to a majority of nVidia and AMD GPUs, and to all Intel GPUs newer than Generation 6. We actually shipped to Generation 6 GPUs in 83 but had to back off when some users reported rendering glitches on Reddit. The fix for this issue landed in 85 and has since been uplifted to 84 for rollout to Release, bringing WebRender support to the vast majority of modern Intel chipsets in Firefox 84.

The remaining GPUs we plan to target include older Intel Generation 4.5 and 5, a batch of mobile specific ‘LP’ Intel chipsets, and various older AMD/ATI/nVidia chipsets that represent the long tail of compatible chipsets from these manufacturers.

If you’re curious to see if your device is qualified and running with the new pipeline, visit about:support in a tab and view the Graphics section for this information. 

The Long Tail

WebRender is an accelerated rendering backend. This means we leverage the power of your graphics hardware to speed getting pixels to the screen. Unfortunately there are some hardware configurations which will never be able to support this type of rendering pipeline. That’s a problem in that without 100% WebRender coverage for the Firefox user base, we’ll never be able to remove the old pipelining code these users leverage today. Our solution here involves a new fallback mechanism that performs rendering in software. Since WebRender currently supports an OpenGL based hardware backend, software fallback is essentially a software implementation of certain OpenGL ES3 features tailored for WebRender support. We’ve recently started testing software fallback in Nightly and are seeing better than expected performance. We’re not ready to ship this implementation yet but we’re getting closer. Once ready, software fallback will provide WebRender support to the ‘long tail’ of lower-end hardware, uncommon configurations, and users with specific issues like bad drivers.

Shipping software fallback will allow us to close the loop on 100% WebRender coverage, at which point the Graphics Team can move forward on new and interesting projects we've been itching to get to for a while now.


Independent of the WebRender rollout, we are continuing our work on Firefox's WebGPU implementation, currently available for testing in Nightly builds. The specification is on track to reach MVP status in the near future, after which it will go through a period of feedback and change on the road to the release of the final specification sometime in 2021.

Looking Beyond WebRender

WebRender development and shipping has taken a few years to accomplish. We’re finally at a point where the team is starting to think about what we’ll work on once we’ve shipped WebRender to our entire user population. There’s lots to do! An overall theme is currently emerging in our planning – Visual Quality and Performance! We’re investigating various opportunities to extend the WebRender pipeline deeper into Gecko’s layout engine, HDR features, improved color management, performance improvements for SVG and Canvas, and improvements in power consumption. We’ll post more about these projects in future posts, stay tuned!

Happy New Year from the Mozilla Graphics Team!

Mozilla Add-ons BlogFriend of Add-ons: Andrei Petcu

Please meet our newest Friend of Add-ons, Andrei Petcu! Andrei is a developer and a free software enthusiast. Over the last four years, he has developed several extensions and themes for Firefox, assisted users with troubleshooting browser issues, and helped improve Mozilla products by filing issues and contributing code.

Andrei made a significant contribution to the add-ons community earlier this year by expanding Firefox Color's ability to customize the browser. He hadn't originally planned to make changes to Firefox Color, but he became interested in themer, an open-source project that lets users create custom themes for their development environments. After seeing another user ask if themer could create a custom Firefox theme, Andrei quickly investigated implementation options and set to work.

Once a user creates a Firefox theme using themer, they can install it in one of two ways: they can submit the theme through addons.mozilla.org (AMO) and then install the signed .xpi file, or they can apply it as a custom theme through Firefox Color without requiring a signature.

For the latter, there was a small problem: Firefox Color could only support customizations to the most popular parts of the browser’s themeable areas, like the top bar’s background color, the search bar color, and the colors for active and inactive tabs. If a user wanted to modify unsupported areas, like the sidebar or the background color of a new tab page, they wouldn’t be able to see those modifications if they applied the theme through Firefox Color; they would need to install it via a signed .xpi file.

Andrei reached out with a question: if he submitted a patch to Firefox Color that would expand the number of themeable areas, would it be accepted? Could he go one step further and add another panel to the Firefox Color site so users could explore customizing those areas in real time?

We were enthusiastic about his proposal, and not long after, Andrei began submitting patches to gradually add support. Thanks to his contributions, Firefox Color users can now customize 29 (!) more areas of the browser. You can play with modifying these areas by navigating to the “Advanced Colors” tab of the Firefox Color site (make sure you have the Firefox Color extension installed to see these changes live in your browser!).

A screenshot of the Advanced Colors tab on the Firefox Color site. You can toggle colors for various backgrounds, frames, sidebars, and fields.

If you’re a fan of minimalist themes, you may want to install Firefox Color to try out Andrei’s flat white or flat dark themes. He has also created examples of using advanced colors to subtly modify Firefox’s default light and dark themes.

We hope designers enjoy the flexibility to add more fine-grained customization to their themes for Firefox (even if they use their powers to make Firefox look like Windows 95).

Currently, Andrei is working on a feature to let users import and export passwords in about:logins. Once that wraps up, he plans to contribute code to the new Firefox for Android.

On behalf of the entire Add-ons Team, thank you for all of your wonderful contributions, Andrei!

If you are interested in getting involved with the add-ons community, please take a look at our current contribution opportunities.

To browse themes for Firefox, visit addons.mozilla.org (AMO). You can also learn how to make your own custom themes for Firefox on Firefox Extension Workshop.

The post Friend of Add-ons: Andrei Petcu appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgImproving Cross-Browser Testing, Part 1: Web Application Testing Today

Testing web applications can be a challenge. Unlike most other kinds of software, they run across a multitude of platforms and devices. They have to be robust regardless of form factor or choice of browser.

We know this is a problem developers feel: when the MDN Developer Needs Assessment asked web developers for their top pain points, cross-browser testing was in the top five in both 2019 and 2020.

Analysis of the 2020 results revealed a subgroup, comprising 13% of respondents, for whom difficulties writing and running tests were their overall biggest pain point with the web platform.

At Mozilla, we see that as a call to action. With our commitment to building a better Internet, we want to provide web developers the tools they need to build great web experiences – including great tools for testing.

In this series of posts we will explore the current web-application testing landscape and explain what Firefox is doing today to allow developers to run more kinds of tests in Firefox.

The WebDriver Standard

Most current cross-browser test automation uses WebDriver, a W3C specification for browser automation. The protocol used by WebDriver originated in Selenium, one of the oldest and most popular browser automation tools.

To understand the features and limitations of WebDriver, let’s dive in and look at how it works under the hood.

WebDriver provides an HTTP-based synchronous command/response protocol. In this model, clients such as Selenium — called a local end in WebDriver parlance — communicate with a remote end HTTP server using a fixed set of steps:

  1. The local end sends an HTTP request representing a WebDriver command to the remote end.
  2. The remote end takes implementation-specific steps to carry out the command, following the requirements of the WebDriver specification.
  3. The remote end returns an HTTP response to the local end.

The driver and browser together form the remote end. The local end sends HTTP messages to the remote end. Internally the remote end can communicate using any means it likes.

This remote end HTTP server could be built into the browser itself, but the most common setup is for all the HTTP processing to happen in a browser-specific driver binary. This accepts the WebDriver HTTP requests and converts them into an internal format for the browser to consume.

For example when automating Firefox, geckodriver converts WebDriver messages into Gecko’s custom Marionette protocol, and vice versa. ChromeDriver and SafariDriver work in a similar way, each using an internal protocol specific to their associated browser.

Example: A Simple Test Script

To understand this better, let’s take a simple example: navigating to a page, finding an element, and testing a property on that element. From the point of view of a test author, the code to implement this might look like:

browser.go("http://localhost:8000/index.html")
element = browser.querySelectorAll(".test")[0]
assert element.tag == "div"

Each line of code in this example causes a single HTTP request from the local end to the remote end, representing a single WebDriver command.

The program does not continue until the local end receives the corresponding HTTP response. In the initial browser.go call, for example, the remote end will only send its response once the browser has finished loading the requested page.

On the wire that program generates the following HTTP traffic (some unimportant details omitted for brevity):

<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 1, request</figcaption>
POST /session/25bc4b8a-c96e-4e61-9f2d-19c021a6a6a4/url HTTP/1.1
Content-Length: 43

{"url": "http://localhost:8000/index.html"}

At this point the browser performs the network operations to navigate to the requested URL, http://localhost:8000/index.html. Once that page has finished loading, the remote end sends the following response back to the automation client.

<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 1, response</figcaption>
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 14

{"value":null}

Next comes the request to find the element with class test:

<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 2, request</figcaption>
POST /session/25bc4b8a-c96e-4e61-9f2d-19c021a6a6a4/elements HTTP/1.1
Content-Length: 43

{"using": "css selector", "value": ".test"}
<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 2, response</figcaption>
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 90

{"value":[{"element-6066-11e4-a52e-4f735466cecf":"0d861ba8-6901-46ef-9a78-0921c0d6bb5a"}]}

And finally the request to get the element tag name:

<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 3, request</figcaption>
GET /session/25bc4b8a-c96e-4e61-9f2d-19c021a6a6a4/element/0d861ba8-6901-46ef-9a78-0921c0d6bb5a/name HTTP/1.1
<figcaption style="border-width: 0px; text-align: left; margin-left: 0px;">WebDriver Command 3, response</figcaption>
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 15

{"value":"div"}

Even though these three lines of code involve significant network operations, the control flow is simple to understand and easy to express in a large range of common programming languages. That's very different from the situation inside the browser itself, where an apparently simple operation like loading a page involves a large number of asynchronous steps.

The fact that the remote end handles all that complexity makes it much easier to write automation clients.
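To make the round trip concrete, here is a minimal, runnable Python sketch of a local end talking to a stand-in remote end over HTTP. The session id and the canned replies are made up for illustration; a real remote end would be a driver binary such as geckodriver.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeRemoteEnd(BaseHTTPRequestHandler):
    """A stand-in remote end that answers every command with {"value": null},
    playing the role of a driver binary such as geckodriver."""

    def _reply(self, payload):
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Consume the command body, then pretend the command succeeded.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self._reply({"value": None})

    def log_message(self, *args):  # keep output quiet
        pass

class LocalEnd:
    """A toy local end: one blocking HTTP round trip per WebDriver command."""

    def __init__(self, base_url, session_id):
        self._base = f"{base_url}/session/{session_id}"

    def _command(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(
            self._base + path, data=data, method=method,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["value"]

    def go(self, url):
        # Blocks until the remote end says the navigation is done.
        return self._command("POST", "/url", {"url": url})

server = HTTPServer(("127.0.0.1", 0), FakeRemoteEnd)
threading.Thread(target=server.serve_forever, daemon=True).start()

client = LocalEnd(f"http://127.0.0.1:{server.server_port}", "25bc4b8a")
result = client.go("http://localhost:8000/index.html")
server.shutdown()
```

The client code stays entirely synchronous: all the asynchronous machinery lives behind the remote end's single HTTP response.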


In the simple model above, the local end talks directly to the driver binary which is in control of the browser. But in real test deployment scenarios, the situation may be more complex; arbitrary HTTP middleware can be deployed between the local end and the driver.

One common application of this is to provide provisioning capabilities. Using an intermediary such as Selenium Grid, a single WebDriver HTTP endpoint can front a large number of OS and browser combinations, proxying the commands for each test to the requested machine.

The well-understood semantics of HTTP, combined with the wealth of existing tooling, make this kind of setup relatively easy to build and deploy at scale, even over untrusted, possibly high latency, networks like the internet.

This is important to services such as SauceLabs and BrowserStack which run automation on remote servers.

Limitations of HTTP-Based WebDriver

The synchronous command/response model of HTTP imposes some limitations on WebDriver. Since the browser can only respond to commands, it’s hard to model things which may happen in the browser outside the context of a specific request.

A clear example of this is alerts. Alerts can appear at any time, so every WebDriver command has to specifically check for an alert being present before running.

Similar problems occur with logging; the ideal API would send log events as soon as they are generated, but with HTTP-based WebDriver this isn’t possible. Instead, a logging API requires buffering on the browser side, and the client must accept that it may not receive all log messages.
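A toy model of that buffering problem (the class and method names here are hypothetical, not a real WebDriver API):

```python
from collections import deque

class BrowserLogBuffer:
    """Toy model of browser-side log buffering under HTTP-based WebDriver:
    the browser can only hand logs over when the client polls, so it keeps
    a bounded buffer between polls and silently drops the oldest entries
    when it overflows."""

    def __init__(self, capacity):
        self._buffer = deque(maxlen=capacity)

    def emit(self, message):
        # Browser side: a console message is generated.
        self._buffer.append(message)

    def get_logs(self):
        # Client side: the polling "get logs" command drains the buffer.
        logs = list(self._buffer)
        self._buffer.clear()
        return logs

buf = BrowserLogBuffer(capacity=3)
for i in range(5):  # five messages arrive between two polls...
    buf.emit(f"console message {i}")
logs = buf.get_logs()
print(logs)  # -> ['console message 2', 'console message 3', 'console message 4']
```

However large you make the buffer, a client that polls too slowly can always miss messages, which is exactly the weakness described above.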

Concerns about standardizing a poor, unreliable API mean that logging features have not yet made it into the W3C specification for WebDriver, despite being a common user request.

One of the reasons WebDriver adopted the HTTP model despite these limitations was the simplicity of the programming model. With a fully blocking API, one could easily write WebDriver clients using only language features that were mainstream in the early 2000s.

Since then, many programming languages have gained first-class support for handling events and asynchronous control flow. This means that some of the underlying assumptions that went into the original WebDriver protocol — like asynchronous, event-driven code being too hard to write — are no longer true.

DevTools Protocols

As well as automation via WebDriver, modern browsers also provide remote access for the use of the browser’s DevTools. This is essential for cases where it’s difficult to debug an issue on the same machine where the page itself is running, like an issue that only occurs on mobile.

Different browsers provide different DevTools features, which often require explicit support in the engine and expose implementation details that are not visible to web content. Therefore it’s unsurprising that each browser engine has a unique DevTools protocol, according to their particular requirements.

In DevTools, there’s a core requirement that UI must respond to events emitted by the browser engine. Examples include logging console messages and network requests as they come in so that a user can follow progress.

This means that DevTools protocols don’t use the command/response paradigm of HTTP. Instead, they use a bidirectional protocol in which messages may originate from either the client or the browser. This allows the DevTools to update in real time, responding to changes in the browser as they happen.

DevTools clients have a two-way communication with the browser. The client sends commands to the browser and the browser sends both command responses or events to the client.
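That message flow can be sketched with a toy Python client. The message shape loosely mimics Chrome's DevTools protocol (commands carry an `id`, events do not), but the code is illustrative, not a real CDP client:

```python
import asyncio
import itertools

class BidiClient:
    """Toy model of a DevTools-style bidirectional connection: commands
    carry an id so responses can be matched to them, while id-less event
    messages may arrive at any time and are handed to a callback."""

    def __init__(self, on_event):
        self._ids = itertools.count(1)
        self._pending = {}
        self._on_event = on_event

    def send_command(self, method):
        msg_id = next(self._ids)
        future = asyncio.get_running_loop().create_future()
        self._pending[msg_id] = future
        # A real client would now write {"id": msg_id, "method": method}
        # to the browser over a socket.
        return msg_id, future

    def receive(self, message):
        if "id" in message:
            # A response to an earlier command: resolve its future.
            self._pending.pop(message["id"]).set_result(message["result"])
        else:
            # An unsolicited event from the browser.
            self._on_event(message)

async def main():
    events = []
    client = BidiClient(on_event=events.append)
    cmd_id, response = client.send_command("Page.navigate")
    # The browser is free to interleave events with the command response:
    client.receive({"method": "Log.entryAdded", "params": {"text": "hello"}})
    client.receive({"id": cmd_id, "result": {"frameId": "abc"}})
    return events, await response

events, result = asyncio.run(main())
```

Note how the log event is delivered the moment it arrives, without the command/response pairing that HTTP imposes; that is precisely what makes real-time DevTools UIs possible.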

Remote automation isn’t a core use case of DevTools. Some operations that are common in one case are rare in the other. For example, client-initiated navigation is present in almost all automated tests, but is rare in DevTools.

Nevertheless, the low-level control needed when debugging means it's possible to write many automation features on top of the DevTools protocol feature set. Indeed, in some browsers such as Chrome, the browser-internal message format used to bridge the gap between the WebDriver binary and the browser itself is in fact the DevTools protocol.

This has inevitably led to the question of whether it’s possible to build automation on top of the DevTools protocol directly. With languages offering better support for asynchronous control flow, and modern web applications demanding more low-level control for testing, libraries such as Google’s Puppeteer have taken DevTools protocols and constructed automation-specific client libraries on top.

These libraries support advanced features such as network request interception which are hard to build on top of HTTP-based WebDriver. The typically promise-based APIs also feel more like modern front-end programming, which has made these tools popular with web developers.
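As a sketch of what that looks like in practice, the Puppeteer calls inside `main()` below use the library's real request-interception API; the `shouldBlock` predicate is a hypothetical helper added for illustration.

```javascript
// Hypothetical predicate deciding which requests an interception handler
// should abort.
function shouldBlock(resourceType) {
  return resourceType === "image" || resourceType === "font";
}

async function main() {
  const puppeteer = require("puppeteer"); // npm install puppeteer

  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Network request interception: hard to express over HTTP-based WebDriver,
  // but straightforward on top of a DevTools-based client.
  await page.setRequestInterception(true);
  page.on("request", (request) =>
    shouldBlock(request.resourceType()) ? request.abort() : request.continue()
  );

  await page.goto("https://example.com");
  await browser.close();
}

// main(); — uncomment after installing Puppeteer
```

Note the promise-based, event-driven style: the `request` handler runs for every request the page makes, something a command/response protocol cannot express.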

Even mainly WebDriver-based tools are adding additional features which can't be realised through WebDriver alone. For example, some of the new features in Selenium 4, such as access to console logs and better support for HTTP Authentication, require bidirectional communication, and will initially only be supported in browsers which can speak Chrome's DevTools protocol.

DevTools Difficulties

Although using DevTools for automation is appealing in terms of the feature set, it’s also fraught with problems.

DevTools protocols are browser-specific and can expose a lot of internal state that’s not part of the Web Platform. This means that libraries using DevTools features for automation are typically tied to a specific rendering engine.

They are also beholden to changes in those engines; the tight coupling to the engine internals means DevTools protocols usually offer very limited guarantees of stability.

For the DevTools themselves this isn’t a big problem; the same team usually owns the front-end and the back-end so any refactor just has to update both the client and server at the same time, and cross-version compatibility is not a serious concern. But for automation, it imposes a significant burden on both the client library developer and the test authors.

With WebDriver a single client can work with any supported browser release. With DevTools-based automation, a new client may be required for each browser version. This is the case for Puppeteer, for example, where each Puppeteer release is tied to a particular version of Chromium.

The fact that DevTools protocols are browser-specific makes it very challenging to use them as the foundation for cross-browser tooling. Some automation clients, like Cypress and Microsoft’s Playwright, have made heroic efforts here, eschewing WebDriver but still supporting multiple browsers.

Using a combination of existing DevTools protocols and custom protocols implemented through patches to the underlying browser code or via WebExtensions, they provide features not possible in WebDriver whilst supporting several browser engines.

Requiring such a large amount of code to be maintained by the automation library, and putting the library on the treadmill of browser engine updates, makes maintenance difficult and gives the library authors less time to focus on their core automation features.

Summary and Next Steps

As we have seen, the web application testing ecosystem is becoming fragmented. Most cross-browser testing uses WebDriver, a W3C specification that all major browser engines support.

However, limitations in WebDriver’s HTTP-based protocol mean that automation libraries are increasingly choosing to use browser-specific DevTools protocols to implement advanced features, foregoing cross-browser support when they do.

Test authors shouldn’t have to choose between access to functionality, and browser-specific tooling. And client authors shouldn’t be forced to keep up with the often-breakneck pace of browser engine development.

In our next post, we’ll describe some work Mozilla has done to bring previously Chromium-only test tooling to Firefox.


Thanks to Tantek Çelik, Karl Dubost, Jan Odvarko, Devin Reams, Maire Reavy, Henrik Skupin, and Mike Taylor for their valuable feedback and suggestions.

The post Improving Cross-Browser Testing, Part 1: Web Application Testing Today appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.org: 2020 MDN Web Developer Needs Assessment now available

The 2020 MDN Web Developer Needs Assessment (DNA) report is now available! This post takes you through what we’ve accomplished in 2020 based on the findings in the inaugural report, key takeaways of the 2020 survey, and what our next steps are as a result.

What We’ve Accomplished

In December 2019, Mozilla released the first Web Developer Needs Assessment survey report. This was a very detailed study of web developers globally and their main pain points with the web platform, designed with input from nearly 30 stakeholders representing product advisory board member organizations (and others) including browser vendors, the W3C, and industry. You can find a full list in the report itself (PDF, 1.4MB download).

We learned that while more than 3/4 of respondents are very satisfied or satisfied with the Web platform, their 4 biggest frustrations included having to support specific browsers (e.g., IE11), dealing with outdated or inaccurate documentation for frameworks and libraries, avoiding or removing a feature that doesn’t work across browsers, and testing across browsers.

Mozilla and other web industry orgs took action based on these results, for example:

  1. MDN prioritized documentation projects that were most needed by the industry.
  2. Mozilla’s engineering team incorporated the findings into future browser engineering team planning and prioritisation work.
  3. A cross-industry effort improved cross-browser support for Flexbox (see Closing the gap (in flexbox) for more details).
  4. Google used the results to help understand and prioritize the key areas of developer frustration, and used the developer satisfaction scores as a success metric going forward.
  5. The results provided valuable input to several standardization and pre-standardization discussions at W3C’s annual TPAC meeting.
  6. Microsoft used the Web DNA as one of their primary research tools when planning investments in the web platform and the surrounding ecosystem of content and tools; for example, it directly impacted how they now think about areas like cross-browser testing, legacy browser compatibility, best practices hinting, and more.

The 2020 Survey Results

The inaugural results were so useful that in 2020 we decided to run it again, with the same level of collaboration between browser vendors and other stakeholders.

This year we expanded the survey to include some new questions about accessibility tools and web testing, which were requested by some of the survey stakeholders as key areas of interest that should be explored more this time round. We also hired an experienced data scientist to conduct analysis and employ data science best practices.

We ran the survey from October 12 through November 2, 2020 and secured a similarly wide distribution of respondents.

Browser compatibility remains the top pain point, and it is also interesting to note that overall satisfaction with the platform hasn’t changed much, with 77.7% being very satisfied or satisfied with the Web in 2020.

New for this year are the results of our segmentation analysis of the needs, which yielded seven distinct segments. Each one has wildly different needs that surface as the most frustrating when compared to the overall mean scores:

  1. Documentation Disciples — Their top frustrations are outdated documentation for frameworks and libraries and outdated documentation for HTML, CSS, and JavaScript, as well as supporting specific browsers.
  2. Browser Beaters — Their top frustrations are clustered around issues with browser compatibility, design, and layout. Like Documentation Disciples, they also find having to support specific browsers more frustrating than the overall mean.
  3. Progressive Programmers — Their top frustrations are clustered around lack of APIs, lack of support for Progressive Web Apps (PWAs), and using web technologies. For them, browser related needs were typically less frustrating than the overall mean.
  4. Testing Technicians — Needs statements relating to testing, whether end-to-end, front-end, or testing across browsers, caused the most frustration for this segment. Like Progressive Programmers, this segment finds browser compatibility needs less frustrating than the overall mean, with the exception of testing across browsers.
  5. Keeping Currents — The need statements that this segment found most frustrating were keeping up with a large number of new and existing tools and frameworks and keeping up with changes to the web platform. Sticking with the themes from 2019, this segment is concerned with the Pace of Change of the web platform.
  6. Performance Pushers — Needs statements relating to performance and bugs are the top frustrations for this segment, with pinpointing performance issues and implementing performance optimizations at the top. Needs related to testing were rated as less frustrating than the overall mean, though discovering bugs not caught during testing rated higher.
  7. Regulatory Wranglers — This is the more eclectic segment, with a bigger assortment of needs rating higher than the overall mean. However, compliance with laws and regulations for managing user data is the most frustrating need. Closely following that are needs relating to security measures with tracking protection, data storage, and authentication causing frustrations.

What’s Next

We are aiming to follow up on key findings with further research in the next few months. This will involve picking some key areas to focus on, and then performing user interviews and further analysis to allow us to drill down into key areas of frustration to see what the way forward is to mitigating them.

Potential areas to research include qualitative studies about:

  • Testing
  • Documentation
  • The pace of change on the web platform
  • Frustrations around design and layout issues across browsers

Get the report!

The full report is available in HTML and PDF versions, and more besides. The 2019 report and our Browser Compatibility follow-up report are still available if you want to compare and contrast.

The post 2020 MDN Web Developer Needs Assessment now available appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons Blog: Extensions in Firefox 85

Before we get into the updates coming to Firefox 85, I want to highlight two changes that we uplifted to Firefox 84, now on release:

Now, back to our regular programming. Here’s what’s coming in Firefox 85, which is scheduled to be released on January 26, 2021:

And finally, we want to remind you about upcoming site isolation changes with Project Fission. As we previously mentioned, the drawWindow method is being deprecated as part of this work. If you use this API, we recommend that you switch to the captureTab method.

About 15% of users on Nightly currently run with Fission. If you see any bug reports that you can’t replicate, remember to test with Fission enabled. Instructions for enabling Fission can be found on the wiki.


Big thanks to Liz Krane, Ankush Dua, and Michael Goossens for their contributions to this release!

The post Extensions in Firefox 85 appeared first on Mozilla Add-ons Blog.

about:community: The new faces in Firefox 84

With the release of Firefox 84, we are pleased to welcome the developers who've contributed their first code change to Firefox in this release, 10 of whom were brand-new contributors. Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Open Policy & Advocacy: Mozilla reacts to publication of draft Digital Services Act and Digital Markets Act

The European Commission has just published its landmark Digital Services Act (DSA) and Digital Markets Act (DMA). These new draft laws have the potential to transform regulation in the tech sector and we’re happy to see the Commission take on board many of our earlier recommendations for the laws.

Reacting to the DSA and DMA publications, Raegan MacDonald, Mozilla’s Head of Public Policy, said:

Today the European Commission published the long-awaited Digital Services Act (DSA) and Digital Markets Act (DMA). We’ve been involved in the development of these laws for a number of years now, and are encouraged that many of our recommendations for how the Commission could seize a one-in-a-generation opportunity have been taken on board. While we’re still processing the details of the two draft laws, here we give our first reactions to the two groundbreaking proposals.

The DSA’s transparency requirements are a major step forward, particularly its call for disclosure of all advertisements. We’ve consistently said that to better address illegal and harmful content online we need meaningful transparency into how these problems are spreading through the online advertising ecosystem. Importantly, the Commission’s proposal follows our recommendation, and that of a number of other advocates, that these disclosure obligations ought to apply to all advertisements. We look forward to defining the precise operation of these disclosure mechanisms (we already have a lot of thoughts on that). The Commission’s proposal also seeks to bring more transparency to automated content curation (e.g. recommender systems like Facebook News Feed). This is another issue we’ve long been active on, and our ongoing YouTube Regrets campaign is a case-in-point of the need for more transparency in this space.

We’re likewise encouraged to see the DSA take a progressive and thoughtful approach to content responsibility. The focus on procedural accountability – working to make firms responsible for assessing and mitigating the risks that misuse of their services may pose – aligns with our vision of a next generation EU regulatory standard that addresses the factors holding back the internet from what it could be. Again, there will be much work required to fill in the legislative detail on this procedural accountability vision, and we’re motivated to build on our recent work (see here and here) when the legislative proposal moves to the mark-up stage.

Last but not least, the Digital Markets Act (DMA). We’re still looking closely at what it means in terms of competition and innovation. However, at this early stage it appears the Commission has laid the groundwork for an ambitious new standard that could enhance consumer choice and the ability of smaller companies to thrive. A vibrant and open internet depends on interoperability, open standards, and opportunities for a diversity of market participants and we look forward to engaging with the next steps.



The post Mozilla reacts to publication of draft Digital Services Act and Digital Markets Act appeared first on Open Policy & Advocacy.

hacks.mozilla.org: And now for … Firefox 84

As December ushers in the final curtain for this rather eventful year, there is time left for one more Firefox version to be given its wings. Firefox 84 brings some interesting new features, including tab order inspection, complex selector support in :not(), the PerformancePaintTiming API, and more!

This blog post provides merely a set of highlights; for all the details, check out the following:

DevTools gets tab order inspection

The Firefox Developer Tools have gotten a rather nice addition to the Accessibility Inspector this time around — a “Show Tabbing Order” checkbox. When checked, this toggles a visual overlay showing the tabbing order of tabbable items on the current page. This provides a high-level overview of how the page will be navigated using the tab key, which may highlight problems more effectively than simply tabbing through the elements.

A web page with multiple tabbable items, showing the tab order for those items visually

Web platform additions

Firefox 84 brings some new Gecko platform additions, the highlights of which are listed below.

Complex selector support in :not()

The :not() pseudo-class is rather useful, allowing you to apply styles to elements that don’t match one or more selectors. For example, the following applies a blue background to all elements that aren’t paragraphs:

:not(p) {
  background-color: blue;
}

However, it was of limited use until recently as it didn’t allow any kind of complex selectors to be negated. Firefox 84 adds support for this, so now you can do things like this:

:not(option:checked) {
  color: #999;
}

This would set a different text color on <select> options that are not currently selected.


PerformancePaintTiming

The PerformancePaintTiming interface of the Paint Timing API provides timing information about “paint” (also called “render”) operations during web page construction, which is incredibly useful for developers wishing to develop their own performance tooling.

For example:

function showPaintTimings() {
  if (window.performance) {
    let performance = window.performance;
    let performanceEntries = performance.getEntriesByType('paint');
    performanceEntries.forEach( (performanceEntry, i, entries) => {
      console.log("The time to " + performanceEntry.name + " was " + performanceEntry.startTime + " milliseconds.");
    });
  } else {
    console.log('Performance timing isn\'t supported.');
  }
}

Would output something like this in supporting browsers:

The time to first-paint was 2785.915 milliseconds.
The time to first-contentful-paint was 2787.460 milliseconds.

AppCache removal

AppCache was an attempt to create a solution for caching web app assets offline so the site could continue to be used without network connectivity. It seemed to be a good idea because it was really simple to use and could solve this very common problem easily. However, it made many assumptions about what you were trying to do and then broke horribly when your app didn’t follow those assumptions exactly.

Browser vendors have been planning its removal for quite some time, and as of Firefox 84, we have finally gotten rid of it for good. For creating offline app solutions, you should use the Service Worker API instead.


Starting with Firefox 84, users will be able to manage optional permissions for installed add-ons through the Add-ons Manager.

Web extensions permissions dialog showing that you can turn optional permissions on and off via the UI

We recommend that extensions using optional permissions listen for browser.permissions.onAdded and browser.permissions.onRemoved API events. This ensures the extension is aware of the user granting or revoking optional permissions.
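A minimal sketch of that pattern: keep a local record of which optional permissions are currently granted. The `granted` set and handler names here are hypothetical bookkeeping; browser.permissions.onAdded and browser.permissions.onRemoved are the real WebExtension events.

```javascript
// Hypothetical local state tracking which optional permissions are granted.
const granted = new Set();

function handlePermissionsAdded(perms) {
  // perms is a permissions.Permissions object: { permissions: [...], origins: [...] }
  for (const p of perms.permissions || []) granted.add(p);
}

function handlePermissionsRemoved(perms) {
  for (const p of perms.permissions || []) granted.delete(p);
}

// Wiring, inside an extension's background script:
// browser.permissions.onAdded.addListener(handlePermissionsAdded);
// browser.permissions.onRemoved.addListener(handlePermissionsRemoved);
```

With this in place, the extension can gate features on `granted.has(...)` rather than assuming a permission it requested at install time is still available.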

Additionally, extension developers can now zoom extension panels, popups, and sidebars using Ctrl + scroll wheel (Cmd + scroll wheel on macOS).

We’ve also fixed an issue where search engine changes weren’t being reset under certain circumstances when an add-on was uninstalled.

WebRender comes to Linux and Android

In our previous Firefox release we added support for our WebRender rendering architecture to a number of new Windows and macOS versions. This time around we are pleased to extend it to a subset of Linux and Android devices. In particular, we’ve enabled WebRender on:

  • Gnome-, X11-, and GLX-based Linux devices.
  • Android Mali-G GPU series phones (which represent approximately 27% of the Fenix release population).

We’re getting steadily closer to our dream of a 60fps web for everyone.

Localhost improvements

Last but not least, we’d like to draw your attention to the fact that we’ve made some significant improvements to the way Firefox handles localhost URLs in version 84. Firefox now ensures that localhost URLs — such as http://localhost/ and http://dev.localhost/ — refer to the local host’s loopback interface (e.g. 127.0.0.1).

As a result, resources loaded from localhost are now assumed to have been delivered securely (see Secure contexts), and also will not be treated as mixed content. This has a number of implications for simplifying local testing of different web features, especially for example those requiring secure contexts (like service workers).
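The rule described above can be sketched as a simple host check, following the “potentially trustworthy origin” definition in the Secure Contexts spec. The `isLoopbackHost` helper is hypothetical, not Firefox’s actual implementation:

```javascript
// Sketch of the loopback-host rule: these hosts are treated as secure
// even when loaded over http://.
function isLoopbackHost(hostname) {
  return (
    hostname === "localhost" ||
    hostname.endsWith(".localhost") ||
    hostname === "127.0.0.1" ||
    hostname === "[::1]"
  );
}

console.log(isLoopbackHost("dev.localhost")); // → true
console.log(isLoopbackHost("example.com"));   // → false
```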

The post And now for … Firefox 84 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Our Year in Review: How we’ve kept Firefox working for you in 2020

This year began like any other year, with our best intentions and resolutions to carry out. Then by March, the world changed and everyone’s lives — personally and professionally — turned upside down. Despite that, we kept to our schedule to release a new Firefox every month and we were determined to keep Firefox working for you during challenging times.

We shifted our focus to work on features aimed at helping people adjust to the new way of life, and we made Firefox faster so that you could get more things done. It’s all part of fulfilling our promise to build a better internet for people. So, as we eagerly look to the end of 2020, we look back at this unprecedented year and present you with our list of top features that made 2020 a little easier.

Keeping Calm and Carrying on

How do you cope with this new way of life spent online? Here were the Firefox features we added this year, aimed at bringing some zen in your life.

  • Picture-in-Picture: An employee favorite, we rolled out Picture-in-Picture to Mac and Linux, making it available on all platforms, where previously it was only available on Windows. We continued to improve Picture-in-Picture throughout the year — adding features like keyboard controls for fast forward and rewind — so that you could multitask like never before. We, too, were seeking calming videos; eyeing election results; and entertaining the little ones while trying to juggle home and work demands.
  • No more annoying notifications: We all started browsing more as the web became our window into the outside world, so we replaced annoying notification request pop-ups to stop interrupting your browsing, and added a speech bubble in the address bar when you interacted with the site.
  • Pocket article recommendations: We brought our delightful Pocket article recommendations to Firefox users beyond the US, to Austria, Belgium, Germany, India, Ireland, Switzerland, and the United Kingdom. For anyone wanting to take a pause on doom scrolling, simply open up a new tab in Firefox and check out the positivity in the Pocket article recommendations.
  • Ease eye strain with larger screen view: We all have been staring at the screen for longer than we ever thought we should. So, we’ve improved the global level zoom setting so you can set it and forget it. Then, every website can appear larger, should you wish, to ease eye strain. We also made improvements to our high contrast mode which made text more readable for users with low vision.


Get Firefox


Getting you faster to the places you want to visit

We also looked under the hood of Firefox to improve the speed and search experiences so you could get things done no matter what 2020 handed you.

  • Speed: We made Firefox faster than ever with improved performance on both page loads and start up time. For those who want the technical details:
      • Websites that use flexbox-based layouts load 20% faster than before;
      • Restoring a session is 17% quicker, meaning you can more quickly pick up where you left off;
      • For Windows users, opening new windows got quicker by 10%;
      • Our JavaScript engine got a revamp improving page load performance by up to 15%, page responsiveness by up to 12%, and reduced memory usage by up to 8%, all the while making it more secure.
  • Search made faster: We were searching constantly this year — what is coronavirus; do masks work; and what is the electoral college? The team spent countless hours improving the search experience in Firefox so that you could search smarter, faster — you could type less and find more with the revamped address bar, where our search suggestions got a redesign. An updated shortcut suggests search engines, tabs, and bookmarks, getting you where you want to go right from the address bar.
  • Additional under-the-hood improvements: We made noticeable improvements to Firefox’s printing experience, including fillable PDF forms. We also improved your shopping experience with updates to our password management and credit card autofill.

Our promise to build a better internet

This has been an unprecedented year for the world, and as you became more connected online, we stayed focused on pushing for more privacy. It’s just one less thing for you to worry about.

  • HTTPS-Only mode: If you visit a website that asks for your email address or payment info, look for that lock in the address bar, which indicates your connection to it is secure. A site that doesn’t have the lock signals it’s insecure. It could be as simple as an expired Secure Socket Layer (SSL) certificate. No matter the reason, Firefox’s new HTTPS-Only mode will attempt to establish fully secure connections to every website you visit and will also ask for your permission before connecting to a website if it doesn’t support secure connections.
  • Added privacy protections: We kicked off the year by expanding our Enhanced Tracking Protection, preventing known fingerprinters from profiling our users based on their hardware, and introduced protection against redirect tracking — always on while you browse.
  • Facebook Container updates: Given the circumstances of 2020, it makes sense that people turned to Facebook to stay connected to friends and family when we couldn’t visit in person. Facebook Container — which helps prevent Facebook from tracking you around the web — added improvements that allowed you to create exceptions to how and when it blocks Facebook logins, likes, and comments, giving you more control over your relationship with Facebook.
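The first step HTTPS-Only mode performs can be sketched as a simple URL rewrite; this toy `upgradeToHttps` helper is illustrative only (the real feature also handles the fallback prompt when the secure connection fails):

```javascript
// Toy sketch of HTTPS-Only mode's upgrade step: rewrite http:// to https://
// before attempting the connection.
function upgradeToHttps(url) {
  const u = new URL(url);
  if (u.protocol === "http:") u.protocol = "https:";
  return u.href;
}

console.log(upgradeToHttps("http://example.com/checkout")); // → https://example.com/checkout
```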

Even if you didn’t have Firefox to help with some of life’s challenges online over the past year, don’t start 2021 without it. Download the latest version of Firefox and try these privacy-protecting, easy-to-use features for yourself.

The post Our Year in Review: How we’ve kept Firefox working for you in 2020 appeared first on The Mozilla Blog.

The Mozilla Blog: Mozilla’s Vision for Trustworthy AI

Mozilla is publishing its white paper, “Creating Trustworthy AI.”

A little over two years ago, Mozilla started an ambitious project: deciding where we should focus our efforts to grow the movement of people committed to building a healthier digital world. We landed on the idea of trustworthy AI.

When Mozilla started in 1998, the growth of the web was defining where computing was going. So Mozilla focused on web standards and building a browser. Today, computing — and the digital society that we all live in — is defined by vast troves of data, sophisticated algorithms and omnipresent sensors and devices. This is the era of AI. Asking questions today such as ‘Does the way this technology works promote human agency?’ or ‘Am I in control of what happens with my data?’ is like asking ‘How do we keep the web open and free?’ 20 years ago.

This current era of computing — and the way it shapes the consumer internet technology that more than 4 billion of us use everyday — has high stakes. AI increasingly powers smartphones, social networks, online stores, cars, home assistants and almost every other type of electronic device. Given the power and pervasiveness of these technologies, the question of whether AI helps and empowers or exploits and excludes will have a huge impact on the direction that our societies head over the coming decades.

It would be very easy for us to head in the wrong direction. As we have rushed to build data collection and automation into nearly everything, we have already seen the potential of AI to reinforce long-standing biases or to point us toward dangerous content. And there’s little transparency or accountability when an AI system spreads misinformation or misidentifies a face. Also, as people, we rarely have agency over what happens with our data or the automated decisions that it drives. If these trends continue, we’re likely to end up in a dystopian AI-driven world that deepens the gap between those with vast power and those without.

On the other hand, a significant number of people are calling attention to these dangerous trends — and saying ‘there is another way to do this!’ Much like the early days of open source, a growing movement of technologists, researchers, policy makers, lawyers and activists are working on ways to bend the future of computing towards agency and empowerment. They are developing software to detect AI bias. They are writing new data protection laws. They are inventing legal tools to put people in control of their own data. They are starting orgs that advocate for ethical and just AI. If these people — and Mozilla counts itself amongst them — are successful, we have the potential to create a world where AI broadly helps rather than harms humanity.

It was inspiring conversations with people like these that led Mozilla to focus the $20M+ that it spends each year on movement building on the topic of trustworthy AI. Over the course of 2020, we’ve been writing a paper titled “Creating Trustworthy AI” to document the challenges and ideas for action that have come up in these conversations. Today, we release the final version of this paper.

This ‘paper’ isn’t a traditional piece of research. It’s more like an action plan, laying out steps that Mozilla and other like-minded people could take to make trustworthy AI a reality. It is possible to make this kind of shift, just as we have been able to make the shift to clean water and safer automobiles in response to risks to people and society. The paper suggests the code we need to write, the projects we need to fund, the issues we need to champion, and the laws we need to pass. It’s a toolkit for technologists, for philanthropists, for activists, for lawmakers.

At the heart of the paper are eight big challenges the world is facing when it comes to the use of AI in the consumer internet technologies we all use everyday. These are things like: bias; privacy; transparency; security; and the centralization of AI power in the hands of a few big tech companies. The paper also outlines four opportunities to meet these challenges. These opportunities centre around the idea that there are developers, investors, policy makers and a broad public that want to make sure AI works differently — and to our benefit. Together, we have a chance to write code, process data, create laws and choose technologies that send us in a good direction.

Like any major Mozilla project, this paper was built using an open source approach. The draft we published in May came from 18 months of conversations, research and experimentation. We invited people to comment on that draft, and they did. People and organizations from around the world weighed in: from digital rights groups in Poland to civil rights activists in the U.S., from machine learning experts in North America to policy makers at the highest levels in Europe, from activists, writers and creators to ivy league professors. We have revised the paper based on this input to make it that much stronger. The feedback helped us hone our definitions of “AI” and “consumer technology.” It pushed us to make racial justice a more prominent lens throughout this work. And it led us to incorporate more geographic, racial, and gender diversity viewpoints in the paper.

In the months and years ahead, this document will serve as a blueprint for Mozilla Foundation’s movement building work, with a focus on research, advocacy and grantmaking. We’re already starting to manifest this work: Mozilla’s advocacy around YouTube recommendations has illuminated how problematic AI curation can be. The Data Futures Lab and European AI Fund that we are developing with partner foundations support projects and initiatives that reimagine how trustworthy AI is designed and built across multiple continents. And Mozilla Fellows and Awardees like Sylvie Delacroix, Deborah Raji, and Neema Iyer are studying how AI intersects with data governance, equality, and systemic bias. Past and present work like this also fed back into the white paper, helping us learn by doing.

We also hope that this work will open up new opportunities for the people who build the technology we use every day. For so long, building technology that valued people was synonymous with collecting little or no data about them. While privacy remains a core focus of Mozilla and others, we need to find ways to protect and empower users that also include the collection and use of data to give people experiences they want. As the paper outlines, there are more and more developers — including many of our colleagues in the Mozilla Corporation — who are carving new paths that head in this direction.

Thank you for reading — and I look forward to putting this into action together.

The post Mozilla’s Vision for Trustworthy AI appeared first on The Mozilla Blog.

hacks.mozilla.org
Welcome Yari: MDN Web Docs has a new platform

After several intense months of work on such a significant change, the day is finally upon us: MDN Web Docs’ new platform (codenamed Yari) is finally launched!

[Icon for Yari: a man with a spear, plus the text “Yari, the MDN Web Docs platform”]

Between November 2 and December 14, we ran a beta period in which a number of our fabulous community members tested out the new platform, submitted content changes, allowed us to try out the new contribution workflow, and suggested improvements to both the platform and styling. All of you have our heartfelt thanks.

This post serves to provide an update on where we are now, what we’re aiming to do next, and what you can do to help.

Where we are now

We’ve pulled together a working system in a short amount of time that vastly improves on the previous platform, and solves a number of tangible issues. There is certainly a lot of work still to do, but this new release provides a stable base to iterate from, and you’ll see a lot of further improvements in the coming months. Here’s a peek at where we are now:

Contributing in GitHub

The most significant difference with the new platform is that we’ve moved the content out of a SQL database and into files in a git repository. To edit content, you now submit pull requests against the repo, rather than editing the wiki using the old WYSIWYG editor.

This has a huge advantage in terms of contribution workflow — because it’s a GitHub repo, you can slot it into your workflow however you feel comfortable; mass changes are easier to make programmatically; you can lump together edits across multiple pages in a single pull request rather than making scattered individual edits; and we can apply intelligent automatic linting to edits to speed up work.
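One concrete payoff: because the content is now just files in a checkout, a mass change can be a small script rather than hundreds of wiki edits. Here is an illustrative sketch — the macro name and repo layout are assumptions made for the example, not a description of the real content repo:

```python
from pathlib import Path

# Hypothetical bulk edit: rename one macro call across every page in a
# local checkout of the content repo, ready to submit as a single PR.
OLD_MACRO = "{{CompatNo}}"
NEW_MACRO = "{{Compat}}"

def rename_macro(repo_root: str) -> int:
    """Rewrite OLD_MACRO to NEW_MACRO in every index.html under repo_root.

    Returns the number of files changed.
    """
    changed = 0
    for page in Path(repo_root).rglob("index.html"):
        text = page.read_text(encoding="utf-8")
        if OLD_MACRO in text:
            page.write_text(text.replace(OLD_MACRO, NEW_MACRO), encoding="utf-8")
            changed += 1
    return changed
```

Run on a branch, this becomes one reviewable pull request instead of dozens of separate wiki edits.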

The content repo initially comes with a few basic CLI tools to help you with fundamental tasks, such as yarn start (to create a live preview of what your document will look like when rendered on MDN), yarn content create (to add a new page), yarn content move (to move an existing page), etc. You can find more details of these, and other contribution instructions, in the repo’s README file.

Caring for the community

Community interactions will not just be improved, but transformed. You can now have a conversation about a change over a pull request before it is finalized and submitted, making suggestions and iterating, rather than worrying about getting it perfect the first time round.

We think that this model will give contributors more confidence in making changes, and allow us to build a much better relationship with our community and help them improve their contributions.

Reducing developer burden

Our developer maintenance burden is also much reduced with this update. The existing (Kuma) platform is complex and hard to maintain, and adding new features is very difficult. The update will vastly simplify the platform code — we estimate that we can remove a significant chunk of the existing codebase, meaning easier maintenance and contributions.

This is also true of our front-end architecture: The existing MDN platform has a number of front-end inconsistencies and accessibility issues, which we’ve wanted to tackle for some time. The move to a new, simplified platform gives us a perfect opportunity to fix such issues.

What we’re doing next

There are a number of things that we could do to further improve the new platform going forward. Last week, for example, we already talked about our plans for the future of l10n on MDN.

The first thing we’ll be working on in the new year is ironing out the kinks in the new platform. After that, we can start to serve our readers and contributors much better than before, implementing new features faster and more confidently, which will lead to an even more useful MDN, with an even more powerful contribution model.

The sections below are by no means definitive, but they do provide a useful idea of what we’ve got planned next for the platform. We are aiming to publish a public roadmap in the future, so that you can find out where we’re at, and make suggestions.

Moving to Markdown

At its launch, the content is stored in HTML format. This is OK — we all know a little HTML — but it is not the most convenient format to edit and write, especially if you are creating a sizable new page from scratch. Most people find Markdown easier to write than HTML, so we want to eventually move to storing our core content in Markdown (or maybe some other format) rather than HTML.

Improving search

For a long time, the search functionality on MDN has been substandard. Going forward, we not only want to upgrade our search to return useful results, but we also want to make searching itself more useful: for example, fuzzy search, search by popularity, search by titles and summaries, full-text search, and more.
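As a toy illustration of what “fuzzy” buys you, even a standard library can rank near-miss titles. The titles below are made up for the example, and MDN’s actual search backend is of course not specified here:

```python
from difflib import get_close_matches

# A toy index of page titles; a real search backend would also weight
# popularity, summaries, and full text.
TITLES = [
    "Array.prototype.map()",
    "Array.prototype.flatMap()",
    "String.prototype.match()",
]

def fuzzy_titles(query: str, limit: int = 3) -> list[str]:
    """Return the closest-matching titles for a possibly misspelled query."""
    return get_close_matches(query, TITLES, n=limit, cutoff=0.4)
```

A query with a typo, such as `fuzzy_titles("Array.prototype.mapp()")`, still surfaces the intended page first.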

Representing MDN meta pages

Currently only the MDN content pages are represented in our new content repo. We’d eventually like to stop using our old home, profile, and search pages, which are still currently served from our old Django-based platform, and bring those into the new platform with all the advantages that it brings.

And there’s more!

We’d also like to start exploring:

  • Optimizing file attachments
  • Implementing and enforcing CSP on MDN
  • Automated linting and formatting of all code snippets
  • Gradually removing the old-style KumaScript macros that remain in MDN content, removing, rendering, or replacing them as appropriate. For example, link macros can just be rendered out, as standard HTML links will work fine, whereas all the sidebar macros we have should be replaced by a proper sidebar system built into the actual platform.

What you can do to help

As you’ll have seen from the above next steps, there is still a lot to do, and we’d love the community to help us with future MDN content and platform work.

  • If you are more interested in helping with content work, you can find out how to help at Contributing to MDN.
  • If you are more interested in helping with MDN platform development, the best place to learn where to start is the Yari README.
  • In terms of finding a good general place to chat about MDN, you can join the discussion on the MDN Web Docs chat room on Matrix.

The post Welcome Yari: MDN Web Docs has a new platform appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog
Why getting voting right is hard, Part II: Hand-Counted Paper Ballots

In Part I we looked at desirable properties for voting systems. In this post, I want to look at the details of a specific system: hand-counted paper ballots.

Sample Ballot

Hand-counted paper ballots are probably the simplest voting system in common use (though mostly outside the US). In practice, the process usually looks something like the following:

  1. Election officials pre-print paper ballots and distribute them to polling places. Each paper ballot has a list of contests and the choices for each contest, and a box or some other location where the voter can indicate their choice, as shown above.
  2. Voters arrive at the polling place, identify themselves to election workers, and are issued a ballot. They mark the section of the ballot corresponding to their choice. They cast their ballots by putting them into a ballot box, which can be as simple as a cardboard box with a hole in the top for the ballots.
  3. Once the polls close, the election workers collect all the ballots. If they are to be locally counted, then the process is as below; if they are to be centrally counted, they are transported back to election headquarters for counting.

The counting process varies between jurisdictions, but at a high level the process is simple. The vote counters go through each ballot one at a time and determine which choice it is for. Joseph Lorenzo Hall provides a good description of the procedure for California’s statutory 1% tally here:

In practice, the hand-counting method used by counties in California seems very similar. The typical tally team uses four people consisting of two talliers, one caller and one witness:

  • The caller speaks aloud the choice on the ballot for the race being tallied (e.g., “Yes…Yes…Yes…” or “Lincoln…Lincoln…Lincoln…”).
  • The witness observes each ballot to ensure that the spoken vote corresponded to what was on the ballot and also collates ballots in cross-stacks of ten ballots.
  • Each tallier records the tally by crossing out numbers on a tally sheet to keep track of the vote tally.

Talliers announce the tally at each multiple of ten (“10”, “20”, etc.) so that they can roll-back the tally if the two talliers get out of sync.

Obviously other techniques are possible, but as long as people are able to observe, differences in technique are mostly about efficiency rather than accuracy or transparency. The key requirement here is that any observer can look at the ballots and see that they are being recorded as they are cast. Jurisdictions will usually have some mechanism for challenging the tally of a specific ballot.
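The two-tallier cross-check is easy to see in miniature. In this sketch (a simplification for illustration, not California’s actual statutory procedure), each list is one tallier’s record of the called votes, and the running counts are compared at every multiple of ten:

```python
from collections import Counter

def cross_check(recorded_a, recorded_b):
    """Tally two talliers' records, comparing at every multiple of ten.

    Returns (totals, None) if the records agree throughout, or
    (None, checkpoint) naming the first ten-ballot checkpoint at which
    the talliers diverged and must roll back and re-tally.
    """
    tally_a, tally_b = Counter(), Counter()
    i = 0
    for i, (a, b) in enumerate(zip(recorded_a, recorded_b), start=1):
        tally_a[a] += 1
        tally_b[b] += 1
        if i % 10 == 0 and tally_a != tally_b:
            return None, i
    if tally_a != tally_b:  # final comparison covers a partial last stack
        return None, i
    return tally_a, None
```

The point of announcing the tally every ten ballots is exactly what the checkpoint does here: a single mis-recorded ballot is caught within ten ballots rather than forcing a full recount.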

Security and Verifiability

The major virtue of hand-counted paper ballots is that they are simple, with security and privacy properties that are easy for voters to understand and reason about, and for observers to verify for themselves.

It’s easiest to break the election into two phases:

  • Voting and collecting the ballots
  • Counting the collected ballots

If each of these is done correctly, then we can have high confidence that the election was correctly decided.


The security properties of the voting process mostly come down to ballot handling, namely that:

  • Only authorized voters get ballots, and each voter gets only one. It’s necessary to ensure this because otherwise it’s very hard to prevent multiple voting, where an authorized voter puts in two ballots.
  • Only the ballots of authorized voters make it into the ballot box.
  • All the ballots in the ballot box and only the ballots from the ballot box make it to election headquarters.

The first two of these properties are readily observed by observers — whether independent or partisan. The last property typically relies on technical controls. For instance, in Santa Clara county ballots are taken from the ballot box and put into clear tamper-evident bags for transport to election central, which limits the ability for poll workers to replace the ballots. When put together all three properties provide a high degree of confidence that the right ballots are available to be counted. This isn’t to say that there’s no opportunity for fraud via sleight-of-hand or voter impersonation (more on this later) but it’s largely one-at-a-time fraud, affecting a few ballots at a time, and is hard to perpetrate at scale.


The counting process is even easier to verify: it’s conducted in the open and so observers have their own chance to see each ballot and be confident that it has been counted correctly. Obviously, you need a lot of observers because you need at least one for each counting team, but given that the number of voters far exceeds the number of counting teams, it’s not that impractical for a campaign to come up with enough observers.

Probably the biggest source of problems with hand-counted paper ballots is disputes about the meaning of ambiguous ballots. Ideally voters would mark their ballots according to the instructions, but it’s quite common for voters to make stray marks, mark more than one box, fill in the boxes with dots instead of Xs, or even some more exotic variations, as shown in the examples below. In each case, it needs to be determined how to handle the ballot. It’s common to apply an “Intent of the voter” standard, but this still requires judgement. One extra difficulty here is that at the point where you are interpreting each ballot, you already know what it looks like, so naturally this can lead to a fair amount of partisan bickering about whether to accept each individual ballot, as each side tries to accept ballots that seem like they are for their preferred candidate and disqualify ballots that seem like they are for their opponent.

[Examples of ambiguous ballots: a double mark; a “lizard people” write-in]

A related issue is whether a given ballot is valid. This isn’t so much an issue with ballots cast at a polling place, but for vote-by-mail ballots there can be questions about signatures on the envelopes, the number of envelopes, etc. I’ll get to this later when I cover vote by mail in a later post.

Privacy/Secrecy of the Ballot

The level of privacy provided by paper ballots depends a fair bit on the precise details of how they are used and handled. In typical elections, voters will be given some level of privacy to fill out their ballot, so they don’t have to worry too much about that stage (though presumably in theory someone could set up cameras in the polling place). Aside from that, we primarily need to worry about two classes of attack:

  1. Tracking a given voter’s ballot from checkin to counting.
  2. Determining how a voter voted from the ballot itself.

Ideally — at least from the perspective of privacy — the ballots are all identical and the ballot box is big enough that you get some level of shuffling (how much is an open question). In that case it’s quite hard to correlate the ballot a voter was given to when it’s counted, though you might be able to narrow it down some by looking at which polling place/box the ballot came in and where it was in the box. In some jurisdictions, ballots have serial numbers, which might make this kind of tracking easier, though only if records of which voter gets which ballot are kept and available. Apparently the UK has this kind of system but tightly controls the records.

It’s generally not possible to tell from a ballot itself which voter it belongs to unless the voter cooperates by making the ballot distinctive in some way. This might happen because the voter is being paid (or threatened) to cast their vote a certain way. While some election jurisdictions prohibit distinguishing marks, as a practical matter it’s not really possible to prevent voters from making such marks if they really want to. This is especially true when the ballots need not be machine readable and so the voter has the ability to fill in the box somewhat distinctively (there are a lot of ways to write an X!). In elections with a lot of contests, as in many places in the US, it is also possible to use what’s called a “pattern voting” attack in which you vote one contest the way you are told and then vote the downballot contests in a way that uniquely identifies you. This sort of attack is very hard to prevent, but actually checking that people voted the way they were told is of course a lot of work. There are also more exotic attacks such as fingerprinting paper stock, but none of these are easy to mount in bulk.


Accessibility

One big drawback of hand-marked ballots is that they are not very accessible, either to people with disabilities or to non-native speakers. For obvious reasons, if you’re blind or have limited dexterity it can be hard to fill in the boxes (this is even harder with optical scan type ballots). Many jurisdictions that use paper ballots will also have some accommodation for people with disabilities. Paper ballots work fine in most languages, but each language must be separately translated and then printed, and then you need to have extras of each ballot type in case more people come than you expect, so at the end of the day the logistics can get quite complicated. By contrast, electronic voting machines (which I’ll get to later) scale much better to multiple languages.


Efficiency

Although hand-counting does a good job of producing accurate and verifiable counts, it does not scale very well.1 Estimates of how expensive it is to count ballots vary quite a bit, but a 2010 Pew study of hand recounts in Washington and Minnesota (the 2004 Washington gubernatorial and 2008 Minnesota US Senate races) put the cost of recounting a single contest at between $0.15 and $0.60 per ballot. Of course, as noted above some of the cost here is that of disputing ambiguous ballots. If the race is not particularly competitive then these ballots can be set aside and only need to be carefully adjudicated if they have a chance of changing the result.

Importantly, the cost of hand-counting goes up with the number of ballots times the number of contests on the ballot. In the United States it’s not uncommon to have 20 or more contests per election. For example, here is a sample ballot from the 2020 general election in Santa Clara County, CA. This ballot has the following contests:

Type                          Count
President                         1
US House of Representatives       1
State Assembly                    1
Superior Court Judge              1
County Board of Education         1
County Board of Supervisors       1
Community College District        1
City Mayor                        1
City Council (vote for two)       1
State Propositions               12
Local ballot measures             6
Total                            32

In an election like this, the cost to count could be several dollars per ballot. Of course, California has an exceptionally large number of contests, but in general hand-counting represents a significant cost.

Aside from the financial impact of hand counting ballots, it just takes a long time. Pew notes that both the Washington and Minnesota recounts took around seven months to resolve, though again this is partly due to the small margin of victory. As another example, California law requires a “1% post-election manual tally” in which 1% of precincts are randomly selected for hand-counting. Even with such a restricted count, the tally can take weeks in a large county such as Los Angeles, suggesting that hand counting all the ballots would be prohibitive in this setting. This isn’t to say that hand counting can never work, obviously, merely that it’s not a good match for the US electoral system, which tends to have a lot more contests than in other countries.

Up Next: Optical Scanning

The bottom line here is that while hand counting works well in many jurisdictions it’s not a great fit for a lot of elections in the United States. So if we can’t count ballots by hand, then what can we do? The good news is that there are ballot counting mechanisms which can provide similar assurance and privacy properties to hand counting but do so much more efficiently, namely optical scan ballots. I’ll be covering that in my next post.

  1. By contrast, the marking process is very scalable: if you have a long line, you can put out more tables, pens, privacy screens, etc. 

The post Why getting voting right is hard, Part II: Hand-Counted Paper Ballots appeared first on The Mozilla Blog.

Open Policy & Advocacy
Mozilla teams up with Twitter, Automattic, and Vimeo to provide recommendations on EU content responsibility

The European Commission will soon unveil its landmark Digital Services Act draft law, which will set out a vision for the future of online content responsibility in the EU. We’ve joined up with Twitter, Automattic, and Vimeo to provide recommendations on how the EU’s novel proposals can ensure a more thoughtful approach to addressing illegal and harmful content in the EU, in a way that tackles online harms while safeguarding smaller companies’ ability to compete.

As we note in our letter,

“The present conversation is too often framed through the prism of content removal alone, where success is judged solely in terms of ever-more content removal in ever-shorter periods of time.

Without question, illegal content – including terrorist content and child sexual abuse material – must be removed expeditiously. Yet by limiting policy options to a solely stay up-come down binary, we forgo promising alternatives that could better address the spread and impact of problematic content while safeguarding rights and the potential for smaller companies to compete.

Indeed, removing content cannot be the sole paradigm of Internet policy, particularly when concerned with the phenomenon of ‘legal-but-harmful’ content. Such an approach would benefit only the very largest companies in our industry.

We therefore encourage a content moderation discussion that emphasises the difference between illegal and harmful content and highlights the potential of interventions that address how content is surfaced and discovered. Included in this is how consumers are offered real choice in the curation of their online environment.”

We look forward to working with lawmakers in the EU to help bring this vision for a healthier internet to fruition in the upcoming Digital Services Act deliberations.

You can read the full letter to EU lawmakers here and more background on our engagement with the EU DSA here.

The post Mozilla teams up with Twitter, Automattic, and Vimeo to provide recommendations on EU content responsibility appeared first on Open Policy & Advocacy.

The Mozilla Blog
Why getting voting right is hard, Part I: Introduction and Requirements

Every two years around this time, the US has an election and the rest of the world marvels and asks itself one question: What the heck is going on with US elections? I’m not talking about US politics here but about the voting systems (machines, paper, etc.) that people use to vote, which are bafflingly complex. While it’s true that American voting is a chaotic patchwork of different systems scattered across jurisdictions, running efficient secure elections is a genuinely hard problem. This is often surprising to people who are used to other systems that demand precise accounting such as banking/ATMs or large scale databases, but the truth is that voting is fundamentally different and much harder.

In this series I’ll be going through a variety of different voting systems so you can see how this works in practice. This post provides a brief overview of the basic requirements for voting systems. We’ll go into more detail about the practical impact of these requirements as we examine each system.


To understand voting system design, we first need to understand the requirements to which systems are designed. These vary somewhat, but generally look something like the following.

Efficient Correct Tabulation

This requirement is basically trivial: collect the ballots and tally them up. The winner is the one with the most votes.1 You also need to do it at scale and within a reasonable period of time, otherwise there’s not much point.
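For a single first-past-the-post contest, the tally itself really is this simple; everything hard about elections lives in the other requirements. A minimal sketch:

```python
from collections import Counter

def winner(ballots):
    """Return the choice with the most votes (first past the post).

    Ties are left to whatever tie-break rule the jurisdiction uses;
    Counter.most_common breaks them by insertion order here.
    """
    return Counter(ballots).most_common(1)[0][0]
```

The difficulty, as the rest of this post argues, is doing this verifiably, secretly, accessibly, and without trusting any single party.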

Verifiable Results

It’s not enough for the election just to produce the right result, it must also do so in a verifiable fashion. As voting researcher Dan Wallach is fond of saying, the purpose of elections is to convince the loser that they actually lost, and that means more than just trusting the election officials to count the votes correctly. Ideally, everyone in the world would be able to check for themselves that the votes had been correctly tabulated (this is often called “public verifiability”), but in real-world systems it usually means that some set of election observers can personally observe parts of the process and hopefully be persuaded it was conducted correctly.

Secrecy of the Ballot

The next major requirement is what’s called “secrecy of the ballot”, i.e., ensuring that others can’t tell how you voted. Without ballot secrecy, people could be pressured to vote certain ways or face negative consequences for their votes. Ballot secrecy actually has two components: (1) other people — including election officials — can’t tell how you voted and (2) you can’t prove to other people how you voted. The first component is needed to prevent wholesale retaliation and/or rewards and the second is needed to prevent retail vote buying. The actual level of ballot secrecy provided by systems varies. For instance, the UK system technically allows election officials to match ballots to the voter but prevents it with procedural controls, and vote-by-mail systems generally don’t do a great job of preventing you from proving how you voted. In general, though, most voting systems attempt to provide some level of ballot secrecy.2


Accessibility

Finally, we want voting systems to be accessible, both in the specific sense that we want people with disabilities to be able to vote and in the more general sense that we want it to be generally easy for people to vote. Because the voting-eligible population is so large and people’s situations are so varied, this often means that systems have to make accommodations, for instance for overseas or military voters or for people who speak different languages.

Limited Trust

As you’ve probably noticed, one common theme in these requirements is the desire to limit the amount of trust you place in any one entity or person. For instance, when I worked the polls in Santa Clara County elections, we would collect all the paper ballots and put them in tamper-evident bags before taking them back to election central for processing. This makes it harder for the person transporting the ballots to examine them or substitute their own. For those who aren’t used to the way security people think, this often feels like saying that election officials aren’t trustworthy. But really what it’s saying is that elections are very high-stakes events, and critical systems like this should be designed with as few failure points as possible; that includes preventing both outsider and insider threats, protecting even against authorized election workers themselves.

An Overconstrained Problem

Individually each of these requirements is fairly easy to meet, but the combination of them turns out to be extremely hard. For example, if you publish everyone’s ballots then it’s (relatively) easy to ensure that the ballots were counted correctly, but you’ve completely given up secrecy of the ballot.3 Conversely, if you just trust election officials to count all the votes, then it’s much easier to provide secrecy from everyone else. But these properties are both important, and hard to provide simultaneously. This tension is at the heart of why voting is so much more difficult than other superficially similar systems like banking. After all, your transactions aren’t secret from the bank. In general, what we find is that voting systems may not completely meet all the requirements, but rather compromise, trying to do a good job on most or all of them.

Up Next: Hand-Counted Paper Ballots

In the next post, I’ll be covering what is probably the simplest common voting system: hand-counted paper ballots. This system actually isn’t that common in the US for reasons I’ll go into, but it’s widely used outside the US and provides a good introduction into some of the problems with running a real election.

  1. For the purpose of this series, we’ll mostly be assuming first past the post systems, which are the main systems in use in the US.
  2. Note that I’m talking here about systems designed for use by ordinary citizens. Legislative voting, judicial voting, etc. are qualitatively different: they usually have a much smaller number of voters and don’t try to preserve the secrecy of the ballot, so the problem is much simpler. 
  3. Thanks to Hovav Shacham for this example. 

The post Why getting voting right is hard, Part I: Introduction and Requirements appeared first on The Mozilla Blog.

hacks.mozilla.org
An update on MDN Web Docs’ localization strategy

In our previous post — MDN Web Docs evolves! Lowdown on the upcoming new platform — we talked about many aspects of the new MDN Web Docs platform that we’re launching on December 14th. In this post, we’ll look at one aspect in more detail — how we are handling localization going forward. We’ll talk about how our thinking has changed since our previous post, and detail our updated course of action.

Updated course of action

Based on thoughtful feedback from the community, we did some additional investigation and determined a stronger, clearer path forward.

First of all, we want to keep a clear focus on work leading up to the launch of our new platform, and making sure the overall system works smoothly. This means that upon launch, we still plan to display translations in all existing locales, but they will all initially be frozen — read-only, not editable.

We were considering automated translations as the main way forward. One key issue was that automated translations into European languages are seen as an acceptable solution, but automated translations into CJK (Chinese, Japanese, Korean) languages are far from ideal: those languages have a very different structure from English and European languages, and while many Europeans are able to read English well enough to fall back on English documentation when required, some CJK communities do not commonly read English and so do not have that luxury.

Many folks we talked to said that automated translations wouldn’t be acceptable in their languages. Not only would they be substandard, but a lot of MDN Web Docs communities center around translating documents. If manual translations went away, those vibrant and highly involved communities would probably go away — something we certainly want to avoid!

We are therefore focusing on limited manual translations as our main way forward instead, looking to unfreeze a number of key locales as soon as possible after the new platform launch.

Limited manual translations

Rigorous testing has been done, and it looks like building translated content as part of the main build process is doable. We are separating locales into two tiers in order to determine which will be unfrozen and which will remain locked.

  • Tier 1 locales will be unfrozen and manually editable via pull requests. These locales are required to have at least one representative who will act as a community lead. The community members will be responsible for monitoring the localized pages, updating translations of key content once the English versions are changed, reviewing edits, etc. The community lead will additionally be in charge of making decisions related to that locale, and acting as a point of contact between the community and the MDN staff team.
  • Tier 2 locales will be frozen, and not accept pull requests, because they have no community to maintain them.

The Tier 1 locales we are starting with unfreezing are:

  • Simplified Chinese (zh-CN)
  • Traditional Chinese (zh-TW)
  • French (fr)
  • Japanese (ja)

If you wish for a Tier 2 locale to be unfrozen, then you need to come to us with a proposal, including evidence of an active team willing to be responsible for the work associated with that locale. If this is the case, then we can promote the locale to Tier 1, and you can start work.

We will monitor the activity on the Tier 1 locales. If a Tier 1 locale is not being maintained by its community, we shall demote it to Tier 2 after a certain period of time, and it will become frozen again.

We are looking at this new system as a reasonable compromise — providing a path for you, the community, to continue work on MDN translations provided the interest is there, while also ensuring that locale maintenance is viable, and content won’t get any further out of date. With most locales unmaintained, changes weren’t being reviewed effectively, and readers of those locales were often confused between using their preferred locale or English, their experience suffering as a result.

Review process

The review process will be quite simple.

  • The content for each Tier 1 locale will be kept in its own separate repo.
  • When a PR is made against that repo, the localization community will be pinged for a review.
  • When the content has been reviewed, an MDN admin will be pinged to merge the change. We should be able to set up the system so that this happens automatically.
  • Some user-submitted content bugs will also be filed centrally (the “sprints” issues), as well as on the issue trackers for each locale repo. When triaged, the “sprints” issues will be assigned to the relevant localization team to fix; each localization team is responsible for triaging and resolving the issues filed on its own repo.
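
As a sketch of how the review ping described above could be wired up (the team handle here is hypothetical, and the actual MDN setup may differ), a CODEOWNERS file in each locale repo would automatically request review from that locale’s community on every pull request:

```
# Hypothetical CODEOWNERS for a Tier 1 locale repo (e.g. French).
# Every pull request touching content requests a review from the
# locale's community team before it can be merged.
*  @mdn/l10n-fr-reviewers
```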

Machine translations alongside manual translations

We previously talked about the potential involvement of machine translations to enhance the new localization process. We still have this in mind, but we are looking to keep the initial system simple, in order to make it achievable. The next step in Q1 2021 will be to start looking into how we could most effectively make use of machine translations. We’ll give you another update in mid-Q1, once we’ve made more progress.

The post An update on MDN Web Docs’ localization strategy appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogState of Mozilla 2019-2020: Annual Impact Report

2020 has been a year like few others with the internet’s value and necessity front and center. The State of Mozilla for 2019-2020 makes clear that Mozilla’s mission and role in the world is more important than ever. Dive into the full report by clicking on the image below.

2019–2020 State of Mozilla

About the State of Mozilla

Mozilla releases the State of Mozilla annually. This impact report outlines how Mozilla’s products, services, advocacy and engagement have influenced technology and society over the past year. The State of Mozilla also includes details on Mozilla’s finances as a way of further demonstrating how Mozilla uses the power of its unique structure and resources to achieve its mission — an internet that is open and accessible to all.

The post State of Mozilla 2019-2020: Annual Impact Report appeared first on The Mozilla Blog.

Firefox UXSimplifying the Complex: Crafting Content for Meaningful Privacy Experiences

How content strategy simplified the language and improved content design around a core Firefox feature.

Image of a shield on purple background with the words “Enhanced Tracking Protection.” (Caption: Enhanced Tracking Protection is a feature of the Firefox browser that automatically protects your privacy behind the scenes.)

Firefox protects your privacy behind the scenes while you browse. These protections work invisibly, blocking advertising trackers, data collectors, and other annoyances that lurk quietly on websites. This core feature of the Firefox browser is called Enhanced Tracking Protection (ETP). When the user experience team redesigned the front-end interface, content strategy led efforts to simplify the language and improve content design.

Aligning around new nomenclature

The feature had been previously named Content Blocking. Content blockers are extensions that block trackers placed by ads, analytics companies, and social media. It’s a standardized term among tech-savvy, privacy-conscious users. However, it wasn’t well understood in user testing. Some participants perceived the protections to be too broad, assuming it blocked adult content (it doesn’t). Others thought the feature only blocked pop-ups.

This was an opportunity to improve the clarity and comprehension of the feature name itself. The content strategy team renamed the feature to Enhanced Tracking Protection.

A chart outlining the differences between Content Blocking and Enhanced Tracking Protection. (Caption: The feature had been previously called Content Blocking, which was a reflection of its technical functionality. To bring focus to the feature’s end benefits to users, we renamed it to Enhanced Tracking Protection.)

Renaming a core browser feature is 5 percent coming up with a name, 95 percent getting everyone on the same page. Content strategy led the communication and coordination of the name change. This included alignment efforts with marketing, localization, support, engineering, design, and legal.

Revisiting content hierarchy and information architecture

To see what Firefox blocked on a particular website, you can select the shield to the left of your address bar.

Image of shield in the Firefox address bar highlighted. (Caption: To access the Enhanced Tracking Protection panel in Firefox, select the shield to the left of the address bar.)

Previously, this opened a panel jam-packed with an overwhelming amount of information — only a portion of which pertained to the actual feature. User testing participants struggled to parse everything on the panel.

The new Enhanced Tracking Protection panel needed to do a few things well:

  • Communicate if the feature was on and working
  • Make it easy to turn the feature off and back on
  • Provide just-enough detail about which trackers Firefox blocks
  • Offer a path to adjust your settings and visit your Protections Dashboard

Image of the previous Content Blocking panel alongside the new Enhanced Tracking Protection Panel. (Caption: The panel previously included information about a site’s connection, the Content Blocking feature, and site-level permissions. The redesigned panel focuses only on Enhanced Tracking Protection.)

We made Enhanced Tracking Protection the panel’s only focus. Information pertaining to a site’s security and permissions was moved to a different part of the UI. We made design and content changes to reinforce the feature’s on/off state. Additional modifications were made to the content hierarchy to improve scannability.

Solving problems without words

A chart outlining variations of button copy and the recommendation to change the design element entirely. (Caption: Users weren’t able to quickly turn the feature on and off. Clearer button copy alone couldn’t solve the problem.)

Words alone can’t solve certain problems. The enable/disable button is a perfect example. Users can go to their settings and manually opt in to a stricter level of Enhanced Tracking Protection. Websites occasionally don’t work as expected in strict ETP. There’s an easy fix: Turn it off right from the panel.

User testing participants couldn’t figure out how to do it. Though there was a button on the panel, its function was far from obvious. A slight improvement was made by updating the copy from ‘enable/disable’ to ‘turn on/turn off.’ Ultimately, the best solution was not better button copy. It was removing the button entirely and replacing it with a different design element.

A chart outlining the differences between a button and a switch design element. (Caption: A switch better communicated the on/off state of the feature.)

We also moved this element to the top of the panel for easier access.

Image of the previous button, which read “Disable Blocking for this Site,” beside the new switch element. (Caption: User testing participants struggled to understand how to turn the feature off. The solution was not better button copy, but replacing the button with an on/off switch and moving it higher up on the panel for better visibility.)

Lastly, we added a sub-panel to inform users how turning off ETP might fix their problem. We used one of the best-kept secrets of the content strategy trade: a bulleted list to make this sub-panel scannable.

Image of Enhanced Tracking Protection panel and its sub-panel, which explains reasons why a site might not be working. (Caption: A sub-panel outlines reasons why you might want to turn Enhanced Tracking Protection off.)

Improving the clarity of language on Protections Dashboard

An image of the previous Protections Dashboard beside the revised content and design. (Caption: Adding clarifying language to Protections Dashboard provides an overview of the page, offers users a path to adjust their settings, and reinforces that the Enhanced Tracking Protection feature is always on.)

Firefox also launched a Protections Dashboard to give users more visibility into their privacy and security protections. After user research was conducted, we made further changes to the content design and copy. All credit goes to my fellow content strategist Meridel Walkington for making these improvements.

A chart outlining issues identified in user research and changes that were made. (Caption: Content strategy recommended improvements to the content design and language of the Protections Dashboard to improve comprehension.)

Explaining jargon in clear and simple terms

Jargon can’t always be avoided. Terms like ‘cryptominers,’ ‘fingerprinters,’ and other trackers Firefox blocks are technical by nature. Most users aren’t familiar with these terms, so the Protections Dashboard offers a short definition to break down each in clear, simple language. We also offer a path for users to explore these terms in more detail. The goal was to provide just-enough information without overwhelming users when they landed on their dashboard.

Descriptions for types of trackers that Firefox blocks. (Caption: Descriptions of each type of tracker help explain terms that are inherently technical.)

“Mozilla doesn’t just throw stats at users. The dashboard has a minimalist design and uses color-coded bar graphs to provide a simple overview of the different types of trackers blocked. It also features explainers clearly describing what the different types of trackers do.” — Fast Company: Firefox at 15: its rise, fall, and privacy-first renaissance

Wrapping up

Our goal in creating meaningful privacy experiences is to educate and empower users without overwhelming and paralyzing them. It’s an often delicate dance that requires deep partnership between product management, engineering, design, research and content strategy. Enhanced Tracking Protection is just one example of this type of collaboration. For any product to be successful, it’s important that our cross-functional teams align early on the user problems so we can design the best experience to meet them where they are.


Thank you to Michelle Heubusch for your ongoing support in this work and to Meridel Walkington for making it even better. All user research conducted by Alice Rhee. Design by Bryan Bell and Eric Pang.

Simplifying the Complex: Crafting Content for Meaningful Privacy Experiences was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

hacks.mozilla.orgFlying the Nest: WebThings Gateway 1.0

WebThings Gateway 1.0

After four years of incubation at Mozilla, we are excited to announce the release of WebThings Gateway 1.0 and a new home for the WebThings platform.

WebThings Gateway floorplan

You may have heard that following a company restructuring in August, the WebThings platform is being spun out of Mozilla as an independent, community-run open source project.

This blog post will explain what to expect from the 1.0 release, the action you need to take if you want to transition your existing WebThings Gateway to new community-run infrastructure, and what to expect from the WebThings project going forward.

See the release notes for the full set of new features and changes in the WebThings Gateway 1.0 release, including support for new types of sensors, searchable add-ons and translations into five new languages.

The Journey So Far

The Mozilla IoT team released the first version of “Project Things” in June 2017, six months after an initial whitepaper proposing how Mozilla could apply its mission to the emerging IoT ecosystem. We wanted to apply lessons learned from the World Wide Web to the Internet of Things, to create an IoT which “puts people first, where individuals can shape their own experience and are empowered, safe and independent”.

Mozilla IoT team

Our team’s goal was to create an open source implementation of the Web of Things which embodied Mozilla’s values and helped drive IoT standards around privacy, security and interoperability. We aimed to bridge the communication gap between connected devices and work towards a more decentralised Internet of Things that is safe, open and interoperable.

Some of the highlights of the last four years have included:

  • Twelve releases of our WebThings Gateway software, which allows users to directly monitor and control their home over the web without a middleman.
  • Translation of WebThings Gateway into 34 spoken languages, with over 100,000 downloads powering thousands of DIY smart homes around the world.
  • Over a hundred add-ons developed for WebThings Gateway, bridging a wide range of different protocols and devices to the Web of Things, providing various types of user notifications and extending the gateway’s user interface.
  • WebThings Framework implementations in over a dozen programming languages, enabling developers to implement their own web things in the language of their choice.
  • The growth of a worldwide community of hackers, makers and educators who have been pivotal in building, testing and promoting WebThings around the world.
  • Presentations and workshops at conferences from Mozilla Festival in London and FOSDEM in Brussels to LinuxConf in South Africa and Maker Faire in Silicon Valley.
  • Countless innovative DIY projects by the community – controlling physical devices using voice and virtual reality, smart campervans, smart yurts, earthquake alerts, pool heating, air quality monitoring and plant watering.
  • The release of the Mozilla WebThings Gateway Kit in partnership with OKdo (still available for a limited time only!)
  • Contributions to the W3C Thing Description specification which became a W3C recommendation in April this year.

Our New Home

Going forward, you will be able to find the WebThings community at our new home. You can follow @WebThingsIO on Twitter, fork us on GitHub and sign up for our newsletter to keep up to date with all the latest news.

For the time being we will still be using the existing WebThings forum and the #iot chat channel for discussions.


As part of the transition, the Mozilla IoT remote access service and automatic software updates will be discontinued on 31st December 2020, to be replaced by community-run services which you can transfer to if you choose.

If you have an existing WebThings Gateway then you should shortly receive an automatic update to the 1.0 release and see a banner appear at the top of your gateway’s web interface.

WebThings Gateway transition banner

Clicking the “choose” button will display a dialog explaining the choices you have about whether to transfer to the new community-run services. This includes whether you wish to continue to receive software updates from the community, and whether you wish to use the replacement remote access service and swap your existing subdomain for a new one.

You will also have the option of signing up for the new WebThings newsletter and will need to accept the WebThings community Privacy Policy and Terms of Service in order to make use of any replacement services.

WebThings Gateway transition dialog

If you choose not to transfer your gateway to the new infrastructure then fear not: your gateway will continue to work just as it did before on your local network, as it doesn’t rely on any cloud services to function. But please be aware that after 31st December 2020 you will no longer be able to use the remote access service, and Mozilla will no longer provide software updates, including security fixes.


Following the transition, governance of the project is being passed to the community using a module ownership system independent of the Mozilla Corporation’s organisational structure, like the one used by the Mozilla project. For continuity the initial module owners of the top level WebThings module will be Ben Francis and Michael Stegeman from the original Mozilla IoT team. These module owners will then be able to create sub-modules and assign new module owners and peers to help govern the project going forward.

You can find the initial list of modules and module owners on our wiki. If you would like to volunteer to be an owner or peer of a module, or propose the creation of a new sub-module, then you can contact the owner of the module or parent module, or contact the top level module owners.

The best way to achieve module owner or peer status is by demonstrating your commitment to the module through ongoing contributions, so rather than wait for permission we encourage you to just get stuck in and start hacking on whatever area interests you.

How to Contribute

Having flown the nest from Mozilla, the future of the WebThings project is now in the hands of its worldwide community. Your support is going to be crucial in enabling the project to continue to thrive and grow.

There are many ways you can contribute to WebThings:

  • 💻 Development – Pick a bug, task or feature off the Product Backlog and start hacking
  • 🐜 Testing – Either writing or fixing automated tests, or manual testing of builds of the latest master branch
  • 🧩 Add-ons – Write an add-on (or help maintain an existing one) to add support for a new type of device or protocol, add new notification mechanisms or extend the UI for new use cases
  • 💡 Things – Build a new web thing using the WebThings Framework to expand the Web of Things ecosystem, or even create a web thing library in a new programming language
  • 📖 Documentation – Our documentation could do with some love, and we are currently overhauling this section of the website
  • 💬 Support – Help other community members with questions and problems on the forums and #iot chat channel
  • 🌍 Localisation – Help translate the WebThings Gateway into new languages using Pontoon
  • 📣 Evangelism – Talk about WebThings at events, on blogs and on social media, give talks and run workshops to help spread the word
  • 📄 Standardisation – Help with standardising the Thing Description and Web Thing Protocol (see the recent call for use cases & requirements)

What’s Next?

Now that version 1.0 is out of the door, we are already starting to think about working towards a version 2.0. In terms of a roadmap, WebThings’ new commercial sponsor Krellian has some ideas about where to take the project next, but we’d most like to hear from you (the WebThings Community) about what you’d like to see from the project in the future.

We’d again like to take this opportunity to thank you all for your contributions and support for the project so far. The team is looking forward to this new chapter in the WebThings story, as the project flies the nest from Mozilla to make its own way in the world! We hope to take you all along for the ride.

Come and join the discussion on our forum, and follow us on Twitter or subscribe to our new email newsletter if you’d like to be kept up to date with the latest WebThings news.

Happy hacking!

The post Flying the Nest: WebThings Gateway 1.0 appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla reacts to publication of the EU Democracy Action Plan

The European Commission has just published its new EU Democracy Action Plan (EDAP). This is an important step forward in efforts to better protect democracy in the digital age, and we’re happy to see the Commission take on board many of our recommendations.

Reacting to the EDAP publication, Raegan MacDonald, Mozilla’s Head of Public Policy, said:

“Mozilla has been a leading advocate for the need for greater transparency in online political advertising. We haven’t seen adequate steps from the platforms to address these problems themselves, and it’s time for regulatory solutions. So we welcome the Commission’s signal of support for the need for broad disclosure of sponsored political content. We likewise welcome the EDAP’s acknowledgement of the risks associated with microtargeting of political content.

As a founding signatory to the EU Code of Practice on Disinformation we are encouraged that the Commission has adopted many of our recommendations for how the Code can be enhanced, particularly with respect to its implementation and its role within a more general EU policy approach to platform responsibility.

We look forward to working with the EU institutions to fine-tune the upcoming legislative proposals.”

The post Mozilla reacts to publication of the EU Democracy Action Plan appeared first on Open Policy & Advocacy.

Blog of DataThis Week in Glean: Glean is Frictionless Data Collection

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

So you want to collect data in your project? Okay, it’s pretty straightforward.

  1. API: You need a way to combine the name of your data with the value that data has. Ideally you want it to be ergonomic to your developers to encourage them to instrument things without asking you for help, so it should include as many compile-time checks as you can and should be friendly to the IDEs and languages in use. Note the plurals.
  2. Persistent Storage: Keyed by the name of your data, you need some place to put the value. Ideally this will be common regardless of the instrumentation’s language or thread of execution. And since you really don’t want crashes or sudden application shutdowns or power outages to cause you to lose everything, you need to persist this storage. You can write it to a file on disk (if your platforms have such access), but be sure to write the serialization and deserialization functions with backwards-compatibility in mind because you’ll eventually need to change the format.
  3. Networking: Data stored with the product has its uses, but chances are you want this data to be combined with more data from other installations. You don’t need to write the network code yourself, there are libraries for HTTPS after all, but you’ll need to write a protocol on top of it to serialize your data for transmission.
  4. Scheduling: Sending data each time a new piece of instrumentation comes in might be acceptable for some products whose nature is only-online. Messaging apps and MMOs send so much low-latency data all the time that you might as well send your data as it comes in. But chances are you aren’t writing something like that, or you respect the bandwidth of your users too much to waste it, so you’ll only want to be sending data occasionally. Maybe daily. Maybe when the user isn’t in the middle of something. Maybe regularly. Maybe when the stored data reaches a certain size. This could get complicated, so spend some time here and don’t be afraid to change it as you find new corners.
  5. Errors: Things will go wrong. Instrumentation will, despite your ergonomic API, do something wrong and write the wrong value or call stop() before start(). Your networking code will encounter the weirdness of the full Internet. Your storage will get full. You need some way to communicate the health of your data collection system to yourself (the owner who needs to adjust scheduling and persistence and other stuff to decrease errors) and to others (devs who need to fix their instrumentation, analysts who should be told if there’s a problem with the data, QA so they can write tests for these corner cases).
  6. Ingestion: You’ll need something on the Internet listening for your data coming in. It’ll need to scale to the size of your product’s base and be resilient to Internet Attacks. It should speak the protocol you defined in #3, so you should probably have some sort of machine-readable definition of that protocol that product and ingestion can share. And you should spend some time thinking about what to do when an old product with an old version of the protocol wants to send data to your latest ingestion endpoint.
  7. Pipeline: Not all data will go to the same place. Some is from a different product. Some adheres to a different schema. Some is wrong but ingestion (because it needs to scale) couldn’t do the verification of it, so now you need to discard it more expensively. Thus you’ll be wanting some sort of routing infrastructure to take ingested data and do some processing on it.
  8. Warehousing: Once you receive all these raw payloads you’ll need a place to put them. You’ll want this place to be scalable, high-performance, and highly-available.
  9. Datasets: Performing analysis to gain insight from raw payloads is possible (even I have done it), but it is far more pleasant to consolidate like payloads with like, perhaps ordered or partitioned by time and by some dimensions within the payload that’ll make analyses quicker. Maybe you’ll want to split payloads into multiple rows of a tabular dataset, or combine multiple payloads into single rows. Talk to the people doing the analyses and ask them what would make their lives easier.
  10. Tooling: Democratizing data analysis is a good way to scale up the number of insights your organization can find at once, and it’s a good way to build data intuition. You might want to consider low-barrier data analysis tooling to encourage exploration. You might also want to consider some high-barrier data tooling for operational analyses and monitoring (good to know that the update is rolling out properly and isn’t bricking users’ devices). And some things for the middle ground of folks that know data and have questions, but don’t know SQL or Python or R.
  11. Tests: Don’t forget that every piece of this should be testable and tested in isolation and in integration. If you can manage it, a suite of end-to-end tests does wonders for making you feel good that the whole system will continue to work as you develop it.
  12. Documentation: You’ll need two types of documentation: User and Developer. The former is for the “user” of the piece (developers who wish to instrument back in #1, analysts who have questions that need answering in #10). The latter is for anyone going in trying to understand the “Why” and “How” of the pieces’ architecture and design choices.
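
To make the first two pieces above concrete, here is a minimal Python sketch (illustrative only, not how any real SDK does it) of a named counter API backed by crash-resistant persistent storage, using a write-to-temp-then-rename pattern so a sudden shutdown never leaves a half-written file:

```python
import json
import os
import tempfile

class MetricStore:
    """The 'persistent storage' piece: survives crashes via atomic replace."""
    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.data = json.load(f)
        except (FileNotFoundError, ValueError):
            self.data = {}

    def flush(self):
        # Write to a temp file, then rename over the old one: readers see
        # either the old file or the new one, never a torn write.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.path)

class CounterMetric:
    """The 'API' piece: pairs the name of the data with its value."""
    def __init__(self, store, name):
        self.store = store
        self.name = name

    def add(self, amount=1):
        self.store.data[self.name] = self.store.data.get(self.name, 0) + amount
        self.store.flush()

store = MetricStore("metrics.json")
pages_visited = CounterMetric(store, "browser.pages_visited")
pages_visited.add()
pages_visited.add(2)
print(store.data["browser.pages_visited"])  # 3
```

`os.replace` is atomic on POSIX filesystems, which is what makes the storage safe against power loss; a real SDK would also need the backwards-compatible serialization and thread safety described above.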

You get all that? Thread safety. File formats. Networking protocols. Scheduling using real wall-clock time. Schema validation. Open ports on the Internet. At scale. User-facing tools and documentation. All tested and verified.

Look, I said it’d be straightforward, not that it’d be easy. I’m sure it’ll only take you a few years and a couple tries to get it right.

Or, y’know, if you’re a Mozilla project you could just use Glean which already has all of these things…

  1. API: The Glean SDK API aims to be ergonomic and idiomatic in each supported language.
  2. Persistent Storage: The Glean SDK uses rkv as a persistent store for unsubmitted data, and a documented flat file format for submitted but not yet sent data.
  3. Networking: The Glean SDK provides an API for embedding applications to provide their own networking stack (useful when we’re embedded in a browser), and some default implementations if you don’t care to provide one. The payload protocol is built on Structured Ingestion and has a schema that generates and deploys new versions daily.
  4. Scheduling: Each Glean SDK payload has its own schedule to respect the character of the data it contains, from as frequently as the user foregrounds the app to, at most, once a day.
  5. Errors: The Glean SDK builds user metric and internal health metrics into the SDK itself.
  6. Ingestion: The edge servers and schema validation are all documented and tested. We autoscale quite well and have a process for handling incidents.
  7. Pipeline: We have a pubsub system on GCP that handles a variety of different types of data.
  8. Warehousing: I can’t remember if we still call this the Data Lake or not.
  9. Datasets: We have a few. They are monitored. Our workflow software for deriving the datasets is monitored as well.
  10. Tooling: Quite a few of them are linked from the Telemetry Index.
  11. Tests: Each piece is tested individually. Adjacent pieces sometimes have integration suites. And Raphael recently spun up end-to-end tests that we’re very appreciative of. And if you’re just a dev wondering if your new instrumentation is working? We have the debug ping viewer.
  12. Documentation: Each piece has developer documentation. Some pieces, like the SDK, also have user documentation. And the system at large? Even more documentation.
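
As an illustration of the scheduling piece (#4), here is a small, self-contained Python sketch of the kind of decision such a scheduler makes. This is not Glean’s actual logic; the names and thresholds are invented:

```python
from datetime import datetime, timedelta

MIN_INTERVAL = timedelta(days=1)

def should_submit(last_sent, now, pending_bytes, max_pending=512 * 1024):
    """Submit at most once a day, or early if stored data grows too large.

    last_sent may be None for a fresh install.
    """
    if pending_bytes >= max_pending:
        return True  # storage pressure overrides the daily cadence
    if last_sent is None:
        return True
    return now - last_sent >= MIN_INTERVAL

print(should_submit(datetime(2020, 12, 1, 9), datetime(2020, 12, 1, 18), 1024))   # False
print(should_submit(datetime(2020, 12, 1, 9), datetime(2020, 12, 2, 10), 1024))   # True
print(should_submit(datetime(2020, 12, 1, 9), datetime(2020, 12, 1, 10), 600000)) # True
```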

Glean takes this incredibly complex problem, breaks it into pieces, solves each piece individually, then puts the solution together in a way that makes it greater than the sum of its parts.

All you need is to follow the six steps to integrate the Glean SDK and notify the Ecosystem that your project exists, and then your responsibilities shrink to just instrumentation and analysis.

If that isn’t frictionless data collection, I don’t know what is.


(( If you’re not a Mozilla project, and thus don’t by default get to use the Data Platform (numbers 6-10) for your project, come find us on the #glean channel on Matrix and we’ll see what help we can get you. ))

(( This post was syndicated from its original location. ))

Web Application SecurityDesign of the CRLite Infrastructure

Firefox is the only major browser that still evaluates, for every website it connects to, whether the certificate used has been reported as revoked. Firefox users are notified of all connections involving untrustworthy certificates, regardless of the popularity of the site. Inconveniently, checking certificate status sometimes slows down the connection to websites. Worse, the check reveals cleartext information about the website you’re visiting to network observers.

We’re now testing a technology named CRLite which provides Firefox users with the confidence that the revocations in the Web PKI are enforced by the browser without this privacy compromise. This is a part of our goal to use encryption everywhere. (See also: Encrypted SNI and DNS-over-HTTPS)

The first three posts in this series are about the newly-added CRLite technology and provide background that will be useful for following along with this post.

This blog post discusses the back-end infrastructure that produces the data which Firefox uses for CRLite. To begin with, we’ll trace that data in reverse, starting from what Firefox needs to use for CRLite’s algorithms, back to the inputs derived from monitoring the whole Web PKI via Certificate Transparency.

Tracing the Flow of Data

Individual copies of Firefox maintain in their profiles a CRLite database which is periodically updated via Firefox’s Remote Settings. Those updates come in the form of CRLite filters and “stashes”.

Filters and Stashes

The general mechanism for how the filters work is explained in Figure 3 of The End-to-End Design of CRLite.

Introduced in this post is the concept of CRLite stashes. These are lists of certificate issuers and the certificate serial numbers that those issuers revoked, which the CRLite infrastructure distributes to Firefox users in lieu of a whole new filter. If a certificate’s identity is contained within any of the issued stashes, then that certificate is invalid.

Combining stashes with the CRLite filters produces an algorithm which, in simplified terms, proceeds like this:

A representation of the CRLite algorithm:

  • Is this website’s Certificate Authority enrolled in CRLite? If not, use the online status check, OCSP.
  • If it is enrolled, should I expect this website’s certificate to be a part of the CRLite filter available in my local profile?
  • If it should be in the CRLite filter, does this website’s certificate appear in the filter as having been revoked by its issuer?
  • If it’s not in the filter as having been revoked, does this website’s certificate appear in any of my local profile’s CRLite stashes as being revoked?
  • If either the CRLite filter or the stashes indicate the website’s certificate is revoked, don’t trust it and show an error page.
  • If it’s in neither the CRLite filter nor the CRLite stashes as revoked, proceed to run the rest of the Web PKI validity and trust checks.
  • If for any reason we can’t tell from the above (for example, the local filters or stashes are too old, or we encounter an error), go back to using the online status check, OCSP.

Figure 1: Simplified CRLite Decision Tree

Every time the CRLite infrastructure updates its dataset, it produces both a new filter and a stash containing all of the new revocations (compared with the previous run). Firefox’s CRLite is up-to-date if it has a filter and all issued stashes for that filter.
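The simplified decision tree can be sketched in code. The sketch below is purely illustrative: certificates are modeled as (issuer, serial) pairs and the filter as plain Python sets, whereas the real filter is a compact cascade structure inside Firefox.

```python
def crlite_decision(issuer, serial, enrolled, covered, filter_revoked,
                    stashes, data_is_fresh):
    """Illustrative sketch of the simplified CRLite decision tree.
    'covered' holds the certificate identities the current filter is
    expected to know about; 'stashes' is a list of {issuer: {serials}}
    mappings. All names and shapes are assumptions for illustration."""
    cert = (issuer, serial)
    if issuer not in enrolled:
        return "use_ocsp"        # CA not enrolled: fall back to OCSP
    if not data_is_fresh:
        return "use_ocsp"        # filter/stashes too old: fall back to OCSP
    if cert not in covered:
        return "use_ocsp"        # cert not expected in this filter
    if cert in filter_revoked:
        return "revoked"         # the filter records it as revoked
    if any(serial in stash.get(issuer, set()) for stash in stashes):
        return "revoked"         # a newer stash records it as revoked
    return "not_revoked"         # continue with the usual Web PKI checks
```

Note that every uncertain path falls back to OCSP rather than failing open or closed.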

Enrolled, Valid and Revoked

To produce the filters and stashes, CRLite needs as input:

  1. The list of trusted certificate authority issuers which are enrolled in CRLite,
  2. The list of all currently-valid certificates issued by each of those enrolled certificate authorities, e.g. information from Certificate Transparency,
  3. The list of all unexpired-but-revoked certificates issued by each of those enrolled certificate authorities, e.g. from Certificate Revocation Lists.

These bits of data are the basis of the CRLite decision-making.

The enrolled issuers are communicated to Firefox clients as updates within the existing Intermediate Preloading feature, while the certificate sets are compressed into the CRLite filters and stashes. Whether a certificate issuer is enrolled or not is directly related to obtaining the list of their revoked certificates.

Collecting Revocations

To obtain all the revoked certificates for a given issuer, the CRLite infrastructure reads the Certificate Revocation List (CRL) Distribution Point extension out of all that issuer’s unexpired certificates and filters the list down to those CRLs which are available over HTTP/HTTPS. Then, every URL in that list is downloaded and verified: Does it have a valid, trusted signature? Is it up-to-date? If any could not be downloaded, do we have a cached copy which is still both valid and up-to-date?

For issuers which are considered enrolled, all of the entries in the CRLs are collected and saved as a complete list of all revoked certificates for that issuer.
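The acceptance logic for a single CRL URL can be sketched as follows. The field names and dict shape are assumptions for illustration; the real pipeline verifies an actual signature against the issuer’s key rather than checking a boolean flag.

```python
def select_crl(downloaded, cached, now):
    """Pick a usable CRL for one URL: prefer a freshly downloaded copy
    that has a valid signature and is still current; otherwise fall back
    to a cached copy meeting the same criteria; otherwise give up.
    Each CRL is modeled as a dict with hypothetical 'signature_ok' and
    'next_update' fields."""
    for crl in (downloaded, cached):
        if crl and crl["signature_ok"] and crl["next_update"] > now:
            return crl
    return None
```

If `select_crl` returns `None` for any of an issuer’s CRLs, that issuer’s revocation set is incomplete, which feeds into the enrollment decision described above.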

Lists of Unexpired Certificates

The lists of currently-valid certificates and unexpired-but-revoked certificates have to be calculated, as the data sources that CRLite uses consist of:

  1. Certificate Transparency’s list of all certificates in the WebPKI, and
  2. All the published certificate revocations from the previous step.

By policy now, Certificate Transparency (CT) Logs, in aggregate, are assumed to provide a complete list of all certificates in the public Web PKI. CRLite then filters the complete CT dataset down to certificates which haven’t yet reached their expiration date and which have been issued by certificate authorities trusted by Firefox.

Filtering CT data down to a list of unexpired certificates allows CRLite to derive the needed data sets using set math:

  • The currently-valid certificates are those which are unexpired and not included in any revocation list,
  • The unexpired-but-revoked certificates are those which are unexpired and are included in a revocation list.
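Modeling certificates as (issuer, serial) pairs, the derivation is plain set arithmetic; the concrete entries below are invented for illustration.

```python
# All unexpired certificates known to Certificate Transparency.
unexpired = {("CA1", 1), ("CA1", 2), ("CA2", 7)}
# Everything collected from the issuers' CRLs; ("CA2", 9) has already
# expired, so it drops out of both derived sets.
revoked_from_crls = {("CA1", 2), ("CA2", 9)}

unexpired_but_revoked = unexpired & revoked_from_crls   # intersection
currently_valid = unexpired - revoked_from_crls         # set difference

assert unexpired_but_revoked == {("CA1", 2)}
assert currently_valid == {("CA1", 1), ("CA2", 7)}
```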

The CT data simply comes from a continual monitoring of the Certificate Transparency ecosystem. Every known CT log is monitored by Mozilla’s infrastructure, and every certificate added to the ecosystem is processed.

The Kubernetes Pods

All these functions are orchestrated as four Kubernetes pods with the descriptive names Fetch, Generate, Publish, and Sign-off.


Fetch is a Kubernetes deployment, or always-on task, which constantly monitors Certificate Transparency data from all Certificate Transparency logs. Certificates that aren’t expired are inserted into a Redis database, configured so that certificates are expunged automatically when they reach their expiration time. This way, whenever the CRLite infrastructure requires a list of all unexpired certificates known to Certificate Transparency, it can iterate through all of the certificates in the Redis database. The actual data stored in Redis is described in our FAQ.

Figure 2: The Fetch task reads from Certificate Transparency and stores data in a Redis database
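Automatic expunging at expiration time maps naturally onto Redis’s absolute-expiry command, EXPIREAT, keyed to each certificate’s notAfter instant. The sketch below shows the timestamp computation; the Redis calls are indicated in comments, and the key/value shapes are assumptions, not the real schema.

```python
import datetime

def expiry_timestamp(not_after: datetime.datetime) -> int:
    """Unix time at which Redis should drop the entry: the certificate's
    notAfter instant, interpreted as UTC."""
    return int(not_after.replace(tzinfo=datetime.timezone.utc).timestamp())

# With a live Redis connection (illustrative key/value shapes):
#   r = redis.Redis()
#   r.set(f"cert:{fingerprint}", serialized_metadata)
#   r.expireat(f"cert:{fingerprint}", expiry_timestamp(not_after))
```

With expiry delegated to Redis, iterating the database at any moment yields only unexpired certificates, with no separate cleanup pass.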


The Generate pod is a periodic task, which currently runs four times a day. This task reads all known unexpired certificates from the Redis database, downloads and validates all CRLs from the issuing certificate authorities, and synthesizes a filter and a stash from those data sources. The resulting filters and stashes are uploaded into a Google Cloud Storage bucket, along with all the source input data, for both public audit and distribution.

Figure 3: The Generate task reads from a Redis database and the Internet, and writes its results to Google Cloud Storage


The Publish task is also periodic, running frequently. It looks for new filters and stashes in the Google Cloud Storage bucket, and stages a new filter or stash to Firefox’s Remote Settings whenever the Generate task finishes producing one.

Figure 4: The Publish job reads from Google Cloud Storage and writes to Remote Settings


Finally, a separate Sign-Off task also runs frequently. When there is an updated filter or stash staged at Firefox’s Remote Settings, the Sign-Off task downloads the staged data and tests it, checking for coherency and making sure that CRLite does not accidentally include revocations that could break Firefox. If all the tests pass, the Sign-Off task approves the new CRLite data for distribution, which triggers Megaphone to push the update to Firefox users that are online.

Figure 5: The Sign-Off task interacts with both Remote Settings and the public Internet

Using CRLite

We recently announced in the mailing list that Firefox Nightly users on Desktop are relying on CRLite, after collecting encouraging performance measurements for most of 2020. We’re working on plans to begin tests for Firefox Beta users soon. If you want to try using CRLite, you can use Firefox Nightly, or for the more adventurous reader, interact with the CRLite data directly.

Our final blog post in this series, Part 5, will reflect on the collaboration between Mozilla Security Engineering and the several research teams that designed and have analyzed CRLite to produce this impressive system.

The post Design of the CRLite Infrastructure appeared first on Mozilla Security Blog.

Rumbling Edge - Thunderbird: Biggest Casino Technology Innovations

Technology has influenced every activity we do and has enabled us to do the unimaginable. It is no surprise that technology has changed the gambling industry too. With so many innovations, we might think we are in a golden age, but technology raises the bar every time we feel we have seen the best. Each innovation is only the best one yet.

Time flies

We are all aware that there are no clocks inside a casino, and when you are gambling, you tend to lose track of time. The idea of making customers stay and gamble longer is fascinating and has been practised for a long time now.

Technology has made it possible for us to have casino applications on our watches. When we say watches, we mean smartwatches: gamblers can now get a casino experience on their wrists. From not knowing how time flies while gambling to gambling on a timekeeping device, we have come full circle.

Virtual reality

Until now, we were lauding online gambling as a cutting-edge innovation, but then came virtual reality, which topped online gambling and gave gamblers an even more realistic experience. With virtual reality, you can sit in your living room and feel like you are at the casino.

While playing online on your computer or mobile device, you are so immersed in the game that you forget your surroundings, but when you look away from the screen even for a second, you are reminded of your reality. Virtual reality changes this: when you put on your VR goggles and the other gadgets, there is no looking away from the screen. It feels like you are in the casino, no matter where you are.


With changing times, not everyone wants to go to a physical casino, and since gamblers usually have just one application or website that they trust, they can be closely monitored. Their behaviour and habits are tracked: how often they play, how much they bet, and the time spent on each game. First-time visitors are especially valued, as they can be turned into frequent gamblers, so they are given offers they can’t refuse.

Chip tracking

Chips are the currency of the casino, and fraudulent people can manufacture fake chips to fool the casino and exchange them for money. Cheaters and thieves may also steal higher-denomination chips and come back later to exchange them. With tracking devices in every chip, the casino can keep track of where the chips are, and if it suspects theft, it can easily declare the chips invalid.

These trends in technology have helped casinos protect their customers so they can have a safe experience.

The post Biggest Casino Technology Innovations first appeared on Rumbling Edge.

Open Policy & Advocacy: Four key takeaways to CPRA, California’s latest privacy law

California is on the move again in the consumer privacy rights space. On Election Day 2020, California voters approved Proposition 24, the California Privacy Rights Act (CPRA). CPRA, commonly called CCPA 2.0, builds upon the less-than-two-year-old California Consumer Privacy Act (CCPA), continuing the momentum to put more control over personal data in people’s hands, adding compliance obligations for businesses, and creating a new California Privacy Protection Agency for regulation and enforcement.

With federal privacy legislation efforts stagnating in recent years, California continues to set the tone and expectations that lead privacy efforts in the US. Mozilla continues to support data privacy laws that empower people, including the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and now the California Privacy Rights Act (CPRA). And while CPRA is far from perfect, it does expand privacy protections in some important ways.

Here’s what you need to know. CPRA includes requirements we foresee as truly beneficial for consumers, such as additional rights to control their information (including sensitive personal information), data deletion, correction of inaccurate information, and resources for a centralized authority to ensure there is real enforcement of violations.

CPRA gives people more rights to opt-out of targeted advertising

We are heartened by the significant new right around “cross-context behavioral advertising.” At its core, this right allows consumers to exert more control and opt out of behavioral, targeted advertising; it will no longer matter whether the publisher “sells” their data or not.

This control is one that Mozilla has been a keen and active supporter of for almost a decade, from our efforts with the Do Not Track mechanism in Firefox, to Enhanced Tracking Protection, to our support of the Global Privacy Control experiment. However, this right is not exercised by default: users must take the extra step of opting in to benefit from it.

CPRA abolishes “dark patterns”

Another protection the CPRA brings is a prohibition on “dark patterns”: features of interface design meant to trick users into doing things they might not want to do but that ultimately benefit the business in question. Dark patterns are used in websites and apps to give the illusion of choice but are deliberately designed to deceive people.

For instance, privacy-preserving options, like opting out of tracking by companies, often take multiple clicks and multiple screens before you finally reach the opt-out button, while the option to accept the tracking is one simple click. This is only one of many types of dark patterns. This behavior fosters distrust in the internet ecosystem and is patently bad for people and the web. It needs to go. Mozilla also supports federal legislation that has been introduced to ban dark patterns.

CPRA introduces a new watchdog for privacy protection

The CPRA establishes a new data protection authority, the “California Privacy Protection Agency” (CPPA), the first of its kind in the US. This will improve enforcement significantly compared to what the currently responsible CA Attorney General is able to do, with limited capacity and priorities in other fields. The CPRA designates funds for the new agency that are expected to be around $100 million. How the CPRA will be interpreted and enforced will depend significantly on who makes up the five-member board of the new agency, to be created by mid-2021. Two of the board seats (including the chair) will be appointed by Gov. Newsom, one by the attorney general, another by the Senate Rules Committee, and the fifth by the Speaker of the Assembly, all to be filled in about 90 days.

CPRA requires companies to collect less data

CPRA requires businesses to minimize the collection of personal data (collect the least amount needed), a principle Mozilla has always fostered, internally and externally, as core to our values, products and services. While the law doesn’t elaborate on how this will be monitored and enforced, we think this principle is a good first step in fostering lean data approaches.

However, the CPRA in its current form still puts the responsibility on consumers to opt out of the sale and retention of personal data. It also allows data-processing businesses to create exemptions from the CCPA’s limit on charging consumers differently when they exercise their privacy rights. Neither provision matches our goal of “privacy as a default”.

CPRA becomes effective January 1, 2023, with a look-back period to January 2022. Until then, its provisions will need plenty of clarification and more detail from lawmakers and the new Privacy Protection Agency. This will be hard work for many, but we think the hard work is worth the payoff: for consumers and for the internet.

The post Four key takeaways to CPRA, California’s latest privacy law appeared first on Open Policy & Advocacy.

Mozilla Add-ons Blog: Extensions in Firefox 84

Here are our highlights of what’s coming up in the Firefox 84 release:

Manage Optional Permissions in Add-ons Manager

As we mentioned last time, users will be able to manage optional permissions of installed extensions from the Firefox Add-ons Manager (about:addons).

Optional permissions in about:addons

We recommend that extensions using optional permissions listen for browser.permissions.onAdded and browser.permissions.onRemoved API events. This ensures the extension is aware of the user granting or revoking optional permissions.


We would like to thank Tom Schuster for his contributions to this release.

The post Extensions in Firefox 84 appeared first on Mozilla Add-ons Blog.

Blog of Data: This Week in Glean: Fantastic Facts and where to find them

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

We have been working on Glean for a few years now, starting with an SDK with Android support and increasing our SDK platform coverage by implementing our core in Rust and providing language bindings for other platforms, well beyond the mobile space.

Before our next major leaps (FOG, Glean.js), we wanted to understand what our internal consumers thought of Glean: what challenges are they facing? Are we serving them well?

Disclaimer: I’m not a user researcher, and I did my best to study (and practice!) how to make sure our team’s and our customers’ time investment would not be wasted, but I might still have gotten things wrong! “Interviewing Users: How to Uncover Compelling Insights” by Steve Portigal really helped me to put things into perspective and get a better understanding of the process!

Here’s what I learned from trying to understand how users work with Glean:

1. Rely on user researchers!

Humans can ask questions, that’s a fact… right?

Yes, but that’s not sufficient to understand users and how they interact with the product. Getting a sense of what works for them and what doesn’t is much more difficult than asking “What would you like our product to do?”.

If your team has access to UX researchers, join efforts and get together to better understand your users 🙂

2. Define who to interview

Since I am on the Glean SDK team, we did not interview any of my team peers.

Glean has a very diverse user base: product managers, developers, data scientists, data engineers. All of them are using the Glean SDK, Pipeline and Tools in slightly different ways!

We decided to select representatives throughout Mozilla for each group. Moreover, we did not necessarily want feedback exclusively from people who already used Glean. Mozilla has a legacy telemetry system in Firefox most of the company was exposed to, so we made sure to include both existing Glean users and prospective Glean users.

We narrowed down the list to about 60 candidates, which was then refined to 50 candidates.

3. Logistics is important!

Before starting collecting feedback using the interview process, we tried to make sure that everything was in place to provide a smooth experience both for us and the interviewed folks, since all the interviews were performed online:

  • we set up a shared calendar to let interviewers grab interview slots;
  • we set up a template for a note-taking document, attempting to set a consistent style for all the notes;
  • we documented the high level structure of the interview: introduction (5 minutes) + conversation (30 minutes) + conclusions (5 minutes);
  • we set up email templates for inviting people to the interview; it included information about the interview, a link to an anonymous preliminary questionnaire (see the next point), a link to join the video meeting and a private link to the meeting notes.

4. Provide a way to anonymously send feedback

We knew that interviewing ~50 folks would take time, so we tried to get early feedback by sending an email to the engineering part of the company asking for input. By making the questionnaire anonymous, we additionally tried to make folks feel more comfortable providing honest feedback (and criticism!). We allowed participants to flag other participants and left the questionnaire open during the whole interview process. Some of the highlights were duplicated between the interviews and the questionnaire, but the latter got us a few insights that we were not able to capture in live interviews.

5. Team up!

Team support was vital during the whole process, as each interview required at least two people in addition to the interviewed one: the interviewer (the person actually focusing on the conversation) and the note taker. This allowed the interviewer to exclusively pay attention to the conversation, keeping track of the context and digging into aspects of the conversation that they deemed interesting. At the end of each interview, in the last 5 minutes, the note taker would ask any remaining questions or details.

6. Prepare relevant base questions

To consistently interview folks, we prepared a set of base questions to use in all the interviews. A few questions from this list would be different depending on the major groups of interviewees we identified: data scientists (both who used and not used Glean), product managers, developers (both who used and not used Glean).

We ended up with a list of 10 questions, privileging open-ended questions and avoiding leading or closed questions (well, except for the “Have you ever used Glean” part 🙂 ).

7. Always have post-interview sync ups

Due to the split between the note-taker and the interviewer, it was vital for us to have 15 minutes, right after the interview, to fill in missing information in the notes and share the context between the interviewing members.

We learned after the initial couple of interviews that the longer we waited for the sync-up meeting, the foggier the notes became in our minds.

8. Review and work on notes as you go

While we had post-interview sync-ups, the findings from each interview were noted down in the notes document for that interview. After about 20 interviews, we realized that all the insights needed to be in a single report: compiling it took us about a week at the end of the interviewing cycle, and it would have been much faster to note findings down in a structured format after each interview.

Well, we learn by making mistakes, right?

9. Publish the results

The interviews were a goldmine of discoveries: our assumptions were challenged in many areas, giving us a lot of food for thought. Our findings did not exclusively touch our team! For this reason we decided to create a presentation and disseminate the information about our process and its results in a biweekly meeting in our Data Org.

The raw insights have so far been shared with the relevant teams, who are triaging them.

10. Make the insights REALLY actionable

Our team is still working on this 🙂 We have received many insights from this process and are figuring out how to guarantee that all of them are considered and don’t fall back in our team’s backlog. We are considering reserving bandwidth to specifically address the most important ones on a monthly basis.

Many of the findings already informed our current choices and designs, some of them are changing our priorities and sparking new conversations in our whole organization.

We believe that getting user feedback early in the process is vital: part of this concern is eased by our proposal culture (before we dive into development, we asynchronously discuss with all the stakeholders on shared documents with the intent of ironing out the design, together!) but there’s indeed huge value in performing more frequent interviews (maybe on a smaller number of folks) with all the different user groups.

SeaMonkey: SeaMonkey is out!

Hi everyone,

This is a quick announcement that SeaMonkey has just been released.

Yes, it’s a quick update. So quick, I hadn’t even gotten a chance to update my installed version to 2.53.5. ;P


Open Policy & Advocacy: Mozilla DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period: Help us enhance security and privacy online

Update: We’ve extended the deadline of the comment period to January 20 2021.

For a number of years now, we have been working hard to update and secure one of the oldest parts of the Internet, the Domain Name System (DNS). We passed a key milestone in that endeavor earlier this year, when we rolled out the technical solution for privacy and security in the DNS – DNS-over-HTTPS (DoH) – to Firefox users in the United States. Given the transformative nature of this technology and our mission commitment to transparency and collaboration, we have consistently sought to implement DoH thoughtfully and inclusively. Therefore, as we explore how to bring the benefits of DoH to Firefox users in different regions of the world, we’re today launching a comment period to help inform our plans.

Some background

Before explaining our comment period, it’s first worth clarifying a few things about DoH and how we’re implementing it:

What is the ‘DNS’?

The Domain Name System (DNS for short) is a shared, public database that links a human-friendly website name to a computer-friendly series of numbers, called an IP address. By performing a “lookup” in this database, your web browser is able to find websites on your behalf. Because of how DNS was originally designed decades ago, browsers doing DNS lookups for websites, even for encrypted https:// sites, had to perform these lookups without encryption.

What are the security and privacy concerns with traditional DNS?

Because there is no encryption in traditional DNS, other entities along the way might collect (or even block or change) this data. These entities could include your Internet Service Provider (ISP) if you are connecting via a home network, your mobile network operator (MNO) if you are connecting on your phone, a WiFi hotspot vendor if you are connecting at a coffee shop, and even eavesdroppers in certain scenarios.

In the early days of the Internet, these kinds of threats to people’s privacy and security were known, but not being exploited yet. Today, we know that unencrypted DNS is not only vulnerable to spying but is being exploited, and so we are helping the Internet to make the shift to more secure alternatives. That’s where DoH comes in.

What is DoH and how does it mitigate these problems?

Following the best practice of encrypting HTTP traffic, Mozilla has worked with industry stakeholders at the Internet Engineering Task Force (IETF) to define a DNS encryption technology called DNS over HTTPS, or DoH (pronounced “dough”), specified in RFC 8484. It encrypts your DNS requests and responses between your device and the DNS resolver via HTTPS. Because DoH is an emerging Internet standard, operating system vendors and browsers other than Mozilla can also implement it. In fact, Google, Microsoft and Apple have either already implemented or are in late stages of implementing DoH in their respective browsers and/or operating systems, making it only a matter of time before it becomes a ubiquitous standard that helps improve security on the web.

How has Mozilla rolled out DoH so far?

Mozilla has deployed DoH to Firefox users in the United States, and as an opt-in feature for Firefox users in other regions. We are currently exploring how to expand deployment beyond the United States. Consistent with Mozilla’s mission, in countries where we roll out this feature the user is given an explicit choice to accept or decline DoH, with a default-on orientation to protect user privacy and security.

Importantly, our deployment of DoH adds an extra layer of user protection beyond simple encryption of DNS lookups. Our deployment includes a Trusted Recursive Resolver (TRR) program, whereby DoH lookups are routed only to DNS providers who have made binding legal commitments to adopt extra protections for user data (e.g., to limit data retention to operational purposes and to not sell or share user data with other parties). Firefox’s deployment of DoH is also designed to respect ISP offered parental control services where users have opted into them and offers techniques for it to operate with enterprise deployment policies.

The comment period

As we explore bringing the benefits of DoH to more users, in parallel, we’re launching a comment period to crowdsource ideas, recommendations, and insights that can help us maximise the security and privacy-enhancing benefits of our implementation of DoH in new regions. We welcome contributions from anyone who cares about the growth of a healthy, rights-protective and secure Internet.

Engaging with the Mozilla DoH implementation comment period

  • Length: The global public comment period will last for a total of 45 days, starting from November 19, 2020 and ending on January 4, 2021. Update: The deadline of the comment period has been extended to January 20 2021.
  • Audience: The consultation is open to all relevant stakeholders interested in a more secure, open and healthier Internet across the globe.
  • Questions for Consultation: A detailed set of questions which serve as a framework for the consultation are available here. It is not mandatory to respond to all questions.
  • Submitting comments: All responses can be submitted in plaintext or in the form of an accessible pdf to

Unless the author/authors explicitly opt-out in the email in which they submit their responses, all genuine responses will be made available publicly on our blog. Submissions that violate our Community Participation Guidelines will not be published.

Our goal is that DoH becomes as ubiquitous for DNS as HTTPS is for web traffic, supported by ISPs, MNOs, and enterprises worldwide to help protect both end users and DNS providers themselves. We hope this public comment will take us closer to that goal, and we look forward to hearing from stakeholders around the world in creating a healthier Internet.

The post Mozilla DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period: Help us enhance security and privacy online appeared first on Open Policy & Advocacy.

The Mozilla Blog: Release: Mozilla’s Greenhouse Gas emissions baseline

When we launched our Environmental Sustainability Programme in March 2020, we identified three strategic goals:

  1. Reduce and mitigate Mozilla’s organisational impact;
  2. Train and develop Mozilla staff to build with sustainability in mind;
  3. Raise awareness for sustainability, internally and externally.

Today, we are releasing our baseline Greenhouse Gas emissions (GHG) assessment for 2019, which forms the basis upon which we will build to reduce and mitigate Mozilla’s organisational impact.


GHG Inventory: Summary of Findings

Mozilla’s overall emissions in 2019 amounted to: 799,696 mtCO2e (metric tons of carbon dioxide equivalent).

    • Business Services and Operations: 14,222 mtCO2e
      • Purchased goods and services: 8,654 mtCO2e
      • Business travel: 2,657 mtCO2e
      • Events: 1,199 mtCO2e
      • Offices and co-locations: 1,195 mtCO2e
      • Remotees: 194 mtCO2e
      • Commute: 147 mtCO2e
    • Product use: 785,474 mtCO2e


How to read these emissions

There are two major buckets:

  1. Mozilla’s impact in terms of business services and operations, which we calculated with as much primary data as possible;
  2. The impact of the use of our products, which makes up roughly 98% of our overall emissions.
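The split between the two buckets is easy to check against the reported totals:

```python
total = 799_696        # overall 2019 emissions, mtCO2e
operations = 14_222    # business services and operations
product_use = 785_474  # use of products

assert operations + product_use == total
share = product_use / total  # roughly 98%, as stated above
```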

In 2019, the use of products spanned Firefox Desktop and Mobile, Pocket, and Hubs.

Their impact is significant, but it is an approximation. We can’t yet directly measure the energy required to run and use our products specifically. Instead, we estimate how much power is required to use the devices needed to access our products for the time that we know people spent on our products. In other words, we estimate the impact of desktop computers, laptops, tablets, and phones while being online overall.

For now, this helps us get a sense of the impact the internet is having on the environment. Going forward, we need to figure out how to reduce that share while continuing to grow and make the web open and accessible to all.

The emissions related to our business services and operations cover all other categories from the GHG protocol that are applicable to Mozilla.

For 2019, this includes 10 offices and 6 co-locations, purchased goods and services, events that we either host or run, all of our commercial travel including air, rail, ground transportation, and hotels, as well as estimates of the impact of our remote workforce and the commute of our office employees, which we gathered through an internal survey.


How we look at this data

  1. It’s easy to lose sight of all the work we’ve got to do to reduce our business services and operations emissions if we only look at the overarching distribution of our emissions: (pie chart visualising a 2% impact for Business Services and Operations and 98% for Product Use)
  2. If we zoom in on our business services and operations emissions, we’ll note that our average emissions per employee are: 12 mtCO2e.
  3. There is no doubt plenty of room for improvement. Our biggest area for improvement will likely be the Purchased Goods and Services category. Think: more local and sustainable sourcing, switching to vendors with ambitious climate targets and implementation plans, prolonging renewal cycles, and more. 
  4. In addition, we need to significantly increase the amount of renewable energy we procure for Mozilla spaces, which in 2019 was at: 27%.
  5. And while a majority of Mozillians already opt for a low-carbon commute, we’ll explore additional incentives here, too: (column chart listing the percentages of different commute modes globally)
  6. We will also look at our travel and event policies to determine where we add the most value, which trip is really necessary, how we can improve virtual participation, and where we have space for more local engagement.

You can find the long-form technical report in PDF format here.

We’ll be sharing more soon about what we learned from our first GHG assessment and how we’re planning to improve and mitigate our impact. Until then, please reach out to the team should you have any questions at

The post Release: Mozilla’s Greenhouse Gas emissions baseline appeared first on The Mozilla Blog.

Web Application SecurityMeasuring Middlebox Interference with DNS Records


The Domain Name System (DNS) is often referred to as the “phonebook of the Internet.” It is responsible for translating human-readable domain names–such as–into IP addresses, which are necessary for nearly all communication on the Internet. At a high level, clients typically resolve a name by sending a query to a recursive resolver, which is responsible for answering queries on behalf of a client. The recursive resolver answers the query by traversing the DNS hierarchy, starting from a root server, a top-level domain server (e.g. for .com), and finally the authoritative server for the domain name. Once the recursive resolver receives the answer for the query, it caches the answer and sends it back to the client.
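The traversal described above can be sketched as a toy model. All of the zone data, server names, and structure below are invented for illustration; real resolvers handle delegation records, TTLs, retries, and much more:

```javascript
// Toy model of recursive resolution: walk root -> TLD -> authoritative,
// then cache the answer before returning it to the client.
const zones = {
  root: { "com": "tld-com" },                         // root knows the TLD servers
  "tld-com": { "example.com": "auth-example" },       // TLD knows the authoritative servers
  "auth-example": { "example.com": "93.184.216.34" }, // authoritative holds the A record
};

const cache = new Map();

function resolve(name) {
  if (cache.has(name)) return cache.get(name);  // answer served straight from cache
  const tld = name.slice(name.lastIndexOf(".") + 1);
  const tldServer = zones.root[tld];            // step 1: ask a root server
  const authServer = zones[tldServer][name];    // step 2: ask the TLD server
  const answer = zones[authServer][name];       // step 3: ask the authoritative server
  cache.set(name, answer);                      // cache for subsequent clients
  return answer;
}
```

A second call to `resolve("example.com")` skips the hierarchy entirely and hits the cache, which is exactly why cache poisoning (discussed next) is so damaging.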

Unfortunately, DNS was not originally designed with security in mind, leaving users vulnerable to attacks. For example, previous work has shown that recursive resolvers are susceptible to cache poisoning attacks, in which on-path attackers impersonate authoritative nameservers and send incorrect answers for queries to recursive resolvers. These incorrect answers then get cached at the recursive resolver, which may cause clients that later query the same domain names to visit malicious websites. This attack is successful because the DNS protocol typically does not provide any notion of correctness for DNS responses. When a recursive resolver receives an answer for a query, it assumes that the answer is correct.

DNSSEC is able to prevent such attacks by enabling domain name owners to provide cryptographic signatures for their DNS records. It also establishes a chain of trust between servers in the DNS hierarchy, enabling clients to validate that they received the correct answer.

Unfortunately, DNSSEC deployment has been comparatively slow: measurements show, as of November 2020, only about 1.8% of .com records are signed, and about 25% of clients worldwide use DNSSEC-validating recursive resolvers. Even worse, essentially no clients validate DNSSEC themselves, which means that they have to trust their recursive resolvers.

One potential obstacle to client-side validation is network middleboxes. Measurements have shown that some middleboxes do not properly pass all DNS records. If a middlebox were to block the RRSIG records that carry DNSSEC signatures, clients would not be able to distinguish this from an attack, making DNSSEC deployment problematic. Unfortunately, these measurements were taken long ago and were not specifically targeted at DNSSEC. To get to the bottom of things, we decided to run an experiment.

Measurement Description

There are two main questions we want to answer:

  • At what rate do network middleboxes between clients and recursive resolvers interfere with DNSSEC records (e.g., DNSKEY and RRSIG)?
  • How does the rate of DNSSEC interference compare to interference with other relatively new record types (e.g., SMIMEA and HTTPSSVC)?

At a high level, in collaboration with Cloudflare we will first serve the above record types from domain names that we control. We will then deploy an add-on experiment to Firefox Beta desktop clients which requests each record type for our domain names. Finally, we will check whether we got the expected responses (or any response at all). As always, users who have opted out of sending telemetry or participating in studies will not receive the add-on.
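The final check, whether we got the expected response, can be sketched as a pure classification step. The function name, record encoding, and category labels here are invented for illustration, not the add-on's actual code:

```javascript
// Hypothetical sketch: compare the answer received for one of our controlled
// domain names against the answer we expect the authoritative server to give.
function classifyResponse(expected, observed) {
  if (observed === null) return "no-response";     // query blocked or dropped in transit
  if (observed.rcode !== 0) return "error-rcode";  // e.g. a SERVFAIL injected by a middlebox
  if (observed.answer === expected) return "expected";
  return "modified";                               // record rewritten along the path
}
```

Counting how often each record type lands in the non-"expected" buckets gives the interference rates the two questions above ask about.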

To analyze the rate of network middlebox interference with DNSSEC records, we will send DNS responses to our telemetry system, rather than performing any analysis locally within the client’s browser. This will enable us to see the different ways that DNS responses are interfered with without relying on whatever analysis logic we bake into our experiment’s add-on. In order to protect user privacy, we will only send information for the domain names in the experiment that we control—not for any other domain names for which a client issues requests when browsing the web. Furthermore, we are not collecting UDP, TCP, or IP headers. We are only collecting the payload of the DNS response, for which we know the expected format. The data we are interested in should not include identifying information about a client, unless middleboxes inject such information when they interfere with DNS requests/responses.

We are launching the experiment today to 1% of Firefox Beta desktop clients and expect to publish our initial results around the end of the year.

The post Measuring Middlebox Interference with DNS Records appeared first on Mozilla Security Blog.

hacks.mozilla.orgFoundations for the Future


This week the Servo project took a significant next step in bringing community-led transformative innovations to the web by announcing it will be hosted by the Linux Foundation.  Mozilla is pleased to see Servo, which began as a research effort in 2012, open new doors that can lead it to ever broader benefits for users and the web. Working together, the Servo project and Linux Foundation are a natural fit for nurturing continued growth of the Servo community, encouraging investment in development, and expanding availability and adoption.

Historical Retrospective

From the outset the Servo project was about pioneering new approaches to web browsing fundamentals leveraging the extraordinary advantages of the Rust programming language, itself a companion Mozilla research effort. Rethinking the architecture and implementation of core browser operations allowed Servo to demonstrate extraordinary benefits from parallelism and direct leverage of our increasingly sophisticated computer and mobile phone hardware.

Those successes inspired the thinking behind Project Quantum, which in 2017 delivered compelling improvements in user responsiveness and performance for Firefox in large part by incorporating Servo’s parallelized styling engine (“Stylo”) along with other Rust-based components. More recently, Servo’s WebRender GPU-based rendering subsystem delivered new performance enhancements for Firefox, and Servo branched out to become an equally important component of Mozilla’s Firefox Reality virtual reality browser.

What’s Next?

All along the way, the Servo project has been an exemplary showcase for the benefits of open source based community contribution and leadership. Many other individuals and organizations contributed much of the implementation work that has found its way into Servo, Firefox, Firefox Reality, and indirectly the Rust programming language itself.

Mozilla is excited at what the future holds for Servo. Organizing and managing an effort at the scale and reach of Servo is no small thing, and the Linux Foundation is an ideal home with all the operational expertise and practical capabilities to help the project fully realize its potential. The term “graduate” somehow feels appropriate for this transition, and Mozilla could not be prouder or more enthusiastic.

For more information about the Servo project and to contribute, please visit


The post Foundations for the Future appeared first on Mozilla Hacks - the Web developer blog.

about:communityWelcoming New Contributors: Firefox 83

With the release of Firefox 83, we are pleased to welcome all the developers who’ve contributed their first code change to Firefox in this release, 18 of whom are brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

hacks.mozilla.orgFirefox 83 is upon us

Did November spawn a monster this year? In truth, November has given us a few snippets of good news, far from the least of which is the launch of Firefox 83! In this release we’ve got a few nice additions, including conic CSS gradients, overflow debugging in the Developer Tools, enabling of WebRender across more platforms, and more besides.

This blog post provides merely a set of highlights; for all the details, check out the following:


In the HTML Pane, scrollable elements have a “scroll” badge next to them, which you can now toggle to highlight elements causing an overflow (expanding nodes as needed to make them visible):

devtools page inspector showing a scroll badge next to an element that is scrolling

You will also see an “overflow” badge next to the node causing the overflow.

Firefox devtools screenshot showing an overflow badge next to a child element that is causing its parent to overflow

In addition, if you hover over the node(s) causing the overflow, the UI will show a “ghost” of the content so you can see how far it overflows.

Firefox UI showing a highlighted paragraph and a ghost of the hidden overflow content

These new features are very useful for helping to debug problems related to overflow.

Web platform additions

Now let’s see what’s been added to Gecko in Firefox 83.

Conic gradients

We’ve had support for linear gradients and radial gradients in CSS images (e.g. in background-image) for a long time. Now in Firefox 83 we can finally add support for conic gradients to that list!

You can create a really simple conic gradient using two colors:

conic-gradient(red, orange);

simple conic gradient that goes from red to orange

But there are many options available. A more complex syntax example could look like so:

conic-gradient(
  from 45deg /* vary starting angle */
  at 30% 40%, /* vary position of gradient center */
  red, /* include multiple color stops */
  indigo 80%, /* vary angle of individual color stops */
  violet 90%
);

complex conic gradient showing all the colors of the rainbow, positioned off center

And in the same manner as the other gradient types, you can create repeating conic gradients:

repeating-conic-gradient(#ccc 20deg, #666 40deg)

repeating conic gradient that continually goes from dark gray to light gray

For more information and examples, check out our conic-gradient() reference page, and the Using CSS gradients guide.

WebRender comes to more platforms

We started work on our WebRender rendering architecture a number of years ago, with the aim of delivering the whole web at 60fps. This has already been enabled for Windows 10 users with suitable hardware, but today we bring the WebRender experience to Win7, Win8 and macOS 10.12 to 10.15 (not 10.16 beta as yet).

It’s an exciting time for Firefox performance — try it now, and let us know what you think!

Pinch to zoom on desktop

Last but not least, we’d like to draw your attention to pinch to zoom on desktop — this has long been requested, and finally we are in a position to enable pinch to zoom support for:

  • Windows laptop touchscreens
  • Windows laptop touchpads
  • macOS laptop touchpads

The post Firefox 83 is upon us appeared first on Mozilla Hacks - the Web developer blog.

Web Application SecurityFirefox 83 introduces HTTPS-Only Mode


Security on the web matters. Whenever you connect to a web page and enter a password, a credit card number, or other sensitive information, you want to be sure that this information is kept secure. Whether you are writing a personal email or reading a page on a medical condition, you don’t want that information leaked to eavesdroppers on the network who have no business prying into your personal communications.

That’s why Mozilla is pleased to introduce HTTPS-Only Mode, a brand-new security feature available in Firefox 83. When you enable HTTPS-Only Mode:

  • Firefox attempts to establish fully secure connections to every website, and
  • Firefox asks for your permission before connecting to a website that doesn’t support secure connections.


How HTTPS-Only Mode works

The Hypertext Transfer Protocol (HTTP) is a fundamental protocol through which web browsers and websites communicate. However, data transferred by the regular HTTP protocol is unprotected and transferred in cleartext, such that attackers are able to view, steal, or even tamper with the transmitted data. HTTP over TLS (HTTPS) fixes this security shortcoming by creating a secure and encrypted connection between your browser and the website you’re visiting. You know a website is using HTTPS when you see the lock icon in the address bar:

The majority of websites already support HTTPS, and those that don’t are increasingly uncommon. Regrettably, websites often fall back to using the insecure and outdated HTTP protocol. Additionally, the web contains millions of legacy HTTP links that point to insecure versions of websites. When you click on such a link, browsers traditionally connect to the website using the insecure HTTP protocol.

In light of the very high availability of HTTPS, we believe that it is time to let our users choose to always use HTTPS. That’s why we have created HTTPS-Only Mode, which ensures that Firefox doesn’t make any insecure connections without your permission. When you enable HTTPS-Only Mode, Firefox tries to establish a fully secure connection to the website you are visiting.

Whether you click on an HTTP link, or you manually enter an HTTP address, Firefox will use HTTPS instead. Here’s what that upgrade looks like:
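The upgrade itself amounts to rewriting the URL scheme before the request is made. Here is a minimal sketch using the standard URL API (the function name is invented, and this is not Firefox's actual implementation, which works at the network layer):

```javascript
// Rewrite an http:// URL to https:// — host, path, and query are untouched;
// only the scheme changes. Non-HTTP URLs pass through unchanged.
function upgradeToHttps(urlString) {
  const url = new URL(urlString);
  if (url.protocol === "http:") {
    url.protocol = "https:";
  }
  return url.toString();
}
```

For example, `upgradeToHttps("http://example.com/page?q=1")` yields `"https://example.com/page?q=1"`, while an already-secure URL is returned as-is.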


How to turn on HTTPS-Only Mode

If you are eager to try this new security enhancing feature, enabling HTTPS-Only Mode is simple:

  1. Click on Firefox’s menu button and choose “Preferences”.
  2. Select “Privacy & Security” and scroll down to the section “HTTPS-Only Mode”.
  3. Choose “Enable HTTPS-Only Mode in all windows”.

Once HTTPS-Only Mode is turned on, you can browse the web as you always do, with confidence that Firefox will upgrade web connections to be secure whenever possible, and keep you safe by default. For the small number of websites that don’t yet support HTTPS, Firefox will display an error message that explains the security risk and asks you whether or not you want to connect to the website using HTTP. Here’s what the error message looks like:

It can also happen, rarely, that a website itself is available over HTTPS but resources within the website, such as images or videos, are not available over HTTPS. Consequently, some web pages may not look right or might malfunction. In that case, you can temporarily disable HTTPS-Only Mode for that site by clicking the lock icon in the address bar:

The future of the web is HTTPS-Only

Once HTTPS becomes even more widely supported by websites than it is today, we expect it will be possible for web browsers to deprecate HTTP connections and require HTTPS for all websites. In summary, HTTPS-Only Mode is the future of web browsing!

Thank You

We are grateful to many Mozillians for making HTTPS-Only Mode possible, including but not limited to the work of Meridel Walkington, Eric Pang, Martin Thomson, Steven Englehardt, Alice Fleischmann, Angela Lazar, Mikal Lewis, Wennie Leung, Frederik Braun, Tom Ritter, June Wilde, Sebastian Streich, Daniel Veditz, Prangya Basu, Dragana Damjanovic, Valentin Gosu, Chris Lonnen, Andrew Overholt, and Selena Deckelmann. We also want to acknowledge the work of our friends at the EFF, who pioneered a similar approach in HTTPS Everywhere’s EASE Mode. It’s a privilege to work with people who are passionate about building the web we want: free, independent and secure.


The post Firefox 83 introduces HTTPS-Only Mode appeared first on Mozilla Security Blog.

SeaMonkeySeaMonkey 2.53.5

Hi everyone,

Just want to update everyone that SeaMonkey 2.53.5 has been released.

The team is also following this up with a .1 release due to a bug in the video code (that’s as specific as I can get, as I haven’t caught up to what’s going on).


Web Application SecurityPreloading Intermediate CA Certificates into Firefox

Throughout 2020, Firefox users have been seeing fewer secure connection errors while browsing the Web. We’ve been improving connection errors overall for some time, and a new feature called Intermediate Certificate Authority (CA) Preloading is our latest innovation. This technique reduces connection errors that users encounter when web servers forget to properly configure their TLS security.

In essence, Firefox pre-downloads all trusted Web Public Key Infrastructure (PKI) intermediate CA certificates into Firefox via Mozilla’s Remote Settings infrastructure. This way, Firefox users avoid seeing an error page for one of the most common server configuration problems: not specifying proper intermediate CA certificates.

For Intermediate CA Preloading to work, we need to be able to enumerate every intermediate CA certificate that is part of the trusted Web PKI. As a result of Mozilla’s leadership in the CA community, each CA in Mozilla’s Root Store Policy is required to disclose these intermediate CA certificates to the multi-browser Common CA Database (CCADB). Consequently, all of the relevant intermediate CA certificates are available via the CCADB reporting mechanisms. Given this information, we periodically synthesize a list of these intermediate CA certificates and place them into Remote Settings. Currently the list contains over two thousand entries.

When Firefox receives the list for the first time (or later receives updates to the list), it enumerates the entries in batches and downloads the corresponding intermediate CA certificates in the background. The list changes slowly, so once a copy of Firefox has completed the initial downloads, it’s easy to keep it up-to-date. The list can be examined directly using your favorite JSON tooling at this URL:

For details on processing the records, see the Kinto Attachment plugin for Kinto, used by Firefox Remote Settings.

Certificates provided via Intermediate CA Preloading are added to a local cache and are not imbued with trust. Trust is still derived from the standard Web PKI algorithms.

Our collected telemetry confirms that enabling Intermediate CA Preloading in Firefox 68 has led to a decrease of unknown issuers errors in the TLS Handshake.

unknown issuer errors declining after Firefox Beta 68

While there are other factors that affect the relative prevalence of this error, this data supports the conclusion that Intermediate CA Preloading is achieving the goal of avoiding these connection errors for Firefox users.

Intermediate CA Preloading is reducing errors today in Firefox for desktop users, and we’ll be working to roll it out to our mobile users in the future.

The post Preloading Intermediate CA Certificates into Firefox appeared first on Mozilla Security Blog.

hacks.mozilla.orgWarp: Improved JS performance in Firefox 83


We have enabled Warp, a significant update to SpiderMonkey, by default in Firefox 83. SpiderMonkey is the JavaScript engine used in the Firefox web browser.

With Warp (also called WarpBuilder) we’re making big changes to our JIT (just-in-time) compilers, resulting in improved responsiveness, faster page loads and better memory usage. The new architecture is also more maintainable and unlocks additional SpiderMonkey improvements.

This post explains how Warp works and how it made SpiderMonkey faster.

How Warp works

Multiple JITs

The first step when running JavaScript is to parse the source code into bytecode, a lower-level representation. Bytecode can be executed immediately using an interpreter or can be compiled to native code by a just-in-time (JIT) compiler. Modern JavaScript engines have multiple tiered execution engines.

JS functions may switch between tiers depending on the expected benefit of switching:

  • Interpreters and baseline JITs have fast compilation times, perform only basic code optimizations (typically based on Inline Caches), and collect profiling data.
  • The Optimizing JIT performs advanced compiler optimizations but has slower compilation times and uses more memory, so is only used for functions that are warm (called many times).

The optimizing JIT makes assumptions based on the profiling data collected by the other tiers. If these assumptions turn out to be wrong, the optimized code is discarded. When this happens the function resumes execution in the baseline tiers and has to warm-up again (this is called a bailout).

For SpiderMonkey it looks like this (simplified):
Diagram: Baseline Interpreter/JIT; after warm-up, the Ion/Warp JIT; a bailout arrow leads from Ion/Warp back to Baseline.

Profiling data

Our previous optimizing JIT, Ion, used two very different systems for gathering profiling information to guide JIT optimizations. The first is Type Inference (TI), which collects global information about the types of objects used in the JS code. The second is CacheIR, a simple linear bytecode format used by the Baseline Interpreter and the Baseline JIT as the fundamental optimization primitive. Ion mostly relied on TI, but occasionally used CacheIR information when TI data was unavailable.

With Warp, we’ve changed our optimizing JIT to rely solely on CacheIR data collected by the baseline tiers. Here’s what this looks like:
overview of profiling data as described in the text

There’s a lot of information here, but the thing to note is that we’ve replaced the IonBuilder frontend (outlined in red) with the simpler WarpBuilder frontend (outlined in green). IonBuilder and WarpBuilder both produce Ion MIR, an intermediate representation used by the optimizing JIT backend.

Where IonBuilder used TI data gathered from the whole engine to generate MIR, WarpBuilder generates MIR using the same CacheIR that the Baseline Interpreter and Baseline JIT use to generate Inline Caches (ICs). As we’ll see below, the tighter integration between Warp and the lower tiers has several advantages.

How CacheIR works

Consider the following JS function:

function f(o) {
    return o.x - 1;
}

The Baseline Interpreter and Baseline JIT use two Inline Caches for this function: one for the property access (o.x), and one for the subtraction. That’s because we can’t optimize this function without knowing the types of o and o.x.

The IC for the property access, o.x, will be invoked with the value of o. It can then attach an IC stub (a small piece of machine code) to optimize this operation. In SpiderMonkey this works by first generating CacheIR (a simple linear bytecode format, you could think of it as an optimization recipe). For example, if o is an object and x is a simple data property, we generate this:

GuardToObject        inputId 0
GuardShape           objId 0, shapeOffset 0
LoadFixedSlotResult  objId 0, offsetOffset 8

Here we first guard the input (o) is an object, then we guard on the object’s shape (which determines the object’s properties and layout), and then we load the value of o.x from the object’s slots.

Note that the shape and the property’s index in the slots array are stored in a separate data section, not baked into the CacheIR or IC code itself. The CacheIR refers to the offsets of these fields with shapeOffset and offsetOffset. This allows many different IC stubs to share the same generated code, reducing compilation overhead.
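That separation between shared stub code and a per-stub data section can be modeled in plain JavaScript. This is a toy sketch (real IC stubs are machine code, and `shapeOf`, `MISS`, and the data-section fields here are invented stand-ins):

```javascript
// Sentinel returned when a guard fails and the IC stub does not apply.
const MISS = Symbol("ic-miss");

// Stand-in for an object's shape: the ordered list of its property names.
const shapeOf = (obj) => Object.keys(obj).join(",");

// Shared "stub code": one function body, parameterized by a per-stub data
// section holding the shape and property, mirroring how CacheIR refers to
// shapeOffset/offsetOffset instead of baking the values into the code.
function makeLoadStub(data) {
  return (obj) => {
    if (typeof obj !== "object" || obj === null) return MISS; // GuardToObject
    if (shapeOf(obj) !== data.shape) return MISS;             // GuardShape
    return obj[data.property];                                // LoadFixedSlotResult
  };
}
```

Two stubs made from different data sections share the one `makeLoadStub` body, which is the point: many ICs, one piece of generated code.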

The IC then compiles this CacheIR snippet to machine code. Now, the Baseline Interpreter and Baseline JIT can execute this operation quickly without calling into C++ code.

The subtraction IC works the same way. If o.x is an int32 value, the subtraction IC will be invoked with two int32 values and the IC will generate the following CacheIR to optimize that case:

GuardToInt32     inputId 0
GuardToInt32     inputId 1
Int32SubResult   lhsId 0, rhsId 1

This means we first guard the left-hand side is an int32 value, then we guard the right-hand side is an int32 value, and we can then perform the int32 subtraction and return the result from the IC stub to the function.

The CacheIR instructions capture everything we need to do to optimize an operation. We have a few hundred CacheIR instructions, defined in a YAML file. These are the building blocks for our JIT optimization pipeline.
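The guard-then-act structure of those snippets can be modeled with a tiny evaluator. The op names follow the listings above, but the JavaScript encoding is invented for illustration; real CacheIR is compiled, not interpreted like this:

```javascript
// Toy evaluator for the subtraction snippet: run each guard in order,
// bail out on the first failure, otherwise perform the operation.
function runCacheIR(ops, inputs) {
  for (const op of ops) {
    switch (op.op) {
      case "GuardToInt32":
        if (!Number.isInteger(inputs[op.inputId])) return { bailout: true };
        break;
      case "Int32SubResult":
        return { result: (inputs[op.lhsId] - inputs[op.rhsId]) | 0 };
    }
  }
  return { bailout: true };
}

// The subtraction IC from the text, encoded as data.
const subIC = [
  { op: "GuardToInt32", inputId: 0 },
  { op: "GuardToInt32", inputId: 1 },
  { op: "Int32SubResult", lhsId: 0, rhsId: 1 },
];
```

Passing two int32 values takes the fast path; passing a double fails the first guard, mirroring how the real stub falls back to a slower path.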

Warp: Transpiling CacheIR to MIR

If a JS function gets called many times, we want to compile it with the optimizing compiler. With Warp there are three steps:

  1. WarpOracle: runs on the main thread, creates a snapshot that includes the Baseline CacheIR data.
  2. WarpBuilder: runs off-thread, builds MIR from the snapshot.
  3. Optimizing JIT Backend: also runs off-thread, optimizes the MIR and generates machine code.

The WarpOracle phase runs on the main thread and is very fast. The actual MIR building can be done on a background thread. This is an improvement over IonBuilder, where we had to do MIR building on the main thread because it relied on a lot of global data structures for Type Inference.

WarpBuilder has a transpiler to transpile CacheIR to MIR. This is a very mechanical process: for each CacheIR instruction, it just generates the corresponding MIR instruction(s).

Putting this all together we get the following picture (click for a larger version):

We’re very excited about this design: when we make changes to the CacheIR instructions, it automatically affects all of our JIT tiers (see the blue arrows in the picture above). Warp is simply weaving together the function’s bytecode and CacheIR instructions into a single MIR graph.

Our old MIR builder (IonBuilder) had a lot of complicated code that we don’t need in WarpBuilder because all the JS semantics are captured by the CacheIR data we also need for ICs.

Trial Inlining: type specializing inlined functions

Optimizing JavaScript JITs are able to inline JavaScript functions into the caller. With Warp we are taking this a step further: Warp is also able to specialize inlined functions based on the call site.

Consider our example function again:

function f(o) {
    return o.x - 1;
}

This function may be called from multiple places, each passing a different shape of object or different types for o.x. In this case, the inline caches will have polymorphic CacheIR IC stubs, even if each of the callers only passes a single type. If we inline the function in Warp, we won’t be able to optimize it as well as we want.

To solve this problem, we introduced a novel optimization called Trial Inlining. Every function has an ICScript, which stores the CacheIR and IC data for that function. Before we Warp-compile a function, we scan the Baseline ICs in that function to search for calls to inlinable functions. For each inlinable call site, we create a new ICScript for the callee function. Whenever we call the inlining candidate, instead of using the default ICScript for the callee, we pass in the new specialized ICScript. This means that the Baseline Interpreter, Baseline JIT, and Warp will now collect and use information specialized for that call site.

Trial inlining is very powerful because it works recursively. For example, consider the following JS code:

function callWithArg(fun, x) {
    return fun(x);
}

function test(a) {
    var b = callWithArg(x => x + 1, a);
    var c = callWithArg(x => x - 1, a);
    return b + c;
}

When we perform trial inlining for the test function, we will generate a specialized ICScript for each of the callWithArg calls. Later on, we attempt recursive trial inlining in those caller-specialized callWithArg functions, and we can then specialize the fun call based on the caller. This was not possible in IonBuilder.

When it’s time to Warp-compile the test function, we have the caller-specialized CacheIR data and can generate optimal code.

This means we build up the inlining graph before functions are Warp-compiled, by (recursively) specializing Baseline IC data at call sites. Warp then just inlines based on that without needing its own inlining heuristics.
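The payoff of per-call-site ICScripts can be illustrated with a toy feedback table. The site identifiers and type strings below are invented; real ICScripts hold CacheIR and IC data, not strings:

```javascript
// Toy model: a shared callee pools type feedback from every caller and goes
// polymorphic; per-call-site ICScripts keep each site's feedback monomorphic.
const sharedFeedback = new Set();   // default ICScript: one pool for all callers
const perSiteFeedback = new Map();  // trial inlining: one pool per call site

function recordCall(siteId, argType) {
  sharedFeedback.add(argType);
  if (!perSiteFeedback.has(siteId)) perSiteFeedback.set(siteId, new Set());
  perSiteFeedback.get(siteId).add(argType);
}

recordCall("test:site1", "Int32");   // first call site only ever sees Int32
recordCall("test:site2", "String");  // second call site only ever sees String
```

After these calls the shared pool holds two types (polymorphic), while each per-site pool holds exactly one, which is the information Warp needs to generate specialized code for each call site.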

Optimizing built-in functions

IonBuilder was able to inline certain built-in functions directly. This is especially useful for things like Math.abs and Array.prototype.push, because we can implement them with a few machine instructions and that’s a lot faster than calling the function.

Because Warp is driven by CacheIR, we decided to generate optimized CacheIR for calls to these functions.

This means these built-ins are now also properly optimized with IC stubs in our Baseline Interpreter and JIT. The new design leads us to generate the right CacheIR instructions, which then benefits not just Warp but all of our JIT tiers.

For example, let’s look at a Math.pow call with two int32 arguments. We generate the following CacheIR:

LoadArgumentFixedSlot      resultId 1, slotIndex 3
GuardToObject              inputId 1
GuardSpecificFunction      funId 1, expectedOffset 0, nargsAndFlagsOffset 8
LoadArgumentFixedSlot      resultId 2, slotIndex 1
LoadArgumentFixedSlot      resultId 3, slotIndex 0
GuardToInt32               inputId 2
GuardToInt32               inputId 3
Int32PowResult             lhsId 2, rhsId 3

First, we guard that the callee is the built-in pow function. Then we load the two arguments and guard they are int32 values. Then we perform the pow operation specialized for two int32 arguments and return the result of that from the IC stub.

Furthermore, the Int32PowResult CacheIR instruction is also used to optimize the JS exponentiation operator, x ** y. For that operator we might generate:

GuardToInt32               inputId 0
GuardToInt32               inputId 1
Int32PowResult             lhsId 0, rhsId 1

When we added Warp transpiler support for Int32PowResult, Warp was able to optimize both the exponentiation operator and Math.pow without additional changes. This is a nice example of CacheIR providing building blocks that can be used for optimizing different operations.



Warp is faster than Ion on many workloads. The picture below shows a couple of examples: we had a 20% improvement on Google Docs load time, and we are about 10-12% faster on the Speedometer benchmark:
20% faster on GDocs, 10-12% faster on Speedometer

We’ve seen similar page load and responsiveness improvements on other JS-intensive websites such as Reddit and Netflix. Feedback from Nightly users has been positive as well.

The improvements are largely because basing Warp on CacheIR lets us remove the code throughout the engine that was required to track the global type inference data used by IonBuilder, resulting in speedups across the engine.

The old system required all functions to track type information that was only useful in very hot functions. With Warp, the profiling information (CacheIR) used to optimize Warp is also used to speed up code running in the Baseline Interpreter and Baseline JIT.

Warp is also able to do more work off-thread and requires fewer recompilations (the previous design often overspecialized, resulting in many bailouts).

Synthetic JS benchmarks

Warp is currently slower than Ion on certain synthetic JS benchmarks such as Octane and Kraken. This isn’t too surprising because Warp has to compete with almost a decade of optimization work and tuning for those benchmarks specifically.

We believe these benchmarks are not representative of modern JS code (see also the V8 team’s blog post on this) and the regressions are outweighed by the large speedups and other improvements elsewhere.

That said, we will continue to optimize Warp in the coming months and we expect to see improvements on all of these workloads going forward.

Memory usage

Removing the global type inference data also means we use less memory. For example the picture below shows JS code in Firefox uses 8% less memory when loading a number of websites (tp6):
8% less memory on the tp6 suite

We expect this number to improve in the coming months as we remove the old code and are able to simplify more data structures.

Faster GCs

The type inference data also added a lot of overhead to garbage collection. We noticed some big improvements in our telemetry data for GC sweeping (one of the phases of our GC) when we enabled Warp by default in Firefox Nightly on September 23:
Drop in GC-sweeping times when Warp landed; for example, the mean went from around 30 ms to around 20 ms

Maintainability and Developer Velocity

Because WarpBuilder is a lot more mechanical than IonBuilder, we’ve found the code to be much simpler, more compact, more maintainable and less error-prone. By using CacheIR everywhere, we can add new optimizations with much less code. This makes it easier for the team to improve performance and implement new features.

What’s next?

With Warp we have replaced the frontend (the MIR building phase) of the IonMonkey JIT. The next step is removing the old code and architecture. This will likely happen in Firefox 85. We expect additional performance and memory usage improvements from that.

We will also continue to incrementally simplify and optimize the backend of the IonMonkey JIT. We believe there’s still a lot of room for improvement for JS-intensive workloads.

Finally, because all of our JITs are now based on CacheIR data, we are working on a tool to let us (and web developers) explore the CacheIR data for a JS function. We hope this will help developers understand JS performance better.


Most of the work on Warp was done by Caroline Cullen, Iain Ireland, Jan de Mooij, and our amazing contributors André Bargull and Tom Schuster. The rest of the SpiderMonkey team provided us with a lot of feedback and ideas. Christian Holler and Gary Kwong reported various fuzz bugs.

Thanks to Ted Campbell, Caroline Cullen, Steven DeTar, Matthew Gaudet, Melissa Thermidor, and especially Iain Ireland for their great feedback and suggestions for this post.

The post Warp: Improved JS performance in Firefox 83 appeared first on Mozilla Hacks - the Web developer blog.

Firefox UX: How to Write Microcopy That Improves the User Experience

Writing clear microcopy can sometimes take a surprising amount of time.

Photo of typesetting letters of various shapes and sizes.<figcaption>Photo by Amador Loureiro on Unsplash.</figcaption>

The small bits of copy you see sprinkled throughout apps and websites are called microcopy. As content designers, we think deeply about what each word communicates.

Microcopy is the tidiest of UI copy types. But do not let its crisp, contained presentation fool you: the process to get to those final, perfect words can be messy. Very messy. Multiple drafts messy, mired with business and technical constraints.

Quotation attributed to Blaise Pascal that reads: I would have written a shorter letter, but I did not have the time.<figcaption>Blaise Pascal, translated from Lettres Provinciales</figcaption>

Here’s a secret about good writing that no one ever tells you: When you encounter clear UX content, it’s a result of editing and revision. The person who wrote those words likely had a dozen or more versions you’ll never see. They probably had some business or technical constraints to consider, too.

If you’ve ever wondered what all goes into writing microcopy, pour yourself a micro-cup of espresso or a micro-brew and read on!

1. Understand the component and how it behaves.

As a content designer, you should try to consider as many cases as possible up front. Work with Design and Engineering to test the limits of the container you’re writing for. What will copy look like when it’s really short? What will it look like when it’s really long? You might be surprised what you discover.

Images of two Firefox iOS widgets: Quick Actions and Top Sites.<figcaption>Before writing the microcopy for iOS widgets, I needed first to understand the component and how it worked.</figcaption>

As an example, I wrote the descriptions for new iOS widgets. Apple recently introduced these widgets to elevate app content and actions directly on your Home Screen. You can use Firefox widgets to start a search, close tabs, or open a favorite site. Each one has a corresponding description that tells you what the widget does.

Before I sat down to write a single word of microcopy for the widget descriptions, I needed to know the following:

  • Is there a character limit for the widget descriptions?
  • What happens if the copy expands beyond that character limit? Does it truncate?
  • We had three widget sizes. Would this impact the available space for descriptions?

Because these widgets didn’t yet exist in the wild for me to interact with, I asked Engineering to help answer my questions. Engineering played with variations of character length in a testing environment to see how the UI might change.

Image of two iOS testing widgets side-by-side, one with a long description and one with a short description.<figcaption>Engineering tried variations of copy length for the descriptions in a testing environment. This helped us understand surprising behavior in the template itself.</figcaption>

We learned the template behaved in a peculiar way. The widget would shrink to accommodate a longer description. Then, the template would essentially lock to that size. Even if other widgets had shorter descriptions, the widgets themselves would appear teeny. You had to strain your eyes to read any text on the widget itself. Long story short, the descriptions needed to be as concise as possible. This would leave room for localization and keep the widgets from shrinking.
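A tight character budget like this is the kind of constraint you can sanity-check in a few lines before handing strings to Engineering. Here is a minimal sketch; the 50-character budget and the description strings are made-up examples, not Apple's actual limits or shipped Firefox copy:

```typescript
// Sketch: flag any UI strings that exceed a character budget.
// The budget of 50 is a hypothetical example, not a real platform limit.

const CHAR_BUDGET = 50;

function overBudget(
  strings: Record<string, string>,
  budget: number = CHAR_BUDGET
): string[] {
  // Return the keys of all strings longer than the budget.
  return Object.entries(strings)
    .filter(([, text]) => text.length > budget)
    .map(([key]) => key);
}

// Hypothetical widget descriptions, for illustration only.
const widgetDescriptions = {
  quickActions: "Search or close tabs right from your Home Screen.",
  topSites: "Open a favorite site with one tap.",
};

console.log(overBudget(widgetDescriptions)); // → [] (both fit the budget)
```

A check like this is no substitute for testing real strings in the real template, but it catches obvious overruns early, including in localized strings, which often run longer than the English source.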

First learning how the widgets behaved was a crucial step to writing effective microcopy. Build relationships with cross-functional peers so you can ask those questions and understand the limitations of the component you need to write for.

2. Spit out your first draft. Then revise, revise, revise.

The next step is the writing. And the rewriting. In my own experience, I rarely land on the final line of microcopy right away.

Image of a Mark Twain quote:<figcaption>Mark Twain, The Wit and Wisdom of Mark Twain</figcaption>

Now that I understood my constraints around the widgets, I was ready to start typing. I typically work through several versions in a Google Doc, wearing out my delete key as I keep reworking until I get it ‘right.’

Image of a table outlining iterations of microcopy written and an assessment of each one.<figcaption>I wrote several iterations of the description for this widget to maximize the limited space and make the microcopy as useful as possible.</figcaption>

Microcopy strives to provide maximum clarity in a limited amount of space. Every word counts and has to work hard. It’s worth the effort to analyze each word and ask yourself if it’s serving you as well as it could. Consider tense, voice, and other words on the screen.

3. Solicit feedback on your work.

Before delivering final strings to Engineering, it’s always a good practice to get a second set of eyes from a fellow team member (this could be a content designer, UX designer, or researcher). Someone less familiar with the problem space can help spot confusing language or superfluous words.

In many cases, our team also runs copy by our localization team to understand if the language might be too US-centric. Sometimes we will add a note for our localizers to explain the context and intent of the message. We also do a legal review with in-house product counsel. These extra checks give us better confidence in the microcopy we ultimately ship.

Wrapping up

Magical microcopy doesn’t shoot from our fingertips as we type (though we wish it did)! If we have any trade secrets to share, it’s only that first we seek to understand our constraints, then we revise, tweak, and rethink our words many times over. Ideally we bring in a partner to help us further refine and help us catch blind spots. This is why writing short can take time.

If you’re tasked with writing microcopy, first learn as much as you can about the component you are writing for, particularly its constraints. When you finally sit down to write, don’t worry about getting it right the first time. Get your thoughts on paper, reflect on what you can improve, then repeat. You’ll get crisper and cleaner with each draft.


Thank you to my editors Meridel Walkington and Sharon Bautista for your excellent notes and suggestions on this post. Thanks to Emanuela Damiani for the Figma help.

How to Write Microcopy That Improves the User Experience was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox UX: We’re Changing Our Name to Content Design

The Firefox UX content team has a new name that better reflects how we work.

Co-authored with Betsy Mikel

Stock photo image of a stack of blank business cards<figcaption>Photo by Brando Makes Branding on Unsplash</figcaption>

Hello. We’re the Firefox Content Design team. We’ve actually met before, but our name then was the Firefox Content Strategy team.

Why did we change our name to Content Design, you ask? Well, for a few (good) reasons.

It better captures what we do

We are designers, and our material is content. Content can be words, but it can be other things, too, like layout, hierarchy, iconography, and illustration. Words are one of the foundational elements in our design toolkit — similar to color or typography for visual designers — but it’s not always written words, and words aren’t created in a vacuum. Our practice is informed by research and an understanding of the holistic user journey. The type of content, and how content appears, is something we also create in close partnership with UX designers and researchers.

“Then, instead of saying ‘How shall I write this?’, you say, ‘What content will best meet this need?’ The answer might be words, but it might also be other things: pictures, diagrams, charts, links, calendars, a series of questions and answers, videos […], and many more besides. When your job is to decide which of those, or which combination of several of them, meets the user’s need — that’s content design.”
— Sarah Richards defined the content design practice in her seminal work, Content Design

It helps others understand how to work with us

While content strategy accurately captures the full breadth of what we do, this descriptor is better understood by those doing content strategy work or very familiar with it. And, as we know from writing product copy, accuracy is not synonymous with clarity.

Strategy can also sound like something we create on our own and then lob over a fence. In contrast, design is understood as an immersive and collaborative practice, grounded in solving user problems and business goals together.

Content design is thus a clearer descriptor for a broader audience. When we collaborate cross-functionally (with product managers, engineers, marketing), it’s important they understand what to expect from our contributions, and how and when to engage us in the process. We often get asked: “When is the right time to bring in content?” And the answer is: “The same time you’d bring in a designer.”

We’re aligning with the field

Content strategy is a job title often used by the much larger field of marketing content strategy or publishing. There are website content strategists, SEO content strategists, and social media content strategists, all of whom do different types of content-related work. Content design is a job title specific to product and user experience.

And, making this change is in keeping with where the field is going. Organizations like Slack, Netflix, Intuit, and IBM also use content design, and practice leaders Shopify and Facebook recently made the change, articulating reasons that we share and echo here.

It distinguishes the totality of our work from copywriting

Writing interface copy is about 10% of what we do. While we do write words that appear in the product, it’s often at the end of a thoughtful design process that we participate in or lead.

We’re still doing all the same things we did as content strategists, and we are still strategic in how we work (and shouldn’t everyone be strategic in how they work, anyway?), but we are choosing a title that better captures the unseen but equally important work we do to arrive at the words.

It’s the best option for us, but there’s no ‘right’ option

Job titles are tricky, especially for an emerging field like content design. The fact that titles are up for debate and actively evolving shows just how new our profession is. While there have been people creating product content experiences for a while, the field is only now really starting to professionalize and expand. For example, we just got our first dedicated content design and UX writing conference this year with Button.

Content strategy can be a good umbrella term for the activities of content design and UX writing. Larger teams might choose to differentiate more, staffing specialized strategists, content designers, and UX writers. For now, content design is the best option for us, where we are, and the context and organization in which we work.

“There’s no ‘correct’ job title or description for this work. There’s not a single way you should contribute to your teams or help others understand what you do.”
 — Metts & Welfle, Writing is Designing

Words matter

We’re documenting our name change publicly because, as our fellow content designers know, words matter. They reflect but also shape reality.

We feel a bit self-conscious about this declaration, and maybe that’s because we are the newest guests at the UX party — so new that we are still writing, and rewriting, our name tag. So, hi, it’s nice to see you (again). We’re happy to be here.

Thank you to Michelle Heubusch, Gemma Petrie, and Katie Caldwell for reviewing this post.

We’re Changing Our Name to Content Design was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.


SeaMonkey: Updating on the updates…

Where are the updates?  It’s been so long… soo sooo long that I’ve forgotten when anyone got their browser updated automagically (as someone once said).

The release process is still half-manual, half-automatic. I had hoped to make it totally automated; but we still have IanN and frg manually building them in a manual-scripting method. So that’s lacking; but it’s nearer than before. Well, a hell of a lot nearer than the updates situation. No, that’s not saying a lot. Kinda akin to starting a 10km hike and saying I’m near the finish line. But it’s more like three quarters of the way to the destination.

As for the updates… *sigh*. I had thought I was nearly done; but upon looking at the whole mess, I’m far from completing it. Old versions and new versions. Should we care about those using 2.0x? What about 1.x? At this point, I hate to say this but, those still using 1.x have no way of updating to whatever it is they can update to. Even before this update process got hosed, 1.x required manual updating to 2.x (read: install anew).

So what about 2.0?  Then we have the issue of platform support.  Thanks to XP (and OSX Lion, and other old OSX systems) being de-supported, that adds a lot to what needs to be done.   Sounds kinda like ‘excusey’, I know.

The problem now boils down to what I want vs. what I can do with whatever time I can allot to dealing with this problem.

What I want:

I’d like to have an update system that can update whatever version to whatever *max* version that can be supported on the user’s system.    This is the most ideal setting.

What I can do:

Do we live in an ideal world?  No.   I feel the pressure.  I really do.  Psychologically I feel the anger and frustration.  I sense the pitchforks and the tar and feather.   I know they’re out to get me…

What I can’t do:

I unfortunately cannot set up updates for unsupported systems. The obvious cases are the OS/2s, FreeBSDs, and other systems I see checking for updates. Yes, even Sunbird; I can’t do anything about it.

Anyway…  I am currently working on the updates system.   I had hoped to build upon the old one (the pre-removal from Mozilla’s aus2 server) and while we did use it,   I had problems understanding the code.   So I built a newer system but due to the lack of concentration (haven’t been in the zone for a long time),  I couldn’t get things going.   Now, hopefully, I can concentrate on this.

I *really* need to get things working.  Those people with the pitchforks are gonna come for me.   Worst case scenario,  I get removed from the project and I’m relegated to peeling spuds  [which brings me to the name Spuds Mackenzie for some reason, but I digress].

Anyway, what I say here is wholly and solely my $0.02. The other devs have nothing to do with this post. So if the pitchforks do come… the other devs have nothing to do with this….

Sometimes… I feel like “Arnold J. Rimmer” in Red Dwarf after the Astronavigation Exam, where he stands up, does his curt double-Rimmer salute to the examiner… except in his case, he says “I am a fish”… I say, “I am an idiot.” And then he passes out onto the floor. I’m not yet at the passing-out stage though. So I guess… nothing like him. Now… Ace… that’s someone I’d like to aspire to… “Smoke me a kipper and I’ll be back for breakfast”





Blog of Data: This week in Glean: Glean.js

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

In a previous TWiG blog post, I talked about my experiment on trying to compile glean-core to Wasm. The motivation for that experiment was the then upcoming Glean.js workweek, where some of us were going to take a pass at building a proof-of-concept implementation of Glean in Javascript. That blog post ends on the following note:

My conclusion is that although we can compile glean-core to Wasm, it doesn’t mean that we should do that. The advantages of having a single source of truth for the Glean SDK are very enticing, but at the moment it would be more practical to rewrite something specific for the web.

When I wrote that post, we hadn’t gone through the Glean.js workweek and were not sure yet if it would be viable to pursue a new implementation of Glean in Javascript.

I am not going to keep up the suspense, though. During that workweek we were able to implement a proof-of-concept version of Glean that works in Javascript environments. It:

  • Persisted data throughout application runs (e.g. client_id);
  • Allowed for recording event metrics;
  • Sent Glean schema compliant pings to the pipeline.

And all of this, we were able to make work on:

  • Static websites;
  • Svelte apps;
  • Node.js server applications;
  • Electron apps;
  • Node.js command-line applications;
  • Qt/QML apps.

Check out the code for this project on:

The outcome of the workweek confirmed it was possible and worth it to go ahead with Glean.js. For the past weeks the Glean SDK team has officially started working on the roadmap for this project’s MVP.

Our plan is to have an MVP of Glean.js that can be used on webextensions by February 2021.

The reason for our initial focus on webextensions is that the Ion project has volunteered to be Glean.js’ first consumer. Support for static websites and Qt/QML apps will follow. Other consumers such as Node.js servers and CLIs are not part of the initial roadmap.

Although we learned a lot by building the POC, we were probably left with more open questions than answered ones. The Javascript environment is a very special one and when we set out to build something that can work virtually anywhere that runs Javascript, we were embarking on an adventure.

Each Javascript environment has different resources the developer can interact with. Let’s think, for example, about persistence solutions: on web browsers we can use localStorage or IndexedDB, but on Node.js servers / CLIs we would need to go another way completely and use LevelDB or some other external library. What is the best way to deal with this, and what exactly are the differences between environments?
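One plausible way to paper over those environment differences is a small storage interface with per-environment adapters, so core code never touches localStorage or LevelDB directly. The sketch below is an assumed design, not Glean.js's actual API; `Store`, `MemoryStore`, and `loadOrCreateClientId` are hypothetical names:

```typescript
// Sketch (assumed design, not Glean.js's real API): one storage interface,
// with an adapter per environment.

interface Store {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// Browser adapter backed by localStorage.
class LocalStorageStore implements Store {
  async get(key: string): Promise<string | undefined> {
    return (globalThis as any).localStorage?.getItem(key) ?? undefined;
  }
  async set(key: string, value: string): Promise<void> {
    (globalThis as any).localStorage?.setItem(key, value);
  }
}

// In-memory fallback, usable for tests or Node.js prototypes; a real
// Node.js build could swap in a LevelDB-backed adapter instead.
class MemoryStore implements Store {
  private data = new Map<string, string>();
  async get(key: string): Promise<string | undefined> {
    return this.data.get(key);
  }
  async set(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
}

// Core code depends only on the interface, not the environment.
async function loadOrCreateClientId(store: Store): Promise<string> {
  let id = await store.get("client_id");
  if (!id) {
    id = Math.random().toString(36).slice(2); // placeholder, not a real UUID
    await store.set("client_id", id);
  }
  return id;
}
```

The appeal of this shape is that questions like "which backend on which platform?" get answered once, in the adapters, while metric and ping logic stays identical everywhere.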

The issue of having different resources is not even the most challenging one. Glean defines internal metrics and their lifetimes, and internal pings and their schedules. This is important so that our users can do base analysis without having any custom metrics or pings. The hardest open question we were left with was: what pings should Glean.js send out of the box and what should their scheduling look like?

Because Glean.js opens up possibilities for such varied consumers: from websites to CLIs, defining scheduling that will be universal for all of its consumers is probably not even possible. If we decide to tackle these questions for each environment separately, we are still facing tricky consumers and consumers that we are not used to, such as websites and web extensions.

Websites specifically come with many questions: how can we guarantee client-side data persistence when a user can easily delete all of it by running some code in the console or tweaking browser settings? What is the best ping schedule when each website can have such a different usage lifecycle?

We are excited to tackle these and many other challenges in the coming months. Development of the roadmap can be followed on Bug 1670910.

Mozilla Add-ons Blog: Contribute to selecting new Recommended extensions

Recommended extensions—a curated list of extensions that meet Mozilla’s highest standards of security, functionality, and user experience—are in part selected with input from a rotating editorial board of community contributors. Each board runs for six consecutive months and evaluates a small batch of new Recommended candidates each month. The board’s evaluation plays a critical role in helping identify new potential Recommended additions.

We are now accepting applications for community board members through 18 November. If you would like to nominate yourself for consideration, please email us at amo-featured [at] mozilla [dot] org with a brief explanation of why you feel you’d make a keen evaluator of Firefox extensions. We’d love to hear how you use extensions and what you find so remarkable about browser customization. You don’t have to be an extension developer to effectively evaluate Recommended candidates (indeed, many past board members have been developers themselves), but you should have a strong familiarity with extensions and be comfortable assessing the strengths and flaws of their functionality and user experience.

Selected contributors will participate in a six-month project that runs from December to May.

Here’s the entire collection of Recommended extensions, if you’re curious to explore what’s currently curated.

Thank you and we look forward to hearing from interested contributors by the 18 November application deadline!

The post Contribute to selecting new Recommended extensions appeared first on Mozilla Add-ons Blog.

Mozilla L10N: Introducing source string comments in Pontoon

Published on behalf of April Bowler.

When we first shipped the ability to add comments within Pontoon we mentioned that there were some additional features that would be coming, and I’m happy to announce that those features are now active.

What are those features, you say? Let’s have a look:


Mentions

If you have a comment that you need to ensure is seen by a particular person on the project, you can now ‘mention’ that person by typing the “@” symbol and their name. Once the comment is submitted, that person will be notified that they have been mentioned in a comment.

Pinned Comments

Project Managers can now Pin comments within the Comments panel in the 3rd column. This will not only add a visible pin to the comment, it will also place the comment within the source string Metadata section in the middle column and make it visible globally across all locales.

Request Context or Report Issue

Also present in the top section of the middle column is a new button that allows localizers to request more context or report an issue with the source string. When clicked, this button opens the Comments panel and inserts a mention for the project’s contact person, ensuring that they receive a notification about the comment.


The Request Context or Report Issue button allows localizers to ping Project Managers, who can pin their responses to make them globally visible for everyone.

Final Thoughts

As you can probably tell by the descriptions, all of these features work hand-in-hand with each other to help improve the workflow and communication within Pontoon.

For example, if you run into an ambiguous string, you can request context and the contact person will clarify it for you. If the meaning of the source string is not clear for the general audience, they can also pin their response, which will make it visible to everyone and even notify users who already translated or reviewed the string.

It has truly been a pleasure to work on this feature, first as part of my Outreachy internship and then as a contributor, and I hope that it has a positive impact on your work within Pontoon. I look forward to making continued contributions.

hacks.mozilla.org: MDN Web Docs evolves! Lowdown on the upcoming new platform

The time has come for Kuma — the platform that powers MDN Web Docs — to evolve. For quite some time now, the MDN developer team has been planning a radical platform change, and we are ready to start sharing the details of it. The question on your lips might be “What does a Kuma evolve into? A KumaMaMa?”

What? Kuma is evolving!

For those of you not so into Pokémon, the question might instead be “How exactly is MDN changing, and how does it affect MDN users and contributors?”

For general users, the answer is easy — there will be very little change to how we serve the great content you use every day to learn and do your jobs.

For contributors, the answer is a bit more complex.

The changes in a nutshell

In short, we are updating the platform to move the content from a MySQL database to being hosted in a GitHub repository (codename: Project Yari).

Congratulations! Your Kuma evolved into Yari

The main advantages of this approach are:

  • Less developer maintenance burden: The existing (Kuma) platform is complex and hard to maintain. Adding new features is very difficult. The update will vastly simplify the platform code — we estimate that we can remove a significant chunk of the existing codebase, meaning easier maintenance and contributions.
  • Better contribution workflow: We will be using GitHub’s contribution tools and features, essentially moving MDN from a Wiki model to a pull request (PR) model. This is so much better for contribution, allowing for intelligent linting, mass edits, and inclusion of MDN docs in whatever workflows you want to add it to (you can edit MDN source files directly in your favorite code editor).
  • Better community building: At the moment, MDN content edits are published instantly, and then reverted if they are not suitable. This is really bad for community relations. With a PR model, we can review edits and provide feedback, actually having conversations with contributors, building relationships with them, and helping them learn.
  • Improved front-end architecture: The existing MDN platform has a number of front-end inconsistencies and accessibility issues, which we’ve wanted to tackle for some time. The move to a new, simplified platform gives us a perfect opportunity to fix such issues.

The exact form of the platform is yet to be finalized, and we want to involve you, the community, in helping to provide ideas and test the new contribution workflow! We will have a beta version of the new platform ready for testing on November 2, and the first release will happen on December 14.

Simplified back-end platform

We are replacing the current MDN Wiki platform with a JAMStack approach, which publishes the content managed in a GitHub repo. This has a number of advantages over the existing Wiki platform, and is something we’ve been considering for a number of years.

Before we discuss our new approach, let’s review the Wiki model so we can better understand the changes we’re making.

Current MDN Wiki platform


workflow diagram of the old kuma platform

It’s important to note that both content contributors (writers) and content viewers (readers) are served via the same architecture. That architecture has to accommodate both use cases, even though more than 99% of our traffic comprises document page requests from readers. Currently, when a document page is requested, the latest version of the document is read from our MySQL database, rendered into its final HTML form, and returned to the user via the CDN.

That document page is stored and served from the CDN’s cache for the next 5 minutes, so subsequent requests — as long as they’re within that 5-minute window — will be served directly by the CDN. That caching period of 5 minutes is kept deliberately short, mainly due to the fact that we need to accommodate the needs of the writers. If we only had to accommodate the needs of the readers, we could significantly increase the caching period and serve our document pages more quickly, while at the same time reducing the workload on our backend servers.
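As an illustration only (these are not MDN’s actual header values), the trade-off can be expressed in Cache-Control terms:

```
# Wiki model: short TTL so writers' edits become visible quickly.
Cache-Control: public, max-age=300

# Static model: pages change only on deploy, so the TTL can be much longer.
Cache-Control: public, max-age=86400
```

With writers served by a separate pipeline, the reader-facing TTL no longer has to be constrained by editing needs.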

You’ll also notice that because MDN is a Wiki platform, we’re responsible for managing all of the content, and tasks like storing document revisions, displaying the revision history of a document, displaying differences between revisions, and so on. Currently, the MDN development team maintains a large chunk of code devoted to just these kinds of tasks.

New MDN platform


workflow diagram of the new yari platform

With the new JAMStack approach, the writers are served separately from the readers. The writers manage the document content via a GitHub repository and pull request model, while the readers are served document pages more quickly and efficiently via pre-rendered document pages served from S3 via a CDN (which will have a much longer caching period). The document content from our GitHub repository will be rendered and deployed to S3 on a daily basis.

You’ll notice, from the diagram above, that even with this new approach, we still have a Kubernetes cluster with Django-based services relying on a relational database. The important thing to remember is that this part of the system is no longer involved with the document content. Its scope has been dramatically reduced, and it now exists solely to provide APIs related to user accounts (e.g. login) and search.

This separation of concerns has multiple benefits, the most important three of which are as follows:

  • First, the document pages are served to readers in the simplest, quickest, and most efficient way possible. That’s really important, because 99% of MDN’s traffic is for readers, and worldwide performance is fundamental to the user experience.
  • Second, because we’re using GitHub to manage our document content, we can take advantage of the world-class functionality that GitHub has to offer as a content management system, and we no longer have to support the large body of code related to our current Wiki platform. It can simply be deleted.
  • Third, and maybe less obvious, is that this new approach brings more power to the platform. We can, for example, perform automated linting and testing on each content pull request, which allows us to better control quality and security.

New contribution workflow

Because MDN content is soon to be contained in a GitHub repo, the contribution workflow will change significantly. You will no longer be able to click Edit on a page, make and save a change, and have it show up nearly immediately on the page. You’ll also no longer be able to do your edits in a WYSIWYG editor.

Instead, you’ll need to use git/GitHub tooling to make changes, submit pull requests, then wait for changes to be merged, the new build to be deployed, etc. For very simple changes such as fixing typos or adding new paragraphs, this may seem like a step back — Kuma is certainly convenient for such edits, and for non-developer contributors.

However, making a simple change is arguably no more complex with Yari. You can use the GitHub UI’s edit feature to directly edit a source file and then submit a PR, meaning that you don’t have to be a git genius to contribute simple fixes.

For more complex changes, you’ll need to use the git CLI or a GUI tool like GitHub Desktop. Then again, git is so ubiquitous in the web industry that it is safe to say that if you are interested in editing MDN, you will probably need to know it to some degree for your career or course anyway. You could use this as a good opportunity to learn git if you don’t know it already! On top of that there is a file system structure to learn, and some new tools and commands to get used to, but nothing terribly complex.

Another possible challenge is that you won’t have a WYSIWYG editor to instantly see what the page looks like as you add your content, and you’ll be editing raw HTML, at least initially (we are talking about converting the content to Markdown eventually, but that is still some way off). Again, this sounds like a step backwards, but we are providing a tool inside the repo so that you can locally build and preview the finished page, to make sure it looks right before you submit your pull request.

Looking at the advantages now, consider that making MDN content available as a GitHub repo is a very powerful thing. We no longer have spam content live on the site, with us then having to revert the changes after the fact. You are also free to edit MDN content in whatever way suits you best — your favorite IDE or code editor — and you can add MDN documentation into your preferred toolchain (and write your own tools to customize your MDN editing experience). A lot of engineers have told us in the past that they’d be much happier to contribute to MDN documentation if they were able to submit pull requests, and not have to use a WYSIWYG!

We are also looking into a powerful toolset that will allow us to enhance the reviewing process, for example as part of a CI process — automatically detecting and closing spam PRs, and as mentioned earlier on, linting pages once they’ve been edited, and delivering feedback to editors.

Having MDN in a GitHub repo also offers much easier mass edits; blanket content changes have previously been very difficult.

Finally, the “time to live” should be acceptable — we are aiming for a quick turnaround on reviews, and the deployment process will run every 24 hours. We think your changes should be live on the site within 48 hours as a worst-case scenario.

Better community building

Currently MDN is not a very lively place in terms of its community. We have a fairly active learning forum where people ask beginner coding questions and seek help with assessments, but there is not really an active place where MDN staff and volunteers get together regularly to discuss documentation needs and contributions.

Part of this is down to our contribution model. When you edit an MDN page, either your contribution is accepted and you don’t hear anything, or your contribution is reverted and you … don’t hear anything. You’ll only know either way by looking to see if your edit sticks, is counter-edited, or is reverted.

This doesn’t strike us as very friendly, and I think you’ll probably agree. When we move to a git PR model, the MDN community will be able to provide hands-on assistance in helping people to get their contributions right — offering assistance as we review their PRs (and offering automated help too, as mentioned previously) — and also thanking people for their help.

It’ll also be much easier for contributors to show how many contributions they’ve made, and we’ll be adding in-page links to allow people to file an issue on a specific page or even go straight to the source on GitHub and fix it themselves, if a problem is encountered.

In terms of finding a good place to chat about MDN content, you can join the discussion on the MDN Web Docs chat room on Matrix.

Improved front-end architecture

The old Kuma architecture has a number of front-end issues. Historically we have lacked a well-defined system that clearly describes the constraints we need to work within and what our site features look like, and this has left us with a bloated, difficult-to-maintain front-end codebase. Working on our current HTML and CSS is like riding a roller coaster with no guard-rails.

To be clear, this is not the fault of any one person, or any specific period in the life of the MDN project. There are many little things that have been left to fester, multiply, and rot over time.

Among the most significant problems are:

  • Accessibility: There are a number of accessibility problems with the existing architecture that really should be sorted out, but were difficult to get a handle on because of Kuma’s complexity.
  • Component inconsistency: Kuma doesn’t use a proper design system — similar items are implemented in different ways across the site, so implementing features is more difficult than it needs to be.

When we started to move forward with the back-end platform rewrite, it felt like the perfect time to again propose the idea of a design system. After many conversations leading to an acceptable compromise being reached, our design system — MDN Fiori — was born.

Front-end developer Schalk Neethling and UX designer Mustafa Al-Qinneh took a whirlwind tour through the core of MDN’s reference docs to identify components and document all the inconsistencies we are dealing with. As part of this work, we also looked for areas where we can improve the user experience, and introduce consistency through making small changes to some core underlying aspects of the overall design.

This included a defined color palette, simple, clean typography based on a well-defined type scale, consistent spacing, improved support for mobile and tablet devices, and many other small tweaks. This was never meant to be a redesign of MDN, so we had to be careful not to change too much. Instead, we played to our existing strengths and made rogue styles and markup consistent with the overall project.

Besides the visual consistency and general user experience aspects, our underlying codebase needed some serious love and attention — we decided on a complete rethink. Early on in the process it became clear that we needed a base library that was small, nimble, and minimal. Something uniquely MDN, but that could be reused wherever the core aspects of the MDN brand are needed. For this purpose we created MDN-Minimalist, a small set of core atoms that power the base styling of MDN, in a progressively enhanced manner, taking advantage of the beautiful new layout systems we have access to on the web today.

Each component that is built into Yari is styled with MDN-Minimalist, and also has its own style sheet that lives right alongside it, applying further styles only when needed. This is an evolving process, as we constantly rethink how to provide a great user experience while staying as close to the web platform as possible. The reasons for this are twofold:

  • First, it means less code. It means less reinventing of the wheel. It means a faster, leaner, less bandwidth-hungry MDN for our end users.
  • Second, it helps address some of the accessibility issues we have begrudgingly been living with for some time, which are simply not acceptable on a modern web site. One of Mozilla’s accessibility experts, Marco Zehe, has given us a lot of input to help overcome these. We won’t fix everything in our first iteration, but our pledge to all of our users is that we will keep improving and we welcome your feedback on areas where we can improve further.

A wise person once said that the best way to ensure something is done right is to make doing the right thing the easy thing to do. As such, along with all of the work already mentioned, we are documenting our front-end codebase, design system, and pattern library in Storybook (see Storybook files inside the yari repo) with companion design work in Figma (see typography example) to ensure there is an easy, public reference for anyone who wishes to contribute to MDN from a code or design perspective. This in itself is a large project that will evolve over time. More communication about its evolution will follow.

The future of MDN localization

One important part of MDN’s content that we have talked about a lot during the planning phase is the localized content. As you probably already know, MDN offers facilities for translating the original English content and making the localizations available alongside it.

This is good in principle, but the current system has many flaws. When an English page is moved, its localizations all have to be moved separately, so pages and their localizations quite often go out of sync and get into a mess. A bigger problem is that there is no easy way of signalling to all the localizers that the English version has changed.

General management is probably the most significant problem. You often get a wave of enthusiasm for a locale, and lots of translations done. But then after a number of months interest wanes, and no-one is left to keep the translations up to date. The localized content becomes outdated, which is often harmful to learning, becomes a maintenance time-suck, and as a result, is often considered worse than having no localizations at all.

Note that we are not saying this is true of all locales on MDN, and we are not trying to downplay the amount of work volunteers have put into creating localized content. For that, we are eternally grateful. But the fact remains that we can’t carry on like this.

We did a bunch of research, and talked to a lot of non-native-English-speaking web developers about what would be useful to them. Two interesting conclusions emerged:

  1. We stand to experience a significant but manageable loss of users if we remove or reduce our localization support. Eight languages cover 90% of the accept-language headers received from MDN users (en, zh, es, ja, fr, ru, pt, de), while 14 languages cover 95% of the accept-language headers (en, zh, es, ja, fr, ru, pt, de, ko, zh-TW, pl, it, nl, tr). We estimate that we would lose at most 19% of our traffic if we dropped l10n entirely.
  2. Machine translations are an acceptable solution in most cases, if not a perfect one. We looked at the quality of translations provided by automated solutions such as Google Translate and got some community members to compare these translations to manual translations. The machine translations were imperfect, and sometimes hard to understand, but many people commented that a non-perfect language that is up-to-date is better than a perfect language that is out-of-date. We appreciate that some languages (such as CJK languages) fare less well than others with automated translations.

So what did we decide? With the initial release of the new platform, we are planning to include all translations of all of the current documents, but in a frozen state. Translations will exist in their own mdn/translated-content repository, to which we will not accept any pull requests. The translations will be shown with a special header that says “This is an archived translation. No more edits are being accepted.” This is a temporary stage until we figure out the next step.

Note: In addition, the text of the UI components and header menu will be in English only, going forward. They will not be translated, at least not initially.

After the initial release, we want to work with you, the community, to figure out the best course of action to move forward with for translations. We would ideally rather not lose localized content on MDN, but we need to fix the technical problems of the past, manage it better, and ensure that the content stays up-to-date.

We will be planning the next phase of MDN localization with the following guiding principles:

  • We should never have outdated localized content on MDN.
  • Manually localizing all MDN content in a huge range of locales seems infeasible, so we should drop that approach.
  • Losing ~20% of traffic is something we should avoid, if possible.

We are making no promises about deliverables or time frames yet, but we have started to think along these lines:

  • Cut down the number of locales we are handling to the top 14 locales that give us 95% of our recorded accept-language headers.
  • Initially include non-editable Machine Learning-based automated translations of the “tier-1” MDN content pages (i.e. a set of the most important MDN content that excludes the vast long tail of articles that get no, or nearly no views). Ideally we’d like to use the existing manual translations to train the Machine Learning system, hopefully getting better results. This is likely to be the first thing we’ll work on in 2021.
  • Regularly update the automated translations as the English content changes, keeping them up-to-date.
  • Start to offer a system whereby we allow community members to improve the automated translations with manual edits. This would require the community to ensure that articles are kept up-to-date with the English versions as they are updated.


I’d like to thank my colleagues Schalk Neethling, Ryan Johnson, Peter Bengtsson, Rina Tambo Jensen, Hermina Condei, Melissa Thermidor, and anyone else I’ve forgotten who helped me polish this article with bits of content, feedback, reviews, edits, and more.

The post MDN Web Docs evolves! Lowdown on the upcoming new platform appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons Blog: Extensions in Firefox 83

In addition to our brief update on extensions in Firefox 83, this post contains information about changes to the Firefox release calendar and a feature preview for Firefox 84.

Thanks to a contribution from Richa Sharma, the error message logged when tabs.sendMessage is passed an invalid tab ID is now much easier to understand. It had regressed to a generic message due to an earlier refactoring.
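For context, here is a sketch of the kind of call involved, with a small validity check of our own devising in front of it (the helper and the guard are illustrative, not part of the API; `browser.tabs.query` and `browser.tabs.sendMessage` are standard WebExtensions APIs):

```javascript
// Our own helper: Firefox tab IDs are non-negative integers.
function isValidTabId(tabId) {
  return Number.isInteger(tabId) && tabId >= 0;
}

// Message the active tab, guarding against a missing or invalid tab ID.
// Passing an invalid ID through to tabs.sendMessage is exactly the case
// whose error message has now been improved.
async function messageActiveTab(payload) {
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  if (!tab || !isValidTabId(tab.id)) {
    throw new Error("No active tab available to message");
  }
  return browser.tabs.sendMessage(tab.id, payload);
}
```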

End of Year Release Calendar

The end of 2020 is approaching (yay?), and as usual people will be taking time off and will be less available. To account for this, the Firefox Release Calendar has been updated to extend the Firefox 85 release cycle by 2 weeks. We will release Firefox 84 on 15 December and Firefox 85 on 26 January. The regular 4-week release cadence should resume after that.

Coming soon in Firefox 84: Manage Optional Permissions in Add-ons Manager

Starting with Firefox 84, currently available on the Nightly pre-release channel, users will be able to manage optional permissions of installed extensions from the Firefox Add-ons Manager (about:addons).

Optional permissions in about:addons

We recommend that extensions using optional permissions listen for the browser.permissions.onAdded and browser.permissions.onRemoved API events. This ensures the extension is aware of the user granting or revoking optional permissions.
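A minimal sketch of that pattern follows. The `changeIncludes` helper and the `"history"` permission are illustrative assumptions; the `onAdded`/`onRemoved` events and the shape of their argument come from the WebExtensions permissions API:

```javascript
// Our own helper: does a permissions-change event include a given permission?
function changeIncludes(change, name) {
  return Array.isArray(change.permissions) && change.permissions.includes(name);
}

// Register listeners only when running inside an extension context.
if (typeof browser !== "undefined" && browser.permissions) {
  browser.permissions.onAdded.addListener((change) => {
    if (changeIncludes(change, "history")) {
      // The user granted the optional "history" permission:
      // enable the feature that depends on it (hypothetical).
    }
  });
  browser.permissions.onRemoved.addListener((change) => {
    if (changeIncludes(change, "history")) {
      // The user revoked it from about:addons: tear the feature down again.
    }
  });
}
```

Without such listeners, an extension may keep assuming a permission it no longer holds after the user revokes it from the Add-ons Manager.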

The post Extensions in Firefox 83 appeared first on Mozilla Add-ons Blog.

SeaMonkey: SeaMonkey 2.53.5 Beta 1..

Hi All,

Just want to drop a simple note announcing to the world that SeaMonkey 2.53.5 beta 1 has been released.

We, at the project, hope that everyone is staying vigilant against this pandemic.

Please do keep safe and healthy.



Mozilla L10N: L10n Report: October 2020 Edition

New content and projects

What’s new or coming up in Firefox desktop

Upcoming deadlines:

  • Firefox 83 is currently in beta and will be released on November 17. The deadline to update localization is on November 8 (see the previous l10n report to understand why it moved closer to the release date).
  • There might be changes to the release schedule in December. If that happens, we’ll make sure to cover them in the upcoming report.

The number of new strings remains pretty low, but there was a change that landed without new string IDs: English switched from a hyphen (-) to an em dash (—) as the separator for window titles. Since this is a choice that belongs to each locale, and the current translation might already be correct, we decided not to invalidate all existing translations, and to notify localizers instead.

You can see the details of the strings that changed in this changeset. The full list of IDs, in case you want to search for them in Pontoon:

  • browser-main-window
  • browser-main-window-mac
  • page-info-page
  • page-info-frame
  • webrtc-indicator-title
  • tabs.containers.tooltip
  • webrtcIndicator.windowtitle
  • timeline.cssanimation.nameLabel
  • timeline.csstransition.nameLabel
  • timeline.scriptanimation.nameLabel
  • toolbox.titleTemplate1
  • toolbox.titleTemplate2
  • TitleWithStatus
  • profileTooltip

Alternatively, you can search for “ – ” in Pontoon, but you’ll have to skim through a lot of results, since Pontoon also searches in comments.

What’s new or coming up in mobile

Firefox for iOS v29, as well as iOS v14, both recently shipped – bringing with them the possibility to make Firefox your default browser for the first time ever!

v29 also introduced a Firefox homescreen widget, and more widgets will likely come soon. Congratulations to all for helping localize these awesome new features!

v30 strings have just recently been exposed on Pontoon, and the deadline for l10n strings completion is November 4th. Screenshots will be updated for testing very soon.

Firefox for Android (“Fenix”) is currently open for localizing v83 strings. The next couple of weeks should be used for completing your current pending strings, as well as testing your work for the release. Take a look at our updated docs here!

What’s new or coming up in web projects

Recently, the monthly WNP pages that went out with Firefox releases contained content promoting features that were not available in global markets, or campaign messages that targeted a select few markets. In these situations, for all other locales, the page was redirected to the evergreen WNP page. As a result, please make completing that page a high priority if your locale has not done so.

More pages were added and migrated to Fluent format. For migrated content, the web team has decided not to make any edits until there is a major content update or page layout redesign. Please take the time to resolve errors first, as the page may already have been activated. If a placeable error is not fixed, it will show up on production.

Common Voice & WebThings Gateway

Both projects now have a new point of contact. If you want to add a new language or have any questions with the strings, send an email directly to the person first. Follow the projects’ latest development through the channels on Discourse. The l10n-drivers will continue to provide support to both teams through the Pontoon platform.

What’s new or coming up in SuMo

Please help us localize the following articles for Firefox 82 (desktop and Android):

What’s new or coming up in Pontoon

Spring cleaning on the road to Django 3. Our new contributor Philipp started the process of upgrading Django to the latest release. In a period of 12 days, he landed 12(!) patches, ranging from library updates and replacing out-of-date libraries with native Python and Django capabilities to making our testing infrastructure more consistent and dropping unused code. Well done, Philipp! Thanks to Axel and Jotes for the reviews.

Newly published localizer facing documentation

Firefox for Android docs have been updated to reflect the recent changes introduced by our migration to the new “Fenix” browser. We invite you to take a look as there are many changes to the Firefox for Android localization workflow.


Kudos to these long-time Mozillians, who found creative ways to promote their languages and share their experience by taking to the public airwaves.

  • Wim of the Frisian community was interviewed by a local radio station, where he shared his passion for localizing Mozilla projects. He took the opportunity to promote Common Voice and call for volunteers who speak Frisian. Since the story aired, Frisian has seen an increase of 11 hours of spoken clips, and an increase of between 250 and 400 people donating their voices. The interview will be aired monthly.
  • Quentin of the Occitan community made an appearance in this local news story in the south of France, talking about using Pontoon to localize Mozilla products in Occitan.

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

Image by Elio Qoshi

  • Iskandar of the Indonesian community for his huge efforts completing the Firefox for Android (Fenix) localization in recent months!
  • Dian Ina and Andika of the Indonesian community for their huge efforts completing the Thunderbird localization in recent months!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

hacks.mozilla.orgMDN Web Docs: Editorial strategy and community participation

We’ve made a lot of progress on moving forward with MDN Web Docs in the last couple of months, and we wanted to share where we are headed in the short- to mid-term, starting with our editorial strategy and renewed efforts around community participation.

New editorial strategy

Our updated editorial strategy has two main parts: the creation of content pillars and an editorial calendar.

The MDN writers’ team has always been responsible for keeping the MDN web platform reference documentation up-to-date, including key areas such as HTML, CSS, JavaScript, and Web APIs. We are breaking these key areas up into “content pillars”, which we will work on in turn to make sure that the significant new web platform updates are documented each month.

Note: This also means that we can start publishing our Firefox developer release notes again, so you can keep abreast of what we’re supporting in each new version, as well as Mozilla Hacks posts to give you further insights into what we are up to in Firefox engineering.

We will also be creating and maintaining an editorial calendar in association with our partners on the Product Advisory Board — and the rest of the community that has input to provide — which will help us prioritize general improvements to MDN documentation going forward. For example, we’d love to create more complete documentation on important web platform-related topics such as accessibility, performance, and security.

MDN will work with domain experts to help us update these docs, as well as enlist help from you and the rest of our community — which is what we want to talk about for the rest of this post.

Community call for participation

There are many day-to-day tasks that need to be done on MDN, including moderating content, answering queries on the Discourse forums, and helping to fix user-submitted content bugs. We’d love you to help us out with these tasks.

To this end, we’ve rewritten our Contributing to MDN pages so that it is simpler to find instructions on how to perform specific atomic tasks that will help burn down MDN backlogs. The main tasks we need help with at the moment are:

We hope these changes will help revitalize the MDN community into an even more welcoming, inclusive place where anyone can feel comfortable coming and getting help with documentation, or with learning new technologies or tools.

If you want to talk to us, ask questions, and find out more, join the discussion on the MDN Web Docs chat room on Matrix. We are looking forward to talking to you.

Other interesting developments

There are some other interesting projects that the MDN team is working hard on right now, and will provide deeper dives into with future blog posts. We’ll keep it brief here.

Platform evolution — MDN content moves to GitHub

For quite some time now, the MDN developer team has been planning a radical platform change, and we are ready to start sharing details of it. In short, we are updating the platform to move from a Wiki approach with the content in a MySQL database, to a JAMStack approach with the content being hosted in a Git repository (codename: Project Yari).

This will not affect end users at all, but the MDN developer team and our content contributors will see many benefits including a better contribution workflow (via Github), better ways in which we can work with our community, and a simplified, easier-to-maintain platform architecture. We will talk more about this in the next blog post!

Web DNA 2020

The 2019 Web Developer Needs Assessment (Web DNA) is a ground-breaking piece of research that has already helped to shape the future of the web platform, with input from more than 28,000 web developers helping to identify the top pain points with developing for the web.

The Web DNA will be run again in 2020, in partnership with Google, Microsoft, and several other stakeholders providing input into the form of the questions for this year. We launched the survey on October 12, and this year’s report is due out before the end of the year.

The post MDN Web Docs: Editorial strategy and community participation appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: Cross-Platform Language Binding Generation with Rust and “uniffi”

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

As the Glean SDK continues to expand its features and functionality, it has also continued to expand the number and types of consumers within the Mozilla ecosystem that rely on it for collection and transport of important metrics.  On this particular adventure, I find myself once again working on one of these components that tie into the Glean ecosystem.  In this case, it has been my work on the Nimbus SDK that has inspired this story.

Nimbus is our new take on a rapid experimentation platform, or a way to try out new features in our applications for subsets of the population of users in a way in which we can measure the impact.  The idea is to find out what our users like and use so that we can focus our efforts on the features that matter to them.  Like Glean, Nimbus is a cross-platform client SDK intended to be used on Android, iOS, and all flavors of Desktop OS that we support.  Also like Glean, this presented us with all of the challenges that you would normally encounter when creating a cross-platform library.  Unlike Glean, Nimbus was able to take advantage of some tooling that wasn’t available when we started Glean, namely: uniffi.

So what is uniffi?  It’s a multi-language bindings generator for Rust.  What exactly does that mean?  Typically you would have to write something in Rust and create a hand-written Foreign Function Interface (FFI) layer also in Rust.  On top of that, you also end up creating a hand-written wrapper in each and every language that is supported.  Instead, uniffi does most of the work for us by generating the plumbing necessary to transport data across the FFI, including the specific language bindings, making it a little easier to write things once and a lot easier to maintain multiple supported languages.  With uniffi we can write the code once in Rust, and then generate the code we need to be able to reuse these components in whatever language (currently supporting Kotlin, Swift and Python with C++ and JS coming soon) and on whatever platform we need.

So how does uniffi work?  The magic of uniffi works through generating a cdylib crate from the Rust code.  The interface is defined in a separate file through an Interface Description Language (IDL), specifically, a variant of WebIDL.  Then, using the uniffi-bindgen tool, we can scaffold the Rust side of the FFI and build our Rust code as we normally would, producing a shared library.  Back to uniffi-bindgen again to then scaffold the language bindings side of things, either Kotlin, Swift, or Python at the moment, with JS and C++ coming soon.  This leaves us with platform specific libraries that we can include in applications that call through the FFI directly into the Rust code at the core of it all.
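To make this division of labor concrete, here is a rough sketch (with hypothetical names; the exact UDL syntax and commands are documented in the uniffi repository) of the only code you write by hand:

```rust
// lib.rs — the only hand-written code. The interface is described separately
// in a WebIDL-like UDL file, roughly:
//
//     namespace math {
//         u64 add(u64 a, u64 b);
//     };
//
// Running `uniffi-bindgen scaffolding math.udl` then generates the Rust side
// of the FFI, and `uniffi-bindgen generate math.udl --language kotlin`
// (or swift / python) generates the foreign-language wrapper. (The exact
// invocations here are assumptions; check the uniffi docs for your version.)

pub fn add(a: u64, b: u64) -> u64 {
    a + b
}
```

From the Kotlin, Swift, or Python side, the generated wrapper then exposes `add` as an ordinary function in that language, with the FFI plumbing hidden entirely.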

There are some limitations to what uniffi can accomplish, but for most purposes it handles the job quite well.  In the case of Nimbus, it worked amazingly well because Nimbus was written keeping uniffi language binding generation in mind (and uniffi was written with Nimbus in mind).  As part of playing around with uniffi, I also experimented with how we could leverage it in Glean. It looks promising for generating things like our metric types, but we still have some state in the language binding layer that probably needs to be moved into the Rust code before Glean could move to using uniffi.  Cutting down on all of the handwritten code is a huge advantage because the Glean metric types require a lot of boilerplate code that is basically duplicated across all of the different languages we support.  Being able to keep this down to just the Rust definitions and IDL, and then generating the language bindings would be a nice reduction in the burden of maintenance.  Right now if we make a change to a metric type in Glean, we have to touch every single language binding: Rust, Kotlin, Swift, Python, C#, etc.

Looking back at Nimbus, uniffi does save a lot on overhead since we can write almost everything in Rust.  We will have a little bit of functionality implemented at the language layer, namely a callback that is executed after receiving and processing the reply from the server, the threading implementation that ensures the networking is done in a background thread, and the integration with Glean (at least until the Glean Rust API is available).  All of these are ultimately things that could be done in Rust as uniffi’s capabilities grow, making the language bindings basically just there to expose the API.  Right now, Nimbus only has a Kotlin implementation in support of our first customer, Fenix, but when it comes time to start supporting iOS and desktop, it should be as simple as just generating the bindings for whatever language that we want (and that uniffi supports).

Having worked on cross-platform tools for the last two years now, I can really appreciate the awesome power of being able to leverage the same client SDK implementation across multiple platforms.  Not only does this come as close as possible to giving you the same code driving the way something works across all platforms, it makes it a lot easier to trust that things like Glean collect data the same way across different apps and platforms and that Nimbus is performing randomization calculations the same across platforms and apps.  I have worked with several cross-platform technologies in my career like Xamarin or Apache Cordova, but Rust really seems to work better for this without as much of the overhead.  This is especially true with tools like uniffi to facilitate unlocking the cross-platform potential.  So, in conclusion, if you are responsible for cross-platform applications or libraries or are interested in creating them, I strongly urge you to think about Rust (there’s no way I have time to go into all the cool things Rust does…) and tools like uniffi to make that easier for you.  (If uniffi doesn’t support your platform/language yet, then I’m also happy to report that it is accepting contributions!)

hacks.mozilla.orgComing through with Firefox 82

As October ushers in the tail-end of the year, we are pushing Firefox 82 out the door. This time around we finally enable support for the Media Session API, provide some new CSS pseudo-selector behaviours, close some security loopholes involving the window.name property, and provide inspection for server-sent events in our developer tools.

This blog post provides merely a set of highlights; for all the details, check out the following:

Inspecting server-sent events

Server-sent events allow for an inversion of the traditional client-initiated web request model, with a server sending new data to a web page at any time by pushing messages. In this release we’ve added the ability to inspect server-sent events and their message contents using the Network Monitor.

You can go to the Network Monitor, select the file that is sending the server-sent events, and view the received messages in the Response tab on the right-hand panel.

For more information, check out our Inspecting server-sent events guide.

Web platform updates

Now let’s look at the web platform additions we’ve got in store in 82.

Media Session API

The Media Session API enables two main sets of functionality:

  1. First of all, it provides a way to customize media notifications. It does this by providing metadata for display by the operating system for the media your web app is playing.
  2. Second, it provides event handlers that the browser can use to access platform media keys such as hardware keys found on keyboards, headsets, remote controls, and software keys found in notification areas and on lock screens of mobile devices. So you can seamlessly control web-provided media via your device, even when not looking at the web page.

The code below provides an overview of both of these in action:

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Unforgettable',
    artist: 'Nat King Cole',
    album: 'The Ultimate Collection (Remastered)',
    artwork: [
      // Artwork URLs omitted here; each entry takes a src URL.
      { src: '',   sizes: '96x96',   type: 'image/png' },
      { src: '', sizes: '128x128', type: 'image/png' },
      { src: '', sizes: '192x192', type: 'image/png' },
      { src: '', sizes: '256x256', type: 'image/png' },
      { src: '', sizes: '384x384', type: 'image/png' },
      { src: '', sizes: '512x512', type: 'image/png' }
    ]
  });

  navigator.mediaSession.setActionHandler('play', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('pause', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('seekbackward', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('seekforward', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('previoustrack', function() { /* Code excerpted. */ });
  navigator.mediaSession.setActionHandler('nexttrack', function() { /* Code excerpted. */ });
}

Let’s consider what this could look like to a web user — say they are playing music through a web app like Spotify or YouTube. With the first block of code above we can provide metadata for the currently playing track that can be displayed on a system notification, on a lock screen, etc.

The second block of code illustrates that we can set special action handlers, which work the same way as event handlers but fire when the equivalent action is performed at the OS-level. This could include for example when a keyboard play button is pressed, or a skip button is pressed on a mobile lock screen.

The aim is to allow users to know what’s playing and to control it, without needing to open the specific web page that launched it.

What’s in a window.name?

This property is used to get or set the name of the window’s current browsing context — it is used primarily for setting targets for hyperlinks and forms. Previously one issue was that, when a page from a different domain was loaded into the same tab, it could access any information stored in window.name, which could create a security problem if that information was sensitive.

To close this hole, Firefox 82 and other modern browsers will reset window.name to an empty string if a tab loads a page from a different domain, and restore the name if the original page is reloaded (e.g. by selecting the “back” button).

This could potentially surface some issues — historically window.name has also been used in some frameworks (e.g. SessionVars and Dojo) for providing cross-domain messaging as a more secure alternative to JSONP. This is not the intended purpose of window.name, however, and there are safer/better ways of sharing information between windows, such as Window.postMessage().

CSS highlights

We’ve got a couple of interesting CSS additions in Firefox 82.

To start with, we’ve introduced the standard ::file-selector-button pseudo-element, which allows you to select and style the file selection button inside <input type="file"> elements.

So something like this is now possible:

input[type=file]::file-selector-button {
  border: 2px solid #6c5ce7;
  padding: .2em .4em;
  border-radius: .2em;
  background-color: #a29bfe;
  transition: 1s;
}

input[type=file]::file-selector-button:hover {
  background-color: #81ecec;
  border: 2px solid #00cec9;
}
Note that this was previously handled by the proprietary ::-webkit-file-upload-button pseudo-element, but all browsers should hopefully be following suit soon enough.

We also wanted to mention that the :is() and :where() pseudo-classes have been updated so that their error handling is more forgiving — a single invalid selector in the provided list of selectors will no longer make the whole rule invalid. It will just be ignored, and the rule will apply to all the valid selectors present.


Starting with Firefox 82, language packs will be updated in tandem with Firefox updates. Users with an active language pack will no longer have to deal with the hassle of defaulting back to English while the language pack update is pending delivery.

Take a look at the Add-ons Blog for more updates to the WebExtensions API in Firefox 82!

The post Coming through with Firefox 82 appeared first on Mozilla Hacks - the Web developer blog.

about:communityNew Contributors, Firefox 82

With Firefox 82 hot off the byte presses, we are pleased to welcome the developers whose first code contributions shipped in this release, 18 of whom were new volunteers! Please join us in thanking each of them for their persistence and enthusiasm, and take a look at their contributions:

Open Policy & AdvocacyMozilla Mornings on addressing online harms through advertising transparency

On 29 October, Mozilla will host the next installment of Mozilla Mornings – our regular breakfast series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

A key focus of the upcoming Digital Services Act and European Democracy Action Plan initiatives is platform transparency – transparency about content curation, commercial practices, and data use to name a few. This installment of Mozilla Mornings will focus on transparency of online advertising, and in particular, how mechanisms for greater transparency of ad placement and ad targeting could mitigate the spread and impact of illegal and harmful content online.

As the European Commission prepares to unveil a series of transformative legislative proposals on these issues, the discussion promises to be timely and insightful.

Daniel Braun
Deputy Head of Cabinet of Ms. Věra Jourová, European Commission Vice-President

Karolina Iwańska
Lawyer and Policy Analyst, Panoptykon Foundation

Sam Jeffers
Co-Founder and Executive Director, Who Targets Me

With opening remarks by Raegan MacDonald
Head of Public Policy, Mozilla Corporation

Moderated by Jennifer Baker
EU Tech Journalist

Logistical information

29 October, 2020
10:30-12:00 CET
Zoom Webinar (conferencing details to be provided on morning of event)

Register your attendance here

The post Mozilla Mornings on addressing online harms through advertising transparency appeared first on Open Policy & Advocacy.

hacks.mozilla.orgA New Backend for Cranelift, Part 1: Instruction Selection

This post will describe my recent work on Cranelift as part of my day job at Mozilla. In this post, I will set some context and describe the instruction selection problem. In particular, I’ll talk about a revamp to the instruction selector and backend framework in general that we’ve been working on for the last nine months or so. This work has been co-developed with my brilliant colleagues Julian Seward and Benjamin Bouvier, with significant early input from Dan Gohman as well, and help from all of the wonderful Cranelift hackers.

Background: Cranelift

So what is Cranelift?

The project is a compiler framework written in Rust that is designed especially (but not exclusively) for just-in-time compilation. It’s a general-purpose compiler: its most popular use-case is to compile WebAssembly, though several other frontends exist, for example, cg_clif, which adapts the Rust compiler itself to use Cranelift. Folks at Mozilla and several other places have been developing the compiler for a few years now. It is the default compiler backend for wasmtime, a runtime for WebAssembly outside the browser, and is used in production in several other places as well.

We recently flipped the switch to turn on Cranelift-based WebAssembly support in nightly Firefox on ARM64 (AArch64) machines, including most smartphones, and if all goes well, it will eventually go out in a stable Firefox release. Cranelift is developed under the umbrella of the Bytecode Alliance.

In the past nine months, we have built a new framework in Cranelift for the “machine backends”, or the parts of the compiler that support particular CPU instruction sets. We also added a new backend for AArch64, mentioned above, and filled out features as needed until Cranelift was ready for production use in Firefox. This blog post sets some context and describes the design process that went into the backend-framework revamp.

Old Backend Design: Instruction Legalizations

To understand the work that we’ve done recently on Cranelift, we’ll need to zoom into the cranelift_codegen crate above and talk about how it used to work. What is this “CLIF” input, and how does the compiler translate it to machine code that the CPU can execute?

Cranelift makes use of CLIF, or the Cranelift IR (Intermediate Representation) Format, to represent the code that it is compiling. Every compiler that performs program optimizations uses some form of an Intermediate Representation (IR): you can think of this like a virtual instruction set that can represent all the operations a program is allowed to do. The IR is typically simpler than real instruction sets, designed to use a small set of well-defined instructions so that the compiler can easily reason about what a program means.

The IR is also independent of the CPU architecture that the compiler eventually targets; this lets much of the compiler (such as the part that generates IR from the input programming language, and the parts that optimize the IR) be reused whenever the compiler is adapted to target a new CPU architecture. CLIF is in Static Single Assignment (SSA) form, and uses a conventional control-flow graph with basic blocks (though it previously allowed extended basic blocks, these have been phased out). Unlike many SSA IRs, it represents φ-nodes with block parameters rather than explicit φ-instructions.

Within cranelift_codegen, before we revamped the backend design, the program remained in CLIF throughout compilation and up until the compiler emitted the final machine code. This might seem to contradict what we just said: how can the IR be machine-independent, but also be the final form from which we emit machine code?

The answer is that the old backends were built around the concept of “legalization” and “encodings”. At a high level, the idea is that every Cranelift instruction either corresponds to one machine instruction, or can be replaced by a sequence of other Cranelift instructions. Given such a mapping, we can refine the CLIF in steps, starting from arbitrary machine-independent instructions from earlier compiler stages, performing edits until the CLIF corresponds 1-to-1 with machine code. Let’s visualize this process:

Figure: legalization by repeated instruction expansion

A very simple example of a CLIF instruction that has a direct “encoding” to a machine instruction is iadd, which just adds two integers. On essentially any modern architecture, this should map to a simple ALU instruction that adds two registers.

On the other hand, many CLIF instructions do not map cleanly. Some arithmetic instructions fall into this category: for example, there is a CLIF instruction to count the number of set bits in an integer’s binary representation (popcount); not every CPU has a single instruction for this, so it might be expanded into a longer series of bit manipulations.

There are operations that are defined at a higher semantic level, as well, that will necessarily be lowered with expansions: for example, accesses to Wasm memories are lowered into operations that fetch the linear memory base and its size, bounds-check the Wasm address against the limit, compute the real address for the Wasm address, and perform the access.

To compile a function, then, we iterate over the CLIF and find instructions with no direct machine encodings; for each, we simply expand into the legalized sequence, and then recursively consider the instructions in that sequence. We loop until all instructions have machine encodings. At that point, we can emit the bytes corresponding to each instruction’s encoding.
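As a toy model (not Cranelift’s actual code), the legalization loop described above can be sketched like this, with a made-up instruction set in which only some ops have direct encodings:

```rust
// Toy model of expansion-based legalization: keep expanding instructions
// that have no machine encoding until a fixpoint is reached.

#[derive(Clone, Copy, PartialEq, Debug)]
enum Op {
    Iadd,   // directly encodable on our toy target
    Popcnt, // no native popcount: must be expanded
    Shr,
    Band,
}

/// Does the toy target have a direct encoding for this op?
fn has_encoding(op: Op) -> bool {
    matches!(op, Op::Iadd | Op::Shr | Op::Band)
}

/// One legalization step: expand an unencodable op into simpler ops.
fn expand(op: Op) -> Vec<Op> {
    match op {
        // A (very abbreviated) stand-in for a popcount expansion in terms
        // of encodable bit-manipulation ops.
        Op::Popcnt => vec![Op::Shr, Op::Band, Op::Iadd],
        other => vec![other],
    }
}

/// Loop until every instruction has a machine encoding (the fixpoint).
/// The bound on iterations depends on the maximum depth of chained
/// expansions, which is exactly what makes the runtime hard to reason about.
fn legalize(mut insts: Vec<Op>) -> Vec<Op> {
    while insts.iter().any(|&op| !has_encoding(op)) {
        insts = insts.into_iter().flat_map(expand).collect();
    }
    insts
}
```

In the real compiler the expansions rewrite an instruction list in place and operate on operands and control flow, but the fixpoint structure is the same.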

Growing Pains, and a New Backend Framework?

There are a number of advantages to the legacy Cranelift backend design, which performs expansion-based legalization with a single IR throughout. As one might expect, though, there are also a number of drawbacks. Let’s discuss a few of each.

Single IR and Legalization: Pros

  1. By operating on a single IR all the way to machine-code emission, the same optimizations can be applied at multiple stages.
  2. If most of the Cranelift instructions become one machine instruction, and few legalizations are necessary, then this scheme can be very fast: it becomes simply a single traversal to fill in “encodings”, which were represented by small indices into a table.

Single IR and Legalization: Cons

  1. Expansion-based legalization may not always result in optimal code. There are sometimes single machine instructions that implement the behavior of multiple CLIF instructions, i.e. a many-to-one mapping. In order to generate efficient code, we want to be able to make use of these instructions; expansion-based legalization cannot.
  2. Expansion-based legalization rules must obey a partial order among instructions: if A expands into a sequence including B, then B cannot later expand into A. This could become a difficulty for the machine-backend author to keep straight.
  3. There are efficiency concerns with expansion-based legalization. At an algorithmic level, we prefer to avoid fixpoint loops (in this case, “continue expanding until no more expansions exist”) whenever possible. The runtime is bounded, but the bound is somewhat difficult to reason about, because it depends on the maximum depth of chained expansions. The data structures that enable in-place editing are also much slower than we would like. Typically, compilers store IR instructions in linked lists to allow for in-place editing. While this is asymptotically as fast as an array-based solution (we never need to perform random access), it is much less cache-friendly or ILP-friendly on modern CPUs. We’d prefer instead to store arrays of instructions and perform single passes over them whenever possible.
  4. Our particular implementation of the legalization scheme grew to be somewhat unwieldy over time (see, e.g., the GitHub issue #1141: Kill Recipes With Fire). Adding a new instruction required learning about “recipes”, “encodings”, and “legalizations” as well as mere instructions and opcodes; a more conventional code-lowering approach would avoid much of this complexity.
  5. A single-level IR has a fundamental tension: for analyses and optimizations to work well, an IR should have only one way to represent any particular operation. On the other hand, a machine-level representation should represent all of the relevant details of the target ISA. A single IR simply cannot serve both ends of this spectrum properly, and difficulties arose as CLIF strayed from either end.

For all of these reasons, as part of our revamp of Cranelift and a prerequisite to our new AArch64 backend, we built a new framework for machine backends and instruction selection. The framework allows machine backends to define their own instructions, separately from CLIF; rather than legalizing with expansions and running until a fixpoint, we define a single lowering pass; and everything is built around more efficient data-structures, carefully optimizing passes over data and avoiding linked lists entirely. We now describe this new design!

A New IR: VCode

The main idea of the new Cranelift backend is to add a machine-specific IR, with several properties that are chosen specifically to represent machine-code well. We call this VCode, which comes from “virtual-register code”, and the VCode contains MachInsts, or machine instructions. The key design choices we made are:

  • VCode is a linear sequence of instructions. There is control-flow information that allows traversal over basic blocks, but the data structures are not designed to easily allow inserting or removing instructions or reordering code. Instead, we lower into VCode with a single pass, generating instructions in their final (or near-final) order.
  • VCode is not SSA-based; instead, its instructions operate on registers. While lowering, we allocate virtual registers. After the VCode is generated, the register allocator computes appropriate register assignments and edits the instructions in-place, replacing virtual registers with real registers. (Both are packed into a 32-bit representation space, using the high bit to distinguish virtual from real.) Eschewing SSA at this level allows us to avoid the overhead of maintaining its invariants, and maps more closely to the real machine.
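The virtual/real register packing mentioned above can be sketched as follows (a simplified, hypothetical model; the actual representation lives in Cranelift’s register-allocation code):

```rust
// Sketch of the packing described above: virtual and real registers share
// a single u32 index space, distinguished by the high bit.
const VIRTUAL_FLAG: u32 = 1 << 31;

#[derive(Clone, Copy, PartialEq, Debug)]
struct Reg(u32);

impl Reg {
    fn new_virtual(index: u32) -> Reg {
        Reg(index | VIRTUAL_FLAG)
    }
    fn new_real(index: u32) -> Reg {
        Reg(index)
    }
    fn is_virtual(self) -> bool {
        self.0 & VIRTUAL_FLAG != 0
    }
    fn index(self) -> u32 {
        self.0 & !VIRTUAL_FLAG
    }
}
```

After register allocation runs, the allocator can overwrite a virtual `Reg` with a real one in place, without changing the instruction’s layout.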

VCode instructions are simply stored in an array, and the basic blocks are recorded separately as ranges of array (instruction) indices. We designed this data structure for fast iteration, but not for editing.
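A minimal sketch of that storage scheme, with hypothetical instruction types:

```rust
// Instructions live in one flat array; basic blocks are just index ranges.
// Iterating a block is a contiguous slice walk — cache-friendly, unlike
// chasing linked-list pointers.

#[derive(Debug, PartialEq)]
enum Inst {
    Add { dst: u32, src1: u32, src2: u32 },
    Ret,
}

struct VCode {
    insts: Vec<Inst>,
    /// Each basic block is a half-open range [start, end) into `insts`.
    block_ranges: Vec<(usize, usize)>,
}

impl VCode {
    fn block_insts(&self, block: usize) -> &[Inst] {
        let (start, end) = self.block_ranges[block];
        &self.insts[start..end]
    }
}
```

Note there is deliberately no `insert` or `remove` here: the design assumes instructions are generated once, in near-final order, by the lowering pass.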

Each instruction is mostly opaque from the point of view of the VCode container, with a few exceptions: every instruction exposes its (i) register references, and (ii) basic-block targets, if a branch. Register references are categorized into the usual “uses” and “defs” (reads and writes).

Note as well that the instructions can refer to either virtual registers (here denoted v0..vN) or real machine registers (here denoted r0..rN). This design choice allows the machine backend to make use of specific registers where required by particular instructions, or by the ABI (parameter-passing conventions).

Aside from registers and branch targets, an instruction contained in the VCode may contain whatever other information is necessary to emit machine code. Each machine backend defines its own type to store this information. For example, on AArch64, here are several of the instruction formats, simplified:

enum Inst {
    /// An ALU operation with two register sources and a register destination.
    AluRRR {
        alu_op: ALUOp,
        rd: Writable<Reg>,
        rn: Reg,
        rm: Reg,
    },

    /// An ALU operation with a register source and an immediate-12 source, and a register
    /// destination.
    AluRRImm12 {
        alu_op: ALUOp,
        rd: Writable<Reg>,
        rn: Reg,
        imm12: Imm12,
    },

    /// A two-way conditional branch. Contains two targets; at emission time, a conditional
    /// branch instruction followed by an unconditional branch instruction is emitted, but
    /// the emission buffer will usually collapse this to just one branch.
    CondBr {
        taken: BranchTarget,
        not_taken: BranchTarget,
        kind: CondBrKind,
    },

    // ...
}

These enum arms could be considered similar to “encodings” in the old backend, except that they are defined in a much more straightforward way, using type-safe and easy-to-use Rust data structures.

Lowering from CLIF to VCode

We’ve now come to the most interesting design question: how do we lower from CLIF instructions, which are machine-independent, into VCode with the appropriate type of CPU instructions? In other words, what have we replaced the expansion-based legalization and encoding scheme with?

We perform a single pass over the CLIF instructions, and at each instruction, we invoke a function provided by the machine backend to lower the CLIF instruction into VCode instruction(s). The backend is given a “lowering context” by which it can examine the instruction and the values that flow into it, performing “tree matching” as desired (see below). This naturally allows 1-to-1, 1-to-many, or many-to-1 translations. We incorporate a reference-counting scheme into this pass to ensure that instructions are only generated if their values are actually used; this is necessary to eliminate dead code when many-to-1 matches occur.

Tree Matching

Recall that the old design allowed for 1-to-1 and 1-to-many mappings from CLIF instructions to machine instructions, but not many-to-1. This is particularly problematic when it comes to pattern-matching for things like addressing modes, where we want to recognize particular combinations of operations and choose a specific instruction that covers all of those operations at once.

Let’s start by defining a “tree” that is rooted at a particular CLIF instruction. For each argument to the instruction, we can look “up” the program to find its producer (def). Because CLIF is in SSA form, either the instruction argument is an ordinary value, which must have exactly one definition, or it is a block parameter (φ-node in conventional SSA formulations) that represents multiple possible definitions. We will say that if we reach a block parameter (φ-node), we simply end at a tree leaf – it is perfectly alright to pattern-match on a tree that is a subset of the true dataflow (we might get suboptimal code, but it will still be correct). For example, given the CLIF code:

block0(v0: i64, v1: i64):
  brnz v0, block1(v0)
  jump block1(v1)

block1(v2: i64):
  v3 = iconst.i64 64
  v4 = iadd.i64 v2, v3
  v5 = iadd.i64 v4, v0
  v6 = load.i64 v5
  return v6

Let’s consider the load instruction: v6 = load.i64 v5. A simple code generator could map this 1-to-1 to the CPU’s ordinary load instruction, using the register holding v5 as an address. This would certainly be correct. However, we might be able to do better: for example, on AArch64, the available addressing modes include a two-register sum ldr x0, [x1, x2] or a register with a constant offset ldr x0, [x1, #64].

The “operand tree” might be drawn like this:

Figure: operand tree for load instruction

We stop at v2 and v0 because they are block parameters; we don’t know with certainty which instruction will produce these values. We can replace v3 with the constant 64. Given this view, the lowering process for the load instruction can fairly easily choose an addressing mode. (On AArch64, the code to make this choice is here; in this case it would choose the register + constant immediate form, generating a separate add instruction for v0 + v2.)

Note that we do not actually explicitly construct an operand tree during lowering. Instead, the machine backend can query each instruction input, and the lowering framework will provide a struct giving the producing instruction if known, the constant value if known, and the register that will hold the value if needed.

The backend may traverse up the tree (via the “producing instruction”) as many times as needed. If it cannot combine the operation of an instruction further up the tree into the root instruction, it can simply use the value in the register at that point instead; it is always safe (though possibly suboptimal) to generate machine instructions for only the root instruction.
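A minimal sketch of that query-based interface, with hypothetical types (InputInfo, Producer, and lower_address are illustrative names, not Cranelift's actual API): the backend asks about an input, sees the producing add-with-constant if known, and either merges it into a reg+offset addressing mode or falls back to the register:

```rust
// Hypothetical sketch of the "query an input" interface described above:
// for each input, the lowering framework can report the producing
// instruction (if known), a constant value (if known), and a register
// that will hold the value. Types and names are illustrative.

#[derive(Clone, Copy, Debug, PartialEq)]
struct Reg(u32);

#[derive(Clone, Copy)]
enum Producer {
    /// Produced by an add of a register and a known constant.
    IaddImm { base: Reg, imm: i64 },
    /// Unknown (e.g. a block parameter): stop matching here.
    Unknown,
}

struct InputInfo {
    producer: Producer,
    /// Register holding this value if we fall back to using it directly.
    reg: Reg,
}

#[derive(Debug, PartialEq)]
enum AMode {
    RegOffset(Reg, i64), // e.g. ldr x0, [x1, #64]
    Reg(Reg),            // plain register address
}

/// Choose an addressing mode for a load: merge an `iadd reg, imm` into
/// a reg+offset mode when the immediate fits; otherwise just use the
/// register. Falling back is always safe, merely suboptimal.
fn lower_address(addr: &InputInfo) -> AMode {
    match addr.producer {
        Producer::IaddImm { base, imm } if (0..4096).contains(&imm) => AMode::RegOffset(base, imm),
        _ => AMode::Reg(addr.reg),
    }
}

fn main() {
    let merged = lower_address(&InputInfo {
        producer: Producer::IaddImm { base: Reg(1), imm: 64 },
        reg: Reg(5),
    });
    assert_eq!(merged, AMode::RegOffset(Reg(1), 64));
    // A block parameter: producer unknown, so use the register.
    let fallback = lower_address(&InputInfo { producer: Producer::Unknown, reg: Reg(5) });
    assert_eq!(fallback, AMode::Reg(Reg(5)));
    println!("{:?} {:?}", merged, fallback);
}
```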

Lowering an Instruction

Given this matching strategy, then, how do we actually do the translation? Basically, the backend provides a function that is called once per CLIF instruction, at the “root” of the operand tree, and can produce as many machine instructions as it likes. This function is essentially just a large match statement over the opcode of the root CLIF instruction, with the match-arms looking deeper as needed.

Here is a simplified version of the match-arm for an integer add operation lowered to AArch64 (the full version is here):

match op {
    // ...
    Opcode::Iadd => {
        let rd = get_output_reg(ctx, outputs[0]);
        let rn = put_input_in_reg(ctx, inputs[0]);
        let rm = put_input_in_rse_imm12(ctx, inputs[1]);
        let alu_op = choose_32_64(ty, ALUOp::Add32, ALUOp::Add64);
        ctx.emit(alu_inst_imm12(alu_op, rd, rn, rm));
    }
    // ...
}

There is some magic that happens in several helper functions here. put_input_in_reg() invokes the proper methods on the ctx to look up the register that holds an input value. put_input_in_rse_imm12() is more interesting: it returns a ResultRSEImm12, which is a “register, shifted register, extended register, or 12-bit immediate”. This set of choices captures all of the options we have for the second argument of an add instruction on AArch64.

The helper looks at the node in the operand tree and attempts to match either a shift or zero/sign-extend operator, which can be incorporated directly into the add. It also checks whether the operand is a constant and, if so, whether it fits into a 12-bit immediate field. If not, it falls back to simply using the register input. alu_inst_imm12() then breaks down this enum and chooses the appropriate Inst arm (AluRRR, AluRRRShift, AluRRRExtend, or AluRRImm12, respectively).
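The classification idea can be sketched as follows; the Operand type and classify function here are hypothetical stand-ins for ResultRSEImm12 and its helper, showing only the immediate-vs-register decision (AArch64 ALU immediates are 12 bits, optionally shifted left by 12):

```rust
// Hedged sketch of operand classification: try the cheap immediate
// encodings for the second ALU operand and fall back to a register.
// This is an illustration, not Cranelift's actual ResultRSEImm12 type.

#[derive(Debug, PartialEq)]
enum Operand {
    Imm12 { bits: u16, shift12: bool }, // 12-bit immediate, optionally << 12
    Reg(u32),                           // fallback: value lives in a register
}

/// Classify a (possibly constant) input. AArch64 ALU immediates are
/// 12 bits, optionally shifted left by 12.
fn classify(value: Option<u64>, fallback_reg: u32) -> Operand {
    match value {
        Some(v) if v < 0x1000 => Operand::Imm12 { bits: v as u16, shift12: false },
        Some(v) if (v & 0xfff) == 0 && (v >> 12) < 0x1000 => {
            Operand::Imm12 { bits: (v >> 12) as u16, shift12: true }
        }
        _ => Operand::Reg(fallback_reg),
    }
}

fn main() {
    // Small constant: fits directly.
    assert_eq!(classify(Some(64), 7), Operand::Imm12 { bits: 64, shift12: false });
    // Aligned larger constant: fits with the shifted encoding.
    assert_eq!(classify(Some(0x5000), 7), Operand::Imm12 { bits: 5, shift12: true });
    // Neither fits, or not a constant at all: use the register.
    assert_eq!(classify(Some(0x123456), 7), Operand::Reg(7));
    assert_eq!(classify(None, 7), Operand::Reg(7));
    println!("ok");
}
```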

And that’s it! No need for legalization and repeated code editing to match several operations and produce a machine instruction. We have found this way of writing lowering logic to be quite straightforward and easy to understand.

Backward Pass with Use-Counts

Now that we can lower a single instruction, how do we lower a function body with many instructions? This is not quite as straightforward as looping over the instructions and invoking the match-over-opcode function described above (though that would actually work). In particular, we want to handle the many-to-1 case more efficiently. Consider what happens when the add-instruction logic above is able to incorporate, say, a left-shift operator into the add instruction.

The add machine instruction would then use the shift’s input register, and completely ignore the shift’s output. If the shift operator has no other uses, we should avoid doing the computation entirely; otherwise, there was no point in merging the operation into the add.

We implement a sort of reference counting to solve this problem. In particular, we track whether any given SSA value is actually used, and we only generate code for a CLIF instruction if any of its results are used (or if it has a side-effect that must occur). This is a form of dead-code elimination but integrated into the single lowering pass.

We must see uses before defs for this to work. Thus, we iterate over the function body “backward”. Specifically, we iterate in postorder; this way, all instructions are seen before instructions that dominate them, so given SSA form, we see uses before defs.
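A hedged sketch of this backward pass over a toy IR (the Inst type and lower_backward function are illustrative, not real CLIF or Cranelift code): walking instructions in reverse, we emit an instruction only if one of its results is already marked used or it has a side effect, and only then mark its arguments as used:

```rust
// Hypothetical sketch of the backward pass with use-counts: iterate in
// reverse so uses are seen before defs, skip instructions whose results
// are unused and that have no side effects (integrated dead-code
// elimination), and mark inputs used only when we actually emit.

#[derive(Debug)]
struct Inst {
    result: usize,     // SSA value defined
    args: Vec<usize>,  // SSA values used
    side_effect: bool, // e.g. store, call
}

/// Returns the indices of instructions that get code generated.
fn lower_backward(insts: &[Inst], num_values: usize, live_outs: &[usize]) -> Vec<usize> {
    let mut used = vec![false; num_values];
    for &v in live_outs {
        used[v] = true;
    }
    let mut emitted = Vec::new();
    // Walk backward: every use is seen before its def.
    for (i, inst) in insts.iter().enumerate().rev() {
        if used[inst.result] || inst.side_effect {
            emitted.push(i);
            for &a in &inst.args {
                used[a] = true;
            }
        }
    }
    emitted.reverse();
    emitted
}

fn main() {
    // v2 = const; v3 = shl v0, v2; v4 = sub v1, v3; v5 = const (dead)
    let insts = vec![
        Inst { result: 2, args: vec![], side_effect: false },
        Inst { result: 3, args: vec![0, 2], side_effect: false },
        Inst { result: 4, args: vec![1, 3], side_effect: false },
        Inst { result: 5, args: vec![], side_effect: false },
    ];
    let emitted = lower_backward(&insts, 6, &[4]); // only v4 is returned
    assert_eq!(emitted, vec![0, 1, 2]); // the dead v5 def is skipped
    println!("{:?}", emitted);
}
```

If the sub had merged the shift and the shift's result had no other uses, its result would simply never be marked used, and the same mechanism would skip it.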


Let’s see how this works in real life! Consider the following CLIF code:

function %f25(i32, i32) -> i32 {
block0(v0: i32, v1: i32):
  v2 = iconst.i32 21
  v3 = ishl.i32 v0, v2
  v4 = isub.i32 v1, v3
  return v4
}

We expect that the left-shift (ishl) operation should be merged into the subtract operation on AArch64, using the reg-reg-shift form of ALU instruction, and indeed this happens (here I am showing the debug-dump format one can see with RUST_LOG=debug when running clif-util compile -d --target aarch64):

VCode {
  Entry block: 0
Block 0:
  (original IR block: block0)
  (instruction range: 0 .. 6)
  Inst 0:   mov %v0J, x0
  Inst 1:   mov %v1J, x1
  Inst 2:   sub %v4Jw, %v1Jw, %v0Jw, LSL 21
  Inst 3:   mov %v5J, %v4J
  Inst 4:   mov x0, %v5J
  Inst 5:   ret
}

This then passes through the register allocator, has a prologue and epilogue attached (we cannot generate these until we know which registers are clobbered), has redundant moves elided, and becomes:

stp fp, lr, [sp, #-16]!
mov fp, sp
sub w0, w1, w0, LSL 21
mov sp, fp
ldp fp, lr, [sp], #16
ret

which is a perfectly valid function, correct and callable from C, on AArch64! (We could do better if we knew that this were a leaf function and avoided the stack-frame setup and teardown! Alas, many optimization opportunities remain.)

We’ve only scratched the surface of Cranelift’s design in this blog-post. Nevertheless, I hope this post has given a sense of the exciting work being done in compilers today!

This post is an abridged version; see the full version for more technical details.

Thanks to Julian Seward and Benjamin Bouvier for reviewing this post and suggesting several additions and corrections.

The post A New Backend for Cranelift, Part 1: Instruction Selection appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyThe EU’s Current Approach to QWACs (Qualified Website Authentication Certificates) will Undermine Security on the Open Web

Since its founding in 1998, Mozilla has championed human-rights-compliant innovation as well as choice, control, and privacy for people on the Internet. We have worked hard to actualise this belief for the billions of users on the Web by actively leading and participating in the creation of Web standards that drive the Internet. We recently submitted our thoughts to the European Commission on its survey and public consultation regarding the eIDAS regulation, advocating for an interpretation of eIDAS that is better for user security and retains innovation and interoperability of the global Internet. 

Given our background in the creation of the Transport Layer Security (TLS) standard for website security, we believe that mandating an interpretation of eIDAS that requires Qualified Website Authentication Certificates (QWACs) to be bound with TLS certificates is deeply concerning. Along with weakening user security, it will cause serious harm to the single European digital market and its place within the global internet. 

Some high-level reasons for this position, as elucidated in our multiple recent submissions to the European Commission survey, are:

  1. It violates the eIDAS requirements: Cryptographically binding a QWAC to a connection or TLS certificate would violate several provisions of the eIDAS regulation, including Recital 67 (website authentication), Recital 27 (technological neutrality), and Recital 72 (interoperability), and would go against the legislative intent of the Council.
  2. It will undermine technical neutrality and interoperability:  Mandating TLS binding with QWACs will hinder technological neutrality and interoperability, as it will go against established best practices which have successfully helped keep the Web secure for the past two decades. Apart from being central to the goals of the eIDAS regulation itself, technological neutrality and interoperability are the pillars upon which innovation and competition take place on the web. Limiting them will severely hinder the ability of the EU digital single market to remain competitive within the global economy in a safe and secure manner.
  3. It will undermine privacy for end users: Validating QWACs, as currently envisaged by ETSI, poses serious privacy risks to end users. In particular, the proposal uses validation procedures or protocols that would reveal a user’s browsing activity to a third-party validation service. This third party service would be in a position to track and profile users based on this information. Even if this were to be limited by policy, this information is largely indistinguishable from a privacy-problematic tracking technique known as “link decoration”.
  4. It will create dangerous security risks for the Web: It has been repeatedly suggested that Trust Service Providers (TSPs) who issue QWACs under the eIDAS regulation automatically be included in the root certificate authority (CA) stores of all browsers. Such a move would amount to forced website certificate whitelisting by government dictate and would irremediably harm users’ safety and security. It goes against established best practices of website authentication that have been created by consensus from the varied experiences of the Internet’s explosive growth. The technical and policy requirements for a TSP to be included in the root CA store of Mozilla Firefox, for example, compare much more favourably with the framework eIDAS creates for TSPs: they are more transparent, have more stringent audit requirements, and provide for improved public oversight.

As stated in our Manifesto and our white paper on bringing openness to digital identity, we believe individuals’ security and privacy on the Internet are fundamental and must not be treated as optional. The eIDAS regulation (even if inadvertently) using TLS certificates, enabling tracking, and requiring a de-facto whitelisting of TLS certificate issuers on the direction of government agencies is fundamentally incompatible with this vision of a secure and open Internet. We look forward to working with the Commission to achieve the objectives of eIDAS without harming the Open Web.

The post The EU’s Current Approach to QWACs (Qualified Website Authentication Certificates) will Undermine Security on the Open Web appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExtensions in Firefox 82

Before we get to the Firefox 82 updates, I want to let you know that Philipp has passed the baton for these blog posts over to me. I plan to stick to the same format you know and hopefully love, but leave a comment if there’s anything you’d like to see change in future installments.

Language Packs

Starting with Firefox 82, language packs will be updated in tandem with Firefox updates. Users with an active language pack will no longer have to deal with the hassle of defaulting back to English while the language pack update is pending delivery.

Misc updates in Firefox 82

The cookie permission is no longer required in order to obtain the cookieStoreId for a tab, making it possible to identify container tabs without additional permissions.

The error message logged when a webRequest event listener is passed invalid match patterns in the urls value is now much easier to understand.

Firefox site isolation by default starting in Nightly 83

As mentioned earlier, we’re working on a big change to Firefox that isolates sites from one another. In the next few weeks, we’ll be rolling out an experiment to enable isolation by default for most Nightly users, starting in Firefox 83, with plans for a similar experiment on Beta by the end of the year.

For extensions that deal with screenshots, we’ve extended the captureTab and captureVisibleTab methods to enable capturing an arbitrary area of the page, outside the currently visible viewport. This should cover functionality previously enabled by the (long deprecated) drawWindow method, and you can find more details about new rect and scale options on the ImageDetails MDN page.

While we haven’t seen many reports of extension incompatibilities so far, Fission is a big architectural change to Firefox, and the web platform has many corner cases. You can help us find anything we missed by testing your extensions with Fission enabled and reporting any issues on Bugzilla.


Thank you to Michael Goossens for his multiple contributions to this release.

The post Extensions in Firefox 82 appeared first on Mozilla Add-ons Blog.

Blog of DataThis Week in Glean: FOG Progress report

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.)

All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

About a year ago chutten started the “This Week in Glean” series with an initial blog post about Glean on Desktop. Back then we were just getting started on bringing Glean to Firefox Desktop: no code had been written for Firefox Desktop, and no proposals had been written to discuss how we would even do it.

Now, 12 months later, after four completed milestones, a dozen or so proposals, and quite a bit of code, Project Firefox on Glean (FOG) is finally getting to a stage where we can actually use and test it. It’s not ready for prime time yet, but FOG is enabled in Firefox Nightly already.

Over the past 4 weeks I’ve been on and off working on building out our support for a C++ and a JavaScript API, so that Firefox engineers will soon be able to instrument their code using Glean from either language.

My work is done in bug 1646165. It’s still under review and needs some fine-tuning, but I hope to land it by next week. We will then gather some data from Nightly and validate that it works as expected (bug 1651110), before we let it ride the trains to land in stable (bug 1651111).

Our work won’t end there. Once my initial API work lands we can start supporting all metric types; we still need to figure out some details of how FOG will handle IPC; and then there’s the whole process of convincing other people to use the new system.

2020 will be the year of Glean on the Desktop.

Mozilla Add-ons BlogNew add-on badges

A few weeks ago, we announced the pilot of a new Promoted Add-ons program. This new program aims to expand the number of add-ons we can review and verify as compliant with our add-on policies in exchange for a fee from participating developers.

We have recently finished selecting the participants for the pilot, which will run until the end of November 2020. When these extensions successfully complete the review process, they will receive a new badge on their listing page on addons.mozilla.org (AMO) and in the Firefox Add-ons Manager (about:addons).

Verified badge as it appears on AMO

Verified badge as it appears in the Firefox Add-ons Manager

We also introduced the “By Firefox” badge to indicate add-ons that are built by Mozilla. These add-ons also undergo manual review, and we are currently in the process of rolling them out.

By Firefox badge as it appears on AMO

By Firefox badge as it appears in the Firefox Add-ons Manager

Recommended extensions will continue to use the existing Recommended badge in the same locations.

We hope these badges make it easy to identify which extensions are regularly reviewed by Mozilla’s staff. As a reminder, all extensions that are not regularly reviewed by Mozilla display the following caution label on their AMO listing page:

If you’re interested in installing a non-badged extension, we encourage you to first read these security assessment tips.

The post New add-on badges appeared first on Mozilla Add-ons Blog.

Open Policy & AdvocacyOpen Letter to South Korea’s ICT Minister, Mr. Ki-Young Choe: Ensure the TBA amendments don’t harm the open internet in South Korea

This week, as a global champion for net neutrality, Mozilla wrote to South Korea’s ICT Minister, Mr. Ki-Young Choe, to urge him to reconsider the Telecommunications Business Act (TBA) amendments. We believe they violate the principles of net neutrality and would make the South Korean internet a less open space.

In our view, forcing websites (content providers) to pay network usage fees will damage the Korean economy, harm local content providers, increase consumer costs, reduce quality of service, and make it harder for global services to reach South Koreans. While we recognise the need for stable network infrastructure, these amendments are the wrong way to achieve that goal and will leave South Korea a less competitive economy on the global stage.

As we’ve detailed in our open letter, we believe the current TBA amendments will harm the South Korean economy and internet in the following ways:

  • Undermining competition: Such a move would unfairly benefit large players, preventing smaller players who can’t pay these fees or shoulder equivalent obligations from competing against them. Limiting its application only to those content providers meeting certain thresholds of daily traffic volume or viewer numbers will not solve the problem as it deprives small players of opportunities to compete against the large ones.  In the South Korean local content ecosystem, where players such as Naver, Kakao and other similar services are being forced to pay exorbitant fees to ISPs to merely reach users, the oppressive effects of the service stabilization costs can become further entry barriers.
  • Limiting access to global services: This move would oblige content providers from all over the world who want to reach South Korean users to either pay local ISPs or be penalised by the Korean government for providing unstable services.  As a result, many will likely choose to not offer services in South Korea rather than comply with this law and incur its costs and risks.
  • Technical infeasibility: Requiring content providers to be responsible for service stability is impractical in the technical design of how the internet operates. Last mile network conditions are largely the responsibility of ISPs and is the core commodity they sell to their customers. Network management practices can have an outsized impact on how a user experiences a service, such as latency and download speeds. Under the amended law, a content provider, who has not been paid for the delivery of its contents by anyone, could be held liable for actions of the ISP as part of routine network management practices, including underinvestment in capacity, which is both unfair and impractical.
  • Driving up consumer costs: Content providers will likely pass the costs of these additional fees and infrastructure along to consumers, creating an impractical scenario in which users and content providers both pay more for an overall decrease in quality of service. That decrease would be real: providers who refuse to install local caches because of the mandatory fees will be forced to serve content from caches outside the territory of Korea, which are significantly slower (as we saw in 2017).

The COVID-19 pandemic has made clear that there has never been a greater need for an open and accessible internet that is a public resource available to all. This law, on the other hand, makes it harder for the Korean internet infrastructure to remain a model for the world to aspire towards in these difficult times. We urge the Ministry of Science and Information Communication Technologies to revisit the TBA amendment as a whole. We remain committed to engaging with the government and ensuring South Koreans enjoy access to the full diversity of the open internet.

The post Open Letter to South Korea’s ICT Minister, Mr. Ki-Young Choe: Ensure the TBA amendments don’t harm the open internet in South Korea appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogAdd-ons interns: developing software and careers

For the last several years, Mozilla has participated in the Google Summer of Code and Outreachy internship programs. Both programs offer paid three-month internship opportunities to students or other industry newcomers to work on a programming project with an open source organization. This year, we were joined by Lisa Chan and Atique Ahmed Ziad, from Outreachy and Google Summer of Code, respectively.

With mentorship from addons.mozilla.org (AMO) engineers Bob Silverberg and Andrew Williamson, Lisa built a Homepage Curation Tool to help our editorial staff easily make changes to the AMO homepage. Atique was mentored by Firefox engineers Luca Greco and Rob Wu, and senior add-on admin reviewer Andreas Wagner, and he developed a privileged extension for Firefox that monitors the activity of other installed extensions. This prototype is the starting point of a new feature that will help extension developers, add-on developers, and Firefox engineers investigate bugs in extensions or in the browser’s WebExtensions APIs.

We sat down with Lisa and Atique and asked them to share their internship experiences.

Question: What were the goals of your project? 

Lisa: I was able to achieve most of the goals of my project, which were to implement two major features and a minor feature on the AMO Homepage Curation Tool.

Two of the features (a major feature and the minor one) were associated with the admin user’s ability to select an image for the add-on in the primary hero area on the AMO homepage. Prior to implementing the major feature, the admin user could only choose an image from a gallery of pre-selected images. With the update, the admin user now has the option to upload an image as well. The admin tool also now utilizes the thumbnails of the images in the gallery following the minor feature update.

Screenshot of the AMO Homepage Admin tool

The second major feature involved providing the admin user with the ability to update and/or reorder the modules that appear under the hero area on the AMO homepage. These modules are sets of pre-defined add-on extensions and themes, such as Recommended Extensions, Popular Themes, etc, based on query data. The modules were previously built and rendered on the front-end separately, but they can now be created and managed on the server-side. When complete, the front-end will be able to retrieve and display the modules with a single call to a new API endpoint.

Screenshot of the AMO Homepage Curation Tool

Atique: My objective was to develop the minimum viable product of an Extension Activity Monitor. In the beginning, my mentors and I formulated user stories as part of the requirements analysis and agreed on which ones to complete during the GSoC period.

The goals of Extension Activity Monitor were to be able to start and stop monitoring all installed extensions in Firefox, capture the real-time activity logs from monitored extensions and display them in a meaningful way, filter the activity logs, save the activity logs in a file, and load activity logs from a file.

Screenshot of Extension Activity Monitor prototype

Extension Activity Monitor in action

Question: What accomplishment from your internship are you most proud of? 

Lisa: Each pull request approval was a tiny victory for me, as I not only had to learn how to fix each issue, but learn Python and Django as well. I was only familiar with the front-end and had expected the internship project to be front-end, as all of my contributions on my Outreachy application were to the addons-frontend repository. While I had hoped my mentors would let me get a taste of the server-side by working on a couple of minor issues, I did not expect my entire project to be on the server-side. I am proud to have learned and utilized a new language and framework during my internship with the guidance of Bob and Andrew.

Atique: I am proud that I have successfully made a working prototype of Extension Activity Monitor and implemented most of the features that I had written in my proposal. But the accomplishment I am proudest of is that my skillset is now at a much higher level than it was before this internship.

Question: What was the most surprising thing you learned?

Lisa: I was surprised to learn how much I enjoyed working on the server-side. I had focused on learning front-end development because I thought I’d be more inclined to the design elements of creating the user interface and experience. However, I found the server-side to be just as intriguing, as I was drawn to creating the logic to properly manage the data.

Atique: My mentors helped me a lot in writing more readable, maintainable and good quality code, which I think is the best part of my learnings. I never thought about those things deeply in any project that I had worked on before this internship. My mentors said, “Code is for humans first and computer next,” which I am going to abide by forever.

Question: What do you plan to do after your internship?

Lisa: I plan to continue learning both front-end and back-end development while obtaining certifications. In addition, I’ll be seeking my next opportunity in the tech industry as well.

Atique: Participating in Google Summer of Code was really helpful to understand my strengths and shortcomings. I received very useful and constructive feedback from my mentors. I will be working on my shortcomings and make sure that I become a good software engineer in the near future.

Question: Is there anything else you would like to share? 

Lisa: I’m glad my internship project was on the server-side, as it not only expanded my skill set, but also opened up more career opportunities for me. I’m also grateful for the constant encouragement, knowledge, and support that Bob and Andrew provided me throughout this whole experience. I had an incredible summer with the Firefox Add-ons team and Mozilla community, and I am thankful for Outreachy for providing me with this amazing opportunity.

Atique: I think opportunities like Google Summer of Code and Outreachy provide a great learning experience for students. I am thankful to these programs’ organizers, and also to Mozilla for participating in programs that give students like me the opportunity to work with awesome mentors.

Congratulations on your achievements, Lisa and Atique! We look forward to seeing your future accomplishments. 

The post Add-ons interns: developing software and careers appeared first on Mozilla Add-ons Blog.

hacks.mozilla.orgTo Eleventy and Beyond

In 2018, we launched Firefox Extension Workshop, a site for Firefox-specific extension development documentation. The site was originally built using the Ruby-based static site generator Jekyll. We had initially selected Jekyll for this project because we wanted to make it easy for editors to update the site using Markdown, a lightweight markup language.

Once the site had been created and more documentation was added, the build times started to grow. Every time we made a change to the site and wanted to test it locally, it would take ten minutes or longer for the site to build. The builds took so long that we needed to increase the default time limit for CircleCI, our continuous integration and continuous delivery service, because builds were failing when they ran past ten minutes with no output.

Investigating these slow builds using profiling showed that most of the time was being spent in the Liquid template rendering calls.

In addition to problems with build times, there were also issues with cache-busting in local development. This meant that things like a change of images wouldn’t show up in the site without a full rebuild after clearing the local cache.

As we were discussing how to best move forward, Jekyll 4 was released with features expected to improve build performance. However, an early test of porting to this version actually performed worse than the previous version. We then realized that we needed to find an alternative and port the site.

Update: 05-10-2020: A Jekyll developer reached out having investigated the slow builds. The finding was that Jekyll on its own isn’t slow; the high build times in our case were primarily caused by third-party plugins.

Looking at the Alternatives

The basic requirements for moving away from Jekyll were as follows:

  • Build performance needed to be better than the 10 minutes (600s+) it took to build locally.
  • Local changes should be visible as quickly as possible.
  • Ideally the solution would be JavaScript based, as the Add-ons Engineering team has a lot of collective JavaScript experience and a JavaScript based infrastructure would make it easier to extend.
  • It needed to be flexible enough to meet the demands of adding more documentation in the future.

Hugo (written in Go), Gatsby (JS), and Eleventy (11ty) (JS) were all considered as options.

In the end, Eleventy was chosen for one main reason: it provided enough similarities to Jekyll that we could migrate the site without a major rewrite. In contrast, both Hugo and Gatsby would have required significant refactoring. Porting the site also meant fewer changes up front, which allowed us to focus on maintaining parity with the site already in production.

Because Eleventy provides an option to use Liquid templates, via LiquidJS, the existing templates needed only relatively minimal changes to work.

The Current Architecture

There are four main building blocks in the Jekyll site: Liquid templating, Markdown for documentation, Sass for CSS, and jQuery for behaviour.

To migrate to Eleventy we planned to minimize changes to how the site works, and focus on porting all of the existing documentation without changing the CSS or JavaScript.

Getting Started with the Port

The blog post Turning Jekyll up to Eleventy by Paul Lloyd was a great help in describing what would need to be done to get the site working under Eleventy.

The first step was to create an Eleventy configuration file based on the old Jekyll one.
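As an illustration of what such a file involves, here is a minimal .eleventy.js sketch. This is not the site’s actual configuration; the directory names, Liquid options, and template formats here are assumptions.

```javascript
// Minimal, illustrative .eleventy.js (not the real site config).
const config = function (eleventyConfig) {
  // Nudge LiquidJS towards Jekyll-style Liquid behaviour.
  eleventyConfig.setLiquidOptions({
    dynamicPartials: false,
    strictFilters: false,
  });

  // Serve assets straight from the assets directory,
  // replacing the jekyll-assets plugin.
  eleventyConfig.addPassthroughCopy('assets');

  return {
    dir: {
      input: 'src',
      includes: '_includes',
      data: '_data',
      output: '_site',
    },
    templateFormats: ['liquid', 'md', 'html'],
  };
};

module.exports = config;
```

The returned object tells Eleventy where to read content from and where to write the built site, which is the rough equivalent of Jekyll’s _config.yml source and destination settings.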

Data files were converted from YAML to JSON.

Next, the templates were updated to fix the differences in variables and the includes. The jekyll-assets plugin syntax was removed so that assets were directly served from the assets directory.

Up and Running with Statics

As a minimal replacement for the Jekyll plugins for CSS and JS, the CSS (Sass) and JS were built with command line interface (CLI) scripts added to the package.json, using UglifyJS and Sass.

For Sass, this required loading paths via the CLI and then just passing the main stylesheet:

  _assets/css/styles.scss _assets/css/styles.css

For JS, every script was passed to uglify in order:

  -o _assets/js/bundle.js

This was clearly quite clunky, but it got JS and CSS working in development, albeit in a way that required a script to be run manually. However, with the CSS and JS bundles working, this made it possible to focus on getting the homepage up and running without worrying about anything more complicated to start with.

With a few further tweaks the homepage built successfully:

Screenshot of the Extension Workshop homepage built with Eleventy

Getting the entire site to build

With the homepage looking like it should, the next task was fixing the rest of the syntax. This was a pretty laborious process of updating all the templates and removing or replacing plugins and filters that Eleventy didn’t have.

After some work fixing up the files, finally the entire site could be built without error! 🎉

What was immediately apparent was how fast Eleventy is. The entire site was built in under 3 seconds. That is not a typo; it is 3 seconds, not minutes.

Doc from Back to the Future when he first realizes how fast Eleventy is: “Under 3 seconds… Great Scott!”

Improving Static Builds

Building JS and CSS is not part of Eleventy itself. This means it’s up to you to decide how you want to handle statics.

The goals were as follows:

  • Keep it fast.
  • Keep it simple (especially for content authors).
  • Have changes be reflected as quickly as possible.

The first approach moved the CSS and JS builds to Node scripts, which replicated the crude CLI invocations via the libraries’ APIs.

Ultimately we decided to decouple asset building from Eleventy entirely. This meant that Eleventy could worry about building the content, and a separate process would handle the static assets. It also meant that the static asset scripts could write out to the build folder directly.

This made it possible to have the static assets built in parallel with the Eleventy build. The downside was that Eleventy couldn’t tell the browserSync instance (the server Eleventy uses in development) to update, as it wasn’t involved in this process. It also meant that watching the JS and Sass source files needed a separate watch configuration, which in this case was handled via chokidar-cli. Here’s the command used in the package.json script for the CSS build:

chokidar 'src/assets/css/*.scss' -c 'npm run sass:build'

The sass:build script runs bin/build-styles.
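Put together, the relevant package.json scripts might look something like the sketch below. Only the CSS watch command is taken from the post; the other script names and paths are assumptions for illustration.

```json
{
  "scripts": {
    "sass:build": "bin/build-styles",
    "sass:watch": "chokidar 'src/assets/css/*.scss' -c 'npm run sass:build'",
    "js:build": "bin/build-js",
    "js:watch": "chokidar 'src/assets/js/*.js' -c 'npm run js:build'",
    "eleventy:watch": "eleventy --watch"
  }
}
```

Running the watch scripts alongside `eleventy --watch` gives the parallel content and asset builds described above.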

Telling BrowserSync to update JS and CSS

With the JS and CSS being built correctly, we needed to tell the browserSync instance that the JS and CSS had changed. This way, when you make a CSS change, the updated styles show up without a full page refresh. Fast updates provide the ideal short feedback loop for iterative changes.

Fortunately, browserSync has a web API. We were able to use this to tell browserSync to update every time the CSS or JS build is built in development.

For the style bundle, the URL called is http://localhost:8081/__browser_sync__?method=reload&args=styles.css

To handle this, the build script fetches this URL whenever new CSS is built.

Cleaning Up

Next, we needed to clean up everything and make sure the site looked right. Missing plugin functionality needed to be replaced and docs and templates needed several adjustments.

Here’s a list of some of the tasks required:

  • Build a replacement for the Jekyll SEO plugin in templating and computed data.
  • Clean-up syntax-highlighting and move to “code fences.”
  • Use the 11ty “edit this page on GitHub” recipe (a great feature for making it easier to accept documentation improvements).
  • Clean up templates to fix some errant markup. This was mainly about whitespace control in Liquid, as in some cases there were bad interactions with the Markdown that resulted in spurious extra elements.
  • Recreate the pages API plugin. Under Jekyll this was used by the search, so we needed to recreate this for parity in order to avoid re-implementing the search from scratch.
  • Build tag pages instead of using the search. This was also done for SEO benefits.

Building for Production

With the site looking like it should and a lot of the minor issues tidied up, one of the final steps was to look at how to handle building the static assets for production.

For optimal performance, static assets (JS, CSS, images, fonts, etc.) should not need to be re-fetched by the browser if they are already in the browser’s cache. To achieve this, you can serve assets with expires headers set far into the future (typically one year). If foo.js is cached, the browser won’t re-fetch it unless the cache is cleared or the cached copy is older than the expiration date set via the expires header in the original response.
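As a concrete illustration, a far-future response for such an asset might carry headers along these lines (the date and max-age value here are made up for the example):

```
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: public, max-age=31536000, immutable
Expires: Fri, 01 Oct 2021 00:00:00 GMT
```

A max-age of 31536000 seconds is one year, the typical “far future” value.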

Once you’re caching assets in this way, the URL of the resource needs to be changed to “bust the cache”, otherwise the browser won’t make a request if the cached asset is considered “fresh.”

Cache-busting strategies

Here are a few standard approaches to cache-busting URLs:

Whole Site Version Strings

The git revision of the entire site can be used as part of every asset URL. Every time a revision is added and the site is published, every asset URL changes, so every asset is fetched anew. The downside is that clients download every asset again even if it hasn’t actually changed since the last time the site was published.

Query Params

With this approach, a query string is appended to a URL with either a hash of the contents or some other unique string, e.g. styles.css?v=4ab8sbc7.

A downside of this approach is that in some cases caching proxies won’t consider the query string, which could result in stale assets being served. Even so, this approach is pretty common, and caching proxies don’t typically ignore query params by default.

Content Hashes with Server-side Rewrites

In this scenario, you change your asset references to point to files with a hash as part of the URL. The server is configured to rewrite those resources internally to ignore the hash.

For example, if your HTML refers to foo.4ab8sbc7.css, the server will serve foo.css. This means you don’t need to rename the actual file foo.css to foo.4ab8sbc7.css.

This requires server config to work, but it’s a pretty neat approach.
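For instance, an nginx server could implement this with a rewrite rule along these lines (an illustrative sketch, not a config from the post; it assumes an 8-character hex hash):

```nginx
# Internally map /assets/foo.4ab8sbc7.css back to /assets/foo.css.
location /assets/ {
    rewrite ^(/assets/.+)\.[0-9a-f]{8}(\.\w+)$ $1$2 last;
    expires 1y;
}
```

The rewrite strips the hash segment before looking up the file on disk, so references can change freely without renaming files.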

Content Hashes in Built Files

In this approach, once you know the hash of your content, you need to update the references to that file, and then you can output the file with that hash as part of the filename.

The upside of this approach is that once the static site is built this way it can be served anywhere and you don’t need any additional server config as per the previous example.

This was the strategy we decided to use.

Building an Asset Pipeline

Eleventy doesn’t have an asset pipeline, though this is something that is being considered for the future.

In order to deploy the Eleventy port, cache-busting would be needed so we could continue to deploy to S3 with far future expires headers.

With the jekyll-assets plugin, you used Liquid templating to control asset building. Ideally, I wanted to avoid content authors needing to know about cache-busting at all.

To make this work, all asset references would need to start with “/assets/” and could not be built with variables. Given the simplicity of the site, this was a reasonable limitation.

With asset references easily found, a solution to implement cache-busting was needed.

First attempt: Using Webpack as an asset pipeline

The first attempt used Webpack. We almost got it working using eleventy-webpack-boilerplate as a guide; however, this started to introduce differences between the Webpack configs for dev and production, since it essentially used the built HTML as an entry point. This was because cache-busting and optimizations were deliberately kept out of the dev process, to keep the local development build as fast as possible. Getting this working became more and more complex, requiring specially patched forks of loaders because of limitations in the way HTML extraction worked.

Webpack also didn’t work well for this project because of the way it expects to understand relationships in JS modules in order to build a bundle. The JS for this site was written in an older style where the scripts are concatenated together in a specific order without any special consideration for dependencies (other than script order). This alone required a lot of workarounds to work in Webpack.

Second attempt: AssetGraph

On paper, AssetGraph looked like the perfect solution:

AssetGraph is an extensible, node.js-based framework for manipulating and optimizing web pages and web applications. The main core is a dependency graph model of your entire website, where all assets are treated as first class citizens. It can automatically discover assets based on your declarative code, reducing the configuration needs to a minimum.

From the AssetGraph README

The main concept is that the relationship between HTML documents and other resources is essentially a graph problem.

AssetGraph-builder builds on AssetGraph: using your HTML as a starting point, it works out all the relationships and optimizes all of your assets along the way.

This sounded ideal. However, when I ran it on the built content of the site, Node ran out of memory. With no control over what it was doing and minimal feedback as to where it was stuck, this attempt was shelved.

That said, the overall goals of the AssetGraph project are really good, and this looks like it’s something worth keeping an eye on for the future.

Third attempt: Building a pipeline script

In the end, the solution that ended up working best was to build a script that would process the assets after the site build was completed by Eleventy.

The way this works is as follows:

  • Process all the binary assets (images, fonts, etc.) and record a content hash for each.
  • Process the SVG and record a content hash for them.
  • Process the CSS, and rewrite references to resources within them, minify the CSS, and record a content hash for them.
  • Process the JS, and rewrite references to resources within them, minify the JS, and record a content hash for them.
  • Process the HTML and rewrite the references to assets within them.
  • Write anything out that hasn’t already been written to a new directory.
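The rewrite steps above hinge on a map from original asset paths to their hashed equivalents, built up while hashing the files. A minimal sketch of the reference-rewriting step (the map shape and function name are assumptions):

```javascript
// Replace every "/assets/..." reference in a text file (HTML, CSS, or JS)
// with the content-hashed path recorded for it during earlier passes.
function rewriteAssetRefs(text, assetMap) {
  let out = text;
  for (const [original, hashed] of Object.entries(assetMap)) {
    // split/join replaces all occurrences without regex-escaping.
    out = out.split(original).join(hashed);
  }
  return out;
}
```

This is also why the processing order matters: binary assets and SVGs are hashed first, so their final names already exist in the map by the time the CSS, JS, and HTML that reference them are rewritten.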

Note: there might be some edge cases not covered here. For example, if a JS file references another JS file, this could break depending on the order of processing. The script could be updated to handle this, but it would mean re-processing changed files, updating anything that references them, and so on. Since this isn’t a concern for the current site, it was left out for simplicity. There’s also no circular dependency detection. Again, for this site and most other basic sites this won’t be a concern.

There’s a reason optimizations and cache-busting aren’t part of the development build. This separation helps to ensure the site continues to build really fast when making changes locally.

This is a trade-off, but as long as you check the production build before you release, it’s a reasonable compromise. In our case, we have a development site built from master, a staging site built from tags with a -stage suffix, and the production site. All of these deployments run the production build process, so there’s plenty of opportunity to catch issues with full builds.


Porting to Eleventy has been a positive change. It certainly took quite a lot of steps to get there, but it was worth the effort.

In the past, long build times with Jekyll made this site really painful for contributors and document authors to work with. We’re already starting to see some additional contributions as a result of lowering the barrier to entry.

With this port complete, it’s now starting to become clear what the next steps should be to minimize the boilerplate for document authors and make the site even easier to use and contribute to.

If you have a site running under Jekyll or are considering using a modern static site generator, then taking a look at Eleventy is highly recommended. It’s fast, flexible, well documented, and a joy to work with. ✨

The post To Eleventy and Beyond appeared first on Mozilla Hacks - the Web developer blog.

Open Policy & AdvocacyMozilla Partners with the African Telecommunications Union to Promote Rural Connectivity

Mozilla and the African Telecommunications Union (ATU) have signed a Memorandum of Understanding (MOU) for a joint project that will promote rural connectivity in the Africa region. “The project, pegged to the usage of spectrum policy, regulations and practices, is designed to ensure affordable access to communication across the continent,” said ATU Secretary-General John OMO. “Figuring out how to make spectrum accessible, particularly in rural areas, is critical to bringing people online throughout the African continent,” said Mitchell Baker, CEO of Mozilla, “I’m committed to Mozilla making alliances to address this challenge.”

While half the world is now connected to the internet, the existing policy, regulatory, financial, and technical models are not fit for purpose to connect the poorer and more sparsely populated rural areas. More needs to be done to achieve the United Nations’ universal access goals by 2030. Clear policy and regulatory interventions that can support innovation, and new business models to speed up progress, are urgently required.

Access to the internet should not be a luxury, but a global public resource that is open and accessible to all. This is particularly true during a global public health crisis, as it underpins the deployment of digital healthcare solutions for detecting COVID-19.

Rural connectivity in the African region presents a unique set of challenges. More than 60% of Africa’s population lives in rural areas, which lack the resources and infrastructure needed to connect them. Potential users are often spread out, making it difficult to support the traditional business case for the investments necessary to establish broadband infrastructure.

There are many factors that contribute to this digital divide, but one of the biggest challenges is making wireless spectrum available to low-cost operators, who are prepared to deploy new business models for rural access.

Spectrum licenses are bets: 10-15 year commitments by mobile operators to national coverage. As the demand for wireless spectrum continues to increase beyond its administrative availability, policy-makers and regulators have increasingly turned to spectrum auctions to assign a limited number of licenses. However, spectrum auctions act as a barrier to competition, creating financial obstacles for innovative, smaller service providers who could bring new technology and business models to rural areas. In addition, the high fees associated with these auctions are a disincentive for larger mobile operators to roll out services in rural areas, resulting in dramatically under-utilised spectrum.

To unlock innovation and investment, we must develop policy and regulatory instruments to address access to spectrum in rural areas. Mozilla has partnered with the ATU to facilitate a dialogue among regulators, policy-makers, and other stakeholders, to explore ways to unlock the potential of the unused spectrum. Mozilla and the ATU will develop recommendations based on these dialogues and good practice. The recommendations will be presented at the 2021 Annual ATU Administrative Council meeting.

The post Mozilla Partners with the African Telecommunications Union to Promote Rural Connectivity appeared first on Open Policy & Advocacy.

Mozilla Add-ons BlogExpanded extension support in Firefox for Android Nightly

A few weeks ago, we mentioned that we were working on increasing extension support in the Firefox for Android Nightly pre-release channel. Starting September 30, you will be able to install any extension listed on addons.mozilla.org (AMO) in Nightly.

This override was created for extension developers and advanced users who are interested in testing for compatibility, so it’s not easily accessible. Installing untested extensions can lead to unexpected outcomes; please be judicious about the extensions you install. Also, since most developers haven’t been able to test and optimize their extensions for the new Android experience, please be kind if something doesn’t work the way it should. We will remove negative user reviews about extension performance in Nightly.

Currently, Nightly uses the Collections feature on AMO to install extensions. You will need to create a collection on AMO and change an advanced setting in Nightly in order to install general extensions.

Create a collection on AMO

Follow these instructions to create a collection on AMO. You can name the collection whatever you like as long as it does not contain any spaces in the name. When you are creating your collection, you will see a number in the Custom URL field. This number is your user ID. You will need the collection name and user ID to configure Nightly in the following steps.

Screenshot of the Create a Collection page

Once your collection has been created, you can add extensions to it. Note that you will only be able to add extensions that are listed on AMO.

You can edit this collection at any time.

Enable general extension support setting in Nightly

You will need to make a one-time change to Nightly’s advanced settings to enable general extension installation.

Step 1: Tap on the three dot menu and select Settings.

Screenshot of the Firefox for Android menu

Step 2: Tap on About Firefox Nightly.

Screenshot of the Fenix Settings Menu

Step 3: Tap the Firefox Nightly logo five times until the “Debug menu enabled” notification appears.

Screenshot of About Firefox Nightly

Screenshot - Debug menu enabled

Step 4: Navigate back to Settings. You will now see a new entry for “Custom Add-on collection.” Once a custom add-on collection has been set up, this menu item will always be visible.

Screenshot - Settings

Step 5: Configure your custom add-on collection. Use the collection name and your user ID from AMO for the Collection Owner (User ID) and Collection name fields, respectively.

Screenshot of interface for adding a custom add-on collection



After you tap “OK,” the application will close and restart.

WebExtensions API support

Most of the WebExtensions APIs supported on the previous Firefox for Android experience are supported in the current application. The notable exceptions are the (implementation in progress) and the browserData APIs. You can see the current list of compatible APIs on MDN.

Extensions that use unsupported APIs may be buggy or not work at all on Firefox for Android Nightly.

User Experience

The new Firefox for Android has a much different look and feel than Firefox for desktop, or even the previous Android experience. Until now, we’ve worked with the developers of Recommended Extensions directly to optimize the user experience of their extensions for the release channel. We plan to share these UX recommendations with all developers on Firefox Extension Workshop in the upcoming weeks.

Coming next

We will continue to publish our plans for increasing extension support in Firefox for Android as they solidify. Stay tuned to this blog for future updates!

The post Expanded extension support in Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.

Mozilla Add-ons BlogMore Recommended extensions added to Firefox for Android Nightly

As we mentioned recently, we’re adding Recommended extensions to Firefox for Android Nightly as a broader set of APIs become available to accommodate more add-on functionality. We just updated the collection with some new Recommended extensions, including…

Mobile favorites Video Background Play Fix (keeps videos playing in the background even when you switch tabs) and Google Search Fixer (mimics the Google search experience on Chrome) are now in the fold.

Privacy related extensions FoxyProxy (proxy management tool with advanced URL pattern matching) and Bitwarden (password manager) join popular ad blockers Ghostery and AdGuard.

Dig deeper into web content with Image Search Options (customizable reverse image search tool) and Web Archives (view archived web pages from an array of search engines). And if you end up wasting too much time exploring images and cached pages you can get your productivity back on track with Tomato Clock (timed work intervals) and LeechBlock NG (block time-wasting websites).

The new Recommended extensions will become available for Firefox for Android Nightly on 26 September. If you’re interested in exploring these new add-ons and others on your Android device, install Firefox Nightly and visit the Add-ons menu. Barring major issues while testing on Nightly, we expect these add-ons to be available in the release version of Firefox for Android in November.

The post More Recommended extensions added to Firefox for Android Nightly appeared first on Mozilla Add-ons Blog.