SeaMonkey: SeaMonkey 2.53.14 is now out!

Hi everyone,

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.14! There have been some changes, so please check out [1] and/or [2].

[1] –



PS: A confession: I haven’t been working too closely with the code and have only been doing the release engineering bit. So kudos for actually building the code and making sure it runs goes to everyone else (Ian and frg both driving the releases). While I don’t understand the code layout anymore, I do hope to figure it out soon. I just need a new laptop to build stuff, and time. *sigh*

PPS: I’m slowly getting the crash stats working.

Open Policy & Advocacy: Mozilla Responds to EU General Court’s Judgment on Google Android

This week, the EU’s General Court largely upheld the decision sanctioning Google for restricting competition on the Android mobile operating system. But, on their own, the judgment and the record fine do not help to unlock competition and choice online, especially when it comes to browsers.

In July 2018, when the European Commission announced its decision, we expressed hope that the result would help to level the playing field for independent browsers like Firefox and provide real choice for consumers. Sadly for billions of people around the world who use browsers every day, this hope has not been realized – yet.

The case may rumble on in appeals for several more years, but Mozilla will continue to advocate for an Internet which is open, accessible, private, and secure for all, and we will continue to build products which advance this vision. We hope that those with the power to improve browser choice for consumers will also work towards these tangible goals.

The post Mozilla Responds to EU General Court’s Judgment on Google Android appeared first on Open Policy & Advocacy.

SUMO Blog: Tribute to FredMcD

It brings us great sadness to share the news that FredMcD has recently passed away.

If you ever posted a question to our Support Forum, you may be familiar with a contributor named “FredMcD”. Fred was one of the most active contributors in Mozilla Support and, for many years, remained one of our core contributors. He was awarded a forum contributor badge every year since 2013 for his consistent contributions to the Support Forum.

He was a dedicated contributor, super helpful, and very loyal to Firefox users, making over 81,400 contributions to the Support Forum since 2013. During the COVID-19 lockdown period, he focused on helping people all over the world when they were online the most – at one point he was doing approximately 3,600 responses in 90 days, an average of 40 a day.

In March 2022, I learned the news that he had been hospitalized for a few weeks. He was back and active in our forum shortly after he was discharged, but we never heard from him again after his last contribution on May 5, 2022. There is very little we know about Fred, but we were finally able to confirm his passing just recently.

We have surely lost a great contributor. He was a helpful community member, and his assistance with incidents was greatly appreciated. His support approach was always straightforward and simple; it was not rare for him to solve a problem in a single reply.

To honor his passing, we added his name to the about:credits page to make sure that his contribution and impact on Mozilla will never be forgotten. He will surely be missed by the community.

I’d like to thank Paul for his collaboration in this post and for his help in getting Fred’s name to the about:credits page. Thanks, Paul!


The Mozilla Thunderbird Blog: Thunderbird Tip: Customize Colors In The Spaces Toolbar

In our last video tip, you learned how to manually sort the order of all your mail and account folders. Let’s keep that theme of customization rolling forward with a quick video guide on customizing the Spaces Toolbar that debuted in Thunderbird 102.

The Spaces Toolbar is on the left-hand side of your Thunderbird client and gives you fast, easy access to your most important activities. With a single click you can navigate between Mail, Address Books, Calendars, Tasks, Chat, settings, and your installed add-ons and themes.

Watch below how to customize it!

Video Guide: Customizing The Spaces Toolbar In Thunderbird

This 2-minute tip video shows you how to easily customize the Spaces Toolbar in Thunderbird 102.

*Note that the color tools available to you will vary depending on the operating system you’re using. If you’re looking to discover some pleasing color palettes, we recommend the excellent, free tools at

Have You Subscribed To Our YouTube Channel?

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. We’re also putting out more content and communication across various platforms to keep you informed. And, of course, to show you some great usage tips along the way.

To accomplish that, we’ve launched our YouTube channel to help you get the most out of Thunderbird. You can subscribe here. Help us reach more people than ever before by liking each video and leaving a comment if it helped!

Another Tip Before You Go?

The post Thunderbird Tip: Customize Colors In The Spaces Toolbar appeared first on The Thunderbird Blog.

The Mozilla Blog: Announcing Carlos Torres, Mozilla’s new Chief Legal Officer

I am pleased to announce that starting today, September 12, Carlos Torres has joined Mozilla as our Chief Legal Officer. In this role Carlos will be responsible for leading our global legal and public policy teams, developing legal, regulatory and policy strategies that support Mozilla’s mission. He will also manage all regulatory issues and serve as a strategic business partner helping us accelerate our growth and evolution. Carlos will also serve as Corporate Secretary. He will report to me and join our steering committee.

Carlos Torres joins Mozilla’s executive team.

Carlos stood out in the interview process because of his great breadth of experience across many topics including strategic and commercial agreements, product, privacy, IP, employment, board matters, investments, regulatory and litigation. He brings experience in both large and small companies, and in organizations with different risk profiles as well as a deep belief in Mozilla’s commitment to innovation and to an open internet.

“Mozilla continues to be a unique and respected voice in technology, in a world that needs trusted institutions more than ever,” said Torres. “There is no other organization that combines community, product, technology and advocacy to produce trusted innovative products that people love. I’ve always admired Mozilla for its principled, people-focused approach and I’m grateful for the opportunity to contribute to Mozilla’s mission and evolution.”

Carlos comes to us most recently from Flashbots, where he led the company’s legal and strategic initiatives. Prior to that, he was General Counsel for two startups and spent over a decade at Salesforce in a variety of leadership roles, including VP, Business Development and Strategic Alliances and VP, Associate General Counsel, Chief of Staff. He also served as senior counsel of a biotech company and started his legal career at Orrick, Herrington & Sutcliffe.

The post Announcing Carlos Torres, Mozilla’s new Chief Legal Officer appeared first on The Mozilla Blog.

Mozilla L10N: Redesigned profile page now available in Pontoon

Back in February 2022, we reached out to our community to ask for feedback on a proposal to completely rethink the profile page in Pontoon.

The goal was to improve the experience for everyone on the platform, transforming this page into an effective tool that could showcase contributions, provide useful contact information, and help locale managers to grow their communities.

As a reminder, these were the user stories that we defined to help us guide the design process.

As a contributor, I want to be able to:

  • Quickly see how much I contribute over time to localization.
  • Share my profile with potential employers in the localization industry, or use it to demonstrate my involvement in projects as a volunteer.
  • Control visibility of personal information displayed in my profile.
  • See data about the quality of my contributions and use it to make a case for promotion with locale managers or administrators.
  • See when my own suggestions have been reviewed and access them.

As a translator, I want to be able to:

  • See if a person has usually been providing good quality translations.
  • Check if the person has specific permissions for my own locale, potentially for other locales.

As a locale manager, I want to be able to:

  • See the quality of contributions provided by a specific person.
  • See how frequently a person contributes translations, and the amount of their contributions.

As an administrator (or project manager), I want to be able to:

  • See data about the user:
    • When they signed up.
    • When they last logged in to Pontoon, or were last active on the platform.
    • Quickly assess the frequency of contributions by type (reviews performed, translations).
    • Which projects and locales they contributed to.
    • Get a sense of the quality and amount of their contribution.
  • Easily access contributions by a specific person.

We’re happy to announce that the vast majority of the work has been completed, and you can already see it online in Pontoon. You can click on your profile icon in the top right corner, then click again on the name/icon in the dropdown to display your personal profile page (or you can see an example here).

Pontoon New Profile

In the left column, you can find information about the user: contact details, roles, as well as last known activity.

Each user can customize this information in the updated Settings page (click the CHANGE SETTINGS button to access it), where it’s possible to enter data as well as determine the visibility of some of the fields.

In the top central section there are two new graphs:

  • Approval rate shows the ratio between the number of translations approved and the total number of translations reviewed, excluding self-approved translations.
  • Self-approval rate is only visible for users with translator rights, and shows the ratio between the number of translations submitted directly — or self-approved after submitting them as suggestions — and the total number of translations approved.

Right below these graphs, there is a section showing a graphical representation of the user’s activity in the last year:

  • Each square represents a day, while each row represents a day of the week. The lighter the color, the higher the number of contributions on that day.
  • By default, the chart will show data for Submissions and reviews, which means translations submitted and reviews performed. We decided to use this as default among all the options, since it actually shows actions that require an active role from the user.
  • The chart will display activity for the last year, while the activity log below will by default display activity in more detail for the last month. Clicking on a specific square (day) will only show the activity for that day.

It’s important to note that the activity log includes links that allow you to jump to those specific strings in the translation editor, and that includes reviews performed or received, for which a new filter has been implemented.

We hope that you’ll find this new profile page useful in your day-to-day contributions to Mozilla. If you encounter any problems, don’t hesitate to file an issue on GitHub.


hacks.mozilla.org: The 100% Markdown Expedition

A snowy mountain peak at sunset


In June 2021, we decided to start converting the source code for MDN web docs from HTML into a format that would be easier for us to work with. The goal was to get 100% of our manually written documentation converted to Markdown, and we really had a mountain of source code to climb for this particular expedition.

In this post, we’ll describe why we decided to migrate to Markdown, and the steps you can take that will help us on our mission.

Why get to 100% Markdown?

We want to get all active content on MDN Web Docs to Markdown for several reasons. The top three reasons are:

  • Markdown is a much more approachable and friendlier way to contribute to MDN Web Docs content. Having all content in Markdown will help create a unified contribution experience across languages and repositories.
  • With all content in Markdown, the MDN engineering team will be able to clean up a lot of the currently maintained code. Having less code to maintain will enable them to focus on improving the tooling for writers and contributors. Better tooling will lead to a more enjoyable contribution workflow.
  • All content in Markdown will allow the MDN Web Docs team to run the same linting rules across all active languages.

Here is the tracking issue for this project on the translated content repository.


This section describes the tools you’ll need to participate in this project.


If you do not have git installed, you can follow the steps described on this getting started page.

If you are on Linux or macOS, you may already have Git. To check, open your terminal and run: git --version

On Windows, there are a couple of options:


We’re tracking source code and managing contributions on GitHub, so the following will be needed:

• A GitHub account.
• The GitHub CLI to follow the commands below. (Encouraged but optional; if you are already comfortable using Git, you can accomplish all the same tasks without the GitHub CLI.)


First, install nvm – or on Windows

Once all of the above is installed, install Node.js version 16 with nvm:

nvm install 16
nvm use 16
node --version

This should output a Node.js version number similar to v16.15.1.


You’ll need code and content from several repositories for this project, as listed below.

You only need to fork the translated-content repository. We will make direct clones of the other two repositories.

Clone the above repositories and your fork of translated-content as follows using the GitHub CLI:

gh repo clone mdn/markdown
gh repo clone mdn/content
gh repo clone username/translated-content # replace username with your GitHub username
Setting up the conversion tool

cd markdown

You’ll also need to add some configuration via a .env file. In the root of the directory, create a new file called .env with the following contents:

Setting up the content repository

cd .. # This moves you out of the `markdown` folder
cd content

Converting to Markdown

I will touch on some specific commands here, but for detailed documentation, please check out the markdown repo’s README.

We maintain a list of documents that need to be converted to Markdown in this Google sheet. There is a worksheet for each language. The worksheets are sorted in the order of the number of documents to be converted in each language – from the lowest to the highest. You do not need to understand the language to do the conversion. As long as you are comfortable with Markdown and some HTML, you will be able to contribute.

NOTE: You can find a useful reference to the flavor of Markdown supported on MDN Web Docs. There are some customizations, but in general, it is based on GitHub Flavored Markdown.

The steps
Creating an issue

On the translated-content repository, go to the Issues tab and click the “New issue” button. As mentioned in the introduction, there is a tracking issue for this work, so it is good practice to reference the tracking issue in the issue you create.

You will be presented with three options when you click the “New issue” button. For our purposes here, we will choose the “Open a blank issue” option. For the title of the issue, use something like, “chore: convert mozilla/firefox/releases for Spanish to Markdown”. In your description, you can add something like the following:

As part of the larger 100% Markdown project, I am converting the set of documents under mozilla/firefox/releases to Markdown.

NOTE: You will most likely be unable to assign an issue to yourself. The best thing to do here is to mention the localization team member for the appropriate locale and ask them to assign the issue to you. For example, on GitHub you would add a comment like this: “Hey @mdn/yari-content-es I would like to work on this issue, please assign it to me. Thank you!”

You can find a list of teams here.

Updating the spreadsheet

The tracking spreadsheet contains a couple of fields that you should update if you intend to work on specific items. First, add your GitHub username and link the text to your GitHub profile. Second, set the status to “In progress”. In the issue column, paste a link to the issue you created in the previous step.

Creating a feature branch

It is common practice on projects that use Git and GitHub to follow a feature branch workflow, so you need to create a feature branch for this work on the translated-content repository. To do this, we will again use our issue as a reference.

Let’s say your issue was called “chore: convert mozilla/firefox/releases for Spanish to Markdown” with an id of 8192. You will do the following at the root of the translated-content repository folder:

NOTE: The translated content repository is a very active repository. Before creating your feature branch, be sure to pull the latest changes from the remote using the command git pull upstream main.

git pull upstream main
git switch -c 8192-chore-es-convert-firefox-release-docs-to-markdown

NOTE: In older versions of Git, you will need to use git checkout -B 8192-chore-es-convert-firefox-release-docs-to-markdown.

The above command will create the feature branch and switch to it.

Running the conversion

Now you are ready to do the conversion. The Markdown conversion tool has a couple of modes you can run it in:

  • dry – Run the script, but do not actually write any output
  • keep – Run the script and do the conversion, but do not delete the HTML file
  • replace – Do the conversion and delete the HTML file

You will almost always start with a dry run.

NOTE: Before running the command below, ensure that you are in the root of the markdown repository.

yarn h2m mozilla/firefox/releases --locale es --mode dry

This is because the conversion tool will sometimes encounter situations where it does not know how to convert parts of the document. The markdown tool will produce a report with details of the errors encountered. For example:

# Report from 9/1/2022, 2:40:14 PM
## All unhandled elements
- li.toggle (4)
- dl (2)
- ol (1)
## Details per Document
### [/es/docs/Mozilla/Firefox/Releases/1.5](<>)
#### Invalid AST transformations
##### dl (101:1) => listItem

type: "text"
value: ""

### [/es/docs/Mozilla/Firefox/Releases/3](<>)
### Missing conversion rules
- dl (218:1)

The first line in the report states that the tool had a problem converting four instances of li.toggle. So, there are four list items with the class attribute set to toggle. In the larger report, there is this section:

### [/es/docs/Mozilla/Firefox/Releases/9](<>)
#### Invalid AST transformations
##### ol (14:3) => list

type: "html"
value: "<li class=\\"toggle\\"><details><summary>Notas de la Versión para Desarrolladores de Firefox</summary><ol><li><a href=\\"/es/docs/Mozilla/Firefox/Releases\\">Notas de la Versión para Desarrolladores de Firefox</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Complementos</summary><ol><li><a href=\\"/es/Add-ons/WebExtensions\\">Extensiones del navegador</a></li><li><a href=\\"/es/Add-ons/Themes\\">Temas</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Firefox por dentro</summary><ol><li><a href=\\"/es/docs/Mozilla/\\">Proyecto Mozilla (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Gecko\\">Gecko</a></li><li><a href=\\"/es/docs/Mozilla/Firefox/Headless_mode\\">Headless mode</a></li><li><a href=\\"/es/docs/Mozilla/JavaScript_code_modules\\">Modulos de código JavaScript (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/js-ctypes\\">JS-ctypes (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/MathML_Project\\">Proyecto MathML</a></li><li><a href=\\"/es/docs/Mozilla/MFBT\\">MFBT (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Projects\\">Proyectos Mozilla (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Preferences\\">Sistema de Preferencias (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/WebIDL_bindings\\">Ataduras WebIDL (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Tech/XPCOM\\">XPCOM</a></li><li><a href=\\"/es/docs/Mozilla/Tech/XUL\\">XUL</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Crear y contribuir</summary><ol><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions\\">Instrucciones para la compilación</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions/Configuring_Build_Options\\">Configurar las opciones de compilación</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions/How_Mozilla_s_build_system_works\\">Cómo funciona el sistema de compilación (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Source_Code/Mercurial\\">Código fuente de Mozilla</a></li><li><a href=\\"/es/docs/Mozilla/Localization\\">Localización</a></li><li><a href=\\"/es/docs/Mozilla/Mercurial\\">Mercurial (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/QA\\">Garantía de Calidad</a></li><li><a href=\\"/es/docs/Mozilla/Using_Mozilla_code_in_other_projects\\">Usar Mozilla en otros proyectos (Inglés)</a></li></ol></details></li>"

The problem is therefore in the file /es/docs/Mozilla/Firefox/Releases/9. In this instance, we can ignore it, as we will simply leave the HTML as-is in the Markdown. This is sometimes necessary, as the HTML we need cannot be accurately represented in Markdown. The part you cannot see in the output above is this portion of the file:

<div><section id="Quick_links">
    <li class="toggle">

If you do a search in the main content repo you will find lots of instances of this. In all those cases, you will see that the HTML is kept in place and this section is not converted to Markdown.

The next two problematic items are the two dl (description list) elements. These will require manual conversion using the guidelines in our documentation. The last item, the ol, is actually related to the li.toggle issue: those list items are wrapped in an ol, and because the tool is not sure what to do with the list items, it also complains about the ordered list.
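As a sketch of what that manual dl conversion looks like (the terms and definitions here are invented for illustration, following MDN’s Markdown flavor for description lists), an HTML description list such as `<dl><dt>dry</dt><dd>Run the script, but do not write any output</dd></dl>` becomes:

```
- dry
  - : Run the script, but do not write any output
- keep
  - : Run the script and do the conversion, but do not delete the HTML file
```

Each term becomes a list item, and its definition follows as a nested item starting with `- :`.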

Now that we understand what the problems are, we have two options: we can run the exact same command in replace mode, or we can use keep mode. I am going to go ahead and run the command with replace. While the previous command did not actually write anything to the translated content repository, when run with replace it will create a new file containing the converted Markdown and delete the index.html that resides in the same directory.

yarn h2m mozilla/firefox/releases --locale es --mode replace

Following the guidelines from the report, I will have to pay particular attention to the following files post conversion:

  • /es/docs/Mozilla/Firefox/Releases/1.5
  • /es/docs/Mozilla/Firefox/Releases/3
  • /es/docs/Mozilla/Firefox/Releases/9

After running the command, run the following at the root of the translated content repository folder, git status. This will show you a list of the changes made by the command. Depending on the number of files touched, the output can be verbose. The vital thing to keep an eye out for is that there are no changes to folders or files you did not expect.
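If the plain git status output gets verbose, the porcelain format gives one line per change, which is easier to scan for unexpected paths. A minimal sketch in a throwaway repository (the file names here are invented, not output from the real conversion):

```shell
# Scratch repo to demonstrate the output format; in practice you would run
# git status --porcelain at the root of translated-content after converting.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
mkdir -p files/es/mozilla/firefox/releases/1.5
echo "# converted" > files/es/mozilla/firefox/releases/1.5/index.md
# One status line per changed path (untracked directories are collapsed)
git status --porcelain
```

Each line starts with a two-character status code (?? untracked, D deleted, M modified), so a quick scan tells you whether anything outside the folder you converted was touched.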

Testing the changes

Now that the conversion has been done, we need to review the syntax and see that the pages render correctly. This is where the content repo is going to come into play. As with the markdown repository, we also need to create a .env file at the root of the content folder.


With this in place we can start the development server and take a look at the pages in the browser. To start the server, run yarn start. You should see output like the following:

❯ yarn start
yarn run v1.22.17
$ yarn up-to-date-check && env-cmd --silent cross-env CONTENT_ROOT=files REACT_APP_DISABLE_AUTH=true BUILD_OUT_ROOT=build yari-server
$ node scripts/up-to-date-check.js
[HPM] Proxy created: /  -> <>
CONTENT_ROOT: /Users/schalkneethling/mechanical-ink/dev/mozilla/content/files
Listening on port 5042

Go ahead and open http://localhost:5042, which will serve the homepage. To find the URL for one of the pages that was converted, open the Markdown file and look at the slug in the frontmatter. When you ran git status earlier, it printed the file paths to the terminal window. The file path will show you exactly where to find the file, for example, files/es/mozilla/firefox/releases/1.5/. Go ahead and open the file in your editor of choice.

In the frontmatter, you will find an entry like this:

slug: Mozilla/Firefox/Releases/1.5

To load the page in your browser, you will always prepend http://localhost:5042/es/docs/ to the slug. In other words, the final URL you will open in your browser will be http://localhost:5042/es/docs/Mozilla/Firefox/Releases/1.5. You can open the English version of the page in a separate tab to compare, but be aware that the content could be wildly different as you might have converted a page that has not been updated in some time.
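The prepending rule above can be sketched as a tiny shell snippet (the locale and slug values are just the ones from this example):

```shell
# Build the local preview URL from a page's frontmatter slug
locale="es"
slug="Mozilla/Firefox/Releases/1.5"
echo "http://localhost:5042/${locale}/docs/${slug}"
# prints http://localhost:5042/es/docs/Mozilla/Firefox/Releases/1.5
```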

What you want to look out for is anything on the page that looks like it is not rendering correctly. If you find something that looks incorrect, look at the Markdown file and see if you can find any syntax that looks wrong or completely broken. It can be extremely useful to use an editor such as VS Code with a Markdown extension and Prettier installed.

Even if the rendered content looks good, do take a minute and skim over the generated Markdown and see if the linters bring up any possible errors.

NOTE: If you see code like {{FirefoxSidebar}}, this is a macro call. There is not a lot of documentation yet, but these macros come from KumaScript in Yari.

A couple of other things to keep in mind: when you run into an error, before you spend a lot of time trying to understand exactly what the problem is or how to fix it, do the following:

  1. Look for the same page in the content repository and make sure the page still exists. If it was removed from the content repository, you can safely remove it from translated-content as well.
  2. Look at the same page in another language that has already been converted and see how they solved the problem.

For example, I ran into an error where a page I loaded simply printed the following in the browser: Error: 500 on /es/docs/Mozilla/Firefox/Releases/2/Adding_feed_readers_to_Firefox/index.json: SyntaxError: Expected "u" or ["bfnrt\\\\/] but "_" found. I narrowed it down to the following piece of code inside the Markdown:

{{ languages( { "en": "en/Adding\\_feed\\_readers\\_to\\_Firefox", "ja": "ja/Adding\\_feed\\_readers\\_to\\_Firefox", "zh-tw": "zh\\_tw/\\u65b0\\u589e\\u6d88\\u606f\\u4f86\\u6e90\\u95b1\\u8b80\\u5de5\\u5177" } ) }}

In French, it seems that they removed the page, but when I looked at zh-tw, it looks like they simply removed the macro call. I opted for the latter and just removed the macro call. This solved the problem, and the page rendered correctly. Once you have gone through all of the files you converted, it is time to open a pull request.

Preparing and opening a pull request

Start by getting all your changes ready for committing:

# the dot says add everything
git add .

If you run git status now you will see something like the following:

❯ git status
On branch 8192-chore-es-convert-firefox-release-docs-to-markdown
Changes to be committed: # this will be followed by a list of the files that have been added, ready for commit

Commit your changes:

git commit -m 'chore: convert Firefox release docs to markdown for Spanish'

Finally, you need to push the changes to GitHub so we can open the pull request:

git push origin 8192-chore-es-convert-firefox-release-docs-to-markdown

You can now head over to the translated content repository on GitHub, where you should see a banner asking whether you want to open a pull request. Click the “Compare & pull request” button and look over your changes on the next page to make sure nothing surprises you.

At this point, you can also add some more information and context around the pull request in the description box. It is also critical that you add a line as follows: “Fix #8192”. Substitute the number with the number of the issue you created earlier. We do this to link the issue and the pull request; once the pull request is merged, GitHub will automatically close the issue.

Once you are satisfied with the changes as well as your description, go ahead and click the button to open the pull request. At this stage GitHub will auto-assign someone from the appropriate localization team to review your pull request. You can now sit back and wait for feedback. Once you receive feedback, address any changes requested by the reviewer and update your pull request.

Once you are both satisfied with the end result, the pull request will be merged and you will have helped us get a little bit closer to 100% Markdown. Thank you! One final step remains though. Open the spreadsheet and update the relevant rows with a link to the pull request, and update the status to “In review”.

Once the pull request has been merged, remember to come back and update the status to done.

Reach out if you need help

If you run into any problems and have questions, please join our MDN Web Docs channel on Matrix.


Photo by Cristian Grecu on Unsplash

The post The 100% Markdown Expedition appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: The children’s book author behind #disabledandcute on her favorite corners of the internet

Keah Brown rests her head on her hand while posing for a photo. (Photo: Carissa King)

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later, and what sites and forums shaped them.

This month we chat with writer Keah Brown. She created the viral #disabledandcute hashtag and just published “Sam’s Super Seats,” her debut children’s book about a girl with cerebral palsy who goes back-to-school shopping with her best friends. She talks about celebrating the joys of young people with disabilities online, her love for the band Paramore, other pop culture “-mores” she’s obsessed with, and a deep dive into a TV show reboot that never was.

What is your favorite corner of the internet?

Film and TV chats with my friends, the corner of the internet that loves Drew Barrymore (because duh!), the new “A League of Their Own” discussions corner, the Paramore fandom because they are the best band in the world, and the rom-com corner of the internet.

What is an internet deep dive that you can’t wait to jump back into? 

The deep dive into why we won’t see “Lizzie McGuire” the reboot.

What is the one tab you always regret closing? 

The YouTube video I [opened] in a new tab so I wouldn’t lose it when the video I was watching ended.

What can you not stop talking about on the internet right now? 

My new children’s book, “Sam’s Super Seats,” Paramore, Drew Barrymore, “A League of Their Own” the series, getting ready to move into my first apartment, Meg Thee Stallion, and Renaissance, Beyonce’s [new] album.

What was the first online community you engaged with? 

The “Glee” fandom on Tumblr.

What articles and videos are in your Pocket waiting to be read/watched right now? 

Abbi Jacobson on “The Daily Show,” apartment tours on the Listed YouTube channel, “10 Renter-Friendly Fixes for Your First Apartment,” Tracee Ellis Ross on “Hart to Heart,” and Meghan Markle’s podcast interview with Serena Williams.

How can parents or other caretakers of young people with disabilities use the internet to fight stigma and celebrate their joys?

By being aware that the disabled people in their lives are people first and deserve to be treated as such. Ask them for permission before posting about them online. Fight for them like you would anyone else you love and treat them like fully realized human beings 🙂 What I think is also important is that we center disabled people themselves and give them the space to share their own stories and celebrate joy while fighting stigma, too.

If you could create your own corner of the internet what would it look like? 

The really cool thing is that it looks exactly like the one I have now. I talk about all my favorite things, starting with the “-mores”: Drew Barrymore, Mandy Moore, Paramore. Then there is what I’m watching in film and TV, then we have house tours, book promotion (buy “Sam’s Super Seats”!) and people who love cheesecake and pizza.

Keah Brown is a journalist, author and screenwriter. Keah is the creator of the viral hashtag #DisabledAndCute. Her work has appeared in Town & Country Magazine, Teen Vogue, Elle, Harper’s Bazaar, Marie Claire UK, and The New York Times, among other publications. Her essay collection “The Pretty One” and picture book “Sam’s Super Seats” are both out now. You can follow her on Twitter and Instagram.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post The children’s book author behind #disabledandcute on her favorite corners of the internet appeared first on The Mozilla Blog.

The Mozilla BlogThe Tech Talk

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful.

But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

Talk to your kids about online safety

Get tips

An illustration shows a silhouette of a child surrounded by emojis.

Concerned about screen time?

Here’s what experts are saying.

An illustration shows a digital pop-up box that reads: A back-to-school checklist for online safety: Set up new passwords. Check your devices' privacy settings. Protect your child's browsing information. Discuss parental controls with the whole family. Have the "tech talk."

A back-to-school checklist for online safety

This school year, make the best use of the internet while staying safe.

A child smiles while using a tablet computer.

Are parental controls the answer to keeping kids safe online?

There are a few things to consider before giving parental controls a go.

An illustration shows three columns containing newspaper icons along with social media icons.

5 ways to fight misinformation on your social feed

Slow your scroll with this guide that we created with the News Literacy Project and the Teens for Press Freedom.

Ten young people lean on a wall looking down at their phones.

A little less misinformation, a little more action

How do teens engage with information on social media? We asked them.

Keah Brown rests her head on her hand while posing for a photo.

The children’s book author behind #disabledandcute on her favorite corners of the internet

Keah Brown talks about celebrating the joys of young people with disabilities online.

The post The Tech Talk appeared first on The Mozilla Blog.

Mozilla Add-ons BlogHello from the new developer advocate

Hello extension developers, I’m Juhis, it’s a pleasure to meet you all. In the beginning of August I joined Mozilla and the Firefox add-ons team as a developer advocate. I expect us to see each other quite a lot in the future. My mom taught me to always introduce myself to new people so here we go!

My goal is to help all of you to learn from each other, to build great add-ons and to make that journey an enjoyable experience. Also, I want to be your voice to the teams building Firefox and add-ons tooling.

My journey into the world of software

I’m originally from Finland and grew up in a rather small town in the southwest. I got excited about computers from a very young age. I vividly remember a moment from my childhood when my sister created a digital painting of two horses, but since it was too large for the screen, I had to scroll to reveal the other horse. That blew my four-year-old mind and I’ve been fascinated by the possibilities of technology ever since.

After some years working in professional software development, I realized I could make the biggest impact by building communities and helping others become developers rather than just coding myself. Since then, I’ve been building developer communities, organizing meetups, teaching programming and serving as a general advocate for the potential of technology.

I believe in the positive empowerment that technology can bring to individuals all around the world. Whether it’s someone building something small to solve a problem in their daily life, someone building tools for their community, or being able to build and run your own business, there are so many ways we can leverage technology for good.

Customize your own internet experience with add-ons

The idea of shaping your own internet experience has been close to my heart for a long time. It can be something relatively simple like running custom CSS through existing extensions to make a website more enjoyable to use, or maybe it’s building big extensions for thousands of other people to enjoy. I’m excited to now be in a position where I can help others to build great add-ons of their own.

To understand better what a new extension developer goes through, I built an extension following our documentation and processes. I built it for fellow Pokemon TCG players who want a more visual way to read decklists online. Pokemon TCG card viewer adds a hover state to card codes it recognizes and displays a picture of the card on hover.

The best way to find me is on the Mozilla Matrix server, in the Add-ons channel. Come say hi!

The post Hello from the new developer advocate appeared first on Mozilla Add-ons Community Blog.

The Mozilla Thunderbird BlogThunderbird Tip: How To Manually Sort All Email And Account Folders

In our last blog post, you learned an easy way to change the order your accounts are displayed in Thunderbird. Today, we have a short video guide that takes your organizing one step further. You’ll learn how to manually sort all of the Thunderbird folders you have. That includes any newsgroup and RSS feed subscriptions too!

Have You Subscribed To Our YouTube Channel?

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. We’re also putting out more content and communication across various platforms to keep you informed. And, of course, to show you some great usage tips along the way.

To accomplish that, we’ve launched our YouTube channel to help you get the most out of Thunderbird. You can subscribe here. Help us reach 1000 subscribers by the end of September!

Video Guide: Manually Sort Your Thunderbird Folders

The short video below shows you everything you need to know:

We plan to produce many more tips just like this on our YouTube channel. We’ll also share them right here on the Thunderbird blog, so grab our RSS feed! (Need a guide for using RSS with Thunderbird? Here you go!)

Do you have a good Thunderbird tip we should turn into a video? Let us know in the comments, and thank you for using Thunderbird!

The post Thunderbird Tip: How To Manually Sort All Email And Account Folders appeared first on The Thunderbird Blog.

SeaMonkeySeaMonkey 2.53.14 Beta 1 is out!

Hi All,

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.14 beta 1.

As it is a beta, please check out the release notes at [1] and/or [2] and take it for a spin with a new profile (or a copy/backup of your production profile).

The updates will be forthcoming, specifically within the hour after I’ve pressed that shiny red button. Or was it the green one? 😛

Best Regards,


[1] –

[2] –

SUMO BlogWhat’s up with SUMO – August 2022

Hi everybody,

Summer is not a thing in my home country, Indonesia, but I’ve learned that taking some time off after a busy first half of the year is good for my well-being. I hope you had a chance to take a break this summer too.

We’re already past the halfway point of Q3, so let’s see what SUMO has been up to, with renewed excitement after the holiday season.

Welcome note and shout-outs

  • Thanks to Felipe for doing a short experiment on social support mentoring. This was helpful to understand what other contributors might need when they start contributing.
  • Thanks to top contributors for Firefox for iOS in the forum. We are in need of more iOS contributors in the forum, so your contribution is highly appreciated.
  • I’d like to give special thanks to a few contributors who have started contributing more to the KB these days: Denys, Kaie, Lisah933, jmaustin, and many others.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • We are now sharing social and mobile support stats regularly. This is an effort to make sure that contributors are updated on and exposed to both contribution areas. We know it’s not always easy to discover opportunities to contribute to social or mobile support since we use different tools for these contribution areas. Check out the latest update from last week.
  • The long-awaited work to fix the automatic function to shorten KB article link has been released to production. Read more about this change in this contributor thread and how you can help remove manual links that we added in the past when the functionality was broken.
  • Check out our post about the Back to School marketing campaign if you haven’t.

Catch up

  • Consider subscribing to Firefox Daily Digest if you haven’t to get daily updates about Firefox from across different platforms.
  • Watch the monthly community call if you haven’t. Learn more about what’s new in July! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out the following release notes from Kitsune from the past month:

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Jul 2022 7,325,189 -5.94%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Jul 2022 pageviews (*)
de 8.31%
zh-CN 7.01%
fr 5.94%
es 5.91%
pt-BR 4.75%
ru 4.14%
ja 3.93%
it 2.13%
zh-TW 1.99%
pl 1.94%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats


Top 5 forum contributors in the last 90 days: 

Social Support

Channel Total incoming conv Conv interacted Resolution rate
Jul 2022 237 251 75.11%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah K
  2. Christophe Villeneuve
  3. Felipe Koji
  4. Kaio Duarte
  5. Matt Cianfarani

Play Store Support

Channel Jul 2022
Total priority review Total priority review replied Total reviews replied
Firefox for Android 2155 508 575
Firefox Focus for Android 45 18 92
Firefox Klar Android 3 0 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Selim Şumlu
  • Felipe Koji
  • Tim Maks
  • Matt Cianfarani

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Open Policy & AdvocacyMozilla Meetups – The Long Road to Federal Privacy Protections: Are We There Yet?

Register Below!
Join us for a discussion about the need for comprehensive privacy reform and whether the political landscape is ready to make it happen.

The panel session will be immediately followed by a happy hour reception with drinks and light fare. 

Date and time: Wednesday, September 21st – panel starts @ 4:00PM promptly (doors @ 3:45pm)
Location: Wunder Garten, 1101 First St. NE, Washington, DC 20002

The post Mozilla Meetups – The Long Road to Federal Privacy Protections: Are We There Yet? appeared first on Open Policy & Advocacy.

hacks.mozilla.orgMerging two GitHub repositories without losing commit history


We are in the process of merging smaller example code repositories into larger parent repositories on the MDN Web Docs project. Although we knew that copying the files from one repository into the new one would lose commit history, we felt that this might be an acceptable trade-off. After all, we are not deleting the old repository but archiving it.

After having moved a few of these, we did receive an issue from a community member stating that it is not ideal to lose history while moving these repositories and that there could be a relatively simple way to avoid this. I experimented with a couple of different options and finally settled on a strategy based on the one shared by Eric Lee on his blog.

tl;dr: The approach is to use basic Git commands to apply the entire history of the old repo onto the new repo without needing special tooling.

Getting started

For the experiment, I used the sw-test repository that is meant to be merged into the dom-examples repository.

This is how Eric describes the first steps:

# Assume the current directory is where we want the new repository to be created
# Create the new repository

git init

# Before we do a merge, we need to have an initial commit, so we'll make a dummy commit

dir > deleteme.txt
git add .
git commit -m "Initial dummy commit"

# Add a remote for and fetch the old repo
git remote add -f old_a <OldA repo URL>

# Merge the files from old_a/master into new/master
git merge old_a/master

I could skip everything up to the git remote ... step as my target repository already had some history, so I started as follows:

git clone
cd dom-examples

Running git log on this repository, I see the following commit history:

commit cdfd2aeb93cb4bd8456345881997fcec1057efbb (HEAD -> master, upstream/master)
Merge: 1c7ff6e dfe991b
Date:   Fri Aug 5 10:21:27 2022 +0200

    Merge pull request #143 from mdn/sideshowbarker/webgl-sample6-UNPACK_FLIP_Y_WEBGL

    “Using textures in WebGL”: Fix orientation of Firefox logo

commit dfe991b5d1b34a492ccd524131982e140cf1e555
Date:   Fri Aug 5 17:08:50 2022 +0900

    “Using textures in WebGL”: Fix orientation of Firefox logo

    Fixes <>

commit 1c7ff6eec8bb0fff5630a66a32d1b9b6b9d5a6e5
Merge: be41273 5618100
Date:   Fri Aug 5 09:01:56 2022 +0200

    Merge pull request #142 from mdn/sideshowbarker/webgl-demo-add-playsInline-drop-autoplay

    WebGL sample8: Drop “autoplay”; add “playsInline”

commit 56181007b7a33907097d767dfe837bb5573dcd38
Date:   Fri Aug 5 13:41:45 2022 +0900

With the current setup, I could continue from the git remote command, but I wondered if the current directory contained files or folders that would conflict with those in the service worker repository. I searched around some more to see if anyone else had run into this same situation but did not find an answer. Then it hit me! I need to prepare the service worker repo to be moved.

What do I mean by that? I need to create a new directory in the root of the sw-test repo called service-worker/sw-test and move all relevant files into this new subdirectory. This will allow me to safely merge it into dom-examples as everything is contained in a subfolder already.
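As a quick aside, moving the files into a subdirectory like this does not orphan their history: Git can follow the files across the move. Here is a small throwaway sketch of my own (the file and directory names are just placeholders) showing that `git log --follow` still reaches the pre-move commits:

```shell
#!/usr/bin/env bash
# Sketch: history survives a move into a subdirectory
set -euo pipefail

# Throwaway identity so the commits work in any environment
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# One commit with the file at the repository root
echo 'console.log("hi");' > app.js
git add app.js
git commit -q -m "Add app.js at the root"

# Move it into a subdirectory, as we are about to do with sw-test
mkdir -p service-worker/sw-test
git mv app.js service-worker/sw-test/app.js
git commit -q -m "Move app.js into service-worker/sw-test"

# --follow walks through the rename, so both commits show up
git log --follow --oneline -- service-worker/sw-test/app.js
```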

To get started, I need to clone the repo we want to merge into dom-examples.

git clone
cd sw-test

Ok, now we can start preparing the repo. The first step is to create our new subdirectory.

mkdir service-worker
mkdir service-worker/sw-test

With this in place, I simply need to move everything in the root directory to the subdirectory. To do this, we will make use of the move (mv) command:

NOTE: Do not yet run any of the commands below at this stage.

# enable extendedglob for ZSH
set -o extendedglob
mv ^sw-test(D) service-worker/sw-test

The above command is a little more complex than you might think. It uses a negation syntax. The next section explains why we need it and how to enable it.

How to exclude subdirectories when using mv

While the end goal seemed simple, I am pretty sure I grew a small animal’s worth of grey hair trying to figure out how to make that last move command work. I read many StackOverflow threads, blog posts, and manual pages for the different commands with varying amounts of success. However, none of the initial set of options quite met my needs. I finally stumbled upon two StackOverflow threads that brought me to the answer.

To spare you the trouble, here is what I had to do.

First, a note. I am on a Mac using ZSH (since macOS Catalina, this is now the default shell). Depending on your shell, the instructions below may differ.

For new versions of ZSH, you use the set -o and set +o commands to enable and disable settings. To enable extendedglob, I used the following command:

# Yes, this _enables_ it
set -o extendedglob

On older versions of ZSH, you use the setopt and unsetopt commands.

setopt extendedglob

With bash, you can achieve the same using the following command:

shopt -s extglob

Why do you even have to do this, you may ask? Without this, you will not be able to use the negation operator I use in the above move command, which is the crux of the whole thing. If you do the following, for example:

mkdir service-worker
mv * service-worker/sw-test

It will “work,” but you will see an error message like this:

mv: rename service-worker to service-worker/sw-test/service-worker: Invalid argument

We want to tell the operating system to move everything into our new subfolder except the subfolder itself. We, therefore, need this negation syntax. It is not enabled by default because it could cause problems if file names contain some of the extendedglob patterns, such as ^. So we need to enable it explicitly.

NOTE: You might also want to disable it after completing your move operation.

Now that we know how and why we want extendedglob enabled, we move on to using our new powers.

NOTE: Do not yet run any of the commands below at this stage.

mv ^sw-test(D) service-worker/sw-test

The above means:

  • Move all the files in the current directory into service-worker/sw-test.
  • Do not try to move the service-worker directory itself.
  • The (D) option tells the move command to also move all hidden files, such as .gitignore, and hidden folders, such as .git.

NOTE: I found that if I typed mv ^sw-test and pressed tab, my terminal would expand the command to mv LICENSE app.js gallery image-list.js index.html service-worker star-wars-logo.jpg style.css sw.js. If I typed mv ^sw-test(D) and pressed tab, it would expand to mv .git .prettierrc LICENSE app.js gallery image-list.js index.html service-worker star-wars-logo.jpg style.css sw.js. This is interesting because it clearly demonstrates what happens under the hood. This allows you to see the effect of using (D) clearly. I am not sure whether this is just a native ZSH thing or one of my terminal plugins, such as Fig. Your mileage may vary.
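If you are following along in bash rather than ZSH, a rough equivalent of the negation pattern plus the (D) qualifier is extglob combined with dotglob. This is my own sketch with placeholder file names, not a command from the workflow above:

```shell
#!/usr/bin/env bash
# Sketch: bash equivalent of ZSH's `mv ^target(D) target` pattern
set -euo pipefail

workdir=$(mktemp -d)
cd "$workdir"

# Stand-in repository contents: two visible files and one hidden file
touch app.js index.html .prettierrc
mkdir -p service-worker/sw-test

# extglob enables the !(pattern) negation; dotglob makes globs match
# hidden files, which is the rough analogue of ZSH's (D) qualifier
shopt -s extglob dotglob

# Move everything except the target directory itself into the subdirectory
mv !(service-worker) service-worker/sw-test/

# Only service-worker should remain in the working directory
ls -A
```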

Handling hidden files and creating a pull request

While it is nice to be able to move all of the hidden files and folders like this, it causes a problem: because the .git folder is transferred into our new subfolder, our root directory is no longer seen as a Git repository.

Therefore, I will not run the above command with (D) but instead move the hidden files as a separate step. I will run the following command instead:

mv ^(sw-test|service-worker) service-worker/sw-test

At this stage, if you run ls it will look like it moved everything. That is not the case because the ls command does not list hidden files. To do that, you need to pass the -A flag as shown below:

ls -A

You should now see something like the following:

❯ ls -A
.git           .prettierrc    service-worker

Looking at the above output, I realized that I should not need to move the .git folder. All I needed to do now was to run the following command:

mv .prettierrc service-worker

After running the above command, ls -A will now output the following:

❯ ls -A
.git simple-service-worker

Time to do a little celebration dance 😁

We can move on now that we have successfully moved everything into our new subdirectory. However, while doing this, I realized I forgot to create a feature branch for the work.

Not a problem. I just run the command, git switch -C prepare-repo-for-move. Running git status at this point should output something like this:

❯ git status
On branch prepare-repo-for-move
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	deleted:    .prettierrc
	deleted:    LICENSE
	deleted:    app.js
	deleted:    gallery/bountyHunters.jpg
	deleted:    gallery/myLittleVader.jpg
	deleted:    gallery/snowTroopers.jpg
	deleted:    image-list.js
	deleted:    index.html
	deleted:    star-wars-logo.jpg
	deleted:    style.css
	deleted:    sw.js

Untracked files:
  (use "git add <file>..." to include in what will be committed)

no changes added to commit (use "git add" and/or "git commit -a")

Great! Let’s add our changes and commit them.

git add .
git commit -m 'Moved all source files into new subdirectory'

Now we want to push our changes and open a pull request.

Woop! Let’s push:

git push origin prepare-repo-for-move

Head over to your repository on GitHub. You should see a banner like “prepare-repo-for-move had recent pushes less than a minute ago” and a “Compare & pull request” button.

Click the button and follow the steps to open the pull request. Once the pull request is green and ready to merge, go ahead and merge!

NOTE: Depending on your workflow, this is the point to ask a team member to review your proposed changes before merging. It is also a good idea to look over the changes in the “Files changed” tab to ensure nothing is part of the pull request you did not intend. If any conflicts prevent your pull request from being merged, GitHub will warn you about these, and you will need to resolve them. This can be done directly on GitHub or locally and pushed up as a separate commit.

When you head back to the code view on GitHub, you should see our new service-worker subdirectory.

With that, our repository is ready to move.

Merging our repositories

Back in the terminal, you want to switch back to the main branch:

git switch main

You can now safely delete the feature branch and pull down the changes from your remote.

git branch -D prepare-repo-for-move
git pull origin main

Running ls -A after pulling the latest should now show the following:

❯ ls -A
.git       service-worker

Also, running git log in the root outputs the following:

commit 8fdfe7379130b8d6ea13ea8bf14a0bb45ad725d0 (HEAD -> gh-pages, origin/gh-pages, origin/HEAD)
Author: Schalk Neethling
Date:   Thu Aug 11 22:56:48 2022 +0200


commit 254a95749c4cc3d7d2c7ec8a5902bea225870176
Merge: f5c319b bc2cdd9
Author: Schalk Neethling
Date:   Thu Aug 11 22:55:26 2022 +0200

    Merge pull request #45 from mdn/prepare-repo-for-move

    chore: prepare repo for move to dom-examples

commit bc2cdd939f568380ce03d56f50f16f2dc98d750c (origin/prepare-repo-for-move)
Author: Schalk Neethling
Date:   Thu Aug 11 22:53:13 2022 +0200

    chore: prepare repo for move to dom-examples

    Prepping the repository for the move to dom-examples

commit f5c319be3b8d4f14a1505173910877ca3bb429e5
Merge: d587747 2ed0eff
Author: Ruth John
Date:   Fri Mar 18 12:24:09 2022 +0000

    Merge pull request #43 from SimonSiefke/add-navigation-preload

Here are the commands left over from where we diverted earlier on.

# Add a remote for and fetch the old repo
git remote add -f old_a <OldA repo URL>

# Merge the files from old_a/master into new/master
git merge old_a/master

Alrighty, let’s wrap this up. First, we need to move into the root of the project to which we want to move our project. For our purpose here, this is the dom-examples directory. Once in the root of the directory, run the following:

git remote add -f swtest

NOTE: The -f tells Git to fetch the remote branches. The swtest part is the name you give to the remote, so this could really be anything.

After running the command, I got the following output:

❯ git remote add -f swtest
Updating swtest
remote: Enumerating objects: 500, done.
remote: Counting objects: 100% (75/75), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 500 (delta 35), reused 45 (delta 15), pack-reused 425
Receiving objects: 100% (500/500), 759.76 KiB | 981.00 KiB/s, done.
Resolving deltas: 100% (269/269), done.
From <>
 * [new branch]      gh-pages        -> swtest/gh-pages
 * [new branch]      master          -> swtest/master
 * [new branch]      move-prettierrc -> swtest/move-prettierrc
 * [new branch]      rename-sw-test  -> swtest/rename-sw-test

NOTE: While we deleted the branch locally, this is not automatically synced with the remote, so this is why you will still see a reference to the rename-sw-test branch. If you wanted to delete it on the remote, you would run the following from the root of that repository: git push origin :rename-sw-test (if you have configured your repository “to automatically delete head branches”, this will be automatically deleted for you)

Only a few commands left.

NOTE: Do not yet run any of the commands below at this stage.

git merge swtest/gh-pages

Whoops! When I ran the above, I got the following error:

❯ git merge swtest/gh-pages
fatal: refusing to merge unrelated histories

But this is pretty much exactly what I do want, right? Refusing to merge unrelated histories is the default behavior of the merge command, but you can pass a flag to allow it:
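As an aside, the situation is easy to reproduce with two throwaway repositories that share no common ancestor. The following sketch (the repository names and paths are invented for illustration) shows --allow-unrelated-histories letting such a merge through while keeping both histories:

```shell
#!/usr/bin/env bash
# Sketch: merging two repositories that share no common ancestor
set -euo pipefail

# Throwaway identity so the commits work in any environment
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

root=$(mktemp -d)

# Target repository (stands in for dom-examples)
git init -q "$root/target"
echo "target" > "$root/target/a.txt"
git -C "$root/target" add a.txt
git -C "$root/target" commit -q -m "target: initial commit"

# Incoming repository (stands in for sw-test)
git init -q "$root/incoming"
echo "incoming" > "$root/incoming/b.txt"
git -C "$root/incoming" add b.txt
git -C "$root/incoming" commit -q -m "incoming: initial commit"

cd "$root/target"
git remote add -f incoming "$root/incoming"

# The default branch name differs across Git versions, so look it up
branch=$(git -C "$root/incoming" symbolic-ref --short HEAD)

# Without --allow-unrelated-histories this fails with
# "fatal: refusing to merge unrelated histories"
git merge -q --no-edit --allow-unrelated-histories "incoming/$branch"

# Both files are now present, and the log contains both histories
ls
git log --oneline
```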

git merge swtest/gh-pages --allow-unrelated-histories

NOTE: Why gh-pages? More often than not, the one you will merge here will be main but for this particular repository, the default branch was named gh-pages. It used to be that when using GitHub pages, you would need a branch called gh-pages that will then be automatically deployed by GitHub to a URL that would be something like

After running the above, I got the following:

❯ git merge swtest/gh-pages --allow-unrelated-histories
CONFLICT (add/add): Merge conflict in
Automatic merge failed; fix conflicts and then commit the result.

Ah yes, of course. Our current project and the one we are merging both contain a file with the same name at the root, so Git is asking us to decide what to do. If you open up the file in your editor, you will notice something like this:

<<<<<<< HEAD


There might be a number of these in the file. You will also see closing entries like this: >>>>>>> swtest/gh-pages. These markers highlight the conflicts that Git is not sure how to resolve. You could go through and clear these manually. In this instance, I just want the version at the root of the dom-examples repo, so I will clean up the conflicts or copy the content from GitHub.

As Git requested, we will add and commit our changes.

git add .
git commit -m 'merging sw-test into dom-examples'

The above resulted in the following output:

❯ git commit
[146-chore-move-sw-test-into-dom-examples 4300221] Merge remote-tracking branch 'swtest/gh-pages' into 146-chore-move-sw-test-into-dom-examples

If I now run git log in the root of the directory, I see the following:

commit 4300221fe76d324966826b528f4a901c5f17ae20 (HEAD -> 146-chore-move-sw-test-into-dom-examples)
Merge: cdfd2ae 70c0e1e
Author: Schalk Neethling
Date:   Sat Aug 13 14:02:48 2022 +0200

    Merge remote-tracking branch 'swtest/gh-pages' into 146-chore-move-sw-test-into-dom-examples

commit 70c0e1e53ddb7d7a26e746c4a3412ccef5a683d3 (swtest/gh-pages)
Merge: 4b7cfb2 d4a042d
Author: Schalk Neethling
Date:   Sat Aug 13 13:30:58 2022 +0200

    Merge pull request #47 from mdn/move-prettierrc

    chore: move prettierrc

commit d4a042df51ab65e60498e949ffb2092ac9bccffc (swtest/move-prettierrc)
Author: Schalk Neethling
Date:   Sat Aug 13 13:29:56 2022 +0200

    chore: move prettierrc

    Move `.prettierrc` into the siple-service-worker folder

commit 4b7cfb239a148095b770602d8f6d00c9f8b8cc15
Merge: 8fdfe73 c86d1a1
Author: Schalk Neethling
Date:   Sat Aug 13 13:22:31 2022 +0200

    Merge pull request #46 from mdn/rename-sw-test

Yahoooo! That is the history from sw-test now in our current repository! Running ls -A now shows me:

❯ ls -A
.git
.gitignore
LICENSE
abort-api
auxclick
canvas
channel-messaging-basic
channel-messaging-multimessage
drag-and-drop
fullscreen-api
htmldialogelement-basic
indexeddb-api
indexeddb-examples
insert-adjacent
matchmedia
media
media-session
mediaquerylist
payment-request
performance-apis
picture-in-picture
pointer-lock
pointerevents
reporting-api
resize-event
resize-observer
screen-wake-lock-api
screenleft-screentop
scrolltooptions
server-sent-events
service-worker
streams
touchevents
web-animations-api
web-crypto
web-share
web-speech-api
web-storage
web-workers
webgl-examples

And if I run ls -A service-worker/, I get:

❯ ls -A service-worker/
simple-service-worker
And finally, running ls -A service-worker/simple-service-worker/ shows:

❯ ls -A service-worker/simple-service-worker/
.prettierrc
LICENSE
app.js
gallery
image-list.js
index.html
star-wars-logo.jpg
style.css
sw.js

All that is left is to push to remote.

git push origin 146-chore-move-sw-test-into-dom-examples

NOTE: Do not squash merge this pull request, or else all commits will be squashed together as a single commit. Instead, you want to use a merge commit. You can read all the details about merge methods in their documentation on GitHub.

After you merge the pull request, go ahead and browse the commit history of the repo. You will find that the commit history is intact and merged. o/\o You can now go ahead and either delete or archive the old repository.
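A handy way to double-check that history really survived a move is git log --follow, which keeps walking past renames. Here is a minimal sketch in a throwaway scratch repository (the file and directory names are invented for the demo, echoing the ones used in this post):

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name demo

# Commit a file at the top level, then move it into a subdirectory
echo 'demo' > sw.js
git add .
git commit -qm "add sw.js"

mkdir -p service-worker/sw-test
git mv sw.js service-worker/sw-test/sw.js
git commit -qm "move sw.js into subdirectory"

# --follow keeps walking past the rename; without it the log
# would stop at the move commit
git log --oneline --follow -- service-worker/sw-test/sw.js
```

The log lists both commits, so the file's pre-move history is still reachable after the merge.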

At this point the remote we configured for the merge no longer serves a purpose, so we can safely remove it.

git remote rm swtest
In Conclusion

The steps to accomplish this task are then as follows:

# Clone the repository you want to merge (replace the placeholder
# with your repository's URL)
git clone <sw-test-repository-url>
cd sw-test

# Create your feature branch
git switch -C prepare-repo-for-move
# NOTE: With older versions of Git you can run:
# git checkout -b prepare-repo-for-move

# Create directories as needed. You may only need one, not two as
# in the example below.
mkdir service-worker
mkdir service-worker/sw-test

# Enable extendedglob so we can use negation
# The command below is for modern versions of ZSH. See earlier
# in the post for examples for bash and older versions of ZSH
set -o extendedglob

# Move everything except hidden files into your subdirectory,
# also, exclude your target directories
mv ^(sw-test|service-worker) service-worker/sw-test

# Move any of the hidden files or folders you _do_ want
# to move into the subdirectory
mv .prettierrc service-worker

# Add and commit your changes
git add .
git commit -m 'Moved all source files into new subdirectory'

# Push your changes to GitHub
git push origin prepare-repo-for-move

# Head over to the repository on GitHub, open and merge your pull request
# Back in the terminal, switch to your `main` branch
git switch main

# Delete your feature branch
# This is not technically required, but I like to clean up after myself :)
git branch -D prepare-repo-for-move
# Pull the changes you just merged
git pull origin main

# Change to the root directory of your target repository
# If you have not yet cloned your target repository, change
# out of your current directory
cd ..

# Clone your target repository (replace the placeholder with its URL)
git clone <dom-examples-repository-url>
# Change directory
cd dom-examples

# Create a feature branch for the work
git switch -C 146-chore-move-sw-test-into-dom-examples

# Add your merge target as a remote (replace the placeholder with
# the URL of the repository being merged in)
git remote add -f swtest <sw-test-repository-url>

# Merge the merge target and allow unrelated history
git merge swtest/gh-pages --allow-unrelated-histories

# Add and commit your changes
git add .
git commit -m 'merging sw-test into dom-examples'

# Push your changes to GitHub
git push origin 146-chore-move-sw-test-into-dom-examples

# Open the pull request, have it reviewed by a team member, and merge.
# Do not squash merge this pull request, or else all commits will be
# squashed together as a single commit. Instead, you want to use a merge commit.

# Remove the remote for the merge target
git remote rm swtest
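For readers following along in bash rather than zsh, the exclusion move from the script above can be sketched with bash's extglob option instead of zsh's extendedglob. This is a demo in a scratch directory with invented stand-in file names; like the zsh version, the plain glob skips hidden files:

```shell
shopt -s extglob
demo=$(mktemp -d)
cd "$demo"
touch index.html sw.js app.js   # stand-ins for the repository's files
mkdir -p service-worker/sw-test

# !(...) is bash's negation glob; we exclude the target directory
# itself (and sw-test, mirroring the zsh example) so mv does not
# try to move a directory into itself
mv !(sw-test|service-worker) service-worker/sw-test
```

After the move, service-worker/sw-test contains the three files and nothing else was touched.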

Hopefully, you now know how to exclude files and directories when using the mv command, how to set and view shell options, and how to merge the file contents of one Git repository into another while preserving the entire commit history, using only basic Git commands.
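The whole recipe can also be exercised end to end on two throwaway repositories. The sketch below condenses the steps above; the repository names (source-repo, target-repo) and file names are invented for the demo, and git init -b needs Git 2.28 or newer:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the repository being merged in (sw-test in this post)
git init -q -b main source-repo
cd source-repo
git config user.email demo@example.com
git config user.name demo
echo 'demo' > sw.js
git add .
git commit -qm "source: add sw.js"

# Move everything into the subdirectory it will occupy after the merge
mkdir -p service-worker/sw-test
git mv sw.js service-worker/sw-test/
git commit -qm "source: move files into subdirectory"
cd ..

# Stand-in for the target repository (dom-examples in this post)
git init -q -b main target-repo
cd target-repo
git config user.email demo@example.com
git config user.name demo
echo '# examples' > README.md
git add .
git commit -qm "target: initial commit"

# Merge the unrelated history, exactly as described above
git remote add -f source ../source-repo
git merge --allow-unrelated-histories -m "merge source-repo" source/main
git remote rm source

# Both histories are now visible in a single log
git log --oneline
```

The final log shows the merge commit plus the original commits from both repositories, confirming nothing was lost.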

The post Merging two GitHub repositories without losing commit history appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogSlow your scroll: 5 ways to fight misinformation on your social feed

An illustration shows three columns containing newspaper icons along with social media icons.<figcaption>Credit: Nick Velazquez / Mozilla</figcaption>

The news is overwhelming. Attention spans are waning. Combine those with social media feeds that are optimized for endless scrolling, and we get an internet where misinformation thrives. 

In many ways, consuming news has become a social act. We get to share what we’re reading and thinking through social media. Other people respond with their own thoughts and opinions. Algorithms pick up on all of this activity, and soon enough, our feeds feed us what to consume next – one after another. While it could be actual news and accurate information, often, it’s an opinionated take, inaccuracy or even propaganda. 

Of course, the internet also connects us with reliable sources. But when it comes to social media, it becomes a matter of whether or not we actually stop scrolling and take the time to verify what we’re seeing and hearing. So, how can we fight misinformation in our never-ending feeds? Consider these five tips.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

1. Filter out the aesthetics

Cool infographic catch your eye? Know that it’s probably designed to do just that: grab our attention. Same with content from creators we love. One day they’re dancing, the next they’re giving us health advice. Before taking what we see and hear at face value, we should ask ourselves the 5 Ws:

  • Who is posting? Are they the original source of the information? If not, who is?
  • What is the subject of the post? Is it the source’s expertise or are they relaying something they experienced first-hand?
  • When was it posted? Is the information still relevant today, or have circumstances changed?
  • If it’s an image or a video, where is the event that’s depicted located?
  • Why did they post it? Are they trying to sell you something or gain your support in any way?

2. If something sparks emotion, take a beat

Shocking images and videos can spread quickly on social media. It doesn’t mean we can’t trust them, but it does mean that stakes are higher when they turn out to be misleading or manipulated. 

Before hitting that like or share button, consider what might happen if that turns out to be the case. How would sharing false information affect us, other people or the larger world? Emotions can cloud our judgment, especially when a topic feels personal, so just taking a moment to let our critical thinking kick in can often do the trick.

3. Know when it’s time to dig deeper

There can be obvious signs of misinformation. Think typos, grammatical errors and clear alteration of images or videos. But many times, it’s hard to tell. Is it a screenshot of an article with no link, or footage of a large protest? Does the post address a polarizing topic? 

It might even take an expert like an investigative journalist, fact-checker or researcher to figure out whether a piece of media has been manipulated or if a post is the product of a sophisticated disinformation campaign. That’s when knowing how to find experts’ work — trustworthy sources — comes in handy. 

4. Report misinformation

If you’ve determined that something is false, report it in the app. Social media companies often rely on users to flag misleading and dangerous content, so take an extra but impactful step to help make sure others don’t fall for misinformation. 

5. Feed your curiosity – outside the feed

Real talk: Our attention spans are getting shorter, and learning about the world through quick, visual content can be more entertaining than reading. That’s OK! Still, we should give ourselves some time to explore what piques our interests outside of our social media apps.

Hear something outrageous? Look up news articles and learn more, maybe you can even do something about it. Concerned about vaccines, a pandemic or another public health emergency? Educate yourself and see what your local health officials are saying. Feel strongly about a topic everyone’s talking about online? Start a conversation about it in real life. Our screens give us a window to the larger world, but looking up to notice what’s right in front of us can be pretty great too. 

This guide was created in partnership with the News Literacy Project and the Teens for Press Freedom. The News Literacy Project, a nonpartisan education nonprofit, is building a national movement to advance the practice of news literacy throughout American society, creating better informed, more engaged and more empowered individuals — and ultimately a stronger democracy. The Teens for Press Freedom is a national, youth-led organization dedicated to promoting freedom of the press and factual literacy among teens.

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

The post Slow your scroll: 5 ways to fight misinformation on your social feed appeared first on The Mozilla Blog.

The Mozilla BlogFirefox Presents: Feeling alive with the ‘Stoke King’

If you could use a little hyping up to go outside, look no further than Wade Holland’s social media feeds. A former competitive skier from Montana, Holland encourages people to find their “stoke” – whether that’s by going on a mountain bike ride, rollerblading, or just feeling the sun on your skin.

“It can be finding a little park right behind your house and singing and dancing in it,” Holland said. “You don’t have to hike Everest. You can do whatever elevates your stoke!”

Now based in Los Angeles, the 34-year-old content creator calls himself a “stoke king.” His vibe is that of a very enthusiastic personal trainer, except you’ll see his outfit from a mile away, the gym is nature, and he’s training you to amp your zest for life all the way up to 11. 

Holland is his own personal success story. Years of injuries made him rethink his goal of becoming a professional skier. While filming a backcountry skiing video with a crew at 21, he flew about 60 feet and landed on his hip on a rock, shattering his femur. He had to be rescued by helicopter and taken into surgery, during which a titanium rod was placed in his leg. 

“I almost didn’t make it back from that,” Holland said. 

While the injury didn’t stop him from being active outdoors, he had to scale back. 

“It made me realize that maybe what I’m better at is getting other people excited about what I love so much,” he said. “That led me to a path of creating content that helps people get to a destination and feel good about themselves doing it.”

Holland’s mission became convincing people that anyone can go outside and enjoy nature, wherever they are and whatever their ability. No sleek cycling suit, surfboard or ski poles needed.

After years of consistently producing content, his ability to get people just as excited as he is has paid off. Wade’s motivational adventure posts have drawn 38,500 followers on TikTok and 213,000 on Instagram, where he met his partner Abby Wren, who’s a makeup content creator. 

Wade Holland holds up his hands, wearing gloves that read "stoked."<figcaption>Photo: Nita Hong for Mozilla</figcaption>

Holland had been booked to host an event  in Victoria, Canada. Always on the lookout for opportunities to collaborate, he searched #contentcreators and found Wren, a fellow Montana native. He asked her if she wanted to meet up. She agreed and asked to meet in Vancouver.

But the ferry wasn’t running that day, and sea planes were fully booked. Holland, not wanting to miss the chance to meet Wren, persuaded a helicopter company to help.

“I said, ‘Hey, this is kind of wild, but I’m trying to meet this woman who could be my future wife.’ I showed them a picture of Abby and told them that if they let me get on this helicopter, I’ll make them a 30-second video,” Wade recalled. “They said, ‘Wow, we’ve never been pitched that idea, this seems so outlandish. But this is going to be a hell of a story. Get on.’”

Holland and Wren have been together ever since, and they plan on getting married next year. His life changed because of his excitement to meet a woman he’s never met, and he got others to feel as thrilled as he was. 

“Each day I’m reminded how much life is a gift,” Holland said. “That it’s my responsibility to squeeze the most out of every day I have on this planet, bring my passion and enthusiasm to connect with my community online, and inspire them to get outside and stay stoked.”

Firefox is exploring all the ways the internet makes our planet an awesome place. Almost everything we do today ties back to the online world in some way — so, join us in highlighting the funny, weird, inspiring and courageous stories that remind us why we love the world wide web.

Wade Holland smiles at the camera.

Get the browser that makes a difference

Download Firefox

The post Firefox Presents: Feeling alive with the ‘Stoke King’ appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogWe Asked AI To Create These Beautiful Thunderbird Wallpapers

The buzz around AI-generated artwork continues to grow with each passing week. As machine-learning-driven AI systems like DALL·E 2, Midjourney, and Stable Diffusion continue to evolve, some truly awe-inspiring creations are being unleashed onto the world. We wanted to tap into that creative energy to produce some unique desktop wallpapers for the Thunderbird community!

So, we fed Midjourney the official Thunderbird logo and a series of descriptive text prompts to produce the stunning desktop wallpapers you see below. (Can you spot which one is also inspired by our friends at Firefox?)

Dozens of variations and hundreds of images later, we narrowed it down to four designs. Aside from adding a small Thunderbird watermark in the lower corners of each wallpaper, these images are exactly as the Midjourney AI produced them.

View And Download The Thunderbird Wallpapers

We did take the liberty of upscaling each image to UltraHD resolution, meaning they’ll look fantastic even on your 4K monitors. And of course, on your 1080p or 1440p panels as well.

Just click each image below to download the full-resolution file.

If you love them, share this page and tell people about Thunderbird! And if you end up using them as your PC wallpaper, send us a screenshot on Mastodon or Twitter.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post We Asked AI To Create These Beautiful Thunderbird Wallpapers appeared first on The Thunderbird Blog.

The Mozilla BlogA little less misinformation, a little more action

Ten young people lean on a wall looking down at their phones.<figcaption>Credit: Nick Velazquez / Mozilla</figcaption>

As each generation comes of age, they challenge the norms that came before them. If you were to ask most people their go-to way to search, they would mention a search engine. But for Gen Z, TikTok has become one of the most popular ways to find information.

Adrienne Sheares, a social media strategist and a millennial who grew up relying on search engines, had difficulty grasping the habit. So, she spoke with a small group of Gen Zers and reported what she heard in a recent Twitter thread.

Among her learnings: Young people are drawn to content TikTok curates for them, they prefer watching quick videos over reading, and they know misinformation exists and “will avoid content on the platform that can easily be false.” Sheares’ thread went viral. Her curiosity resonated, especially among people whose habits are very different from Gen Z’s.  

As part of our mission at Mozilla, we’re working to support families in having a healthy relationship with the internet. That includes an online experience where young people are equipped to cut through the noise – including misinformation. So we wanted to learn more about how Gen Z consumes the news, and how families can encourage curiosity about current events without shutting out social media. After all, while it may be rife with misinformation, it’s still an essential platform for many teens to connect with their peers.

We spoke with members of Teens for Press Freedom, a youth-led organization that advocates for news literacy among teenagers. We asked Sofia, Agatha, Charlotte, Eloise and Kevin – who are all in their teens – about how they engage with information on social media, their concerns about algorithms and how we can help Gen Zers fight misinformation. Here’s what they said. 

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

Gen Zers are vocal about their values

The way we consume news has become intrinsically social. People start sharing the news they’re consuming because that’s what you do on Instagram and other platforms. People say, “Hey, I’m reading this and therefore, I fit into this educated part of political American life. I have a real opinion that’s very valid.” Everyone wants to feel like they’re part of that group.


Agatha, co-director of Teens for Press Freedom, first took notice of how news spreads on Instagram in 2020, when she was 14. 

“There was this post about Palestine and Israel that was incredibly antisemitic,” Agatha recalled. “It was sort of convincing people that they should be antisemitic. That obviously isn’t right. I’m Jewish, and I felt like the post associated Jewish people with the actions of Israel’s government. That felt like misinformation because I didn’t do anything. It seemed to blame people who have never even lived in Israel.”

She started seeing more and more posts with misinformation about other issues, including COVID-19, the Black Lives Matter protests and violence against Asian Americans. “People were sharing them because it looked cool, like they were doing the right thing by spreading these infographics and letting their thousands of followers know about these incidents,” Agatha said.

Many young people want to publicly express their values. However, they run into a problem in the way they do it. 

“People weren’t making sure that the information they were spreading was actually correct and not just something somebody had written, copied into a graphic and sent it out to the world,” Agatha said. 

Charlotte, who co-founded Teens for Press Freedom and is now an incoming freshman at Dartmouth College, said many people fall into a trap of “virtue signaling.”

“The way we consume news has become intrinsically social,” Charlotte said. “People start sharing the news they’re consuming because that’s what you do on Instagram and other platforms. People say, ‘Hey, I’m reading this and therefore, I fit into this educated part of political American life. I have a real opinion that’s very valid.’ Everyone wants to feel like they’re part of that group.”

The infinite feed has shaped Gen Z’s online habits

There’s something about the endless scroll that is so compelling to people.


Facebook launched in 2004, YouTube in 2005, Twitter in 2006, Instagram in 2010 and Snapchat in 2011. Millennials came of age as those platforms exploded. Gen Zers – those born after 1996, or people 25 and younger, as classified by the Pew Research Center – don’t remember a time when the internet wasn’t a major means of personal communication and media consumption.

Social media feeds favor information presented succinctly, so users can quickly move on to the next post one after another. TikTok, launched in 2016, has “hacked that algorithm so well,” Charlotte said. “Now, everyone’s using it. There’s YouTube Shorts, Instagram Reels, Netflix Fast Laughs. There’s something about the endless scroll that is so compelling to people. That just invites us to spend hours and hours learning about the world in that way.”

Teens today have lived most of their lives in that world, and it has affected how they consume the news.

Short attention spans fuel misinformation

When I’m listening to music, I can’t get myself to sit through a full song without skipping to the next one. Consuming things is just what we’re programmed to do.


Many teens know how to confirm facts through resources on the internet. That’s thanks to ongoing efforts by educators who include verifying information in their lesson plans. 

Kevin, workshop team director at Teens for Press Freedom, recently saw a post on Instagram purportedly about a California bill that would allow late-term abortions. “I looked it up because I was curious,” he said. He quickly learned that the law doesn’t actually propose that. 

The issue, Kevin said, is taking the time to fact-check. 

“We’re a generation constantly fed and fed and fed and given things to consume,” said Eloise, advocacy director at Teens for Press Freedom. “Our attention spans are significantly lower than generations before us. When I’m listening to music, I can’t get myself to sit through a full song without skipping to the next one. Consuming things is just what we’re programmed to do.”

That may be why many Gen Zers prefer watching short videos to learn information instead of reading articles.

“People feel like reading the news is not something to prioritize when they can just look at headlines,” Agatha said. “A lot of newspapers have an audio link now so people listen to it instead. Or it’ll say five-minute read, and people will take five minutes to read it. But they don’t want to spend 10, 20 minutes informing themselves on what’s happening to the world.”

News events become more engaging on social media with flashy imagery and content that highlights the outrageous. While this means platforms have become a breeding ground for misinformation, there’s also a silver lining: Younger generations have become more motivated than ever to engage in issues they care about.

Teens are aware of the power of algorithms

We find that [algorithms are] kind of abusing our personal information.


While many Gen Zers feel equipped to figure out what’s real or not on social media, algorithms that feed users content curated to each individual are hurting their ability to slow down and choose what they consume. 

“A lot of misinformation are half-truths, like it’s almost believable enough that you can accept it without doing any extra research,” said Sofia, a high school junior and co-director of Teens for Press Freedom. “You go to TikTok to be entertained, and if that entertainment is inundated with misleading information, you’re consuming it without knowing you’re consuming it.”

The teens expressed concern about algorithm-based technologies being tested on young people. Kevin sees it as “abusing their personal information.” Being fed posts based on each person’s interests can create a distorted ecosystem of content that includes misleading, even manipulative, information.

“You’re sucked into this world of people you don’t know, and you see all these different ideas and things that are your interests, and you spend hours and hours on there,” Agatha said. “Their ideas sort of become yours. Your opinion then becomes TikTok’s opinion and vice versa.”

Sofia said this has contributed to the loss of productive conversation around politics: “Algorithms are not only creepy. It’s really damaging not just to the individual but to the political situation in the United States. People are only seeing content that aligns with their beliefs.”

Charlotte said, “There’s this rhetoric about how Gen Z is the most informed generation because of social media, and in many ways that’s true. But social media isn’t really the great democratizer. There’s [also] a lot we don’t know because of these algorithms.”

There are ways to help younger generations fight misinformation

Rather than being talked at, [teens] can talk to each other about issues.


While education about trustworthy sources needs to continue through school, the group said we need to expand the conversation to social media. 

“A lot of people our age think that being critical of sources is something school-related,” Kevin said. “People will say something like, ‘I saw this on TikTok,’ and then, you know, very non-reluctantly quote social media as a source of information.”

Applying the process of verifying information on social media means facilitating discussions among people who consume content in similar ways. 

“Rather than being talked at, they can talk to each other about issues,” Sofia said. “If they’re convinced by someone their own age that what they’re experiencing is not something that they alone have to go through, or that they alone have to figure out a solution, that makes the whole thing a lot easier to confront.”

For parents, this can mean finding peer-to-peer resources for their kids like Teens for Press Freedom’s misinformation workshops. Families can also have real conversations with their children about their values and issues they care about, encouraging curiosity instead of avoiding complicated topics. 

Ultimately, adults can use their power to support efforts to make the internet a better place – one where technology doesn’t use children’s data against them. Young people will tell us what they need if we ask. We can’t let algorithms do that work for us.  

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

The post A little less misinformation, a little more action appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogThunderbird Tip: Rearrange The Order Of Your Accounts

One of Thunderbird’s strengths is managing multiple email accounts, newsgroup accounts, and RSS feed subscriptions. But how do you display those accounts in the order YOU want? It’s super easy, and our new Thunderbird Tips video (viewable below) shows you how in less than one minute!

But First: Our YouTube Channel!

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. But we’re also trying to put out more content across various platforms to keep you informed — and maybe even entertained!

To that end, we are relaunching our YouTube channel with a forthcoming new podcast, and a series of tips and tricks to help you get the most out of Thunderbird. You can subscribe here. Help us reach 1000 subscribers by the end of August!

Bonus accessibility tips:

1) The keyboard shortcut for this command is ALT + ⬆/⬇ (OPTION + ⬆/⬇ on macOS).
2) Account Settings can also be accessed via the Spaces toolbar, the App Menu, and Account Central.

As always, thanks for using Thunderbird! And thanks for making Thunderbird possible with your support and donations.

The post Thunderbird Tip: Rearrange The Order Of Your Accounts appeared first on The Thunderbird Blog.

Open Policy & AdvocacyIt’s Time to Pass U.S. Federal Privacy Legislation

Despite being a powerhouse of technology and innovation, the U.S. lags behind global counterparts when it comes to privacy protections. Everyday, people face the real possibility that their very personal information could fall into the hands of third parties seeking to weaponize it against them.

At Mozilla, we strive to not only empower people with tools to protect their own privacy, but also to influence other companies to adopt better privacy practices. That said, we can’t solve every problem with a technical fix or rely on companies to voluntarily prioritize privacy.

The good news? After decades of failed attempts and false starts, real reform may finally be on the horizon. We’ve recently seen more momentum than ever for policy changes that would provide meaningful protections for consumers and more accountability from companies. It’s time that we tackle the real-world harms that emerge as a result of pervasive data collection online and abusive privacy practices.

Strong federal privacy legislation is critical in creating an environment where users can truly benefit from the technologies they rely on without paying the premium of exploitation of their personal data. Last month, the House Committee on Energy & Commerce took the important step of voting the bipartisan American Data Privacy and Protection Act (ADPPA) out of committee and advancing the bill to the House floor. Mozilla supports these efforts and encourages Congress to pass the ADPPA.

Stalling on federal policy efforts would only hurt American consumers. We look forward to continuing our work with policymakers and regulators to achieve meaningful reform that restores trust online and holds companies accountable. There’s more progress than ever before towards a solution. We can’t miss this moment.

The post It’s Time to Pass U.S. Federal Privacy Legislation appeared first on Open Policy & Advocacy.

The Mozilla BlogHow Firefox’s Total Cookie Protection and container extensions work together

When we recently announced the full public roll-out of Firefox Total Cookie Protection — a new default browser feature that automatically confines cookies to the websites that created them, thus eliminating the most common method that sites use to track you around the web — it raised a question: Do container extensions like Mozilla’s Facebook Container and Multi-Account Containers still serve a purpose, since they similarly perform anti-tracking functions by suppressing cookie trails?

In short, yes. Container extensions offer additional benefits even beyond the sweeping new privacy enhancements introduced with Firefox Total Cookie Protection.

Total Cookie Protection + container extensions = enhanced anti-tracking 

Total Cookie Protection isolates cookies from each website you visit, so Firefox users now receive comprehensive cookie suppression wherever they go on the web. 

However, Total Cookie Protection does not isolate cookies from different open tabs under the same domain. So for instance, if you have Google Shopping open in one tab, Gmail in another, and Google News in a third, Google will know you have all three pages open and connect their cookie trails. 

<figcaption>Total Cookie Protection creates a separate cookie jar for each website you visit. (Illustration: Meghan Newell)</figcaption>

But with a container extension, you can isolate cookies even within parts or pages of the same domain. You could have Gmail open in one container tab and Google Shopping and News in other containers (for instance, under different accounts) and Google will be oblivious to their relation. 

Beyond this added privacy protection, container extensions are most useful as an easy means of separating different parts of your online life (e.g. personal, work) within the same browser. 

A couple reasons you might want Multi-Account Containers installed on Firefox… 

  • Avoid logging in and out of different accounts under the same web platform; for example, with containers you could have separate instances of Slack open at the same time — one for work, another for friends. 
  • If multiple family members or roommates share Firefox on one computer, each person can easily access their own container with a couple clicks.

While, technically, you can create a Facebook container within Multi-Account Containers, the Facebook Container extension is intended to provide a simple, targeted solution for the many Facebook users concerned about the pervasive ways the social media behemoth tracks you around the web. 

Facebook tracks your online moves outside of Facebook through the various widgets you find embedded ubiquitously around the web (e.g. “Like” buttons or Facebook comments on articles, social share features, etc.). The convenience of automatic sign-in when you visit Facebook comes from cookies. But that convenience carries a steep privacy cost: those same cookies can tell Facebook about any page you visit that includes one of its embedded features. 

But with Facebook Container installed on Firefox, you maintain the convenience of automatic Facebook sign-in while cutting off the cookie trail to other sites you visit outside of Facebook. 

So if you want superior anti-tracking built right into your browser, plus the enhanced privacy protections and organizational convenience of containers, install a container extension on Firefox and rest easy knowing your cookie trails aren’t exposed. 

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

The post How Firefox’s Total Cookie Protection and container extensions work together appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogThunderbird Time Machine: Windows XP + Thunderbird 1.0

Let’s step back into the Thunderbird Time Machine, and transport ourselves back to November 2004. If you were a tech-obsessed geek like me, maybe you were upgrading Windows 98 to Windows XP. Or playing Valve’s legendary shooter Half-Life 2. Maybe you were eagerly installing a pair of newly released open-source software applications called Firefox 1.0 and Thunderbird 1.0…

As we work toward a new era of Thunderbird, we’re also revisiting its roots. Because the entirety of Thunderbird’s releases and corresponding release notes have been preserved, I’ve started a self-guided tour of Thunderbird’s history. Read the first post in this series here:

“Thunderbirds Are GO!”

Before we get into the features of Thunderbird 1.0, I have to call out the endearing credits reel that could be viewed from the “About Thunderbird” menu. You could really feel that spark of creativity and fun from the developers:

<figcaption>Yes, we have a YouTube channel! Subscribe for more. </figcaption>

Windows XP + 2 New Open-Source Alternatives

Thunderbird 1.0 launched in the prime of Windows XP, and it had a companion for the journey: Firefox 1.0! Though both of these applications had previous versions (with different logos and different names), their official 1.0 releases were milestones. Especially because they were open-source and represented quality alternatives to existing “walled-garden” options.

Thunderbird 1.0 and Firefox 1.0 installers on Windows XP<figcaption>Thunderbird 1.0 and Firefox 1.0 installers on Windows XP</figcaption>

(Thunderbird was, and always has been, completely free to download and use. But the internet was far less ubiquitous than it is now, so we offered to mail users within the United States a CD-ROM for $5.95.)

Without a doubt, Mozilla Thunderbird is a very good e-mail client. It sends and receives mail, it checks it for spam, handles multiple accounts, imports data from your old e-mail application, scrapes RSS news feeds, and is even cross-platform.

Thunderbird 1.0 Review | Ars Technica

Visually, it prided itself on having a pretty consistent look across Windows, Mac OS X, and Linux distributions like CentOS 3.3 or Red Hat. And the iconography was updated to be more colorful and playful, in a time when skeuomorphic design reigned supreme.

Groundbreaking Features In Thunderbird 1.0

Thunderbird 1.0 launched with a really diverse set of features. It offered add-ons (just like its brother Firefox) to extend functionality. But it also delivered some cutting-edge stuff like:

  • Adaptive junk mail controls
  • RSS integration (this was only 2 months after podcasts first debuted)
  • Effortless migration from Outlook Express and Eudora
  • A Global Inbox that could combine multiple POP3 email accounts
  • Message Grouping (by date, sender, priority, custom labels, and more)
  • Automatic blocking of remote image requests from unknown senders
Thunderbird 1.0 About Page<figcaption>Thunderbird 1.0 About Page</figcaption>

Feeling Adventurous? Try It For Yourself!

If you have a PowerPC Mac, a 32-bit Linux distribution, or any real or virtualized version of Windows after Windows 98, you can take your own trip down memory lane. All of Thunderbird’s releases are archived here.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Thunderbird Time Machine: Windows XP + Thunderbird 1.0 appeared first on The Thunderbird Blog.

The Mozilla BlogAnnouncing Steve Teixeira, Mozilla’s new Chief Product Officer

I am pleased to share that Steve Teixeira has joined Mozilla as our Chief Product Officer. During our search for a Chief Product Officer, Steve stood out to us because of his extensive experience at tech and internet companies, where he played instrumental roles in shaping products from research and design through security and development, and getting them out to market.

<figcaption>Steve Teixeira joins Mozilla executive team. Steve was photographed in Redmond, Wash., August 5, 2022.
(Photo by Dan DeLong for Mozilla)</figcaption>

As Chief Product Officer, Steve will be responsible for leading our product teams. This will include setting a product vision and strategy that accelerates the growth and impact of our existing products and setting the foundation for new product development.  His product management and technical expertise as well as his leadership experience are the right fit to lead our product teams into Mozilla’s next chapter. 

“There are few opportunities today to build software that is unambiguously good for the world while also being loveable for customers and great for business,” said Teixeira. “I see that potential in Firefox, Pocket, and the rest of the Mozilla product family. I’m also excited about being a part of the evolution of the product family that comes from projecting Mozilla’s evergreen principles through a modern lens to solve some of today’s most vexing challenges for people on the internet.”

Steve comes to us most recently from Twitter, where he spent eight months as a Vice President of Product for their Machine Learning and Data platforms. Prior to that, Steve led Product Management, Design and Research in Facebook’s Infrastructure organization. He also spent almost 14 years at Microsoft where he was responsible for the Windows third-party software ecosystems and held leadership roles in Windows IoT, Visual Studio and the Technical Computing Group. Steve also held a variety of engineering roles at small and medium-sized companies in the Valley in spaces like developer tools, endpoint security, mobile computing, and professional services. 

Steve will report to me and sit on the steering committee.

The post Announcing Steve Teixeira, Mozilla’s new Chief Product Officer appeared first on The Mozilla Blog.

The Mozilla BlogWhy I joined Mozilla’s Board of Directors

I first started working with digitalization and the internet when I became CEO of Scandinavia Online in 1998. It was the leading online service in the Nordics and we were pioneers and idealists. I learnt a lot from that experience: the endless opportunities, the tricky business models and the extreme ups and downs of hype cycles and busts in valuation. I also remember Mozilla during that time as a beacon of competence and idealism, as well as a champion for the open internet as a force for good.

kristin skogen lund mozilla board member<figcaption>Kristin Skogen Lund</figcaption>

Since those early days I have worked in the media industry, telecoms and interest organizations. Today I serve as CEO of Schibsted, the leading Nordic-based media company (which initially started Scandinavia Online back in the day). We own and operate around 70 digital consumer brands across media, online marketplaces, financial services, price comparison services and technology ventures. Within the global industry, we were known as one of the few traditional media companies that adapted to the digital world early on, disrupting our own business model to gain a position in the digital landscape.

I am deeply engaged in public policy and I serve as president of the European Tech Alliance (EUTA), comprising the leading tech companies of Europe. We work to influence and improve the EU’s digital regulation and to ensure an optimal breeding ground for European digital entrepreneurship. This work is essential as our societies depend upon technology being a force for good, something that cannot be taken for granted, nor is it always the case.

I take great honor in serving on the board of Mozilla to help promote its vision and work to diversify and expand to new audiences and services. It is exciting to serve on the board of a US-based company with such strong roots and that has been an inspiration for me these past 25 years.

The process of meeting board members and management has strengthened my impression of a very capable and engaged team. To build on past successes is never easy, but in Mozilla’s case it is all the more important — not just for Mozilla, but for the health of the internet and thus our global community. I look very much forward to being part of, and contributing to, that tremendous endeavor.

The post Why I joined Mozilla’s Board of Directors appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogHow You Can Contribute To Thunderbird Without Knowing How To Code

Thunderbird and K-9 Mail are both open-source software projects. That means anyone can contribute to them, improve them, and make them better products. But how does one contribute? You must need some programming skills, right? No! Do you want to learn how to help make a big difference in the global Thunderbird community, without knowing a single line of code? We have a few ideas to share.

Our friend Dustin Krysak, a contributor to Ubuntu Budgie and a customer solutions engineer at Sysdig, brilliantly compares a software project to a construction job:

“Programming is the carpenter, but you still need the architects, designers, project managers, and permit people to make a construction job come together.”

Dustin Krysak

Similarly, making an open-source project like Thunderbird succeed requires an entire community! We need the talents of programmers, graphic designers, translators, writers, financial supporters, enthusiastic fans, quality assurance helpers, bug hunters, and beta testers.

Even if you just have an idea to share, you can make a difference!

No matter what your skill set is, you can absolutely help make Thunderbird better than ever. Here are a few ideas to get you started.

Join The Support Crew

Are you an experienced Thunderbird user who knows the software inside and out? Maybe you want to pay it forward and volunteer some time to help new users! We even have a private discussion group for our support crew to help each other, so they can better support Thunderbird users.

To get involved:


Want to help improve Thunderbird by simply using it? Testing is a great way to contribute and requires no prior experience! Help us catch those bugs before they get loose!

Here’s all the info you need to get started with testing:

Let’s Go Bug Hunting

Speaking of bugs, capturing and reporting Thunderbird bugs is really important. It’s invaluable! And you’ll enjoy the satisfaction of helping MILLIONS of other users avoid that bug in the future!

We use Mozilla’s Bugzilla, a very powerful tool:

Translate This

We want the entire world to use Thunderbird, which is why it’s currently available in more than 60 languages. If you’re a wordsmith who understands multiple languages, you can aid our ongoing efforts to translate various Thunderbird web pages.

Join 100 other contributors who help translate Thunderbird websites!

Document All The Things

Know what else is important? Documentation! From beginner tutorials to technical guides, there’s always a need for helpful information to be written down and easily found.

There are many ways you can contribute to Thunderbird documentation. Start here:

Financial Support

Financial contributions are another way to help. Thunderbird is both free and freedom respecting, but we’re also completely funded by donations!

Year after year, the generosity of our donors makes it possible for Thunderbird to thrive. You can contribute a one-time or recurring monthly donation at

Sharing Is Caring

Do you share our tweets, Fediverse posts, or Facebook messages? Do you tell your friends and colleagues about Thunderbird? Then yes, you are contributing!

Word of mouth and community promotions are yet another key ingredient to the success of Thunderbird, and any open source project.

Enthusiasm is contagious. Keep sharing ❤


If you DO have coding skills there are so many ways to help! One of those ways is adding new functionality and designs to Thunderbird with Extensions and Themes!

Here’s what you need to know about making add-ons for Thunderbird:

Last but certainly not least, we’re always looking for contributions to Thunderbird itself from the talented FOSS developer community!

Our Developer Hub has everything you need to start hacking away on Thunderbird:

K-9 Mail and Thunderbird Mobile

Finally, there’s the newest member of the Thunderbird family, K-9 Mail. As we work towards bringing Thunderbird to Android, contributions are encouraged! Find out how you can help the K-9 Mail team here:

We can’t wait to see the many important ways you’ll contribute to Thunderbird. If you have questions about it, leave a comment on this post or ask us on social media.

The post How You Can Contribute To Thunderbird Without Knowing How To Code appeared first on The Thunderbird Blog.

The Mozilla BlogA back-to-school checklist for online safety

The first day of school is right around the corner. Whether that brings some relief, gives you jitters or both, we’re here to support families with one major thing: internet safety.  

For parents, thinking about the dangers of the web can be scary. But it doesn’t have to be. While the internet isn’t perfect, it’s also a wonderful place for learning and connecting with others. Here’s what families can do to make the best of it while staying safe this school year. 

An illustration shows a digital pop-up box that reads: A back-to-school checklist for online safety: Set up new passwords. Check your devices' privacy settings. Protect your child's browsing information. Discuss parental controls with the whole family. Have the "tech talk."<figcaption>Credit: Nick Velazquez / Mozilla</figcaption>

1. Set up new passwords

Back-to-school season is a good time to update passwords, since students often log in to the same learning tools they use at home and on campus. It’s important to teach kids the basics of password hygiene, including keeping passwords in a safe place and regularly changing them. 

2. Check your devices’ privacy settings 

Whether you have a preschooler who uses the family tablet to watch videos or a kid who’s finally ready for a phone, make sure to set up these devices with data privacy in mind. Figure out – together, if possible – which information they’re sharing with the apps they use. 

Have a school-issued device? Take the time to look into the settings, and don’t be afraid to ask teachers and school administrators about how the tools and software used in classrooms are handling students’ data.

3. Protect your child’s browsing information

An investigation by The Markup, in collaboration with Mozilla Rally, exposed how federal financial aid applications automatically sent students’ personal information to Facebook – even if a student didn’t have a Facebook account. It’s just one example of how invasive big tech’s data tracking has become. One way to cut the amount of information companies are collecting about your kid is by protecting their internet browsing data. 

Firefox has Total Cookie Protection on by default for all users. That means that when your child visits a website, cookies (which store bits of information a page remembers about them) stay within that website and out of the hands of companies that want to track their online behavior and target them with ads. 

How to make Firefox the default browser on a desktop computer:

  • If you haven’t already, download Firefox and open the app. 
  • In the menu bar at the top of the screen, click on Firefox > Preferences.
  • In the general panel, click on the Make Default button.

How to make Firefox the default browser on mobile:

  • Download Firefox.
  • On an iOS device, go to settings, scroll down and click on Firefox > Default Browser App > Firefox
  • On an Android, open the app. Click on the menu button next to the address bar > Settings > Set as default browser > Firefox for Android > Set as default.

Find more information about setting Firefox as the default browser on iOS and Android here.

<figcaption>Make Firefox your default browser on mobile.</figcaption>

4. Discuss parental controls with the whole family

Relying on parental control settings to limit kids’ screen time and block websites may be tempting. But no tool can completely protect kids online. One thing that researchers and advocates agree on when it comes to technology: open communication. Parents should talk to their children about whether or not they need to use parental controls and why. They should also figure out a plan to ease restrictions as kids learn how to manage themselves online. 

Ready for that conversation? Here are some Firefox extensions to consider with your family: 

  • Unhook
    Specific to YouTube, Unhook strips away a lot of the distracting “rabbit hole” elements of the site, including suggested videos, trending content and comments. 
  • Tomato Clock
    Based on a renowned time management method (Pomodoro technique), this extension helps a user focus on the computer by breaking up work intervals into defined “tomato” bursts. While this productivity extension could benefit anyone, parents might find it useful for helping kids stay focused during online school time.
  • Block Site
    Try this add-on if your family has agreed to implement restrictions on specific websites. With its password control feature, not only can parents continue to visit these websites, but they can also leave custom display messages if their kid tries to access a restricted site (“Busted! Shouldn’t you be doing homework?”) as well as redirect from one site to another (e.g. to a public library website).

If you’re new to extensions, you can learn more here

5. Have the “tech talk”

Of course, besides weak passwords and school work distractions, there’s plenty of age-appropriate topics that parents may want to talk to their children about. “It helps to talk about values first, then think through together – in developmentally appropriate ways, based on a child’s life stage – how to put those values into practice,” said Leah A. Plunkett, a Harvard Law School lecturer who teaches a course on youth and digital citizenship.

Another idea: Consider putting what your family has agreed upon on paper and having everyone sign it. Or, use a template like Common Sense Media‘s or this one, which also lists items that parents can agree to do, like recognizing the role media plays in their kids’ lives, even if they don’t fully understand it.

Like with any other aspect of parenting, providing kids safe and healthy experiences online is more complicated than it seems. The process won’t be perfect, but learning together – with help from trusted sources – can go a long way. 

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post A back-to-school checklist for online safety appeared first on The Mozilla Blog.

Firefox UXHow We’ve Used Figma to Evolve Our Content Design Practice

Including tips and tools for UX writers

A couple of keys are suspended  from a row of colorful legos that form a key hook.<figcaption>Photo by Scott Webb on Unsplash</figcaption>

Firefox UX adopted Figma about two years ago. In this post, I’ll share how our content design team has used the tool to shape collaboration and influence how design gets done.

Our Figma Journey: From Coach House to Co-Creation

Before Figma, the Firefox content design team worked in a separate space from visual and interaction designers. Our own tools — primarily the Google suite of products — were like a copy coach house.

When it came time to collaborate with UX design colleagues, we had to walk to the main design house (like Sketch, for example). We had to ask for help with the fancy coffee machine (Can you update this label for me? Is it too late to make a change to the layout to fit our content needs?). We felt a bit like guests: out of our element.

A sketch of a coach house behind a main house.<figcaption>Image source</figcaption>

This way of working made collaboration more complex. Content and UX design were using different tools, working in silo and simultaneously, to create one experience. If a content designer needed to explore layout changes, we would find ourselves painstakingly recreating mocks in other tools.

Google Slide screenshot with a recreation of the Firefox homepage design, with comments calling our requested changes.<figcaption>To propose copy and design changes to the Firefox homepage, I had to re-create the mock in Google Slides.</figcaption>

Today, thanks to Figma, visual, interaction, and content design can co-create within the same space. This approach better reflects how our disciplines can and should collaborate:

  • In early stages, when exploring ideas and shaping the architecture of an experience, content and design work together in a shared tool.
  • When it’s time to refine visuals, annotate interactions, or polish copy, our disciplines can branch off in different tools as needed, while staying in sync and on the same page. Then, we come back together to reflect the final visual and copy design in Figma.
“As the design systems team, it is very important to foster and support a culture of collaboration within the Firefox org for designing and making decisions together. Figma is a flexible design tool that has allowed the design systems team to partner more closely with teammates like content design, which ultimately means we build better products.” — Jules Simplicio, Design Systems, Firefox

Figma, Two Years Later: What We’ve Learned

1. Content design still needs other tools to do our jobs well

While Figma gave us the keys to the main design house, we continue to use other tools for copy development. There are a few steps critical to our process that Figma just can’t do.

Tools like Google Docs and Google Slides continue to serve us well for:

  • Content reviews with stakeholders like product management, legal, and localization
  • Aligning cross-functionally on content direction
  • Documenting rationale and context to tee up content recommendations
  • Managing comments and feedback

There’s no silver bullet tool for us, and that reflects the diversity of our stakeholders and collaborators, within and outside of UX. For now, we’ve accepted that our discipline will continue to need multiple tools, and we’ve identified which ones are best for which purpose.

Table describing how and when to use Google Docs, Miro, Google Slides, and Figma.<figcaption>Guidelines for tool usage. How and when to use what will flex according to context. Link to file.</figcaption>

Our pull towards tools that focus on the words also isn’t a bad or surprising thing. Focusing on language forces us to focus on understanding, first and foremost.

“Design teams use software like Sketch or XD to show what they’re designing — and those tools are great — but it’s easy to get caught up in the details… there’s no selector for figuring out what you’re working on or why it matters. It’s an open world — a blank slate. It’s a job for words… so before you start writing button labels, working on voice and tone guidelines, use the world’s most understated and effective design tool: a text editor.” Writing is Designing, Metts & Welfle

2. Content designers don’t need to become Figma experts

As a content designer, our focus is still on content: how it’s organized, what it includes, what it says. We’ve learned we don’t need to get into the Figma weeds of interaction or visual design, like creating our own components or variants.

However, we’ve found these basic functions helpful for collaboration:

  1. Using a low-fidelity wireframe library (see below)
  2. Inserting a component from a library
  3. Copying a component
  4. Turning things on or off within a complex component (example: hide a button)
  5. Editing text within components
  6. Creating sticky notes, pulling from a project template file (see below)
  7. Exporting frames for sharing
  8. Following someone in a file (for presentation and collaboration)
  9. Using plugins (example: for finding text within a file)

You can learn all these things in Figma’s public forums. But when we get stuck, we’ve saved ourselves time and frustration by asking a UX design colleague to troubleshoot. We’ve found that designers, especially those who work in design systems, are happy to help. We recommend creating or joining a Figma Slack channel at work to share ideas and ask questions.

3. Content designers DO need a low-fidelity library

Figma made it much easier for UX teams, including content design, to work in high fidelity. But this can be a blessing and a curse. It’s nice to be able to make quick adjustments to a component. However, when we’re in the early stages of a project we need to focus on ideas, strategy, and structure, rather than polish. We found ourselves getting distracted by details, and turning to other tools (like pen and paper) for visual explorations.

To solve for this, we partnered with the design systems team to create a low-fidelity wireframing library. We built this together, tested it, and then rolled it out to the broader UX team. As a result, we now have a tool within Figma that allows us to create mocks quickly and collaboratively. We created our custom library specific to browser design but Figma has many wireframing kits that you can use (like this one).

Our low-fidelity library democratizes the design process in a way that’s especially helpful for us writers: we can co-create with UX design colleagues using the same easy tools. It also helps people understand the part we play in determining things like information architecture. More broadly, working in low fidelity prevents stakeholders like engineering from thinking something is ready for hand-off when it’s not.

Screenshot of low fidelity components: browser chrome, text treatments, image.<figcaption>Example components from our low-fidelity library.</figcaption>

4. File structure is important

As new roommates, when we first started collaborating with UX designers in Figma, there was some awkwardness. We were used to working in separate tools. The Figma design canvas yawned before us. How would we do this? What the fig?

We collaborated with design systems to build a project template file. Of course, having a standard file structure and naming system is good documentation hygiene for any design team, but the template file also supports cross-discipline UX collaboration:

  • It puts some structure and process in place for that wide open Figma space, including identifying the end-to-end steps of the design process.
  • It gives us a shared set of tools to align on goals, capture thoughts, and track progress.
  • It helps concretize and solidify content design’s place and role within that process.
Screenshot of a portion of the Figma Project Template, which includes project name, team members, a summary of the problem and requirements and due date, and a space to capture current state designs.<figcaption>Screenshot of our Firefox Figma Project Template.</figcaption>

The file template is like a kit of parts. You don’t need all the parts for every design project. Certain pieces are particularly helpful for collaboration:

  • Status strip punch list to track the design and content review process. You can adjust the pill for each step as you move through the review process.
<figcaption>Our Firefox design status strip. Note, steps may happen in a different order (for example, localization is often earlier in the process).</figcaption>
  • Summary card: This asks UX and content designers to summarize the problem and share relevant documentation. As content designers, this helps us context switch more quickly (as we frequently need to do).
Summary card component which includes problem to be solved, requirements, due date, solution, and links to more information.<figcaption>Summary card component in the Firefox Project Template.</figcaption>
  • Standardized layout: Design files can quickly get out of control, in particular for projects with a lot of exploration and iteration. The file suggests a vertical structure in which you move your explorations to a space below, with final design and copy at the top. This kind of documentation is helpful for cross-functional collaborators like product management and engineering so they can orient themselves to the designs and understand status, like what’s final and what’s still work-in-progress.
  • Content frame: This is a space to explore and document copy-specific issues like information architecture, localization, and terminology.
Content card which includes a space for copy iterations, as well as guidance to include notes on things like localization and terminology.<figcaption>Content card in the Firefox Project Template.</figcaption>
  • Sticky notes and call-outs. The comment function in Figma can be tricky to manage. Comments get lost in the sea of design, you can’t search them, and you lose a thread once they are closed. For all those reasons, we tend to prefer sticky notes and call-outs, especially for meatier topics.
A sticky-note component and call-out notes for Project Decisions, Critical Issues, and Open Questions.<figcaption>Sticky note and call-out cards in the Firefox Project Template.</figcaption>

Closing Thoughts

A tool is only as effective as the collaboration and process surrounding it. Our team is still figuring out the best way to use Figma and scale best practices.

At the end of the day, collaboration is about people. It’s messy and a continual work-in-progress, especially for content design. But, at least we’re in the same design house now, and it’s got good bones for us to continue defining and refining how our discipline does its work.

Thank you to Betsy Mikel, Brent G. Trotter, and Emily Wachowiak for reviewing this post. And thank you to our design systems collaborators, Jules Simplicio and Katie Caldwell, for all the work you do to make getting work done better.

How We’ve Used Figma to Evolve Our Content Design Practice was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla L10NL10n Report: July 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

While the last months have been pretty quiet in terms of new content for Firefox, we’re approaching a new major release for 2022, and that will include new features and dedicated onboarding.

Part of the content has already started landing in the last days, expect more in the coming weeks. In the meantime, make sure to check out the feature name guidelines for Firefox View and Colorways.

In terms of upcoming deadlines: Firefox 104 is currently in Beta and it will be possible to update translations up to August 14.

What’s new or coming up in mobile

Mobile releases now align more closely to desktop release schedules, so you may notice that target dates for these projects are the same in Pontoon. As with desktop, things are quiet now for mobile, but we’ll be seeing more strings landing in the coming weeks for the next major release.

What’s new or coming up in web projects

Firefox Relay website & add-on

We’re expanding Firefox Relay Premium into new locales across Europe: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Portugal, Slovakia, Slovenia, Spain, Sweden, and Switzerland. In order to deliver a truly great experience to our users in these new locales, we want to make sure that users can use our products in the language they feel most comfortable with. Localizing these languages will make already complex topics like privacy and security easier to connect with, helping us reach more users and offer them greater protections.

If you don’t see the product offered in the language in the markets above, maybe you can help by requesting to localize the product. Thank you for helping spread the word.

What’s new or coming up in Pontoon

  • When 100% TM match is available, it now automatically appears in the editor if the string doesn’t have any translations yet.

    100% matches from Translation Memory now automatically appear in the editor

  • Before new users make their first contribution to a locale, they are now provided with guidelines. And when they submit their first suggestion, team managers get notified.

    Tooltip with guidelines for new contributors.

  • The Contributors page on the Team dashboard has been reorganized. Contributors are grouped by their role within the team, which makes it easier to identify and reach out to team managers.

    Team contributors grouped by role.

  • We have introduced a new list parameter in translate view URLs, which allows for presenting a selected list of strings in the sidebar.
  • Deadlines have been renamed to Target Dates.
  • Thanks to Eemeli for making a bunch of under-the-hood improvements, which make our codebase much easier to build on.


Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Open Policy & AdvocacyMozilla submits comments in OSTP consultation on privacy-preserving data sharing

Earlier this month, the US Office of Science and Technology Policy (OSTP) asked stakeholders to contribute to the development of a national strategy for “responsibly harnessing privacy-preserving data sharing and analytics to benefit individuals and society.” This effort offers a much-needed opportunity to advance privacy in online advertising, an industry that has not seen improvement in many years.

In our comments, we set out the work that Mozilla has undertaken over the past decade to shape the evolution of privacy preserving advertising, both in our products, and in how we engage with regulators and standards bodies.

Mozilla has often outlined that the current state of the web is not sustainable, particularly how online advertising works today. The ecosystem is broken. It’s opaque by design, rife with fraud, and does not serve the vast majority of those who depend on it – most importantly, the people who use the open web. The ways in which advertising is conducted today – through pervasive tracking, serial privacy violations, market consolidation, and lack of transparency – are not working and cause more harm than good.

At Mozilla, we’ve been working to drive the industry in a better direction through technical solutions. However, technical work alone can’t address disinformation, discrimination, societal manipulation, privacy violations, and more. A complementary regulatory framework is necessary to mitigate the most egregious practices in the ecosystem and ensure that the outcomes of such practices (discrimination, electoral manipulation, etc.) are untenable under law rather than due to selective product policy enforcement.

Our vision is a web which empowers individuals to make informed choices without their privacy and security being compromised.  There is a real opportunity now to improve the privacy properties of online advertising. We must draw upon the internet’s founding principles of transparency, public participation, and innovation. We look forward to seeing how OSTP’s national strategy progresses this vision.

The post Mozilla submits comments in OSTP consultation on privacy-preserving data sharing appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogThunderbird Time Machine, 2003: A Look Back At Thunderbird 0.1

Let’s take a walk down memory lane to the summer of 2003. Linkin Park, 50 Cent, and Evanescence have top-selling new albums. Apple’s iPod hasn’t even sold 1 million units. Mozilla’s new web browser used to be called Phoenix, but now it’s called Firebird. And a new cross-platform, open-source application called Thunderbird has debuted from the foundations of Mozilla Mail.

Because the entirety of Thunderbird’s releases and corresponding release notes have been preserved, I’ve started a self-guided tour of Thunderbird’s history. Why? A mixture of personal and technical curiosity. I used Thunderbird for a couple years in the mid-2000s, and again more recently, but there are giant gaps in my experience. So I’m revisiting every single major version to discover the nuances between releases; the changes big and small.

(If you ever get the craving to do the same, I’ve found the easiest operating system to use is Windows, preferably inside a virtual machine. Early versions of Thunderbird for Macs were built for PowerPC architecture, while early Linux versions were 32-bit only. Both may cause you headaches with modern PC hardware!)

3-Pane Mail Layout: A Solid Foundation!

Below is my screenshot of Thunderbird 0.1 running on a Windows 11 virtual machine.

The first thing you’re probably thinking is “well, not much has changed!” With respect to the classic 3-pane mail presentation, you’re absolutely right! (Hey, why mess with a good thing?)

A screenshot of Thunderbird 0.1 from 2003, running on modern hardware and Windows 11.

Thousands of changes have been made to the client between Thunderbird 0.1 and Thunderbird 102, both under the hood and cosmetically. But it’s clear that Thunderbird started with a strong foundation. And it remains one of the most flexible, customizable applications you can use.

Something else stands out about that screenshot above: the original Thunderbird logo. Far removed from the modern, flat, circular logo we have today, this original logo simply took the Mozilla Phoenix/Firebird logo and gave it a blue coat of paint:

The original Mozilla Phoenix/Thunderbird logos<figcaption>The original Mozilla Thunderbird (top) and Mozilla Phoenix (bottom) logos</figcaption>

Thunderbird 0.1 Release Notes: “Everything Is New”

Back in 2003, much of what we take for granted in Thunderbird now was actually groundbreaking. Things like UI extensions to extend functionality, and user-modifiable theming were forward-thinking ideas. For a bit of historical appreciation, here are the release notes for Thunderbird 0.1:

  • Customizable Toolbars and Mail 3-pane: Toolbars can be customized the way you want them. Choose View / Toolbars / Customize inside any window. Mozilla Thunderbird also supports a new vertical 3-pane configuration (Tools / Options / General), giving you even more choice in how you want to view your mail.
  • Extensions: UI extensions can be added to Mozilla Thunderbird to customize your experience with specific features and enhancements that you need. Extensions allow you to add features particular to your needs such as offline mail support. A full list of available extensions can be found here.
  • Contacts Manager: A contacts sidebar for mail compose makes it easy and convenient to add address book contacts to emails.
  • Junk Mail Detection: In addition to automatically detecting junk mail using the same method as Mozilla Mail, Thunderbird also sanitizes HTML in mail marked as junk in order to better protect your privacy and give peace of mind when viewing a message identified as junk.
  • New default theme: Mozilla Thunderbird 0.1 sports a crisp, fresh and attractive theme, based on the amazing Qute theme by Arvid Axelsson. This is the same theme used by Mozilla Firebird, giving Thunderbird a similar look and feel. Thunderbird also supports a growing number of downloadable themes which alter the appearance of the client.
  • Stream-lined user interface and simplified Options UI.
  • Integrated spell checker.

Next Time, Inside The Thunderbird Time Machine…

A fictitious entry from a Livejournal page, circa December 2004:

“I had a super productive weekend! Finally finished Half-Life 2 and cannot wait for the sequel! I also upgraded my Dell Inspiron 7000 laptop from Windows 98 to Windows XP, so it’s time to install Firefox 1.0 and Thunderbird 1.0. Looking forward to trying this new open-source software!”

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Thunderbird Time Machine, 2003: A Look Back At Thunderbird 0.1 appeared first on The Thunderbird Blog.

Blog of DataThis Week in Data: Python Environment Freshness

(“This Week in Glean Data” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

By: Perry McManis and Chelsea Troy

A note on audience: the intended reader for this post is a data scientist or analyst, product owner or manager, or similar who uses Python regularly but has not had the opportunity to work with engineering processes to the degree they may like. Experienced engineers may still benefit from the friendly reminder to keep their environments fresh and up-to-date.

When was the last time you remade your local Python environment? One month ago? Six months ago? 1997?

Wait, please, don’t leave. I know, I might as well have asked you when the last time you cleaned out the food trap in your dishwasher was and I apologize. But this is almost as important. Almost.

If you don’t recall when, go ahead and check when you made your currently most used environment. It might surprise you how long ago it was.

# See this helpful Stack Overflow post by Timur Shtatland:
# Mac:
conda env list -v -v -v | grep -v '^#' | perl -lane 'print $F[-1]' | xargs /bin/ls -lrtd
# Linux:
conda env list | grep -v '^#' | perl -lane 'print $F[-1]' | xargs ls -lrt1d
# Windows: first run "conda env list" to find the top-level directory of your envs,
# e.g. C:\Users\yourname\miniconda3\envs, then:
dir /T:C C:\Users\yourname\miniconda3\envs

Don’t feel bad though, if it does surprise you, or the answer is one you’d not admit publicly. Python environments are hard. Not in the everything is hard until you know how way, but in the why doesn’t this work? This worked last week! way. And the impetus is often to just not mess with things. Especially if you have that one environment that you’ve been using for the last 4 years, you know, the one you have propped up with popsicle sticks and duct tape? But I’d like to propose that you consider regularly remaking your environments, and that you build your own processes for doing so.

It is my opinion that if you can, you should be working in a fresh environment.

Much like the best-by date, what is fresh is contextual. But if you start getting that when did I stand this env up? feeling, it’s time. Working in a fresh environment has a few benefits. Firstly, it makes it more likely that other folks will be able to easily duplicate it. Just as a forecast becomes less accurate the further into the future it reaches, the further you get from the date you completed a task in a changing ecosystem, the less likely it is that the task can be successfully completed again.

Perhaps even more relevant is that packages often release security updates, APIs improve, and functionality that you originally had to implement yourself may even get an official release. Official releases, especially for higher-level programming languages like Python, are often highly optimized. For many researchers, those optimizations are out of the scope of their work, and rightly so. But the included version of that calculation in your favorite stats package will not only have had several engineers working on it to make it run as quickly as possible; you also have the benefit of many researchers testing it concurrently with you.

These issues can collide spectacularly in cases where people get stuck trying to replicate your environment due to a deprecated version of a requirement. And if you never update your own environment, it could take someone else bringing it up to you to even notice that one of the packages you are using is no longer available, or an API has been moved from experimental to release, or removed altogether.
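One quick way to gauge how far behind an environment has drifted is pip’s own staleness report (assuming pip is available in the environment; note this queries the package index, so it needs a network connection):

```shell
# List installed packages that have a newer release available.
# A long list here is a good hint the environment is overdue for a rebuild.
python3 -m pip list --outdated --disable-pip-version-check
```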

There is no best way of making fresh environments, but I have a few suggestions you might consider.

I will preface by saying that my preference is for command line tools, and these suggestions reflect that. Using graphical interfaces is a perfectly valid way to handle your environments, I’m just not that familiar with them, so while I think the ideas of environment freshness still apply, you will have to find your own way with them. And more generally, I would encourage you to develop your own processes anyway. These are more suggestions on where to start, and not all of them need find their way into your routines.

If you are completely unfamiliar with these environments, and you’ve been working in your base environment, I would recommend in the strongest terms possible that you immediately back it up. Python environments are shockingly easy to break beyond repair and tend to do so at the worst possible time in the worst possible way. Think live demo in front of the whole company that’s being simulcast on YouTube. LeVar Burton is in the audience. You don’t want to disappoint him, do you? The easiest way to quickly make a backup is to create a new environment through the normal means, confirm it has everything you need in it, and make a copy of the whole install folder of the original.

If you’re not in the habit of making new environments, the next time you need to do an update for a package you use constantly, consider making an entirely new environment for it. As an added bonus this will give you a fallback option in case something goes wrong. If you’ve not done this before, one of the easiest ways is to utilize pip’s freeze function.

pip list --format=freeze > requirements.txt
conda create -n {new env name}
conda activate {new env name}
pip install -r requirements.txt
pip install {package} --upgrade

When you create your requirements.txt file, it’s usually a pretty good idea to go through it. A common gotcha is that you might see local file paths in place of version numbers. That is why we used pip list here. But it never hurts to check.
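If a requirements file was generated with plain pip freeze, locally installed packages can show up as name @ file:///path entries instead of pinned versions. A quick grep flags them before you share the file (a sketch; the filename matches the block above):

```shell
# Flag requirements entries that point at local files instead of versions.
# pip writes these as: package @ file:///local/path
if grep -n "file://" requirements.txt; then
    echo "Local paths found: pin these to version numbers before sharing."
else
    echo "No local paths found."
fi
```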

Take a look at your version numbers, are any of these really out of date? That is something we want to fix, but often some of our important packages have dependencies that require specific versions and we have to be careful not to break those dependencies. But we can work around that while getting the newest version we can by removing those dependencies from our requirements file and installing our most critical packages separately. That way we let pip or conda get the newest versions of everything that will work. For example, if I need pandas, and I know pandas depends on numpy, I can remove both from my requirements document and let pip handle my dependencies for me.

pip install --upgrade pip
pip install -r requirements.txt
pip install pandas

Something you may notice is that this block looks like something that could be packaged up, since it’s just a collection of commands. And indeed it can be. We can put this in a shell script and, with a bit of work, add a command line option to more or less fire off a new environment for us in one go. This can also be expanded with shell commands for cases where we may need a compiler, a tool from another language, even a GitHub repo, etc. Assuming we have a way to run shell scripts, let’s call this script newenv:

conda deactivate
conda create -n $1
conda activate $1

apt install gcc
apt install g++

pip install --upgrade pip
pip install pystan==
python3 -m pip install prophet --no-cache-dir

pip install -r requirements.txt
pip install scikit-learn

git clone

cd ./fenix

echo "Finished creating new environment: $1"

And by adding some flag handling, we can now call sh newenv from bash and be ready to go.
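The flag handling itself can be as simple as refusing to run without an environment name. A minimal sketch (the function body just echoes; in the real script, the conda and pip steps above would go there):

```shell
# Sketch of argument checking for a newenv-style script.
newenv() {
    if [ -z "$1" ]; then
        echo "usage: sh newenv <env-name>" >&2
        return 1
    fi
    # Real version: conda create -n "$1", conda activate "$1",
    # pip install -r requirements.txt, and so on.
    echo "Finished creating new environment: $1"
}

newenv demo-env
```

Calling it without a name now fails fast with a usage message instead of half-building an unnamed environment.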

It will likely take some experimentation the first time or two. But once you know the steps you need to follow, getting new environment creation down to just a few minutes is as easy as packaging your steps up. And if you want to share, you can send your setup script rather than a list of instructions. Including this in your repository with a descriptive name and a mention in your README is a low-effort way to help other folks get going with less friction.

There are of course tons of other great ways to package environments, like Docker. I would encourage you to read into them if you are interested in reproducibility beyond the simpler case of just rebuilding your local environment with regularity. There are a huge number of fascinating and immensely powerful tools out there to explore, should you wish to bring even more rigor to your Python working environments.

In the end, the main thing is to work out a fast and repeatable method that enables you to get your environment up and going again quickly from scratch. One that works for you. And then when you get the feeling that your environment has been around for a while, you won’t have to worry about making a new environment being an all-day or, even worse, all-week affair. By investing in your own process, you will save yourself loads of time in the long run, and you may even save your colleagues some too. And hopefully spare yourself some frustration as well.

Like anything, the key to working out your process is repetitions. The first time will be hard, though maybe some of the tips here can make it a bit easier. But the second time will be easier. And after a handful, you will have developed a framework that will allow you to make your work more portable, more resilient and less angering, even beyond Python.

SUMO BlogIntroducing Smith Ellis

Hi everybody,

I’m so happy to introduce the latest addition to the Customer Experience team. Smith Ellis is joining forces with Tasos and Ryan to develop our support platform. It’s been a while since we had more than 2 engineers on the team, so I’m personally excited to see what we can unlock with the extra capacity.

Here’s a bit of an intro from Smith:

Hello Mozillians!  I’m Smith Ellis, and I’m joining the Customer Experience team as a Software Engineer. I’m more than happy to be here. I’ve held many technical and management roles in the past and have found that doing work that makes a difference is what makes me happy. My main hobbies are electronics, music, video games, programming, welding and playing with my kids.

I look forward to meeting you and making a difference with Mozilla.

Please join me to congratulate and welcome Smith into our SUMO family!

SUMO BlogWhat’s up with SUMO – July

Hi everybody,

There was a lot going on in Q2, but we accomplished many things too! I hope you’re able to celebrate what you’ve contributed, and let’s move forward into Q3 with renewed energy and excitement!

Welcome note and shout-outs

  • Welcome kaie, alineee, lisah9333, and Denys. Thanks for joining the KB world.
  • Thanks to Paul, Infinity, Anokhi, Noah, Wes, and many others for supporting Firefox users in the iOS platform.
  • Shout-outs to Kaio Duarte for doing a great job on being a social support moderator.
  • Congratulations to YongHan for getting both the l10n and forum badge in 2022. Keep up the good work!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • You should be able to watch the scrum meeting without an NDA requirement by now. Subscribe to the AirMozilla folder if you haven’t already, so you will get a notification each time we add a new recording.
  • How is our localization community doing? Check out the results of the SUMO localization audit that we did in Q2. You can also watch the recording of my presentation at the community meeting in June.
  • Are you enthusiastic about helping people? We need more contributors for social and mobile support! 
  • Are you a Thunderbird contributor? We need you! 
  • Say hi to Ryan Johnson, our latest addition to the CX team!

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in June! Reminder: don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel too shy to ask questions during the meeting, feel free to add them on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla.
  • Consider subscribing to the Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Also, check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month Page views Vs previous month
May 2022 7,921,342 3.19%
Jun 2022 7,787,739 -1.69%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale May 2022 pageviews (*) Jun 2022 pageviews (*) Localization progress (per Jul, 11)(**)
de 7.93% 7.94% 97%
zh-CN 6.86% 6.69% 100%
fr 6.22% 6.17% 89%
es 6.28% 5.93% 29%
pt-BR 5.26% 4.80% 52%
ru 4.03% 4.00% 77%
ja 3.75% 3.63% 46%
zh-TW 2.07% 2.26% 4%
it 2.31% 2.20% 100%
pl 1.97% 1.96% 87%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats


Top 5 forum contributors in the last 90 days: 

Social Support

Month Total incoming conv Conv interacted Resolution rate
May 2022 376 222 53.30%
Jun 2022 319 177 53.04%

Top 5 Social Support contributors in the past 2 months: 

  1. Kaio Duarte
  2. Bithiah K
  3. Magno Reis
  4. Christophe Villeneuve
  5. Felipe Koji

Play Store Support

Channel (total priority reviews / reviews replied) May 2022 Jun 2022
Firefox for Android 570 / 474 648 / 465
Firefox Focus for Android 267 / 52 236 / 39
Firefox Klar Android 4 / 1 3 / 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Kaio Duarte
  • Felipe Koji
  • Selim Şumlu

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder to get a notification each time we add a new recording.

Useful links:

Mozilla L10NIntroducing Pretranslation in Pontoon

In the coming months we’ll begin rolling out the Pretranslation feature in Pontoon.

Pretranslation is the process of using translation memory and machine translation systems to translate content before it is edited by translators. It is intended to speed up the translation process and ease the work of translators.

How it works

Pretranslation will be off by default and only enabled for a selected list of locales within selected projects by the L10n Program Managers.

When enabled, each string without any translations will be automatically pretranslated using the 100% match from translation memory or (should that not be available) using the machine translation engine. If there are multiple 100% matches in TM, the one which has been used the most times will be used.

Pretranslations will be assigned a new translation status – Pretranslated. That will allow translators to distinguish them from community submitted translations and suggestions, and make them easier to review and postedit.

The important bit is that pretranslations will be stored in version control systems immediately, which means the post-editing step may take place after translations have already shipped in the product.

Machine translation engines

We have evaluated several different options and came to the conclusion that the following scenario works best for our use case:

  • If your locale is supported by the Google AutoML Translation, we’ll use that service and train the engine using the Mozilla translation data sets, which will result in better translation quality than what’s currently suggested in the Machinery tab by generic MT engines.
  • For other locales we’ll use Google Translation API or similar engines.

Get involved

We are in the early stages of the feature rollout. We’re looking for teams that would like to test the feature within a pilot project. If you’re a locale manager and want to opt in, please send an email to and we’ll add your locale to our list of early adopters.

SeaMonkeySeaMonkey 2.53.13 is out!

Hi everyone!

The SeaMonkey Project is pleased to announce the immediate release of SeaMonkey 2.53.13!

Updates will be enabled once this is posted.

Please check out [1] and [2].

[1] –

[2] –

Open Policy & AdvocacyMozilla statement as EU Parliament adopts new pro-competition rulebook for Big Tech

The EU Parliament today adopted the ‘Digital Markets Act’, new rules that will empower consumers to easily choose and enjoy independent web browsers. We welcome the new pro-competition approach and call for a comprehensive designation of the full range of Big Tech gatekeepers to ensure the legislation contributes to a fairer and more competitive European digital market.

The DMA will grant consumers more freedom to choose what software they wish to use, while creating the conditions for independent developers to compete fairly with Big Tech. In particular, we see immense benefit in empowering consumer choice through prohibitions that tackle manipulative software designs and introduce safeguards that allow consumers to simply and easily try new apps, delete unwanted apps, switch between apps, change app defaults, and to expect similar functionality and use.

“The browser is a unique piece of software that represents people online. It is a tool that allows individuals to exercise choices about what they do and what happens to them as they navigate across the web. But like other independent web browsers, Mozilla has been harmed by unfair business practices that take away consumer choice, for instance when gatekeepers make it effectively impossible for consumers to enable and keep Firefox as their default web browser, or when they erect artificial operating system barriers that mean we can’t even offer consumers the choice of a privacy- and security-first browser. Ensuring that gatekeepers allocate enough resources to fully and transparently comply with the DMA is the first step towards a more open web and increased competition, allowing users to easily install and keep Firefox as their preferred browser on both desktop and mobile” – added Owen Bennett, Mozilla Senior Policy Manager.

To make the DMA’s promise a reality, swift and effective enforcement of the new law is required. It’s essential that all gatekeepers – and their core platform services – are identified and designated as soon as possible. This is the first test for the European Commission’s enforcement approach, and regulators must set down a marker that Europe means business.

We look forward to contributing to remedies that can ensure independent browsers can compete and offer consumers meaningful choices.

The post Mozilla statement as EU Parliament adopts new pro-competition rulebook for Big Tech appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogAre Your Favorite Thunderbird Add-ons Compatible With Thunderbird 102?

Thunderbird 102 is here! Our annual release is full of highly-requested features, beautiful new visuals, and quality-of-life upgrades. We’re also working hard to ensure that your favorite Thunderbird add-ons are compatible. In fact, we expect the majority of add-ons to be updated within a few weeks of Thunderbird 102’s release.

We understand that certain add-ons are invaluable to your workflow. So, to help you decide the best time to update your version of Thunderbird, here’s a simple way to discover if the extensions you currently have installed are compatible with Thunderbird 102.

Install “Addon Compatibility Check For TB 102”

How do you check the compatibility of your add-ons? By installing an add-on! While it doesn’t have the most creative name, “Addon Compatibility Check For TB 102” does its job remarkably well.

(This extension works with Thunderbird 68 and newer.)

Addon Compatibility Check For TB 102 icon in Thunderbird 102<figcaption>Click the icon that looks like a puzzle to instantly check add-on compatibility</figcaption>

Once installed, just click the icon that looks like a puzzle piece in your Mail Toolbar. It will instantly check if your installed Thunderbird add-ons will function properly with Thunderbird 102. (The data updates once every 24 hours). It also suggests any known alternatives for extensions that might have been abandoned and no longer receive updates.

Here’s my own result, above. The only “unknown” is uBlock Origin, but that’s because I downloaded it from Github, and not from the main Thunderbird extensions website. (Good news: it does work with 102!)

The Thunderbird add-on developer community is making a concerted effort to ensure that all of your favorite add-ons receive updates for Thunderbird 102. We hope this add-on helps you decide when upgrading from Thunderbird 91 is right for you.

The post Are Your Favorite Thunderbird Add-ons Compatible With Thunderbird 102? appeared first on The Thunderbird Blog.

Blog of DataThis Week in Glean: Reviewing a Book – Rust in Action

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This blog post is going to be a bit different from what you may have read from me in the past. Usually I write about things I am working on or things I have encountered while working that I find interesting. This is still a post about something I find interesting, but instead of directly related to the things I’ve been working on, it’s about something that Mozilla actively encourages me to do: furthering my knowledge and professional development. In this instance, I chose to read a book on Rust to try and increase my knowledge and fill in any gaps in my understanding of a programming language I use almost every day and have come to really enjoy working with. The book in question is Rust in Action by Tim McNamara.

The first thing I would like to call out is the great organization of the material in the book. The first few chapters go over a lot of basic material that was perfect for a beginner to Rust, but which I felt that I was already reasonably familiar with. So, I was able to skim over a few chapters and land at just the right point where I felt comfortable with my knowledge and start reading up on the things I was ready to learn more about. This happened to be right around the end of Part 1 with the bits about lifetimes and borrowing. I have been using Rust long enough to understand a lot of how this works, but learning some of the general strategies to help deal with ownership issues was helpful, especially thinking about wrapping data in types designed to aid in movement issues.

Being an especially data-oriented person, I took extra enjoyment from the “Data in depth” chapter. Having done quite a bit of embedded software development in the past, the explanation of endianness brought back memories of some not-so-portable code that would break because of it. The rest of the chapter was filled with bit-by-bit explanations of types and how they represent data in memory, op-codes, and other intricacies of thinking about data at a very low level. Definitely one of my favorite chapters in the book!
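The endianness pitfall the chapter covers is easy to demonstrate in Rust itself. This small illustrative example (not from the book) shows how the same integer has two different byte layouts, and why raw-byte code must pick an explicit order to stay portable:

```rust
fn main() {
    let value: u32 = 0x1234_5678;

    // Little-endian machines store the least significant byte first.
    assert_eq!(value.to_le_bytes(), [0x78, 0x56, 0x34, 0x12]);
    // Big-endian ("network order") stores the most significant byte first.
    assert_eq!(value.to_be_bytes(), [0x12, 0x34, 0x56, 0x78]);

    // Code that reinterprets raw bytes stays portable only if it names
    // the byte order explicitly instead of assuming the host's:
    let portable = u32::from_le_bytes([0x78, 0x56, 0x34, 0x12]);
    assert_eq!(portable, value);
}
```

Code that instead transmutes a byte buffer using the host's native order is exactly the kind of not-so-portable code that breaks when it moves between architectures.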

I found that the other chapters that stood out to me did so because they explained a topic such as Networking (chapter 8) or Time (chapter 9) in the context of Rust. These were things I had worked with in the past in other languages, and that recognition allowed the approaches being explained in Rust to really sink in. Examples of patterns like sending raw TCP data and formatting timestamps were interspersed with concepts like traits and ways to improve error handling at just the right times to explain the Rust approaches to them.

I would definitely recommend this book to anyone who is interested in learning more about Rust, especially how that applies to “systems” programming. I feel like I have come away from this with a better understanding of a few things that were still a little fuzzy and I learned quite a bit about threading in Rust that I really hadn’t been exposed to before. We continue to rely on Rust to write our services in a cross-platform way and there is a lot of information and techniques in this book that I can directly apply to the real-world problems I face working on data and experimentation tools here at Mozilla.

All in all, a really enjoyable book with fun examples to work through. Thanks to the author, and to Mozilla for encouraging me to continually improve myself.

hacks.mozilla.org: Neural Machine Translation Engine for Firefox Translations add-on

Firefox Translations is a website translation add-on that provides an automated translation of web content. Unlike cloud-based alternatives, translation is done locally on the client-side in the user’s computer so that the text being translated does not leave your machine, making it entirely private. The add-on is available for installation on Firefox Nightly, Beta and in General Release.

The add-on builds on the results of project Bergamot, a collaboration between Mozilla, the University of Edinburgh, Charles University in Prague, the University of Sheffield, and the University of Tartu, with funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme.

The add-on is powered internally by Bergamot Translator, a Neural Machine Translation engine that performs the actual task of translation. This engine can also be utilized in different contexts, like in this demo website, which lets the user perform free-form translations without using the cloud.

In this article, we will discuss the technical challenges around the development of the translation engine and how we solved them to build a usable Firefox Translations add-on.


The translation engine is built on top of the marian framework, a free Neural Machine Translation framework written in pure C++. The framework includes a standalone native application that provides simple translation functionality. However, two novel features needed to be introduced to the add-on that were not present in the existing native application.

The first was translation of forms, allowing users to input text in their own language and dynamically translate it on-the-fly into the page’s language. The second was estimating the quality of the translations, so that low-confidence translations could be automatically highlighted on the page to notify the user of potential errors. This led to the development of the translation engine, a high-level C++ API layer on top of marian.
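The second feature can be sketched roughly as follows (an illustrative Rust sketch with invented names, not the engine's actual C++ API): the quality estimator assigns each translated word a confidence score, and words scoring below some threshold are flagged for highlighting.

```rust
// Hypothetical helper: given (word, confidence) pairs from a quality
// estimator, return the words that should be highlighted as low-confidence.
fn low_confidence_words<'a>(scored: &[(&'a str, f32)], threshold: f32) -> Vec<&'a str> {
    scored
        .iter()
        .filter(|(_, score)| *score < threshold)
        .map(|(word, _)| *word)
        .collect()
}

fn main() {
    // Invented sample scores for a translated sentence.
    let scored = [("Guten", 0.93), ("Tag", 0.91), ("Erdnussbutter", 0.42)];
    assert_eq!(low_confidence_words(&scored, 0.5), vec!["Erdnussbutter"]);
}
```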

The resulting translation engine is compiled directly to native code. There were three potential architectural solutions to integrating it into the add-on:

  1. Native integration to Firefox: Bundling the entire translation engine native code into Firefox.
  2. Native messaging: Deploying the translation engine as a native application on the user’s computer and allowing the add-on to exchange messages with it.
  3. Wasm: Porting the translation engine to Wasm and integrating it into the add-on using the developed JS bindings.

We evaluated these solutions on the following factors which we believed were crucial to develop a production ready translation add-on:

  1. Security: The approach of native integration inside the Firefox Web Browser was discarded following Mozilla’s internal security review of the engine code base, which highlighted issues over the number of third-party dependencies of the marian framework.
  2. Scalability and Maintainability: Native messaging would have posed challenges around distributing the code for the project because of the overhead of providing builds compatible with all platforms supported by Firefox. This would have been impractical to scale and maintain.
  3. Platform Support: The underlying marian framework of the translation engine supports translation only on x86/x86_64 processors. Given the increasing availability of ARM-based consumer devices, the native messaging approach would have kept this private, local translation technology from reaching a wider audience.
  4. Performance: Wasm runs slower than native code. However, it has the potential to execute at near-native speed by taking advantage of common hardware capabilities available on a wide range of platforms.

Wasm’s design as a portable compilation target for programming languages means developing and distributing a single binary that runs on all platforms. Additionally, Wasm is memory-safe and runs in a sandboxed execution environment, making it secure when parsing and processing Web content. All these advantages, coupled with its potential to execute at near-native speed, gave us the motivation to prototype this architectural solution and evaluate whether it met the performance requirements of the translation add-on.

Prototyping: Porting to Wasm

We chose the Emscripten toolchain for compiling the translation engine to Wasm. The engine didn’t compile to Wasm out of the box, and we made a few changes to compile it successfully and perform translation using the generated Wasm binary, some of which are as follows:

Prototyping to integration


After having a working translation Wasm binary, we identified a few key problems that needed to be solved to convert the prototype to a usable product.


Packaging all the files for each supported language pair into the Wasm binary meant it was impractical to scale to new language pairs. The files for each language pair (translating from one language to another and vice versa) amount to ~40 MB of disk space in compressed form. As an example, supporting translation of 6 language pairs pushed the size of the binary to ~250 MB.

Demand-based language support

The packaging of files for each supported language pair into the Wasm binary also meant that users would be forced to download all supported language pairs even if they intended to use only a few of them. This is highly inefficient compared to downloading files for language pairs on demand.


We benchmarked the translation engine on three main metrics which we believed were critical from a usability perspective.

  1. Startup time: The time it takes for the engine to be ready for translation. The engine loads models, vocabularies, and optionally a shortlist file during this step.
  2. Translation speed: The time taken by the engine to translate a given text after its successful startup, measured in words translated per second (wps).
  3. Wasm binary size: The disk space of the generated Wasm binary.

The size of the generated Wasm binary, owing to the packaging, became dependent on the number of language pairs supported. The translation engine also took an unusually long time (~8 seconds) to start up and was extremely slow at performing translation, making it unusable.

As an example, translation from English to German language using corresponding trained models gave only 95 wps on a MacBook Pro (15-inch, 2017), MacOS version 11.6.2, 3.1 GHz Quad-Core Intel Core i7 processor, 16 GB 2133 MHz RAM.


Scalability, demand-based language support and binary size

As the packaging of the files affected the usability of the translation engine on multiple fronts, we decided to solve that problem first. We introduced a new API in the translation engine to pass the required files as byte buffers from the outside, instead of packing them at compile time into Emscripten’s virtual file system.

This allowed the translation engine to scale to new languages without increasing the size of the Wasm binary, and enabled the add-on to dynamically download files for only those language pairs that users were interested in. The final size of the Wasm binary (~6.5 MB) was well within the limits of the corresponding metric.
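The shape of that byte-buffer API can be sketched as follows (a Rust sketch with invented names; the real engine exposes a C++ API through Emscripten bindings). The key idea is that the caller downloads only the language pairs it needs and hands the engine raw bytes at runtime, rather than the files being baked into the binary at compile time:

```rust
/// Files for one language pair, downloaded on demand by the add-on.
struct LanguagePairFiles {
    model: Vec<u8>,
    vocabulary: Vec<u8>,
    shortlist: Option<Vec<u8>>, // the shortlist file is optional
}

/// Hypothetical entry point: validate the buffers and hand them to the
/// engine, instead of reading files from a compile-time virtual filesystem.
fn create_translation_model(files: &LanguagePairFiles) -> Result<(), String> {
    if files.model.is_empty() || files.vocabulary.is_empty() {
        return Err("model and vocabulary buffers are required".to_string());
    }
    // ... pass the byte buffers to the underlying translation engine here ...
    Ok(())
}

fn main() {
    let files = LanguagePairFiles {
        model: vec![0u8; 16],      // placeholder bytes, not a real model
        vocabulary: vec![0u8; 16],
        shortlist: None,
    };
    assert!(create_translation_model(&files).is_ok());
}
```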

Startup time optimization

The new API that we developed to solve the packaging problem, coupled with a few other optimizations in the marian framework, solved the long startup time problem. The engine’s startup time dropped substantially (to ~1 second), well within the acceptable limits for this performance criterion.

Translation speed optimization

Profiling the translation step in the browser indicated that the General matrix multiply (GEMM) instruction for 8-bit integer operands was the most computationally intensive operation, and that the exception handling code had a high overhead on translation speed. We focused our efforts on optimizing both.

  1. Optimizing exception handling code: We replaced try/catch with an if/else-based implementation in a function that was frequently called during the translation step, which resulted in a ~20% boost in translation speed.
  2. Optimizing GEMM operation: Deeper investigation on profiling results revealed that the absence of GEMM instruction in Wasm standard was the reason for it to perform so poorly on Wasm.
    1. Experimental GEMM instructions: Purely to evaluate the performance of a GEMM instruction without getting it standardized in Wasm, we landed two experimental instructions in Firefox Nightly and Release for the x86/x86_64 architecture. These instructions improved translation speed by ~310%, and the translation of webpages seemed fast enough for the feature to be usable on these architectures. The feature was protected behind a flag and exposed only to privileged extensions in Firefox Release owing to its experimental nature. We still wanted to figure out a standards-based solution before this could be released as production software, but it allowed us to continue developing the extension while we worked with the Firefox WASM team on a better long-term solution.
    2. Non-standard long-term solution: In the absence of a concrete timeline for the implementation of a GEMM instruction in the Wasm standard, we replaced the experimental GEMM instructions with a Firefox-specific, non-standard long-term solution that provided the same or higher translation speeds. Apart from privileged extensions, this solution enabled translation functionality for non-privileged extensions as well as regular content at the same translation speeds, and it enabled translation on ARM64-based platforms, albeit at low speeds. None of this was possible with the experimental GEMM instructions.
    3. Native GEMM intrinsics: In an effort to improve translation speeds further, we landed a native GEMM implementation in Firefox Nightly, protected behind a flag and exposed as intrinsics. The translation engine calls these intrinsics directly during the translation step whenever it is running in Firefox Nightly on x86/x86_64-based systems. This work increased translation speeds by 25% and 43% for the SSSE3 and AVX2 SIMD extensions respectively, compared to the experimental instructions we had landed earlier.
  3. Emscripten toolchain upgrade: The most recent effort, updating the Emscripten toolchain to the latest version, increased translation speeds on all platforms by ~15% on Firefox and reduced the size of the Wasm binary by a further ~25% (final size ~4.94 MB).
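The first optimization above can be illustrated in Rust terms (the real change was in the engine's C++; this analogy uses Rust's `catch_unwind` to stand in for try/catch): on a hot path, replacing unwinding-based error handling with an explicit branch avoids the overhead of the unwinding machinery entirely.

```rust
use std::panic;

// Unwinding-based: relies on catching a panic when the input is too short.
fn read_u32_unwinding(bytes: &[u8]) -> Option<u32> {
    panic::catch_unwind(|| u32::from_le_bytes(bytes[..4].try_into().unwrap())).ok()
}

// Branch-based: checks the length up front; no unwinding machinery involved.
fn read_u32_branching(bytes: &[u8]) -> Option<u32> {
    let chunk: [u8; 4] = bytes.get(..4)?.try_into().ok()?;
    Some(u32::from_le_bytes(chunk))
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the expected panic message

    // Both behave identically; only the failure mechanism differs.
    assert_eq!(read_u32_unwinding(&[1, 0, 0, 0]), Some(1));
    assert_eq!(read_u32_branching(&[1, 0, 0, 0]), Some(1));
    assert_eq!(read_u32_unwinding(&[1, 0]), None);
    assert_eq!(read_u32_branching(&[1, 0]), None);
}
```

The exact ~20% figure came from profiling the engine's own hot function; the point of the sketch is only the shape of the transformation.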

Eventually, we achieved translation speeds of ~870 wps for translation from English to German using the corresponding trained models, on Firefox Release on a MacBook Pro (15-inch, 2017), MacOS version 11.6.2, 3.1 GHz Quad-Core Intel Core i7 processor, 16 GB 2133 MHz RAM.


The translation engine is optimized to run at high translation speeds only on x86/x86_64 processors, and we have ideas for improving the situation on ARM. A standardized Wasm GEMM instruction could achieve similar speeds on ARM, benefiting the emerging class of consumer laptops and mobile devices. We also know that the native Marian engine performs even better with multithreading, but we had to disable multithreaded code in this version of the translation engine. Once SharedArrayBuffer support is broadly enabled, we believe we could re-enable multithreading, making even faster translation speeds possible.


I would like to thank the Bergamot consortium partners, Mozilla’s Wasm team, and my teammates Andre Natal and Evgeny Pavlov for their contributions in developing a mature translation engine. I am thankful to Lonnen along with Mozilla’s Add-on team, Localization team, QA team, and the Mozilla community, who supported us and contributed to the development of the Firefox Translations add-on.

This project has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303.

The post Neural Machine Translation Engine for Firefox Translations add-on appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird Blog: Thunderbird 102 Released: A Serious Upgrade To Your Communication

Thunderbird 102 is here! On behalf of the entire Thunderbird team, I’m thrilled to announce the availability of our major yearly release. Thunderbird 102 is loaded with highly-requested features, and we think you’ll be delighted by them.

It features refreshed icons, color folders, and quality-of-life upgrades like the redesigned message header. It ushers in a brand new Address Book to bring you closer than ever to the people you communicate with. Plus useful new tools to help you manage your data, navigate the app faster, and boost your productivity. We’re even bringing Matrix to the party!

[Press friends: we also have a press release with lots of screenshots and GIFs for you right here.]

<figcaption>New icons and color folders</figcaption>

Thunderbird 102 is available to download right now for our global community of 20M+ Linux, macOS, and Windows users. (Using Thunderbird 91 and don’t need an immediate upgrade? Your client should automatically update within the next few weeks as we gradually roll it out.)

Let’s talk about some of the best new features in Thunderbird 102.


You don’t communicate with contacts, you communicate with people. That’s why Thunderbird 102 gives you one of the best tools to better track and interact with all the important people in your life.

<figcaption>New Contact layout in Thunderbird 102</figcaption>
<figcaption>Basic contact view in dark/light modes (Shown here on Linux)</figcaption>

One of Thunderbird 102’s headlining features is the new Address Book, which represents the first of many steps towards a new, modernized direction for Thunderbird’s UX/UI. The redesigned Address Book delivers a clean visual presentation and several new information fields to help you better understand who you’re communicating with. It’s also compatible with vCard specs, the industry standard for saving contacts. If your app (like Google Contacts) or device (like an Android smartphone) can export existing contacts to vCard format, Thunderbird can effortlessly import them. Finally, the new Address Book acts as a one-stop launchpad for messaging or event creation with each of your contacts.
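For illustration, here is a minimal contact in vCard format, the industry standard mentioned above (the name and address are made up; a file like this, as exported by Google Contacts or an Android phone, is what the new Address Book can import):

```text
BEGIN:VCARD
VERSION:4.0
FN:Jane Doe
EMAIL:jane.doe@example.com
TEL:+1-555-0100
END:VCARD
```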

New Spaces Toolbar: Get There Faster

<figcaption>Thunderbird 102’s Spaces Toolbar</figcaption>

Thunderbird now features a central Spaces Toolbar for fast and easy access to your most important activities inside the application: With a single click, you can navigate between Mail, Address Book, Calendar, Tasks, Chat, and your installed add-ons. And if you feel like adapting Thunderbird to your personal preferences, there’s a button for Settings, too!

New Import / Export Wizard: Move Data With Ease

<figcaption>Thunderbird 102’s Import / Export Wizard (shown here on Linux)</figcaption>

Moving accounts and data in and out of Thunderbird should be a breeze! Previously, this required the installation and use of add-ons, but the Thunderbird team is thrilled to share that it’s now a core, built-in feature. A step-by-step wizard provides a contextual, guided experience for importing your important data. This means that moving from Outlook, SeaMonkey, or another Thunderbird installation is easier than ever.

Redesigned Message Header: Focus On What Matters

<figcaption>Message Header: TRANSFORM!</figcaption>

One of the core philosophies behind the development of Thunderbird 102 was doing more with less. Now, when reading your email, the redesigned message header allows you to focus on what matters as it highlights important information. It’s also more responsive and easier to navigate. Plus, you can “star” important messages from the header itself, and convert them into a calendar event or a task.

Matrix Chat Support: Decentralized Communication For Everyone

The Thunderbird team loves open source and open standards. That’s why you’ll find out-of-box support for the popular, decentralized chat protocol Matrix in Thunderbird 102. Matrix provides secure messaging, VoIP, and data transfer capabilities and is rapidly expanding its feature set.

We hope you love using it as much as we love building it! And there’s still more to come as Thunderbird 102’s development evolves! Complete details of all the changes can be found in the Thunderbird release notes for version 102 and newer.

The post Thunderbird 102 Released: A Serious Upgrade To Your Communication appeared first on The Thunderbird Blog.

hacks.mozilla.org: The JavaScript Specification has a New License

Ecma International recently approved the 2022 standard of ECMAScript. There is something new in this edition that hasn’t been part of prior editions, but this isn’t a new programming feature.

In March of this year, Ecma International accepted a proposal led by Mozilla for a new alternative license. On June 22nd, the first requests to adopt this license were granted to TC39 and applied to the following documents: ECMA-262 (ECMAScript, the official name for JavaScript) and ECMA-402 (the Internationalization API for ECMAScript).

The ECMAScript specification is developed at Ecma International, while other web technologies like HTML and CSS are being developed at W3C. These institutions have different default license agreements, which creates two problems. First, having different licenses increases the overhead of legal review for participants. This can create a speed bump for contributing across different specifications. Second, the default ECMA license contains some restrictions against creating derivative works, in contrast to W3C. These provisions haven’t been a problem in practice, but they nevertheless don’t reflect how we think Open Source should work, especially for something as foundational as JavaScript. Mozilla wants to make it easy for everyone to participate in evolving the Web, so we took the initiative of introducing an alternative license for Ecma International specifications.

What is the alternative license?

The full alternative license text may be found on the Ecma License FAQ. Ecma now provides two licenses, which can be adopted depending on the needs of a given technical committee. The default Ecma International license provides a definitive document and location for work on a given standard, with the intention of preventing forking. The license has provisions that allow inlining a given standard into source text, as well as reproduction in part or full.

The new alternative license seeks to align with the work of the W3C, and the text is largely based on the W3C’s Document and Software License. This license is more permissive regarding derivative works of a standard. This provides a legal framework and an important guarantee that the development of internet infrastructure can continue independent of any organization. By applying the alternative license to a standard as significant as ECMAScript, Ecma International has demonstrated its stewardship of a fundamental building block of the web. In addition, this presents a potential new home for standardization projects with similar licensing requirements.

Standards and Open Source

Standardization arises from the need of multiple implementers to align on a common design. Standardization improves collaboration across the industry, and reduces replicated solutions to the same problem. It also provides a way to gather feedback from users or potential users. Both Standards and Open Source produce technical solutions through collaboration. One notable distinction between standardization and an Open Source project is that the latter often focuses on developing solutions within a single implementation.

Open source has led the way with permissive licensing of projects. Over the years, different licenses such as the BSD, Creative Commons, GNU GPL & co, MIT, and MPL have sought to allow open collaboration with different focuses and goals. Standardizing bodies are gradually adopting more of the techniques of Open Source. In 2015, W3C adopted its Document and Software License, and in doing so moved many of the specifications responsible for the Web such as CSS and HTML. Under this new license, W3C ensured that the ability to build on past work would exist regardless of organizational changes.

Mozilla’s Role

As part of our work to ensure a free and open web, we worked together with Ecma International and many partners to write a license inspired by the W3C Document and Software License. Our goal was for JavaScript’s licensing status to align with other specifications of the Web. In addition, with this new license available to all TCs at Ecma International, other organizations can approach standardization from the same perspective.

Changes like this come from the work of many different participants, and we thank everyone at TC39 who helped with this effort. In addition, I’d also like to thank my colleagues at Mozilla for their excellent work: Zibi Braniecki and Peter Saint-Andre, who supported me in writing the document drafts and in the Ecma International discussions; and Daniel Nazer, Eric Rescorla, Bobby Holley, and Tantek Çelik for their advice and guidance on this project.

The post The JavaScript Specification has a New License appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.org: Fuzzing rust-minidump for Embarrassment and Crashes – Part 2

This is part 2 of a series of articles on rust-minidump. For part 1, see here.

So to recap, we rewrote breakpad’s minidump processor in Rust, wrote a ton of tests, and deployed to production without any issues. We killed it, perfect job.

And we still got massively dunked on by the fuzzer. Just absolutely destroyed.

I was starting to pivot off of rust-minidump work because I needed a bit of a palate cleanser before tackling round 2 (handling native debuginfo, filling in features for other groups who were interested in rust-minidump, adding extra analyses that we’d always wanted but were too much work to do in Breakpad, etc etc etc).

I was still getting some PRs from people filling in the corners they needed, but nothing that needed too much attention, and then @5225225 smashed through the windows and released a bunch of exploding fuzzy rabbits into my office.

I had no idea who they were or why they were there. When I asked they just lowered one of their seven pairs of sunglasses and said “Because I can. Now hold this bunny”. I did as I was told and held the bunny. It was a good bun. Dare I say, it was a true bnnuy: it was libfuzzer. (Huh? You thought it was gonna be AFL? Weird.)

As it turns out, several folks had built out some really nice infrastructure for quickly setting up a decent fuzzer for some Rust code: cargo-fuzz. They even wrote a little book that walks you through the process.

Apparently those folks had done such a good job that 5225225 had decided it would be a really great hobby to just pick up a random rust project and implement fuzzing for it. And then to fuzz it. And file issues. And PRs that fix those issues. And then implement even more fuzzing for it.

Please help my office is drowning in rabbits and I haven’t seen my wife in weeks.

As far as I can tell, the process seems to genuinely be pretty easy! I think their first fuzzer for rust-minidump was basically just:

  • check out the project
  • run cargo fuzz init (which autogenerates a bunch of config files)
  • write a file with this:

use libfuzzer_sys::fuzz_target;
use minidump::*;

fuzz_target!(|data: &[u8]| {
    // Parse a minidump like a normal user of the library
    if let Ok(dump) = minidump::Minidump::read(data) {
        // Ask the library to get+parse several streams like a normal user.

        let _ = dump.get_stream::<MinidumpAssertion>();
        let _ = dump.get_stream::<MinidumpBreakpadInfo>();
        let _ = dump.get_stream::<MinidumpCrashpadInfo>();
        let _ = dump.get_stream::<MinidumpException>();
        let _ = dump.get_stream::<MinidumpLinuxCpuInfo>();
        let _ = dump.get_stream::<MinidumpLinuxEnviron>();
        let _ = dump.get_stream::<MinidumpLinuxLsbRelease>();
        let _ = dump.get_stream::<MinidumpLinuxMaps>();
        let _ = dump.get_stream::<MinidumpLinuxProcStatus>();
        let _ = dump.get_stream::<MinidumpMacCrashInfo>();
        let _ = dump.get_stream::<MinidumpMemoryInfoList>();
        let _ = dump.get_stream::<MinidumpMemoryList>();
        let _ = dump.get_stream::<MinidumpMiscInfo>();
        let _ = dump.get_stream::<MinidumpModuleList>();
        let _ = dump.get_stream::<MinidumpSystemInfo>();
        let _ = dump.get_stream::<MinidumpThreadNames>();
        let _ = dump.get_stream::<MinidumpThreadList>();
        let _ = dump.get_stream::<MinidumpUnloadedModuleList>();
    }
});

And that’s… it? And all you have to do is type cargo fuzz run and it downloads, builds, and spins up an instance of libfuzzer and finds bugs in your project overnight?

Surely that won’t find anything interesting. Oh it did? It was largely all bugs in code I wrote? Nice.

cargo fuzz is clearly awesome but let’s not downplay the amount of bafflingly incredible work that 5225225 did here! Fuzzers, sanitizers, and other code analysis tools have a very bad reputation for drive-by contributions.

I think we’ve all heard stories of someone running a shiny new tool on some big project they know nothing about, mass filing a bunch of issues that just say “this tool says your code has a problem, fix it” and then disappearing into the mist and claiming victory.

This is not a pleasant experience for someone trying to maintain a project. You’re dumping a lot on my plate if I don’t know the tool, have trouble running the tool, don’t know exactly how you ran it, etc.

It’s also very easy to come up with a huge pile of issues with very little sense of how significant they are.

Some things are only vaguely dubious, while others are horribly terrifying exploits. We only have so much time to work on stuff, you’ve gotta help us out!

And in this regard 5225225’s contributions were just, bloody beautiful.

Like, shockingly fantastic.

They wrote really clear and detailed issues. When I skimmed those issues and misunderstood them, they quickly clarified and got me on the same page. And then they submitted a fix for the issue before I even considered working on the fix. And quickly responded to review comments. I didn’t even bother asking them to squash their commits because damnit they earned those 3 commits in the tree to fix one overflow.

Then they submitted a PR to merge the fuzzer. They helped me understand how to use it and debug issues. Then they started asking questions about the project and started writing more fuzzers for other parts of it. And now there’s like 5 fuzzers and a bunch of fixed issues!

I don’t care how good cargo fuzz is, that’s a lot of friggin’ really good work! Like I am going to cry!! This was so helpful??? 😭

That said, I will take a little credit for this going so smoothly: both Rust itself and rust-minidump are written in a way that’s very friendly to fuzzing. Specifically, rust-minidump is riddled with assertions for “hmm this seems messed up and shouldn’t happen but maybe?” and Rust turns integer overflows into panics (crashes) in debug builds (and index-out-of-bounds is always a panic).

Having lots of assertions everywhere makes it a lot easier to detect situations where things go wrong. And when you do detect that situation, the crash will often point pretty close to where things went wrong.
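Rust's overflow behavior mentioned above works like this (a small standalone example, not rust-minidump code): in a debug build, plain `+` on an overflowing value panics loudly, which is exactly the kind of failure a fuzzer can latch onto, while `checked_` arithmetic surfaces the same condition explicitly in every build profile.

```rust
fn main() {
    // In a debug build, `u32::MAX + 1` would panic with
    // "attempt to add with overflow". Checked arithmetic makes the
    // condition visible in release builds too:
    assert_eq!(u32::MAX.checked_add(1), None);
    assert_eq!(41u32.checked_add(1), Some(42));

    // Index-out-of-bounds always panics, debug or release;
    // `get` is the non-panicking alternative:
    let data = [1u8, 2, 3];
    assert!(data.get(10).is_none());
}
```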

As someone who has worked on detecting bugs in Firefox with sanitizer and fuzzing folks, let me tell you what really sucks to try to do anything with: “Hey so on my machine this enormous complicated machine-generated input caused Firefox to crash somewhere this one time. No, I can’t reproduce it. You won’t be able to reproduce it either. Anyway, try to fix it?”

That’s not me throwing shade on anyone here. I am all of the people in that conversation. The struggle of productively fuzzing Firefox is all too real, and I do not have a good track record of fixing those kinds of bugs. 

By comparison I am absolutely thriving under “Yeah you can deterministically trip this assertion with this tiny input you can just check in as a unit test”.

And what did we screw up? Some legit stuff! It’s Rust code, so I am fairly confident none of the issues were security concerns, but they were definitely quality of implementation issues, and could have been used to at very least denial-of-service the minidump processor.

Now let’s dig into the issues they found!

#428: Corrupt stacks caused infinite loops until OOM on ARM64


As noted in the background, stackwalking is a giant heuristic mess and you can find yourself going backwards or stuck in an infinite loop. To keep this under control, stackwalkers generally require forward progress.

Specifically, they require the stack pointer to move down the stack. If the stack pointer ever goes backwards or stays the same, we just call it quits and end the stackwalk there.

However, you can’t be so strict on ARM because leaf functions may not change the stack size at all. Normally this would be impossible because every function call at least has to push the return address to the stack, but ARM has the link register which is basically an extra buffer for the return address.

The existence of the link register in conjunction with an ABI that makes the callee responsible for saving and restoring it means leaf functions can have 0-sized stack frames!

To handle this, an ARM stackwalker must allow for there to be no forward progress for the first frame of a stackwalk, and then become more strict. Unfortunately I hand-waved that second part and ended up allowing infinite loops with no forward progress:

// If the new stack pointer is at a lower address than the old,
// then that's clearly incorrect. Treat this as end-of-stack to
// enforce progress and avoid infinite loops.
// NOTE: this check allows for equality because arm leaf functions
// may not actually touch the stack (thanks to the link register
// allowing you to "push" the return address to a register).
if frame.context.get_stack_pointer() < self.get_register_always("sp") as u64 {
    trace!("unwind: stack pointer went backwards, assuming unwind complete");
    return None;
}

So if the ARM64 stackwalker ever gets stuck in an infinite loop on one frame, it will just build up an infinite backtrace until it’s killed by an OOM. This is very nasty because it’s a potentially very slow denial-of-service that eats up all the memory on the machine!

This issue was actually originally discovered and fixed in #300 without a fuzzer, but when I fixed it for ARM (32-bit) I completely forgot to do the same for ARM64. Thankfully the fuzzer was evil enough to discover this infinite looping situation on its own, and the fix was just “copy-paste the logic from the 32-bit impl”.

Because this issue was actually encountered in the wild, we know this was a serious concern! Good job, fuzzer!

(This issue specifically affected minidump-processor and minidump-stackwalk)

#407: MinidumpLinuxMaps address-based queries didn’t work at all


MinidumpLinuxMaps is an interface for querying the dumped contents of Linux’s /proc/self/maps file. This provides metadata on the permissions and allocation state for mapped ranges of memory in the crashing process.

There are two usecases for this: just getting a full dump of all the process state, and specifically querying the memory properties for a specific address (“hey is this address executable?”). The dump usecase is handled by just shoving everything in a Vec. The address usecase requires us to create a RangeMap over the entries.

Unfortunately, a comparison was flipped in the code that created the keys to the RangeMap, which resulted in every correct memory range being discarded AND invalid memory ranges being accepted. The fuzzer was able to catch this because the invalid ranges tripped an assertion when they got fed into the RangeMap (hurray for redundant checks!).

if self.base_address < self.final_address {
    return None;
}

Although tests were written for MinidumpLinuxMaps, they didn’t include any invalid ranges, and just used the dump interface, so the fact that the RangeMap was empty went unnoticed!
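For reference, here is a sketch of the corrected logic (hypothetical names mirroring the snippet above): only degenerate ranges are rejected, so valid mappings actually reach the RangeMap.

```rust
// Hypothetical sketch of the corrected guard: keep well-formed mappings
// and reject only degenerate ones (base at or past the final address).
struct LinuxMapEntry {
    base_address: u64,
    final_address: u64,
}

impl LinuxMapEntry {
    fn memory_range(&self) -> Option<(u64, u64)> {
        if self.base_address >= self.final_address {
            return None; // invalid/empty range: don't feed it to the RangeMap
        }
        Some((self.base_address, self.final_address))
    }
}
```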

This probably would have been quickly found as soon as anyone tried to actually use this API in practice, but it’s nice that we caught it beforehand! Hooray for fuzzers!

(This issue specifically affected the minidump crate which technically could affect minidump-processor and minidump-stackwalk. Although they didn’t yet actually do address queries, they may have crashed when fed invalid ranges.)

#381: OOM from reserving memory based on untrusted list length.


Minidumps have lots of lists which we end up collecting up in a Vec or some other collection. It’s quite natural and more efficient to start this process with something like Vec::with_capacity(list_length). Usually this is fine, but if the minidump is corrupt (or malicious), then this length could be impossibly large and cause us to immediately OOM.

We were broadly aware that this was a problem, and had discussed the issue in #326, but then everyone left for the holidays. #381 was a nice kick in the pants to actually fix it, and gave us a free simple test case to check in.

Although the naive solution would be to fix this by just removing the reserves, we opted for a solution that guarded against obviously-incorrect array lengths. This allowed us to keep the performance win of reserving memory while also making rust-minidump fast-fail instead of vaguely trying to do something and hallucinating a mess.

Specifically, @Swatinem introduced a function for checking that the amount of memory left in the section we’re parsing is large enough to even hold the claimed amount of items (based on their known serialized size). This should mean the minidump crate can only be induced to reserve O(n) memory, where n is the size of the minidump itself.
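The idea can be sketched like so (a hypothetical helper, not the exact function from the fix):

```rust
// Hypothetical sketch of the guard: before calling Vec::with_capacity,
// check that the bytes remaining in the stream could even hold the
// claimed number of fixed-size entries; otherwise fast-fail.
fn safe_capacity(claimed_len: usize, entry_size: usize, bytes_left: usize) -> Option<usize> {
    if entry_size == 0 || claimed_len > bytes_left / entry_size {
        None // corrupt or malicious length
    } else {
        Some(claimed_len)
    }
}
```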

For some scale:

  • A minidump for Firefox’s main process with about 100 threads is about 3MB.
  • A minidump for a stackoverflow from infinite recursion (8MB stack, 9000 calls) is about 8MB.
  • A breakpad symbol file for Firefox’s main module can be about 200MB.

If you’re symbolicating, Minidumps probably won’t be your memory bottleneck. 😹

(This issue specifically affected the minidump crate and therefore also minidump-processor and minidump-stackwalk.)

The Many Integer Overflows and My Greatest Defeat

The rest of the issues found were relatively benign integer overflows. I claim they’re benign because rust-minidump should already be working under the assumption that all the values it reads out of the minidump could be corrupt garbage. This means its code is riddled with “is this nonsense” checks and those usually very quickly catch an overflow (or at worst print a nonsense value for some pointer).

We still fixed them all, because that’s shaky as heck logic and we want to be robust. But yeah none of these were even denial-of-service issues, as far as I know.

To demonstrate this, let’s discuss the most evil and embarrassing overflow which was definitely my fault and I am still mad about it but in a like “how the heck” kind of way!?

The overflow is back in our old friend the stackwalker. Specifically in the code that attempts to unwind using frame pointers. Even more specifically, when offsetting the supposed frame-pointer to get the location of the supposed return address:

let caller_ip = stack_memory.get_memory_at_address(last_bp + POINTER_WIDTH)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

If the frame pointer (last_bp) was ~u64::MAX, the offset on the first line would overflow and we would instead try to load ~null. All of our loads are explicitly fallible (we assume everything is corrupt garbage!), and nothing is ever mapped to the null page in normal applications, so this load would reliably fail as if we had guarded the overflow. Hooray!

…but the overflow would panic in debug builds because that’s how debug builds work in Rust!

This was actually found, reported, and fixed without a fuzzer in #251. All it took was a simple guard:

(All the casts are because this specific code is used in the x86 impl and the x64 impl.)

if last_bp as u64 >= u64::MAX - POINTER_WIDTH as u64 * 2 {
    // Although this code generally works fine if the pointer math overflows,
    // debug builds will still panic, and this guard protects against it without
    // drowning the rest of the code in checked_add.
    return None;
}

let caller_ip = stack_memory.get_memory_at_address(last_bp as u64 + POINTER_WIDTH as u64)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp as u64)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

And then it was found, reported, and fixed again with a fuzzer in #422.

Wait what?

Unlike the infinite loop bug, I did remember to add guards to all the unwinders for this problem… but I did the overflow check in 64-bit even for the 32-bit platforms.

slaps forehead

This made the bug report especially confusing at first because the overflow was like 3 lines away from a guard for that exact overflow. As it turns out, the mistake wasn’t actually as obvious as it sounds! To understand what went wrong, let’s talk a bit more about pointer width in minidumps.

A single instance of rust-minidump has to be able to handle crash reports from any platform, even ones it isn’t natively running on. This means it needs to be able to handle both 32-bit and 64-bit platforms in one binary. To avoid the misery of copy-pasting everything or making everything generic over pointer size, rust-minidump prefers to work with 64-bit values wherever possible, even for 32-bit platforms.

This isn’t just us being lazy: the minidump format itself does this! Regardless of the platform, a minidump will refer to ranges of memory with a MINIDUMP_MEMORY_DESCRIPTOR whose base address is a 64-bit value, even on 32-bit platforms!

typedef struct _MINIDUMP_MEMORY_DESCRIPTOR {
  ULONG64                      StartOfMemoryRange;
  MINIDUMP_LOCATION_DESCRIPTOR Memory;
} MINIDUMP_MEMORY_DESCRIPTOR;

So quite naturally rust-minidump’s interface for querying saved regions of memory just operates on 64-bit (u64) addresses unconditionally, and 32-bit-specific code casts its u32 address to a u64 before querying memory.

That means the code with the overflow guard was manipulating those values as u64s on x86! The problem is that after all the memory loads we would then go back to “native” sizes and compute caller_sp = last_bp + POINTER_WIDTH * 2. This would overflow a u32 and crash in debug builds. 😿

But here’s the really messed up part: getting to that point meant we were successfully loading memory up to that address. The first line where we compute caller_ip reads it! So this overflow means… we were… loading memory… from an address that was beyond u32::MAX…!?


The fuzzer had found an absolutely brilliantly evil input.

It abused the fact that MINIDUMP_MEMORY_DESCRIPTOR technically lets 32-bit minidumps define memory ranges beyond u32::MAX even though they could never actually access that memory! It could then have the u64-based memory accesses succeed but still have the “native” 32-bit operation overflow!
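Concretely, the arithmetic that makes this possible looks like the following (a generic demonstration, not the actual rust-minidump code):

```rust
const POINTER_WIDTH: u32 = 4; // x86

// Generic demonstration of the mismatch: the u64-based memory query sees
// a perfectly representable address beyond u32::MAX, while the "native"
// u32 computation wraps (a plain `+` here would panic in a debug build).
fn frame_math(last_bp: u32) -> (u64, u32) {
    let query_addr = last_bp as u64 + POINTER_WIDTH as u64 * 2;
    let caller_sp = last_bp.wrapping_add(POINTER_WIDTH * 2);
    (query_addr, caller_sp)
}
```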

This is so messed up that I didn’t even comprehend that it had done this until I wrote my own test and realized that it wasn’t actually failing because I foolishly had limited the range of valid memory to the mere 4GB a normal x86 process is restricted to.

And I mean that quite literally: this is exactly the issue that creates Parallel Universes in Super Mario 64.

But hey my code was probably just bad. I know Google loves sanitizers and fuzzers, so I bet google-breakpad found this overflow ages ago and fixed it:

uint32_t last_esp = last_frame->context.esp;
uint32_t last_ebp = last_frame->context.ebp;
uint32_t caller_eip, caller_esp, caller_ebp;

if (memory_->GetMemoryAtAddress(last_ebp + 4, &caller_eip) &&
    memory_->GetMemoryAtAddress(last_ebp, &caller_ebp)) {
    caller_esp = last_ebp + 8;
    trust = StackFrame::FRAME_TRUST_FP;
} else {

Ah. Hmm. They don’t guard for any kind of overflow for those uint32_t’s (or the uint64_t’s in the x64 impl).

Well ok GetMemoryAtAddress does actual bounds checks so the load from ~null will generally fail like it does in rust-minidump. But what about the Parallel Universe overflow that lets GetMemoryAtAddress succeed?

Ah well surely breakpad is more principled with integer width than I was–

virtual bool GetMemoryAtAddress(uint64_t address, uint8_t*  value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint16_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint32_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint64_t* value) const = 0;

Whelp congrats to 5225225 for finding an overflow that’s portable between two implementations in two completely different languages by exploiting the very nature of the file format itself!

In case you’re wondering what the implications of this overflow are: it’s still basically benign. Both rust-minidump and google-breakpad will successfully complete the frame pointer analysis and yield a frame with a ~null stack pointer.

Then the outer layer of the stackwalker which runs all the different passes in sequence will see something succeeded but that the frame pointer went backwards. At this point it will discard the stack frame and terminate the stackwalk normally and just calmly output whatever the backtrace was up to that point. Totally normal and reasonable operation.

I expect this is why no one would notice this in breakpad even if you run fuzzers and sanitizers on it: nothing in the code actually does anything wrong. Unsigned integers are defined to wrap, the program behaves reasonably, everything is kinda fine. We only noticed this in rust-minidump because all integer overflows panic in Rust debug builds.

However this “benign” behaviour is slightly different from properly guarding the overflow. Both implementations will normally try to move on to stack scanning when the frame pointer analysis fails, but in this case they give up immediately. It’s important that the frame pointer analysis properly identifies failures so that this cascading can occur. Failing to do so is definitely a bug!
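The cascade can be sketched as follows (a hypothetical simplification; rust-minidump's real driver threads much more state through each attempt):

```rust
// Hypothetical sketch: the outer stackwalker tries techniques in
// decreasing order of trust; each one must honestly report failure
// (None) for the next, less reliable technique to get its turn.
#[derive(Debug, PartialEq)]
enum Trust {
    CallFrameInfo,
    FramePointer,
    Scan,
}

fn unwind_one_frame(
    cfi: Option<u64>,
    frame_pointer: Option<u64>,
    scan: Option<u64>,
) -> Option<(u64, Trust)> {
    cfi.map(|ip| (ip, Trust::CallFrameInfo))
        .or(frame_pointer.map(|ip| (ip, Trust::FramePointer)))
        .or(scan.map(|ip| (ip, Trust::Scan)))
}
```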

However in this case the stack is partially in a parallel universe, so getting any kind of useful backtrace out of it is… dubious to say the least.

So I totally stand by “this is totally benign and not actually a problem” but also “this is sketchy and we should have the bounds check so we can be confident in this code’s robustness and correctness”.

Minidumps are all corner cases — they literally get generated when a program encounters an unexpected corner case! It’s so tempting to constantly shrug off situations as “well no reasonable program would ever do this, so we can ignore it”… but YOU CAN’T.

You would not have a minidump at your doorstep if the program had behaved reasonably! The fact that you are trying to inspect a minidump means something messed up happened, and you need to just deal with it!

That’s why we put so much energy into testing this thing, it’s a nightmare!

I am extremely paranoid about this stuff, but that paranoia is based on the horrors I have seen. There are always more corner cases.

There are ALWAYS more corner cases. ALWAYS.


The post Fuzzing rust-minidump for Embarrassment and Crashes – Part 2 appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.org: Hacks Decoded: Bikes and Boomboxes with Samuel Aboagye

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work.

Meet Samuel Aboagye!

Samuel Aboagye is a genius. Aboagye is 17 years old. In those 17 years, he’s crafted more inventions than you have, probably. Among them: a solar-powered bike and a Bluetooth speaker, both using recycled materials. We caught up with Ghanaian inventor Samuel Aboagye over video chat in hopes that he’d talk with us about his creations, and ultimately how he’s way cooler than any of us were at 17.


Samuel, you’ve put together lots of inventions like an electric bike and Bluetooth speaker and even a fan. What made you want to make them?

For the speaker, I thought of how I could minimize the rate at which yellow plastic containers pollute the environment. I tried to make good use of it after it served its purpose. So, with the little knowledge I acquired in my science lessons, instead of the empty container just lying around and polluting the environment, I tried to create something useful with it.

After the Bluetooth speaker was successful, I realized there was more in me I could show to the universe. More importantly, we live in a very poorly ventilated room and we couldn’t afford an electric fan, so the room was unbearably hot. As such, this situation triggered and motivated me to build a fan to solve this family problem.

With the bike, I thought it would be wise to make life easier for the physically challenged because I was always sad to see them go through all these challenges just to live their daily lives. Electric motors are very expensive and not common in my country, so I decided to do something to help.

Since solar energy is almost always readily available in my part of the world and able to renew itself, I thought that if I am able to make a bike with it, it would help the physically challenged to move from one destination to another without stress or thinking of how to purchase a battery or fuel.  

So how did you go about making them? Did you run into any trouble?

I went around my community gathering used items and old gadgets like radio sets and other electronics, and then removed parts that could help in my work. With the electrical energy training my science teacher has given me since discovering me in JHS1, I was able to apply this knowledge, combined with my God-given talent.

Whenever I need some sort of technical guidance, I call on my teacher Sir David. He has also been my financial help for all my projects.  Financing projects has always been my biggest struggle and most times I have to wait on him to raise funds for me to continue.

The tricycle: Was it much harder to make than a bike?

​​Yes, it was a little bit harder to make the tricycle than the bike. It’s time-consuming and also cost more than a bike. It needs extra technical and critical thinking too. 

You made the bike and speaker out of recycled materials. This answer is probably obvious but I’ve gotta ask: why recycled materials?  Is environment-friendly tech important to you?

I used recycled materials because they were readily available and comparatively cheap and easy to get. With all my inventions I make sure they are environmentally friendly so as not to pose any danger, now or in the future, to the beings on Earth. But also, I want the world to be a safe and healthy place to be.


The post Hacks Decoded: Bikes and Boomboxes with Samuel Aboagye appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkey: SeaMonkey 2.53.13 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.13 Beta 1.

Please check out [1] and/or [2].   Updates will be available shortly.



[1] –

[2] –

Firefox UX: How we created inclusive writing guidelines for Firefox

Prioritizing inclusion work as a small team, and what we learned in the process.

Sharpened pencil with pencil shavings and pencil sharpener on white background.<figcaption>Photo by Eduardo Casajús Gorostiaga on Unsplash.</figcaption>

Our UX content design team recently relaunched our product content guidelines. In addition to building out more robust guidance on voice, tone, and mechanics, we also introduced new sections, including inclusive writing.

At the time, our content design team had just two people. We struggled to prioritize inclusion work while juggling project demands and string requests. If you’re in a similar cozy boat, here’s how we were able to get our inclusive writing guidelines done.

1. Set a deadline, even if it’s arbitrary

It’s hard to prioritize work that doesn’t have a deadline or isn’t a request from your product manager. You keep telling yourself you’ll eventually get to it… and you don’t.

So I set a deadline. That deadline applied to our entire new content guidelines, but this section had its own due date to track against. To hold myself accountable, I scheduled weekly check-ins to review drafts.

Spoiler: I didn’t hit my deadline. But I made significant progress. When the deadline came and went, the draft was nearly done. This made it easier to bring the guidelines across the finish line.

2. Gather inspiration

Developing inclusive writing guidance from scratch felt overwhelming. Where would I even start? Fortunately, a lot of great work has already been done by other product content teams. I started by gathering inspiration.

There are dozens of inclusive writing resources, but not all focus exclusively on product content. These were good models to follow:

I looked for common themes as well as how other organizations tackled content specific to their products. As a content designer, I also paid close attention to how to make writing guidelines digestible and easy to understand.

Based on my audit, I developed a rough outline:

  • Clarify what we mean by ‘inclusive language’
  • Include best practices for how to consider your own biases and write inclusively
  • Provide specific writing guidance for your own product
  • Develop a list of terms to avoid and why they’re problematic. Suggest alternate terms.

3. Align on your intended audience

Inclusivity has many facets, particularly when it comes to language. Inclusive writing could apply to job descriptions, internal communications, or marketing content. To start, our focus would be writing product content only.

Our audience was mainly Firefox content designers, but occasionally product designers, product managers, and engineers might reference these guidelines as well.

Having a clear audience was helpful when our accessibility team raised questions about visual and interaction design. We debated including color and contrast guidance. Ultimately, we decided to keep scope limited to writing for the product. At a later date, we could collaborate with product designers to draft more holistic accessibility guidance for the larger UX team.

4. Keep the stakes low for your first draft

This was our first attempt at capturing how to write inclusively for Firefox. I was worried about getting it wrong, but didn’t want that fear to stop me from even getting started.

So I let go of the expectation I’d write perfect, ready-to-ship guidance on the first try. I simply tried to get a “good start” on paper. Then I brought my draft to our internal weekly content team check-in. This felt like a safe space to bring unfinished work.

The thoughtful conversations and considerations my colleagues raised helped me move the work forward. Through multiple feedback loops, we worked collaboratively to tweak, edit, and revise.

5. Gather input from subject matter experts

I then sought feedback from our Diversity & Inclusion and Accessibility teams. Before asking them to weigh in, I wrote a simple half-page brief to clarify scope, deadlines, and the type of feedback we needed.

Our cross-functional peers helped identify confusing language and suggested further additions. With their input, I made significant revisions that made the guidelines even stronger.

6. Socialize, socialize, socialize

The work doesn’t stop once you hit publish. Documentation has a tendency to collect dust on the shelf unless you make sure people know it exists. Our particular strategy includes:

  • Include on our internal wiki, with plans to publish it to our new design system later this year
  • Seek placement in our internal company-wide newsletter
  • Promote in internal Slack channels
  • Look for opportunities to reference the guidelines in internal conversations and company meetings

7. Treat the guidelines as a living document

We take a continuous learning approach to inclusive work. I expect our inclusive writing guidance to evolve.

To encourage others to participate in this work, we will continue to be open to contributions and suggestions outside of our team, making updates as we go. We also intend to review the guidelines as a content team each quarter to see what changes we may need to make.

Wrapping up

My biggest lesson from the process of creating new inclusive writing guidelines is this: Your impact can start small. It can even start as a messy Google Doc. But the more people you bring in to contribute, the stronger the end result will be, in a continuing journey of learning and evolution.


Thank you to the following individuals who contributed to the inclusive writing guidelines: Meridel Walkington, Leslie Gray, Asa Dotzler, Jainaba Seckan, Steven Potter, Kelsey Carson, Eitan Isaacson, Morgan Rae Reschenberg, Emily Wachowiak

How we created inclusive writing guidelines for Firefox was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

hacks.mozilla.org: Everything Is Broken: Shipping rust-minidump at Mozilla – Part 1

Everything Is Broken: Shipping rust-minidump at Mozilla

For the last year I’ve been leading the development of rust-minidump, a pure-Rust replacement for the minidump-processing half of google-breakpad.

Well actually, in some sense I finished that work: Mozilla already deployed it as the crash processing backend for Firefox 6 months ago; it runs in half the time and seems to be more reliable. (And you know, isn’t a terrifying ball of C++ that parses and evaluates arbitrary input from the internet. We did our best to isolate Breakpad, but still… yikes.)

This is a pretty fantastic result, but there’s always more work to do because Minidumps are an inky abyss that grows deeper the further you delve… wait no I’m getting ahead of myself. First the light, then the abyss. Yes. Light first.

What I can say is that we have a very solid implementation of the core functionality of minidump parsing+analysis for the biggest platforms (x86, x64, ARM, ARM64; Windows, MacOS, Linux, Android). But if you want to read minidumps generated on a PlayStation 3 or process a Full Memory dump, you won’t be served quite as well.

We’ve put a lot of effort into documenting and testing this thing, so I’m pretty confident in it!

Unfortunately! Confidence! Is! Worth! Nothing!

Which is why this is the story of how we did our best to make this nightmare as robust as we could and still got 360 dunked on from space by the sudden and incredible fuzzing efforts of @5225225.

This article is broken into two parts:

  1. what minidumps are, and how we made rust-minidump
  2. how we got absolutely owned by simple fuzzing

You are reading part 1, wherein we build up our hubris.

Background: What’s A Minidump, and Why Write rust-minidump?

Your program crashes. You want to know why your program crashed, but it happened on a user’s machine on the other side of the world. A full coredump (all memory allocated by the program) is enormous — we can’t have users sending us 4GB files! Ok let’s just collect up the most important regions of memory like the stacks and where the program crashed. Oh and I guess if we’re taking the time, let’s stuff some metadata about the system and process in there too.

Congratulations you have invented Minidumps. Now you can turn a 100-thread coredump that would otherwise be 4GB into a nice little 2MB file that you can send over the internet and do postmortem analysis on.

Or more specifically, Microsoft did. So long ago that their docs don’t even discuss platform support. MiniDumpWriteDump’s supported versions are simply “Windows”. Microsoft Research has presumably developed a time machine to guarantee this.
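The format itself is a straightforward binary container: a header (beginning with the magic bytes "MDMP") followed by a directory of typed streams. A hypothetical first-pass sanity check:

```rust
// Every minidump begins with the magic bytes "MDMP"; a hypothetical
// first-pass check before attempting to parse the header and the
// directory of streams that follows it.
fn looks_like_minidump(bytes: &[u8]) -> bool {
    bytes.len() >= 4 && &bytes[..4] == b"MDMP"
}
```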

Then Google came along (circa 2006-2007) and said “wouldn’t it be nice if we could make minidumps on any platform”? Thankfully Microsoft had actually built the format pretty extensibly, so it wasn’t too bad to extend the format for Linux, MacOS, BSD, Solaris, and so on. Those extensions became google-breakpad (or just Breakpad) which included a ton of different tools for generating, parsing, and analyzing their extended minidump format (and native Microsoft ones).

Mozilla helped out with this a lot because apparently, our crash reporting infrastructure (“Talkback”) was miserable circa 2007, and this seemed like a nice improvement. Needless to say, we’re pretty invested in breakpad’s minidumps at this point.

Fast forward to the present day and, in a hilarious twist of fate, products like VSCode mean that Microsoft now ships applications that run on Linux and MacOS. It runs Breakpad in production and has to handle non-Microsoft minidumps somewhere in its crash reporting infra, so someone else’s extension of its own format is somehow its problem now!

Meanwhile, Google has kind-of moved on to Crashpad. I say kind-of because there’s still a lot of Breakpad in there, but they’re more interested in building out tooling on top of it than improving Breakpad itself. Having made a few changes to Breakpad: honestly fair, I don’t want to work on it either. Still, this was a bit of a problem for us, because it meant the project became increasingly under-staffed.

By the time I started working on crash reporting, Mozilla had basically given up on upstreaming fixes/improvements to Breakpad, and was just using its own patched fork. But even without the need for upstreaming patches, every change to Breakpad filled us with dread: many proposed improvements to our crash reporting infrastructure stalled out at “time to implement this in Breakpad”.

Why is working on Breakpad so miserable, you ask?

Parsing and analyzing minidumps is basically an exercise in writing a fractal parser of platform-specific formats nested in formats nested in formats. For many operating systems. For many hardware architectures. And all the inputs you’re parsing and analyzing are terrible and buggy so you have to write a really permissive parser and crawl forward however you can.

Some specific MSVC toolchain that was part of Windows XP had a bug in its debuginfo format? Too bad, symbolicate that stack frame anyway!

The program crashed because it horribly corrupted its own stack? Too bad, produce a backtrace anyway!

The minidump writer itself completely freaked out and wrote a bunch of garbage to one stream? Too bad, produce whatever output you can anyway!

Hey, you know who has a lot of experience dealing with really complicated permissive parsers written in C++? Mozilla! That’s like the core functionality of a web browser.

Do you know Mozilla’s secret solution to writing really complicated permissive parsers in C++?

We stopped doing it.

We developed Rust and ported our nastiest parsers to it.

We’ve done it a lot, and when we do we’re always like “wow this is so much more reliable and easy to maintain and it’s even faster now”. Rust is a really good language for writing parsers. C++ really isn’t.

So we Rewrote It In Rust (or as the kids call it, “Oxidized It”). Breakpad is big, so we haven’t actually covered all of its features. We’ve specifically written and deployed:

  • dump_syms which processes native build artifacts into symbol files.
  • rust-minidump which is a collection of crates that parse and analyze minidumps. Or more specifically, we deployed minidump-stackwalk, which is the high-level cli interface to all of rust-minidump.

Notably missing from this picture is minidump writing, or what google-breakpad calls a client (because it runs on the client’s machine). We are working on a rust-based minidump writer, but it’s not something we can recommend using quite yet (although it has sped up a lot thanks to help from Embark Studios).

This is arguably the messiest and hardest work because it has a horrible job: use a bunch of native system APIs to gather up a bunch of OS-specific and Hardware-specific information about the crash AND do it for a program that just crashed, on a machine that caused the program to crash.

We have a long road ahead but every time we get to the other side of one of these projects it’s wonderful.


Background: Stackwalking and Calling Conventions

One of rust-minidump’s (minidump-stackwalk’s) most important jobs is to take the state for a thread (general purpose registers and stack memory) and create a backtrace for that thread (unwind/stackwalk). This is a surprisingly complicated and messy job, made only more complicated by the fact that we are trying to analyze the memory of a process that got messed up enough to crash.

This means our stackwalkers are inherently working with dubious data, and all of our stackwalking techniques are based on heuristics that can go wrong and we can very easily find ourselves in situations where the stackwalk goes backwards or sideways or infinite and we just have to try to deal with it!

It’s also pretty common to see a stackwalker start hallucinating, which is my term for “the stackwalker found something that looked plausible enough and went on a wacky adventure through the stack and made up a whole pile of useless garbage frames”. Hallucination is most common near the bottom of the stack where it’s also least offensive. This is because each frame you walk is another chance for something to go wrong, but also increasingly uninteresting because you’re rarely interested in confirming that a thread started in The Same Function All Threads Start In.

All of these problems would basically go away if everyone agreed to properly preserve their cpu’s PERFECTLY GOOD DEDICATED FRAME POINTER REGISTER. Just kidding, turning on frame pointers doesn’t really work either because Microsoft invented chaos frame pointers that can’t be used for unwinding! I assume this happened because they accidentally stepped on the wrong butterfly while they were traveling back in time to invent minidumps. (I’m sure it was a decision that made more sense 20 years ago, but it has not aged well.)
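
The layering of those techniques can be sketched in a few lines. This is an illustrative toy, not rust-minidump's actual unwinder: the strategy functions are stubs, but the shape (try each technique in decreasing order of trust, and record which one worked) is the idea behind the FrameTrust values that show up in stackwalker output.

```rust
// Illustrative toy, not rust-minidump's real unwinder. The strategy
// functions are stubs; the point is the fallback chain and the trust label.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FrameTrust {
    Context,       // crashing frame, taken straight from the CPU context
    CallFrameInfo, // recovered using CFI unwind tables
    FramePointer,  // recovered by following saved frame pointers
    Scan,          // recovered by scanning the stack for plausible addresses
}

#[derive(Debug, Clone, Copy)]
struct Frame {
    instruction: u64,
    trust: FrameTrust,
}

// Stubs standing in for real unwinding logic. Here CFI "fails" (no unwind
// info for this address), so the fallback to frame pointers is visible.
fn try_cfi(_caller: &Frame) -> Option<Frame> {
    None
}
fn try_frame_pointer(caller: &Frame) -> Option<Frame> {
    Some(Frame {
        instruction: caller.instruction.wrapping_sub(5),
        trust: FrameTrust::FramePointer,
    })
}
fn try_scan(_caller: &Frame) -> Option<Frame> {
    None
}

/// Try each technique in decreasing order of trustworthiness.
fn unwind_next(caller: &Frame) -> Option<Frame> {
    try_cfi(caller)
        .or_else(|| try_frame_pointer(caller))
        .or_else(|| try_scan(caller))
}
```

Each fallback is another chance for the heuristics to go on one of those wacky adventures, which is exactly why the trust label matters downstream.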

If you would like to learn more about the different techniques for unwinding, I wrote about them over here in my article on Apple’s Compact Unwind Info. I’ve also attempted to document breakpad’s STACK WIN and STACK CFI unwind info formats here, which are more similar to the DWARF and PE32 unwind tables (which are basically tiny programming languages).

If you would like to learn more about ABIs in general, I wrote an entire article about them here. The end of that article also includes an introduction to how calling conventions work. Understanding calling conventions is key to implementing unwinders.


How Hard Did You Really Test Things?

Hopefully you now have a bit of a glimpse into why analyzing minidumps is an enormous headache. And of course you know how the story ends: that fuzzer kicks our butts! But of course to really savor our defeat, you have to see how hard we tried to do a good job! It’s time to build up our hubris and pat ourselves on the back.

So how much work actually went into making rust-minidump robust before the fuzzer went to work on it?

Quite a bit!

I’ll never argue all the work we did was perfect, but we definitely did some good work here, both for synthetic inputs and real-world ones. Probably the biggest “flaw” in our methodology was that we were only focused on getting Firefox’s use case to work. Firefox runs on a lot of platforms and sees a lot of messed up stuff, but it’s still a fairly coherent product that only uses so many features of minidumps.

This is one of the nice benefits of our recent work with Sentry, which is basically a Crash Reporting As A Service company. They are way more liable to stress test all kinds of weird corners of the format that Firefox doesn’t, and they have definitely found (and fixed!) some places where something is wrong or missing! (And they recently deployed it into production too! 🎉)

But hey don’t take my word for it, check out all the different testing we did:

Synthetic Minidumps for Unit Tests

rust-minidump includes a synthetic minidump generator which lets you write a high-level description of the contents of a minidump, and then produces an actual minidump binary that we can feed into the full parser:

// Let’s make a synth minidump with this particular Crashpad Info…

let module = ModuleCrashpadInfo::new(42, Endian::Little)
    .add_simple_annotation("simple", "module")
    .add_annotation_object("string", AnnotationValue::String("value".to_owned()))
    .add_annotation_object("invalid", AnnotationValue::Invalid)
    .add_annotation_object("custom", AnnotationValue::Custom(0x8001, vec![42]));

let crashpad_info = CrashpadInfo::new(Endian::Little)
    .add_simple_annotation("simple", "info");

let dump = SynthMinidump::with_endian(Endian::Little).add_crashpad_info(crashpad_info);

// convert the synth minidump to binary and read it like a normal minidump
let dump = read_synth_dump(dump).unwrap();

// Now check that the minidump reports the values we expect…

minidump-synth intentionally avoids sharing layout code with the actual implementation so that incorrect changes to layouts won’t “accidentally” pass tests.

A brief aside for some history: this testing framework was started by the original lead on this project, Ted Mielczarek. He started rust-minidump as a side project to learn Rust when 1.0 was released and just never had the time to finish it. Back then he was working at Mozilla and also a major contributor to Breakpad, which is why rust-minidump has a lot of similar design choices and terminology.

This case is no exception: our minidump-synth is a shameless copy of the synth-minidump utility in breakpad’s code, which was originally written by our other coworker Jim Blandy. Jim is one of the only people in the world that I will actually admit writes really good tests and docs, so I am totally happy to blatantly copy his work here.

Since this was all a learning experiment, Ted was understandably less rigorous about testing than usual. This meant a lot of minidump-synth was unimplemented when I came along, which also meant lots of minidump features were completely untested. (He built an absolutely great skeleton, just hadn’t had the time to fill it all in!)

We spent a lot of time filling in more of minidump-synth’s implementation so we could write more tests and catch more issues, but this is definitely the weakest part of our tests. Some stuff was implemented before I got here, so I don’t even know what tests are missing!

This is a good argument for some code coverage checks, but it would probably come back with “wow you should write a lot more tests” and we would all look at it and go “wow we sure should” and then we would probably never get around to it, because there are many things we should do.

On the other hand, Sentry has been very useful in this regard because they already have a mature suite of tests full of weird corner cases they’ve built up over time, so they can easily identify things that really matter, know what the fix should roughly be, and can contribute pre-existing test cases!

Integration and Snapshot Tests

We tried our best to shore up coverage issues in our unit tests by adding more holistic tests. There’s a few checked in Real Minidumps that we have some integration tests for to make sure we handle Real Inputs properly.

We even wrote a bunch of integration tests for the CLI application that snapshot its output to confirm that we never accidentally change the results.

Part of the motivation for this is to ensure we don’t break the JSON output, which we also wrote a very detailed schema document for and are trying to keep stable so people can actually rely on it while the actual implementation details are still in flux.

Yes, minidump-stackwalk is supposed to be stable and reasonable to use in production!

For our snapshot tests we use insta, which I think is fantastic and more people should use. All you need to do is assert_snapshot! any output you want to keep track of and it will magically take care of the storing, loading, and diffing.

Here’s one of the snapshot tests where we invoke the CLI interface and snapshot stdout:

fn test_evil_json() {
    // For a while this didn't parse right
    let bin = env!("CARGO_BIN_EXE_minidump-stackwalk");
    let output = Command::new(bin)
        // (arguments elided: the flags and input file paths)
        .output()
        .unwrap();

    let stdout = String::from_utf8(output.stdout).unwrap();
    let stderr = String::from_utf8(output.stderr).unwrap();

    insta::assert_snapshot!("json-pretty-evil-symbols", stdout);
    assert_eq!(stderr, "");
}

Stackwalker Unit Testing

The stackwalker is easily the most complicated and subtle part of the new implementation, because every platform can have slight quirks and you need to implement several different unwinding strategies and carefully tune everything to work well in practice.

The scariest part of this was the call frame information (CFI) unwinders, because they are basically little virtual machines we need to parse and execute at runtime. Thankfully breakpad had long ago smoothed over this issue by defining a simplified and unified CFI format, STACK CFI (well, nearly unified, x86 Windows was still a special case as STACK WIN). So even if DWARF CFI has a ton of complex features, we mostly need to implement a Reverse Polish Notation Calculator except it can read registers and load memory from addresses it computes (and for STACK WIN it has access to named variables it can declare and mutate).
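
To make “Reverse Polish Notation Calculator” concrete, here is a toy postfix evaluator in the spirit of those expressions. The real STACK CFI language also dereferences memory at computed addresses and (for STACK WIN) assigns named variables; this sketch only handles registers and arithmetic, and it is not rust-minidump's actual parser:

```rust
// Toy postfix ("RPN") evaluator in the spirit of STACK CFI expressions.
// NOT rust-minidump's actual implementation: no memory loads, no variable
// assignment, just registers and arithmetic.
use std::collections::HashMap;

fn eval_postfix(expr: &str, regs: &HashMap<&str, u64>) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    for token in expr.split_whitespace() {
        match token {
            "+" | "-" | "*" => {
                let rhs = stack.pop()?;
                let lhs = stack.pop()?;
                // Wrapping ops: arithmetic on untrusted input must never panic.
                stack.push(match token {
                    "+" => lhs.wrapping_add(rhs),
                    "-" => lhs.wrapping_sub(rhs),
                    _ => lhs.wrapping_mul(rhs),
                });
            }
            reg if regs.contains_key(reg) => stack.push(regs[reg]),
            num => stack.push(num.parse().ok()?),
        }
    }
    // A well-formed expression leaves exactly one value behind.
    if stack.len() == 1 { stack.pop() } else { None }
}
```

So an expression like `$ebp 8 +` evaluates to the value of `$ebp` plus 8, which is the flavor of computation rules of the form `.cfa: $ebp 8 +` describe.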

Unfortunately, Breakpad’s description for this format is pretty underspecified so I had to basically pick some semantics I thought made sense and go with that. This made me extremely paranoid about the implementation. (And yes I will be more first-person for this part, because this part was genuinely where I personally spent most of my time and did a lot of stuff from scratch. All the blame belongs to me here!)

The STACK WIN / STACK CFI parser+evaluator is 1700 lines. 500 of those lines are detailed documentation and discussion of the format, and 700 of them are an enormous pile of ~80 test cases where I tried to come up with every corner case I could think of.

I even checked in two tests I knew were failing just to be honest that there were a couple cases to fix! One of them is a corner case involving dividing by a negative number that almost certainly just doesn’t matter. The other is a buggy input that old x86 Microsoft toolchains actually produce and parsers need to deal with. The latter was fixed before the fuzzing started.

And 5225225 still found an integer overflow in the STACK WIN preprocessing step! (Not actually that surprising, it’s a hacky mess that tries to cover up for how messed up x86 Windows unwinding tables were.)

(The code isn’t terribly interesting here, it’s just a ton of assertions that a given input string produces a given output/error.)
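
Overflow bugs like the one 5225225 found have a fairly mechanical remedy in Rust: do any arithmetic on values parsed from the file with checked operations and treat overflow as malformed input. A hypothetical sketch of the pattern (the field names here are made up, not the actual STACK WIN code):

```rust
// Hypothetical: combine frame sizes parsed from untrusted STACK WIN info.
// checked_add returns None on overflow instead of wrapping or panicking,
// so a corrupt file can't smuggle a bogus total frame size through.
fn total_frame_size(params: u32, locals: u32, saved_regs: u32) -> Option<u32> {
    params.checked_add(locals)?.checked_add(saved_regs)
}
```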

Of course, I wasn’t satisfied with just coming up with my own semantics and testing them: I also ported most of breakpad’s own stackwalker tests to rust-minidump! This definitely found a bunch of bugs I had, but also taught me some weird quirks in Breakpad’s stackwalkers that I’m not sure I actually agree with. But in this case I was flying so blind that even being bug-compatible with Breakpad was some kind of relief.

Those tests also included several tests for the non-CFI paths, which were similarly wobbly and quirky. I still really hate a lot of the weird platform-specific rules they have for stack scanning, but I’m forced to work on the assumption that they might be load-bearing. (I definitely had several cases where I disabled a breakpad test because it was “obviously nonsense” and then hit it in the wild while testing. I quickly learned to accept that Nonsense Happens And Cannot Be Ignored.)

One major thing I didn’t replicate was some of the really hairy hacks for STACK WIN. Like there are several places where they introduce extra stack-scanning to try to deal with the fact that stack frames can have mysterious extra alignment that the windows unwinding tables just don’t tell you about? I guess?

There’s almost certainly some exotic situations that rust-minidump does worse on because of this, but it probably also means we do better in some random other situations too. I never got the two to perfectly agree, but at some point the divergences were all in weird enough situations, and as far as I was concerned both stackwalkers were producing equally bad results in a bad situation. Absent any reason to prefer one over the other, divergence seemed acceptable to keep the implementation cleaner.

Here’s a simplified version of one of the ported breakpad tests, if you’re curious (thankfully minidump-synth is based off of the same binary data mocking framework these tests use):

fn test_x86_frame_pointer() {
    let mut f = TestFixture::new();
    let frame0_ebp = Label::new();
    let frame1_ebp = Label::new();
    let mut stack = Section::new();

    // Setup the stack and registers so frame pointers will work
    stack = stack
        .append_repeated(12, 0) // frame 0: space
        .mark(&frame0_ebp)      // frame 0 %ebp points here
        .D32(&frame1_ebp)       // frame 0: saved %ebp
        .D32(0x40008679)        // frame 0: return address
        .append_repeated(8, 0)  // frame 1: space
        .mark(&frame1_ebp)      // frame 1 %ebp points here
        .D32(0)                 // frame 1: saved %ebp (stack end)
        .D32(0);                // frame 1: return address (stack end)
    f.raw.eip = 0x4000c7a5;
    f.raw.esp = stack.start().value().unwrap() as u32;
    f.raw.ebp = frame0_ebp.value().unwrap() as u32;

    // Check the stackwalker's output:
    let s = f.walk_stack(stack).await;
    assert_eq!(s.frames.len(), 2);
    let f0 = &s.frames[0];
    assert_eq!(f0.trust, FrameTrust::Context);
    assert_eq!(f0.context.valid, MinidumpContextValidity::All);
    assert_eq!(f0.instruction, 0x4000c7a5);
    let f1 = &s.frames[1];
    assert_eq!(f1.trust, FrameTrust::FramePointer);
    assert_eq!(f1.instruction, 0x40008678);
}
A Dedicated Production Diffing, Simulating, and Debugging Tool

Because minidumps are so horribly fractal and corner-casey, I spent a lot of time terrified of subtle issues that would become huge disasters if we ever actually tried to deploy to production. So I also spent a bunch of time building socc-pair, which takes the id of a crash report from Mozilla’s crash reporting system and pulls down the minidump, the old breakpad-based implementation’s output, and extra metadata.

It then runs a local rust-minidump (minidump-stackwalk) implementation on the minidump and does a domain-specific diff over the two inputs. The most substantial part of this is a fuzzy diff on the stackwalks that tries to better handle situations like when one implementation adds an extra frame but the two otherwise agree. It also uses the reported techniques each implementation used to try to identify whose output is more trustworthy when they totally diverge.
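
The core of that fuzzy diff can be sketched simply. This is just the idea, not socc-pair's actual algorithm: walk both frame lists in lockstep, and when they disagree, peek ahead to see whether one side merely inserted an extra frame before declaring a genuine divergence:

```rust
// Illustrative sketch of a fuzzy stackwalk diff (not socc-pair's code).
// Counts frames the two implementations agree on, tolerating a single
// extra frame on either side at each disagreement.
fn fuzzy_matches(ours: &[&str], theirs: &[&str]) -> usize {
    let (mut i, mut j, mut matched) = (0, 0, 0);
    while i < ours.len() && j < theirs.len() {
        if ours[i] == theirs[j] {
            matched += 1;
            i += 1;
            j += 1;
        } else if i + 1 < ours.len() && ours[i + 1] == theirs[j] {
            i += 1; // we emitted an extra frame; skip it
        } else if j + 1 < theirs.len() && ours[i] == theirs[j + 1] {
            j += 1; // they emitted an extra frame; skip it
        } else {
            i += 1; // genuine divergence
            j += 1;
        }
    }
    matched
}
```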

I also ended up adding a bunch of mocking and benchmarking functionality to it as well, as I found more and more places where I just wanted to simulate a production environment.

Oh also I added really detailed trace-logging for the stackwalker so that I could easily post-mortem debug why it made the decisions it made.

This tool found so many issues and more importantly has helped me quickly isolate their causes. I am so happy I made it. Because of it, we know we actually fixed several issues that happened with the old breakpad implementation, which is great!

Here’s a trimmed down version of the kind of report socc-pair would produce (yeah I abused diff syntax to get error highlighting. It’s a great hack, and I love it like a child):

comparing json...

: {
    crash_info: {
        address: 0x7fff1760aca0
        crashing_thread: 8
    crashing_thread: {
        frames: [
            0: {
                file: wrappers.cpp:1750da2d7f9db490b9d15b3ee696e89e6aa68cb7
                frame: 0
                function: RustMozCrash(char const*, int, char const*)
                function_offset: 0x00000010
-               did not match
+               line: 17
-               line: 20
                module: xul.dll


    unloaded_modules: [
        0: {
            base_addr: 0x7fff48290000
-           local val was null instead of:
            code_id: 68798D2F9000
            end_addr: 0x7fff48299000
            filename: KBDUS.DLL
        1: {
            base_addr: 0x7fff56020000
            code_id: DFD6E84B14000
            end_addr: 0x7fff56034000
            filename: resourcepolicyclient.dll
~   ignoring field write_combine_size: "0"

- Total errors: 288, warnings: 39

benchmark results (ms):
    2388, 1986, 2268, 1989, 2353, 
    average runtime: 00m:02s:196ms (2196ms)
    median runtime: 00m:02s:268ms (2268ms)
    min runtime: 00m:01s:986ms (1986ms)
    max runtime: 00m:02s:388ms (2388ms)

max memory (rss) results (bytes):
    267755520, 261152768, 272441344, 276131840, 279134208, 
    average max-memory: 258MB (271323136 bytes)
    median max-memory: 259MB (272441344 bytes)
    min max-memory: 249MB (261152768 bytes)
    max max-memory: 266MB (279134208 bytes)

Output Files: 
    * (download) Minidump: b4f58e9f-49be-4ba5-a203-8ef160211027.dmp
    * (download) Socorro Processed Crash: b4f58e9f-49be-4ba5-a203-8ef160211027.json
    * (download) Raw JSON: b4f58e9f-49be-4ba5-a203-8ef160211027.raw.json
    * Local minidump-stackwalk Output: b4f58e9f-49be-4ba5-a203-8ef160211027.local.json
    * Local minidump-stackwalk Logs: b4f58e9f-49be-4ba5-a203-8ef160211027.log.txt
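
The summary statistics in that report are easy to reproduce. Here's a minimal sketch of the average/median/min/max computation (socc-pair's actual code may differ), using the same integer average and odd-count median the report shows:

```rust
// Compute (average, median, min, max) over benchmark samples in ms.
fn summarize(mut samples: Vec<u64>) -> (u64, u64, u64, u64) {
    assert!(!samples.is_empty());
    samples.sort_unstable();
    let average = samples.iter().sum::<u64>() / samples.len() as u64;
    let median = samples[samples.len() / 2];
    (average, median, samples[0], *samples.last().unwrap())
}
```

Feeding in the five runtimes above (2388, 1986, 2268, 1989, 2353) yields average 2196 ms, median 2268 ms, min 1986 ms, max 2388 ms, matching the report.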

Staging and Deploying to Production

Once we were confident enough in the implementation, a lot of the remaining testing was taken over by Will Kahn-Greene, who’s responsible for a lot of the server-side details of our crash-reporting infrastructure.

Will spent a bunch of time getting a bunch of machinery setup to manage the deployment and monitoring of rust-minidump. He also did a lot of the hard work of cleaning up all our server-side configuration scripts to handle any differences between the two implementations. (Although I spent a lot of time on compatibility, we both agreed this was a good opportunity to clean up old cruft and mistakes.)

Once all of this was set up, he turned it on in staging and we got our first look at how rust-minidump actually worked in ~production:


Our staging servers take in about 10% of the inputs that also go to our production servers, but even at that reduced scale we very quickly found several new corner cases and we were getting tons of crashes, which is mildly embarrassing for the thing that handles other people’s crashes.

Will did a great job here in monitoring and reporting the issues. Thankfully they were all fairly easy for us to fix. Eventually, everything smoothed out and things seemed to be working just as reliably as the old implementation on the production server. The only places where we were completely failing to produce any output were for horribly truncated minidumps that may as well have been empty files.

We originally did have some grand ambitions of running socc-pair on everything the staging servers processed or something to get really confident in the results. But by the time we got to that point, we were completely exhausted and feeling pretty confident in the new implementation.

Eventually Will just said “let’s turn it on in production” and I said “AAAAAAAAAAAAAAA”.

This moment was pure terror. There had always been more corner cases. There’s no way we could just be done. This will probably set all of Mozilla on fire and delete Firefox from the internet!

But Will convinced me. We wrote up some docs detailing all the subtle differences and sent them to everyone we could. Then the moment of truth finally came: Will turned it on in production, and I got to really see how well it worked in production:

*dramatic drum roll*

It worked fine.

After all that stress and anxiety, we turned it on and it was fine.

Heck, I’ll say it: it ran well.

It was faster, it crashed less, and we even knew it fixed some issues.

I was in a bit of a stupor for the rest of that week, because I kept waiting for the other shoe to drop. I kept waiting for someone to emerge from the mist and explain that I had somehow bricked Thunderbird or something. But no, it just worked.

So we left for the holidays, and I kept waiting for it to break, but it was still fine.

I am honestly still shocked about this!

But hey, as it turns out we really did put a lot of careful work into testing the implementation. At every step we found new problems but that was good, because once we got to the final step there were no more problems to surprise us.

And the fuzzer still kicked our butts afterwards.

But that’s part 2! Thanks for reading!


The post Everything Is Broken: Shipping rust-minidump at Mozilla – Part 1 appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird Blog

Frequently Asked Questions: Thunderbird Mobile and K-9 Mail

Today, we announced our detailed plans for Thunderbird on mobile. We also welcomed the open-source Android email client K-9 Mail into the Thunderbird family. Below, you’ll find an evolving list of frequently asked questions about this collaboration and our future plans.

Why not develop your own mobile client?

The Thunderbird team had many discussions on how we might provide a great mobile experience for our users. In the end, we didn’t want to duplicate effort if we could combine forces with an existing open-source project that shared our values. Over years of discussing ways K-9 and Thunderbird could collaborate, we decided it would best serve our users to work together.

Should I install K-9 Mail now or wait for Thunderbird?

If you want to help shape the future of Thunderbird on Android, you’re encouraged to install K-9 Mail right now. Leading up to the first official release of Thunderbird for Android, the user interface will probably change a few times. If you dislike somewhat frequent changes in apps you use daily, you might want to hold off.

Will this affect desktop Thunderbird? How?

Many Thunderbird users have asked for a Thunderbird experience on mobile, which we intend to provide by helping make K-9 amazing (and turning it into Thunderbird on Android). K-9 will supplement the Thunderbird experience and enhance where and how users are able to have a great email experience. Our commitment to desktop Thunderbird is unchanged, most of our team is committed to making that a best-in-class email client and it will remain that way.

What will happen to K-9 Mail once the official Thunderbird for Android app has been released?

K-9 Mail will be brought in line with Thunderbird from a feature perspective, and we will ensure that syncing between Thunderbird and K-9/Thunderbird on Android is seamless. Of course, Thunderbird on Android and Thunderbird on Desktop serve very different form factors, so there will be UX differences between the two. But we intend to allow similar workflows and tools on both platforms.

Will I be able to sync my Thunderbird accounts with K-9 Mail?

Yes. We plan to offer Firefox Sync as one option to allow you to securely sync accounts between Thunderbird and K-9 Mail. We expect this feature to be implemented in the summer of 2023.

Will Thunderbird for Android support calendars, tasks, feeds, or chat like the desktop app?

We are working on an amazing email experience first. We are looking at the best way to provide Thunderbird’s other functionality on Android but currently are still debating how best to achieve that. For instance, one method is to simply sync calendars, and then users are able to use their preferred calendar application on their device. But we have to discuss this within the team, and the Thunderbird and K-9 communities, then decide what the best approach is.

Going forward, how will K-9 Mail donations be used?

Donations made towards K-9 will be allocated to the Thunderbird project. Of course, Thunderbird in turn will provide full support for K-9 Mail’s development and activities that support the advancement and sustainability of the app.

Is a mobile Thunderbird app in development for iOS?

Thunderbird is currently evaluating the development of an iOS app.

How can I get involved?

1) Participate in our discussion and planning forum.
2) Developers are encouraged to visit to get started.
3) Obtain Thunderbird source code by visiting
4) K-9 Mail source code is available at:
5) You can financially support Thunderbird and K-9 Mail’s development by donating via this link:

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire more developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Frequently Asked Questions: Thunderbird Mobile and K-9 Mail appeared first on The Thunderbird Blog.

Mozilla Add-ons Blog

Manifest V3 Firefox Developer Preview — how to get involved

While MV3 is still in development, many major features are already included in the Developer Preview, which provides an opportunity to expose functionality for testing and feedback. With strong developer feedback, we’re better equipped to quickly address critical bug fixes, provide clear developer documentation, and reorient functionality.

Some features, such as a well defined and documented lifecycle for Event Pages, are still works in progress. As we complete features, they’ll land in future versions of Firefox and you’ll be able to test and progress your extensions into MV3 compatibility. In most ways Firefox is committed to MV3 cross browser compatibility. However, as explained in Manifest V3 in Firefox: Recap & Next Steps, in some cases Firefox will offer distinct extension functionality.

Developer Preview is not available to regular users; it requires you to change preferences in about:config. Thus you will not be able to upload MV3 extensions to (AMO) until we have an official release available to users.

The following are key considerations about migration at this time and areas we’d greatly appreciate developer feedback.

  1. Read the MV3 migration guide. MV3 contains many changes and our migration guide covers the major necessary steps, as well as linking to documentation to help understand further details.
  2. Update your extension to be compatible with Event Pages. One major difference in Firefox is our use of Event Pages, which provides an alternative to the existing Background Pages that allows idle timeouts and page restarts. This adds resilience to the background, which is necessary for resource constraints and mobile devices. For the most part, Event Pages are compatible with existing Background Pages, requiring only minor changes. We plan to release Event Pages for MV2 in an upcoming Firefox release, so preparation to use Event Pages can be included in MV2 addons soon. Many extensions may not need all the capabilities available in Event Pages. The background scripts are easily transferable to the Service Worker background when it becomes available in a future release. In the meantime, extensions attempting to support both Chrome and Firefox can take advantage of Event Pages in Firefox.
  3. Test your content scripts with MV3. There are multiple changes that will impact content scripts, ranging from tighter restrictions on CORS, CSP, remote code execution, and more. Not all extensions will run into issues in these cases, and some may only require minor modifications that will likely work within MV2 as well.
  4. Understand and consider your migration path for APIs that have changed or deprecated. Deprecated APIs will require code changes to utilize alternate or new APIs. Examples include New Scripting API (which will be part of MV2 in a future release), changing page and browser actions to the action API, etc.
  5. Test and plan migration for permissions. Most permissions are already available as optional permissions in MV2. With MV3, we’re making host permissions optional — in many cases by default. While we do not yet have the primary UI for user control in Developer Preview, developers should understand how these changes will affect their extensions.
  6. Let us know how it’s going! Your feedback will help us make the transition from MV2 to MV3 as smooth as possible. Through Developer Preview we anticipate learning about MV3 rough edges, documentation needs, new features to be fleshed out, and bugs to be fixed. We have a host of community channels you can access to ask questions, help others, report problems, or whatever else you desire to communicate as it relates to the MV3 migration process.

Stay in touch with us on any of these forums…


The post Manifest V3 Firefox Developer Preview — how to get involved appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.org

Training efficient neural network models for Firefox Translations

Machine Translation is an important tool for expanding the accessibility of web content. Usually, people use cloud providers to translate web pages. State-of-the-art Neural Machine Translation (NMT) models are large and often require specialized hardware like GPUs to run inference in real-time.

If people were able to run a compact Machine Translation (MT) model on their local machine CPU without sacrificing translation accuracy it would help to preserve privacy and reduce costs.

The Bergamot project is a collaboration between Mozilla, the University of Edinburgh, Charles University in Prague, the University of Sheffield, and the University of Tartu with funding from the European Union’s Horizon 2020 research and innovation programme. It brings MT to the local environment, providing small, high-quality, CPU-optimized NMT models. The Firefox Translations web extension builds on the results of the Bergamot project to bring local translations to Firefox.

In this article, we will discuss the components used to train our efficient NMT models. The project is open-source, so you can give it a try and train your model too!


NMT models are trained as language pairs, translating from language A to language B. The training pipeline was designed to train translation models for a language pair end-to-end, from environment configuration to exporting the ready-to-use models. The pipeline run is completely reproducible given the same code, hardware and configuration files.

The complexity of the pipeline comes from the requirement to produce an efficient model. We use Teacher-Student distillation to compress a high-quality but resource-intensive teacher model into an efficient CPU-optimized student model that still has good translation quality. We explain this further in the Compression section.

The pipeline includes many steps: compiling components, downloading and cleaning datasets, training teacher, student, and backward models, decoding, quantization, evaluation, etc. (more details below). The pipeline can be represented as a Directed Acyclic Graph (DAG).


Firefox Translations training pipeline DAG

The workflow is file-based and employs self-sufficient scripts that use data on disk as input, and write intermediate and output results back to disk.

We use the Marian Neural Machine Translation engine. It is written in C++ and designed to be fast. The engine is open-sourced and used by many universities and companies, including Microsoft.

Training a quality model

The first task of the pipeline is to train a high-quality model that will be compressed later. The main challenge at this stage is to find a good parallel corpus that contains translations of the same sentences in both source and target languages and then apply appropriate cleaning procedures.


It turned out there are many open-source parallel datasets for machine translation available on the internet. The most interesting project that aggregates such datasets is OPUS. The Annual Conference on Machine Translation also collects and distributes some datasets for competitions, for example, WMT21 Machine Translation of News. Another great source of MT corpus is the Paracrawl project.

OPUS dataset search interface:

OPUS dataset search interface

It is possible to use any dataset on disk, but automating dataset downloading from Open source resources makes adding new language pairs easy, and whenever the data set is expanded we can then easily retrain the model to take advantage of the additional data. Make sure to check the licenses of the open-source datasets before usage.

Data cleaning

Most open-source datasets are somewhat noisy. Good examples are crawled websites and translation of subtitles. Texts from websites can be poor-quality automatic translations or contain unexpected HTML, and subtitles are often free-form translations that change the meaning of the text.

It is well known in the world of Machine Learning (ML) that if we feed garbage into the model we get garbage as a result. Dataset cleaning is probably the most crucial step in the pipeline to achieving good quality.

We employ some basic cleaning techniques that work for most datasets, like removing sentences that are too short or too long and filtering out pairs with an unrealistic source-to-target length ratio. We also use bicleaner, a pre-trained ML classifier that estimates whether the two sides of a training example are mutual translations of each other. We can then remove low-scoring translation pairs that may be incorrect or otherwise add unwanted noise.
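The basic length and ratio heuristics can be sketched in a few lines. This is illustrative code only, not the actual pipeline scripts; the function name and thresholds are made up:

```python
# Toy sketch of rule-based parallel-corpus cleaning. Keeps a (source, target)
# pair only if both sides have a reasonable length and the length ratio is
# plausible. Real pipelines add many more dataset-specific rules.

def clean_corpus(pairs, min_len=1, max_len=150, max_ratio=2.0):
    """Filter a list of (src, trg) sentence pairs with basic heuristics."""
    kept = []
    for src, trg in pairs:
        src_tokens, trg_tokens = src.split(), trg.split()
        # Drop sentences that are too short or too long.
        if not (min_len <= len(src_tokens) <= max_len):
            continue
        if not (min_len <= len(trg_tokens) <= max_len):
            continue
        # Drop pairs with an unrealistic source-to-target length ratio.
        ratio = len(src_tokens) / len(trg_tokens)
        if ratio > max_ratio or ratio < 1.0 / max_ratio:
            continue
        kept.append((src, trg))
    return kept

noisy = [
    ("Hello world", "Olá mundo"),                    # fine
    ("Hi", "Este é um parágrafo inteiro de texto"),  # bad length ratio
    ("", "vazio"),                                   # empty source side
]
print(clean_corpus(noisy))  # → [('Hello world', 'Olá mundo')]
```

In practice the thresholds are tuned per dataset after manual inspection, as the next paragraph notes.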

Automation is necessary when your training set is large. However, it is always recommended to look at your data manually in order to tune the cleaning thresholds and add dataset-specific fixes to get the best quality.

Data augmentation

There are more than 7000 languages spoken in the world and most of them are classified as low-resource for our purposes, meaning there is little parallel corpus data available for training. In these cases, we use a popular data augmentation strategy called back-translation.

Back-translation is a technique to increase the amount of training data available by adding synthetic translations. We get these synthetic examples by training a translation model from the target language to the source language. Then we use it to translate monolingual data from the target language into the source language, creating synthetic examples that are added to the training data for the model we actually want, from the source language to the target language.
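The idea can be sketched as follows. The backward "model" here is a stand-in dictionary, not a real Marian model, and all names are illustrative:

```python
# Minimal sketch of back-translation: a target→source model turns monolingual
# target-language text into synthetic source sentences, and the resulting
# (synthetic source, real target) pairs are added to the training data.

def back_translate(mono_target, trg_to_src_model):
    """Create synthetic (source, target) pairs from monolingual target text."""
    return [(trg_to_src_model(sentence), sentence) for sentence in mono_target]

# Stand-in "model": a real pipeline would call a trained backward model here.
fake_pt_to_en = {"Olá mundo": "Hello world", "Bom dia": "Good morning"}.get

parallel = [("How are you?", "Como vai você?")]  # real parallel data
mono_pt = ["Olá mundo", "Bom dia"]               # monolingual target data

augmented = parallel + back_translate(mono_pt, fake_pt_to_en)
print(len(augmented))  # → 3
```

The augmented corpus mixes real and synthetic pairs; only the source side of the synthetic pairs is machine-generated, so the model still learns to produce natural target-language text.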

The model

Finally, when we have a clean parallel corpus we train a big transformer model to reach the best quality we can.

Once the model converges on the augmented dataset, we fine-tune it on the original parallel corpus that doesn’t include synthetic examples from back-translation to further improve quality.


The trained model can be 800Mb or more in size depending on configuration and requires significant computing power to perform translation (decoding). At this point, it’s generally executed on GPUs and not practical to run on most consumer laptops. In the next steps we will prepare a model that works efficiently on consumer CPUs.

Knowledge distillation

The main technique we use for compression is Teacher-Student Knowledge Distillation. The idea is to decode a lot of text from the source language into the target language using the heavy model we trained (Teacher) and then train a much smaller model with fewer parameters (Student) on these synthetic translations. The student is supposed to imitate the teacher’s behavior and demonstrate similar translation quality despite being significantly faster and more compact.

We also augment the parallel corpus data with monolingual data in the source language for decoding. This improves the student by providing additional training examples of the teacher’s behavior.


Another trick is to use not just one teacher but an ensemble of 2-4 teachers independently trained on the same parallel corpus. It can boost quality a little bit at the cost of having to train more teachers. The pipeline supports training and decoding with an ensemble of teachers.
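A toy sketch of the ensemble idea, with stand-in scoring functions in place of real teacher models: score each candidate translation with every teacher and keep the candidate with the best average score.

```python
# Illustrative ensemble decoding for distillation. Each "teacher" assigns a
# log-probability to a candidate translation; the ensemble averages them and
# the best-scoring candidate becomes the student's training target.

def ensemble_pick(candidates, teachers):
    """Return the candidate with the highest average teacher log-prob."""
    def avg_logprob(candidate):
        return sum(t(candidate) for t in teachers) / len(teachers)
    return max(candidates, key=avg_logprob)

# Toy teachers with slightly different preferences (real teachers would be
# independently trained translation models).
teacher_a = lambda c: -0.1 * len(c.split())
teacher_b = lambda c: -0.2 * abs(len(c.split()) - 2)

best = ensemble_pick(["Hello world", "Hello big wide world"], [teacher_a, teacher_b])
print(best)  # → Hello world
```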


One more popular technique for model compression is quantization. We use 8-bit quantization, which essentially means that we store the weights of the neural net as int8 instead of float32. It saves space and speeds up matrix multiplication at inference time.
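The core idea can be illustrated with a toy symmetric quantizer. The real pipeline relies on Marian's quantization support, not code like this:

```python
# Toy illustration of symmetric 8-bit quantization: store float weights as
# integers in [-127, 127] plus one scale factor (a real implementation keeps
# them in an int8 array), then dequantize on the fly for inference.

def quantize_int8(weights):
    """Map floats to integers in [-127, 127] with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    return [q * scale for q in q_weights]

w = [0.5, -1.27, 0.03]
q, s = quantize_int8(w)
restored = dequantize(q, s)
# int8 storage is 4x smaller than float32, and the rounding error stays
# below half a quantization step (scale / 2).
assert all(abs(a - b) <= s / 2 for a, b in zip(w, restored))
```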

Other tricks

Other features worth mentioning but beyond the scope of this already lengthy article are the specialized Neural Network architecture of the student model, half-precision decoding by the teacher model to speed it up, lexical shortlists, training of word alignments, and fine-tuning of the quantized student.

Yes, it’s a lot! Now you can see why we wanted to have an end-to-end pipeline.

How to learn more

This work is based on a lot of research. If you are interested in the science behind the training pipeline, check out reference publications listed in the training pipeline repository README and across the wider Bergamot project. Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task is a good academic starting article. Check this tutorial by Nikolay Bogoychev for a more practical and operational explanation of the steps.


The final student model is 47 times smaller and 37 times faster than the original teacher model and has only a small quality decrease!

Benchmarks for en-pt model and Flores dataset:

Model Size Total number of parameters Dataset decoding time on 1 CPU core Quality, BLEU
Teacher 798MB 192.75M 631s 52.5
Student quantized 17MB 15.7M 17.9s 50.7

We evaluate results using the standard MT metric BLEU, which essentially represents how similar the translated and reference texts are. The metric is not perfect, but it has been shown that BLEU scores correlate well with human judgment of translation quality.
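A simplified sentence-level version of the metric looks roughly like this. Real evaluation uses a standard implementation such as sacreBLEU, which adds smoothing, corpus-level aggregation, and proper tokenization:

```python
import math
from collections import Counter

# Simplified sentence-level BLEU: clipped n-gram precisions (n = 1..4)
# combined by geometric mean, times a brevity penalty for short hypotheses.

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed: any empty n-gram overlap zeroes the score
        precisions.append(overlap / total)
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(sum(map(math.log, precisions)) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```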

We have a GitHub repository with all the trained models and evaluation results where we compare the accuracy of our models to popular APIs of cloud providers. We can see that some models perform similarly to, or even outperform, the cloud providers, which is a great result considering our models’ efficiency, reproducibility, and open-source nature.

For example, here you can see evaluation results for the English to Portuguese model trained by Mozilla using open-source data only.

Evaluation results en-pt

Anyone can train models and contribute them to our repo. Those contributions can be used in the Firefox Translations web extension and other places (see below).


It is of course possible to run the whole pipeline on one machine, though it may take a while. Some steps of the pipeline are CPU bound and difficult to parallelize, while other steps can be offloaded to multiple GPUs. Most of the official models in the repository were trained on machines with 8 GPUs. A few steps, like teacher decoding during knowledge distillation, can take days even on well-resourced single machines. So to speed things up, we added cluster support to be able to spread different steps of the pipeline over multiple nodes.

Workflow manager

To manage this complexity we chose Snakemake which is very popular in the bioinformatics community. It uses file-based workflows, allows specifying step dependencies in Python, supports containerization and integration with different cluster software. We considered alternative solutions that focus on job scheduling, but ultimately chose Snakemake because it was more ergonomic for one-run experimentation workflows.

Example of a Snakemake rule (dependencies between rules are inferred implicitly):

rule train_teacher:
    message: "Training teacher on all data"
    log: f"{log_dir}/train_teacher{{ens}}.log"
    conda: "envs/base.yml"
    threads: gpus_num*2
    resources: gpu=gpus_num
    output: model=f'{teacher_base_dir}{{ens}}/{best_model}'
    shell: '''bash pipeline/train/ \
                teacher train {src} {trg} "{params.prefix_train}" \
                "{params.prefix_test}" "{params.dir}" \
                "{input.vocab}" {params.args} >> {log} 2>&1'''

Cluster support

To parallelize workflow steps across cluster nodes we use the Slurm resource manager. It is relatively simple to operate, fits well for high-performance experimentation workflows, and supports Singularity containers for easier reproducibility. Slurm is also the most popular cluster manager for High-Performance Computing (HPC) systems used for model training in academia, and most of the consortium partners were already using or familiar with it.

How to start training

The workflow is quite resource-intensive, so you’ll need a pretty good server machine or even a cluster. We recommend using 4-8 Nvidia 2080-equivalent or better GPUs per machine.

Clone and follow the instructions in the readme for configuration.

The most important part is to find parallel datasets and properly configure settings based on your available data and hardware. You can learn more about this in the readme.

How to use the existing models

The existing models are shipped with the Firefox Translations web extension, enabling users to translate web pages in Firefox. The models are downloaded to a local machine on demand. The web extension uses these models with the bergamot-translator Marian wrapper compiled to Web Assembly.

Also, there is a playground website where you can input text and translate it right away. Translation again happens locally, but the playground is served as a static website instead of a browser extension.

If you are interested in efficient NMT inference on the server, you can try a prototype HTTP service that uses bergamot-translator compiled natively, instead of compiled to WASM.

Or follow the build instructions in the bergamot-translator readme to directly use the C++, JavaScript WASM, or Python bindings.


It is fascinating how far Machine Translation research has come in recent years. Local high-quality translations are the future and it’s becoming more and more practical for companies and researchers to train such models even without access to proprietary data or large-scale computing power.

We hope that Firefox Translations will set a new standard of privacy-preserving, efficient, open-source machine translation accessible for all.


I would like to thank all the participants of the Bergamot Project for making this technology possible; my teammates Andre Natal and Abhishek Aggarwal for the incredible work they have done bringing Firefox Translations to life; Lonnen for managing the project and editing this blog post; and, of course, the awesome Mozilla community for helping with localization of the web extension and testing its early builds.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303 🇪🇺

The post Training efficient neural network models for Firefox Translations appeared first on Mozilla Hacks - the Web developer blog.

SUMO BlogIntroducing Ryan Johnson

Hi folks,

Please join me to welcome Ryan Johnson to the Customer Experience team as a Staff Software Engineer. He will be working closely with Tasos to maintain and improve the Mozilla Support platform.

Here’s a short intro from Ryan:

Hello everyone! I’m Ryan Johnson, and I’m joining the SUMO engineering team as a Staff Software Engineer. This is a return to Mozilla for me, after a brief stint away, and I’m excited to work with Tasos and the rest of the Customer Experience team in reshaping SUMO to better serve the needs of Mozilla and all of you. In my prior years at Mozilla, I was fortunate to work on the MDN team, and with many of its remarkable supporters, and this time I look forward to meeting, working with, and learning from many of you.

Once again, please join me to congratulate and welcome Ryan to our team!

Open Policy & AdvocacyEnhancing trust and security on the internet – browsers are the first line of defence

Enhancing trust and security online is one of the defining challenges of our time – in the EU alone, 37% of residents do not believe they can sufficiently protect themselves from cybercrime. Individuals need assurance that their credit card numbers, social media logins, and other sensitive data are protected from cybercriminals when browsing. With that in mind, we’ve just unveiled an update to the security policies that protect people from cybercrime, demonstrating again the critical role Firefox plays in ensuring trust and security online.

Browsers like Firefox use encryption to protect individuals’ data from eavesdroppers when they navigate online (e.g. when sending credit card details to an online marketplace). But protecting data from cybercriminals when it’s on the move is only part of the risk we mitigate. Individuals also need assurance that they are sending data to the correct domain. If someone sends their private data to a cybercriminal instead of to their bank, for example, it is of little consolation that the data was encrypted while getting there.

To address this we rely on cryptographic website certificates, which allow a website to prove that it controls the domain name that the individual has navigated to. Websites obtain these certificates from certificate authorities, organisations that run checks to verify that websites are not compromised. Certificate authorities are a critical pillar of trust in this ecosystem – if they mis-issue certificates to cybercriminals or other malicious actors, the consequences for individuals can be catastrophic.

To keep Firefox users safe, we ensure that only certificate authorities that maintain high standards of security and transparency are trusted in the browser (i.e., included in our ‘root certificate store’). We also continuously monitor and review the behaviour of certificate authorities that we opt to trust to ensure that we can take prompt action to protect individuals in cases where a trusted certificate authority has been compromised.

Properly maintaining a root certificate store is a significant undertaking, not least because the cybersecurity threat landscape is constantly evolving. We aim to ensure our security standards are always one step ahead, and as part of that effort, we’ve just finalised an important policy update that will increase transparency and security in the certificate authority ecosystem. This update introduces new standards for how audits of certificate authorities should be conducted and by whom; phases out legacy encryption standards that some certificate authorities still deploy today; and requires more transparency from certificate authorities when they revoke certificates. We’ve already begun working with certificate authorities to ensure they can properly transition to the new higher security standards.

The policy update is the product of a several-month process of open dialogue and debate amongst various stakeholders in the website security space. It is a further case-in-point of our belief in the value of transparent, community-based processes across the board for levelling-up the website security ecosystem. For instance, before accepting a certificate authority in Firefox we process lots of data and perform significant due diligence, then publish our findings and hold a public discussion with the community. We also maintain a public security incident reporting process to encourage disclosure and learning from experts in the field.

Ultimately, this update process highlights once again how operating an independent root certificate store allows us to drive the website security ecosystem towards ever-higher standards, and to serve as the first line of defence for when web certificates are misused. It’s a responsibility we take seriously and we see it as critical to enhancing trust on the internet.

It’s also why we’re so concerned about draft laws under consideration in the EU (Article 45 of the ‘eIDAS regulation’) that would forbid us from applying our security standards to certain certificate authorities and block us from taking action if and when those certificate authorities mis-issue certificates. If adopted in its current form by the EU, Article 45 would be a major step back for security on the internet, because of how it would restrict browser security efforts and because of the global precedent it would set. A broad movement of digital rights organisations; consumer groups; and numerous independent cybersecurity experts (here, here, and here) has begun to raise the alarm and to encourage the EU to change course on Article 45. We are working hard to do so too.

We’re proud of our root certificate store and the role it plays in enhancing trust and security online. It’s part of our contribution to the internet – we’ll continue to invest in it with security updates like this one and work with lawmakers on ensuring legal frameworks continue to support this critical work.


Thumbnail photo credit:

Creative Commons Attribution-Share Alike 4.0 International license.
Attribution: Santeri Viinamäki


The post Enhancing trust and security on the internet – browsers are the first line of defence appeared first on Open Policy & Advocacy.

SUMO BlogWhat’s up with SUMO – May

Hi everybody,

Q2 is a busy quarter with so many exciting projects on the line. The onboarding project implementation is ongoing, and the mobile support project is also going smoothly so far (we have even started scaling to support the Apple App Store). We also managed to audit our localization process (with the help of our amazing contributors!). Let’s dive into it without further ado.

Welcome note and shout-outs

  • Welcome to the Social Support program, Magno Reis and Samuel. Both are long-time forum contributors who are spreading their wings to Social Support.
  • Welcome to the world of the SUMO forum to Dropa, YongHan, jeyson1099, simhk, and zianshi17.
  • Welcome to the KB world to kaie, alineee, and rodrigo.bucheli.
  • Welcome to the KB localization to Agehrg4 (ru), YongHan (zh-tw), ibrahimakgedik3 (tr), gabriele.siukstaite (lt), apokvietyte (lt), Anokhi (ms), erinxwmeow (ms), and dvyarajan7 (ms). Welcome to the SUMO family!
  • Thanks to the localization contributors who helped me understand their workflows and pain points in the localization process. You shared so much insightful feedback and surfaced things we could not have understood without your input. I can’t thank everybody enough!
  • Huge shout outs to Kaio Duarte Costa for stepping up as Social Support moderator. He’s been an amazing contributor to the program since 2020, and I believe that he’ll be a great role model for the community. Thank you and congratulations!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • I highly recommend checking out KB call from May. We talked about many interesting topics, from KB review queue, to a group exercise on writing content for localization.
  • It’s been 2 months since we onboarded Dayana as a Community Support Advocate (read the intro blog post here), and we can’t wait to share more about our learnings and accomplishments!
  • Please read this forum thread and this bug report if you experience trouble uploading images to Mozilla Support.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • There’s also KB call, this one was the recording for the month of May. Find out more about KB call from this wikipage.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Apr 2022 7,407,129 -1.26%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Mar 2022 pageviews (*) Localization progress (as of Apr 11) (**)
de 7.99% 98%
zh-CN 7.42% 100%
fr 6.27% 87%
es 6.07% 30%
pt-BR 4.94% 54%
ru 4.60% 82%
ja 3.80% 48%
it 2.36% 100%
pl 2.06% 87%
ca 1.67% 0%

* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats



Top 5 forum contributors in the last 90 days: 

Social Support

Month Total incoming messages Conversations interacted Resolution rate
Apr 2022 504 316 75.00%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah Koshy
  2. Christophe Villeneuve
  3. Magno Reis
  4. Md Monirul Alom
  5. Felipe Koji

Play Store Support

Channel (Apr 2022) Total priority reviews Priority reviews replied Total reviews replied
Firefox for Android 1226 234 291
Firefox Focus for Android 109 0 4
Firefox Klar Android 1 0 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Selim Şumlu
  • Bithiah Koshy

Product updates

Firefox desktop

Firefox mobile

  • TBD

Other products / Experiments

  • TBD

Useful links:

Open Policy & AdvocacyMozilla Meetups: The Building Blocks of a Trusted Internet

Join us on June 9 at 3 PM ET for a virtual conversation on how the digital policy landscape not only shapes, but is also shaped by, the way we move around online and what steps our policymakers need to take to ensure a healthy internet.

View the event here!

The post Mozilla Meetups: The Building Blocks of a Trusted Internet appeared first on Open Policy & Advocacy.

Blog of DataCrash Reporting Data Sprint

Two weeks ago the Socorro Eng/Ops team, in charge of Socorro and Tecken, had its first remote 1-day Data Sprint to onboard folks from ops and engineering.

The Sprint was divided into three main parts, according to the objectives we initially had:

  • Onboard new folks in the team.
  • Establish a rough roadmap for the next 3-6 months.
  • Find a more efficient way to work together.

The sprint was formatted as a conversation followed by a presentation guided by Will Kahn-Greene, who leads the efforts in maintaining and evolving the Socorro/Tecken platforms. In the end we went through the roadmap document to decide what our immediate future would look like.


Finding a more efficient way to work together

We wanted to track our work queue more efficiently and decided that a GitHub project would be a great candidate for the task. It is simple to set up and maintain and has different views that we can lightly customize.

That said, because our main issue tracker is Bugzilla, one slightly annoying thing we still have to do while creating an issue on our GitHub project is place the full URL to the bug in the issue title:

If we could place links as part of the title, then we could do something like:

Which is much nicer, but GitHub doesn’t support that.

Here’s the link to our work queue:


Onboarding new people to Socorro/Tecken

This was a really interesting part of the day in which we went through different aspects of the crash reporting ecosystem and crash ingestion pipeline.

Story time

The story of Mozilla’s whole Crash Reporting system dates back to 2007, when the Socorro project was created. Since then, Crash Reporting has been an essential part of our products. It is present in all stages, from development to release, and is composed of an entire ecosystem of libraries and systems maintained by different teams across Mozilla.

Socorro is one of the longer-running projects we have at Mozilla. Along with Antenna and Crash Stats it comprises the Crash Ingestion pipeline, which is maintained by the socorro-eng team. The team is also in charge of the Symbol and Symbolication Servers a.k.a. Tecken.

Along with that story we also learned interesting facts about Crash Reporting, such as:

    • Crash Reports are not the same as Crash Pings: Both things are emitted by the Crash Reporter when Firefox crashes, but Reports go to Socorro and Pings go to the telemetry pipeline
    • Not all Crash Reports are accepted: The collector throttles crash reports according to a set of rules that can be found here
    • Crash Reports are pretty big compared to telemetry pings: They’re 600KB in aggregate, but stack overflow crashes can be bigger than 25MB
    • Crash Reports are reprocessed regularly: Whenever something that is involved in generating crash signatures or crash stacks is changed or fixed we reprocess the Crash Reports to regenerate their signatures and stacks

What’s what

There are lots of names involved in the Crash Reporting. We went over what most of them mean:

Symbols: A symbol is an entry in a .sym file that maps from a memory location (a byte number) to something that’s going on in the original code. Since binaries don’t contain information about the original code, such as line numbers, function names, and stack-unwinding data, symbols are used to enrich the minidumps emitted by binaries with that info. This process is called symbolication. More on symbol files:
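As a toy illustration of the address-to-name mapping: FUNC records in a .sym file cover address ranges, so a raw instruction address from a minidump can be turned into a readable frame. (Real .sym files also contain FILE, PUBLIC, INLINE, and line records, which this sketch ignores, and the function names below are made up.)

```python
# Parse FUNC records ("FUNC address size parameter_size name", hex fields)
# from a tiny fake .sym file and resolve an address to a function name.

SYM = """\
MODULE Linux x86_64 ABCDEF0123456789 example
FUNC 1000 40 0 js::RunScript(JSContext*)
FUNC 1040 20 0 js::Interpret(JSContext*)
"""

def parse_funcs(sym_text):
    funcs = []
    for line in sym_text.splitlines():
        if line.startswith("FUNC "):
            _, addr, size, _params, name = line.split(" ", 4)
            funcs.append((int(addr, 16), int(size, 16), name))
    return funcs

def symbolicate(address, funcs):
    for start, size, name in funcs:
        if start <= address < start + size:
            return name
    return "<unknown>"

funcs = parse_funcs(SYM)
print(symbolicate(0x1044, funcs))  # → js::Interpret(JSContext*)
```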

Crash Report: When an application process crashes, the Crash Reporter will submit a Crash Report with metadata annotations (BuildId, ProductName, Version, etc) and minidumps which contain info on the crashed processes.

Crash Signature: Generated for every Crash Report by an algorithm unique to Socorro, with the objective of grouping similar crashes.

Minidump: A file created and managed by the Breakpad library. It holds info on a crashed process such as CPU, register contents, heap, loaded modules, threads etc.

Breakpad: A set of tools to work with minidump files. It defines the sym file format and includes components to extract information from processes as well as package, submit, and process them. More on Breakpad:

A peek at how it works

Will also explained how things work under the hood and we had a look on the diagrams that show what comprises Tecken and Socorro:



Tecken architecture diagram

Tecken is a Django web application that uses S3 for storage and RDS for bookkeeping.

Eliot is a webapp that downloads sym files from Tecken for symbolication.



Socorro Architecture Diagram

Socorro has a Crash Report Collector, a Processor, and a web application (Crash Stats) for searching and analyzing crash data. Notice the Crash Ingestion pipeline processes Crash Reports and exports a safe form of the processed crash to Telemetry.

More on Crash Reports data

The Crash Reporter is an interesting piece of an application since it needs to do its work while the world around it is collapsing. That means a number of unusual things can happen to the data it collects to build a Report. That being said, there’s a good chance the data it collects is ok, and even when it isn’t, it can still be interesting.

A real concern about Crash Report data is how toxic it can get: while some pieces of the data are innocuous, like the ProductName, BuildID, and Version, other pieces are highly identifiable, such as URL, UserComments, and Breadcrumbs.

Add that to the fact that minidumps contain copies of memory from the crashed processes, which can store usernames, passwords, credit card numbers and so on, and you end up with a very toxic dataset!


Establishing a roadmap for the next 3-6 months

Another interesting exercise that made that Sprint feel even more productive was going over the Tecken/Socorro roadmap and reprioritizing things. While Will was explaining the reasons why we should do certain things, I took that chance to also ask questions and get better context on the different decisions we made, our struggles of past and present and where we aim to be.



It was super productive to have a full day on which we could focus completely on all things Socorro/Tecken. That series of activities allowed us to improve the way we work, transfer knowledge and prioritize things for the not-so-distant future.

Big shout out to Will Kahn-Greene for organizing and driving this event, and also for the patience to explain things carefully and precisely.

Web Application SecurityUpgrading Mozilla’s Root Store Policy to Version 2.8

In accordance with the Mozilla Manifesto, which emphasizes the open development of policy that protects users’ privacy and security, we have worked with the Mozilla community over the past several months to improve the Mozilla Root Store Policy (MRSP) so that we can now announce version 2.8, effective June 1, 2022. These policy changes aim to improve the transparency of Certificate Authority (CA) operations and the certificates that they issue. A detailed comparison of the policy changes may be found here, and the significant policy changes that appear in this version are:

  • MRSP section 2.4: any matter documented in an audit as a qualification, a modified opinion, or a major non-conformity is also considered an incident and must have a corresponding Incident Report filed in Mozilla’s Bugzilla system;
  • MRSP section 3.2: ETSI auditors must be members of the Accredited Conformity Assessment Bodies’ Council and WebTrust auditors must be enrolled by CPA Canada as WebTrust practitioners;
  • MRSP section 3.3: CAs must maintain links to older versions of their Certificate Policies and Certification Practice Statements until the entire root CA certificate hierarchy operated in accordance with such documents is no longer trusted by the Mozilla root store;
  • MRSP section 4.1: before October 1, 2022, intermediate CA certificates capable of issuing TLS certificates are required to provide the Common CA Database (CCADB) with either the CRL Distribution Point for the full CRL issued by the CA certificate or a JSON array of partitioned CRLs that are equivalent to the full CRL for certificates issued by the CA certificate;
  • MRSP section 5.1.3: as of July 1, 2022, CAs cannot use the SHA-1 algorithm to issue S/MIME certificates, and effective July 1, 2023, CAs cannot use SHA-1 to sign any CRLs, OCSP responses, OCSP responder certificates, or CA certificates;
  • MRSP section 5.3.2: CA certificates capable of issuing working server or email certificates must be reported in the CCADB by July 1, 2022, even if they are technically constrained;
  • MRSP section 5.4: while logging of Certificate Transparency precertificates is not required by Mozilla, it is considered by Mozilla as a binding intent to issue a certificate, and thus, the misissuance of a precertificate is equivalent to the misissuance of a certificate, and CAs must be able to revoke precertificates, even if corresponding final certificates do not actually exist;
  • MRSP section 6.1.1: specific RFC 5280 Revocation Reason Codes must be used under certain circumstances (see the blog post Revocation Reason Codes for TLS Server Certificates); and
  • MRSP section 8.4: new unconstrained third-party CAs must be approved through Mozilla’s review process that involves a public discussion.

These changes will provide Mozilla with more complete information about CA practices and certificate status. Several of these changes will require that CAs revise their practices, so we have also sent CAs a CA Communication and Survey to alert them about these changes and to inquire about their ability to comply with the new requirements by the effective dates.

In summary, these updates to the MRSP will improve the quality of information about CA operations and the certificates that they issue, which will increase security in the ecosystem by further enabling Firefox to keep your information private and secure.

The post Upgrading Mozilla’s Root Store Policy to Version 2.8 appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogManifest v3 in Firefox: Recap & Next Steps

It’s been about a year since our last update regarding Manifest v3. A lot has changed since then, not least of which has been the formation of a community group under the W3C to advance cross-browser WebExtensions (WECG).

In our previous update, we announced that we would be supporting MV3 and mentioned Service Workers as a replacement for background pages. Since then, it became apparent that numerous use cases would be at risk if this were to proceed as is, so we went back to the drawing board. We proposed Event Pages in the WECG, which has been welcomed by the community and supported by Apple in Safari.

Today, we’re kicking off our Developer Preview program to gather feedback on our implementation of MV3. To set the stage, we want to outline the choices we’ve made in adopting MV3 in Firefox, some of the improvements we’re most excited about, and then talk about the ways we’ve chosen to diverge from the model Chrome originally proposed.

Why are we adopting MV3?

When we decided to move to WebExtensions in 2015, it was a long-term bet on cross-browser compatibility. We believed then, as we do now, that users would be best served by having useful extensions available for as many browsers as possible. By the end of 2017 we had completed that transition and moved completely to the WebExtensions model. Today, many cross-platform extensions require only minimal changes to work across major browsers. We consider this move to be a long-term success, and we remain committed to the model.

In 2018, Chrome announced Manifest v3, followed by Microsoft adopting Chromium as the base for the new Edge browser. By virtue of the combined share of Chromium-based browsers, MV3 will be a de facto standard for browser extensions for the foreseeable future. We believe that working with other browser vendors in the context of the WECG is the best path toward a healthy ecosystem that balances the needs of its users and developers. For Mozilla, this is a long-term bet on a standards-driven future for WebExtensions.

Why is MV3 important to improving WebExtensions?

Manifest V3 is the next iteration of WebExtensions, and offers the opportunity to introduce improvements that would otherwise not be possible due to concerns with backward compatibility. MV2 had architectural constraints that made some issues difficult to address; with MV3 we are able to make changes that address them.

One core part of the extension architecture is the background page, which by design lives forever. Due to memory or platform constraints (e.g. on Android), we can’t guarantee this state, and termination of the background page (along with the extension) is sometimes inevitable. In MV3, we’re introducing a new architecture: the background script must be designed to be restartable. To support this, we’ve reworked existing APIs and introduced new ones, enabling extensions to declare how the browser should behave without requiring the background script to be running.

Another core part of extensions is content scripts, which interact directly with web pages. We are blocking unsafe coding practices and offering more secure alternatives to improve the baseline security of extensions: string-based code execution has been removed from extension APIs. Moreover, to improve the isolation of data between different origins, cross-origin requests are no longer possible from content scripts unless the destination website opts in via CORS.

User controls for site access

Extensions often need to access user data on websites. While that has enabled extensions to provide powerful features and address numerous user needs, we’ve also seen misuse that impacts users’ privacy.

Starting with MV3, we’ll be treating all site access requests from extensions as optional, and provide users with transparency and controls to make it easier to manage which extensions can access their data for each website.

At the same time, we’ll be encouraging extensions to use models that don’t require permanent access to all websites, by making it easier to grant access for extensions with a narrow scope, or just temporarily. We are continuing to evaluate how to best handle cases, such as privacy and security extensions, that need the ability to intercept or affect all websites in order to fully protect our users.

What are we doing differently in Firefox?

WebRequest

One of the most controversial changes of Chrome’s MV3 approach is the removal of blocking WebRequest, which provides a level of power and flexibility that is critical to enabling advanced privacy and content blocking features. Unfortunately, that power has also been used to harm users in a variety of ways. Chrome’s solution in MV3 was to define a more narrowly scoped API (declarativeNetRequest) as a replacement. However, this will limit the capabilities of certain types of privacy extensions without adequate replacement.

Mozilla will maintain support for blocking WebRequest in MV3. To maximize compatibility with other browsers, we will also ship support for declarativeNetRequest. We will continue to work with content blockers and other key consumers of this API to identify current and future alternatives where appropriate. Content blocking is one of the most important use cases for extensions, and we are committed to ensuring that Firefox users have access to the best privacy tools available.

Event Pages

Chrome’s version of MV3 introduced Background Service Worker as a replacement for the (persistent) Background Page. Mozilla is working on extension Service Workers in Firefox for compatibility reasons, but also because we like that they’re an event-driven environment with defined lifetimes, already part of the Web Platform with good cross-browser support.

We’ve found that Service Workers can’t fully support various use cases we consider important, especially around DOM-related features and APIs. Additionally, the worker environment is less familiar to regular web developers, and our developer community has told us that completely rewriting an extension would be a heavy burden for the thousands of independent developers maintaining existing extensions.

In Firefox, we have decided to support Event Pages in MV3, and our developer preview will not include Service Workers (we’re continuing to work on supporting these for a future release). This will help developers more easily migrate existing persistent background pages to MV3 while retaining access to all of the DOM-related features available in MV2. We will also support Event Pages in MV2 in an upcoming release, which will further aid migration by allowing developers to transition their existing MV2 extensions over a series of releases.
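For developers, opting into an event page is mostly a manifest-level concern. The fragment below is an illustrative sketch (abbreviated, based on MDN’s documentation of the `background` manifest key) rather than an authoritative reference; the important behavioral point is that event listeners must be registered synchronously at the top level of `background.js`, so the browser can re-deliver events after the page has been suspended and restarted:

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"]
  }
}
```

In Firefox’s MV3 implementation, a background declared this way is non-persistent, i.e. an event page, rather than a forever-running background page.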

Next Steps for Firefox

In launching our Developer Preview program for Manifest v3, our hope is that authors will test out our MV3 implementation to help us identify gaps or incompatibilities in our implementation. Work is continuing in parallel, and we expect to launch MV3 support for all users by the end of 2022. As we get closer to completion, we will follow up with more detail on timing and how we will support extensions through the transition.

For more information on the Manifest v3 Developer Preview, please check out the migration guide.  If you have questions or feedback on Manifest v3, we would love to hear from you on the Firefox Add-ons Discourse.

The post Manifest v3 in Firefox: Recap & Next Steps appeared first on Mozilla Add-ons Community Blog.

Web Application SecurityRevocation Reason Codes for TLS Server Certificates

In our continued efforts to improve the security of the web PKI, we are taking a multi-pronged approach to tackling some long-existing problems with revocation of TLS server certificates. In addition to our ongoing CRLite work, we added new requirements to version 2.8 of Mozilla’s Root Store Policy that will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. We also added a new requirement that CA operators provide their full CRL URLs in the CCADB. This will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

Previous Policy Updates

Significant improvements have already been made in the web PKI, including the following changes to Mozilla’s Root Store Policy and the CA/Browser Forum Baseline Requirements (BRs), which reduced risks associated with exposure of the private keys of TLS certificates by reducing the amount of time that the exposure can exist.

  • TLS server certificates issued on or after 1 September 2020 MUST NOT have a Validity Period greater than 398 days.
  • For TLS server certificates issued on or after October 1, 2021, each dNSName or IPAddress in the certificate MUST have been validated within the prior 398 days.

Under those provisions, the maximum validity period and maximum re-use of domain validation for TLS certificates roughly correspond to the typical period of time for owning a domain name, i.e. one year. This reduces the risk of potential exposure of the private key of each TLS certificate that is revoked, replaced, or no longer needed by the original certificate subscriber.
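As a rough sketch of the first rule above, a compliance check for the 398-day validity limit can be expressed as a simple date difference (note this is an approximation for illustration; the Baseline Requirements define precisely how the validity period’s endpoints are counted):

```python
from datetime import date

# BR limit for TLS server certificates issued on or after 1 September 2020.
MAX_VALIDITY_DAYS = 398

def validity_ok(not_before: date, not_after: date) -> bool:
    """Return True if the certificate's validity period is within the
    398-day maximum, approximated here as a simple day difference."""
    return (not_after - not_before).days <= MAX_VALIDITY_DAYS

print(validity_ok(date(2022, 1, 1), date(2023, 2, 3)))  # 398 days -> True
print(validity_ok(date(2022, 1, 1), date(2023, 2, 4)))  # 399 days -> False
```

The second rule (domain validation re-use within the prior 398 days) can be checked the same way against the date each dNSName or IPAddress was last validated.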

New Requirements

In version 2.8 of Mozilla’s Root Store Policy we added requirements stating that:

  1. Specific RFC 5280 Revocation Reason Codes must be used under certain circumstances; and
  2. CA operators must provide their full CRL URLs in the Common CA Database (CCADB).

These new requirements will provide a complete accounting of all revoked TLS server certificates. This will enable Firefox to pre-load more complete certificate revocation data, eliminating the need for it to query CAs for revocation information when establishing TLS connections.

The new requirements about revocation reason codes account for the situations that can happen at any time during the certificate’s validity period, and address the following problems:

  • There were no policies specifying which revocation reason codes should be used and under which circumstances.
  • Some CAs were not using revocation reason codes at all for TLS server certificates.
  • Some CAs were using the same revocation reason code for every revocation.
  • There were no policies specifying the information that CAs should provide to their certificate subscribers about revocation reason codes.

Revocation Reason Codes

Section 6.1.1 of version 2.8 of Mozilla’s Root Store Policy states that when a TLS server certificate is revoked for one of the following reasons the corresponding entry in the CRL must include the revocation reason code:

  • keyCompromise (RFC 5280 Reason Code #1)
    • The certificate subscriber must choose the “keyCompromise” revocation reason code when they have reason to believe that the private key of their certificate has been compromised, e.g., an unauthorized person has had access to the private key of their certificate.
  • affiliationChanged (RFC 5280 Reason Code #3)
    • The certificate subscriber should choose the “affiliationChanged” revocation reason code when their organization’s name or other organizational information in the certificate has changed.
  • superseded (RFC 5280 Reason Code #4)
    • The certificate subscriber should choose the “superseded” revocation reason code when they request a new certificate to replace their existing certificate.
  • cessationOfOperation (RFC 5280 Reason Code #5)
    • The certificate subscriber should choose the “cessationOfOperation” revocation reason code when they no longer own all of the domain names in the certificate or when they will no longer be using the certificate because they are discontinuing their website.
  • privilegeWithdrawn (RFC 5280 Reason Code #9)
    • The CA will specify the “privilegeWithdrawn” revocation reason code when they obtain evidence that the certificate was misused or the certificate subscriber has violated one or more material obligations under the subscriber agreement or terms of use.

RFC 5280 Reason Codes that are not listed above shall not be specified in the CRL for TLS server certificates, for reasons explained in the wiki page.
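The permitted set above is small enough to express as a lookup table. This sketch (illustrative, not Firefox or CA tooling code) maps the allowed RFC 5280 CRLReason values to their names and rejects any other code for TLS server certificate CRL entries:

```python
# CRLReason values that MRSP v2.8 permits in CRL entries for
# TLS server certificates, per the list above.
ALLOWED_REASONS = {
    1: "keyCompromise",
    3: "affiliationChanged",
    4: "superseded",
    5: "cessationOfOperation",
    9: "privilegeWithdrawn",
}

def reason_allowed(code: int) -> bool:
    """True if this CRLReason code may appear in a CRL entry
    for a TLS server certificate under MRSP v2.8."""
    return code in ALLOWED_REASONS

print(reason_allowed(1))  # keyCompromise -> True
print(reason_allowed(6))  # certificateHold -> not permitted here -> False
```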


These new requirements are important steps towards improving the security of the web PKI, and are part of our effort to resolve long-existing problems with revocation of TLS server certificates. The requirements about revocation reason codes will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. The requirement that CA operators provide their full CRL URLs in the CCADB will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

The post Revocation Reason Codes for TLS Server Certificates appeared first on Mozilla Security Blog.

hacks.mozilla.orgImproved Process Isolation in Firefox 100


Firefox uses a multi-process model for additional security and stability while browsing: Web Content (such as HTML/CSS and JavaScript) is rendered in separate processes that are isolated from the rest of the operating system and managed by a privileged parent process. This way, the amount of control gained by an attacker who exploits a bug in a content process is limited.

Ever since we deployed this model, we have been working on improving the isolation of the content processes to further limit the attack surface. This is a challenging task since content processes need access to some operating system APIs to properly function: for example, they still need to be able to talk to the parent process. 

In this article, we would like to dive a bit further into the latest major milestone we have reached: Win32k Lockdown, which greatly reduces the capabilities of the content process when running on Windows. Together with two major earlier efforts (Fission and RLBox) that shipped before, this completes a sequence of large leaps forward that will significantly improve Firefox’s security.

Although Win32k Lockdown is a Windows-specific technique, it became possible because of a significant re-architecting of the Firefox security boundaries that Mozilla has been working on for around four years, which allowed similar security advances to be made on other operating systems.

The Goal: Win32k Lockdown

Firefox runs the processes that render web content with quite a few restrictions on what they are allowed to do when running on Windows. Unfortunately, by default they still have access to the entire Windows API, which opens up a large attack surface: the Windows API consists of many parts, for example, a core part dealing with threads, processes, and memory management, but also networking and socket libraries, printing and multimedia APIs, and so on.

Of particular interest for us is the win32k.sys API, which includes many graphical and widget related system calls that have a history of being exploitable. Going back further in Windows’ origins, this situation is likely the result of Microsoft moving many operations that were originally running in user mode into the kernel in order to improve performance around the Windows 95 and NT4 timeframe.

Having likely never been originally designed to run in this sensitive context, these APIs have been a traditional target for hackers to break out of application sandboxes and into the kernel.

In Windows 8, Microsoft introduced a new mitigation named PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY that an application can use to disable access to win32k.sys system calls. That is a long name to keep repeating, so we’ll refer to it hereafter by our internal designation: “Win32k Lockdown“.

The Work Required

Flipping the Win32k Lockdown flag on the Web Content processes – the processes most vulnerable to potentially hostile web pages and JavaScript – means that those processes can no longer perform any graphical, window management, input processing, etc. operations themselves.

To accomplish these tasks, such operations must be remoted to a process that has the necessary permissions, typically the process that has access to the GPU and handles compositing and drawing (hereafter called the GPU Process), or the privileged parent process. 

Drawing web pages: WebRender

For painting the web pages’ contents, Firefox historically used various methods for interacting with the Windows APIs, ranging from using modern Direct3D based textures, to falling back to GDI surfaces, and eventually dropping into pure software mode.

These different options would have taken quite some work to remote, as most of the graphics API is off limits in Win32k Lockdown. The good news is that as of Firefox 92, our rendering stack has switched to WebRender, which moves all the actual drawing from the content processes to WebRender in the GPU Process.

Because with WebRender the content process no longer needs to interact directly with the platform drawing APIs, any Win32k Lockdown related problems are avoided. WebRender itself has been designed partially to be more similar to game engines, and thus to be less susceptible to driver bugs.

For the remaining drivers that are just too broken to be of any use, it still has a fully software-based mode, which means we have no further fallbacks to consider.

Webpages drawing: Canvas 2D and WebGL 3D

The Canvas API provides web pages with the ability to draw 2D graphics. In the original Firefox implementation, these JavaScript APIs were executed in the Web Content processes and the calls to the Windows drawing APIs were made directly from the same processes.

In a Win32k Lockdown scenario, this is no longer possible, so all drawing commands are remoted by recording and playing them back in the GPU process over IPC.

Although the initial implementation had good performance, there were nevertheless reports of performance regressions on some sites (the sites that became faster generally didn’t complain!). A particular pain point is applications that call getImageData() repeatedly: having the Canvas remoted means that GPU textures must now be obtained from another process and sent over IPC.

We compensated for this in the scenario where getImageData is called at the start of a frame, by detecting this and preparing the right surfaces proactively to make the copying from the GPU faster.

Besides the Canvas API to draw 2D graphics, the web platform also exposes an API to do 3D drawing, called WebGL. WebGL is a state-heavy API, so properly and efficiently synchronizing child and parent (as well as parent and driver) takes great care.

WebGL originally handled all validation in Content, but with access to the GPU and the associated attack surface removed from there, we needed to craft a robust validating API between child and parent as well to get the full security benefit.

(Non-)Native Theming for Forms

HTML web pages have the ability to display form controls. While the overwhelming majority of websites provide a custom look and styling for those form controls, not all of them do, and if they do not you get an input GUI widget that is styled like (and originally was!) a native element of the operating system.

Historically, these were drawn by calling the appropriate OS widget APIs from within the content process, but those are not available under Win32k Lockdown.

This cannot easily be fixed by remoting the calls, as the widgets themselves come in an infinite number of sizes, shapes, and styles, can be interacted with, and need to respond to user input and dispatch messages. We settled on having Firefox draw the form controls itself, in a cross-platform style.

While changing the look of form controls has web compatibility implications, and some people prefer the more native look – on the few pages that don’t apply their own styles to controls – Firefox’s approach is consistent with that taken by other browsers, probably because of very similar considerations.

Scrollbars were a particular pain point: we didn’t want to draw the main scrollbar of the content window in a different manner from the rest of the UX, since nested scrollbars would show up with different styles, which would look awkward. But, unlike the rather rare non-styled form widgets, the main scrollbar is visible on most web pages, and because it conceptually belongs to the browser UX we really wanted it to look native.

We therefore decided to draw all scrollbars to match the system theme, although it’s a bit of an open question how things should look if even the vendor of the operating system can’t seem to decide what the “native” look is.

Final Hurdles

Line Breaking

With the above changes, we thought we had all the usual suspects that would access graphics and widget APIs in win32k.sys wrapped up, so we started running the full Firefox test suite with win32k syscalls disabled. This caused at least one unexpected failure: Firefox was crashing when trying to find line breaks for some languages with complex scripts.

While Firefox is able to correctly determine word endings in multibyte character streams for most languages by itself, the support for Thai, Lao, Tibetan and Khmer is known to be imperfect, and in these cases, Firefox can ask the operating system to handle the line breaking for it. But at least on Windows, the functions to do so are covered by the Win32k Lockdown switch. Oops!

There are efforts underway to incorporate ICU4X and base all i18n related functionality on that, meaning that Firefox will be able to handle all scripts perfectly without involving the OS, but this is a major effort and it was not clear if it would end up delaying the rollout of win32k lockdown.

We did some experimentation with forwarding the line breaking over IPC. Initially, this had bad performance, but once we added caching, performance was satisfactory or sometimes even improved, since OS calls could now be avoided in many cases.
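The caching idea amounts to putting a memoized front end in front of the expensive remote call, so repeated runs of the same text skip the IPC round trip entirely. This is an illustrative sketch, not Firefox internals; both function names and the space-based break rule are stand-ins:

```python
from functools import lru_cache

def remote_line_breaks(text: str) -> tuple:
    """Stand-in for the IPC round trip to the privileged process,
    which asks the OS for valid line-break positions (expensive)."""
    # Illustrative rule only: allow a break after every space.
    return tuple(i + 1 for i, ch in enumerate(text) if ch == " ")

@lru_cache(maxsize=4096)
def cached_line_breaks(text: str) -> tuple:
    """Memoized front end: repeated lookups of the same text are served
    from the cache instead of crossing the process boundary again."""
    return remote_line_breaks(text)

print(cached_line_breaks("a b c"))  # (2, 4)
```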

DLL Loading & Third Party Interactions

Another complexity of disabling win32k.sys access is that so much Windows functionality assumes it is available by default, and specific effort must be taken to ensure the relevant DLLs do not get loaded on startup. Firefox itself, for example, won’t load the user32 DLL containing some win32k APIs, but injected third-party DLLs sometimes do. This causes problems because COM initialization in particular uses win32k calls to get the Window Station and Desktop if the DLL is present. Those calls will fail with Win32k Lockdown enabled, silently breaking COM and features that depend on it, such as our accessibility support.

On Windows 10 Fall Creators Update and later we have a fix that blocks these calls and forces a fallback, which keeps everything working nicely. We measured that not loading the DLLs causes about a 15% performance gain when opening new tabs, adding a nice performance bonus on top of the security benefit.

Remaining Work

As hinted in the previous section, Win32k Lockdown will initially roll out on Windows 10 Fall Creators Update and later. On Windows 8 and unpatched Windows 10 (which unfortunately seems to be in use!), we are still testing a fix for the case where third-party DLLs interfere, so support for those will come in a future release.

For Canvas 2D support, we’re still looking into improving the performance of applications that regressed when the processes were switched around. Simultaneously, there is experimentation underway to see if hardware acceleration for Canvas 2D can be implemented through WebGL, which would increase code sharing between the 2D and 3D implementations and take advantage of modern video drivers being better optimized for the 3D case.


Retrofitting a significant change in the separation of responsibilities in a large application like Firefox presents a large, multi-year engineering challenge, but it is absolutely required in order to advance browser security and to continue keeping our users safe. We’re pleased to have made it through and present you with the result in Firefox 100.

Other Platforms

If you’re a Mac user, you might wonder if there’s anything similar to Win32k Lockdown that can be done for macOS. You’d be right, and I have good news for you: we already quietly shipped the changes that block access to the WindowServer in Firefox 95, improving security and speeding up process startup by about 30-70%. This, too, became possible because of the Remote WebGL and Non-Native Theming work described above.

For Linux users, we removed the connection from content processes to the X11 Server, which stops attackers from exploiting the unsecured X11 protocol. Although Linux distributions have been moving towards the more secure Wayland protocol as the default, we still see a lot of users that are using X11 or XWayland configurations, so this is definitely a nice-to-have, which shipped in Firefox 99.

We’re Hiring

If you found the technical background story above fascinating, I’d like to point out that our OS Integration & Hardening team is going to be hiring soon. We’re especially looking for experienced C++ programmers with some interest in Rust and in-depth knowledge of Windows programming.

If you fit this description and are interested in taking the next leap in Firefox security together with us, we’d encourage you to keep an eye on our careers page.

Thanks to Bob Owen, Chris Martin, and Stephen Pohl for their technical input to this article, and for all the heavy lifting they did together with Kelsey Gilbert and Jed Davis to make these security improvements ship.

The post Improved Process Isolation in Firefox 100 appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.12 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12!

Please check out [1] and [2].  Updates forthcoming.

Nothing beats a quick release.  🙂  Kudos to the guys driving these bug fixes.

[1] –

[2] –

hacks.mozilla.orgCommon Voice dataset tops 20,000 hours

The latest Common Voice dataset, released today, has achieved a major milestone: More than 20,000 hours of open-source speech data that anyone, anywhere can use. The dataset has nearly doubled in the past year.

Why should you care about Common Voice?

  • Do you have to change your accent to be understood by a virtual assistant? 
  • Are you worried that so many voice-operated devices are collecting your voice data for proprietary Big Tech datasets?
  • Are automatic subtitles unavailable for you in your language?

Automatic Speech Recognition plays an important role in the way we access information; however, of the 7,000 languages spoken globally today, only a handful are supported by most products.

Mozilla’s Common Voice seeks to change the language technology ecosystem by supporting communities to collect voice data for the creation of voice-enabled applications for their own languages. 

Common Voice Dataset Release 

This release wouldn’t be possible without our contributors, whose efforts range from donating voice clips and launching their languages in the project to opening new opportunities for people to build voice technology tools that can support every language spoken across the world.

Access the dataset:

Access the metadata: 

Highlights from the latest dataset:

  • The new release also features six new languages: Tigre, Taiwanese (Minnan), Meadow Mari, Bengali, Toki Pona and Cantonese.
  • Twenty-seven languages now have at least 100 hours of speech data. They include Bengali, Thai, Basque, and Frisian.
  • Nine languages now have at least 500 hours of speech data. They include Kinyarwanda (2,383 hours), Catalan (2,045 hours), and Swahili (719 hours).
  • Nine languages now have at least 45% of their gender tags marked as female. They include Marathi, Dhivehi, and Luganda.
  • The Catalan community fueled major growth. The Catalan community’s Project AINA — a collaboration between Barcelona Supercomputing Center and the Catalan Government — mobilized Catalan speakers to contribute to Common Voice. 
  • Community participation in decision making is deepening. The Common Voice Language Rep Cohort has contributed feedback and learnings about optimal sentence collection, the inclusion of language variants, and more.

Create with the Dataset

How will you create with the Common Voice Dataset?

Take some inspiration from technologists who are creating conversational chatbots, spoken language identifiers, research papers and virtual assistants with the Common Voice Dataset by watching this talk: 

Share with us how you are using the dataset on social media using #CommonVoice or sharing on our Community discourse. 


The post Common Voice dataset tops 20,000 hours appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgMDN Plus now available in more regions

At the end of March this year, we announced MDN Plus, a new premium service on MDN that allows users to customize their experience on the website.

We are very glad to announce today that it is now possible for MDN users around the globe to create an MDN Plus free account, no matter where they are.

Click here to create an MDN Plus free account*.

The premium version of the service is currently available as follows: in the United States, Canada (since March 24th, 2022), Austria, Belgium, Finland, France, United Kingdom, Germany, Ireland, Italy, Malaysia, the Netherlands, New Zealand, Puerto Rico, Sweden, Singapore, Switzerland, Spain (since April 28th, 2022), Estonia, Greece, Latvia, Lithuania, Portugal, Slovakia and Slovenia (since June 15th, 2022).

We continue to work towards expanding this list even further.

Click here to create an MDN Plus premium account**.

* Now available to everyone

** You will need to subscribe from one of the regions mentioned above to be able to have an MDN Plus premium account at this time

The post MDN Plus now available in more regions appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.12 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12 Beta 1.

As it is a beta, please do backup your profile before updating to it.

Please check out [1] and [2].

Updates are slowly being turned on for 2.53.12b1 after I post this blog. The last few times, users had updated to the newest version even before I had posted the blog, which somewhat confused people. This shouldn’t be the case now. (After all, I had posted the blog, *then* I flipped the update bit, then updated this blog with this side note. :))

Best Regards,


[1] –

[2] –

SUMO BlogIntroducing Dayana Galeano

Hi everybody, 

I’m excited to welcome Dayana Galeano, our new Community Support Advocate, to the Customer Experience team.

Here’s a short introduction from Dayana: 

Hi everyone! My name is Dayana and I’ll be helping out with mobile support for Firefox. I’ll be pitching in to help respond to app reviews and identifying trends to help track feedback. I’m excited to join this community and work alongside all of you!

Since the Community Support Advocate role is new for Mozilla Support, we’d like to take a moment to describe the role and how it will enhance our current support efforts. 

Open-source culture has been at the center of Mozilla’s identity since the beginning, and this has been our guide for how we support our products. Our “peer to peer” support model, powered by the SUMO community, has enabled us to support Firefox and other products through periods of rapid growth and change, and it’s been a crucial strategy to our success. 

With the recent launches of premium products like Mozilla VPN and Firefox Relay, we’ve adapted our support strategy to meet the needs and expectations of subscribers. We’ve set up processes to effectively categorize and identify issues and trends, enabling us to pull meaningful insights out of each support interaction. In turn, this has strengthened our relationships with product teams and improved our influence when it comes to improving customer experience. With this new role, we hope to apply some of these processes  to our peer to peer support efforts as well.

To be clear about our intentions, this is not a step away from peer to peer support at Mozilla. Instead, we are optimistic that this will deepen the impact our peer to peer support strategy will have with the product teams, enabling us to better segment our support data, share more insightful reports on releases, and showcase the hard work that our community is putting into SUMO each and every day. This can then pave the way for additional investment into resources, training, and more effective onboarding for new members of the community. 

Dayana’s primary focus will be supporting the mobile ecosystem, including Firefox for Android, Firefox for iOS, Firefox Focus (Android and iOS), as well as Firefox Klar. The role will initially emphasize support question moderation, including tagging and categorizing our inbound questions, and the primary support channels will be app reviews on iOS and Android. This will evolve over time, and we will be sure to communicate about these changes.

And with that, please join me to give a warm welcome to Dayana! 

Open Policy & AdvocacyThe FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets

As the internet becomes increasingly closed and centralized, consolidation and the opportunity for anti-competitive behavior rises. We are encouraged to see legislators and regulators in many jurisdictions exploring how to update consumer protection and competition policies. We look forward to working together to advance innovation, interoperability, and consumer choice.

Leveling the playing field so that any developer or company can offer innovative new products and people can shape their own online experiences has long been at the core of Mozilla’s vision and of our advocacy to policymakers. Today, we focus on the call for public comments on merger enforcement from the US Federal Trade Commission (FTC) and the US Department of Justice (DOJ) – a key opportunity for us to highlight how existing barriers to competition and transparency in digital markets can be addressed in the context of merger rules.

Our submission focuses on the below key themes, viewed particularly through the lens of increasing competition in browsers and browser engines – technologies that are central to how consumers engage on the web.

  • The Challenge of Centralization Online: For the internet to fulfill its promise as a driver for innovation, a variety of players must be able to enter the market and grow. Regulators need to be agile in their approach to tackle walled gardens and vertically-integrated technology stacks that tilt the balance against small, independent players.
  • The Role of Data: Data aggregation can be both the motive and the effect of a merger, with potential harms to consumers from vertically integrated data sharing being increasingly recognised and sometimes addressed. The roles of privacy and data protection in competition should be incorporated into merger analysis in this context.
  • Greater Transparency to Inform Regulator Interventions: Transparency tools can provide insight into how data is being used or how it is shared across verticals and are important both for consumer protection and to ensure effective competition enforcement. We need to create the right regulatory environment for these tools to be developed and used, including safe harbor access to data for public interest researchers.
  • Enabling Effective Interoperability as a Remedy: Interoperability should feature as an essential tool in the competition enforcer’s toolkit. In particular, web compatibility – ensuring that services and websites work equally well no matter what operating system, browser, or device a person is using – may prove useful in addressing harms arising from a vertically integrated silo of technologies.
  • Critical Role of Open Standards in Web Compatibility: The role of Standards Development Organizations and standards processes is vital to an open and decentralized internet.
  • Harmful Design Practices Impede Consumer Choice: The FTC and the DOJ should ban design practices that inhibit consumer control. This includes Dark Patterns and Manipulative Design Techniques used by companies to trick consumers into doing something they don’t mean to do.


The post The FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets appeared first on Open Policy & Advocacy.

hacks.mozilla.orgAdopting users’ design feedback

On March 1st, 2022, MDN Web Docs released a new design and a new brand identity. Overall, the community responded to the redesign enthusiastically and we received many positive messages and kudos. We also received valuable feedback on some of the things we didn’t get quite right, like the browser compatibility table changes as well as some accessibility and readability issues.

For us, MDN Web Docs has always been synonymous with the term Ubuntu, “I am because we are.” Translated in this context, “MDN Web Docs is the amazing resource it is because of our community’s support, feedback, and contributions.”

Since the initial launch of the redesign and of MDN Plus afterwards, we have been humbled and overwhelmed by the level of support we received from our community of readers. We do our best to listen to what you have to say and to act on suggestions so that together, we make MDN better. 

Here is a summary of how we went about addressing the feedback we received.

Eight days after the redesign launch, we started the MDN Web Docs Readability Project. Our first task was to triage all issues submitted by the community that related to readability and accessibility on MDN Web Docs. Next up, we identified common themes and collected them in this meta issue. Over time, this grew into 27 unique issues and several related discussions and comments. We collected feedback on GitHub and also from our communities on Twitter and Matrix.

With the main pain points identified, we opened a discussion on GitHub, inviting our readers to follow along and provide feedback on the changes as they were rolled out to a staging instance of the website. Today, roughly six weeks later, we are pleased to announce that all these changes are in production. This was not the effort of any one person but is made up of the work and contributions of people across staff and community.

Below are some of the highlights from this work.

Dark mode

We updated the color palette used in dark mode in particular.

  • We reworked the initial color palette to use colors that are slightly more subtle in dark mode while ensuring that we still meet AA accessibility guidelines for color contrast.
  • We reconsidered the darkness of the primary background color in dark mode and settled on a compromise that improved the experience for the majority of readers.
  • We cleaned up the notecards that indicate notices such as warnings, experimental features, items not on the standards track, etc.

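For reference, the AA contrast requirement mentioned above is defined by a published formula (WCAG's relative luminance plus a 0.05 flare term). The following is a generic sketch of that check, not the tooling the MDN team actually used:

```python
def _linear(c):
    """Convert one sRGB channel (0-255) to linear light, per the WCAG 2.x definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# AA requires at least 4.5:1 for body text (3:1 for large text).
# White on black is the maximum possible ratio:
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```

Meeting this ratio while softening the palette is exactly the balancing act described above: darker, subtler colors lower luminance differences, so each pairing has to be re-checked.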

We got a clear sense from some of our community folks that readers found it more difficult to skim content and find sections of interest after the redesign. To address this, we made a number of improvements across the site.

Browser compatibility tables

Another area of the site for which we received feedback after the redesign launch was the browser compatibility tables. Almost its own project inside the larger readability effort, the work we invested here resulted, we believe, in a much-improved user experience. All of the changes listed below are now in production:

  • We restored version numbers in the overview, which are now color-coded across desktop and mobile.
  • The font size has been bumped up for easier reading and skimming.
  • The line height of rows has been increased for readability.
  • We reduced the table cells to one focusable button element.
  • Browser icons have been restored in the overview header.
  • We reordered support history chronologically to make the version range that the support notes refer to visually unambiguous.

We also fixed the following bugs:

  • Color-coded pre-release versions in the overview
  • Showing consistent mouseover titles with release dates
  • Added the missing footnote icon in the overview
  • Showing correct support status for edge cases (e.g., omit prefix symbol if prefixed and unprefixed support)
  • Streamlined mobile dark mode

We believe this is a big step in the right direction but we are not done. We can, and will, continue to improve site-wide readability and functionality of page areas, such as the sidebars and general accessibility. As with the current improvements, we invite you to provide us with your feedback and always welcome your pull requests to address known issues.

This was a collective effort, but we’d like to mention folks who went above and beyond. Schalk Neethling and Claas Augner from the MDN Team were responsible for most of the updates. From the community, we’d like to especially thank Onkar Ruikar, Daniel Jacobs, Dave King, and Queen Vinyl Da.i’gyu-Kazotetsu.


The post Adopting users’ design feedback appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgMozilla partners with the Center for Humane Technology

We’re pleased to announce that we have partnered with the Center for Humane Technology, a nonprofit organization that radically reimagines digital infrastructure. Its mission is to drive a comprehensive shift toward humane technology that supports collective well-being, democracy, and a shared information environment. Many of you may remember the Center for Humane Technology from the Netflix documentary ‘The Social Dilemma’, which popularized the saying “If you’re not paying for the product, then you are the product”. The Social Dilemma is all about the dark side of technology, focusing on the individual and societal impact of algorithms.

Partnering was an easy decision, and it supports our efforts for a safe and open web that is accessible and joyful for all. Many people do not understand how AI and algorithms regularly touch our lives and feel powerless in the face of these systems. We are dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.

Over the last few years, our work has been increasingly focused on building more trustworthy AI and safe online spaces. We have challenged YouTube’s algorithm, which Mozilla research shows keeps recommending videos with misinformation, violent content, hate speech, and scams to its over two billion users. We have also developed Enhanced Tracking Protection in Firefox, which automatically protects your privacy while you browse, and Pocket, which recommends high-quality, human-curated articles without collecting your browsing history or sharing your personal information with advertisers.

Let’s face it: most people, if not all, would prefer safer social media platforms, and technologists should design products that reflect all users, without bias. As we collectively continue to think about our role in these areas, now and in the future, this course from the Center for Humane Technology is a great addition to the many tools necessary for change to take place.

The course, rightly titled ‘Foundations of Humane Technology’, launched out of beta in March of this year, after rave reviews from hundreds of beta testers!

It explores the personal, societal, and practical challenges of being a humane technologist. Participants will leave the course with a strong conceptual framework, hands-on tools, and an ecosystem of support from peers and experts. Topics range from respecting human nature to minimizing harm to designing technology that deliberately avoids reinforcing inequitable dynamics of the past. 

The course is completely free of charge and is centered on building awareness and self-education through eight online modules you can take at your own pace (or binge). It is aimed at professionals, with or without a technical background, who are involved in shaping tomorrow’s technology.

It includes interactive exercises and reflections to help you internalize what you’re learning, plus regular optional Zoom sessions to discuss course content, connect with like-minded people, and learn from experts in the field. Completing the course even earns a credential that can be shared with colleagues and prospective employers.

The problem with tech is not a new one, but this course is a stepping stone in the right direction.

The post Mozilla partners with the Center for Humane Technology appeared first on Mozilla Hacks - the Web developer blog.

Blog of DataThis Week in Glean: What Flips Your Bit?

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

The idea of “soft-errors”, particularly “single-event upsets” often comes up when we have strange errors in telemetry. Single-event upsets are defined as: “a change of state caused by one single ionizing particle (ions, electrons, photons…) striking a sensitive node in a micro-electronic device, such as in a microprocessor, semiconductor memory, or power transistors. The state change is a result of the free charge created by ionization in or close to an important node of a logic element (e.g. memory “bit”)”. And what exactly causes these single-event upsets? Well, from the same Wikipedia article: “Terrestrial SEU arise due to cosmic particles colliding with atoms in the atmosphere, creating cascades or showers of neutrons and protons, which in turn may interact with electronic circuits”. In other words, energy from space can affect your computer and turn a 1 into a 0 or vice versa.

There are examples in the data Glean collects from Mozilla projects like Firefox that appear to be malformed by a single bit from the value we would expect. In almost every case we cannot find any plausible explanation or bug in any of the infrastructure from client to analysis, so we often shrug and say “oh well, it must be cosmic rays”: a totally fantastical explanation for an empirical measurement of an anomaly we cannot explain.

What if it wasn’t just some fantastical explanation? What if there was some grain of truth in there and somehow we could detect cosmic rays with browser telemetry data? I was personally struck by these questions recently, when I became aware of a bug that described just these sorts of errors in its data. The errors were showing up as strings with a single character different from the expected value (a single bit, actually). At about the same time, I read an article about a geomagnetic storm that hit at the end of March. Something clicked and I started to really wonder if we could have detected a cosmic event through these single-event upsets in our telemetry data.

I did a little research to see if there was any data on the frequency of these events and found a handful of articles (for instance) that kept referring to a study done by IBM in the 1990s that cited 1 cosmic-ray bit flip per 256MB of memory per month. After a little digging, I was able to come up with two papers by J.F. Ziegler, an IBM researcher. The first paper, from 1979, “The Effects of Cosmic Rays on Computer Memories”, goes into the mechanisms by which cosmic rays can affect bits in computer memory and makes some rough estimates of the frequency of such events, as well as the effect of elevation on that frequency. The later article, from the 1990s, “Accelerated Testing For Cosmic Soft-Error Rate”, goes into more detail, measuring the soft-error rates of different chips by different manufacturers. While I never found the exact source of the “1 bit flip per 256MB per month” quote in either paper, the figure could plausibly be generalized from the soft-error rate data they contain. So, while I’m not entirely sure that number is accurate, it’s probably close enough for some simple calculations.

So, now that I had checked out the facts behind cosmic ray induced errors, it was time to see if there was any evidence of this in our data. First of all, where could I find these errors, and where would I be most likely to find them? I thought about the types of data we collect and decided that a bit flip would be nearly impossible to detect in a numeric field, unless the field had a very limited expected range. String fields seemed like an easier candidate, since a single bit flip tends to make a string look a little weird by changing one unexpected character. Our error streams are also a good place to go looking for bit flips, such as when a column or table name is affected. Secondly, I had to make a few hand-wavy assumptions in order to crunch some numbers. The main assumption is that every bit in our data has the same chance of being flipped as any other bit in memory. The secondary assumption is that the bits are getting flipped on the client side of the connection, not on our servers.
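To make the string-field search concrete, here's a rough sketch of how one might flag a value that differs from an expected label by exactly one bit. This is not actual Glean pipeline code, and the labels are made up for illustration:

```python
def bit_distance(a, b):
    """Number of differing bits between two strings' UTF-8 encodings,
    or None if the lengths differ (then it's not a simple in-place flip)."""
    ba, bb = a.encode("utf-8"), b.encode("utf-8")
    if len(ba) != len(bb):
        return None
    return sum(bin(x ^ y).count("1") for x, y in zip(ba, bb))

def find_bit_flip(observed, expected_labels):
    """Return the expected label that `observed` differs from by exactly one bit,
    or None if no label matches that closely."""
    for label in expected_labels:
        if bit_distance(observed, label) == 1:
            return label
    return None

# 'w' (0x77) is 'v' (0x76) with the lowest bit flipped:
print(find_bit_flip("actiwe", ["active", "inactive"]))  # active
```

This only catches flips that keep the string the same byte length and land one bit away from a known label, which is why a good portion of real upsets would go undetected.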

We have a lot of users, and the little bit of data we collect from each client really adds up. Let’s convert that error rate to more convenient units. Using the 1/256MB/month figure from the article, that’s 4096 cosmic soft errors per terabyte per month. According to my colleague, chutten, we receive about 100 terabytes of data per day, or 2800 TB in a 4-week period. Multiplying that out, we have the potential to find 11,468,800 bit flips in a given 4-week window of our data. WHAT! That seemed like an awful lot of possibilities, even if I suspect a good portion of them are undetectable simply because they aren’t an “obvious” bit flip.
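The back-of-the-envelope arithmetic is easy to reproduce (using the post's own figures, which are rough estimates to begin with):

```python
MB_PER_TB = 1024 * 1024                     # binary units, to match the 256 MB figure
flips_per_tb_per_month = MB_PER_TB / 256    # 4096 soft errors per TB per month
tb_per_four_weeks = 100 * 28                # ~100 TB/day of incoming data, 28 days
expected_flips = flips_per_tb_per_month * tb_per_four_weeks
print(int(expected_flips))  # 11468800
```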

The Bugzilla issue that had originally sparked my interest contained some evidence of labels embedded in the data being affected by bit flips. This was pretty easy to spot because we knew what labels we were expecting, and the handful of anomalies stood out. Not only that, the effect seemed to be somewhat localized to a geographical area. Maybe this wasn’t such a bad place to try to correlate this information with space-weather forecasts. Back to the internet, where I found an interesting space-weather article that seemed to line up with the dates from the bug. I finally hit a bit of a wall in this fantastical investigation when I found it difficult to get data on solar radiation by day and geographical location. There is a rather nifty site, which has quite a bit of interesting data on solar radiation, but I was starting to hit the limits of my current knowledge and the limits on time that I had set out for myself to write this blog post.

So, rather reluctantly, I had to set aside any deeper investigations into this for another day. I do leave the search here feeling that not only is it possible that our data contains signals for cosmic activity, but that it is very likely that it could be used to correlate or even measure the impact of cosmic ray induced single-event upsets. I hope that sometime in the future I can come back to this and dig a little deeper. Perhaps someone reading this will also be inspired to poke around at this possibility and would be interested in collaborating on it, and if you are, you can reach me via the Glean Channel on Matrix as @travis. For now, I’ve turned something that seemed like a crazy possibility in my mind into something that seems a lot more likely than I ever expected. Not a bad investigation at all.

Open Policy & AdvocacyCompetition should not be weaponized to hobble privacy protections on the open web

Recent privacy initiatives by major tech companies, such as Google’s Chrome Privacy Sandbox (GCPS) and Apple’s App Tracking Transparency, have brought into sharp focus a key underlying question – should we maintain pervasive data collection on the web under the guise of preserving competition?

Mozilla’s answer to this is that the choice between a more competitive or a more privacy-respecting web is a false one and should be scrutinized. Many parties on the Internet, including but also beyond the largest players, have built their business models to depend on extensive user tracking. Because this tracking is so baked into the web ecosystem, closing privacy holes necessarily means limiting various parties’ ability to collect and exploit that data. This ubiquity is not, however, a reason to protect a status quo that harms consumers and society. Rather, it is a reason to move away from that status quo to find and deploy better technology that continues to offer commercial value with better privacy and security properties.

None of this is to say that regulators should not intervene to prevent blatant self-preferencing by large technology companies, including in their advertising services. However, it is equally important that strategically targeted complaints not be used as a trojan horse to prevent privacy measures, such as the deprecation of third party cookies (TPCs) or restricting device identifiers, from being deployed more widely. As an example, bundling legitimate competition scrutiny of the GCPS proposals with the deprecation of third party cookies has led to the indefinite delay of this vital privacy improvement in one of the most commonly used browsers. Both the competition and privacy aspects warranted close attention, but leaning too much in favor of the former has left people unprotected.

Rather than asking regulators to look at the substance of privacy features so they do not favor dominant platforms (and there is undoubtedly work to be done on that front), vested interests have instead managed to spin the issue into one with a questionable end goal – to ensure they retain access to exploitative models of data extraction. This access, however, is coming at the cost of meaningful progress in privacy preserving advertising. Any attempt to curtail access to the unique identifiers by which people are tracked online (cookies or device IDs) is being painted as “yet another example” of BigTech players unfairly exercising dominant power. Mozilla agrees with the overall need for scrutiny of concentrated platforms when it comes to the implementation of such measures. However, we are deeply concerned that the scope creep of these complaints to include privacy protections, such as TPC deprecation which is already practiced elsewhere in the industry, is actively harming consumers.

Instead of standing in the way of privacy protection, the ecosystem should instead be working to create a high baseline of privacy protections and an even playing field for all players. That means foreclosing pervasive data collection for large and small parties alike. In particular, we urge regulators to consider advertising related privacy enhancements by large companies with the following goals:

  • Prevent Self-Preferencing: It is crucial to ensure that dominant platforms aren’t closing privacy holes for small players while leaving those holes in place for themselves. Dominant companies shouldn’t allow their services to exploit data at the platform-level that third party apps or websites can no longer access due to privacy preserving measures.
  • Restricting First Party Data Sharing: Regulatory interventions should limit data sharing within large technology conglomerates which have first party relationships with consumers across a variety of services. Privacy regulations already require companies to be explicit with consumers about who has access to their data, how it is shared, etc. Technology conglomerates conveniently escape these rules because the individual products and services are housed within the same company. Some would suggest that third party tracking identifiers are a means to counterbalance the dominance of large, first party platforms. However, we believe competition regulators can tackle dominance in first party data directly through targeted interventions governing how data can be shared and used within the holding structures of large platforms. This leverages classic competition remedies and is far better than using regulatory authority to prop up an outdated and harmful tracking technology like third party cookies.

Consumer welfare is at the heart of both competition and privacy enforcement, and leaving people’s privacy at risk shouldn’t be a remedy for market domination. Mozilla believes that the development of new technologies and regulations will need to go hand in hand to ensure that the future of the web is both private for consumers and remains a sustainable ecosystem for players of all sizes.

The post Competition should not be weaponized to hobble privacy protections on the open web appeared first on Open Policy & Advocacy.

SUMO BlogWhat’s up with SUMO – April 2022

Hi everybody,

April is a transition month, with the season starting to change from winter to spring, and a new quarter is beginning to unfold. A lot to plan, but it also means a lot of things to be excited about. With that spirit, let’s see what the Mozilla Support community has been up to these days:

Welcome note and shout-outs

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • The result of the Mozilla Support Contributor Survey 2022 is out. You can check the summary and recommendations from this deck.
  • The TCP/ETP project has been running so well. The KB changes are on the way, and we finished the forum segmentation and found 2 TCP-related bugs. The final report of the project is underway.
  • We’re one version away from Firefox 100. Check out what to expect in Firefox 100!
  • For those of you who experience problems with media upload in SUMO, check out this contributor thread to learn more about the issue.
  • Mozilla Connect was officially soft-launched recently. Check out the Connect Campaign and learn more about how to get involved!
  • The buddy forum is now archived and replaced with the contributor introduction forum. However, due to a permission issue, we’re hiding the new introduction forum at the moment until we figure out the problem.
  • Previously, I mentioned that we were hoping to finish the onboarding project implementation by the end of Q1, but we should expect a delay for this project as our platform team is stretched thin at the moment.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in February and March! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy about asking questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month    | Page views | Vs previous month
Feb 2022 | 6,772,577  | -14.56%
Mar 2022 | 7,501,867  | +10.77%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Pierre Mozinet
  3. Bithiah
  4. Danny C
  5. Seburo

KB Localization

Top 10 locales based on total page views

Locale | Feb 2022 pageviews (*) | Mar 2022 pageviews (*) | Localization progress (per Apr 11) (**)
de     | 9.56% | 8.74% | 97%
fr     | 6.83% | 6.84% | 89%
es     | 6.79% | 6.56% | 32%
zh-CN  | 5.65% | 7.28% | 100%
ru     | 4.30% | 6.12% | 86%
pt-BR  | 3.91% | 4.61% | 56%
ja     | 3.81% | 3.82% | 52%
it     | 2.64% | 2.45% | 99%
pl     | 2.51% | 2.28% | 87%
zh-TW  | 1.42% | 1.19% | 4%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Jim Spentzos
  2. Michele Rodaro
  3. TyDraniu
  4. Mark Heijl
  5. Milupo

Forum Support

Forum stats


Top forum contributors in the last 90 days: 

  1. FredMcD
  2. Jscher2000
  3. Cor-el
  4. Seburo
  5. Sfhowes
  6. Davidsk

Social Support

Month    | Total incoming conv | Conv interacted | Resolution rate
Feb 2022 | 229                 | 217             | 64.09%
Mar 2022 | 360                 | 347             | 66.14%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah K
  2. Christophe Villeneuve
  3. Kaio Duarte
  4. Tim Maks
  5. Felipe Koji

Play Store Support

Channel                   | Month    | Total priority reviews | Priority reviews replied | Total reviews replied
Firefox for Android       | Feb 2022 | 1464 | 58  | 92
Firefox for Android       | Mar 2022 | 1387 | 346 | 411
Firefox Focus for Android | Feb 2022 | 45   | 11  | 54
Firefox Focus for Android | Mar 2022 | 142  | 11  | 94
Firefox Klar Android      | Feb 2022 | 0    | 0   | 0
Firefox Klar Android      | Mar 2022 | 2    | 0   | 2

Top 3 Play Store contributors in the past 2 months: 

  1. Paul Wright
  2. Tim Maks
  3. Selim Şumlu

Product updates

Firefox desktop

  • V99 landed on Apr 5, 2022
    • Enable credit card autofill in the UK, FR, and DE
  • V100 is set for May 3, 2022
    • Picture in Picture
    • Quality Foundations
    • Privacy Segmentation (promoting Fx Focus)

Firefox mobile

  • Mobile V100 set to land on May 3, 2022
  • Firefox Android V100 (unconfirmed)
    • Wallpaper foundations
    • Task Continuity
    • New Tab Banner – messaging framework
    • Clutter-Free History
  • Firefox iOS V100 (unconfirmed)
    • Clutter Free History
  • Firefox Focus V100 (unconfirmed)
    • Unknown


Other products / Experiments

  • Pocket Android (End of April) [Unconfirmed]
    • Sponsored content
  • Relay Premium V22.03 staggered release cadence
    • Sign in with Alias Icon (April 27th)
    • Sign back in with Alias Icon (April 27th)
    • Promotional email blocker to free users (April 21st)
    • Non-Premium Waitlist (April 21st)
    • Replies count surfaced to users (unknown)
    • History section of News (unknown)
  • Mozilla VPN V2.8 (April 18)
    • Mobile onboarding/authentication flow improvements
    • Connection speed
    • Tunnel VPN through Port 53/DNS


Useful links:

Mozilla L10NL10n Report: April 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Firefox 100 is now in beta, and will be released on May 3, 2022. The deadline to update localization is April 24.

As part of this release, users will see a special celebration message.

You can test this dialog by:

  • Opening about:welcome in Nightly.
  • Copying and pasting the following code in the Browser Console:

If you’re not familiar with the Browser Console, take a look at these old instructions to set it up, then paste the command provided above.

What’s new or coming up in mobile

Just like Firefox desktop, v100 is right around the corner for mobile.

  • Firefox for Android and Focus for Android: deadline is April 27.
  • Firefox for iOS and Focus for iOS: deadline is April 24.

Some strings landed late in the cycle – but everything should have arrived by now.

What’s new or coming up in web projects

Relay Website and add-on

The next release is on April 19th. This release includes new strings along with massive updates to both projects thanks to key terminology changes:

  • alias to mask
  • domain to subdomain
  • real email to true email

To learn more about the change, please check out this Discourse post. If you can’t complete the updates by the release date, there will be subsequent releases soon after the deadline, so your work will still make it into production. Additionally, the obsolete strings will be removed once the products have caught up with the updates in most locales.
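For localization teams maintaining their own tooling, renames like these can be applied mechanically to source strings before review. Here is a minimal sketch of that idea (a hypothetical helper for illustration, not Relay’s actual tooling):

```python
# Hypothetical helper: apply Relay's 2022 terminology renames to a string.
# Illustration only -- not part of Relay's actual localization pipeline.

TERMINOLOGY_RENAMES = {
    "alias": "mask",
    "domain": "subdomain",
    "real email": "true email",
}

def apply_renames(text: str) -> str:
    """Replace old terms with their replacements, longest match first,
    so multi-word terms like "real email" win over single words."""
    for old, new in sorted(TERMINOLOGY_RENAMES.items(),
                           key=lambda kv: -len(kv[0])):
        text = text.replace(old, new)
    return text

print(apply_renames("Forward your alias to your real email"))
# → Forward your mask to your true email
```

A real migration would of course need per-locale review, since these terms rarely map one-to-one across languages.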

What’s new or coming up in SuMo

What’s new or coming up in Pontoon

Review notifications

We added a notification for suggestion reviews, so you’ll now know when your suggestions have been accepted or rejected. These notifications are batched and sent daily.
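Conceptually, batched review notifications work like a daily digest: individual accept/reject events accumulate per user and are flushed once a day as a single message. A rough sketch of that grouping logic (illustrative only, not Pontoon’s actual implementation):

```python
# Illustrative sketch of batching review events into one daily digest
# per user -- not Pontoon's actual notification code.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Review:
    suggestion: str
    accepted: bool

def build_daily_digests(events):
    """Group (user, Review) events into one digest message per user."""
    per_user = defaultdict(list)
    for user, review in events:
        per_user[user].append(review)
    digests = {}
    for user, reviews in per_user.items():
        accepted = sum(r.accepted for r in reviews)
        rejected = len(reviews) - accepted
        digests[user] = f"{accepted} suggestion(s) accepted, {rejected} rejected"
    return digests

events = [
    ("ana", Review("Bonjour", True)),
    ("ana", Review("Au revoir", False)),
    ("li", Review("Hello", True)),
]
print(build_daily_digests(events))
```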

Changes to Fuzzy strings

Soon, we’ll be making changes to the way we treat Fuzzy strings. Since they aren’t used in the product, they’ll be displayed as Missing. You will no longer find Fuzzy strings on the dashboards and in the progress charts. The Fuzzy filter will be moved to Extra filters. You’ll still see the yellow checkmark in the History panel to indicate that a particular translation is Fuzzy.
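In effect, the dashboards will bucket Fuzzy strings together with Missing ones, while the History panel still records the fuzzy state. A simplified model of that bucketing (illustrative only):

```python
# Illustrative model of the new dashboard bucketing: fuzzy strings are
# counted as missing, since they are not shipped in the product.

def dashboard_status(translation, is_fuzzy):
    if translation is None or is_fuzzy:
        return "missing"      # fuzzy now rolls up into "missing"
    return "translated"

strings = [
    ("Save", False),   # approved translation
    ("Open", True),    # fuzzy translation
    (None, False),     # no translation at all
]
counts = {"translated": 0, "missing": 0}
for translation, fuzzy in strings:
    counts[dashboard_status(translation, fuzzy)] += 1
print(counts)
# → {'translated': 1, 'missing': 2}
```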

Newly published localizer-facing documentation


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Image by Elio Qoshi

  • Thanks to everybody on the TCP/ETP contributor focus group. You’re all amazing and the Customer Experience team can’t thank you enough for everyone’s collaboration on the project.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Open Policy & Advocacy: Philippines’ SIM Card Registration Act will expose users to greater privacy and security risks online

While well-intentioned, the Philippines’ Subscriber Identity Module (SIM) Card Registration Act (2022) will set a worrying precedent for the privacy and anonymity of people on the internet. In its current state, approved by the Philippine Congress (House of Representatives and Senate) but awaiting Presidential assent, the law contains provisions requiring social media companies to mandatorily verify the real names and phone numbers of users who create accounts on their platforms. Such a move will not only limit the anonymity that is essential online (for example, for whistleblowing and protection from stalkers) but also reduce the privacy and security users can expect from private companies.

These provisions raise a number of concerns, both in principle as well as regarding implementation, which merit serious reconsideration of the law.

  • Sharing sensitive personal data with technology companies: Implementing the real name and phone number requirement in practice would entail users sending photos of government issued IDs to the companies. This will incentivise the collection of sensitive personal data from government IDs that are submitted for this verification, which can then be used to profile and target users. This is not hypothetical conjecture – we have already seen phone numbers collected for security purposes being used for profiling by some of the largest technology players in the world.
  • Harming smaller players: Such a move would entrench power in the hands of large players in the social media space who can afford to build and maintain such verification systems, harming the ability of smaller, more agile startups to compete effectively within the Philippines. The broad definition of “social media” in the law also leaves open the possibility of it applying to many more players than intended, further harming the innovation economy.
  • Increased risk of data breaches: As we have seen from the deployment of digital identity systems around the world, such a move will also increase the risk of data breaches by creating large, single points of failure: the systems where the identification documents used to verify real-world identity are stored by private social media companies. As evidence from far better protected systems has shown, such breaches are just a matter of time, with disastrous consequences for users that will extend far beyond their online interactions on such platforms.
  • Inadequate Solution: There is no evidence to prove that this measure will help fight crimes, misinformation or scams (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers. Anonymity is an integral aspect of free speech online and such a move will have a chilling effect on public discourse.

For all of these reasons, it is critical that the Subscriber Identity Module (SIM) Card Registration Act not be approved into binding law, and that these provisions be reconsidered to allow the residents of the Philippines to continue to enjoy an open internet.

The post Philippines’ SIM Card Registration Act will expose users to greater privacy and security risks online appeared first on Open Policy & Advocacy.

hacks.mozilla.org: Performance Tool in Firefox DevTools Reloaded

In Firefox 98, we’re shipping a new version of the existing Performance panel. This panel is now based on the Firefox profiler tool that can be used to capture a performance profile for a web page, inspect visualized performance data and analyze it to identify slow areas.

The icing on the cake of this already extremely powerful tool is that you can upload collected profile data with a single click and share the resulting link with your teammates (or anyone really). This makes it easier to collaborate on performance issues, especially in a distributed work environment.

The new Performance panel is available in the Firefox DevTools Toolbox by default and can be opened with the Shift+F5 keyboard shortcut.


The only thing the user needs to do to start profiling is click the big blue Start recording button. Check out the screenshot below.

As indicated by the onboarding message at the top of the new panel, the previous profiler will be available for some time and eventually removed entirely.

When profiling is started (i.e. the profiler is gathering performance data), the user can see two more buttons:

  • Capture recording – Stop recording, get what’s been collected so far and visualize it
  • Cancel recording – Stop recording and throw away all collected data

When the user clicks on Capture recording all collected data are visualized in a new tab. You should see something like the following:

The inspection capabilities of the UI are powerful and let the user inspect every bit of the performance data. You might want to follow this detailed UI Tour presentation created by the Performance team at Mozilla to learn more about all available features.


There are many options that can be used to customize how and what performance data should be collected to optimize specific use cases (see also the Edit Settings… link at the bottom of the panel).

To make customization easier, some presets are available, and the Web Developer preset is selected by default. The profiler can also be used to profile Firefox itself, and Mozilla uses it extensively to make Firefox fast for millions of its users. The Web Developer preset is intended for profiling standard web pages; the rest are for profiling Firefox.

The Profiler can also be used directly from the Firefox toolbar without the DevTools Toolbox being opened. The Profiler button isn’t visible in the toolbar by default, but you can enable it by clicking “Enable Firefox Profiler Menu Button” on the Firefox Profiler page.

This is what the button looks like in the Firefox toolbar.

As you can see from the screenshot above the UI is almost exactly the same (compared to the DevTools Performance panel).

Sharing Data

Collected performance data can be shared publicly. This is one of the most powerful features of the profiler, since it allows the user to upload data to the Firefox Profiler online storage. Before uploading a profile, you can choose which data to include and which to exclude, to avoid leaking personal data. The profile link can then be shared in online chats, emails, and bug reports so other people can see and investigate a specific case.

This is great for team collaboration, and that’s something Firefox developers have been doing for years to work on performance. The profile can also be saved as a file on a local machine and imported again later.

There are many more powerful features available and you can learn more about them in the extensive documentation. And of course, just like Firefox itself, the profiler tool is an open source project and you might want to contribute to it.

There is also a great case study on using the profiler to identify performance issues.

More is coming to DevTools, so stay tuned!

The post Performance Tool in Firefox DevTools Reloaded appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkey: SeaMonkey is released!

Hi All,

I hope everyone’s keeping safe.

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey

As this is a security update, please ensure you update your SeaMonkey, either via automatic updates or via manual download if it isn’t able to update automatically.

Please check out [1] or [2].


PS: Updates are gradually being enabled. Thanks.

[1] –

[2] –


hacks.mozilla.org: Introducing MDN Plus: Make MDN your own

MDN is one of the most trusted resources for information about web standards, code samples, tools, and everything you need as a developer to create websites. In 2015, we explored how we could expand beyond documentation to provide a structured learning experience. Our first foray was the Learning Area, with the goal of providing a useful addition to the regular MDN reference and guide material. In 2020, we added the first Front-end developer learning pathway. We saw a lot of interest and engagement from users, and the learning area contributed to about 10% of MDN’s monthly web traffic. These two initiatives were the start of our exploration into how we could offer more learning resources to our community. Today, we are launching MDN Plus, our first step to providing a personalized and more powerful experience while continuing to invest in our always free and open Web Docs.

Build your own MDN Experience with MDN Plus

In 2020 and 2021 we surveyed over 60,000 MDN users and learned that many of the respondents wanted a customized MDN experience. They wanted to organize MDN’s vast library in a way that worked for them. For today’s premium subscription service, MDN Plus, we are releasing three new features that begin to address this need: Notifications, Collections and MDN Offline. More details about the features are listed below:

  • Notifications: Technology is ever changing, and we know how important it is to stay on top of the latest updates and developments. From tutorial pages to API references, you can now get notifications for the latest developments on MDN. When you follow a page, you’ll get notified when the documentation changes, CSS features launch, and APIs ship. Now, you can get a notification for significant events relating to the pages you want to follow. Read more about it here.

Screenshot of a list of notifications on mdn plus

  • Collections: Find what you need fast with our new collections feature. Not only can you pick the MDN articles you want to save, we also automatically save the pages you visit frequently. Collections help you quickly access the articles that matter the most to you and your work. Read more about it here.

Screenshot of a collections list on mdn plus

  • MDN offline: Sometimes you need to access MDN but don’t have an internet connection. MDN offline leverages a Progressive Web Application (PWA) to give you access to MDN Web Docs even when you lack internet access so you can continue your work without any interruptions. Plus, with MDN offline you can have a faster experience while saving data. Read more about it here.

Screenshot of offline settings on mdn plus

Today, MDN Plus is available in the US and Canada. In the coming months, we will expand to other countries including France, Germany, Italy, Spain, Belgium, Austria, the Netherlands, Ireland, United Kingdom, Switzerland, Malaysia, New Zealand and Singapore. 

Find the right MDN Plus plan for you

MDN is part of the daily life of millions of web developers. For many of us, MDN helped with getting that first job or helped land a promotion. During our research we heard from many users who got so much value from MDN that they wanted to contribute financially. We were both delighted and humbled by this feedback. To provide folks with a few options, we are launching MDN Plus with three plans, including a supporter plan for those who want to spend a little extra. Here are the details of those plans:

  • MDN Core: For those who want to do a test drive before purchasing a plan, we created an option that lets you try a limited version for free.  
  • MDN Plus 5:  Offers unlimited access to notifications, collections, and MDN offline with new features added all the time. $5 a month or an annual subscription of $50.
  • MDN Supporter 10:  For MDN’s loyal supporters the supporter plan gives you everything under MDN Plus 5 plus early access to new features and a direct feedback channel to  the MDN team. It’s $10 a month or $100 for an annual subscription.  

Additionally, we will offer a 20% discount if you subscribe to one of the annual subscription plans.

We invite you to try the free trial version or sign up today for a subscription plan that’s right for you. MDN Plus is only available in selected countries at this time.


The post Introducing MDN Plus: Make MDN your own appeared first on Mozilla Hacks - the Web developer blog.