SUMO Blog: Hubs transition

Hi SUMO folks,

I’m delighted to share this news with you. The Hubs team has recently transitioned the product into a new phase. In the past, you needed to figure out hosting and deployment on your own with Hubs Cloud; now you have the option to simply subscribe to unlock more capabilities to customize your Hubs room. To learn more about this transformation, you can read their blog post.

Along with this relaunch, Mozilla has also just acquired Active Replica, a team that shares Mozilla’s passion for 3D development. To learn more about this acquisition, you can read this announcement.

What does this mean for the community?

To support this change, the SUMO team has been collaborating with the Hubs team to update Hubs help articles that we host on our platform. We also recently removed Hubs AAQ (Ask a Question) from our forum, and replaced it with a contact form that is directly linked to our paid support infrastructure (similar to what we have for Mozilla VPN and Firefox Relay).

Paying customers of Hubs should be directed to file a support ticket via the Hubs contact form, which will be managed by our designated staff members. Though contributors can no longer help with the forum, you are definitely welcome to help with Hubs’ help articles. There’s also a Mozilla Hubs Discord server that contributors can pop into and participate in.

We are excited about the new direction that the Hubs team is taking and hope that you’ll support us along the way. If you have any questions or concerns, we’re always open to discussion.

The Mozilla Blog: Pulse Joins the Mozilla Family to Help Develop a New Approach to Machine Learning

I’m proud to announce that we have acquired Pulse, an incredible team that has developed some truly novel machine learning approaches to help streamline the digital workplace. The products that Raj, Jag, Rolf, and team have built are a great demonstration of their creativity and skill, and we’re incredibly excited to bring their expertise into our organization. They will spearhead our efforts in applied ethical machine learning, as we invest to make Mozilla products more personal, starting with Pocket. 

Machine learning (ML) has become a powerful driver of product experience. At its best, it helps all of us to have better, richer experiences across the web. Building ML models to drive these experiences requires data on people’s preferences, behaviors, and actions online, and that’s why Mozilla has taken a very cautious approach in applying ML in our own product experiences. It is possible to build machine learning models that act in service of the people on the internet, transparently, respectful of privacy, and built from the start with a focus on equity and inclusion. In short, Mozilla will continue its tradition of DOING: building products that serve as examples of a better way forward for the industry, a way forward that puts people first.

Which explains why we were so excited when we began talking to the Pulse team. It became immediately obvious that we both fundamentally agree that the world needs a model where automated systems are built from day one with individual people as the primary beneficiary. Mozilla, with an almost 25 year history of building products with people and privacy at their core, is the right organization to do that. And with Pulse as part of our team, we can move even more quickly to set a new example for the industry. 

One of the things that makes this marriage such a great fit is Pulse’s history of building products that optimize for the preference of each individual customer. They know how to take things from theory and design and turn them into real product experiences that address actual needs and preferences. That kind of background is going to be critical as we work to enhance the experience across our existing and new products in the coming years. I’m particularly excited to enhance our machine learning capabilities, including personalization, in Pocket, a fantastic product that has only just scratched the surface of its ultimate potential.

We have big plans for the Pulse team’s skills and know-how, and are thrilled to have their contributions across our entire growing portfolio of products.

So, Raj, Jag, Rolf, and team, welcome aboard! We are energized by the chance to work together, and I can’t wait to see what we build.

The post Pulse Joins the Mozilla Family to Help Develop a New Approach to Machine Learning appeared first on The Mozilla Blog.

The Mozilla Blog: Celebrating Pocket’s Best of 2022

The run-up to December is always my favorite time of year at Pocket. It’s when we sift through our data (always anonymous and aggregated—we’re part of Mozilla, after all 😉), to see which must-read profiles, thought-provoking essays, and illuminating explainers Pocket readers loved best over the past 12 months. 

Today, we’re delighted to bring you Pocket’s Best of 2022. This year’s honor roll is our biggest ever: a whopping 20 lists celebrating the year’s top articles across culture, technology, science, business, and more. All are informed by the saving and reading habits of Pocket’s millions of curious, discerning users.

The stories people save to Pocket reveal something unique—not only about what’s occupying our collective attention, but about what we aspire to be. And what we see again and again from 40 million saves to Pocket every month is the gravitational pull of stories that help us better understand the world around us—and ourselves. 

For the past few years, our most-saved articles have reflected our challenging, unsettling times: how to manage burnout (2019), Covid uncertainty (2020), and the chronic sense of ‘blah’ so many of us felt as the pandemic wore on (2021). This year, we see something different: seeds of renewal. Our data shows people looking to reinvent themselves and redefine what happiness looks like to them. We see readers eager to reset their relationships: with their stuff, with technology, and especially with other people. Articles about how to build deeper connections were some of the most popular stories saved to Pocket this year. 

These are, in many ways, age-old challenges. But what you’ll find in our Best of 2022 collections are all the ways Pocket readers are discovering and embracing new solutions after two long, hard years. To borrow a phrase from a story that resonated deeply with the Pocket community this year, it feels like something of a vibe shift.

Nowhere was this more evident than in the author who earned more saves to Pocket than any other this year: Arthur C. Brooks, whose “How to Build a Life” series for The Atlantic was a Pocket favorite month in and month out. Whether it’s shortcuts to contentment or tips for how to want less, you can seek your own vibe shift (and bliss) in a special year-end collection of Arthur’s most popular pieces, with an introduction by the #1 most-saved author himself. 

There are so many more gems to enjoy in the Best of 2022 collections.

If the articles featured in Best of 2022 are new to you, save them to your Pocket and dig in over the holidays. (May we suggest making use of our Listen feature while wrapping gifts?) While you’re at it, join the millions of people discovering the thought-provoking articles we curate in our apps, daily newsletter, and in the Firefox browser each and every day. 

With Pocket, you can make active decisions about how you spend your time online—if that isn’t a vibe shift, what is?

From all of us at Pocket, have a joyous and safe holiday season.

Carolyn O’Hara is senior director of content discovery at Pocket.

P.S. For our German-speaking Pocket users: we also have our biggest “Best of” yet in store for you, featuring the best articles and stories that our community clicked, read and saved the most this year. Discover the most exciting stories of 2022 here, covering psychology, science, technology and many other topics. On top of that, author Sarah Diehl has curated a special collection about the power of solitude. With that in mind: happy reading!

Methodology: The Best of 2022 winners were selected based on an aggregated and anonymized analysis of the links saved to Pocket in 2022, with a focus on English- and German-language articles. Results took into account how often a piece of content was saved, opened, read, and shared, among other factors.

The post Celebrating Pocket’s Best of 2022 appeared first on The Mozilla Blog.

The Mozilla Blog: 3 ways to use Mozilla Hubs, a VR platform that’s accessible and private by design

A 3D illustration shows human, animal, food and robotic characters floating in a nature setting. (Credit: JR Ingram / Mozilla)

When NASA’s Webb Space Telescope team and artist Ashley Zelinskie wanted to bring space exploration to everyone, they chose Mozilla Hubs, our open source platform for creating 3D virtual spaces right from your browser. 

Ashley told us that they “didn’t want to cut people out that didn’t have fancy VR headsets or little experience in VR. … If we were going to invite the world to experience the Webb Telescope we wanted everyone to be able to attend.” 

That’s exactly why Mozilla has been investing in the immersive web: We believe that virtual worlds are part of the future of the internet, and we want them to be accessible and safe for all. 

That means each Hubs user controls access to the virtual world they created, which is only discoverable to the people they share it with. Hubs users and their guests can also immerse themselves in this world right from their desktop or mobile browser – no downloads or installations required. And while you can use a VR headset, you can access the same spaces through your phone, tablet or desktop computer.

If you’re curious, take a look at a few ways people have been creating immersive worlds with Mozilla Hubs: 

To create art galleries and portfolios

A screenshot from a Mozilla Hubs room shows an art gallery. (Credit: Apart Poster Gallery by Paradowski Creative)

It’s not just space art. A virtual museum of art prints put together by the creative agency Paradowski helped raise money for a COVID-19 response fund by the World Health Organization. In St. Louis, Missouri, the American Institute of Graphic Arts showcased artists’ work during the school’s annual design show. In the U.K., the Royal Institute of British Architects presented an exhibition that immersed visitors in architectural milestones over the last five centuries. 

While Mozilla Hubs can host projects on the grandest scale, you can use it for personal projects too: Whether you’re an artist, photographer or a 3D modeler, you can create an immersive portfolio that’s easy to use and accessible to anybody with a browser.

To build spaces for hobbies (and meet new people)

A screenshot from a Mozilla Hubs space shows a group of human and animal characters. (Credit: Mozilla Hubs Creator Labs)

The website Meetup lets people find local events based on their interests – from pet poultry to coding to learning a new language. In addition to in-person gatherings, the platform allows people to organize online. Those who wish to meet up virtually can do so in a 3D space through Hubs. 

You can create your own immersive space and invite others. You can also just grab an existing room made available by another creator, remix it and make it your own.

To teach and learn 

NYU Langone Health, one of the largest healthcare systems in the northeast U.S., uses Hubs to teach anatomy. Hubs helps instructors immerse medical students in the coursework, including 3D vascular stereoscopic models.

A screenshot from a Mozilla Hubs room shows a drought map. (Credit: Screenshot courtesy of Dr. Tutaleni Asino)

Oklahoma State University’s Emerging Technologies and Creativity Research Lab created a virtual science expo that showcased different Earth environments and hosted Q&A sessions with scientists.

Olympic medalist Sofía Toro, along with professors from the Universidad Católica San Antonio in Spain, even taught a windsurfing class online using Hubs.

Virtual spaces offer new opportunities for connections and innovation. Through Hubs, Mozilla wants to make those opportunities available to everyone. Learn more and join the Hubs community here.  

The post 3 ways to use Mozilla Hubs, a VR platform that’s accessible and private by design appeared first on The Mozilla Blog.

The Mozilla Thunderbird Blog: Thunderbird Android Update: K-9 Mail 6.400 Adds Customizable Swipe Actions

In what feels like the blink of an eye, four months have passed since we announced our plans for bringing Thunderbird to Android. The path to bringing you a great email experience on Android devices begins with K-9 Mail, which joined the Thunderbird product family earlier this summer.

As we work towards a modern redesign of desktop Thunderbird, we’re also working towards improving K-9 Mail as it begins its transition to Thunderbird mobile in Summer 2023.

Later this week, we’ll share a preview of K-9 Mail’s beautiful Message View redesign. Today, though, let’s talk about the newly released version 6.400.

K-9 Mail 6.400 Adds Customizable Swipe Features

Version 6.400 of K-9 Mail for Android begins rolling out today on F-Droid and Google’s Play Store. In keeping with our pursuit of a modernized interface, the new version introduces customizable swipe actions.

K-9 Mail 6.400 introduces swipe actions, such as Archiving and Deleting. (Shown here in Dark Mode)

As you can see above, an intuitive swipe to the left or right can quickly delete or archive a message. But you’re obviously not limited to just those actions. You can customize both right and left swipe actions with the following options:

  • Toggle Selection
  • Mark as read/unread
  • Add/remove star
  • Archive
  • Delete
  • Spam
  • Move

To configure your own preferences, open the app menu (the 3 vertical lines on the top left of the app), and then: Settings ➡ General Settings ➡ Interaction ➡ Swipe Actions.

Swipe interactions extend to the full message view. When you’re reading a message, simply swipe left to navigate to the next message in your list, or swipe right to back up to the previous message.

K-9 Mail 6.400 introduces swipe actions, such as moving between your messages. (Shown here in Light Mode)

We also recently added integral OAuth 2.0 support for major email account providers like Google, Yahoo Mail, AOL Mail, Hotmail, and Microsoft (Office 365 and personal accounts).

Track Our Progress On The Thunderbird Android Roadmap

In addition to our Thunderbird Supernova roadmap, we’ve recently added a roadmap for Android Thunderbird. Track our progress and see what other features are in development by clicking here.

Then discuss the future of Thunderbird on Android on our public mailing list.

Where To Get K-9 Mail Version 6.400

Version 6.400 will start gradually rolling out today. As always, you can get it on the following platforms:

GitHub | F-Droid | Play Store

(Note that the release will gradually roll out on the Google Play Store, so please be patient if it doesn’t automatically update.)

Try New Features First: Join The K-9 Mail Beta!

As K-9 Mail transforms into Thunderbird for Android, be the first to try out new features and interface improvements by testing the Beta version.

GitHub releases → We publish all of our releases there. Beta versions are marked with a “Pre-release” label.

Play Store → You should be able to join the beta program using the Google Play Store app on the device. Look out for the “Join the beta” section in K-9 Mail’s detail page.

F-Droid → Unlike stable versions, beta versions on F-Droid are not marked with a “Suggested” label. You have to manually select such a version to install it. To get update notifications for non-suggested versions, you need to check ‘Settings > Expert mode > Unstable updates’ in the F-Droid app.

The post Thunderbird Android Update: K-9 Mail 6.400 Adds Customizable Swipe Actions appeared first on The Thunderbird Blog.

The Mozilla Blog: New phone? Give yourself the gift of privacy with these 5 tips

An illustration shows two gift boxes and a padlock. (Credit: Nick Velazquez)

So you’ve unboxed a shiny new phone, peeled the sticker off the screen and transferred your data. If you’re reading this, you’ve made the smart decision to take another important step: setting up your device for privacy and security.

Here are five steps you can take to help keep your data safe. Your future self thanks you.

1. Set up privacy controls on your new devices

Do you know which apps know your location, track your online activity and have access to your contacts? Check your privacy settings to make sure you’re only sharing what you’re comfortable sharing with your apps. Here’s how to get to your phone’s privacy settings:

  • iPhone: Settings > Privacy & Security
  • Android: Settings > Privacy > Permission Manager

2. Turn on auto-update

Updates can be disruptive, but they’re also vital in keeping your device safe from hackers who take advantage of security holes that updates are intended to patch. Here’s where you can turn on auto-update:

  • iPhone: Settings > General > Software Update > Automatic Updates
  • Android: Settings > Software Updates

3. Opt out of the default browser

Sure it’s convenient to use the browser that’s already on your phone. But you do have a choice. Download Firefox to use a browser that’s backed by a nonprofit and that will always put you and your privacy first. Once you’ve installed Firefox, here’s how to make it your default browser:

  • iPhone: Settings > Firefox > Default Browser App > Firefox
  • Android: Settings > Set as default browser > Firefox for Android > Set as default

Another benefit: If you already use Firefox on desktop, you’ll get to see your bookmarks, history and saved credit card information and passwords on your phone too. Just log into your Firefox account to move seamlessly between your devices. 

A table compares major browsers' security and privacy features, including private browsing, blocking third-party tracking cookies by default, blocking cryptomining scripts and blocking social trackers. Firefox checks the boxes for all.

4. Prevent spam texts and calls with Firefox Relay

Want fewer spam text messages and calls? Sign up for Firefox Relay, which gives you a phone number mask (i.e. not your true digits) when website forms ask for your number. That way, when you’re making restaurant reservations or signing up for discount codes, you lessen the chance of companies selling your phone number to third parties. Bonus: You can even give your phone number mask to people when you don’t want to give them your true number just yet. Phone calls and texts will automatically get forwarded to you. Learn more about how Firefox Relay works here.

5. Consider using a VPN

Many mobile apps don’t implement encryption properly, leaving the data on your phone vulnerable to hackers. Using a VPN encrypts your connection and conceals your IP address, shielding your identity and location from prying eyes. The Mozilla VPN, unlike some services, will never log and sell your data. (P.S. We’ve made it more accessible to take advantage of both Firefox Relay and Mozilla VPN. Learn more about it here.)

Staying secure and private online isn’t hard, but it does take some effort. Mozilla is always here to help. For more tips about living your best online life, check out our #AskFirefox series on YouTube.

The post New phone? Give yourself the gift of privacy with these 5 tips appeared first on The Mozilla Blog.

The Mozilla Thunderbird Blog: Help Keep Thunderbird Alive and Thriving In 2023

A few short years ago Thunderbird was on the verge of extinction. But you saved us! This year we began work on an Android version of Thunderbird, made excellent progress toward next year’s “Supernova” release, and hired more talented software engineers, developers, and designers to help us make Thunderbird better than ever in 2023.

Putting YOU In Control — Not A Corporation

Since 2003, part of our mission has been giving you a customizable communication experience full of powerful features. The other part of Thunderbird’s mission is more personal: Respecting your privacy and putting you in control – not a corporation. 

We never show advertisements, and we never sell your data. That’s because Thunderbird is completely funded by gifts from generous people just like you. You keep this great software free, and you keep us thriving! 

But accomplishing this mission is expensive. Consistently improving Thunderbird and keeping it competitive means ensuring your security in a constantly changing landscape of mail providers. It means maintaining complex server infrastructure. It means fixing bugs and updating old code. It means striving for full accessibility and a refreshing, modern design. 

Help Thunderbird Thrive In 2023

So today, we’re asking for your help. Did you know that development of Thunderbird is funded by less than 1% of the people who use and enjoy it? 

If you find value in using Thunderbird, please consider giving a gift to support it. Your contributions make a huge difference. And if you’ve already donated this year, THANK YOU!

Thank you for using Thunderbird, and thank you for trusting us with your important daily communications. 

The post Help Keep Thunderbird Alive and Thriving In 2023 appeared first on The Thunderbird Blog.

hacks.mozilla.org: Improving Firefox stability with this one weird trick

The first computer I owned shipped with 128 KiB of RAM, and to this day I’m still jarred by the idea that applications can run out of memory, given that even 15-year-old machines often shipped with 4 GiB. And yet running out of memory is one of the most common causes of instability experienced by users, and in the case of Firefox it is the biggest source of crashes on Windows.

As such, at Mozilla, we spend significant resources trimming down Firefox memory consumption and carefully monitoring the changes. Some extra efforts have been spent on the Windows platform because Firefox was more likely to run out of memory there than on macOS or Linux. And yet none of those efforts had the impact of a cool trick we deployed in Firefox 105.

But first things first: to understand why applications running on Windows are more prone to running out of memory than applications on other operating systems, it’s important to understand how Windows handles memory.

All modern operating systems allow applications to allocate chunks of the address space. Initially these chunks are just address ranges; they aren’t backed by physical memory until data is stored in them. When an application starts using a bit of address space it has reserved, the OS dedicates a chunk of physical memory to back it, possibly swapping out some existing data if need be. Both Linux and macOS work this way, and so does Windows, except that Windows requires an extra step compared to the other OSes.

After an application has requested a chunk of address space it needs to commit it before being able to use it. Committing a range requires Windows to guarantee it can always find some physical memory to back it. Afterwards, it behaves just like Linux and macOS. As such Windows limits how much memory can be committed to the sum of the machine’s physical memory plus the size of the swap file.
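
To make the reserve/commit distinction concrete, here is a minimal sketch of the Win32 calls involved. This is not Firefox code, and the 64 MiB size is an arbitrary value chosen for illustration:

#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T size = 64 * 1024 * 1024; // 64 MiB, arbitrary for illustration

    // Step 1: reserve a range of address space. No commit charge is taken
    // yet; no physical memory or swap space is set aside.
    void* reserved = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!reserved) {
        std::printf("reserve failed: %lu\n", GetLastError());
        return 1;
    }

    // Step 2: commit the range. This is the step that counts against the
    // system-wide commit limit (physical RAM plus swap file), even though
    // no page has been touched yet.
    if (!VirtualAlloc(reserved, size, MEM_COMMIT, PAGE_READWRITE)) {
        // Typically fails with ERROR_COMMITMENT_LIMIT when commit space
        // is exhausted.
        std::printf("commit failed: %lu\n", GetLastError());
        VirtualFree(reserved, 0, MEM_RELEASE);
        return 1;
    }

    // Step 3: only when pages are actually written does the OS back them
    // with physical memory.
    static_cast<char*>(reserved)[0] = 42;

    VirtualFree(reserved, 0, MEM_RELEASE);
    return 0;
}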

This resource – known as commit space – is a hard limit for applications. Memory allocations will start to fail once the limit is reached. In operating system terms, this means that Windows does not allow applications to overcommit memory.

One interesting aspect of this system is that an application can commit memory that it won’t use. The committed amount will still count against the limit even if no data is stored in the corresponding areas and thus no physical memory has been used to back the committed region. When we started analyzing out of memory crashes we discovered that many users still had plenty of physical memory available – sometimes gigabytes of it – but were running out of commit space instead.

Why was that happening? We don’t know for sure, but we made some educated guesses. Firefox tracks all the memory it uses, and we could account for all the memory that we committed directly.

However, we have no control over Windows system libraries, and in particular graphics drivers. One thing we noticed is that graphics drivers commit memory to make room for textures in system memory. This allows them to swap textures out of GPU memory when there isn’t enough and keep them in system memory instead, a mechanism similar to how regular memory can be swapped out to disk when there is not enough RAM available. In practice, this rarely happens, but these committed areas still count against the limit.

We had no way of fixing this issue directly, but we still had an ace up our sleeve: when an application runs out of memory on Windows, it’s not outright killed by the OS; its allocation simply fails, and the application can then decide for itself what to do.

In some cases, Firefox could handle the failed allocation, but in most cases, there is no sensible or safe way to handle the error and it would need to crash in a controlled way… but what if we could recover from this situation instead? Windows automatically resizes the swap file when it’s almost full, increasing the amount of commit space available. Could we use this to our advantage?

It turns out that the answer is yes, we can. So we adjusted Firefox to wait for a bit instead of crashing and then retry the failed memory allocation. This leads to a bit of jank as the browser can be stuck for a fraction of a second, but it’s a lot better than crashing.
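
The shipped change lives inside Firefox’s own allocation machinery, but the underlying pattern is simple. Here is a hypothetical sketch of it, assuming a plain malloc-based allocation path; the attempt count and delay are made-up values for illustration, not Firefox’s actual tuning:

#include <chrono>
#include <cstdlib>
#include <thread>

// Hypothetical sketch of the wait-and-retry pattern described above,
// not Firefox's actual implementation.
void* AllocateWithRetry(std::size_t bytes, int maxAttempts = 10) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (void* ptr = std::malloc(bytes)) {
            return ptr;  // Commit space was available, or has since been freed up.
        }
        // Stall briefly before retrying: this is the "bit of jank" mentioned
        // above. It gives Windows time to grow the swap file and other
        // processes a chance to release memory.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return nullptr;  // Still out of commit space: fall back to a controlled crash.
}

int main() {
    void* block = AllocateWithRetry(256u * 1024 * 1024);  // 256 MiB, arbitrary
    if (!block) {
        return 1;  // This is where a controlled OOM crash would happen.
    }
    std::free(block);
    return 0;
}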

There’s also another angle to this: Firefox is made up of several processes and can survive losing all of them but the main one. Delaying a main process crash might lead to another process dying if memory is tight. This is good because it would free up memory and let us resume execution, for example by getting rid of a web page with runaway memory consumption.

If a content process died, we would need to reload it; if it was the GPU process instead, the browser would briefly flash while we relaunched it. Either way, the result is less disruptive than a full browser crash. We used a similar trick in Firefox for Android and Firefox OS before that, and it worked well on both platforms.

This little trick shipped in Firefox 105 and had an enormous impact on Firefox stability on Windows. The chart below shows how many out-of-memory browser crashes were experienced by users per active usage hours:

You’re looking at a >70% reduction in crashes, far more than our rosiest predictions.

And we’re not done yet! Stalling the main process led to a smaller increase in tab crashes – which are also unpleasant for the user even if not nearly as annoying as a full browser crash – so we’re cutting those down too.

Last but not least, we want to improve Firefox behavior in low-memory scenarios by responding differently to cases where we’re low on commit space and cases where we’re low on physical memory. This will reduce swapping and help shrink Firefox’s footprint to make room for other applications.

I’d like to send special thanks to my colleague Raymond Kraesig who implemented this “trick”, carefully monitored its impact and is working on the aforementioned improvements.

The post Improving Firefox stability with this one weird trick appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: The best gift for anyone who wants to feel safer when they go online: Mozilla privacy products

The holidays are a wonderful time of year when we happily shop online for unique gifts for loved ones. It also means we’re sharing our personal information online, like giving out email addresses or phone numbers to sign up for discount programs or create new accounts. Whenever we go online, we are asked for personal information, which can end up in the wrong hands. And once our information is out there and publicly available, it’s even tougher to get it back.

Here at Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet, we get that. Our privacy products, Firefox Relay and Mozilla VPN, have helped people feel safer when they go online and have blocked more than 1.5 million unwanted emails from people’s inboxes while keeping their real email addresses safe from trackers across the web. So, wherever you go online with Mozilla’s trusted products and services, your information is safer. 

Mozilla’s privacy products include Firefox Relay, which hides your real email address and masks your phone number, and Mozilla VPN, our fast and easy-to-use VPN service that helps protect the privacy of your network traffic. Together they help you keep what you do online private. And now, we are making it easier to get both Firefox Relay and Mozilla VPN together, for $6.99 a month when you sign up for an annual subscription. Whether you currently use one or none of these products, here’s more information on what makes them a must-have whenever you go online.

Mozilla privacy product #1: Firefox Relay

Since the launch of Firefox Relay, thousands of users have signed up for our smart, easy solution that hides their real email address to help protect their identity. This year, we continued to look to our users to improve and shape their Firefox Relay experience. In 2022, we added user-requested features, which included increasing the email size limit to 10 MB and making Firefox Relay available as a Chrome extension. For Firefox Relay Premium users, we added a phone number mask feature to protect personal phone numbers. Whether you are signing up for loyalty programs, booking a restaurant reservation, or making purchases that require your phone number, now you can feel confident that your personal phone number won’t fall into the wrong hands. You can read more about the phone number mask feature here. Firefox Relay has helped keep thousands of people’s information safe. Check out the great coverage in The Verge, Popular Science, Consumer Reports and PCMag.

Mozilla privacy product #2: Mozilla VPN

This year, Mozilla VPN, our fast and easy-to-use Virtual Private Network service, integrated with one of our users’ favorite Firefox Add-ons, Multi-Account Containers, to offer a unique privacy solution that is only available in Firefox. We also included the ability to multi-hop, which means that you can use two VPN servers instead of one for extra protection. You can read more about this feature here. To date, thousands of people have signed up for Mozilla VPN, which provides device-level network traffic protection as you go on the web. Besides our loyal users, there are numerous news articles (Consumer Reports, Washington Post, KTLA-TV and The Verge) that can tell you more about how a VPN can help whenever you use the web.

Better Together: Firefox Relay and Mozilla VPN

If there’s one person you shouldn’t forget on your list, it’s you: give yourself the gift of privacy with Mozilla’s products. And now we’re offering Firefox Relay and Mozilla VPN together at $6.99 a month when you sign up for an annual subscription.

At Mozilla, we are committed to innovating and delivering new products like Mozilla VPN and Firefox Relay. We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. By subscribing to our products, users support both Mozilla’s product development and our mission to build a better web for all.

Subscribe today either from the Mozilla VPN or Firefox Relay site.

The post The best gift for anyone who wants to feel safer when they go online: Mozilla privacy products  appeared first on The Mozilla Blog.

Mozilla Add-ons Blog: Manifest v3 signing available November 21 on Firefox Nightly

Starting November 21, 2022, add-on developers are welcome to upload their Firefox Manifest version 3 (MV3) compatible extensions to addons.mozilla.org (AMO) and have them signed as MV3 extensions. Getting an early jump on MV3 signing enables you to begin testing your extension’s future functionality on Nightly to ensure a smooth eventual transition to MV3 in Firefox.

To be clear, Firefox will continue to support MV2 extensions for the foreseeable future, even as MV3 extensions reach general availability in Firefox 109 (January 17, 2023). Our goal has been to ensure a seamless transition from MV2 to MV3 for extension developers. By taking a gradual approach and gathering feedback as MV3 matures, we anticipate opportunities will emerge over time to modify our initial MV3 offering. In these instances, we intend to take the time necessary to make informed decisions about our approach.

Towards the end of 2023 — once we’ve had time to evaluate and assess MV3’s rollout (including identifying important MV2 use cases that will persist into MV3) — we’ll decide on an appropriate timeframe to deprecate MV2. Once this timeframe is established, we’ll communicate MV2’s closure process with advance notice. For now, please see this guide for supporting both MV2 and MV3 versions of your extension on AMO.

Mozilla’s vision for Firefox MV3

Firefox MV3 offers simplified and consolidated APIs, enhanced security and privacy mechanisms, and functionality to better support mobile platforms. As we continue to collaborate with other browser vendors and the developer community to shape MV3, we recognize cross-browser compatibility as a fundamental consideration. That said, we’re also implementing distinct elements to suit Firefox’s product and community needs. We want to give extension developers creative flexibility and choice, while ensuring users maintain access to the highest standards of extension customization and security. Firefox MV3 stands apart from other iterations of MV3 in two critical ways:

  1. While other browser vendors introduced declarativeNetRequest (DNR) in favor of blocking Web Request in MV3, Firefox MV3 continues to support blocking Web Request and will support a compatible version of DNR in the future. We believe blocking Web Request is more flexible than DNR, thus allowing for more creative use cases in content blockers and other privacy and security extensions. However, DNR also has important performance and compatibility characteristics we want to support.
  2. Firefox MV3 offers Event Pages as the background script in lieu of service workers, though we plan to support service workers in the future for compatibility. Event Pages offer benefits like DOM and Web APIs that aren’t available to service workers, while also generally providing a simpler migration path.

Over subsequent releases next year, we’ll continue to expand Firefox MV3 compatibility.

MV3 also ushers in an exciting user interface change in the form of the new Unified Extensions button (already available on Firefox Nightly). This will give users direct control over which extensions can access specific websites.

The Unified Extensions button will give Firefox users direct control over website-specific extension permissions.

Users are able to review, grant, or revoke MV3 extension access to any website. MV2 extensions will display in the button interface, but permissions access is unavailable. Please see this post for more information about the new Unified Extensions button.

If you’re planning to migrate your MV2 extension to MV3, there are steps you can take today to get started. We always encourage feedback from our developer community, so don’t hesitate to get in touch.

The post Manifest v3 signing available November 21 on Firefox Nightly appeared first on Mozilla Add-ons Community Blog.

Mozilla Add-ons Blog: Unified Extensions Button and how to handle permissions in Manifest V3

Manifest V3 (MV3) is bringing new user-facing changes to Firefox, including the Unified Extensions Button to manage installed and enabled browser extension permissions (origin controls), providing Firefox users control over extension access to their browsers. The first building blocks of this button were added to Nightly in Firefox 107 and will become available with the general release of MV3 in Firefox 109.

Unified Extensions Button

The Unified Extensions button will give Firefox users direct control over website-specific extension permissions.

In MV2, host permissions are granted by the user at the time of install and there’s no elegant way for the user to change this setting (short of uninstalling/reinstalling and choosing different permissions). But with the new Unified Extensions Button in MV3 in Firefox, users will have easy access and persistent control over which extensions can access any web page, at any time. Users are free to grant ongoing access to a website, or make a choice per visit. To enable this, MV3 treats host permissions (listed in the extension manifest) as opt-in.

The button panel will display the user’s installed and enabled extensions and their current permission state. In addition to managing host permissions, the panel also allows the user to manage, remove, or report the extension. Extensions with browser actions will behave similarly in the toolbar as in the panel.

Manifest V2 (MV2) extensions will also display in the panel; however, users can’t take actions for MV2 host permissions, since those were granted at installation and this choice cannot be reversed in MV2 without uninstalling the extension and starting again.

How to deal with opt-in permissions in extension code

The Permissions API provides a way for developers to read and request permissions.

With permissions.request(), you can request specific permissions that have been defined as optional permissions in the manifest:

const permissionsToRequest = {
  permissions: ["bookmarks", "history"],
  origins: ["https://developer.mozilla.org/"]
}

async function requestPermissions() {
  function onResponse(response) {
    if (response) {
      console.log("Permission was granted");
    } else {
      console.log("Permission was refused");
    }

    return browser.permissions.getAll();
  }

  const response = await browser.permissions.request(permissionsToRequest);
  const currentPermissions = await onResponse(response);

  console.log(`Current permissions:`, currentPermissions);
}

This is handy when the request for permissions is tied to a user action like selecting a context menu item. Note that you cannot request a permission that is not defined in the manifest.

Other times, you’ll want to react to a permission being granted or removed. This can be done with permissions.onAdded and permissions.onRemoved respectively.


function handleAdded(permissions) {
  console.log(`New API permissions: ${permissions.permissions}`);
  console.log(`New host permissions: ${permissions.origins}`);
}

browser.permissions.onAdded.addListener(handleAdded);

Finally, you can check for already existing permissions in two different ways: permissions.getAll() returns a list of all granted permissions and permissions.contains(permissionsToCheck) checks for specific permissions and resolves to true if, and only if, all checked permissions are granted.


// Extension permissions are:
// "webRequest", "tabs", "*://*.mozilla.org/*"

let testPermissions1 = {
  origins: ["*://mozilla.org/"],
  permissions: ["tabs"]
};

const testResult1 = await browser.permissions.contains(testPermissions1);
console.log(testResult1); // true

We always encourage feedback from our developer community, so don’t hesitate to get in touch.

The post Unified Extensions Button and how to handle permissions in Manifest V3 appeared first on Mozilla Add-ons Community Blog.

Open Policy & Advocacy: Mozilla Comments on FTC’s “Commercial Surveillance and Data Security” Advance Notice of Proposed Rulemaking

Like regulators around the world, the US Federal Trade Commission (FTC) is exploring the possibility of new rules to protect consumer privacy online. We’re excited to see the FTC take this important step and ask key questions surrounding commercial surveillance and data security practices, from advertising and transparency to data collection and deceptive design practices.

Mozilla has a long track record on privacy. It’s an integral aspect of our Manifesto, where we state that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. It’s evidenced in our products and in our collaboration with others in industry to forge solutions to create a better, more private online experience.

But we can’t do it alone. Without rules of the road, sufficient incentive won’t exist to shift the rest of the industry to more privacy preserving practices. To meet that need, we’ve called for comprehensive privacy legislation like the American Data Privacy and Protection Act (ADPPA), greater ad transparency, and strong enforcement around the world. In our latest submission to the FTC, we detail the urgent need for US regulators and policymakers to take action to create a healthier internet.

At a high level, our comments focus on:

Privacy Practices Online: Everyone should have control over their personal data, understand how it’s obtained and used, and be able to access, modify, or delete it. To that end, Mozilla has long advocated for companies to adopt better privacy practices through our Lean Data Practices methodology. It’s also important that rules govern not just the collection of data but the uses of that data, in order to limit harmful effects – from the impact of addictive user interfaces on kids, to the use of recommendation systems, to discrimination in housing and jobs.

Privacy Preserving Advertising: The way in which advertising is conducted today is broken and causes more harm than good.  At the same time, we believe there’s nothing inherently wrong with digital advertising. It supports a large section of services provided on the web and it is here to stay, in some form. A combination of new research, technical solutions, increased public awareness, and effective regulatory enforcement can reform advertising for the future of the web.

Deceptive Design Practices: Consumers are being tricked into handing over their data with deceptive patterns, then that data is used to manipulate them. The use of deceptive design patterns results in consumer harms including limited or frustrated choice, lower quality, lower innovation, poor privacy, and unfair contracts. This is bread-and-butter deception – the online manifestation of what the FTC was established to address – and it is critical that the FTC has the authority to take action against such deception.

Automated Decision Making Systems (ADMS): For years, research and investigative reporting have uncovered instances of ADMS that cause or enable discrimination, surveillance, or other harms to individuals and communities. The risks stemming from ADMS are particularly grave where these systems affect, for example, people’s livelihoods, safety, and liberties. We need enforceable rules that hold developers and deployers of ADMS to a higher standard, built on the pillars of transparency, accountability, and redress.

Systemic Transparency and Data Sharing: We encourage the FTC to strengthen the mechanisms that empower policymakers and trusted experts to better understand what is happening on major internet platforms. To achieve this, we need greater access to platform data (subject to strong user privacy protections), greater research tooling, and greater protections for researchers.

Practices surrounding consumer data on the internet today, and the resulting societal harms, have put people and trust at risk. The future of privacy online requires industry to step up to protect and empower people online, and demands that lawmakers and regulators implement frameworks that create the ecosystem and incentive for a better internet ahead.

To read Mozilla’s full submission, click here.

The post Mozilla Comments on FTC’s “Commercial Surveillance and Data Security” Advance Notice of Proposed Rulemaking appeared first on Open Policy & Advocacy.

The Mozilla Blog: 4 ways a Firefox account comes in handy

An illustration shows a Firefox browser window with cycling arrows in the middle, a pop-up hidden password field and the Pocket logo next to the address bar. (Credit: Nick Velazquez / Mozilla)

Even people who are very online can use some help navigating the internet – from keeping credit card details safe when online shopping to generating a password when one simply doesn’t have any more passwords in them.

Using Firefox as your main browser helps take care of that. Want to level up? With a Firefox account, you can take advantage of the following features whether you’re using your desktop device, tablet or your phone.

1. See your bookmarks across devices

To easily find your go-to places on the web (aka your bookmarks) on your phone or tablet, use Firefox mobile for Android or iOS. Not only will you get the same privacy-first experience you enjoy when using Firefox on desktop, you’ll also have Firefox Sync, which lets you see your bookmarks wherever you log into your Firefox account. Firefox Sync allows you to choose the data you want to take with you. In addition to bookmarks, you also have the option to sync your browsing history, open tabs and your installed add-ons across devices. 

A Firefox browser pop-up shows a window asking the user to choose what they want to sync.

2. Use a secure password manager that goes with you wherever you are

Firefox has a built-in password manager that can generate a secure password when you’re creating a new account on a website. (Just click the password field and hit Use a Securely Generated Password. Firefox will save your login for that site.) When you’re using Firefox on your mobile device and you’re logged into your Firefox account, you’ll see your usernames and passwords right where you saved them.

3. Shop securely across devices with credit card autofill

Firefox will also automatically fill in credit card information that you saved when purchasing something online. You just need to enter your CVV number, which Firefox doesn’t save as a security measure. For extra protection, you can choose to require your device’s password, face ID or fingerprint before Firefox autofills your credit card data. Here’s how to turn that on. 

While this works both on desktop and mobile devices when you’re signed into your Firefox account, you can also opt to start shopping on one device and send your browser tab to another to complete your purchase. For example, you can add items to an online shopping cart on your phone but choose to check out on your laptop. 

4. Stay productive now, save that article or video for later

The internet is full of stories, whether it’s a long read about Gen Z’s internet habits or a video about nerdcore hip-hop. They’re a fun way to learn about the world, but sometimes, we need to set them aside so we can finish that research paper for class or slide deck for work. Just hit the Pocket button in the toolbar to easily save an article or video. When you’re ready, just log into Pocket with your Firefox account and you’ll find everything you’ve saved.

A screenshot from the Firefox browser shows the Pocket logo next to the address bar.

Switching to Firefox on your iOS or Android device is easy

If you already use Firefox on desktop, then you already know how Firefox beats other major browsers on security, privacy and functionality. You can easily enjoy the same benefits with a Firefox account on your phone or tablet by making Firefox your default browser on mobile. Here’s how to do that: 

A table shows a comparison of Firefox's portability vs. other browsers.

The internet can bring us to our favorite online spaces and take us to new, fascinating places at the tip of our fingers. A Firefox account lets you enjoy all the web has to offer while keeping your data safe – wherever you are. 

The post 4 ways a Firefox account comes in handy appeared first on The Mozilla Blog.

The Mozilla Blog: Over a quarter of parents believe their children don’t know how to protect their information online – Firefox can help with that

Parenting has never been easy. But with a generation growing up with groundbreaking technology, families are facing new challenges along with opportunities as children interact with screens everywhere they go — while learning at school, playing with friends and for on-the-go entertainment. 

We are previewing a new Mozilla Firefox survey, conducted in partnership with YouGov, to better understand families’ needs in the United States, Canada, France, Germany and the United Kingdom; we will release the full results in January 2023. We wanted to hear parents’ thoughts around online safety, as well as their biggest concerns and questions when their kids navigate through the sticky parts of the web before getting to the good stuff. Here are the top insights we learned from the survey:

  • Many parents believe their kids have no idea how to protect themselves online. About 1 in 3 parents in France and Germany don’t think their child “has any idea on how to protect themselves or their information online.” In the U.S., Canada and the U.K., about a quarter of parents feel the same way. 

As far as the safety of the internet itself, parents in the U.S. seem to be more trusting across all the countries surveyed: Almost 1 in 10 said they believe the internet is “very safe” for children. Parents in France trust the internet the least, with almost 75% finding it to be unsafe to some degree. 

  • U.S. parents spend the most time online compared to parents in other countries, and so do their children. Survey takers in the U.S. reported an average of 7 hours of daily internet use via web browsers, mobile apps and other means. Asked how many hours their children spend online on a typical day, U.S. parents said an average of 4 hours. That’s compared to 2 hours of internet use among children in France, where parents reported spending about 5 hours online every day. No matter where a child grows up, they spend more time online per day as they get older.
  • Yes, toddlers use the web. Parents in North America and Western Europe reported introducing their kids to the internet some time between 2 and 8 years old. North America and the U.K. skew younger, with kids first going online between the ages of 2 and 5 in about a third of households. Kids in France and Germany are introduced to the internet when they are older, between 8 and 14 years old.

Overall, the survey showed parents to be content with when they chose to introduce their children to internet safety, although in retrospect over 1 in 5 parents in the U.S., Canada and France would have preferred to do so at an even younger age.

Most parents speak to their children about internet safety between the ages of 5 and 8. Whatever the age, these conversations don’t have to be difficult. OK, it may be a teeny bit awkward, but you can lean on Firefox for a few starter topics to get the conversation going. To find out more about starting a Tech Talk, here is our Firefox guide to help steer the conversation in the right direction.

Methodology: 

This survey was conducted among parents between the ages of 25 and 55 years old living in the U.S., Canada, Germany, France and the U.K. who have children between 5 and 17 years old. The survey interviewed 3,699 participants between Sept. 21 and Sept. 29, 2022.


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

The post Over a quarter of parents believe their children don’t know how to protect their information online – Firefox can help with that  appeared first on The Mozilla Blog.

The Mozilla Thunderbird Blog: Important Message For Microsoft Office 365 Enterprise Users

In a coming release of the Thunderbird 102.x series, we will be making some changes to the way we handle OAuth2 authorization with Microsoft accounts, and this may involve some extra work for users currently using Microsoft-hosted accounts through their employer or educational institution.

In order to meet Microsoft’s requirements for publisher verification, it is necessary for us to switch to a new Azure application and application ID. However, some of these accounts are configured to require administrators to approve any applications accessing email.

We have already made the necessary changes in the current Thunderbird beta series.

If you are using a hosted Microsoft account, please temporarily launch Thunderbird 107.0b3 or later (download here) and attempt to log in, making sure to select “OAuth2” as your authentication method.

If you encounter a screen saying “Need admin approval” during the login process, please contact your IT administrators to approve the client ID 9e5f94bc-e8a4-4e73-b8be-63364c29d753 for Mozilla Thunderbird (it may appear to admins as “Mzla Technologies Corporation”).

We request the following permissions:

  • IMAP.AccessAsUser.All
  • POP.AccessAsUser.All
  • SMTP.Send
  • offline_access

After doing this, you may return to using the version you were using previously.

The post Important Message For Microsoft Office 365 Enterprise Users appeared first on The Thunderbird Blog.

The Mozilla Blog: How to talk to kids about social media

An illustration shows a confused face looking up at various social media icons. (Credit: Nick Velazquez / Mozilla)

Joy Cho smiles while posing for a photo.
Joy Cho is the founder and creative director of the lifestyle brand and design company Oh Joy! For two years in a row, she was named one of Time’s 30 Most Influential People on the Internet and has the most followed account on Pinterest with over 15 million followers. You can also follow her on Instagram and learn more about her work on ohjoy.com.

I’m part of the first generation of parents of children who’ve been exposed to the internet since birth. I remember the days without cellphones, email, social media, streaming TV shows and quick access to the web (hello, that AOL dial-up ringtone!). So often, I feel unsure about how to set up my kids for success with their own technology use – something my own parents never had to figure out. 

Social media in particular can be scary to think about when it comes to my kids, but I know it can be great too. I’m a designer and business owner who has made amazing connections online, and it has allowed me to create my own community on the internet.

But I have also seen the toll social media takes: making unhealthy comparisons, doomscrolling and being unable to focus on one thing at a time. Do you often find yourself watching TV while also scrolling through Instagram on your phone? I sure do. 

So how do we talk to our kids about social media?

First, consider your own habits

Every family is different and should handle the topic in a way that works for them. Just like rules for treats, chores, bedtime and allowance, social media is just another part of parenting that involves making the best choices for their family. 

From my experience as someone who spends a lot of time on social media for work, I believe that we should model the behavior we want our kids to have. Here are some things I’m doing and asking myself: 

1. How much am I on social media? And what am I doing while there?

How much of that time is productive (creating community, catching up with friends, learning something) vs. how much of it feels like I’m on social media just to pass time?

Looking at our own habits first will better inform how we can be role models for our kids. If you feel that you’re on your device more than you’d want to, this is a great time to modify that before establishing what to expect of your kids.

2. How much privacy do I want?

I know some families who don’t post any photos of their kids online. And then some families share what feels like every moment of every day in their kids’ lives on a public social media account. Some people post photos but cover their kids’ faces or only show their kids from the back. 

I used to share my kids’ photos and videos all the time when they were younger, but as they got older, I felt the need to protect their privacy more and more. Now, you will rarely see them on my public feeds. I use social media for business, and my kids are not part of my business, so now I make that distinction because that is what feels right for me and my family.

Your answer might be different. You might have 75 followers of your closest family and friends on a private account and social media is the easiest way to share updates about your family. Whatever you’re doing, make sure it’s intentional and feels right for you.

3. How can I use social media for good?

While social media often gets a bad rap, it also gives us incredible ways to connect with people, offer inspiration and keep us in tune with what’s going on in the world. Think about how you, too, can contribute to that positivity based on what you share and how you share it.

If there are accounts or people that bring upon feelings of comparison, anger or negativity in your daily scroll, consider muting those accounts or unfollowing. Don’t let anyone online steal your peace.

Reflect as a family

Once you’ve reflected and maybe changed a few of your own online habits, here are some questions to ask yourself about your kids’ tech use:

1. What devices and apps feel right for my kids at their current ages?

What am I OK with my kids seeing or using based on their ages? And how will it change as they get older? At what age can they have their own iPad or their own phone?

My kids are 8 and 11 and neither of them have their own phones yet. They have iPads, and they have access to apps that my husband and I have approved and feel are age-appropriate. They both can be on YouTube Kids solo but can only watch YouTube on our family TV (where the adults can more quickly see and hear what they are watching). They have access to streaming channels under the child settings, which help us feel better about the things they may come across on their own.

2. What kind of limits make sense for social media or device time?

Most kids (and adults!) can’t self-limit their time on a device. So it can be good to set limits until kids can regulate themselves. You can choose to change those limits based on their ages, behavior or as a special reward. Whether that’s three hours per day or one hour per week, decide what you think is appropriate, but make sure to have the conversation with them about it, too.

3. Consider parental controls

You can use family settings to limit time on devices and on specific apps. Some kids need their computer for homework, but you can define what they’ll be able to access and for how long, which helps set boundaries and stay on task for homework. Most streaming services also offer pre-set controls for kids as well. Weigh the benefits of these built-in safeguards based on your family’s needs. 

4. How does using social media affect my kids?

Studies that come out on this topic paint a bleak picture. A report by the brand Dove found that idealized beauty advice on social media can cause low self-esteem in 50% of girls. So before your kids start on social platforms, or even if they’re already active on them, you can ask them: Do you find yourself getting cranky when you’re on your device? Or sad, depressed, jealous, confused or angry? How can we prevent that from happening?

This is a good way to start discussing screen time, and having kids ask themselves how much screen time is too much. I find that some children have a major mood shift when they get a lot of screen time, and you can help them recognize that feeling, which will help their ability to regulate themselves.

5. How can my kids use the internet for good?

One thing that I like to remind myself is all the good that can come from the internet. My children have this incredible access to so much knowledge, most of the world’s art, connection to almost every part of the world and so many opportunities for fun and education. Depending on their age, their access and ability to use devices will vary. Can they play educational games or watch educational shows? Can they play games with their friends that allow them to socialize or learn how to work on a team? Can they watch craft videos and learn how to make something for the very first time? Look at the parts that are great about it and guide them in that direction.

Keep safety and privacy in mind

One thing to remember: Whatever I (or my kids) post on social media stays there forever.

While images, videos, tweets and messages can be deleted, others may have already seen, screenshotted or saved your content before it went away, and information can resurface later. We’ve seen so many examples of past social media posts coming back to hurt someone’s career or reputation, so make this clear to your children and keep the conversation going as they grow older.

I hope asking yourself these questions can help prepare you for having the “social media talk” with your own kids. You can always change it up and evolve your family’s take on it. Just as we need to modify things in other aspects of parenting, how our families handle the internet can evolve as technology changes, and as our kids, hopefully, grow up to be the most joyful versions of themselves. 


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post How to talk to kids about social media appeared first on The Mozilla Blog.

The Mozilla Thunderbird BlogThunderbird Supernova Preview: The New Calendar Design

In 2023, Thunderbird will reinvent itself with the “Supernova” release, featuring a modernized interface and brand new features like Firefox Sync. One of the major improvements you can look forward to is an overhaul to our calendar UI (user interface). Today we’re excited to give you a preview of what it looks like!

Since this is a work-in-progress, bear with us for a few disclaimers. The most important one is that these screenshots are mock-ups which guide the direction of the new calendar interface. Here are a few other things to consider:

  • We’ve intentionally made this calendar pretty busy to demonstrate how the cleaner UI makes the calendar more visually digestible, even when dealing with many events.
  • Dialogs, popups, tool-tips, and all the companion calendar elements are also being redesigned.
  • Many of the visual changes will be user-customizable.
  • Any inconsistent font sizes you see are only present in the mock-up.
  • Right now we’re showing Light Mode. Dark and High Contrast mode will both be designed and shared in the near future.
  • These current mock-ups were done with the “Relaxed” Density setting in mind, but of course a tighter interface with scalable font-size will be possible.

Thunderbird Supernova Calendar: Monthly, Weekly, Daily Views

Thunderbird Supernova Calendar: Monthly View (mock-up)

The first thing you may notice is that Saturday and Sunday are only partially visible. You can choose to visually collapse the weekends to save space.

But wait, we don’t all work Monday through Friday! That’s why you’ll be able to define what your weekend is, and collapse those days instead.

And do you see that empty toolbar at the top? Don’t worry, all the calendar actions will be reachable in context, and the toolbar will be customizable. Flexibility and customization are what you’ve come to expect from Thunderbird, and we’ll continue to provide them.

Thunderbird Supernova Calendar: Weekly View (mock-up)

Speaking of customization, visual customization options for the calendar will be available via a menu popup. Some (but not all) of the options you’ll see here are:

  • Hide calendar color
  • Hide calendar icons
  • Swap calendar color with category color
  • Collapse weekends
  • Completely remove your weekend days

Thunderbird Supernova Calendar: Daily View (mock-up)

You’ll also see some new hotkey hints in the Search boxes (top middle, top right).

Speaking of Search, we’re moving the “Find Events” area into the side pane. A drop-down will allow choosing which information (such as title, location, and date) you want each event to show.

Thunderbird Supernova Calendar: Event View

Thunderbird Supernova Calendar: Event View (mock-up)

The Event view also gets a decidedly modernized look. The important details have a lot more breathing room, yet subheadings like Location, Organizer and Attendees are easier to spot at a glance. Plus, you’ll be able to easily sort and identify the list of attendees by their current RSVP status.

By default, getting to this event preview screen requires only 1 click. And it’s 2 clicks to open the edit view (which you can do either in a new tab or a separate floating window). Because you love customization, you can control the click behavior. Do you want to skip the event preview screen and open the edit screen with just 1 click? We’ll have an option for that in preferences.

Feedback? Questions?

Life gets busy, so we want our new calendar design to look and feel comfortable. It will help you more efficiently sift, sort, and digest all the crucial details of your day.

Do you have questions or feedback about the new calendar in Thunderbird Supernova? We have a public mailing list specifically for User Interface and User Experience in Thunderbird, and it’s very easy to join.

Just head over to this link on TopicBox and click the “Join The Conversation” button!


The post Thunderbird Supernova Preview: The New Calendar Design appeared first on The Thunderbird Blog.

SUMO BlogHow to contribute to Mozilla through user support

SUMO contributors posing in front of the Inukshuk statue in Whistler

It is with great pleasure that I am announcing the launch of our new contribute page in SUMO a.k.a SUpport.Mozilla.Org. SUMO is one of the oldest contribution areas in Mozilla, and we want to show you just how easy it is to contribute!

There are many ways you can get involved with SUMO, so getting started can be confusing. However, our new contribute page should help with that, since the pages are now updated with simpler steps to follow and a refreshed design.

We also added two new contribution areas, so now we have five ways to contribute:

  1. Answer questions in the support forum
  2. Write help articles
  3. Localize help articles
  4. Provide support on social media channels (newly added)
  5. Respond to mobile store reviews (newly added)

The first 3 areas are nothing new for SUMO contributors. You can contribute by replying to forum posts, writing help articles (or Knowledge Base articles as we call them here), or translating the help article’s content to your respective locales.

Providing support on social media channels is also nothing new to SUMO. But with the ease of tools that we have now, we are able to invite more contributors to the program. In 2020, we started the @FirefoxSupport account on Twitter and as of now, we have posted 4115 tweets and gained 3336 followers. If you’re a social media enthusiast, the Social Support program is a perfect contribution area for you.

Responding to user reviews on the mobile store is something relatively new that we started a couple of years ago to support the Firefox for Android transition from Fennec to Fenix. We realize that the mobile ecosystem is a different territory with different behavior. We wanted to make sure that we serve people where they need us the most, which means providing support for those who leave us app reviews. If this sounds more like your thing, you should definitely join the Mobile Store Support program.

And if you still can’t decide, you can always start by saying hi to us in our Matrix room or contributor forums.

 

Keep on rocking the helpful web,

Kiki

The Mozilla BlogHow to talk to kids about video games

Credit: Nick Velazquez / Mozilla

Dr. Naomi Fisher poses for a photo.
Dr. Naomi Fisher is a U.K.-based clinical psychologist specializing in trauma and autism. She’s the author of “Changing Our Minds: How Children Can Take Control of Their Own Learning.” Her writing has also appeared in the British Psychological Society’s The Psychologist, among other publications. You can follow her on Twitter. Photo: Justine Diamond

I spend a lot of time talking to parents about screens. Most of those conversations are about fear.  

“I’m so worried about my child withdrawing into screens,” they say. “Are they addicted? How can I get them to stop?”  

I understand where they are coming from. I’m a clinical psychologist with 16 years of experience working in the U.K. and France, including for the U.K. National Health Service and in private practice. I’m also the mother of an 11-year-old girl and a teenage boy. 

“Screen time” has become one of the bogeymen of our age. We blame screens for our children’s unhappiness, anger or lack of engagement. We worry about screen time incessantly, so much so that sometimes it seems that the benchmark of a good parent in 2022 is the strictness of your screen time limits.

Credit: Nick Velazquez / Mozilla

Defining screen time

The oddest thing about this is that “screen time” doesn’t really exist. You can’t pin it down unless you think that the screen itself – a sheet of glass – has a magical, harmful effect. A screen is merely a portal to many activities that also happen offline. These include gaming, chatting, reading, writing, watching documentaries, coding, learning languages and art – I could go on.  

Just yesterday, my teenager and I played the online word puzzle Redactle together. We’ve been doing it daily for months, and we’ve learned about history, science, poetry and bed bugs along the way. Do word puzzles become damaging because they are accessed via a sheet of glass? 

Still, parents are afraid of “too much” screen time, and they want firm answers. “Is 30 minutes a day too much?” they ask. When in turn I ask them what their children are doing on screens, they rarely have much idea. “Watching rubbish” or “wasting time” are common responses. It’s not often that parents spend time on screens with their children, many of them saying that they don’t want to encourage it.

Stop counting the minutes

I tell parents to stop counting the minutes for a moment, and instead spend some time watching their children without judgment. They return surprised. 

Parents see their children socializing with friends as they play. They’re designing their own mini-games, or memorizing the countries of the world. They’ve built the Titanic in Minecraft. The “screen time bogeyman” starts to melt away.

For me, screens give families an opportunity.

Photo: Lauren Psyk

They are a chance to connect with our children by doing something they love. And for some young people, there are benefits that they can’t find elsewhere.

Some children I meet don’t feel competent elsewhere in their lives, but feel good about themselves when they play video games. They tell me about Plants vs. Zombies and they come alive. We exchange tips on our favorite way to defend the house from marauding zombies.  They love games, but everyone is telling them that they should be doing something else. Often, no other adult seems interested.

I see young people who are really isolated. They have difficulties making friends, or they have been bullied at school. Online gaming can be their first step towards making connections. They don’t have to start with talking: They type on the in-game chat and when they feel ready, move onto voice chat. They emerge in their own time.  

Some of the young people I work with have difficulty keeping calm throughout the day. For them, their devices provide a way to take up space. They put on their headphones and sink into a familiar game. They recharge, letting them cope with their day for a bit longer. It’s a wonderfully portable way to decompress.

Do games cause unhappiness?

I’m not saying that there’s never a reason to worry. I meet some young people who are very unhappy. They use gaming to avoid their thoughts and feelings, and they get very angry when asked to stop. The adults around them usually blame the games for their unhappiness, thinking that banning them would help improve their well-being.

But here’s the issue: Gaming is rarely the cause of the problem. Instead, it’s a solution that a young person has found to cope with the way they are feeling. Sometimes, gaming can seem like the only thing that makes them happy. Banning video games takes that away, causing a child to feel angry with their parents at the same time. 

We need to address the root of their unhappiness rather than ban something they love, and we need to nurture that relationship. Sometimes, dropping the judgment around gaming can lead to parents and children reconnecting rather than fighting.

Valuing our children’s interests

Appreciating our children’s love of screens is far more than just showing an interest in what they do. When they were little, we looked after their most precious toys, even if they were ragged and dirty. They were important because they were important to our child. We didn’t tell them that their teddy bears were rubbish and we’d like to get rid of them (even if we secretly thought exactly that), because we knew that would hurt them.  

Now they’re older, games and digital creations have replaced stuffed toys. When we demonize screens, we demonize the things our children love. We tell them that the things they value aren’t valuable. We tell them the things they enjoy most are a waste of time. That is never going to be a good way to build a strong and supportive relationship.

Instead, I encourage parents to join their kids. Gaming might bore you, but you can be interested in your child and what makes them come alive. You can value their joy, their curiosity and their exploration. You can give the games a go and see what they find so enthralling. Download Brawl Stars, Minecraft or Roblox, and see if your child will show you how to play. If they don’t want to, find a tutorial video for yourself.

Let them see that you are interested in their passions, because you are interested in them. They will see that you value them for who they are. And from that seed, many good things can grow.  


How to talk to kids about video games

Watch what your kids do on screens, even if at first you don’t see the point. Ask them to tell you about it or just observe.

Ask if you can join them, even if that means watching videos together. Resist the urge to denigrate what they are doing. Search for more websites similar to those they find interesting.

Try something different together. Make suggestions and expand their on-screen horizons. There are quizzes galore on Sporcle, or clever spin-offs from Wordle like Quordle, Absurdle and Fibble.

Connect over a board game. Many family board games have virtual editions. Our family loves the Evolution app. Try Carcassonne, Forbidden Island, Settlers of Catan, or the Game of Life.

Hang out together, apart. Online gaming can be a great way to spend time with your kids when you aren’t with them. Kids can struggle to talk remotely but playing Minecraft or Cluedo together can be lots of fun, even miles apart.


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post How to talk to kids about video games appeared first on The Mozilla Blog.

The Mozilla BlogMozilla Ventures: Investing in responsible tech

Early next year, we will launch Mozilla Ventures, a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction 

_____

Many people complain about today’s tech industry. Some say the internet has lost its soul. And some even say it’s impossible to make it better. 

My response: we won’t know unless we try, together. 

Personally, I think it is possible to build successful companies — and great internet products — that put people before profits. Mozilla proves this. But so do WordPress, Hugging Face, ProtonMail, Kickstarter, and a good number of others. All are creating products and technology that respect users — and that are making the internet a healthier place.

I believe that, if we have A LOT more founders creating companies like these, we have a real chance to push the tech industry — and the internet — in a better direction. 

The thing is, the system is stacked against founders like this. It is really, really hard. This struck us when Mozilla briefly piloted a startup support program a couple of years ago. Hundreds of young founders and teams showed up with ideas for products and tech that were ‘very Mozilla’. Yet we also heard how hard it was for them to find mission-aligned investors, or mentors and incubators who shared their vision for products that put people first.

Through this pilot, Mozilla found the kinds of mentors these founders were looking for. And, we offered pre-seed investments to dozens of companies. But we also saw the huge need to do more, and to do it systematically over time. Mozilla Ventures will be our first step in filling this need. 

Launching officially in early 2023, Mozilla Ventures will start with an initial $35M, and grow through partnerships with other investors.

The fund will focus on early stage startups whose products are designed to delight users or empower developers — but with the sort of values outlined in the Mozilla Manifesto baked in from day one. Imagine a social network that feels like a truly safe place to connect with your closest family and friends. Or an AI tooling company that makes it easier for developers to detect and mitigate bias when developing digital products and services. Or a company offering a personal digital assistant that is both a joy to use and hyper focused on protecting your privacy. We know there are founders out there who want to build products and companies like these, and that want to do so in a way that looks and feels different than the tech industry of today. 

Process-wise, Mozilla Ventures will look for founders with a compelling product vision and alignment with Mozilla’s values. From there, it will look at their team, their product and their business, just as other investors do. And, where all these things add up, we’ll invest. 

The fund will be led by Managing Partner Mohamed Nanabhay. Mohamed brings a decade of experience investing in digital media businesses designed to advance democracy and free speech where those things are hard to come by. Which perfectly sets him up for the job ahead — finding and funding founders who have the odds stacked against them, and then helping them succeed. 

Over the past few months, Mohamed and I have spent a good amount of time thinking about the basic thesis behind the fund (find great startups that align with the Mozilla Manifesto) — and testing this thesis out through conversations with founders. 

Even before we publicly announced Mozilla Ventures in November 2022, we’d already found three companies that validate our belief that companies like this are out there — Secure AI Labs, Block Party and HeyLogin. They are all companies driven by the idea that the digital world can be private, secure, respectful, and that there are businesses to be built creating this world. We’re honored that these companies saw the same alignment we did. They all opened up space on their cap table for Mozilla. And we invested.

Our first few months of conversations with founders (and other investors) have also underlined this: we have more questions than answers. Almost everyone we’ve talked to is excited by the idea of pushing the tech industry in a different direction, especially younger founders. On the flip side, everyone sees huge challenges — existing tech monopolies, venture funding that chases growth at all costs, public cynicism. It’s important to be honest: we don’t have all the answers. We will (collectively) need to work through these challenges as we go. So, that’s what we will do. Our plan is to continue talking to founders — and making select investments — in the months leading up to the launch of the fund. We will also keep talking to fellow travelers like Lucid Capitalism, Startups and Society and Responsible Innovation Labs, who have already started asking some of the tough questions. And, we will continue speaking with a select group of potential co-investors (LPs) who share our values. We believe that, together, we have a chance of putting the tech industry on a truly different course in the years ahead.

The post Mozilla Ventures: Investing in responsible tech appeared first on The Mozilla Blog.

Mozilla Add-ons BlogBegin your MV3 migration by implementing new features today

Early next year, Firefox will release Mozilla’s Manifest V3 (MV3). Therefore, it’s an ideal time to consider migrating your Manifest V2 extensions. One of our goals throughout our approach to MV3 has been to gradually release new WebExtensions features that enable you to begin implementing APIs that are compatible with MV3. To this end, we recently released some exciting new features you should know about…

 

MV3 changes you can make to your extension right now

 

Event Pages

In Firefox MV3, we’re providing Event Pages as the background script. Event Pages retain several important features, including access to DOM and WebAPIs that are not available with the new service worker backgrounds used in Google Chrome.

We enabled Event Pages for MV2 (aka non-persistent background pages that can be closed and respawned based on browser events) in Firefox 106. This update is a major step toward MV3 because all extensions must adopt Event Pages in MV3. But you can make this change today and use new Event Pages benefits such as:

  • Resiliency against unexpected system crashes. Now we can restart a corrupted background page without hindering the user.
  • No need for an extension reboot to reset a background page.
  • Save on memory resources by putting idle background pages to sleep.

How do I implement Event Pages?

To turn your background into an Event Page, set `persistent: false` on the background page in your manifest.json. Here’s more info on background scripts with implementation details.
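
For illustration, here is a minimal sketch of the relevant manifest.json fragment (the script file name is a placeholder):

```json
{
  "manifest_version": 2,
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  }
}
```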

Now that your background script is non-persistent, you need to tell Firefox when to wake up the page if it’s suspended. There are two methods available:

  1. Use an event listener like `browser.tabs.onCreated` in your background script. Event listeners must be added at the top level of your script. This way, if your background is sleeping, Firefox knows to wake the script whenever a new tab is spawned. This works with nearly all events in the WebExtensions API (see the sketch after this list). Here’s more info on adding listeners. (Note that Firefox recognizes arguments passed to addListener and does not create multiple listeners for the same set of arguments.)
  2. Use `browser.runtime.getBackgroundPage` if you need a background page to run processes unrelated to events. For instance, you may need a background script to run a process while the user is involved with a browser action or side panel. Use this API anytime you need direct access to a background page that may be suspended or closed. Here’s more info on background script functions.
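
As a rough sketch of the first method (the listener bodies are illustrative, not taken from any real extension):

```js
// background.js — listeners registered at the top level tell Firefox which
// events should wake this Event Page if it has been suspended.
browser.tabs.onCreated.addListener((tab) => {
  // The page is respawned (if necessary) before this callback runs.
  console.log("New tab created:", tab.id);
});

browser.runtime.onMessage.addListener((message) => {
  // Messages from content scripts also wake the page; returning a Promise
  // sends the response back to the sender.
  return Promise.resolve({ received: message?.type ?? "unknown" });
});
```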

Menus and Scripting APIs also support persistent data:

  • Menu items created by an event page are available after they’re registered — even if the event page is terminated. The event page respawns as necessary to handle menu events.
  • Registered scripts can be injected into matching web pages without the need for a running Event Page.

Scripting

You can take another big step toward MV3 by switching to the new Scripting API. This API consolidates several scripting related APIs — contentScripts.register(), tabs.insertCSS(), tabs.removeCSS(), and tabs.executeScript() — and adds capabilities to register, update, and unregister content scripts at runtime.

Also, arbitrary strings can no longer be executed because the code parameter has been removed. So you’ll need to move any arbitrary strings executed as scripts into files contained within the extension, or to the `func` property used, if necessary, with the `args` parameter.

This API requires the scripting permission.
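
Here is a hedged sketch of what that migration can look like; the tab id handling and the injected function are purely illustrative:

```js
// Instead of executing an arbitrary code string, pass a function plus
// arguments to the Scripting API (requires the "scripting" permission).
async function outlineLinks(tabId) {
  await browser.scripting.executeScript({
    target: { tabId },
    func: (color) => {
      // Runs in the page context.
      document.querySelectorAll("a").forEach((a) => {
        a.style.outline = `2px solid ${color}`;
      });
    },
    args: ["rebeccapurple"],
  });
}
```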

Preparing for MV3 restrictions

MV3 will impose enhanced restrictions on several features. Most of these restrictions are outlined in the MV3 migration guide. By following the steps detailed in the guide, you can begin modifying your MV2 extension now so that it complies more closely with MV3 requirements. A few noteworthy areas include…

Conform to MV3’s Content Security Policy

Mozilla’s long-standing add-on policies prohibit remote code execution. In keeping with these policies, the content_security_policy field no longer supports sources permitting remote code in script-related directives (such as `script-src`), nor the `'unsafe-eval'` keyword. The only permitted values for the `script-src` directive are `'self'` and `'wasm-unsafe-eval'`. `'wasm-unsafe-eval'` must be specified in the CSP if an extension wants to use WebAssembly. In MV3, content scripts are subject to the same CSP as other parts of the extension.

Historically, a custom extension CSP required object-src to be specified. This is not required in MV3 and was removed from MV2 in Firefox 106 (see object-src in content_security_policy on MDN). This change makes it easier for extensions to customize the CSP with minimal boilerplate.

The Content Security Policy (CSP) is more restrictive in MV3. If you are using a custom CSP in your MV2 add-on, you can validate the CSP by temporarily running it as an MV3 extension.  See the MV3 migration guide for details.
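
For reference, a minimal sketch of an MV3-style declaration that stays within the permitted values (include `'wasm-unsafe-eval'` only if your extension actually uses WebAssembly):

```json
"content_security_policy": {
  "extension_pages": "script-src 'self' 'wasm-unsafe-eval';"
}
```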

Upgrade insecure requests – https by default

When communicating with external servers, extensions will use https by default. Extensions should replace the “http:” and “ws:” schemes in their source code with the secure alternatives, “https:” and “wss:”. The default MV3 CSP includes the upgrade-insecure-requests directive, which enforces the use of secure schemes even if an insecure scheme was used.

Extensions can opt out of this https requirement by overriding the content_security_policy and omitting the upgrade-insecure-requests directive, provided that no user data is transmitted insecurely through the extension.

Opt-in permissions

All MV3 permissions, including host permissions, are opt-in for users. This necessitated a significant Firefox design change — the introduction of the Unified Extensions button — so users can easily grant or deny website-specific permissions at any time (the button is enabled on Firefox Nightly for early testing and feedback).

The Unified Extensions button gives Firefox users direct control over website-specific extension permissions.

Therefore, you must ensure your extension has permission whenever it accesses APIs covered by a permission, accesses a tab, or uses the Fetch API. MV2 already has APIs that enable you to check for permissions and watch for changes in permissions. When necessary, you can get the current permission status. However, rather than always checking, use the permissions.onAdded and permissions.onRemoved event APIs to watch for changes.
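
A minimal sketch of that pattern (the host permission below is a placeholder):

```js
const NEEDED = { origins: ["https://example.com/*"] };
let granted = false;

async function refreshPermissionState() {
  // Query the current state once...
  granted = await browser.permissions.contains(NEEDED);
}

refreshPermissionState();

// ...then keep it in sync by watching for changes instead of re-checking
// before every API call.
browser.permissions.onAdded.addListener(refreshPermissionState);
browser.permissions.onRemoved.addListener(refreshPermissionState);
```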

Update content scripts

While content scripts continue to have access to the same extension APIs in MV3 as in MV2, most of the special exceptions and extension specific capabilities have been removed from the web platform APIs (DOM APIs). In particular, the extension’s host permissions no longer apply to Fetch and XMLHttpRequest.

CSP for content scripts

With MV2 no CSP is applied to content scripts. In MV3, content scripts are subjected to the same CSP as other parts of the extension (see CSP for content scripts on MDN). Notably, this means that remote code cannot be executed from the content script. Some existing uses can be replaced with functionality from the Scripting API such as func and args (see the “Scripting” section above), which is available to MV2 extensions.

XHR and Fetch

With MV2 you also have access to some APIs, such as XMLHttpRequest and Fetch, from both extension and web page contexts. This allows for cross-origin requests in a way that is not available in MV3. In MV3, XHR and Fetch operate as if the web page itself were using them, and are subject to cross-origin controls.

Content scripts can continue using XHR and Fetch by first making requests to background scripts. A background script can then use Fetch to get the data and return the necessary information to the content script. To avoid privacy issues, set the “credentials” option to “omit” and the cache option to “no-cache”. In the future, we may offer an API to support the make-request-on-behalf-of-a-document-in-a-tab use case.
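
Here is a minimal sketch of that relay pattern; the message shape is made up for illustration:

```js
// content-script.js — ask the background script to fetch on the page's behalf.
async function fetchViaBackground(url) {
  return browser.runtime.sendMessage({ type: "fetch-url", url });
}

// background.js — perform the cross-origin request and return only the data
// the content script needs.
browser.runtime.onMessage.addListener(async (message) => {
  if (message?.type !== "fetch-url") {
    return;
  }
  const response = await fetch(message.url, {
    credentials: "omit", // don't attach the user's cookies
    cache: "no-cache",   // don't leak state via the HTTP cache
  });
  return response.text();
});
```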

Will Chrome MV3 extensions work in Firefox MV2?

The release of MV3 in Firefox is distinct from Chrome. Add-ons intended to work across different browsers will, in most cases, require some level of adjustment to be compatible in both Firefox and Chrome. That said, we are committed to a high level of compatibility. We will be providing additional APIs and features in the near future. If you’ve converted your Chrome extension to Google’s MV3, you may be able to consolidate some of those changes into your Firefox MV2 extension. Here are a few areas to investigate:

  • Service Workers are not yet available in Firefox; however, many scripts may work interchangeably between Service Workers and Event Pages, depending on functionality. To get things working, you may need to remove service-worker-specific APIs. See Service Worker Global Scope for more information.
  • DNR (declarativeNetRequest) is not yet available in Firefox. Firefox retains WebRequest blocking in MV3, which can be used in place of DNR. When DNR is available, simple request modifications can be moved over to DNR.
  • The storage.session API is not yet available in Firefox. You can use other storage mechanisms in the meantime.

Hopefully, we’ve provided helpful information so you can use the new MV2 features to start your migration to MV3. As always, we appreciate your feedback and welcome questions. Here are the ways to get in touch:

The post Begin your MV3 migration by implementing new features today appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.orgRevamp of MDN Web Docs Contribution Docs

The MDN Web Docs team recently undertook a project to revamp and reorganize the “Contribution Docs”. These are all the pages on MDN that describe what’s what – the templates and page structures, how to perform a task on MDN, how to contribute to MDN, and the community guidelines to follow while contributing to this massive open source project.

The contribution docs are an essential resource that helps authors navigate the MDN project. Both the community and the partner and internal teams reference them regularly whenever we want to cross-check our policies or how-tos in any situation. Therefore, it was becoming important that we spruce up these pages to keep them relevant and up to date.

Cleanup

This article describes the updates we made to the “Contribution Docs”.

Reorganization

To begin with, we grouped and reorganized the content into two distinct buckets – Community guidelines and Writing guidelines. This is what the project outline looks like now:

  • You’ll now find all the information about open source etiquette, discussions, process flows, users and teams, and how to get in touch with the maintainers in the  Community guidelines section.
  • You’ll find the information about how to write for MDN, what we write, what we regard as experimental, and so on in the Writing guidelines section.

Next, we shuffled the information around a bit so that logically similar pieces sit together. We also collated information that was scattered across multiple pages into more logical chunks.

For example, the Writing style guide now also includes information about “Write with SEO in mind”, which was earlier a separate page elsewhere.

We also restructured some documents, such as the Writing style guide. This document is now divided into the sections “General writing guidelines”, “Writing style”, and “Page components”. In the previous version of the style guide, everything was grouped under “Basics”.

Updates and rewrites

In general, we reviewed and removed outdated as well as repeated content. The cleanup effort also involved doing the following:

  • Removing and redirecting common procedural instructions, such as setting up Git and using GitHub, to the GitHub docs, instead of repeating the steps on MDN.
  • Moving some repository-specific information to the respective repository. For example, a better home for the content about “Matching web features to browser release version numbers” is in the mdn/browser-compat-data repository.
  • Rewriting a few pages to make them relevant to the currently followed guidelines and processes.
  • Documenting our process flows for issues and pull requests on mdn/content vs other repositories on mdn. This is an ongoing task as we tweak and define better guidelines to work with our partners and community.

New look

As a result of the cleanup effort, the new “Contribution Docs” structure looks like this:

Community guidelines

Writing guidelines

 

Comparing the old with the new

The list below will give you an idea of the new home for some of the content in the previous version:

  • “Contributing to MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs
  • “Get started on MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > Getting started with MDN Web Docs
  • “Basic etiquette for open source projects”
    • New home: Community guidelines > Contributing to MDN Web Docs > Open source etiquette
  • “Where is everything on MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > MDN Web Docs Repositories
  • “Localizing MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > Translated content
  • “Does this belong on MDN Web Docs”, “Editorial policies”, “Criteria for inclusion”, “Process for selection”, “Project guidelines”
    • New home: Writing guidelines > What we write
  • “Criteria for inclusion”, “Process for selection”, “Project guidelines”
    • New home: Writing guidelines > What we write > Criteria for inclusion on MDN Web Docs
  • “MDN conventions and definitions”
    • New home for definitions: Writing guidelines > Experimental, deprecated and obsolete
    • New home for conventions: Writing guidelines > What we write
  • “Video data on MDN”
    • New home: Writing guidelines > How-to guides > How to add images and media
  • “Structured data on MDN”
    • New home: Writing guidelines > How-to guides > How to use structured data
  • “Content structures”
    • New home: Writing guidelines > Page structures

Summary 

The Contribution Docs are working documents — they are reviewed and edited regularly to keep them up to date with editorial and community policies. Giving them a good spring clean allows easier maintenance for us and our partners.

The post Revamp of MDN Web Docs Contribution Docs appeared first on Mozilla Hacks - the Web developer blog.

SUMO BlogIntroducing Lucas Siebert

Hey folks,

I’m super delighted to introduce you to our new Technical Writer, Lucas Siebert. Lucas is joining the content team alongside Abby and Fabi. Some of you may have already met him in our community call in October. Here’s a bit more info about Lucas:

Hi, everyone! I’m Lucas, Mozilla’s newest Technical Writer. I’m super excited to work alongside you all to provide content for our knowledge base. You will find me authoring, proofreading, editing, and localizing articles. If you have suggestions for making our content more accurate and user-friendly, please get in touch!

Please join me to congratulate and welcome Lucas!

Mozilla L10NL10n Report: October 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

First of all, thanks to all localizers who contributed to a successful MR release (106) for Firefox desktop. While the new content wasn’t as large as previous major releases, it was definitely challenging, with new feature names added for the first time in a long time.

What’s next? We expect a period of stabilization, with bug fixes that will require new strings, followed by a low volume of new content. We’ll make sure to keep an eye out for the next major release in 2023, and provide as much context as possible for both translation and testing.

Now more than ever it’s a good time to make sure you’re following the Bugzilla component for your locale, testing Nightly builds, and keeping an eye out for potential feedback on social media.

One other update is that we have made significant progress in removing legacy formats from Firefox:

  • All DTD strings have been removed and migrated to Fluent. Given the nature of our infrastructure — we need to support all shipping versions including ESR — the strings will remain available in Pontoon until late Summer 2023, when Firefox ESR 102 will become obsolete. In the meantime, all these files have been marked as low priority in Pontoon (see for example Italian, tag “Legacy DTD strings (ESR 102)”).
  • We have started migrating some plural strings from .properties to Fluent. We are aware that plural forms in .properties were confusing, using a semicolon as separator and only a comment to distinguish them from standard strings. For this reason, we’ll also try to prevent developers from adding new plural strings using this format.

What’s new or coming up in mobile

We have recently launched our Major Release on both Mobile and Desktop! This was the v106 release. Thank you to all localizers who have worked hard on this global launch. There were more than 274 folks working on this, and (approximately) 67,094 translations!

Thank you!

Here are the main MR features on mobile:

  • New wallpapers
  • Recently synced tabs will now appear in the “Jump Back” section of your home page
  • Users will see CFR (UI popups) pointing to the new MR features. Existing users updating to 106 should also see new onboarding screens introducing the MR features

What’s new or coming up in web projects

Firefox Relay Website

A bunch of strings were added as the result of a new feature that’s only available in Canada and the US at the moment. Locale-specific files were created. This is the first time a product team has targeted non-English users as well as English users in both countries with a new feature. Since we don’t have Canadian French and US Spanish communities, these pages were assigned to the French and Mexican Spanish communities respectively. Please give these pages higher priority as they are time sensitive and there is a promotion going on. The promotion encourages users to sign up for both Firefox Relay and Mozilla VPN as a bundle at a discounted price. Thanks to both communities for helping out.

There will be promotional strings added to the Firefox Relay Add-on project. The strings are available for all locales to localize but the promotion is only visible for users in the US and Canada.

What’s new or coming up in Pontoon

Pontoon profile pages have a brand new look: check out this blog post for more information about this change, and don’t forget to update your profile with relevant contact information, to help both project managers and fellow localizers get in touch if needed.

Events

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

 

Image by Elio Qoshi

 

 

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

 

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Blog of DataThis Week in Glean: Page Load Data, Three Ways (Or, How Expensive Are Events?)

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. All “This Week in Glean” blog posts are listed in the TWiG index).

At Mozilla we make, among other things, Web Browsers which we tend to call Firefox. The central activity in a Web Browser like Firefox is loading a web page. It gets done a lot by each and every one of our users, and so you can imagine that data about pageloads is of important business interest to us.

But exactly because this is done a lot and by every one of our users, this inspires concerns of scale and cost. How much does it cost us to learn more about pageloads?[0]

As with all things in Data, the answer is the same: “Well, it depends.”

In this case it depends on how you record the data. How you record the data depends on what questions you hope to answer with it. We’re going to stick to the simplest of questions to make this (highly-suspect) comparison even remotely comparable.

Option 1: Just the Counts, Ma’am

I say page loads are done a lot, but how much is “a lot”? If that’s our only question, maybe the data we need is simply a count of pageloads. Glean already has a metric type for counting things, so it should be fairly quick to implement.

This should be cheap, right? Just a single number? Well, it depends.

Scale 1: Frequency

The count of pageloads is just a single number. One, maybe as many as eight, bytes to record, store, transmit, retain, and analyze. But Firefox has to report it more than once, so we need to first scale our cost of “one, maybe as many as eight, bytes” by the number of times we send this information.

When we first implemented Firefox’s pageload count in Glean, I wanted to send it on the builtin “metrics” ping which is sent once a day from anyone running Firefox that day[1]. In an effort to gain more complete and timely data, we ended up adding it to the builtin “baseline” ping which is sent (on average for Firefox Desktop) 8 or more times per day.

For our frequency scale we thus use 8/day.

Scale 2: Population

These 8 recordings per day are sent by about 200M users over a month. Days and months aren’t easy to scale between as not all users use Firefox every day, and our population gains new users and loses old users at variable rates… so I recalculated the Frequency scale to be in terms of months and found that we get 68 pings per month from these roughly 200M users.

So the cost is pretty easy to calculate then? Whatever the cost is of storing and transmitting 200M x 68/month x eight bytes ~= 109 GB?

Not entirely. But as long as those other costs are comparable between options, we can just treat them as noise. This cost, rendered in the size of the data, of about 109GB? It’ll do.

Option 2: What an Event

Page loads are interesting not just in how many of them there are, but also about what type of load they are and how long the load took. The order of a page load in between other events might also be of interest: did it happen before or after some network trouble? Did a bunch of pageloads happen all at once, or spread across the day? We might wish to instrument page loads as Glean events.

Events are each more expensive than a count. They carry a timestamp (eight bytes) and repeat their names each time they’re recorded (some strings, say fifteen bytes).

(We are not counting the load type or how long the load took in our calculations of the size of an individual sample as we’re still trying to compare methods of answering the same “How many page loads are there?” question.)

Scale 3: Page Loads

“Each time they’re recorded”, huh. Guess that means we get to multiply by the number of page loads. Each Firefox Desktop user, over the course of a month, loads on average 1190 pages[2]. This means instead of sending 68 numbers a month, we’re sending 1190 batches of strings a month.

So the comparable cost is whatever the cost is of storing and transmitting 200M x (eight bytes plus fifteen bytes) x 1190 ~= 5.47TB.

We’ve jumped an order of magnitude here. And we’re not done.

Option 3: Custom Pings, and Custom Pings Only

What if the context we wish to record alongside the event of a page load cannot fit inside Glean’s prudent “event” metric type limits? What if the collected pageload data would benefit from a retention limit or access control list different from other counts or events? What if you want to submit this data to be uploaded as soon as it has been recorded? In that case, we could send a pageload as a Glean custom ping.

We’ve not (yet) done this in Firefox Desktop (at least partially because it complicates ordering amongst other events: the Glean SDK expends a lot of effort to ensure the timestamps between events are reliable. Ping times are client times which are subject to the whims of the user.), so I’m going to get even hand-wavier than before as I try to determine how large each individual data sample will be.

A Glean custom ping without any metrics in it comes to around 500 bytes[3]. When our data platform ingests the ping and turns it into a row in a dataset, we add some metadata which adds another 300 bytes or so (which only affects storage inside the Data Platform and doesn’t add costs to client storage or client bandwidth).

We could go deeper and cost out the network headers, the costs of using TLS to ensure the integrity of the connection… but we’d be here all day. So I’m gonna call that 200 bytes to make it a nice round 1000 bytes per ping.

We’re sending these pings per pageload, so the cost is whatever the cost is of storing and transmitting 200M x 1190 x 1000 bytes = 238TB.

Rule of Thumb: 50x

There you have it: for each step up the cost ladder you’re adding an extra 50x multiplier to the cost of storing and transmitting the data. The reality’s actually much worse if it’s harder to analyze and reason about the data as it gets more complex (which it in most cases is) because, as you might remember from one of my previous explorations in costing out metrics: it’s the human costs of things (like analysis) that really getcha.
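
As a quick sanity check on those numbers, here is a back-of-the-napkin sketch using the post’s own rough figures (they are estimates, not measured constants):

```js
const USERS = 200e6; // ~monthly population

const countBytes = USERS * 68 * 8;           // Option 1: a counter on ~68 pings/month
const eventBytes = USERS * 1190 * (8 + 15);  // Option 2: one event per page load
const pingBytes  = USERS * 1190 * 1000;      // Option 3: one custom ping per page load

const toTB = (bytes) => (bytes / 1e12).toFixed(2);
console.log(toTB(countBytes)); // "0.11" TB, i.e. ~109 GB
console.log(toTB(eventBytes)); // "5.47" TB
console.log(toTB(pingBytes));  // "238.00" TB

console.log(Math.round(eventBytes / countBytes)); // 50
console.log(Math.round(pingBytes / eventBytes));  // 43, close enough to ~50x
```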

But you have to balance it out. If adding more context and information ensures your analyses only have to look in one place for its data instead of trying to tie together loosely-coupled concepts from multiple locations… if using a custom ping ensures you have everything you need and don’t have to form a committee to resource an engineer to add implementation which needs to be deployed and individually validated… if you’re willing to bet 50x or 250x the cost on getting it right the first time, then that could be a good price to pay.

But is this the case for you and your data?

Well, it depends.

:chutten

[0]: Avid readers of this blog may notice that this isn’t the first time I’ve written on the costs of data. And it likely won’t be the last!

[1]: How often a “metrics” ping is sent is a little more complicated than “once a day”, but it averages out to about that much so I’m sticking with it for this napkin.

[2]: Yes, there are some wild and wacky outliers included in the figure “an average of 1190 page loads” that I’m not bothering to clean up. You can Page Loads Georg to your heart’s content.

[3]: This is about how many characters the JSON-encoded ping payload comes to, uncompressed.

(This post is a syndicated copy of the original.)

The Mozilla Thunderbird BlogNeed Help With Thunderbird? Here’s How To Get Support

We understand that email and calendaring can be a vital part of your work day, and just as important to your personal life. We also realize that sometimes you’ll have questions about using Thunderbird. That’s where the amazing Thunderbird Community enters the picture. Whether you need tech support or just need a simple answer to a question, here’s how to find the help you need. And how to help the people who are helping you!


Why Community Support?

We celebrate the fact that our software is open-source and funded by donations. It’s this community-powered model that helped us thrive during the past few years.

The generous donations of our users have allowed us to build a solid foundation for Thunderbird’s future. Our core team of engineers and developers is devoting their time to improving Thunderbird from all angles, from visuals to features to infrastructure.

But because we’re open-source, a global community of contributors also helps improve Thunderbird by adding ideas, code, bug fixes, helpful documentation, translations… and user support!

So, our approach to support reflects our commitment to open-source and open development: we invite knowledgeable, friendly people to help their fellow Thunderbird users. This means fewer barriers to getting help, regardless of your native language, your time zone, or your skill level.

Thunderbird Trouble? Try This First!

Sometimes, a custom setting or Thunderbird Add-On might be causing your problem. And there’s an easy way to figure that out: try Troubleshoot Mode.

Troubleshoot Mode (previously called Safe Mode) is a special way of starting Thunderbird that can be used to find and fix problems with your installation. Troubleshoot Mode will run Thunderbird with some features (like Add-ons) and settings disabled. If the problem you’re experiencing does not happen in Troubleshoot Mode, you’ve already done a lot to narrow down what’s causing the issue!

Always try Troubleshoot Mode before reporting a problem. Just follow this link to learn how to turn Troubleshoot Mode on and off.

#1: Thunderbird Community Support

When you have a question about Thunderbird or need some help, this dedicated support page is the best place to visit:

➡ https://support.mozilla.org/products/thunderbird

The global Thunderbird Community has many experienced experts who volunteer their time and knowledge to help fellow Thunderbird users fix their issues.

You’ll find an extensive (and always growing) knowledge base of articles covering Thunderbird’s features, and helpful how-to guides on customization, privacy settings, exporting, and much more.

If your search doesn’t produce a satisfying result, you can ask the community a question from the same page. All you’ll need is a Firefox Account and an email address to receive notifications about responses to your question.

#2: Reddit (/r/Thunderbird)

With nearly half a billion monthly active users, Reddit is ranked as the 9th most popular website in the world. You might already have an account! We have our own “Subreddit” where Thunderbird volunteers and staff members answer user questions and share important updates: https://www.reddit.com/r/Thunderbird/

Reddit works well as a support forum. It has fast notifications, a threaded conversation view, and an easy-to-read interface. If you can’t find your answer elsewhere, ask us on Reddit!

Screenshots Are Helpful: Include Them!

Taking a screenshot of the problem you’re having is a great way to show the developers and volunteers your problem, especially if you’re having difficulty describing it with words.

Before posting on the Thunderbird Support Forum or Reddit, try to capture screenshots of the issue. Here are links explaining how to do that on Windows, Linux, and macOS:

Include Your OS and Thunderbird Versions

You want your problem solved as quickly as possible so you can get on with your work! One productive step toward doing that is to always include your operating system (e.g. Ubuntu 22.04, macOS Monterey 12.5, Windows 10) and exact version of Thunderbird (e.g. Thunderbird 102.3.1) in your initial question.

It really speeds up the process and helps the Thunderbird Community to better assist you.

To find out your version of Thunderbird, click the App menu, then “Help”, and then “About Thunderbird.”


Obviously we hope you never have problems with Thunderbird, but if you ever need help, we hope the above resources and tips help you solve them!

💡 Do you want to request a new feature for Thunderbird? We’d love to see your ideas! Just browse to this page on Mozilla Connect and tell us.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire more developers and add exciting new features.

Click here to make a donation. 

The post Need Help With Thunderbird? Here’s How To Get Support appeared first on The Thunderbird Blog.

hacks.mozilla.orgImproving Firefox responsiveness on macOS

If you’re running Firefox on macOS you might have noticed that its responsiveness has improved significantly in version 103, especially if you’ve got a lot of tabs, or when your machine is busy running other applications at the same time. This improvement was achieved via a small change in how locking is implemented within Firefox’s memory allocator.

Firefox uses a highly customized version of the jemalloc memory allocator across all architectures. We’ve diverged significantly from upstream jemalloc in order to guarantee optimal performance and memory usage in Firefox.

Memory allocators have to be thread safe and – in order to be performant – need to be able to serve a large number of concurrent requests from different threads. To achieve this, jemalloc uses locks within its internal structures that are usually only held very briefly.

Locking within the allocator is implemented differently than in the rest of the codebase. Specifically, creating mutexes and using them must not issue new memory allocations because that would lead to infinite recursion within the allocator itself. To achieve this the allocator tends to use thin locks native to the underlying operating system. On macOS we relied for a long time on OSSpinLock locks.

As the name suggests these are not regular mutexes that put threads trying to acquire them to sleep if they’re already taken by another thread. A thread attempting to lock an already locked instance of OSSpinLock will busy-poll the lock instead of waiting for it to be released, which is commonly referred to as spinning on the lock.

This might seem counter-intuitive, as spinning consumes CPU cycles and power and is usually frowned upon in modern codebases. However, putting a thread to sleep has significant performance implications and thus is not always the best option.

In particular, putting a thread to sleep and then waking it up requires two context switches as well as saving and restoring the thread state to/from memory. Depending on the CPU and workload the thread state can range from several hundred bytes to a few kilobytes. Putting a thread to sleep also has indirect performance effects.

For example, the caches associated with the core the thread was running on were likely holding useful data. When a thread is put to sleep another thread from an unrelated workload might then be selected to run in its place, replacing the data in the caches with new data.

When the original thread is restored it might end up on a different core, or on the same core but with cold caches, filled with unrelated data. Either way, the thread will proceed execution more slowly than if it had kept running undisturbed.

Because of all the above, it might be advantageous to let a thread spin briefly if the lock it’s trying to acquire is only held for a brief period of time. It can result in both higher performance and lower power consumption as the cost of spinning is less than sleeping.

However, spinning has a significant drawback: if it goes on for too long it can be detrimental, as it will just waste cycles. Worse still, if the machine is heavily loaded, spinning might put additional load on the system, potentially slowing down precisely the thread that owns the lock, which increases the chance that further threads will need the lock and spin some more.

As you might have guessed by now, OSSpinLock offered very good performance on a lightly loaded system, but behaved poorly as load ramped up. More importantly, it had two fundamental flaws: it spun in user-space and it never slept.

Spinning in user-space is a bad idea in general, as user-space doesn’t know how much load the system is currently experiencing. In kernel-space a lock might make an informed decision, for example not to spin at all if the load is high, but OSSpinLock had no such provision, nor did it adapt.

But more importantly, when it couldn’t grab a lock it would yield instead of sleeping. This is particularly bad because the kernel has no clue that the yielding thread is waiting on a lock, so it might wake up another thread that is also fighting for the same lock instead of the one that owns it.

This will lead to more spinning and yielding, and the resulting user experience will be terrible. On heavily loaded systems this could lead to a near live-lock and Firefox effectively hanging. This problem with OSSpinLock was known within Apple, hence its deprecation.

Enter os_unfair_lock, Apple’s official replacement for OSSpinLock. If you still use OSSpinLock, you’ll get explicit deprecation warnings telling you to switch to os_unfair_lock instead.

So I went ahead and used it, but the results were terrible. Performance in some of our automated tests degraded by as much as 30%. os_unfair_lock might be better behaved than OSSpinLock, but it sucked.

As it turns out os_unfair_lock doesn’t spin on contention, it makes the calling thread sleep right away when it finds a contended lock.

For the memory allocator this behavior was suboptimal and the performance regression unacceptable. In some ways, os_unfair_lock had the opposite problem of OSSpinLock: it was too willing to sleep when spinning would have been a better choice. It’s also worth mentioning that pthread_mutex locks are even slower on macOS, so those weren’t an option either.

However, as I dug into Apple’s libraries and kernel, I noticed that some spin locks were indeed available, and they did the spinning in kernel-space where they could make a more informed choice with regard to load and scheduling. Those would have been an excellent choice for our use-case.

So how do you use them? Well, it turns out they’re not documented. They rely on a non-public function and flags which I had to duplicate in Firefox.

The function is os_unfair_lock_with_options() and the options I used are OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION and OS_UNFAIR_LOCK_ADAPTIVE_SPIN.

The latter asks the kernel to use kernel-space adaptive spinning, and the former prevents it from spawning additional threads in the thread pools used by Apple’s libraries.

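For illustration, here is a minimal C sketch of how an allocator lock could use this call. It is not Firefox’s actual code: because the function and flags are not in Apple’s public SDK, the declaration and the flag values below are placeholder assumptions, and the real definitions have to be duplicated from the system (as Firefox does in its own source) and verified before anything like this can be used.

#include <os/lock.h>
#include <stdint.h>

/* Placeholder values; the real ones must be copied from the private headers. */
#define OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION ((uint32_t)1u << 0)
#define OS_UNFAIR_LOCK_ADAPTIVE_SPIN        ((uint32_t)1u << 1)

/* Assumed shape of the non-public locking call described above. */
extern void os_unfair_lock_with_options(os_unfair_lock_t lock, uint32_t options);

static os_unfair_lock gArenaLock = OS_UNFAIR_LOCK_INIT;

static void arena_lock(void) {
  /* Ask the kernel to spin adaptively instead of sleeping right away, and
     keep it from spawning extra worker threads on our behalf. */
  os_unfair_lock_with_options(
      &gArenaLock,
      OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION | OS_UNFAIR_LOCK_ADAPTIVE_SPIN);
}

static void arena_unlock(void) { os_unfair_lock_unlock(&gArenaLock); }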

Did they work? Yes! Performance on lightly loaded systems was about the same as OSSpinLock, but on loaded ones they provided massively better responsiveness. They also did something extremely useful for laptop users: they cut down power consumption, as a lot fewer cycles were wasted having the CPUs spin on locks that couldn’t be acquired.

Unfortunately, my woes weren’t over. The OS_UNFAIR_LOCK_ADAPTIVE_SPIN flag is supported only starting with macOS 10.15, but Firefox also runs on older versions (all the way to 10.12).

As an intermediate solution, I initially fell back to OSSpinLock on older systems. Later I managed to get rid of it for good by relying on os_unfair_lock plus manual spinning in user-space.

This isn’t ideal, but it’s still better than relying on OSSpinLock, especially because the fallback is only needed on x86-64 processors, where I can use pause instructions in the spin loop to reduce the performance and power impact when a lock can’t be acquired.

When two threads are running on the same physical core, one using pause instructions leaves almost all of the core’s resources available to the other thread. In the unfortunate case of two threads spinning on the same core they’ll still consume very little power.
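As a rough illustration of this fallback (a simplified sketch, not the allocator’s actual code), the user-space spin could look something like this in C; the iteration count is arbitrary and would need tuning against real workloads:

#include <os/lock.h>
#if defined(__x86_64__)
#  include <immintrin.h> /* _mm_pause() */
#endif

static os_unfair_lock gLock = OS_UNFAIR_LOCK_INIT;

static void lock_with_user_space_spin(void) {
  /* Spin a bounded number of times before giving up and sleeping. */
  for (int i = 0; i < 50; i++) {
    if (os_unfair_lock_trylock(&gLock)) {
      return; /* acquired without sleeping */
    }
#if defined(__x86_64__)
    _mm_pause(); /* leave the core's resources to the sibling hyper-thread */
#endif
  }
  /* Still contended: fall back to sleeping in the kernel. */
  os_unfair_lock_lock(&gLock);
}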

At this point, you might wonder if os_unfair_lock – possibly coupled with the undocumented flags – would be a good fit for your codebase. My answer is likely yes but you’ll have to be careful when using it.

If you’re using the undocumented flags, be sure to routinely test your software on new beta versions of macOS, as they might break in future versions. And even if you’re only using os_unfair_lock’s public interface, beware that it doesn’t play well with fork(). That’s because the lock internally stores Mach thread IDs to ensure consistent acquisition and release.

These IDs change after a call to fork(), as the child process’ threads get new IDs when your process is copied. This can lead to crashes in the child process. If your application uses fork(), or your library needs to be fork()-safe, you’ll need to register at-fork handlers using pthread_atfork() to acquire all the locks in the parent before the fork, then release them after the fork (also in the parent), and reset them in the child.
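Here is a minimal sketch of that pattern, assuming a single lock for brevity (a real allocator would walk all of its locks in each handler):

#include <os/lock.h>
#include <pthread.h>

static os_unfair_lock gLock = OS_UNFAIR_LOCK_INIT;

static void atfork_prepare(void) { os_unfair_lock_lock(&gLock); }   /* parent, before fork() */
static void atfork_parent(void)  { os_unfair_lock_unlock(&gLock); } /* parent, after fork() */

static void atfork_child(void) {
  /* The thread IDs stored in the lock are stale in the child, so reset it
     to a freshly initialized, unlocked state. */
  os_unfair_lock fresh = OS_UNFAIR_LOCK_INIT;
  gLock = fresh;
}

static void install_fork_handlers(void) {
  pthread_atfork(atfork_prepare, atfork_parent, atfork_child);
}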

Here’s how we do it in our code.

The post Improving Firefox responsiveness on macOS appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.14 is now out!

Hi everyone,

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.14! There have been some changes, so please check out [1] and/or [2].

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.14/

[2] – https://www.seamonkey-project.org/releases/2.53.14

:ewong

PS: A confession: I haven’t been working too closely with the code and have only been doing the release engineering bit. So kudos for actually building the code and making sure it runs goes to everyone else (Ian and frg both driving the releases). While I don’t understand the code layout anymore, I do hope to figure it out soon. I just need a new laptop to build stuff, and time. *sigh*

PPS: I’m slowly getting the crash stats working.

Open Policy & AdvocacyMozilla Responds to EU General Court’s Judgment on Google Android

This week, the EU’s General Court largely upheld the decision sanctioning Google for restricting competition on the Android mobile operating system. But, on their own, the judgment and the record fine do not help to unlock competition and choice online, especially when it comes to browsers.

In July 2018, when the European Commission announced its decision, we expressed hope that the result would help to level the playing field for independent browsers like Firefox and provide real choice for consumers. Sadly for billions of people around the world who use browsers every day, this hope has not been realized – yet.

The case may rumble on in appeals for several more years, but Mozilla will continue to advocate for an Internet which is open, accessible, private, and secure for all, and we will continue to build products which advance this vision. We hope that those with the power to improve browser choice for consumers will also work towards these tangible goals.

The post Mozilla Responds to EU General Court’s Judgment on Google Android appeared first on Open Policy & Advocacy.

SUMO BlogTribute to FredMcD

It brings us great sadness to share the news that FredMcD has recently passed away.

If you ever posted a question to our Support Forum, you may be familiar with a contributor named “FredMcD”. Fred was one of the most active contributors in Mozilla Support and remained one of our core contributors for many years. He was awarded a forum contributor badge every year since 2013 for his consistent contributions to the Support Forum.

He was a dedicated contributor, super helpful, and very loyal to Firefox users, making over 81,400 contributions to the Support Forum since 2013. During the COVID-19 lockdown period, he focused on helping people all over the world when they were online the most – at one point he made approximately 3,600 responses in 90 days, an average of 40 a day.

In March 2022, I learned that he had been hospitalized for a few weeks. He was active in our forum again shortly after he was discharged, but then we never heard from him again after his last contribution on May 5, 2022. There’s very little we know about Fred, but we were finally able to confirm his passing just recently.

We surely lost a great contributor. He was a helpful community member and his assistance with incidents was greatly appreciated. His support approach was always straightforward and simple, and it was not rare for him to solve a problem in one go, like this or this one.

To honor his passing, we added his name to the about:credits page to make sure that his contribution and impact on Mozilla will never be forgotten. He will surely be missed by the community.


I’d like to thank Paul for his collaboration in this post and for his help in getting Fred’s name to the about:credits page. Thanks, Paul!

 

The Mozilla Thunderbird BlogThunderbird Tip: Customize Colors In The Spaces Toolbar

In our last video tip, you learned how to manually sort the order of all your mail and account folders. Let’s keep that theme of customization rolling forward with a quick video guide on customizing the Spaces Toolbar that debuted in Thunderbird 102.

The Spaces Toolbar is on the left hand side of your Thunderbird client and gives you fast, easy access to your most important activities. With a single click you can navigate between Mail, Address Books, Calendars, Tasks, Chat, settings, and your installed add-ons and themes.

Watch below how to customize it!

Video Guide: Customizing The Spaces Toolbar In Thunderbird

This 2-minute tip video shows you how to easily customize the Spaces Toolbar in Thunderbird 102.

*Note that the color tools available to you will vary depending on the operating system you’re using. If you’re looking to discover some pleasing color palettes, we recommend the excellent, free tools at colorhunt.co.


Have You Subscribed To Our YouTube Channel?

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. We’re also putting out more content and communication across various platforms to keep you informed. And, of course, to show you some great usage tips along the way.

To accomplish that, we’ve launched our YouTube channel to help you get the most out of Thunderbird. You can subscribe here. Help us reach more people than ever before by liking each video and leaving a comment if it helped!


Another Tip Before You Go?

The post Thunderbird Tip: Customize Colors In The Spaces Toolbar appeared first on The Thunderbird Blog.

Mozilla L10NRedesigned profile page now available in Pontoon

Back in February 2022, we reached out to our community to ask for feedback on a proposal to completely rethink the profile page in Pontoon.

The goal was to improve the experience for everyone on the platform, transforming this page into an effective tool that could showcase contributions, provide useful contact information, and help locale managers to grow their communities.

As a reminder, these were the user stories that we defined to help us guide the design process.

As a contributor, I want to be able to:

  • Quickly see how much I contribute over time to localization.
  • Share my profile with potential employers in the localization industry, or use it to demonstrate my involvement in projects as a volunteer.
  • Control visibility of personal information displayed in my profile.
  • See data about the quality of my contributions and use it to make a case for promotion with locale managers or administrators.
  • See when my own suggestions have been reviewed and access them.

As a translator, I want to be able to:

  • See if a person has usually been providing good quality translations.
  • Check if the person has specific permissions for my own locale, potentially for other locales.

As a locale manager, I want to be able to:

  • See the quality of contributions provided by a specific person.
  • See how frequently a person contributes translations, and the amount of their contributions.

As an administrator (or project manager), I want to be able to:

  • See data about the user:
    • When they signed up.
    • When they last logged in to Pontoon, or were last active on the platform.
    • Quickly assess the frequency of contributions by type (reviews performed, translations).
    • Which projects and locales they contributed to.
    • Get a sense of the quality and amount of their contribution.
  • Easily access contributions by a specific person.

We’re happy to announce that the vast majority of the work has been completed, and you can already see it online in Pontoon. You can click on your profile icon in the top right corner, then click again on the name/icon in the dropdown to display your personal profile page (or you can see an example here).

Pontoon New Profile

In the left column, you can find information about the user: contact details, roles, as well as last known activity.

Each user can customize this information in the updated Settings page (click the CHANGE SETTINGS button to access it), where it’s possible to enter data as well as determine the visibility of some of the fields.

In the top central section there are two new graphs:

  • Approval rate shows the ratio between the number of translations approved and the total number of translations reviewed, excluding self-approved translations.
  • Self-approval rate is only visible for users with translator rights, and shows the ratio between the number of translations submitted directly — or self-approved after submitting them as suggestions — and the total number of translations approved.

Right below these graphs, there is a section showing a graphical representation of the user’s activity in the last year:

  • Each square represents a day, while each row represents a day of the week. The lighter the color, the higher the number of contributions on that day.
  • By default, the chart will show data for Submissions and reviews, which means translations submitted and reviews performed. We decided to use this as default among all the options, since it actually shows actions that require an active role from the user.
  • The chart will display activity for the last year, while the activity log below will by default display activity in more detail for the last month. Clicking on a specific square (day) will only show the activity for that day.

It’s important to note that the activity log includes links that allow you to jump to those specific strings in the translation editor, and that includes reviews performed or received, for which a new filter has been implemented.

We hope that you’ll find this new profile page useful in your day to day contributions to Mozilla. If you encounter any problems, don’t hesitate to file an issue on GitHub.

 

hacks.mozilla.orgThe 100% Markdown Expedition

A snowy mountain peak at sunset

The 100% Markdown Expedition

In June 2021, we decided to start converting the source code for MDN web docs from HTML into a format that would be easier for us to work with. The goal was to get 100% of our manually-written documentation converted to Markdown, and we really had a mountain of source code to climb for this particular expedition.

In this post, we’ll describe why we decided to migrate to Markdown, and the steps you can take that will help us on our mission.

Why get to 100% Markdown?

We want to get all active content on MDN Web Docs to Markdown for several reasons. The top three reasons are:

  • Markdown is a much more approachable and friendlier way to contribute to MDN Web Docs content. Having all content in Markdown will help create a unified contribution experience across languages and repositories.
  • With all content in Markdown, the MDN engineering team will be able to clean up a lot of the currently maintained code. Having less code to maintain will enable them to focus on improving the tooling for writers and contributors. Better tooling will lead to a more enjoyable contribution workflow.
  • All content in Markdown will allow the MDN Web Docs team to run the same linting rules across all active languages.

Here is the tracking issue for this project on the translated content repository.

Tools

This section describes the tools you’ll need to participate in this project.

Git

If you do not have git installed, you can follow the steps described on this getting started page.

https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

If you are on Linux or macOS, you may already have Git. To check, open your terminal and run: git --version

On Windows, there are a couple of options:

GitHub

We’re tracking source code and managing contributions on GitHub, so the following will be needed:

• A GitHub account.
• The GitHub CLI to follow the commands below. (Encouraged, but optional, i.e., if you are already comfortable using Git, you can accomplish all the same tasks without the need for the GitHub CLI.)

Nodejs

First, install nvm – https://github.com/nvm-sh/nvm#installing-and-updating or on Windows https://github.com/coreybutler/nvm-windows

Once all of the above is installed, install Nodejs version 16 with NVM:

nvm install 16
nvm use 16
node --version

This should output a Nodejs version number that is similar to v16.15.1.

Repositories

You’ll need code and content from several repositories for this project, as listed below.

You only need to fork the translated-content repository. We will make direct clones of the other two repositories.

Clone the above repositories and your fork of translated-content as follows using the GitHub CLI:

gh repo clone mdn/markdown
gh repo clone mdn/content
gh repo clone username/translated-content # replace username with your GitHub username

Setting up the conversion tool

cd markdown
yarn

You’ll also need to add some configuration via an .env file. In the root of the directory, create a new file called .env with the following contents:

CONTENT_TRANSLATED_ROOT=../translated-content/files

Setting up the content repository

cd .. # This moves you out of the `markdown` folder
cd content
yarn

Converting to Markdown

I will touch on some specific commands here, but for detailed documentation, please check out the markdown repo’s README.

We maintain a list of documents that need to be converted to Markdown in this Google sheet. There is a worksheet for each language. The worksheets are sorted in the order of the number of documents to be converted in each language – from the lowest to the highest. You do not need to understand the language to do the conversion. As long as you are comfortable with Markdown and some HTML, you will be able to contribute.

NOTE: You can find a useful reference to the flavor of Markdown supported on MDN Web Docs. There are some customizations, but in general, it is based on GitHub flavoured Markdown.

The steps
Creating an issue

On the translated-content repository go to the Issues tab and click on the “New issue” button. As mentioned in the introduction, there is a tracking issue for this work and so, it is good practice to reference the tracking issue in the issue you’ll create.

You will be presented with three options when you click the “New issue” button. For our purposes here, we will choose the “Open a blank issue” option. For the title of the issue, use something like, “chore: convert mozilla/firefox/releases for Spanish to Markdown”. In your description, you can add something like the following:

As part of the larger 100% Markdown project, I am converting the set of documents under mozilla/firefox/releases to Markdown.

NOTE: You will most likely be unable to assign an issue to yourself. The best thing to do here is to mention the localization team member for the appropriate locale and ask them to assign the issue to you. For example, on GitHub you would add a comment like this: “Hey @mdn/yari-content-es I would like to work on this issue, please assign it to me. Thank you!”

You can find a list of teams here.

Updating the spreadsheet

The tracking spreadsheet contains a couple of fields that you should update if you intend to work on specific items. First, add your GitHub username and link the text to your GitHub profile. Second, set the status to “In progress”. In the issue column, paste a link to the issue you created in the previous step.

Creating a feature branch

It is a common practice on projects that use Git and GitHub to follow a feature branch workflow. I therefore need to create a feature branch for the work on the translated-content repository. To do this, we will again use our issue as a reference.

Let’s say your issue was called “chore: convert mozilla/firefox/releases for Spanish to Markdown” with an id of 8192. You will do the following at the root of the translated-content repository folder:

NOTE: The translated content repository is a very active repository. Before creating your feature branch, be sure to pull the latest from the remote using the command git pull upstream main

git pull upstream main
git switch -c 8192-chore-es-convert-firefox-release-docs-to-markdown

NOTE: In older versions of Git, you will need to use git checkout -B 8192-chore-es-convert-firefox-release-docs-to-markdown.

The above command will create the feature branch and switch to it.

Running the conversion

Now you are ready to do the conversion. The Markdown conversion tool has a couple of modes you can run it in:

  • dry – Run the script, but do not actually write any output
  • keep – Run the script and do the conversion but, do not delete the HTML file
  • replace – Do the conversion and delete the HTML file

You will almost always start with a dry run.

NOTE: Before running the command below, ensure that you are in the root of the markdown repository.

yarn h2m mozilla/firefox/releases --locale es --mode dry

This is because the conversion tool will sometimes encounter situations where it does not know how to convert parts of the document. The markdown tool will produce a report with details of the errors encountered. For example:

# Report from 9/1/2022, 2:40:14 PM
## All unhandled elements
- li.toggle (4)
- dl (2)
- ol (1)
## Details per Document
### [/es/docs/Mozilla/Firefox/Releases/1.5](<https://developer.mozilla.org/es/docs/Mozilla/Firefox/Releases/1.5>)
#### Invalid AST transformations
##### dl (101:1) => listItem

type: "text"
value: ""

### [/es/docs/Mozilla/Firefox/Releases/3](<https://developer.mozilla.org/es/docs/Mozilla/Firefox/Releases/3>)
### Missing conversion rules
- dl (218:1)

The first line in the report states that the tool had a problem converting four instances of li.toggle. So, there are four list items with the class attribute set to toggle. In the larger report, there is this section:

### [/es/docs/Mozilla/Firefox/Releases/9](<https://developer.mozilla.org/es/docs/Mozilla/Firefox/Releases/9>)
#### Invalid AST transformations
##### ol (14:3) => list

type: "html"
value: "<li class=\\"toggle\\"><details><summary>Notas de la Versión para Desarrolladores de Firefox</summary><ol><li><a href=\\"/es/docs/Mozilla/Firefox/Releases\\">Notas de la Versión para Desarrolladores de Firefox</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Complementos</summary><ol><li><a href=\\"/es/Add-ons/WebExtensions\\">Extensiones del navegador</a></li><li><a href=\\"/es/Add-ons/Themes\\">Temas</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Firefox por dentro</summary><ol><li><a href=\\"/es/docs/Mozilla/\\">Proyecto Mozilla (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Gecko\\">Gecko</a></li><li><a href=\\"/es/docs/Mozilla/Firefox/Headless_mode\\">Headless mode</a></li><li><a href=\\"/es/docs/Mozilla/JavaScript_code_modules\\">Modulos de código JavaScript (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/js-ctypes\\">JS-ctypes (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/MathML_Project\\">Proyecto MathML</a></li><li><a href=\\"/es/docs/Mozilla/MFBT\\">MFBT (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Projects\\">Proyectos Mozilla (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Preferences\\">Sistema de Preferencias (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/WebIDL_bindings\\">Ataduras WebIDL (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Tech/XPCOM\\">XPCOM</a></li><li><a href=\\"/es/docs/Mozilla/Tech/XUL\\">XUL</a></li></ol></details></li>",type: "html"
value: "<li class=\\"toggle\\"><details><summary>Crear y contribuir</summary><ol><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions\\">Instrucciones para la compilación</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions/Configuring_Build_Options\\">Configurar las opciones de compilación</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Build_Instructions/How_Mozilla_s_build_system_works\\">Cómo funciona el sistema de compilación (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/Developer_guide/Source_Code/Mercurial\\">Código fuente de Mozilla</a></li><li><a href=\\"/es/docs/Mozilla/Localization\\">Localización</a></li><li><a href=\\"/es/docs/Mozilla/Mercurial\\">Mercurial (Inglés)</a></li><li><a href=\\"/es/docs/Mozilla/QA\\">Garantía de Calidad</a></li><li><a href=\\"/es/docs/Mozilla/Using_Mozilla_code_in_other_projects\\">Usar Mozilla en otros proyectos (Inglés)</a></li></ol></details></li>"

The problem is therefore in the file /es/docs/Mozilla/Firefox/Releases/9. In this instance, we can ignore this as we will simply leave the HTML as is in the Markdown. This is sometimes needed as the HTML we need cannot be accurately represented in Markdown. The part you cannot see in the output above is this portion of the file:

<div><section id="Quick_links">
  <ol>
    <li class="toggle">

If you do a search in the main content repo you will find lots of instances of this. In all those cases, you will see that the HTML is kept in place and this section is not converted to Markdown.

The next two problematic items are two dl or description list elements. These elements will require manual conversion using the guidelines in our documentation. The last item, the ol is actually related to the li.toggle issue. Those list items are wrapped by an ol and because the tool is not sure what to do with the list items, it is also complaining about the ordered list item.

Now that we understand what the problems are, we have two options: we can run the exact same command but this time use the replace mode, or we can use the keep mode. I am going to go ahead and run the command with replace. While the previous command did not actually write anything to the translated content repository, when run with replace it will create a new file called index.md with the converted Markdown and delete the index.html that resides in the same directory.

yarn h2m mozilla/firefox/releases --locale es --mode replace

Following the guidelines from the report, I will have to pay particular attention to the following files post conversion:

  • /es/docs/Mozilla/Firefox/Releases/1.5
  • /es/docs/Mozilla/Firefox/Releases/3
  • /es/docs/Mozilla/Firefox/Releases/9

After running the command, run the following at the root of the translated content repository folder, git status. This will show you a list of the changes made by the command. Depending on the number of files touched, the output can be verbose. The vital thing to keep an eye out for is that there are no changes to folders or files you did not expect.

Testing the changes

Now that the conversion has been done, we need to review the syntax and see that the pages render correctly. This is where the content repo is going to come into play. As with the markdown repository, we also need to create a .env file at the root of the content folder.

CONTENT_TRANSLATED_ROOT=../translated-content/files

With this in place we can start the development server and take a look at the pages in the browser. To start the server, run yarn start. You should see output like the following:

❯ yarn start
yarn run v1.22.17
$ yarn up-to-date-check && env-cmd --silent cross-env CONTENT_ROOT=files REACT_APP_DISABLE_AUTH=true BUILD_OUT_ROOT=build yari-server
$ node scripts/up-to-date-check.js
[HPM] Proxy created: /  -> <https://developer.mozilla.org>
CONTENT_ROOT: /Users/schalkneethling/mechanical-ink/dev/mozilla/content/files
Listening on port 5042

Go ahead and open http://localhost:5042 which will serve the homepage. To find the URL for one of the pages that was converted open up the Markdown file and look at the slug in the frontmatter. When you ran git status earlier, it would have printed out the file paths to the terminal window. The file path will show you exactly where to find the file, for example, files/es/mozilla/firefox/releases/1.5/index.md. Go ahead and open the file in your editor of choice.

In the frontmatter, you will find an entry like this:

slug: Mozilla/Firefox/Releases/1.5

To load the page in your browser, you will always prepend http://localhost:5042/es/docs/ to the slug. In other words, the final URL you will open in your browser will be http://localhost:5042/es/docs/Mozilla/Firefox/Releases/1.5. You can open the English version of the page in a separate tab to compare, but be aware that the content could be wildly different as you might have converted a page that has not been updated in some time.

What you want to look out for is anything in the page that looks like it is not rendering correctly. If you find something that looks incorrect, look at the Markdown file and see if you can find any syntax that looks incorrect or completely broken. It can be extremely useful to use a tool such as VSCode with a Markdown tool and Prettier installed.

Even if the rendered content looks good, do take a minute and skim over the generated Markdown and see if the linters bring up any possible errors.

NOTE: If you see code like {{FirefoxSidebar}}, this is a macro call. There is not a lot of documentation yet, but these macros come from KumaScript in Yari.

A couple of other things to keep in mind. When you run into an error, before you spend a lot of time trying to understand what exactly the problem is or how to fix it, do the following:

  1. Look for the same page in the content repository and make sure the page still exists. If it was removed from the content repository, you can safely remove it from translated-content as well.
  2. Look at the same page in another language that has already been converted and see how they solved the problem.

For example, I ran into an error where a page I loaded simply printed the following in the browser: Error: 500 on /es/docs/Mozilla/Firefox/Releases/2/Adding_feed_readers_to_Firefox/index.json: SyntaxError: Expected "u" or ["bfnrt\\\\/] but "_" found.. I narrowed it down to the following piece of code inside the Markdown:

{{ languages( { "en": "en/Adding\\_feed\\_readers\\_to\\_Firefox", "ja": "ja/Adding\\_feed\\_readers\\_to\\_Firefox", "zh-tw": "zh\\_tw/\\u65b0\\u589e\\u6d88\\u606f\\u4f86\\u6e90\\u95b1\\u8b80\\u5de5\\u5177" } ) }}

In French it seems that they removed the page, but when I looked in zh-tw it looks like they simply removed this macro call. I opted for the latter and just removed the macro call. This solved the problem and the page rendered correctly. Once you have gone through all of the files you converted it is time to open a pull request.

Preparing and opening a pull request

Start by getting all your changes ready for committing:

# the dot says add everything
git add .

If you run git status now you will see something like the following:

❯ git status
On branch 8192-chore-es-convert-firefox-release-docs-to-markdown
Changes to be committed: # this will be followed by a list of files that have been added, ready for commit

Commit your changes:

git commit -m 'chore: convert Firefox release docs to markdown for Spanish'

Finally you need to push the changes to GitHub so we can open the pull request:

git push origin 8192-chore-es-convert-firefox-release-docs-to-markdown

You can now head over to the translated content repository on GitHub where you should see a banner that asks whether you want to open a pull request. Click the “Compare & pull request” button and look over your changes on the next page to ensure there are no surprises.

At this point, you can also add some more information and context around the pull request in the description box. It is also critical that you add a line as follows: “Fix #8192”. Substitute the number with the number of the issue you created earlier. This links the issue and the pull request, so that once the pull request is merged, GitHub will automatically close the issue.

Once you are satisfied with the changes as well as your description, go ahead and click the button to open the pull request. At this stage GitHub will auto-assign someone from the appropriate localization team to review your pull request. You can now sit back and wait for feedback. Once you receive feedback, address any changes requested by the reviewer and update your pull request.

Once you are both satisfied with the end result, the pull request will be merged and you will have helped us get a little bit closer to 100% Markdown. Thank you! One final step remains though. Open the spreadsheet and update the relevant rows with a link to the pull request, and update the status to “In review”.

Once the pull request has been merged, remember to come back and update the status to done.

Reach out if you need help

If you run into any problems and have questions, please join our MDN Web Docs channel on Matrix.

https://matrix.to/#/#mdn:mozilla.org

 

Photo by Cristian Grecu on Unsplash

The post The 100% Markdown Expedition appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogHello from the new developer advocate

Hello extension developers, I’m Juhis, it’s a pleasure to meet you all. In the beginning of August I joined Mozilla and the Firefox add-ons team as a developer advocate. I expect us to see each other quite a lot in the future. My mom taught me to always introduce myself to new people so here we go!

My goal is to help all of you to learn from each other, to build great add-ons and to make that journey an enjoyable experience. Also, I want to be your voice to the teams building Firefox and add-ons tooling.

My journey into the world of software

I’m originally from Finland and grew up in a rather small town in the southwest. I got excited about computers from a very young age. I vividly remember a moment from my childhood when my sister created a digital painting of two horses, but since it was too large for the screen, I had to scroll to reveal the other horse. That blew my four-year old mind and I’ve been fascinated by the opportunities of technology ever since.

After some years working in professional software development, I realized I could offer maximum impact by building communities and helping others become developers rather than just coding myself. Ever since, I’ve been building developer communities, organizing meetups, teaching programming, and serving as a general advocate for the potential of technology.

I believe in the positive empowerment that technology can bring to individuals all around the world. Whether it’s someone building something small to solve a problem in their daily life, someone building tools for their community, or being able to build and run your own business, there are so many ways we can leverage technology for good.

Customize your own internet experience with add-ons

The idea of shaping your own internet experience has been close to my heart for a long time. It can be something relatively simple like running custom CSS through existing extensions to make a website more enjoyable to use, or maybe it’s building big extensions for thousands of other people to enjoy. I’m excited to now be in a position where I can help others to build great add-ons of their own.

To understand better what a new extensions developer goes through, I built an extension following our documentation and processes. I built it for fellow Pokemon TCG players who want a more visual way to read decklists online. Pokemon TCG card viewer can be installed from addons.mozilla.org. It adds a hover state to card codes it recognizes and displays a picture of the card on hover.

Best way to find me is in the Mozilla Matrix server as @hamatti:mozilla.org in the Add-ons channel. Come say hi!

The post Hello from the new developer advocate appeared first on Mozilla Add-ons Community Blog.

The Mozilla Thunderbird BlogThunderbird Tip: How To Manually Sort All Email And Account Folders

In our last blog post, you learned an easy way to change the order your accounts are displayed in Thunderbird. Today, we have a short video guide that takes your organizing one step further. You’ll learn how to manually sort all of the Thunderbird folders you have. That includes any newsgroup and RSS feed subscriptions too!


Have You Subscribed To Our YouTube Channel?

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. We’re also putting out more content and communication across various platforms to keep you informed. And, of course, to show you some great usage tips along the way.

To accomplish that, we’ve launched our YouTube channel to help you get the most out of Thunderbird. You can subscribe here. Help us reach 1000 subscribers by the end of September!


Video Guide: Manually Sort Your Thunderbird Folders

The short video below shows you everything you need to know:

We plan to produce many more tips just like this on our YouTube channel. We’ll also share them right here on the Thunderbird blog, so grab our RSS feed! (Need a guide for using RSS with Thunderbird? Here you go!)

Do you have a good Thunderbird tip we should turn into a video? Let us know in the comments, and thank you for using Thunderbird!

The post Thunderbird Tip: How To Manually Sort All Email And Account Folders appeared first on The Thunderbird Blog.

SeaMonkeySeaMonkey 2.53.14 Beta 1 is out!

Hi All,

The SeaMonkey Project team is pleased to announce the immediate release of SeaMonkey 2.53.14 beta 1.

As it is a beta, please check out the release notes at [1] and/or [2] and take it for a spin with a new profile (or a copy/backup of your production profile).

The updates will be forthcoming, specifically within the hour after I’ve pressed that shiny red button. Or was it the green one? 😛

Best Regards,

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.14/

[2] – https://www.seamonkey-project.org/releases/2.53.14b1

SUMO BlogWhat’s up with SUMO – August 2022

Hi everybody,

Summer is not a thing in my home country, Indonesia. But I learn that taking some time off after having done a lot of work in the first half of the year is useful for my well-being. So I hope you had a chance to take a break this summer.

We’re already past the halfway point of Q3, so let’s see what SUMO has been up to, with renewed excitement after this holiday season.

Welcome note and shout-outs

  • Thanks to Felipe for doing a short experiment on social support mentoring. This was helpful to understand what other contributors might need when they start contributing.
  • Thanks to top contributors for Firefox for iOS in the forum. We are in need of more iOS contributors in the forum, so your contribution is highly appreciated.
  • I’d like to give special thanks to a few contributors who start to contribute more to KB these days: Denys, Kaie, Lisah933, jmaustin, and many others.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • We are now sharing social and mobile support stats regularly. This is an effort to make sure that contributors are updated on and exposed to both contribution areas. We know it’s not always easy to discover opportunities to contribute to social or mobile support, since we’re utilizing different tools for these contribution areas. Check out the latest one from last week.
  • The long-awaited work to fix the automatic function for shortening KB article links has been released to production. Read more about this change in this contributor thread, and about how you can help remove the manual links that we added in the past when the functionality was broken.
  • Check out our post about Back to School marketing campaign if you haven’t.

Catch up

  • Consider subscribing to Firefox Daily Digest if you haven’t to get daily updates about Firefox from across different platforms.
  • Watch the monthly community call if you haven’t. Learn more about what’s new in July! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out the following release notes from Kitsune this month:

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month      Page views   Vs previous month
Jul 2022   7,325,189    -5.94%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Jul 2022 pageviews (*)
de 8.31%
zh-CN 7.01%
fr 5.94%
es 5.91%
pt-BR 4.75%
ru 4.14%
ja 3.93%
It 2.13%
zh-TW 1.99%
pl 1.94%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

-TBD-

Top 5 forum contributors in the last 90 days: 

Social Support

Channel    Total incoming conv   Conv interacted   Resolution rate
Jul 2022   237                   251               75.11%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah K
  2. Christophe Villeneuve
  3. Felipe Koji
  4. Kaio Duarte
  5. Matt Cianfarani

Play Store Support

Channel (Jul 2022)          Total priority reviews   Priority reviews replied   Total reviews replied
Firefox for Android         2155                     508                        575
Firefox Focus for Android   45                       18                         92
Firefox Klar Android        3                        0                          0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Selim Şumlu
  • Felipe Koji
  • Tim Maks
  • Matt Cianfarani

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Open Policy & AdvocacyMozilla Meetups – The Long Road to Federal Privacy Protections: Are We There Yet?

Register Below!
Join us for a discussion about the need for comprehensive privacy reform and whether the political landscape is ready to make it happen.

The panel session will be immediately followed by a happy hour reception with drinks and light fare. 

Date and time: Wednesday, September 21st – panel starts @ 4:00PM promptly (doors @ 3:45pm)
Location: Wunder Garten, 1101 First St. NE, Washington, DC 20002

The post Mozilla Meetups – The Long Road to Federal Privacy Protections: Are We There Yet? appeared first on Open Policy & Advocacy.

hacks.mozilla.orgMerging two GitHub repositories without losing commit history

Merging two GitHub repositories without losing history

We are in the process of merging smaller example code repositories into larger parent repositories on the MDN Web Docs project. While we knew that copying the files from one repository into the new one would lose commit history, we felt that this might be an OK strategy. After all, we are not deleting the old repository but archiving it.

After having moved a few of these, we did receive an issue from a community member stating that it is not ideal to lose history while moving these repositories and that there could be a relatively simple way to avoid this. I experimented with a couple of different options and finally settled on a strategy based on the one shared by Eric Lee on his blog.

tl;dr The approach is to use basic git commands to apply all of the histories of our old repo onto a new repo without needing special tooling.

Getting started

For the experiment, I used the sw-test repository that is meant to be merged into the dom-examples repository.

This is how Eric describes the first steps:

# Assume the current directory is where we want the new repository to be created
# Create the new repository

git init

# Before we do a merge, we need to have an initial commit, so we’ll make a dummy commit

dir > deleteme.txt
git add .
git commit -m "Initial dummy commit"

# Add a remote for and fetch the old repo
git remote add -f old_a <OldA repo URL>

# Merge the files from old_a/master into new/master
git merge old_a/master

I could skip everything up to the git remote ... step as my target repository already had some history, so I started as follows:

git clone https://github.com/mdn/dom-examples.git
cd dom-examples

Running git log on this repository, I see the following commit history:

commit cdfd2aeb93cb4bd8456345881997fcec1057efbb (HEAD -> master, upstream/master)
Merge: 1c7ff6e dfe991b
Author:
Date:   Fri Aug 5 10:21:27 2022 +0200

    Merge pull request #143 from mdn/sideshowbarker/webgl-sample6-UNPACK_FLIP_Y_WEBGL

    “Using textures in WebGL”: Fix orientation of Firefox logo

commit dfe991b5d1b34a492ccd524131982e140cf1e555
Author:
Date:   Fri Aug 5 17:08:50 2022 +0900

    “Using textures in WebGL”: Fix orientation of Firefox logo

    Fixes <https://github.com/mdn/content/issues/10132>

commit 1c7ff6eec8bb0fff5630a66a32d1b9b6b9d5a6e5
Merge: be41273 5618100
Author:
Date:   Fri Aug 5 09:01:56 2022 +0200

    Merge pull request #142 from mdn/sideshowbarker/webgl-demo-add-playsInline-drop-autoplay

    WebGL sample8: Drop “autoplay”; add “playsInline”

commit 56181007b7a33907097d767dfe837bb5573dcd38
Author:
Date:   Fri Aug 5 13:41:45 2022 +0900

With the current setup, I could continue from the git remote command, but I wondered if the current directory contained files or folders that would conflict with those in the service worker repository. I searched around some more to see if anyone else had run into this same situation but did not find an answer. Then it hit me! I need to prepare the service worker repo to be moved.

What do I mean by that? I need to create a new directory in the root of the sw-test repo called service-worker/sw-test and move all relevant files into this new subdirectory. This will allow me to safely merge it into dom-examples as everything is contained in a subfolder already.

To get started, I need to clone the repo we want to merge into dom-examples.

git clone https://github.com/mdn/sw-test.git
cd sw-test

Ok, now we can start preparing the repo. The first step is to create our new subdirectory.

mkdir service-worker
mkdir service-worker/sw-test

With this in place, I simply need to move everything in the root directory to the subdirectory. To do this, we will make use of the move (mv) command:

NOTE: Do not yet run any of the commands below at this stage.


# enable extendedglob for ZSH
set -o extendedglob
mv ^sw-test(D) service-worker/sw-test

The above command is a little more complex than you might think. It uses a negation syntax. The next section explains why we need it and how to enable it.

How to exclude subdirectories when using mv

While the end goal seemed simple, I am pretty sure I grew a small animal’s worth of grey hair trying to figure out how to make that last move command work. I read many StackOverflow threads, blog posts, and manual pages for the different commands with varying amounts of success. However, none of the initial set of options quite met my needs. I finally stumbled upon two StackOverflow threads that brought me to the answer.

To spare you the trouble, here is what I had to do.

First, a note. I am on a Mac using ZSH (since macOS Catalina, this is now the default shell). Depending on your shell, the instructions below may differ.

For new versions of ZSH, you use the set -o and set +o commands to enable and disable settings. To enable extendedglob, I used the following command:


# Yes, this _enables_ it
set -o extendedglob

On older versions of ZSH, you use the setopt and unsetopt commands.

setopt extendedglob

With bash, you can achieve the same using the following command:

shopt -s extglob

Why do you even have to do this, you may ask? Without this, you will not be able to use the negation operator I use in the above move command, which is the crux of the whole thing. If you do the following, for example:

mkdir service-worker
mv * service-worker/sw-test

It will “work,” but you will see an error message like this:

mv: rename service-worker to service-worker/sw-test/service-worker: Invalid argument

We want to tell the operating system to move everything into our new subfolder except the subfolder itself. We, therefore, need this negation syntax. It is not enabled by default because it could cause problems if file names contain some of the extendedglob patterns, such as ^. So we need to enable it explicitly.

NOTE: You might also want to disable it after completing your move operation.

Now that we know how and why we want extendedglob enabled, we move on to using our new powers.

NOTE: Do not yet run any of the commands below at this stage.

mv ^sw-test(D) service-worker/sw-test

The above means:

  • Move all the files in the current directory into service-worker/sw-test.
  • Do not try to move the service-worker directory itself.
  • The (D) option tells the move command to also move all hidden files, such as .gitignore, and hidden folders, such as .git.

NOTE: I found that if I typed mv ^sw-test and pressed tab, my terminal would expand the command to mv CODE_OF_CONDUCT.md LICENSE README.md app.js gallery image-list.js index.html service-worker star-wars-logo.jpg style.css sw.js. If I typed mv ^sw-test(D) and pressed tab, it would expand to mv .git .prettierrc CODE_OF_CONDUCT.md LICENSE README.md app.js gallery image-list.js index.html service-worker star-wars-logo.jpg style.css sw.js. This is interesting because it clearly demonstrates what happens under the hood. This allows you to see the effect of using (D) clearly. I am not sure whether this is just a native ZSH thing or one of my terminal plugins, such as Fig. Your mileage may vary.

Handling hidden files and creating a pull request

While it is nice to be able to move all of the hidden files and folders like this, it causes a problem. Because the .git folder is transferred into our new subfolder, our root directory is no longer seen as a Git repository. This is a problem.

Therefore, I will not run the above command with (D) but instead move the hidden files as a separate step. I will run the following command instead:

mv ^(sw-test|service-worker) service-worker/sw-test

At this stage, if you run ls it will look like it moved everything. That is not the case because the ls command does not list hidden files. To do that, you need to pass the -A flag as shown below:

ls -A

You should now see something like the following:

❯ ls -A
.git           .prettierrc    service-worker

Looking at the above output, I realized that I should not need to move the .git folder. All I needed to do now was to run the following command:

mv .prettierrc service-worker

After running the above command, ls -A will now output the following:

❯ ls -A
.git simple-service-worker

Time to do a little celebration dance 😁

We can move on now that we have successfully moved everything into our new subdirectory. However, while doing this, I realized I forgot to create a feature branch for the work.

Not a problem. I just run git switch -C prepare-repo-for-move. Running git status at this point should output something like this:

❯ git status
On branch prepare-repo-for-move
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	deleted:    .prettierrc
	deleted:    CODE_OF_CONDUCT.md
	deleted:    LICENSE
	deleted:    README.md
	deleted:    app.js
	deleted:    gallery/bountyHunters.jpg
	deleted:    gallery/myLittleVader.jpg
	deleted:    gallery/snowTroopers.jpg
	deleted:    image-list.js
	deleted:    index.html
	deleted:    star-wars-logo.jpg
	deleted:    style.css
	deleted:    sw.js

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	service-worker/

no changes added to commit (use "git add" and/or "git commit -a")

Great! Let’s add our changes and commit them.

git add .
git commit -m 'Moved all source files into new subdirectory'

Now we want to push our changes and open a pull request.

Woop! Let’s push:

git push origin prepare-repo-for-move

Head over to your repository on GitHub. You should see a banner like “prepare-repo-for-move had recent pushes less than a minute ago” and a “Compare & pull request” button.

Click the button and follow the steps to open the pull request. Once the pull request is green and ready to merge, go ahead and merge!

NOTE: Depending on your workflow, this is the point to ask a team member to review your proposed changes before merging. It is also a good idea to have a look over the changes in the “Files changed” tab to ensure nothing is part of the pull request you did not intend. If any conflicts prevent your pull request from being merged, GitHub will warn you about these, and you will need to resolve them. This can be done directly on GitHub.com or locally and pushed to GitHub as a separate commit.
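If you want to double-check locally what the pull request contains, a diffstat against your default branch gives a quick summary. Swap main for gh-pages (or whatever your default branch is called) as needed:

# Summarize what the feature branch changes relative to the default branch
git diff --stat main...prepare-repo-for-move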

When you head back to the code view on GitHub, you should see our new subdirectory and the .gitignore file.

With that, our repository is ready to move.

Merging our repositories

Back in the terminal, you want to switch back to the main branch:

git switch main

You can now safely delete the feature branch and pull down the changes from your remote.

git branch -D prepare-repo-for-move
git pull origin main

Running ls -A after pulling the latest should now show the following:

❯ ls -A
.git           README.md      service-worker

Also, running git log in the root outputs the following:

commit 8fdfe7379130b8d6ea13ea8bf14a0bb45ad725d0 (HEAD -> gh-pages, origin/gh-pages, origin/HEAD)
Author: Schalk Neethling
Date:   Thu Aug 11 22:56:48 2022 +0200

    Create README.md

commit 254a95749c4cc3d7d2c7ec8a5902bea225870176
Merge: f5c319b bc2cdd9
Author: Schalk Neethling
Date:   Thu Aug 11 22:55:26 2022 +0200

    Merge pull request #45 from mdn/prepare-repo-for-move

    chore: prepare repo for move to dom-examples

commit bc2cdd939f568380ce03d56f50f16f2dc98d750c (origin/prepare-repo-for-move)
Author: Schalk Neethling
Date:   Thu Aug 11 22:53:13 2022 +0200

    chore: prepare repo for move to dom-examples

    Prepping the repository for the move to dom-examples

commit f5c319be3b8d4f14a1505173910877ca3bb429e5
Merge: d587747 2ed0eff
Author: Ruth John
Date:   Fri Mar 18 12:24:09 2022 +0000

    Merge pull request #43 from SimonSiefke/add-navigation-preload

Here are the commands left over from the point where we diverged earlier on.

# Add a remote for and fetch the old repo
git remote add -f old_a <OldA repo URL>

# Merge the files from old_a/master into new/master
git merge old_a/master

Alrighty, let’s wrap this up. First, we need to move into the root of the repository we want to merge our project into. For our purposes here, this is the dom-examples directory. Once in the root of that directory, run the following:

git remote add -f swtest https://github.com/mdn/sw-test.git

NOTE: The -f tells Git to fetch the remote branches. The swtest part is a name you give to the remote, so this could really be anything.

After running the command, I got the following output:

❯ git remote add -f swtest https://github.com/mdn/sw-test.git
Updating swtest
remote: Enumerating objects: 500, done.
remote: Counting objects: 100% (75/75), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 500 (delta 35), reused 45 (delta 15), pack-reused 425
Receiving objects: 100% (500/500), 759.76 KiB | 981.00 KiB/s, done.
Resolving deltas: 100% (269/269), done.
From https://github.com/mdn/sw-test
 * [new branch]      gh-pages        -> swtest/gh-pages
 * [new branch]      master          -> swtest/master
 * [new branch]      move-prettierrc -> swtest/move-prettierrc
 * [new branch]      rename-sw-test  -> swtest/rename-sw-test

NOTE: While we deleted the branch locally, this is not automatically synced with the remote, which is why you will still see a reference to the rename-sw-test branch. If you wanted to delete it on the remote, you would run the following from the root of that repository: git push origin :rename-sw-test. (If you have configured your repository “to automatically delete head branches”, this is done for you when the pull request is merged.)
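If you find the colon syntax hard to remember, Git also accepts an equivalent --delete flag:

# Delete the rename-sw-test branch on the origin remote
git push origin --delete rename-sw-test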

Only a few commands left.

NOTE: Do not run the command below just yet; read on to see why first.

git merge swtest/gh-pages

Whoops! When I ran the above, I got the following error:

❯ git merge swtest/gh-pages
fatal: refusing to merge unrelated histories

But merging unrelated histories is pretty much exactly what I do want here. Refusing to merge them is the default behavior of the merge command, but you can pass a flag to allow it:

git merge swtest/gh-pages --allow-unrelated-histories

NOTE: Why gh-pages? More often than not, the branch you will merge here will be main, but for this particular repository the default branch was named gh-pages. It used to be that when using GitHub Pages, you needed a branch called gh-pages that would then be automatically deployed by GitHub to a URL like mdn.github.io/sw-test.

After running the above, I got the following:

❯ git merge swtest/gh-pages --allow-unrelated-histories
Auto-merging README.md
CONFLICT (add/add): Merge conflict in README.md
Automatic merge failed; fix conflicts and then commit the result.

Ah yes, of course. Our current project and the one we are merging both contain a README.md, so Git is asking us to decide what to do. If you open up the README.md file in your editor, you will notice something like this:

<<<<<<< HEAD

=======

There might be a number of these in the file. You will also see entries like >>>>>>> swtest/gh-pages. These markers highlight the conflicts that Git is not sure how to resolve. You could go through and resolve them manually. In this instance, I just want what is in the README.md at the root of the dom-examples repo, so I will clean up the conflict markers or copy the content of that README.md from GitHub.
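Because I want to keep the dom-examples version of the file wholesale, a shortcut I could use instead of hand-editing the markers is to check out “our” side of the conflicted file (the branch we are currently on) and stage it:

# Keep the README.md from the current branch (dom-examples) and mark the conflict as resolved
git checkout --ours -- README.md
git add README.md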

As Git requested, we will add and commit our changes.

git add .
git commit -m 'merging sw-test into dom-examples'

The above resulted in the following output:

❯ git commit
[146-chore-move-sw-test-into-dom-examples 4300221] Merge remote-tracking branch 'swtest/gh-pages' into 146-chore-move-sw-test-into-dom-examples

If I now run git log in the root of the directory, I see the following:

commit 4300221fe76d324966826b528f4a901c5f17ae20 (HEAD -> 146-chore-move-sw-test-into-dom-examples)
Merge: cdfd2ae 70c0e1e
Author: Schalk Neethling
Date:   Sat Aug 13 14:02:48 2022 +0200

    Merge remote-tracking branch 'swtest/gh-pages' into 146-chore-move-sw-test-into-dom-examples

commit 70c0e1e53ddb7d7a26e746c4a3412ccef5a683d3 (swtest/gh-pages)
Merge: 4b7cfb2 d4a042d
Author: Schalk Neethling
Date:   Sat Aug 13 13:30:58 2022 +0200

    Merge pull request #47 from mdn/move-prettierrc

    chore: move prettierrc

commit d4a042df51ab65e60498e949ffb2092ac9bccffc (swtest/move-prettierrc)
Author: Schalk Neethling
Date:   Sat Aug 13 13:29:56 2022 +0200

    chore: move prettierrc

    Move `.prettierrc` into the siple-service-worker folder

commit 4b7cfb239a148095b770602d8f6d00c9f8b8cc15
Merge: 8fdfe73 c86d1a1
Author: Schalk Neethling
Date:   Sat Aug 13 13:22:31 2022 +0200

    Merge pull request #46 from mdn/rename-sw-test

Yahoooo! That is the history from sw-test now in our current repository! Running ls -A now shows me:

❯ ls -A
.git                           indexeddb-examples             screen-wake-lock-api
.gitignore                     insert-adjacent                screenleft-screentop
CODE_OF_CONDUCT.md             matchmedia                     scrolltooptions
LICENSE                        media                          server-sent-events
README.md                      media-session                  service-worker
abort-api                      mediaquerylist                 streams
auxclick                       payment-request                touchevents
canvas                         performance-apis               web-animations-api
channel-messaging-basic        picture-in-picture             web-crypto
channel-messaging-multimessage pointer-lock                   web-share
drag-and-drop                  pointerevents                  web-speech-api
fullscreen-api                 reporting-api                  web-storage
htmldialogelement-basic        resize-event                   web-workers
indexeddb-api                  resize-observer                webgl-examples

And if I run ls -A service-worker/, I get:

❯ ls -A service-worker/
simple-service-worker

And finally, running ls -A service-worker/simple-service-worker/ shows:

❯ ls -A service-worker/simple-service-worker/
.prettierrc        README.md          image-list.js      style.css
CODE_OF_CONDUCT.md app.js             index.html         sw.js
LICENSE            gallery            star-wars-logo.jpg

All that is left is to push to remote.

git push origin 146-chore-move-sw-test-into-dom-examples

NOTE: Do not squash merge this pull request, or else all commits will be squashed together as a single commit. Instead, you want to use a merge commit. You can read all the details about merge methods in their documentation on GitHub.
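If you happen to use the GitHub CLI, you could also make the merge-commit choice explicit from the terminal. The branch reference below is the one from this walkthrough, so adjust it for your own pull request:

# Merge the pull request with a merge commit (not a squash) using the GitHub CLI
gh pr merge 146-chore-move-sw-test-into-dom-examples --merge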

After you merge the pull request, go ahead and browse the commit history of the repo. You will find that the commit history is intact and merged. o/\o You can now go ahead and either delete or archive the old repository.
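If you go the archive route and also use the GitHub CLI, something like the following should do it (the repository settings page on GitHub works just as well):

# Archive the old repository so it becomes read-only
gh repo archive mdn/sw-test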

At this point, the remote we added for the old repo no longer serves a purpose, so we can safely remove it.

git remote rm swtest

In Conclusion

The steps to accomplish this task are then as follows:

# Clone the repository you want to merge
git clone https://github.com/mdn/sw-test.git
cd sw-test

# Create your feature branch
git switch -C prepare-repo-for-move
# NOTE: With older versions of Git you can run:
# git checkout -b prepare-repo-for-move

# Create directories as needed. You may only need one, not two as
# in the example below.
mkdir service-worker
mkdir service-worker/sw-test

# Enable extendedglob so we can use negation
# The command below is for modern versions of ZSH. See earlier
# in the post for examples for bash and older versions of ZSH
set -o extendedglob

# Move everything except hidden files into your subdirectory,
# also, exclude your target directories
mv ^(sw-test|service-worker) service-worker/sw-test

# Move any of the hidden files or folders you _do_ want
# to move into the subdirectory
mv .prettierrc service-worker

# Add and commit your changes
git add .
git commit -m 'Moved all source files into new subdirectory'

# Push your changes to GitHub
git push origin prepare-repo-for-move

# Head over to the repository on GitHub, open and merge your pull request
# Back in the terminal, switch to your `main` branch
git switch main

# Delete your feature branch
# This is not technically required, but I like to clean up after myself :)
git branch -D prepare-repo-for-move
# Pull the changes you just merged
git pull origin main

# Change to the root directory of your target repository
# If you have not yet cloned your target repository, change
# out of your current directory
cd ..

# Clone your target repository
git clone https://github.com/mdn/dom-examples.git
# Change directory
cd dom-examples

# Create a feature branch for the work
git switch -C 146-chore-move-sw-test-into-dom-examples

# Add your merge target as a remote
git remote add -f swtest https://github.com/mdn/sw-test.git

# Merge the merge target and allow unrelated history
git merge swtest/gh-pages --allow-unrelated-histories

# Add and commit your changes
git add .
git commit -m 'merging sw-test into dom-examples'

# Push your changes to GitHub
git push origin 146-chore-move-sw-test-into-dom-examples

# Open the pull request, have it reviewed by a team member, and merge.
# Do not squash merge this pull request, or else all commits will be
# squashed together as a single commit. Instead, you want to use a merge commit.

# Remove the remote for the merge target
git remote rm swtest

Hopefully, you now know how to exclude subdirectories when using the mv command, how to set and view shell options, and how to merge the contents of one Git repository into another while preserving the entire commit history, using only basic Git commands.

The post Merging two GitHub repositories without losing commit history appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Thunderbird BlogWe Asked AI To Create These Beautiful Thunderbird Wallpapers

The buzz around AI-generated artwork continues to grow with each passing week. As machine-learning-driven AI systems like DALL·E 2, Midjourney, and Stable Diffusion continue to evolve, some truly awe-inspiring creations are being unleashed onto the world. We wanted to tap into that creative energy to produce some unique desktop wallpapers for the Thunderbird community!

So, we fed Midjourney the official Thunderbird logo and a series of descriptive text prompts to produce the stunning desktop wallpapers you see below. (Can you spot which one is also inspired by our friends at Firefox?)

Dozens of variations and hundreds of images later, we narrowed it down to four designs. Aside from adding a small Thunderbird watermark in the lower corners of each wallpaper, these images are exactly as the Midjourney AI produced them.

View And Download The Thunderbird Wallpapers

We did take the liberty of upscaling each image to UltraHD resolution, meaning they’ll look fantastic even on your 4K monitors. And of course, on your 1080p or 1440p panels as well.

Just click each image below to download the full-resolution file.

If you love them, share this page and tell people about Thunderbird! And if you end up using them as your PC wallpaper, send us a screenshot on Mastodon or Twitter.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post We Asked AI To Create These Beautiful Thunderbird Wallpapers appeared first on The Thunderbird Blog.

The Mozilla Thunderbird BlogThunderbird Tip: Rearrange The Order Of Your Accounts

One of Thunderbird’s strengths is managing multiple email accounts, newsgroup accounts, and RSS feed subscriptions. But how do you display those accounts in the order YOU want? It’s super easy, and our new Thunderbird Tips video (viewable below) shows you how in less than one minute!

But First: Our YouTube Channel!

We’re currently building the next exciting era of Thunderbird, and developing a Thunderbird experience for mobile. But we’re also trying to put out more content across various platforms to keep you informed — and maybe even entertained!

To that end, we are relaunching our YouTube channel with a forthcoming new podcast, and a series of tips and tricks to help you get the most out of Thunderbird. You can subscribe here. Help us reach 1000 subscribers by the end of August!


Bonus accessibility tips:

1) The keyboard shortcut for this command is ALT + ⬆/⬇ (OPTION + ⬆/⬇ on macOS).
2) Account Settings can also be accessed via the Spaces toolbar, App Menu, and Account Central.

As always, thanks for using Thunderbird! And thanks for making Thunderbird possible with your support and donations.

The post Thunderbird Tip: Rearrange The Order Of Your Accounts appeared first on The Thunderbird Blog.

Open Policy & AdvocacyIt’s Time to Pass U.S. Federal Privacy Legislation

Despite being a powerhouse of technology and innovation, the U.S. lags behind global counterparts when it comes to privacy protections. Every day, people face the real possibility that their very personal information could fall into the hands of third parties seeking to weaponize it against them.

At Mozilla, we strive to not only empower people with tools to protect their own privacy, but also to influence other companies to adopt better privacy practices. That said, we can’t solve every problem with a technical fix or rely on companies to voluntarily prioritize privacy.

The good news? After decades of failed attempts and false starts, real reform may finally be on the horizon. We’ve recently seen more momentum than ever for policy changes that would provide meaningful protections for consumers and more accountability from companies. It’s time that we tackle the real-world harms that emerge as a result of pervasive data collection online and abusive privacy practices.

Strong federal privacy legislation is critical in creating an environment where users can truly benefit from the technologies they rely on without paying the premium of exploitation of their personal data. Last month, the House Committee on Energy & Commerce took the important step of voting the bipartisan American Data Privacy and Protection Act (ADPPA) out of committee and advancing the bill to the House floor. Mozilla supports these efforts and encourages Congress to pass the ADPPA.

Stalling on federal policy efforts would only hurt American consumers. We look forward to continuing our work with policymakers and regulators to achieve meaningful reform that restores trust online and holds companies accountable. There’s more progress than ever before towards a solution. We can’t miss this moment.

The post It’s Time to Pass U.S. Federal Privacy Legislation appeared first on Open Policy & Advocacy.

The Mozilla Thunderbird BlogThunderbird Time Machine: Windows XP + Thunderbird 1.0

Let’s step back into the Thunderbird Time Machine, and transport ourselves back to November 2004. If you were a tech-obsessed geek like me, maybe you were upgrading Windows 98 to Windows XP. Or playing Valve’s legendary shooter Half-Life 2. Maybe you were eagerly installing a pair of newly released open-source software applications called Firefox 1.0 and Thunderbird 1.0…


As we work toward a new era of Thunderbird, we’re also revisiting its roots. Because the entirety of Thunderbird’s releases and corresponding release notes have been preserved, I’ve started a self-guided tour of Thunderbird’s history. Read the first post in this series here:


“Thunderbirds Are GO!”

Before we get into the features of Thunderbird 1.0, I have to call out the endearing credits reel that could be viewed from the “About Thunderbird” menu. You could really feel that spark of creativity and fun from the developers:

<figcaption>Yes, we have a YouTube channel! Subscribe for more. </figcaption>

Windows XP + 2 New Open-Source Alternatives

Thunderbird 1.0 launched in the prime of Windows XP, and it had a companion for the journey: Firefox 1.0! Though both of these applications had previous versions (with different logos and different names), their official 1.0 releases were milestones. Especially because they were open-source and represented quality alternatives to existing “walled-garden” options.

<figcaption>Thunderbird 1.0 and Firefox 1.0 installers on Windows XP</figcaption>

(Thunderbird was, and always has been, completely free to download and use. But the internet was far less ubiquitous than it is now, so we offered to mail users within the United States a CD-ROM for $5.95.)

Without a doubt, Mozilla Thunderbird is a very good e-mail client. It sends and receives mail, it checks it for spam, handles multiple accounts, imports data from your old e-mail application, scrapes RSS news feeds, and is even cross-platform.

Thunderbird 1.0 Review | Ars Technica

Visually, it prided itself on having a pretty consistent look across Windows, Mac OS X, and Linux distributions like CentOS 3.3 or Red Hat. And the iconography was updated to be more colorful and playful, in a time when skeuomorphic design reigned supreme.

Groundbreaking Features In Thunderbird 1.0

Thunderbird 1.0 launched with a really diverse set of features. It offered add-ons (just like its brother Firefox) to extend functionality. But it also delivered some cutting-edge stuff like:

  • Adaptive junk mail controls
  • RSS integration (this was only 2 months after podcasts first debuted)
  • Effortless migration from Outlook Express and Eudora
  • A Global Inbox that could combine multiple POP3 email accounts
  • Message Grouping (by date, sender, priority, custom labels, and more)
  • Automatic blocking of remote image requests from unknown senders
<figcaption>Thunderbird 1.0 About Page</figcaption>

Feeling Adventurous? Try It For Yourself!

If you have a PowerPC Mac, a 32-bit Linux distribution, or any real or virtualized version of Windows after Windows 98, you can take your own trip down memory lane. All of Thunderbird’s releases are archived here.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Thunderbird Time Machine: Windows XP + Thunderbird 1.0 appeared first on The Thunderbird Blog.

Firefox UXHow We’ve Used Figma to Evolve Our Content Design Practice

Including tips and tools for UX writers

A couple of keys are suspended  from a row of colorful legos that form a key hook.<figcaption>Photo by Scott Webb on Unsplash</figcaption>

Firefox UX adopted Figma about two years ago. In this post, I’ll share how our content design team has used the tool to shape collaboration and influence how design gets done.

Our Figma Journey: From Coach House to Co-Creation

Before Figma, the Firefox content design team worked in a separate space from visual and interaction designers. Our own tools — primarily the Google suite of products — were like a copy coach house.

When it came time to collaborate with UX design colleagues, we had to walk to the main design house (like Sketch, for example). We had to ask for help with the fancy coffee machine (Can you update this label for me? Is it too late to make a change to the layout to fit our content needs?). We felt a bit like guests: out of our element.

A sketch of a coach house behind a main house.<figcaption>Image source</figcaption>

This way of working made collaboration more complex. Content and UX design were using different tools, working in silos and simultaneously, to create one experience. If a content designer needed to explore layout changes, we would find ourselves painstakingly recreating mocks in other tools.

Google Slide screenshot with a recreation of the Firefox homepage design, with comments calling our requested changes.<figcaption>To propose copy and design changes to the Firefox homepage, I had to re-create the mock in Google Slides.</figcaption>

Today, thanks to Figma, visual, interaction, and content design can co-create within the same space. This approach better reflects how our disciplines can and should collaborate:

  • In early stages, when exploring ideas and shaping the architecture of an experience, content and design work together in a shared tool.
  • When it’s time to refine visuals, annotate interactions, or polish copy, our disciplines can branch off in different tools as needed, while staying in sync and on the same page. Then, we come back together to reflect the final visual and copy design in Figma.
“As the design systems team, it is very important to foster and support a culture of collaboration within the Firefox org for designing and making decisions together. Figma is a flexible design tool that has allowed the design systems team to partner more closely with teammates like content design, which ultimately means we build better products.” — Jules Simplicio, Design Systems, Firefox

Figma, Two Years Later: What We’ve Learned

1. Content design still needs other tools to do our jobs well

While Figma gave us the keys to the main design house, we continue to use other tools for copy development. There are a few steps critical to our process that Figma just can’t do.

Tools like Google Docs and Google Slides continue to serve us well for:

  • Content reviews with stakeholders like product management, legal, and localization
  • Aligning cross-functionally on content direction
  • Documenting rationale and context to tee up content recommendations
  • Managing comments and feedback

There’s no silver bullet tool for us, and that reflects the diversity of our stakeholders and collaborators, within and outside of UX. For now, we’ve accepted that our discipline will continue to need multiple tools, and we’ve identified which ones are best for which purpose.

Table describing how and when to use Google Docs, Miro, Google Slides, and Figma.<figcaption>Guidelines for tool usage. How and when to use what will flex according to context. Link to file.</figcaption>

Our pull towards tools that focus on the words also isn’t a bad or surprising thing. Focusing on language forces us to focus on understanding, first and foremost.

“Design teams use software like Sketch or XD to show what they’re designing — and those tools are great — but it’s easy to get caught up in the details… there’s no selector for figuring out what you’re working on or why it matters. It’s an open world — a blank slate. It’s a job for words… so before you start writing button labels, working on voice and tone guidelines, use the world’s most understated and effective design tool: a text editor.” Writing is Designing, Metts & Welfle

2. Content designers don’t need to become Figma experts

As a content designer, our focus is still on content: how it’s organized, what it includes, what it says. We’ve learned we don’t need to get into the Figma weeds of interaction or visual design, like creating our own components or variants.

However, we’ve found these basic functions helpful for collaboration:

  1. Using a low-fidelity wireframe library (see below)
  2. Inserting a component from a library
  3. Copying a component
  4. Turning things on or off within a complex component (example: hide a button)
  5. Editing text within components
  6. Creating sticky notes, pulling from a project template file (see below)
  7. Exporting frames for sharing
  8. Following someone in a file (for presentation and collaboration)
  9. Using plugins (example: for finding text within a file)

You can learn all these things in Figma’s public forums. But when we get stuck, we’ve saved ourselves time and frustration by asking a UX design colleague to troubleshoot. We’ve found that designers, especially those who work in design systems, are happy to help. We recommend creating or joining a Figma Slack channel at work to share ideas and ask questions.

3. Content designers DO need a low-fidelity library

Figma made it much easier for UX teams, including content design, to work in high fidelity. But this can be a blessing and a curse. It’s nice to be able to make quick adjustments to a component. However, when we’re in the early stages of a project we need to focus on ideas, strategy, and structure, rather than polish. We found ourselves getting distracted by details, and turning to other tools (like pen and paper) for visual explorations.

To solve for this, we partnered with the design systems team to create a low-fidelity wireframing library. We built this together, tested it, and then rolled it out to the broader UX team. As a result, we now have a tool within Figma that allows us to create mocks quickly and collaboratively. We created our custom library specific to browser design but Figma has many wireframing kits that you can use (like this one).

Our low-fidelity library democratizes the design process in a way that’s especially helpful for us writers: we can co-create with UX design colleagues using the same easy tools. It also helps people understand the part we play in determining things like information architecture. More broadly, working in low fidelity prevents stakeholders like engineering from thinking something is ready for hand-off when it’s not.

Screenshot of low fidelity components: browser chrome, text treatments, image.<figcaption>Example components from our low-fidelity library.</figcaption>

4. File structure is important

As new roommates, when we first started collaborating with UX designers in Figma, there was some awkwardness. We were used to working in separate tools. The Figma design canvas yawned before us. How would we do this? What the fig?

We collaborated with design systems to build a project template file. Of course, having a standard file structure and naming system is good documentation hygiene for any design team, but the template file also supports cross-discipline UX collaboration:

  • It puts some structure and process in place for that wide open Figma space, including identifying the end-to-end steps of the design process.
  • It gives us a shared set of tools to align on goals, capture thoughts, and track progress.
  • It helps concretize and solidify content design’s place and role within that process.
Screenshot of a portion of the Figma Project Template, which includes project name, team members, a summary of the problem and requirements and due date, and a space to capture current state designs.<figcaption>Screenshot of our Firefox Figma Project Template.</figcaption>

The file template is like a kit of parts. You don’t need all the parts for every design project. Certain pieces are particularly helpful for collaboration:

  • Status strip punch list to track the design and content review process. You can adjust the pill for each step as you move through the review process.
<figcaption>Our Firefox design status strip. Note, steps may happen in a different order (for example, localization is often earlier in the process).</figcaption>
  • Summary card: This asks UX and content designers to summarize the problem and share relevant documentation. As content designers, this helps us context switch more quickly (as we frequently need to do).
Summary card component which includes problem to be solved, requirements, due date, solution, and links to more information.<figcaption>Summary card component in the Firefox Project Template.</figcaption>
  • Standardized layout: Design files can quickly get out of control, in particular for projects with a lot of exploration and iteration. The file suggests a vertical structure in which you move your explorations to a space below, with final design and copy at the top. This kind of documentation is helpful for cross-functional collaborators like product management and engineering so they can orient themselves to the designs and understand status, like what’s final and what’s still work-in-progress.
  • Content frame: This is a space to explore and document copy-specific issues like information architecture, localization, and terminology.
Content card which includes a space for copy iterations, as well as guidance to include notes on things like localization and terminology.<figcaption>Content card in the Firefox Project Template.</figcaption>
  • Sticky notes and call-outs. The comment function in Figma can be tricky to manage. Comments get lost in the sea of design, you can’t search them, and you lose a thread once they are closed. For all those reasons, we tend to prefer sticky notes and call-outs, especially for meatier topics.
A sticky-note component and call-out notes for Project Decisions, Critical Issues, and Open Questions.<figcaption>Sticky note and call-out cards in the Firefox Project Template.</figcaption>

Closing Thoughts

A tool is only as effective as the collaboration and process surrounding it. Our team is still figuring out the best way to use Figma and scale best practices.

At the end of the day, collaboration is about people. It’s messy and a continual work-in-progress, especially for content design. But, at least we’re in the same design house now, and it’s got good bones for us to continue defining and refining how our discipline does its work.

Thank you to Betsy Mikel, Brent G. Trotter, and Emily Wachowiak for reviewing this post. And thank you to our design systems collaborators, Jules Simplicio and Katie Caldwell, for all the work you do to make getting work done better.


How We’ve Used Figma to Evolve Our Content Design Practice was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mozilla L10NL10n Report: July 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

While the last months have been pretty quiet in terms of new content for Firefox, we’re approaching a new major release for 2022, and that will include new features and dedicated onboarding.

Part of the content has already started landing over the last few days; expect more in the coming weeks. In the meantime, make sure to check out the feature name guidelines for Firefox View and Colorways.

In terms of upcoming deadlines: Firefox 104 is currently in Beta and it will be possible to update translations up to August 14.

What’s new or coming up in mobile

Mobile releases now align more closely to desktop release schedules, so you may notice that target dates for these projects are the same in Pontoon. As with desktop, things are quiet now for mobile, but we’ll be seeing more strings landing in the coming weeks for the next major release.

What’s new or coming up in web projects

Firefox Relay website & add-on

We’re expanding Firefox Relay Premium into new locales across Europe: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Portugal, Slovakia, Slovenia, Spain, Sweden, and Switzerland. In order to deliver a truly great experience to our users in these new locales, we would like to make sure that users can utilize our products in the language they feel most comfortable with. Having these languages localized will take already complex topics like privacy and security and help connect more with users and offer them greater protections.

If you don’t see the product offered in the language in the markets above, maybe you can help by requesting to localize the product. Thank you for helping spread the word.

What’s new or coming up in Pontoon

  • When a 100% TM match is available, it now automatically appears in the editor if the string doesn’t have any translations yet.

    100% matches from Translation Memory now automatically appear in the editor

  • Before new users make their first contribution to a locale, they are now provided with guidelines. And when they submit their first suggestion, team managers get notified.

    Tooltip with guidelines for new contributors.

  • The Contributors page on the Team dashboard has been reorganized. Contributors are grouped by their role within the team, which makes it easier to identify and reach out to team managers.

    Team contributors grouped by role.

  • We have introduced a new list parameter in translate view URLs, which allows for presenting a selected list of strings in the sidebar.
  • Deadlines have been renamed to Target Dates.
  • Thanks to Eemeli for making a bunch of under-the-hood improvements, which make our codebase much easier to build on.

Events

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Open Policy & AdvocacyMozilla submits comments in OSTP consultation on privacy-preserving data sharing

Earlier this month, the US Office of Science and Technology Policy (OSTP) asked stakeholders to contribute to the development of a national strategy for “responsibly harnessing privacy-preserving data sharing and analytics to benefit individuals and society.” This effort offers a much-needed opportunity to advance privacy in online advertising, an industry that has not seen improvement in many years.

In our comments, we set out the work that Mozilla has undertaken over the past decade to shape the evolution of privacy preserving advertising, both in our products, and in how we engage with regulators and standards bodies.

Mozilla has often outlined that the current state of the web is not sustainable, particularly how online advertising works today. The ecosystem is broken. It’s opaque by design, rife with fraud, and does not serve the vast majority of those which depend on it – most importantly, the people who use the open web. The ways in which advertising is conducted today – through pervasive tracking, serial privacy violations, market consolidation, and lack of transparency – are not working and cause more harm than good.

At Mozilla, we’ve been working to drive the industry in a better direction through technical solutions. However, technical work alone can’t address disinformation, discrimination, societal manipulation, privacy violations, and more. A complementary regulatory framework is necessary to mitigate the most egregious practices in the ecosystem and ensure that the outcomes of such practices (discrimination, electoral manipulation, etc.) are untenable under law rather than due to selective product policy enforcement.

Our vision is a web which empowers individuals to make informed choices without their privacy and security being compromised.  There is a real opportunity now to improve the privacy properties of online advertising. We must draw upon the internet’s founding principles of transparency, public participation, and innovation. We look forward to seeing how OSTP’s national strategy progresses this vision.

The post Mozilla submits comments in OSTP consultation on privacy-preserving data sharing appeared first on Open Policy & Advocacy.

Blog of DataThis Week in Data: Python Environment Freshness

(“This Week in Glean Data” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

By: Perry McManis and Chelsea Troy

A note on audience: the intended reader for this post is a data scientist or analyst, product owner or manager, or similar who uses Python regularly but has not had the opportunity to work with engineering processes to the degree they may like. Experienced engineers may still benefit from the friendly reminder to keep their environments fresh and up-to-date.

When was the last time you remade your local Python environment? One month ago? Six months ago? 1997?

Wait, please, don’t leave. I know, I might as well have asked you when the last time you cleaned out the food trap in your dishwasher was and I apologize. But this is almost as important. Almost.

If you don’t recall when, go ahead and check when you made your currently most used environment. It might surprise you how long ago it was.

# See this helpful stack overflow post by Timur Shtatland: https://stackoverflow.com/a/69109373
Mac: conda env list -v -v -v | grep -v '^#' | perl -lane 'print $F[-1]' | xargs /bin/ls -lrtd
Linux: conda env list | grep -v '^#' | perl -lane 'print $F[-1]' | xargs ls -lrt1d
Windows: conda env list
# Find the top level directory of your envs, e.g. C:\Users\yourname\miniconda3\envs
Windows: dir /T:C C:\Users\yourname\miniconda3\envs

Don’t feel bad, though, if it does surprise you, or if the answer is one you’d not admit publicly. Python environments are hard. Not in the everything is hard until you know how way, but in the why doesn’t this work? This worked last week! way. And the impetus is often to just not mess with things. Especially if you have that one environment that you’ve been using for the last 4 years, you know the one you have propped up with popsicle sticks and duct tape? But I’d like to propose that you consider regularly remaking your environments, and that you build your own process for doing so.

It is my opinion that if you can, you should be working in a fresh environment.

Much like a best-by date, what counts as fresh is contextual. But if you start getting that when did I stand this env up? feeling, it’s time. Working in a fresh environment has a few benefits. Firstly, it makes it more likely that other folks will be able to easily duplicate it. Just as providing an accurate forecast becomes increasingly difficult the further you look into the future, the further you get from the date you completed a task in a changing ecosystem, the less likely it is that the task can be successfully completed again.

Perhaps even more relevant is that packages often release security updates, APIs improve, functionality that you originally had to implement yourself may even get an official release. Official releases, especially for higher level programming languages like Python, are often highly optimized. For many researchers, those optimizations are out of the scope of their work, and rightly so. But the included version of that calculation in your favorite stats package will not only have several engineers working on it to make it run as quickly as possible, now you have the benefit of many researchers testing it concurrently with you.

These issues can collide spectacularly in cases where people get stuck trying to replicate your environment due to a deprecated version of a requirement. And if you never update your own environment, it could take someone else bringing it up to you to even notice that one of the packages you are using is no longer available, or an API has been moved from experimental to release, or removed altogether.
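One quick way to spot this kind of drift before someone else reports it is to ask pip which of your installed packages have newer releases available:

# List installed packages that have a newer version available
pip list --outdated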

There is no best way of making fresh environments, but I have a few suggestions you might consider.

I will preface by saying that my preference is for command line tools, and these suggestions reflect that. Using graphical interfaces is a perfectly valid way to handle your environments, I’m just not that familiar with them, so while I think the ideas of environment freshness still apply, you will have to find your own way with them. And more generally, I would encourage you to develop your own processes anyway. These are more suggestions on where to start, and not all of them need find their way into your routines.

If you are completely unfamiliar with these environments, and you’ve been working in your base environment, I would recommend in the strongest terms possible that you immediately back it up. Python environments are shockingly easy to break beyond repair and tend to do so at the worst possible time in the worst possible way. Think live demo in front of the whole company that’s being simulcast on youtube. LeVar Burton is in the audience. You don’t want to disappoint him, do you? The easiest way to quickly make a backup is to create a new environment through the normal means, confirm it has everything you need in it, and make a copy of the whole install folder of the original.

If you’re not in the habit of making new environments, the next time you need to do an update for a package you use constantly, consider making an entirely new environment for it. As an added bonus this will give you a fallback option in case something goes wrong. If you’ve not done this before, one of the easiest ways is to utilize pip’s freeze function.

pip list --format=freeze > requirements.txt
conda create -n {new env name}
conda activate {new env name}
pip install -r requirements.txt
pip install {package} --upgrade

When you create your requirements.txt file, it’s usually a pretty good idea to go through it. A common gotcha is that you might see local file paths in place of version numbers. That is why we used pip list here. But it never hurts to check.

Take a look at your version numbers: are any of these really out of date? That is something we want to fix, but often some of our important packages have dependencies that require specific versions, and we have to be careful not to break those dependencies. We can work around that, while still getting the newest versions we can, by removing those dependencies from our requirements file and installing our most critical packages separately. That way we let pip or conda get the newest versions of everything that will work. For example, if I need pandas, and I know pandas depends on numpy, I can remove both from my requirements document and let pip handle my dependencies for me.

pip install --upgrade pip
pip install -r requirements.txt
pip install pandas

Something you may notice is that this block looks like something that could be packaged up, since it’s just a collection of commands. And indeed it can be. We can put this in a shell script and, with a bit of work, add a command line option to more or less fire off a new environment for us in one go. This can also be expanded with shell commands for cases where we may need a compiler, a tool from another language, even a GitHub repo, and so on. Assuming we have a way to run shell scripts, let’s call this create_env.sh:

conda deactivate
conda create -n $1
conda activate $1

apt install gcc
apt install g++

pip install --upgrade pip
pip install pystan==2.19.1.1
python3 -m pip install prophet --no-cache-dir

pip install -r requirements.txt
pip install scikit-learn

git clone https://github.com/mozilla-mobile/fenix.git

cd ./fenix

echo "Finished creating new environment: $1"

By adding some flag handling, we can now use bash to call sh create_env.sh newenv and be ready to go.
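As a rough sketch, the flag handling could be as small as a guard at the top of create_env.sh; the usage message and exit code here are just illustrative choices:

# Fail early if no environment name was passed in
if [ -z "$1" ]; then
  echo "Usage: sh create_env.sh <env-name>"
  exit 1
fi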

It will likely take some experimentation the first time or two. But once you know the steps you need to follow, getting new environment creation down to just a few minutes is as easy as packaging your steps up. And if you want to share, you can send your setup script rather than a list of instructions. Including this in your repository with a descriptive name and a mention in your README.md is a low effort way to help other folks get going with less friction.

There are of course tons of other great ways to package environments, like Docker. I would encourage you to read into them if you are interested in reproducibility beyond the simpler case of just rebuilding your local environment with regularity. There are a huge number of fascinating and immensely powerful tools out there to explore, should you wish to bring even more rigor to your Python working environments.

In the end, the main thing is to work out a fast and repeatable method that enables you to get your environment up and running again quickly from scratch. One that works for you. Then, when you get the feeling that your environment has been around for a while, you won’t have to worry about making a new environment being an all-day or, even worse, all-week affair. By investing in your own process, you will save yourself loads of time in the long run, and you may even save your colleagues some as well. And hopefully, you will spare yourself some frustration, too.

Like anything, the key to working out your process is repetitions. The first time will be hard, though maybe some of the tips here can make it a bit easier. But the second time will be easier. And after a handful, you will have developed a framework that will allow you to make your work more portable, more resilient and less angering, even beyond Python.

SUMO BlogIntroducing Smith Ellis

Hi everybody,

I’m so happy to introduce our latest addition to the Customer Experience team. Smith Ellis is going to join forces with Tasos and Ryan to develop our support platform. It’s been a while since we got more than 2 engineers on the team, so I’m personally excited to see what we can unlock with more engineers.

Here’s a bit of an intro from Smith:

Hello Mozillians!  I’m Smith Ellis, and I’m joining the Customer Experience team as a Software Engineer. I’m more than happy to be here. I’ve held many technical and management roles in the past and have found that doing work that makes a difference is what makes me happy. My main hobbies are electronics, music, video games, programming, welding and playing with my kids.

I look forward to meeting you and making a difference with Mozilla.

Please join me to congratulate and welcome Smith into our SUMO family!

SUMO BlogWhat’s up with SUMO – July

Hi everybody,

There was a lot going on in Q2, but we also accomplished many things! I hope you’re able to celebrate what you’ve contributed, and let’s move forward to Q3 with renewed energy and excitement!

Welcome note and shout-outs

  • Welcome kaie, alineee, lisah9333, and Denys. Thanks for joining the KB world.
  • Thanks to Paul, Infinity, Anokhi, Noah, Wes, and many others for supporting Firefox users in the iOS platform.
  • Shout-outs to Kaio Duarte for doing a great job on being a social support moderator.
  • Congratulations to YongHan for getting both the l10n and forum badge in 2022. Keep up the good work!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • You should now be able to watch the scrum meeting without an NDA requirement. Subscribe to the AirMozilla folder if you haven’t already, so you will get notifications each time we add a new recording.
  • How is our localization community doing? Check out the result of the SUMO localization audit that we did in Q2. You can also watch the recording of my presentation at the previous community meeting in June.
  • Are you enthusiastic about helping people? We need more contributors for social and mobile support! 
  • Are you a Thunderbird contributor? We need you! 
  • Say hi to Ryan Johnson, our latest addition to the CX team!

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in June! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla.
  • Consider subscribing to the Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Also, check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month Page views Vs previous month
May 2022 7,921,342 3.19%
Jun 2022 7,787,739 -1.69%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale May 2022 pageviews (*) Jun 2022 pageviews (*) Localization progress (per Jul, 11)(**)
de 7.93% 7.94% 97%
zh-CN 6.86% 6.69% 100%
fr 6.22% 6.17% 89%
es 6.28% 5.93% 29%
pt-BR 5.26% 4.80% 52%
ru 4.03% 4.00% 77%
ja 3.75% 3.63% 46%
zh-TW 2.07% 2.26% 4%
It 2.31% 2.20% 100%
pl 1.97% 1.96% 87%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

-TBD-

Top 5 forum contributors in the last 90 days: 

Social Support

Channel Total incoming conv Conv interacted Resolution rate
May 2022 376 222 53.30%
Jun 2022 319 177 53.04%

Top 5 Social Support contributors in the past 2 months: 

  1. Kaio Duarte
  2. Bithiah K
  3. Magno Reis
  4. Christophe Villeneuve
  5. Felipe Koji

Play Store Support

Channel May 2022 (total priority reviews / reviews replied) Jun 2022 (total priority reviews / reviews replied)
Firefox for Android 570 / 474 648 / 465
Firefox Focus for Android 267 / 52 236 / 39
Firefox Klar Android 4 / 1 3 / 0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Kaio Duarte
  • Felipe Koji
  • Selim Şumlu

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder to get notifications each time we add a new recording.

Useful links:

Mozilla L10NIntroducing Pretranslation in Pontoon

In the coming months we’ll begin rolling out the Pretranslation feature in Pontoon.

Pretranslation is the process of using translation memory and machine translation systems to translate content before it is edited by translators. It is intended to speed up the translation process and ease the work of translators.

How it works

Pretranslation will be off by default and only enabled for a selected list of locales within selected projects by the L10n Program Managers.

When enabled, each string without any translations will be automatically pretranslated using the 100% match from translation memory or (should that not be available) using the machine translation engine. If there are multiple 100% matches in TM, the one which has been used the most times will be used.

Pretranslations will be assigned a new translation status – Pretranslated. That will allow translators to distinguish them from community submitted translations and suggestions, and make them easier to review and postedit.

The important bit is that pretranslations will be stored to version control systems immediately, which means the postediting step will take place after translations might have already shipped in the product.

Machine translation engines

We have evaluated several different options and came to the conclusion that the following scenario works best for our use case:

  • If your locale is supported by the Google AutoML Translation, we’ll use that service and train the engine using the Mozilla translation data sets, which will result in better translation quality than what’s currently suggested in the Machinery tab by generic MT engines.
  • For other locales we’ll use Google Translation API or similar engines.

Get involved

We are in the early stages of the feature rollout. We’re looking for teams that would like to test the feature within a pilot project. If you’re a locale manager and want to opt in, please send an email to pontoon-team@mozilla.com and we’ll add your locale to our list of early adopters.

SeaMonkeySeaMonkey 2.53.13 is out!

Hi everyone!

The SeaMonkey Project is pleased to announce the immediate release of SeaMonkey 2.53.13!

Updates will be enabled once this is posted.

Please check out [1] and [2].

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.13/

[2] – https://www.seamonkey-project.org/releases/2.53.13

Open Policy & AdvocacyMozilla statement as EU Parliament adopts new pro-competition rulebook for Big Tech

The EU Parliament today adopted the ‘Digital Markets Act’, new rules that will empower consumers to easily choose and enjoy independent web browsers. We welcome the new pro-competition approach and call for a comprehensive designation of the full range of Big Tech gatekeepers to ensure the legislation contributes to a fairer and more competitive European digital market.

The DMA will grant consumers more freedom to choose what software they wish to use, while creating the conditions for independent developers to compete fairly with Big Tech. In particular, we see immense benefit in empowering consumer choice through prohibitions that tackle manipulative software designs and introduce safeguards that allow consumers to simply and easily try new apps, delete unwanted apps, switch between apps, change app defaults, and to expect similar functionality and use.

The browser is a unique piece of software that represents people online. It is a tool that allows individuals to exercise choices about what they do and what happens to them as they navigate across the web. But like other independent web browsers, Mozilla has been harmed by unfair business practices that take away consumer choice, for instance when gatekeepers make it effectively impossible for consumers to enable and keep Firefox as their default web browser, or when they erect artificial operating system barriers that mean we can’t even offer consumers the choice of a privacy- and security-first browser. “Ensuring that gatekeepers allocate enough resources to fully and transparently comply with the DMA is the first step towards a more open web and increased competition, allowing users to easily install and keep Firefox as their preferred browser on both desktop and mobile,” added Owen Bennett, Mozilla Senior Policy Manager.

To make the DMA’s promise a reality, swift and effective enforcement of the new law is required. It’s essential that all gatekeepers – and their core platform services – are identified and designated as soon as possible. This is the first test for the European Commission’s enforcement approach, and regulators must set down a marker that Europe means business.

We look forward to contributing to remedies that can ensure independent browsers can compete and offer consumers meaningful choices.

The post Mozilla statement as EU Parliament adopts new pro-competition rulebook for Big Tech appeared first on Open Policy & Advocacy.

Blog of DataThis Week in Glean: Reviewing a Book – Rust in Action

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This blog post is going to be a bit different from what you may have read from me in the past. Usually I write about things I am working on or things I have encountered while working that I find interesting. This is still a post about something I find interesting, but instead of being directly related to the things I’ve been working on, it’s about something that Mozilla actively encourages me to do: furthering my knowledge and professional development. In this instance, I chose to read a book on Rust to try to increase my knowledge and fill in any gaps in my understanding of a programming language I use almost every day and have come to really enjoy working with. The book in question is Rust in Action by Tim McNamara.

The first thing I would like to call out is the great organization of the material in the book. The first few chapters go over a lot of basic material that was perfect for a beginner to Rust, but which I felt that I was already reasonably familiar with. So, I was able to skim over a few chapters and land at just the right point where I felt comfortable with my knowledge and start reading up on the things I was ready to learn more about. This happened to be right around the end of Part 1 with the bits about lifetimes and borrowing. I have been using Rust long enough to understand a lot of how this works, but learning some of the general strategies to help deal with ownership issues was helpful, especially thinking about wrapping data in types designed to aid in movement issues.

Being an especially data-oriented person, I took extra enjoyment out of the “Data in depth” chapter. Having done quite a bit of embedded software development in the past, the explanation of endianness brought back memories of some not-so-portable code that would break because of this. The rest of the chapter was filled with bit-by-bit explanations of types and how they represent data in memory, op-codes, and other intricacies of thinking about data at a very low level. Definitely one of my favorite chapters in the book!

I found that the other chapters that stood out to me did so because they explained a topic such as Networking (chapter 8) or Time (chapter 9) in the context of Rust. These were things I had worked with in the past with other languages and that recognition allowed the approaches that were being explained in Rust to really sink in.  Examples of patterns like sending raw TCP data and formatting of timestamps were interrupted with concepts like traits and ways to improve error handling at just the right times to explain Rust approaches to them.

I would definitely recommend this book to anyone who is interested in learning more about Rust, especially how that applies to “systems” programming. I feel like I have come away from this with a better understanding of a few things that were still a little fuzzy and I learned quite a bit about threading in Rust that I really hadn’t been exposed to before. We continue to rely on Rust to write our services in a cross-platform way and there is a lot of information and techniques in this book that I can directly apply to the real-world problems I face working on data and experimentation tools here at Mozilla.

All in all, a really enjoyable book with fun examples to work through. Thanks to the author, and to Mozilla for encouraging me to continually improve myself.

hacks.mozilla.orgNeural Machine Translation Engine for Firefox Translations add-on

Firefox Translations is a website translation add-on that provides automated translation of web content. Unlike cloud-based alternatives, translation is done locally, on the client side in the user’s computer, so that the text being translated never leaves the user’s machine, making it entirely private. The add-on is available for installation on Firefox Nightly, Beta, and Release.

The add-on builds on the work of project Bergamot, a collaboration between Mozilla, the University of Edinburgh, Charles University in Prague, the University of Sheffield, and the University of Tartu, with funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme.

The add-on is powered internally by Bergamot Translator, a Neural Machine Translation engine that performs the actual task of translation. This engine can also be utilized in different contexts, like in this demo website, which lets the user perform free-form translations without using the cloud.

In this article, we will discuss the technical challenges around the development of the translation engine and how we solved them to build a usable Firefox Translations add-on.

Challenges

The translation engine is built on top of the marian framework, a free Neural Machine Translation framework written in pure C++. The framework has a standalone native application that provides simple translation functionality. However, two novel features needed to be introduced to the add-on that were not present in the existing native application.

The first was translation of forms, allowing users to input text in their own language and dynamically translate it on-the-fly into the page’s language. The second was estimating the quality of the translations so that low-confidence translations could be automatically highlighted in the page, in order to notify the user of potential errors. This led to the development of the translation engine which is a high level C++ API layer on top of marian.

The resulting translation engine is compiled directly to native code. There were three potential architectural solutions to integrating it into the add-on:

  1. Native integration to Firefox: Bundling the entire translation engine native code into Firefox.
  2. Native messaging: Deploying the translation engine as a native application on the user’s computer and allowing the add-on to exchange messages with it.
  3. Wasm: Porting the translation engine to Wasm and integrating it to the add-on using the developed JS bindings.

We evaluated these solutions on the following factors which we believed were crucial to develop a production ready translation add-on:

  1. Security: The approach of native integration inside the Firefox Web Browser was discarded following Mozilla’s internal security review of the engine code base, which highlighted issues over the number of third-party dependencies of the marian framework.
  2. Scalability and Maintainability: Native messaging would have posed challenges around distributing the code for the project because of the overhead of providing builds compatible with all platforms supported by Firefox. This would have been impractical to scale and maintain.
  3. Platform Support: The underlying marian framework of the translation engine supports translation only on x86/x86_64 processors. Given the increasing availability of ARM-based consumer devices, the native messaging approach would have kept this private, local translation technology from reaching a wider audience.
  4. Performance: Wasm runs slower than native code. However, it has the potential to execute at near-native speed by taking advantage of common hardware capabilities available on a wide range of platforms.

Wasm’s design as a portable compilation target for programming languages means we can develop and distribute a single binary that runs on all platforms. Additionally, Wasm is memory-safe and runs in a sandboxed execution environment, making it secure when parsing and processing Web content. All these advantages, coupled with its potential to execute at near-native speed, gave us the motivation to prototype this architectural solution and evaluate whether it meets the performance requirements of the translation add-on.

Prototyping: Porting to Wasm

We chose the Emscripten toolchain for compiling the translation engine to Wasm. The engine didn’t compile to Wasm out of the box, and we made a few changes to successfully compile it and perform translation using the generated Wasm binary.

Prototyping to integration

Problems

After having a working translation Wasm binary, we identified a few key problems that needed to be solved to convert the prototype to a usable product.

Scalability

Packaging of all the files for each supported language pair in the Wasm binary meant it was impractical to scale for new language pairs. All the files of each language pair (translating from one language to another and vice versa) in compressed form amount to ~40MB of disk space. As an example, supporting translation of 6 language pairs made the size of the binary ~250 MB.

Demand-based language support

The packaging of files for each supported language pair in the Wasm binary meant that users would be forced to download all supported language pairs even if they intended to use only a few of them. This is highly inefficient compared to downloading files for language pairs on demand.

Performance

We benchmarked the translation engine on three main metrics which we believed were critical from a usability perspective.

  1. Startup time: The time it takes for the engine to be ready for translation. The engine loads models, vocabularies, and optionally shortlist file contents during this step.
  2. Translation speed: The time taken by the engine to translate a given text after a successful startup, measured in the number of words translated per second (wps); a rough sketch of this measurement follows the list.
  3. Wasm binary size: The disk space of the generated Wasm binary.
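
As a concrete illustration of the second metric, here is a rough sketch of how words per second could be computed. It is written in Rust purely for illustration, assumes some translate function exists, and is not the project’s actual benchmark harness.

use std::time::Instant;

// Count whitespace-separated words and divide by the time a single
// translation call takes.
fn words_per_second(translate: impl Fn(&str) -> String, input: &str) -> f64 {
    let words = input.split_whitespace().count() as f64;
    let start = Instant::now();
    let _output = translate(input);
    words / start.elapsed().as_secs_f64()
}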

The size of the generated Wasm binary, owing to the packaging, became dependent on the number of language pairs supported. The translation engine took an unusually long time (~8 seconds) to start up and was extremely slow at translation, making it unusable.

As an example, translation from English to German language using corresponding trained models gave only 95 wps on a MacBook Pro (15-inch, 2017), MacOS version 11.6.2, 3.1 GHz Quad-Core Intel Core i7 processor, 16 GB 2133 MHz RAM.

Solution

Scalability, demand-based language support and binary size

As the packaging of the files affected the usability of the translation engine on multiple fronts, we decided to solve that problem first. We introduced a new API in the translation engine to pass the required files as byte buffers from outside, instead of packing them at compile time into Emscripten’s virtual file system.

This allowed the translation engine to scale for new languages without increasing the size of the Wasm binary and enabled the add-on to dynamically download files of only those language pairs that the users were interested in. The final size of the Wasm binary (~6.5 MB) was well within the limits of the corresponding metric.
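
Conceptually, the new API looks something like the sketch below. It is written in Rust purely for illustration (the real engine is C++ exposed to JavaScript through Emscripten bindings) and every name in it is invented.

// The add-on downloads only the language pairs the user asked for, then hands
// the raw bytes to the engine instead of relying on files packed into the
// Wasm binary at build time.
struct TranslationEngine;

impl TranslationEngine {
    fn new(model: &[u8], vocabulary: &[u8], shortlist: Option<&[u8]>) -> TranslationEngine {
        // ...initialize the underlying engine from the in-memory buffers...
        let _ = (model, vocabulary, shortlist);
        TranslationEngine
    }

    fn translate(&self, text: &str) -> String {
        // ...run the loaded model; returning the input is just a placeholder...
        text.to_string()
    }
}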

Startup time optimization

The new API that we developed to solve the packaging problem, coupled with a few other optimizations in the marian framework, solved the long startup time problem. The engine’s startup time dropped substantially (to ~1 second), well within the acceptable limits for this performance criterion.

Translation speed optimization

Profiling the translation step in the browser indicated that the General Matrix Multiply (GEMM) instruction for 8-bit integer operands was the most computationally intensive operation, and that the exception handling code had a high overhead on translation speed. We focused our efforts on optimizing both of them.

  1. Optimizing exception handling code: We replaced try/catch with an if/else-based implementation in a function that was frequently called during the translation step, which resulted in a ~20% boost in translation speed.
  2. Optimizing GEMM operation: Deeper investigation of the profiling results revealed that the absence of a GEMM instruction in the Wasm standard was the reason it performed so poorly on Wasm.
    1. Experimental GEMM instructions: Purely for performance evaluation of GEMM instruction without getting it standardized in Wasm, we landed two experimental instructions in Firefox Nightly and Release for x86/x86_64 architecture. These instructions improved the translation speed by ~310% and the translation of webpages seemed fast enough for the feature to be usable on these architectures. This feature was protected behind a flag and was exposed only to privileged extensions in Firefox Release owing to its experimental nature. We still wanted to figure out a standard based solution before this could be released as production software but it allowed us to continue developing the extension while we worked with the Firefox WASM team on a better long-term solution.
    2. Non-standard long term solution: In the absence of a concrete timeline for the implementation of a GEMM instruction in the Wasm standard, we replaced the experimental GEMM instructions with a Firefox-specific, non-standard long term solution which provided the same or better translation speeds than the experimental GEMM instructions. Apart from privileged extensions, this solution enabled translation functionality for non-privileged extensions as well as regular content at the same translation speeds, and it enabled translation on ARM64-based platforms, albeit at low speeds. None of this was possible with the experimental GEMM instructions.
    3. Native GEMM intrinsics: In an effort to improve translation speeds further, we landed a native GEMM implementation in Firefox Nightly, protected behind a flag and exposed as intrinsics. The translation engine directly calls these intrinsics during the translation step whenever it is running in Firefox Nightly on x86/x86_64 architecture based systems. This work increased the translation speeds by 25% and 43% for the SSSE3 and AVX2 SIMD extensions respectively, compared to the experimental instructions that we had landed earlier.
  3. Emscripten toolchain upgrade: The most recent effort of updating the Emscripten toolchain to the latest version increased the translation speeds for all platforms by ~15% on Firefox and reduced the size of the Wasm binary further by ~25% (final size ~4.94 MB).

Eventually, we achieved the translation speeds of ~870 wps for translation from English to German language using corresponding trained models on Firefox Release on a MacBook Pro (15-inch, 2017), MacOS version 11.6.2, 3.1 GHz Quad-Core Intel Core i7 processor, 16 GB 2133 MHz RAM.

Future

The translation engine is optimized to run at high translation speeds only on x86/x86_64 processors, and we have ideas for improving the situation on ARM. A standardized Wasm GEMM instruction could achieve similar speeds on ARM, providing benefits to the emerging class of ARM-based consumer laptops and mobile devices. We also know that the native marian engine performs even better with multithreading, but we had to disable multithreaded code in this version of the translation engine. Once SharedArrayBuffer support is broadly enabled, we believe we could re-enable multithreading and achieve even faster translation speeds.

Acknowledgement

I would like to thank Bergamot consortium partners, Mozilla’s Wasm team and my teammates Andre Natal, Evgeny Pavlov for their contributions in developing a mature translation engine. I am thankful to Lonnen along with Mozilla’s Add-on team, Localization team, Q&A team and Mozilla community who supported us and contributed to the development of the Firefox Translations add-on.

This project has received funding from the 🇪🇺 European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303.

The post Neural Machine Translation Engine for Firefox Translations add-on appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgThe JavaScript Specification has a New License

Ecma International recently approved the 2022 standard of ECMAScript. There is something new in this edition that hasn’t been part of prior editions, but this isn’t a new programming feature.

In March of this year, Ecma International accepted a proposal led by Mozilla for a new alternative license. On June 22nd, the first requests to adopt this license were granted to TC39 and applied to the following documents: ECMA-262 (ECMAScript, the official name for JavaScript) and ECMA-402 (the Internationalization API for ECMAScript).

The ECMAScript specification is developed at Ecma International, while other web technologies like HTML and CSS are being developed at W3C. These institutions have different default license agreements, which creates two problems. First, having different licenses increases the overhead of legal review for participants. This can create a speed bump for contributing across different specifications. Second, the default ECMA license contains some restrictions against creating derivative works, in contrast to W3C. These provisions haven’t been a problem in practice, but they nevertheless don’t reflect how we think Open Source should work, especially for something as foundational as JavaScript. Mozilla wants to make it easy for everyone to participate in evolving the Web, so we took the initiative of introducing an alternative license for Ecma International specifications.

What is the alternative license?

The full alternative license text may be found on the Ecma License FAQ. Ecma now provides two licenses, which can be adopted depending on the needs of a given technical committee. The default Ecma International license provides a definitive document and location for work on a given standard, with the intention of preventing forking. The license has provisions that allow inlining a given standard into source text, as well as reproduction in part or full.

The new alternative license seeks to align with the work of the W3C, and the text is largely based on the W3C’s Document and Software License. This license is more permissive regarding derivative works of a standard. This provides a legal framework and an important guarantee that the development of internet infrastructure can continue independent of any organization. By applying the alternative license to a standard as significant as ECMAScript, Ecma International has demonstrated its stewardship of a fundamental building block of the web. In addition, this presents a potential new home for standardization projects with similar licensing requirements.

Standards and Open Source

Standardization arises from the need of multiple implementers to align on a common design. Standardization improves collaboration across the industry, and reduces replicated solutions to the same problem. It also provides a way to gather feedback from users or potential users. Both Standards and Open Source produce technical solutions through collaboration. One notable distinction between standardization and an Open Source project is that the latter often focuses on developing solutions within a single implementation.

Open source has led the way with permissive licensing of projects. Over the years, different licenses such as the BSD, Creative Commons, GNU GPL & co, MIT, and MPL have sought to allow open collaboration with different focuses and goals. Standardizing bodies are gradually adopting more of the techniques of Open Source. In 2015, W3C adopted its Document and Software License, and in doing so moved many of the specifications responsible for the Web such as CSS and HTML. Under this new license, W3C ensured that the ability to build on past work would exist regardless of organizational changes.

Mozilla’s Role

As part of our work to ensure a free and open web, we worked together with Ecma International and many partners to write a license inspired by the W3C Document and Software License. Our goal was to align JavaScript’s status with other specifications of the Web. In addition, with this new license available to all TCs at Ecma International, other organizations will be able to approach standardization with the same perspective.

Changes like this come from the work of many different participants, and we thank everyone at TC39 who helped with this effort. In addition, I’d also like to thank my colleagues at Mozilla for their excellent work: Zibi Braniecki and Peter Saint-Andre, who supported me in writing the document drafts and in the Ecma International discussions; and Daniel Nazer, Eric Rescorla, Bobby Holley and Tantek Çelik for their advice and guidance on this project.

The post The JavaScript Specification has a New License appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgFuzzing rust-minidump for Embarrassment and Crashes – Part 2

This is part 2 of a series of articles on rust-minidump. For part 1, see here.

So to recap, we rewrote breakpad’s minidump processor in Rust, wrote a ton of tests, and deployed to production without any issues. We killed it, perfect job.

And we still got massively dunked on by the fuzzer. Just absolutely destroyed.

I was starting to pivot off of rust-minidump work because I needed a bit of a palate cleanser before tackling round 2 (handling native debuginfo, filling in features for other groups who were interested in rust-minidump, adding extra analyses that we’d always wanted but were too much work to do in Breakpad, etc etc etc).

I was still getting some PRs from people filling in the corners they needed, but nothing that needed too much attention, and then @5225225 smashed through the windows and released a bunch of exploding fuzzy rabbits into my office.

I had no idea who they were or why they were there. When I asked they just lowered one of their seven pairs of sunglasses and said “Because I can. Now hold this bunny”. I did as I was told and held the bunny. It was a good bun. Dare I say, it was a true bnnuy: it was libfuzzer. (Huh? You thought it was gonna be AFL? Weird.)

As it turns out, several folks had built out some really nice infrastructure for quickly setting up a decent fuzzer for some Rust code: cargo-fuzz. They even wrote a little book that walks you through the process.

Apparently those folks had done such a good job that 5225225 had decided it would be a really great hobby to just pick up a random rust project and implement fuzzing for it. And then to fuzz it. And file issues. And PRs that fix those issues. And then implement even more fuzzing for it.

Please help my office is drowning in rabbits and I haven’t seen my wife in weeks.

As far as I can tell, the process seems to genuinely be pretty easy! I think their first fuzzer for rust-minidump was basically just:

  • check out the project
  • run cargo fuzz init (which autogenerates a bunch of config files)
  • write a file with this:
#![no_main]

use libfuzzer_sys::fuzz_target;
use minidump::*;

fuzz_target!(|data: &[u8]| {
    // Parse a minidump like a normal user of the library
    if let Ok(dump) = minidump::Minidump::read(data) {
        // Ask the library to get+parse several streams like a normal user.

        let _ = dump.get_stream::<MinidumpAssertion>();
        let _ = dump.get_stream::<MinidumpBreakpadInfo>();
        let _ = dump.get_stream::<MinidumpCrashpadInfo>();
        let _ = dump.get_stream::<MinidumpException>();
        let _ = dump.get_stream::<MinidumpLinuxCpuInfo>();
        let _ = dump.get_stream::<MinidumpLinuxEnviron>();
        let _ = dump.get_stream::<MinidumpLinuxLsbRelease>();
        let _ = dump.get_stream::<MinidumpLinuxMaps>();
        let _ = dump.get_stream::<MinidumpLinuxProcStatus>();
        let _ = dump.get_stream::<MinidumpMacCrashInfo>();
        let _ = dump.get_stream::<MinidumpMemoryInfoList>();
        let _ = dump.get_stream::<MinidumpMemoryList>();
        let _ = dump.get_stream::<MinidumpMiscInfo>();
        let _ = dump.get_stream::<MinidumpModuleList>();
        let _ = dump.get_stream::<MinidumpSystemInfo>();
        let _ = dump.get_stream::<MinidumpThreadNames>();
        let _ = dump.get_stream::<MinidumpThreadList>();
        let _ = dump.get_stream::<MinidumpUnloadedModuleList>();
    }
});

And that’s… it? And all you have to do is type cargo fuzz run and it downloads, builds, and spins up an instance of libfuzzer and finds bugs in your project overnight?

Surely that won’t find anything interesting. Oh it did? It was largely all bugs in code I wrote? Nice.

cargo fuzz is clearly awesome but let’s not downplay the amount of bafflingly incredible work that 5225225 did here! Fuzzers, sanitizers, and other code analysis tools have a very bad reputation for drive-by contributions.

I think we’ve all heard stories of someone running a shiny new tool on some big project they know nothing about, mass filing a bunch of issues that just say “this tool says your code has a problem, fix it” and then disappearing into the mist and claiming victory.

This is not a pleasant experience for someone trying to maintain a project. You’re dumping a lot on my plate if I don’t know the tool, have trouble running the tool, don’t know exactly how you ran it, etc.

It’s also very easy to come up with a huge pile of issues with very little sense of how significant they are.

Some things are only vaguely dubious, while others are horribly terrifying exploits. We only have so much time to work on stuff, you’ve gotta help us out!

And in this regard 5225225’s contributions were just, bloody beautiful.

Like, shockingly fantastic.

They wrote really clear and detailed issues. When I skimmed those issues and misunderstood them, they quickly clarified and got me on the same page. And then they submitted a fix for the issue before I even considered working on the fix. And quickly responded to review comments. I didn’t even bother asking them to squash their commits because damnit they earned those 3 commits in the tree to fix one overflow.

Then they submitted a PR to merge the fuzzer. They helped me understand how to use it and debug issues. Then they started asking questions about the project and started writing more fuzzers for other parts of it. And now there’s like 5 fuzzers and a bunch of fixed issues!

I don’t care how good cargo fuzz is, that’s a lot of friggin’ really good work! Like I am going to cry!! This was so helpful??? 😭

That said, I will take a little credit for this going so smoothly: both Rust itself and rust-minidump are written in a way that’s very friendly to fuzzing. Specifically, rust-minidump is riddled with assertions for “hmm this seems messed up and shouldn’t happen but maybe?” and Rust turns integer overflows into panics (crashes) in debug builds (and index-out-of-bounds is always a panic).
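
If you want to see that difference for yourself, here’s a tiny standalone example (not from rust-minidump) you can drop into a cargo project; it assumes the default debug and release profiles.

use std::hint::black_box;

fn main() {
    // black_box keeps the compiler from rejecting the overflow at compile
    // time; think of it as "a value read out of a corrupt minidump".
    let last_bp: u32 = black_box(u32::MAX);

    // checked_add reports the problem on every build profile.
    assert_eq!(last_bp.checked_add(8), None);

    // A plain `+` panics here in a debug build ("attempt to add with
    // overflow") and silently wraps to 7 in a default release build.
    let caller_sp = last_bp + 8;
    println!("caller_sp = {caller_sp}");
}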

Having lots of assertions everywhere makes it a lot easier to detect situations where things go wrong. And when you do detect that situation, the crash will often point pretty close to where things went wrong.

As someone who has worked on detecting bugs in Firefox with sanitizer and fuzzing folks, let me tell you what really sucks to try to do anything with: “Hey so on my machine this enormous complicated machine-generated input caused Firefox to crash somewhere this one time. No, I can’t reproduce it. You won’t be able to reproduce it either. Anyway, try to fix it?”

That’s not me throwing shade on anyone here. I am all of the people in that conversation. The struggle of productively fuzzing Firefox is all too real, and I do not have a good track record of fixing those kinds of bugs. 

By comparison I am absolutely thriving under “Yeah you can deterministically trip this assertion with this tiny input you can just check in as a unit test”.

And what did we screw up? Some legit stuff! It’s Rust code, so I am fairly confident none of the issues were security concerns, but they were definitely quality of implementation issues, and could have been used to at very least denial-of-service the minidump processor.

Now let’s dig into the issues they found!

#428: Corrupt stacks caused infinite loops until OOM on ARM64

Issue

As noted in the background, stackwalking is a giant heuristic mess and you can find yourself going backwards or stuck in an infinite loop. To keep this under control, stackwalkers generally require forward progress.

Specifically, they require the stack pointer to move down the stack. If the stack pointer ever goes backwards or stays the same, we just call it quits and end the stackwalk there.

However, you can’t be so strict on ARM because leaf functions may not change the stack size at all. Normally this would be impossible because every function call at least has to push the return address to the stack, but ARM has the link register which is basically an extra buffer for the return address.

The existence of the link register in conjunction with an ABI that makes the callee responsible for saving and restoring it means leaf functions can have 0-sized stack frames!

To handle this, an ARM stackwalker must allow for there to be no forward progress for the first frame of a stackwalk, and then become more strict. Unfortunately I hand-waved that second part and ended up allowing infinite loops with no forward progress:

// If the new stack pointer is at a lower address than the old,
// then that's clearly incorrect. Treat this as end-of-stack to
// enforce progress and avoid infinite loops.
//
// NOTE: this check allows for equality because arm leaf functions
// may not actually touch the stack (thanks to the link register
// allowing you to "push" the return address to a register).
if frame.context.get_stack_pointer() < self.get_register_always("sp") as u64 {
    trace!("unwind: stack pointer went backwards, assuming unwind complete");
    return None;
}

So if the ARM64 stackwalker ever gets stuck in an infinite loop on one frame, it will just build up an infinite backtrace until it’s killed by an OOM. This is very nasty because it’s a potentially very slow denial-of-service that eats up all the memory on the machine!

This issue was actually originally discovered and fixed in #300 without a fuzzer, but when I fixed it for ARM (32-bit) I completely forgot to do the same for ARM64. Thankfully the fuzzer was evil enough to discover this infinite looping situation on its own, and the fix was just “copy-paste the logic from the 32-bit impl”.

Because this issue was actually encountered in the wild, we know this was a serious concern! Good job, fuzzer!

(This issue specifically affected minidump-processor and minidump-stackwalk)

#407: MinidumpLinuxMaps address-based queries didn’t work at all

Issue

MinidumpLinuxMaps is an interface for querying the dumped contents of Linux’s /proc/self/maps file. This provides metadata on the permissions and allocation state for mapped ranges of memory in the crashing process.

There are two usecases for this: just getting a full dump of all the process state, and specifically querying the memory properties for a specific address (“hey is this address executable?”). The dump usecase is handled by just shoving everything in a Vec. The address usecase requires us to create a RangeMap over the entries.

Unfortunately, a comparison was flipped in the code that created the keys to the RangeMap, which resulted in every correct memory range being discarded AND invalid memory ranges being accepted. The fuzzer was able to catch this because the invalid ranges tripped an assertion when they got fed into the RangeMap (hurray for redundant checks!).

// OOPS: the comparison is flipped. A valid mapping has base_address <
// final_address, so this rejects every valid range (and lets invalid
// ranges through to the RangeMap).
if self.base_address < self.final_address {
    return None;
}
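
For comparison, the intended check just flips the comparison. This is a sketch of the shape of the fix, not the exact code from the PR:

// A mapping is only usable if it covers at least one byte; empty or inverted
// ranges get skipped instead of being fed to the RangeMap.
if self.base_address >= self.final_address {
    return None;
}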

Although tests were written for MinidumpLinuxMaps, they didn’t include any invalid ranges, and just used the dump interface, so the fact that the RangeMap was empty went unnoticed!

This probably would have been quickly found as soon as anyone tried to actually use this API in practice, but it’s nice that we caught it beforehand! Hooray for fuzzers!

(This issue specifically affected the minidump crate which technically could affect minidump-processor and minidump-stackwalk. Although they didn’t yet actually do address queries, they may have crashed when fed invalid ranges.)

#381: OOM from reserving memory based on untrusted list length.

Issue

Minidumps have lots of lists which we end up collecting up in a Vec or some other collection. It’s quite natural and more efficient to start this process with something like Vec::with_capacity(list_length). Usually this is fine, but if the minidump is corrupt (or malicious), then this length could be impossibly large and cause us to immediately OOM.

We were broadly aware that this was a problem, and had discussed the issue in #326, but then everyone left for the holidays. #381 was a nice kick in the pants to actually fix it, and gave us a free simple test case to check in.

Although the naive solution would be to fix this by just removing the reserves, we opted for a solution that guarded against obviously-incorrect array lengths. This allowed us to keep the performance win of reserving memory while also making rust-minidump fast-fail instead of vaguely trying to do something and hallucinating a mess.

Specifically, @Swatinem introduced a function for checking that the amount of memory left in the section we’re parsing is large enough to even hold the claimed amount of items (based on their known serialized size). This should mean the minidump crate can only be induced to reserve O(n) memory, where n is the size of the minidump itself.
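
The shape of that check is roughly the following (a sketch of the idea, not the actual helper from the fix):

// Only trust a claimed item count if the bytes left in the stream could
// actually hold that many serialized items.
fn safe_capacity(claimed_items: usize, item_size: usize, bytes_remaining: usize) -> Option<usize> {
    let needed = claimed_items.checked_mul(item_size)?;
    if needed <= bytes_remaining {
        Some(claimed_items) // safe to pass to Vec::with_capacity
    } else {
        None // corrupt or malicious length: fast-fail instead of OOMing
    }
}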

For some scale:

  • A minidump for Firefox’s main process with about 100 threads is about 3MB.
  • A minidump for a stackoverflow from infinite recursion (8MB stack, 9000 calls) is about 8MB.
  • A breakpad symbol file for Firefox’s main module can be about 200MB.

If you’re symbolicating, Minidumps probably won’t be your memory bottleneck. 😹

(This issue specifically affected the minidump crate and therefore also minidump-processor and minidump-stackwalk.)

The Many Integer Overflows and My Greatest Defeat

The rest of the issues found were relatively benign integer overflows. I claim they’re benign because rust-minidump should already be working under the assumption that all the values it reads out of the minidump could be corrupt garbage. This means its code is riddled with “is this nonsense” checks and those usually very quickly catch an overflow (or at worst print a nonsense value for some pointer).

We still fixed them all, because that’s shaky as heck logic and we want to be robust. But yeah none of these were even denial-of-service issues, as far as I know.

To demonstrate this, let’s discuss the most evil and embarrassing overflow which was definitely my fault and I am still mad about it but in a like “how the heck” kind of way!?

The overflow is back in our old friend the stackwalker. Specifically in the code that attempts to unwind using frame pointers. Even more specifically, when offsetting the supposed frame-pointer to get the location of the supposed return address:

let caller_ip = stack_memory.get_memory_at_address(last_bp + POINTER_WIDTH)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

If the frame pointer (last_bp) was ~u64::MAX, the offset on the first line would overflow and we would instead try to load ~null. All of our loads are explicitly fallible (we assume everything is corrupt garbage!), and nothing is ever mapped to the null page in normal applications, so this load would reliably fail as if we had guarded the overflow. Hooray!

…but the overflow would panic in debug builds because that’s how debug builds work in Rust!

This was actually found, reported, and fixed without a fuzzer in #251. All it took was a simple guard:

(All the casts are because this specific code is used in the x86 impl and the x64 impl.)

if last_bp as u64 >= u64::MAX - POINTER_WIDTH as u64 * 2 {
    // Although this code generally works fine if the pointer math overflows,
    // debug builds will still panic, and this guard protects against it without
    // drowning the rest of the code in checked_add.
    return None;
}

let caller_ip = stack_memory.get_memory_at_address(last_bp as u64 + POINTER_WIDTH as u64)?;
let caller_bp = stack_memory.get_memory_at_address(last_bp as u64)?;
let caller_sp = last_bp + POINTER_WIDTH * 2;

And then it was found, reported, and fixed again with a fuzzer in #422.

Wait what?

Unlike the infinite loop bug, I did remember to add guards to all the unwinders for this problem… but I did the overflow check in 64-bit even for the 32-bit platforms.

slaps forehead

This made the bug report especially confusing at first because the overflow was like 3 lines away from a guard for that exact overflow. As it turns out, the mistake wasn’t actually as obvious as it sounds! To understand what went wrong, let’s talk a bit more about pointer width in minidumps.

A single instance of rust-minidump has to be able to handle crash reports from any platform, even ones it isn’t natively running on. This means it needs to be able to handle both 32-bit and 64-bit platforms in one binary. To avoid the misery of copy-pasting everything or making everything generic over pointer size, rust-minidump prefers to work with 64-bit values wherever possible, even for 32-bit platforms.

This isn’t just us being lazy: the minidump format itself does this! Regardless of the platform, a minidump will refer to ranges of memory with a MINIDUMP_MEMORY_DESCRIPTOR whose base address is a 64-bit value, even on 32-bit platforms!

typedef struct _MINIDUMP_MEMORY_DESCRIPTOR {
  ULONG64                      StartOfMemoryRange;
  MINIDUMP_LOCATION_DESCRIPTOR Memory;
} MINIDUMP_MEMORY_DESCRIPTOR, *PMINIDUMP_MEMORY_DESCRIPTOR;

So quite naturally rust-minidump’s interface for querying saved regions of memory just operates on 64-bit (u64) addresses unconditionally, and 32-bit-specific code casts its u32 address to a u64 before querying memory.

That means the code with the overflow guard was manipulating those values as u64s on x86! The problem is that after all the memory loads we would then go back to “native” sizes and compute caller_sp = last_bp + POINTER_WIDTH * 2. This would overflow a u32 and crash in debug builds. 😿
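
One way to keep that guard honest on 32-bit targets is to do the final pointer math in the native register width with checked arithmetic. This is a sketch of the idea, not the exact code that landed in #422:

// On x86 the frame pointer is a u32, so the guard has to be a u32 guard too.
fn caller_sp_x86(last_bp: u32) -> Option<u32> {
    const POINTER_WIDTH: u32 = 4;
    // None means "this frame is corrupt, stop frame-pointer unwinding here"
    // instead of wrapping around and crashing debug builds.
    last_bp.checked_add(POINTER_WIDTH * 2)
}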

But here’s the really messed up part: getting to that point meant we were successfully loading memory up to that address. The first line where we compute caller_ip reads it! So this overflow means… we were… loading memory… from an address that was beyond u32::MAX…!?

Yes!!!!!!!!

The fuzzer had found an absolutely brilliantly evil input.

It abused the fact that MINIDUMP_MEMORY_DESCRIPTOR technically lets 32-bit minidumps define memory ranges beyond u32::MAX even though they could never actually access that memory! It could then have the u64-based memory accesses succeed but still have the “native” 32-bit operation overflow!

This is so messed up that I didn’t even comprehend that it had done this until I wrote my own test and realized that it wasn’t actually failing because I foolishly had limited the range of valid memory to the mere 4GB a normal x86 process is restricted to.

And I mean that quite literally: this is exactly the issue that creates Parallel Universes in Super Mario 64.

But hey my code was probably just bad. I know google loves sanitizers and fuzzers, so I bet google breakpad found this overflow ages ago and fixed it:

uint32_t last_esp = last_frame->context.esp;
uint32_t last_ebp = last_frame->context.ebp;
uint32_t caller_eip, caller_esp, caller_ebp;

if (memory_->GetMemoryAtAddress(last_ebp + 4, &caller_eip) &&
    memory_->GetMemoryAtAddress(last_ebp, &caller_ebp)) {
    caller_esp = last_ebp + 8;
    trust = StackFrame::FRAME_TRUST_FP;
} else {
    ...

Ah. Hmm. They don’t guard for any kind of overflow for those uint32_t’s (or the uint64_t’s in the x64 impl).

Well ok GetMemoryAtAddress does actual bounds checks so the load from ~null will generally fail like it does in rust-minidump. But what about the Parallel Universe overflow that lets GetMemoryAtAddress succeed?

Ah well surely breakpad is more principled with integer width than I was–

virtual bool GetMemoryAtAddress(uint64_t address, uint8_t*  value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint16_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint32_t* value) const = 0;
virtual bool GetMemoryAtAddress(uint64_t address, uint64_t* value) const = 0;

Whelp congrats to 5225225 for finding an overflow that’s portable between two implementations in two completely different languages by exploiting the very nature of the file format itself!

In case you’re wondering what the implications of this overflow are: it’s still basically benign. Both rust-minidump and google-breakpad will successfully complete the frame pointer analysis and yield a frame with a ~null stack pointer.

Then the outer layer of the stackwalker which runs all the different passes in sequence will see something succeeded but that the frame pointer went backwards. At this point it will discard the stack frame and terminate the stackwalk normally and just calmly output whatever the backtrace was up to that point. Totally normal and reasonable operation.

I expect this is why no one would notice this in breakpad even if you run fuzzers and sanitizers on it: nothing in the code actually does anything wrong. Unsigned integers are defined to wrap, the program behaves reasonably, everything is kinda fine. We only noticed this in rust-minidump because all integer overflows panic in Rust debug builds.

However this “benign” behaviour is slightly different from properly guarding the overflow. Both implementations will normally try to move on to stack scanning when the frame pointer analysis fails, but in this case they give up immediately. It’s important that the frame pointer analysis properly identifies failures so that this cascading can occur. Failing to do so is definitely a bug!

However in this case the stack is partially in a parallel universe, so getting any kind of useful backtrace out of it is… dubious to say the least.

So I totally stand by “this is totally benign and not actually a problem” but also “this is sketchy and we should have the bounds check so we can be confident in this code’s robustness and correctness”.

Minidumps are all corner cases — they literally get generated when a program encounters an unexpected corner case! It’s so tempting to constantly shrug off situations as “well no reasonable program would ever do this, so we can ignore it”… but YOU CAN’T.

You would not have a minidump at your doorstep if the program had behaved reasonably! The fact that you are trying to inspect a minidump means something messed up happened, and you need to just deal with it!

That’s why we put so much energy into testing this thing, it’s a nightmare!

I am extremely paranoid about this stuff, but that paranoia is based on the horrors I have seen. There are always more corner cases.

There are ALWAYS more corner cases. ALWAYS.

 

The post Fuzzing rust-minidump for Embarrassment and Crashes – Part 2 appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgHacks Decoded: Bikes and Boomboxes with Samuel Aboagye

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work.

Meet Samuel Aboagye!

Samuel Aboagye is a genius. Aboagye is 17 years old. In those 17 years, he’s crafted more inventions than you have, probably. Among them: a solar-powered bike and a Bluetooth speaker, both using recycled materials. We caught up with Ghanaian inventor Samuel Aboagye over video chat in hopes that he’d talk with us about his creations, and ultimately how he’s way cooler than any of us were at 17.

 

Samuel, you’ve put together lots of inventions like an electric bike and Bluetooth speaker and even a fan. What made you want to make them?

For the speaker, I thought of how I could minimize the rate at which yellow plastic containers pollute the environment. I tried to make good use of it after it served its purpose. So, with the little knowledge I acquired in my science lessons, instead of the empty container just lying down and polluting the environment, I tried to create something useful with it.

After the Bluetooth speaker was successful, I realized there was more in me I could show to the universe. More importantly, we live in a very poorly ventilated room and we couldn’t afford an electric fan, so the room was unbearably hot. As such, this situation triggered and motivated me to manufacture a fan to solve this family problem.

With the bike, I thought it would be wise to make life easier for the physically challenged because I was always sad to see them go through all these challenges just to live their daily lives. Electric motors are very expensive and not common in my country, so I decided to do something to help.

Since solar energy is almost always readily available in my part of the world and able to renew itself, I thought that if I am able to make a bike with it, it would help the physically challenged to move from one destination to another without stress or thinking of how to purchase a battery or fuel.  

So how did you go about making them? Did you run into any trouble?

I went around my community gathering used items and old gadgets like radio sets and other electronics and then removed parts that could help in my work. With the electrical energy training given to me by my science teacher, who discovered me back in JHS1, I was able to apply this knowledge, combined with my God-given talent.

Whenever I need some sort of technical guidance, I call on my teacher Sir David. He has also been my financial help for all my projects.  Financing projects has always been my biggest struggle and most times I have to wait on him to raise funds for me to continue.

The tricycle: Was it much harder to make than a bike?

​​Yes, it was a little bit harder to make the tricycle than the bike. It’s time-consuming and also cost more than a bike. It needs extra technical and critical thinking too. 

You made the bike and speaker out of recycled materials. This answer is probably obvious but I’ve gotta ask: why recycled materials?  Is environment-friendly tech important to you?

I used recycled materials because they were readily available, comparatively cheap, and easy to get. With all my inventions I make sure they are all environmentally friendly so as not to pose any danger, now or in the future, to the beings on Earth. But also, I want the world to be a safe and healthy place to be.

 

The post Hacks Decoded: Bikes and Boomboxes with Samuel Aboagye appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.13 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.13 Beta 1.

Please check out [1] and/or [2].   Updates will be available shortly.

:ewong

 

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.13/

[2] – https://www.seamonkey-project.org/releases/2.53.13b1

Firefox UXHow we created inclusive writing guidelines for Firefox

Prioritizing inclusion work as a small team, and what we learned in the process.

Photo by Eduardo Casajús Gorostiaga on Unsplash.

Our UX content design team recently relaunched our product content guidelines. In addition to building out more robust guidance on voice, tone, and mechanics, we also introduced new sections, including inclusive writing.

At the time, our content design team had just two people. We struggled to prioritize inclusion work while juggling project demands and string requests. If you’re in a similar cozy boat, here’s how we were able to get our inclusive writing guidelines done.

1. Set a deadline, even if it’s arbitrary

It’s hard to prioritize work that doesn’t have a deadline or isn’t a request from your product manager. You keep telling yourself you’ll eventually get to it… and you don’t.

So I set a deadline. That deadline applied to our entire new content guidelines, but this section had its own due date to track against. To hold myself accountable, I scheduled weekly check-ins to review drafts.

Spoiler: I didn’t hit my deadline. But, I made significant progress. When the deadline came and went, the draft was nearly done. This made it easier to bring the guidelines across the finish line.

2. Gather inspiration

Developing inclusive writing guidance from scratch felt overwhelming. Where would I even start? Fortunately, a lot of great work has already been done by other product content teams. I started by gathering inspiration.

There are dozens of inclusive writing resources, but not all focus exclusively on product content. The ones that did served as good models to follow.

I looked for common themes as well as how other organizations tackled content specific to their products. As a content designer, I also paid close attention to how to make writing guidelines digestible and easy to understand.

Based on my audit, I developed a rough outline:

  • Clarify what we mean by ‘inclusive language’
  • Include best practices for how to consider your own biases and write inclusively
  • Provide specific writing guidance for your own product
  • Develop a list of terms to avoid and why they’re problematic. Suggest alternate terms.

3. Align on your intended audience

Inclusivity has many facets, particularly when it comes to language. Inclusive writing could apply to job descriptions, internal communications, or marketing content. To start, our focus would be writing product content only.

Our audience was mainly Firefox content designers, but occasionally product designers, product managers, and engineers might reference these guidelines as well.

Having a clear audience was helpful when our accessibility team raised questions about visual and interaction design. We debated including color and contrast guidance. Ultimately, we decided to keep scope limited to writing for the product. At a later date, we could collaborate with product designers to draft more holistic accessibility guidance for the larger UX team.

4. Keep the stakes low for your first draft

This was our first attempt at capturing how to write inclusively for Firefox. I was worried about getting it wrong, but didn’t want that fear to stop me from even getting started.

So I let go of the expectation I’d write perfect, ready-to-ship guidance on the first try. I simply tried to get a “good start” on paper. Then I brought my draft to our internal weekly content team check-in. This felt like a safe space to bring unfinished work.

The thoughtful conversations and considerations my colleagues raised helped me move the work forward. Through multiple feedback loops, we worked collaboratively to tweak, edit, and revise.

5. Gather input from subject matter experts

I then sought feedback from our Diversity & Inclusion and Accessibility teams. Before asking them to weigh in, I wrote a simple half-page brief to clarify scope, deadlines, and the type of feedback we needed.

Our cross-functional peers helped identify confusing language and suggested further additions. With their input, I made significant revisions that made the guidelines even stronger.

6. Socialize, socialize, socialize

The work doesn’t stop once you hit publish. Documentation has a tendency to collect dust on the shelf unless you make sure people know it exists. Our particular strategy includes:

  • Include on our internal wiki, with plans to publish it to our new design system later this year
  • Seek placement in our internal company-wide newsletter
  • Promote in internal Slack channels
  • Look for opportunities to reference the guidelines in internal conversations and company meetings

7. Treat the guidelines as a living document

We take a continuous learning approach to inclusive work. I expect our inclusive writing guidance to evolve.

To encourage others to participate in this work, we will continue to be open to contributions and suggestions outside of our team, making updates as we go. We also intend to review the guidelines as a content team each quarter to see what changes we may need to make.

Wrapping up

My biggest learning from the process of creating new inclusive writing guidelines is this: Your impact can start small. It can even start as a messy Google Doc. But the more you bring other people in to contribute, the stronger the end result in a continuing journey of learning and evolution.

Acknowledgements

Thank you to the following individuals who contributed to the inclusive writing guidelines: Meridel Walkington, Leslie Gray, Asa Dotzler, Jainaba Seckan, Steven Potter, Kelsey Carson, Eitan Isaacson, Morgan Rae Reschenberg, Emily Wachowiak


How we created inclusive writing guidelines for Firefox was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

hacks.mozilla.orgEverything Is Broken: Shipping rust-minidump at Mozilla – Part 1


For the last year I’ve been leading the development of rust-minidump, a pure-Rust replacement for the minidump-processing half of google-breakpad.

Well actually in some sense I finished that work, because Mozilla already deployed it as the crash processing backend for Firefox 6 months ago, it runs in half the time, and seems to be more reliable. (And you know, isn’t a terrifying ball of C++ that parses and evaluates arbitrary input from the internet. We did our best to isolate Breakpad, but still… yikes.)

This is a pretty fantastic result, but there’s always more work to do because Minidumps are an inky abyss that grows deeper the further you delve… wait no I’m getting ahead of myself. First the light, then the abyss. Yes. Light first.

What I can say is that we have a very solid implementation of the core functionality of minidump parsing+analysis for the biggest platforms (x86, x64, ARM, ARM64; Windows, MacOS, Linux, Android). But if you want to read minidumps generated on a PlayStation 3 or process a Full Memory dump, you won’t be served quite as well.

We’ve put a lot of effort into documenting and testing this thing, so I’m pretty confident in it!

Unfortunately! Confidence! Is! Worth! Nothing!

Which is why this is the story of how we did our best to make this nightmare as robust as we could and still got 360 dunked on from space by the sudden and incredible fuzzing efforts of @5225225.

This article is broken into two parts:

  1. what minidumps are, and how we made rust-minidump
  2. how we got absolutely owned by simple fuzzing

You are reading part 1, wherein we build up our hubris.

Background: What’s A Minidump, and Why Write rust-minidump?

Your program crashes. You want to know why your program crashed, but it happened on a user’s machine on the other side of the world. A full coredump (all memory allocated by the program) is enormous — we can’t have users sending us 4GB files! Ok let’s just collect up the most important regions of memory like the stacks and where the program crashed. Oh and I guess if we’re taking the time, let’s stuff some metadata about the system and process in there too.

Congratulations you have invented Minidumps. Now you can turn a 100-thread coredump that would otherwise be 4GB into a nice little 2MB file that you can send over the internet and do postmortem analysis on.

Or more specifically, Microsoft did. So long ago that their docs don’t even discuss platform support. MiniDumpWriteDump’s supported versions are simply “Windows”. Microsoft Research has presumably developed a time machine to guarantee this.

Then Google came along (circa 2006-2007) and said “wouldn’t it be nice if we could make minidumps on any platform”? Thankfully Microsoft had actually built the format pretty extensibly, so it wasn’t too bad to extend the format for Linux, MacOS, BSD, Solaris, and so on. Those extensions became google-breakpad (or just Breakpad) which included a ton of different tools for generating, parsing, and analyzing their extended minidump format (and native Microsoft ones).

Mozilla helped out with this a lot because apparently, our crash reporting infrastructure (“Talkback”) was miserable circa 2007, and this seemed like a nice improvement. Needless to say, we’re pretty invested in breakpad’s minidumps at this point.

Fast forward to the present day and, in a hilarious twist of fate, products like VSCode mean that Microsoft now supports applications that run on Linux and MacOS. That means Microsoft runs Breakpad in production and has to handle non-Microsoft minidumps somewhere in its crash reporting infra, so someone else’s extension of its own format is somehow Microsoft’s problem now!

Meanwhile, Google has kind-of moved on to Crashpad. I say kind-of because there’s still a lot of Breakpad in there, but they’re more interested in building out tooling on top of it than improving Breakpad itself. Having made a few changes to Breakpad: honestly fair, I don’t want to work on it either. Still, this was a bit of a problem for us, because it meant the project became increasingly under-staffed.

By the time I started working on crash reporting, Mozilla had basically given up on upstreaming fixes/improvements to Breakpad, and was just using its own patched fork. But even without the need for upstreaming patches, every change to Breakpad filled us with dread: many proposed improvements to our crash reporting infrastructure stalled out at “time to implement this in Breakpad”.

Why is working on Breakpad so miserable, you ask?

Parsing and analyzing minidumps is basically an exercise in writing a fractal parser of platform-specific formats nested in formats nested in formats. For many operating systems. For many hardware architectures. And all the inputs you’re parsing and analyzing are terrible and buggy so you have to write a really permissive parser and crawl forward however you can.

Some specific MSVC toolchain that was part of Windows XP had a bug in its debuginfo format? Too bad, symbolicate that stack frame anyway!

The program crashed because it horribly corrupted its own stack? Too bad, produce a backtrace anyway!

The minidump writer itself completely freaked out and wrote a bunch of garbage to one stream? Too bad, produce whatever output you can anyway!

Hey, you know who has a lot of experience dealing with really complicated permissive parsers written in C++? Mozilla! That’s like the core functionality of a web browser.

Do you know Mozilla’s secret solution to writing really complicated permissive parsers in C++?

We stopped doing it.

We developed Rust and ported our nastiest parsers to it.

We’ve done it a lot, and when we do we’re always like “wow this is so much more reliable and easy to maintain and it’s even faster now”. Rust is a really good language for writing parsers. C++ really isn’t.

So we Rewrote It In Rust (or as the kids call it, “Oxidized It”). Breakpad is big, so we haven’t actually covered all of its features. We’ve specifically written and deployed:

  • dump_syms which processes native build artifacts into symbol files.
  • rust-minidump, which is a collection of crates that parse and analyze minidumps. Or more specifically, we deployed minidump-stackwalk, which is the high-level CLI interface to all of rust-minidump (a minimal usage sketch follows this list).
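To give a feel for what using these crates looks like, here is a minimal sketch built on the minidump crate’s high-level stream API. The path is made up, and exact stream and field names can drift between versions, so treat this as an approximation and check the crate docs rather than trusting me:

// A minimal sketch (not from this article) of reading a minidump with the
// `minidump` crate. The path is an example; point it at any real .dmp file.
use minidump::{Minidump, MinidumpException, MinidumpSystemInfo, MinidumpThreadList};

fn main() -> Result<(), minidump::Error> {
    let mut dump = Minidump::read_path("testdata/test.dmp")?;

    // Each piece of a minidump is a "stream" you request by type.
    let system_info = dump.get_stream::<MinidumpSystemInfo>()?;
    let exception = dump.get_stream::<MinidumpException>()?;
    let threads = dump.get_stream::<MinidumpThreadList>()?;

    println!("os: {:?}, cpu: {:?}", system_info.os, system_info.cpu);
    println!(
        "crash reason: {:?}",
        exception.get_crash_reason(system_info.os, system_info.cpu)
    );
    println!("thread count: {}", threads.threads.len());
    Ok(())
}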

Notably missing from this picture is minidump writing, or what google-breakpad calls a client (because it runs on the client’s machine). We are working on a rust-based minidump writer, but it’s not something we can recommend using quite yet (although it has sped up a lot thanks to help from Embark Studios).

This is arguably the messiest and hardest work because it has a horrible job: use a bunch of native system APIs to gather up a bunch of OS-specific and Hardware-specific information about the crash AND do it for a program that just crashed, on a machine that caused the program to crash.

We have a long road ahead but every time we get to the other side of one of these projects it’s wonderful.

 

Background: Stackwalking and Calling Conventions

One of rust-minidump’s (minidump-stackwalk’s) most important jobs is to take the state for a thread (general purpose registers and stack memory) and create a backtrace for that thread (unwind/stackwalk). This is a surprisingly complicated and messy job, made only more complicated by the fact that we are trying to analyze the memory of a process that got messed up enough to crash.

This means our stackwalkers are inherently working with dubious data, and all of our stackwalking techniques are based on heuristics that can go wrong and we can very easily find ourselves in situations where the stackwalk goes backwards or sideways or infinite and we just have to try to deal with it!

It’s also pretty common to see a stackwalker start hallucinating, which is my term for “the stackwalker found something that looked plausible enough and went on a wacky adventure through the stack and made up a whole pile of useless garbage frames”. Hallucination is most common near the bottom of the stack where it’s also least offensive. This is because each frame you walk is another chance for something to go wrong, but also increasingly uninteresting because you’re rarely interested in confirming that a thread started in The Same Function All Threads Start In.

All of these problems would basically go away if everyone agreed to properly preserve their cpu’s PERFECTLY GOOD DEDICATED FRAME POINTER REGISTER. Just kidding, turning on frame pointers doesn’t really work either because Microsoft invented chaos frame pointers that can’t be used for unwinding! I assume this happened because they accidentally stepped on the wrong butterfly while they were traveling back in time to invent minidumps. (I’m sure it was a decision that made more sense 20 years ago, but it has not aged well.)
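For a concrete picture of what the frame pointer strategy boils down to when it does work, here is a deliberately toy sketch (not rust-minidump’s actual unwinder; read_u64 is a hypothetical accessor over the thread’s captured stack memory, and the layout assumed is the x86-64 style where the saved frame pointer sits at [rbp] and the return address at [rbp + 8]):

// Toy frame-pointer unwinding: follow the chain of saved (frame pointer,
// return address) pairs up the stack until something looks wrong.
struct Frame {
    instruction: u64,
    frame_pointer: u64,
}

fn walk_frame_pointers(
    pc: u64,
    mut fp: u64,
    read_u64: impl Fn(u64) -> Option<u64>,
) -> Vec<Frame> {
    let mut frames = vec![Frame { instruction: pc, frame_pointer: fp }];
    while fp != 0 {
        let (saved_fp, return_addr) = match (read_u64(fp), read_u64(fp + 8)) {
            (Some(s), Some(r)) => (s, r),
            _ => break, // ran off the captured stack memory
        };
        if return_addr == 0 || saved_fp <= fp {
            break; // stack end, or a walk going "backwards": bail out
        }
        // The caller's instruction is the return address minus one, so it
        // points inside the call instruction rather than after it.
        frames.push(Frame { instruction: return_addr - 1, frame_pointer: saved_fp });
        fp = saved_fp;
    }
    frames
}

Even this toy version has to guard against walking backwards or off the end of the captured memory, and those guards failing (or firing too late) is exactly where the hallucinations come from.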

If you would like to learn more about the different techniques for unwinding, I wrote about them over here in my article on Apple’s Compact Unwind Info. I’ve also attempted to document breakpad’s STACK WIN and STACK CFI unwind info formats here, which are more similar to the  DWARF and PE32 unwind tables (which are basically tiny programming languages).

If you would like to learn more about ABIs in general, I wrote an entire article about them here. The end of that article also includes an introduction to how calling conventions work. Understanding calling conventions is key to implementing unwinders.

 

How Hard Did You Really Test Things?

Hopefully you now have a bit of a glimpse into why analyzing minidumps is an enormous headache. And of course you know how the story ends: that fuzzer kicks our butts! But of course to really savor our defeat, you have to see how hard we tried to do a good job! It’s time to build up our hubris and pat ourselves on the back.

So how much work actually went into making rust-minidump robust before the fuzzer went to work on it?

Quite a bit!

I’ll never argue all the work we did was perfect, but we definitely did some good work here, both for synthetic inputs and real-world ones. Probably the biggest “flaw” in our methodology was that we were only focused on getting Firefox’s use case to work. Firefox runs on a lot of platforms and sees a lot of messed up stuff, but it’s still a fairly coherent product that only uses so many features of minidumps.

This is one of the nice benefits of our recent work with Sentry, which is basically a Crash Reporting As A Service company. They are way more liable to stress test all kinds of weird corners of the format that Firefox doesn’t, and they have definitely found (and fixed!) some places where something is wrong or missing! (And they recently deployed it into production too! 🎉)

But hey don’t take my word for it, check out all the different testing we did:

Synthetic Minidumps for Unit Tests

rust-minidump includes a synthetic minidump generator which lets you come up with a high-level description of the contents of a minidump, and then produces an actual minidump binary that we can feed into the full parser:

// Let’s make a synth minidump with this particular Crashpad Info…

let module = ModuleCrashpadInfo::new(42, Endian::Little)
    .add_list_annotation("annotation")
    .add_simple_annotation("simple", "module")
    .add_annotation_object("string", AnnotationValue::String("value".to_owned()))
    .add_annotation_object("invalid", AnnotationValue::Invalid)
    .add_annotation_object("custom", AnnotationValue::Custom(0x8001, vec![42]));

let crashpad_info = CrashpadInfo::new(Endian::Little)
    .add_module(module)
    .add_simple_annotation("simple", "info");

let dump = SynthMinidump::with_endian(Endian::Little).add_crashpad_info(crashpad_info);

// convert the synth minidump to binary and read it like a normal minidump
let dump = read_synth_dump(dump).unwrap();

// Now check that the minidump reports the values we expect…

minidump-synth intentionally avoids sharing layout code with the actual implementation so that incorrect changes to layouts won’t “accidentally” pass tests.

A brief aside for some history: this testing framework was started by the original lead on this project, Ted Mielczarek. He started rust-minidump as a side project to learn Rust when 1.0 was released and just never had the time to finish it. Back then he was working at Mozilla and also a major contributor to Breakpad, which is why rust-minidump has a lot of similar design choices and terminology.

This case is no exception: our minidump-synth is a shameless copy of the synth-minidump utility in breakpad’s code, which was originally written by our other coworker Jim Blandy. Jim is one of the only people in the world that I will actually admit writes really good tests and docs, so I am totally happy to blatantly copy his work here.

Since this was all a learning experiment, Ted was understandably less rigorous about testing than usual. This meant a lot of minidump-synth was unimplemented when I came along, which also meant lots of minidump features were completely untested. (He built an absolutely great skeleton, just hadn’t had the time to fill it all in!)

We spent a lot of time filling in more of minidump-synth’s implementation so we could write more tests and catch more issues, but this is definitely the weakest part of our tests. Some stuff was implemented before I got here, so I don’t even know what tests are missing!

This is a good argument for some code coverage checks, but it would probably come back with “wow you should write a lot more tests” and we would all look at it and go “wow we sure should” and then we would probably never get around to it, because there are many things we should do.

On the other hand, Sentry has been very useful in this regard because they already have a mature suite of tests full of weird corner cases they’ve built up over time, so they can easily identify things that really matter, know what the fix should roughly be, and can contribute pre-existing test cases!

Integration and Snapshot Tests

We tried our best to shore up coverage issues in our unit tests by adding more holistic tests. There are a few checked-in Real Minidumps that we have integration tests for, to make sure we handle Real Inputs properly.

We even wrote a bunch of integration tests for the CLI application that snapshot its output to confirm that we never accidentally change the results.

Part of the motivation for this is to ensure we don’t break the JSON output, which we also wrote a very detailed schema document for and are trying to keep stable so people can actually rely on it while the actual implementation details are still in flux.

Yes, minidump-stackwalk is supposed to be stable and reasonable to use in production!

For our snapshot tests we use insta, which I think is fantastic and more people should use. All you need to do is assert_snapshot! any output you want to keep track of and it will magically take care of the storing, loading, and diffing.

Here’s one of the snapshot tests where we invoke the CLI interface and snapshot stdout:

#[test]
fn test_evil_json() {
    // For a while this didn't parse right
    let bin = env!("CARGO_BIN_EXE_minidump-stackwalk");
    let output = Command::new(bin)
        .arg("--json")
        .arg("--pretty")
        .arg("--raw-json")
        .arg("../testdata/evil.json")
        .arg("../testdata/test.dmp")
        .arg("../testdata/symbols/")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .output()
        .unwrap();

    let stdout = String::from_utf8(output.stdout).unwrap();
    let stderr = String::from_utf8(output.stderr).unwrap();

    assert!(output.status.success());
    insta::assert_snapshot!("json-pretty-evil-symbols", stdout);
    assert_eq!(stderr, "");
}

Stackwalker Unit Testing

The stackwalker is easily the most complicated and subtle part of the new implementation, because every platform can have slight quirks and you need to implement several different unwinding strategies and carefully tune everything to work well in practice.

The scariest part of this was the call frame information (CFI) unwinders, because they are basically little virtual machines we need to parse and execute at runtime. Thankfully breakpad had long ago smoothed over this issue by defining a simplified and unified CFI format, STACK CFI (well, nearly unified, x86 Windows was still a special case as STACK WIN). So even if DWARF CFI has a ton of complex features, we mostly need to implement a Reverse Polish Notation Calculator except it can read registers and load memory from addresses it computes (and for STACK WIN it has access to named variables it can declare and mutate).
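To make “Reverse Polish Notation Calculator that can read registers and load memory” concrete, here is a toy postfix evaluator in that spirit. It is not rust-minidump’s actual evaluator (which also handles STACK WIN’s named variables, assignments, and much better error reporting); it is just an illustration of the shape of the problem:

// Toy postfix evaluator for rules along the lines of ".cfa: $esp 8 +".
// Registers can be read by name, and "^" dereferences stack memory.
use std::collections::HashMap;

fn eval_postfix(
    expr: &str,
    registers: &HashMap<&str, u64>,
    deref: impl Fn(u64) -> Option<u64>,
) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    for token in expr.split_whitespace() {
        match token {
            "+" | "-" => {
                let rhs = stack.pop()?;
                let lhs = stack.pop()?;
                stack.push(if token == "+" {
                    lhs.wrapping_add(rhs)
                } else {
                    lhs.wrapping_sub(rhs)
                });
            }
            "^" => {
                // Dereference: pop an address, push the word stored there.
                let addr = stack.pop()?;
                stack.push(deref(addr)?);
            }
            _ => {
                if let Some(val) = registers.get(token) {
                    stack.push(*val);
                } else {
                    stack.push(token.parse().ok()?); // numeric literal
                }
            }
        }
    }
    // A well-formed rule leaves exactly one value on the stack.
    if stack.len() == 1 { stack.pop() } else { None }
}

Evaluating an expression like "$esp 8 +" with $esp = 0x1000 in the register map yields 0x1008, which is the flavor of computation these unwind rules describe.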

Unfortunately, Breakpad’s description for this format is pretty underspecified so I had to basically pick some semantics I thought made sense and go with that. This made me extremely paranoid about the implementation. (And yes I will be more first-person for this part, because this part was genuinely where I personally spent most of my time and did a lot of stuff from scratch. All the blame belongs to me here!)

The STACK WIN / STACK CFI parser+evaluator is 1700 lines. 500 of those lines are detailed documentation and discussion of the format, and 700 of those lines are an enormous pile of ~80 test cases where I tried to come up with every corner case I could think of.

I even checked in two tests I knew were failing just to be honest that there were a couple cases to fix! One of them is a corner case involving dividing by a negative number that almost certainly just doesn’t matter. The other is a buggy input that old x86 Microsoft toolchains actually produce and parsers need to deal with. The latter was fixed before the fuzzing started.

And 5225225 still found an integer overflow in the STACK WIN preprocessing step! (Not actually that surprising, it’s a hacky mess that tries to cover up for how messed up x86 Windows unwinding tables were.)

(The code isn’t terribly interesting here, it’s just a ton of assertions that a given input string produces a given output/error.)

Of course, I wasn’t satisfied with just coming up with my own semantics and testing them: I also ported most of breakpad’s own stackwalker tests to rust-minidump! This definitely found a bunch of bugs I had, but also taught me some weird quirks in Breakpad’s stackwalkers that I’m not sure I actually agree with. But in this case I was flying so blind that even being bug-compatible with Breakpad was some kind of relief.

Those tests also included several tests for the non-CFI paths, which were similarly wobbly and quirky. I still really hate a lot of the weird platform-specific rules they have for stack scanning, but I’m forced to work on the assumption that they might be load-bearing. (I definitely had several cases where I disabled a breakpad test because it was “obviously nonsense” and then hit it in the wild while testing. I quickly learned to accept that Nonsense Happens And Cannot Be Ignored.)

One major thing I didn’t replicate was some of the really hairy hacks for STACK WIN. Like there are several places where they introduce extra stack-scanning to try to deal with the fact that stack frames can have mysterious extra alignment that the Windows unwinding tables just don’t tell you about? I guess?

There’s almost certainly some exotic situations that rust-minidump does worse on because of this, but it probably also means we do better in some random other situations too. I never got the two to perfectly agree, but at some point the divergences were all in weird enough situations, and as far as I was concerned both stackwalkers were producing equally bad results in a bad situation. Absent any reason to prefer one over the other, divergence seemed acceptable to keep the implementation cleaner.

Here’s a simplified version of one of the ported breakpad tests, if you’re curious (thankfully minidump-synth is based off of the same binary data mocking framework these tests use):

// walk_stack is async, so the test needs an async runtime (tokio shown here).
#[tokio::test]
async fn test_x86_frame_pointer() {
    let mut f = TestFixture::new();
    let frame0_ebp = Label::new();
    let frame1_ebp = Label::new();
    let mut stack = Section::new();

    // Setup the stack and registers so frame pointers will work
    stack.start().set_const(0x80000000);
    stack = stack
        .append_repeated(12, 0) // frame 0: space
        .mark(&frame0_ebp)      // frame 0 %ebp points here
        .D32(&frame1_ebp)       // frame 0: saved %ebp
        .D32(0x40008679)        // frame 0: return address
        .append_repeated(8, 0)  // frame 1: space
        .mark(&frame1_ebp)      // frame 1 %ebp points here
        .D32(0)                 // frame 1: saved %ebp (stack end)
        .D32(0);                // frame 1: return address (stack end)
    f.raw.eip = 0x4000c7a5;
    f.raw.esp = stack.start().value().unwrap() as u32;
    f.raw.ebp = frame0_ebp.value().unwrap() as u32;

    // Check the stackwalker's output:
    let s = f.walk_stack(stack).await;
    assert_eq!(s.frames.len(), 2);
    {
        let f0 = &s.frames[0];
        assert_eq!(f0.trust, FrameTrust::Context);
        assert_eq!(f0.context.valid, MinidumpContextValidity::All);
        assert_eq!(f0.instruction, 0x4000c7a5);
    }
    {
        let f1 = &s.frames[1];
        assert_eq!(f1.trust, FrameTrust::FramePointer);
        assert_eq!(f1.instruction, 0x40008678);
    }
}

A Dedicated Production Diffing, Simulating, and Debugging Tool

Because minidumps are so horribly fractal and corner-casey, I spent a lot of time terrified of subtle issues that would become huge disasters if we ever actually tried to deploy to production. So I also spent a bunch of time building socc-pair, which takes the id of a crash report from Mozilla’s crash reporting system and pulls down the minidump, the old breakpad-based implementation’s output, and extra metadata.

It then runs a local build of rust-minidump (minidump-stackwalk) on the minidump and does a domain-specific diff of the two outputs. The most substantial part of this is a fuzzy diff on the stackwalks that tries to better handle situations like when one implementation adds an extra frame but the two otherwise agree. It also uses the unwinding techniques each implementation reported using to try to identify whose output is more trustworthy when they totally diverge.
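To give a flavor of what “fuzzy diff” means here, this heavily simplified sketch treats two stackwalks as agreeing if they still match after skipping a bounded number of extra frames. The real socc-pair comparison is much richer (it also weighs frame trust, modules, offsets, and so on), so consider this illustrative only:

// Toy fuzzy comparison of two stackwalks, given as lists of frame names.
fn fuzzy_frames_agree(a: &[&str], b: &[&str], max_skips: usize) -> bool {
    let (mut i, mut j, mut skips) = (0usize, 0usize, 0usize);
    while i < a.len() && j < b.len() {
        if a[i] == b[j] {
            i += 1;
            j += 1;
        } else if skips < max_skips {
            // Treat the mismatch as one side having an extra frame and skip
            // a frame on whichever side has more left to walk.
            skips += 1;
            if a.len() - i > b.len() - j { i += 1 } else { j += 1 }
        } else {
            return false;
        }
    }
    // Leftover frames at the bottom of one stack also count against the budget.
    skips + (a.len() - i) + (b.len() - j) <= max_skips
}

With a budget of one skip, ["crash", "run", "main"] and ["crash", "helper", "run", "main"] agree, which is exactly the kind of near-miss a strict line-by-line diff would flag as a total mismatch.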

I also ended up adding a bunch of mocking and benchmarking functionality to it as well, as I found more and more places where I just wanted to simulate a production environment.

Oh also I added really detailed trace-logging for the stackwalker so that I could easily post-mortem debug why it made the decisions it made.

This tool found so many issues and more importantly has helped me quickly isolate their causes. I am so happy I made it. Because of it, we know we actually fixed several issues that happened with the old breakpad implementation, which is great!

Here’s a trimmed down version of the kind of report socc-pair would produce (yeah I abused diff syntax to get error highlighting. It’s a great hack, and I love it like a child):

comparing json...

: {
    crash_info: {
        address: 0x7fff1760aca0
        crashing_thread: 8
        type: EXCEPTION_BREAKPOINT
    }
    crashing_thread: {
        frames: [
            0: {
                file: wrappers.cpp:1750da2d7f9db490b9d15b3ee696e89e6aa68cb7
                frame: 0
                function: RustMozCrash(char const*, int, char const*)
                function_offset: 0x00000010
-               did not match
+               line: 17
-               line: 20
                module: xul.dll

.....

    unloaded_modules: [
        0: {
            base_addr: 0x7fff48290000
-           local val was null instead of:
            code_id: 68798D2F9000
            end_addr: 0x7fff48299000
            filename: KBDUS.DLL
        }
        1: {
            base_addr: 0x7fff56020000
            code_id: DFD6E84B14000
            end_addr: 0x7fff56034000
            filename: resourcepolicyclient.dll
        }
    ]
~   ignoring field write_combine_size: "0"
}

- Total errors: 288, warnings: 39

benchmark results (ms):
    2388, 1986, 2268, 1989, 2353, 
    average runtime: 00m:02s:196ms (2196ms)
    median runtime: 00m:02s:268ms (2268ms)
    min runtime: 00m:01s:986ms (1986ms)
    max runtime: 00m:02s:388ms (2388ms)

max memory (rss) results (bytes):
    267755520, 261152768, 272441344, 276131840, 279134208, 
    average max-memory: 258MB (271323136 bytes)
    median max-memory: 259MB (272441344 bytes)
    min max-memory: 249MB (261152768 bytes)
    max max-memory: 266MB (279134208 bytes)

Output Files: 
    * (download) Minidump: b4f58e9f-49be-4ba5-a203-8ef160211027.dmp
    * (download) Socorro Processed Crash: b4f58e9f-49be-4ba5-a203-8ef160211027.json
    * (download) Raw JSON: b4f58e9f-49be-4ba5-a203-8ef160211027.raw.json
    * Local minidump-stackwalk Output: b4f58e9f-49be-4ba5-a203-8ef160211027.local.json
    * Local minidump-stackwalk Logs: b4f58e9f-49be-4ba5-a203-8ef160211027.log.txt

Staging and Deploying to Production

Once we were confident enough in the implementation, a lot of the remaining testing was taken over by Will Kahn-Greene, who’s responsible for a lot of the server-side details of our crash-reporting infrastructure.

Will spent a bunch of time getting a bunch of machinery setup to manage the deployment and monitoring of rust-minidump. He also did a lot of the hard work of cleaning up all our server-side configuration scripts to handle any differences between the two implementations. (Although I spent a lot of time on compatibility, we both agreed this was a good opportunity to clean up old cruft and mistakes.)

Once all of this was set up, he turned it on in staging and we got our first look at how rust-minidump actually worked in ~production:

Terribly!

Our staging servers take in about 10% of the inputs that also go to our production servers, but even at that reduced scale we very quickly found several new corner cases and we were getting tons of crashes, which is mildly embarrassing for the thing that handles other people’s crashes.

Will did a great job here in monitoring and reporting the issues. Thankfully they were all fairly easy for us to fix. Eventually, everything smoothed out and things seemed to be working just as reliably as the old implementation on the production server. The only places where we were completely failing to produce any output were for horribly truncated minidumps that may as well have been empty files.

We originally did have some grand ambitions of running socc-pair on everything the staging servers processed or something to get really confident in the results. But by the time we got to that point, we were completely exhausted and feeling pretty confident in the new implementation.

Eventually Will just said “let’s turn it on in production” and I said “AAAAAAAAAAAAAAA”.

This moment was pure terror. There had always been more corner cases. There’s no way we could just be done. This will probably set all of Mozilla on fire and delete Firefox from the internet!

But Will convinced me. We wrote up some docs detailing all the subtle differences and sent them to everyone we could. Then the moment of truth finally came: Will turned it on in production, and I got to really see how well it worked in production:

*dramatic drum roll*

It worked fine.

After all that stress and anxiety, we turned it on and it was fine.

Heck, I’ll say it: it ran well.

It was faster, it crashed less, and we even knew it fixed some issues.

I was in a bit of a stupor for the rest of that week, because I kept waiting for the other shoe to drop. I kept waiting for someone to emerge from the mist and explain that I had somehow bricked Thunderbird or something. But no, it just worked.

So we left for the holidays, and I kept waiting for it to break, but it was still fine.

I am honestly still shocked about this!

But hey, as it turns out we really did put a lot of careful work into testing the implementation. At every step we found new problems but that was good, because once we got to the final step there were no more problems to surprise us.

And the fuzzer still kicked our butts afterwards.

But that’s part 2! Thanks for reading!

 

The post Everything Is Broken: Shipping rust-minidump at Mozilla – Part 1 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Add-ons BlogManifest V3 Firefox Developer Preview — how to get involved

While MV3 is still in development, many major features are already included in the Developer Preview, which provides an opportunity to expose functionality for testing and feedback. With strong developer feedback, we’re better equipped to quickly address critical bugs, provide clear developer documentation, and reorient functionality where needed.

Some features, such as a well-defined and documented lifecycle for Event Pages, are still works in progress. As we complete features, they’ll land in future versions of Firefox so you can test them and move your extensions toward MV3 compatibility. In most respects Firefox is committed to cross-browser compatibility for MV3. However, as explained in Manifest V3 in Firefox: Recap & Next Steps, in some cases Firefox will offer distinct extension functionality.

Developer Preview is not available to regular users; it requires you to change preferences in about:config. Thus you will not be able to upload MV3 extensions to addons.mozilla.org (AMO) until we have an official release available to users.

The following are key considerations about migration at this time and areas we’d greatly appreciate developer feedback.

  1. Read the MV3 migration guide. MV3 contains many changes and our migration guide covers the major necessary steps, as well as linking to documentation to help understand further details.
  2. Update your extension to be compatible with Event Pages. One major difference in Firefox is our use of Event Pages, which provide an alternative to the existing Background Pages and allow idle timeouts and page restarts. This adds resilience to the background, which is necessary on resource-constrained and mobile devices. For the most part, Event Pages are compatible with existing Background Pages, requiring only minor changes. We plan to release Event Pages for MV2 in an upcoming Firefox release, so preparation to use Event Pages can be included in MV2 add-ons soon. Many extensions may not need all the capabilities available in Event Pages, and their background scripts will be easily transferable to the Service Worker background when it becomes available in a future release. In the meantime, extensions attempting to support both Chrome and Firefox can take advantage of Event Pages in Firefox.
  3. Test your content scripts with MV3. There are multiple changes that will impact content scripts, ranging from tighter restrictions on CORS, CSP, remote code execution, and more. Not all extensions will run into issues in these cases, and some may only require minor modifications that will likely work within MV2 as well.
  4. Understand and consider your migration path for APIs that have changed or been deprecated. Deprecated APIs will require code changes to use alternate or new APIs. Examples include the new Scripting API (which will be part of MV2 in a future release) and the change from page and browser actions to the action API.
  5. Test and plan migration for permissions. Most permissions are already available as optional permissions in MV2. With MV3, we’re making host permissions optional — in many cases by default. While we do not yet have the primary UI for user control in Developer Preview, developers should understand how these changes will affect their extensions.
  6. Let us know how it’s going! Your feedback will help us make the transition from MV2 to MV3 as smooth as possible. Through Developer Preview we anticipate learning about MV3 rough edges, documentation needs, new features to be fleshed out, and bugs to be fixed. We have a host of community channels you can access to ask questions, help others, report problems, or whatever else you desire to communicate as it relates to the MV3 migration process.

Stay in touch with us on any of these forums…

 

The post Manifest V3 Firefox Developer Preview — how to get involved appeared first on Mozilla Add-ons Community Blog.

hacks.mozilla.orgTraining efficient neural network models for Firefox Translations

Machine Translation is an important tool for expanding the accessibility of web content. Usually, people use cloud providers to translate web pages. State-of-the-art Neural Machine Translation (NMT) models are large and often require specialized hardware like GPUs to run inference in real-time.

If people were able to run a compact Machine Translation (MT) model on their local machine CPU without sacrificing translation accuracy it would help to preserve privacy and reduce costs.

The Bergamot project is a collaboration between Mozilla, the University of Edinburgh, Charles University in Prague, the University of Sheffield, and the University of Tartu with funding from the European Union’s Horizon 2020 research and innovation programme. It brings MT to the local environment, providing small, high-quality, CPU-optimized NMT models. The Firefox Translations web extension builds on the results of the Bergamot project and brings local translations to Firefox.

In this article, we will discuss the components used to train our efficient NMT models. The project is open-source, so you can give it a try and train your model too!

Architecture

NMT models are trained as language pairs, translating from language A to language B. The training pipeline was designed to train translation models for a language pair end-to-end, from environment configuration to exporting the ready-to-use models. The pipeline run is completely reproducible given the same code, hardware and configuration files.

The complexity of the pipeline comes from the requirement to produce an efficient model. We use Teacher-Student distillation to compress a high-quality but resource-intensive teacher model into an efficient CPU-optimized student model that still has good translation quality. We explain this further in the Compression section.

The pipeline includes many steps: compiling components, downloading and cleaning datasets, training the teacher, student, and backward models, decoding, quantization, evaluation, etc. (more details below). The pipeline can be represented as a Directed Acyclic Graph (DAG).

 

Firefox Translations training pipeline DAG

The workflow is file-based and employs self-sufficient scripts that use data on disk as input, and write intermediate and output results back to disk.

We use the Marian Neural Machine Translation engine. It is written in C++ and designed to be fast. The engine is open-sourced and used by many universities and companies, including Microsoft.

Training a quality model

The first task of the pipeline is to train a high-quality model that will be compressed later. The main challenge at this stage is to find a good parallel corpus that contains translations of the same sentences in both source and target languages and then apply appropriate cleaning procedures.

Datasets

It turned out there are many open-source parallel datasets for machine translation available on the internet. The most interesting project that aggregates such datasets is OPUS. The Annual Conference on Machine Translation also collects and distributes some datasets for competitions, for example, WMT21 Machine Translation of News. Another great source of MT corpus is the Paracrawl project.

OPUS dataset search interface

It is possible to use any dataset on disk, but automating dataset downloading from open-source resources makes adding new language pairs easy, and whenever a dataset is expanded we can easily retrain the model to take advantage of the additional data. Make sure to check the licenses of the open-source datasets before use.

Data cleaning

Most open-source datasets are somewhat noisy. Good examples are crawled websites and translation of subtitles. Texts from websites can be poor-quality automatic translations or contain unexpected HTML, and subtitles are often free-form translations that change the meaning of the text.

It is well known in the world of Machine Learning (ML) that if we feed garbage into the model we get garbage as a result. Dataset cleaning is probably the most crucial step in the pipeline to achieving good quality.

We employ some basic cleaning techniques that work for most datasets like removing too short or too long sentences and filtering the ones with an unrealistic source to target length ratio. We also use bicleaner, a pre-trained ML classifier that attempts to indicate whether the training example in a dataset is a reversible translation. We can then remove low-scoring translation pairs that may be incorrect or otherwise add unwanted noise.
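As a concrete (if toy) illustration of those basic rules, a length and length-ratio filter could look something like the sketch below. The thresholds are invented for illustration; the real pipeline’s cleaning is configurable per dataset and also runs bicleaner:

// Toy sentence-pair filter: drop empty, too-short, too-long, and
// unrealistically skewed source/target pairs. Thresholds are made up.
fn keep_pair(src: &str, trg: &str) -> bool {
    let src_len = src.split_whitespace().count();
    let trg_len = trg.split_whitespace().count();
    if src_len == 0 || trg_len == 0 {
        return false;
    }
    // Drop sentences that are too short or too long.
    if src_len < 2 || trg_len < 2 || src_len > 150 || trg_len > 150 {
        return false;
    }
    // Drop pairs with an unrealistic source-to-target length ratio.
    let ratio = src_len as f64 / trg_len as f64;
    ratio < 4.0 && ratio > 0.25
}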

Automation is necessary when your training set is large. However, it is always recommended to look at your data manually in order to tune the cleaning thresholds and add dataset-specific fixes to get the best quality.

Data augmentation

There are more than 7000 languages spoken in the world and most of them are classified as low-resource for our purposes, meaning there is little parallel corpus data available for training. In these cases, we use a popular data augmentation strategy called back-translation.

Back-translation is a technique to increase the amount of training data available by adding synthetic translations. We get these synthetic examples by training a translation model from the target language to the source language. Then we use it to translate monolingual data from the target language into the source language, creating synthetic examples that are added to the training data for the model we actually want, from the source language to the target language.
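Schematically, the data flow looks like the sketch below, where backward_translate is a hypothetical stand-in for the backward (target-to-source) Marian model:

// Schematic back-translation: pair synthetic source sentences (produced by
// the backward model) with the real monolingual target sentences.
fn augment_with_backtranslation(
    parallel: &mut Vec<(String, String)>,        // (source, target) pairs
    monolingual_target: &[String],
    backward_translate: impl Fn(&str) -> String, // target -> source
) {
    for trg in monolingual_target {
        let synthetic_src = backward_translate(trg);
        // The synthetic text sits on the source side; the target side stays
        // human-written, which is a big part of why the trick works.
        parallel.push((synthetic_src, trg.clone()));
    }
}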

The model

Finally, when we have a clean parallel corpus we train a big transformer model to reach the best quality we can.

Once the model converges on the augmented dataset, we fine-tune it on the original parallel corpus that doesn’t include synthetic examples from back-translation to further improve quality.

Compression

The trained model can be 800Mb or more in size depending on configuration and requires significant computing power to perform translation (decoding). At this point, it’s generally executed on GPUs and not practical to run on most consumer laptops. In the next steps we will prepare a model that works efficiently on consumer CPUs.

Knowledge distillation

The main technique we use for compression is Teacher-Student Knowledge Distillation. The idea is to decode a lot of text from the source language into the target language using the heavy model we trained (Teacher) and then train a much smaller model with fewer parameters (Student) on these synthetic translations. The student is supposed to imitate the teacher’s behavior and demonstrate similar translation quality despite being significantly faster and more compact.

We also augment the parallel corpus data with monolingual data in the source language for decoding. This improves the student by providing additional training examples of the teacher’s behavior.

Ensemble

Another trick is to use not just one teacher but an ensemble of 2-4 teachers independently trained on the same parallel corpus. It can boost quality a little bit at the cost of having to train more teachers. The pipeline supports training and decoding with an ensemble of teachers.

Quantization

One more popular technique for model compression is quantization. We use 8-bit quantization which essentially means that we store weights of the neural net as int8 instead of float32. It saves space and speeds up matrix multiplication on inference.
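As a rough sketch of the basic idea (store int8 values plus a floating-point scale, and multiply back when the values are needed), here is symmetric per-tensor quantization. The engine’s real scheme is more sophisticated, but the space savings come from exactly this trade:

// Symmetric 8-bit quantization: one f32 scale per tensor, weights as int8.
fn quantize_int8(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0.0f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let quantized: Vec<i8> = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (quantized, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

A 4-byte float per weight becomes a single byte plus a shared scale, which is where most of the size reduction for the weight matrices comes from.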

Other tricks

Other features worth mentioning but beyond the scope of this already lengthy article are the specialized Neural Network architecture of the student model, half-precision decoding by the teacher model to speed it up, lexical shortlists, training of word alignments, and finetuning of the quantized student.

Yes, it’s a lot! Now you can see why we wanted to have an end-to-end pipeline.

How to learn more

This work is based on a lot of research. If you are interested in the science behind the training pipeline, check out reference publications listed in the training pipeline repository README and across the wider Bergamot project. Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task is a good academic starting article. Check this tutorial by Nikolay Bogoychev for a more practical and operational explanation of the steps.

Results

The final student model is 47 times smaller and 37 times faster than the original teacher model and has only a small quality decrease!

Benchmarks for en-pt model and Flores dataset:

Model              Size   Parameters  Decoding time (1 CPU core)  Quality (BLEU)
Teacher            798Mb  192.75M     631s                        52.5
Student quantized  17Mb   15.7M       17.9s                       50.7

We evaluate results using MT standard BLEU scores that essentially represent how similar translated and reference texts are. This method is not perfect but it has been shown that BLEU scores correlate well with human judgment of translation quality.
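For intuition, here is a stripped-down, BLEU-flavored score: clipped unigram precision times the brevity penalty. Real BLEU uses n-grams up to length 4 and a geometric mean of their precisions, so treat this purely as an illustration of the mechanics:

// Simplified BLEU-style score: clipped unigram precision * brevity penalty.
use std::collections::HashMap;

fn simple_bleu1(candidate: &str, reference: &str) -> f64 {
    let cand: Vec<&str> = candidate.split_whitespace().collect();
    let refr: Vec<&str> = reference.split_whitespace().collect();
    if cand.is_empty() || refr.is_empty() {
        return 0.0;
    }

    // Count candidate words found in the reference, clipping each word's
    // count at how often it actually occurs in the reference.
    let mut ref_counts: HashMap<&str, usize> = HashMap::new();
    for &w in &refr {
        *ref_counts.entry(w).or_insert(0) += 1;
    }
    let mut matches = 0usize;
    for &w in &cand {
        if let Some(c) = ref_counts.get_mut(w) {
            if *c > 0 {
                *c -= 1;
                matches += 1;
            }
        }
    }
    let precision = matches as f64 / cand.len() as f64;

    // Brevity penalty punishes candidates shorter than the reference.
    let bp = if cand.len() >= refr.len() {
        1.0
    } else {
        (1.0 - refr.len() as f64 / cand.len() as f64).exp()
    };
    precision * bp
}

A candidate that copies the reference word for word scores 1.0, while one that is too short or uses the wrong words gets pulled down by the precision term and the brevity penalty.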

We have a GitHub repository with all the trained models and evaluation results where we compare the accuracy of our models to popular cloud providers’ APIs. We can see that some models perform similarly to, or even outperform, the cloud providers, which is a great result taking into account our models’ efficiency, reproducibility, and open-source nature.

For example, here you can see evaluation results for the English to Portuguese model trained by Mozilla using open-source data only.

Evaluation results en-pt

Anyone can train models and contribute them to our repo. Those contributions can be used in the Firefox Translations web extension and other places (see below).

Scaling

It is of course possible to run the whole pipeline on one machine, though it may take a while. Some steps of the pipeline are CPU bound and difficult to parallelize, while other steps can be offloaded to multiple GPUs. Most of the official models in the repository were trained on machines with 8 GPUs. A few steps, like teacher decoding during knowledge distillation, can take days even on well-resourced single machines. So to speed things up, we added cluster support to be able to spread different steps of the pipeline over multiple nodes.

Workflow manager

To manage this complexity we chose Snakemake which is very popular in the bioinformatics community. It uses file-based workflows, allows specifying step dependencies in Python, supports containerization and integration with different cluster software. We considered alternative solutions that focus on job scheduling, but ultimately chose Snakemake because it was more ergonomic for one-run experimentation workflows.

Example of a Snakemake rule (dependencies between rules are inferred implicitly):

rule train_teacher:
    message: "Training teacher on all data"
    log: f"{log_dir}/train_teacher{{ens}}.log"
    conda: "envs/base.yml"
    threads: gpus_num*2
    resources: gpu=gpus_num
    input:
        rules.merge_devset.output, 
        train_src=f'{teacher_corpus}.{src}.gz',
        train_trg=f'{teacher_corpus}.{trg}.gz',
        bin=ancient(trainer), 
        vocab=vocab_path
    output: model=f'{teacher_base_dir}{{ens}}/{best_model}'
    params: 
        prefix_train=teacher_corpus, 
        prefix_test=f"{original}/devset", 
        dir=directory(f'{teacher_base_dir}{{ens}}'),
        args=get_args("training-teacher-base")
    shell: '''bash pipeline/train/train.sh \
                teacher train {src} {trg} "{params.prefix_train}" \
                "{params.prefix_test}" "{params.dir}" \
                "{input.vocab}" {params.args} >> {log} 2>&1'''

Cluster support

To parallelize workflow steps across cluster nodes we use Slurm resource manager. It is relatively simple to operate, fits well for high-performance experimentation workflows, and supports Singularity containers for easier reproducibility. Slurm is also the most popular cluster manager for High-Performance Computers (HPC) used for model training in academia, and most of the consortium partners were already using or familiar with it.

How to start training

The workflow is quite resource-intensive, so you’ll need a pretty good server machine or even a cluster. We recommend using 4-8 Nvidia 2080-equivalent or better GPUs per machine.

Clone https://github.com/mozilla/firefox-translations-training and follow the instructions in the readme for configuration.

The most important part is to find parallel datasets and properly configure settings based on your available data and hardware. You can learn more about this in the readme.

How to use the existing models

The existing models are shipped with the Firefox Translations web extension, enabling users to translate web pages in Firefox. The models are downloaded to a local machine on demand. The web extension uses these models with the bergamot-translator Marian wrapper compiled to Web Assembly.

Also, there is a playground website at https://mozilla.github.io/translate where you can input text and translate it right away, also locally but served as a static website instead of a browser extension.

If you are interested in an efficient NMT inference on the server, you can try a prototype HTTP service that uses bergamot-translator natively compiled, instead of compiled to WASM.

Or follow the build instructions in the bergamot-translator readme to directly use the C++, JavaScript WASM, or Python bindings.

Conclusion

It is fascinating how far Machine Translation research has come in recent years. Local high-quality translations are the future and it’s becoming more and more practical for companies and researchers to train such models even without access to proprietary data or large-scale computing power.

We hope that Firefox Translations will set a new standard of privacy-preserving, efficient, open-source machine translation accessible for all.

Acknowledgements

I would like to thank all the participants of the Bergamot Project for making this technology possible, my teammates Andre Natal and Abhishek Aggarwal for the incredible work they have done bringing Firefox Translations to life, Lonnen for managing the project and editing this blog post, and of course the awesome Mozilla community for helping with localization of the web extension and testing its early builds.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825303 🇪🇺

The post Training efficient neural network models for Firefox Translations appeared first on Mozilla Hacks - the Web developer blog.

SUMO BlogIntroducing Ryan Johnson

Hi folks,

Please join me to welcome Ryan Johnson to the Customer Experience team as a Staff Software Engineer. He will be working closely with Tasos to maintain and improve the Mozilla Support platform.

Here’s a short intro from Ryan:

Hello everyone! I’m Ryan Johnson, and I’m joining the SUMO engineering team as a Staff Software Engineer. This is a return to Mozilla for me, after a brief stint away, and I’m excited to work with Tasos and the rest of the Customer Experience team in reshaping SUMO to better serve the needs of Mozilla and all of you. In my prior years at Mozilla, I was fortunate to work on the MDN team, and with many of its remarkable supporters, and this time I look forward to meeting, working with, and learning from many of you.

Once again, please join me to congratulate and welcome Ryan to our team!

Open Policy & AdvocacyEnhancing trust and security on the internet – browsers are the first line of defence

Enhancing trust and security online is one of the defining challenges of our time – in the EU alone, 37% of residents do not believe they can sufficiently protect themselves from cybercrime. Individuals need assurance that their credit card numbers, social media logins, and other sensitive data are protected from cybercriminals when browsing. With that in mind, we’ve just unveiled an update to the security policies that protect people from cybercrime, demonstrating again the critical role Firefox plays in ensuring trust and security online.

Browsers like Firefox use encryption to protect individuals’ data from eavesdroppers when they navigate online (e.g. when sending credit card details to an online marketplace). But protecting data from cybercriminals when it’s on the move is only part of the risk we mitigate. Individuals also need assurance that they are sending data to the correct domain (e.g., “amazon.com”). If someone sends their private data to a cybercriminal instead of to their bank, for example, it is of little consolation that the data was encrypted while getting there.

To address this we rely on cryptographic website certificates, which allow a website to prove that it controls the domain name that the individual has navigated to. Websites obtain these certificates from certificate authorities, organisations that run checks to verify that websites are not compromised. Certificate authorities are a critical pillar of trust in this ecosystem – if they mis-issue certificates to cybercriminals or other malicious actors, the consequences for individuals can be catastrophic.

To keep Firefox users safe, we ensure that only certificate authorities that maintain high standards of security and transparency are trusted in the browser (i.e., included in our ‘root certificate store’). We also continuously monitor and review the behaviour of certificate authorities that we opt to trust to ensure that we can take prompt action to protect individuals in cases where a trusted certificate authority has been compromised.

Properly maintaining a root certificate store is a significant undertaking, not least because the cybersecurity threat landscape is constantly evolving. We aim to ensure our security standards are always one step ahead, and as part of that effort, we’ve just finalised an important policy update that will increase transparency and security in the certificate authority ecosystem. This update introduces new standards for how audits of certificate authorities should be conducted and by whom; phases out legacy encryption standards that some certificate authorities still deploy today; and requires more transparency from certificate authorities when they revoke certificates. We’ve already begun working with certificate authorities to ensure they can properly transition to the new higher security standards.

The policy update is the product of a several-month process of open dialogue and debate amongst various stakeholders in the website security space. It is a further case-in-point of our belief in the value of transparent, community-based processes across the board for levelling-up the website security ecosystem. For instance, before accepting a certificate authority in Firefox we process lots of data and perform significant due diligence, then publish our findings and hold a public discussion with the community. We also maintain a public security incident reporting process to encourage disclosure and learning from experts in the field.

Ultimately, this update process highlights once again how operating an independent root certificate store allows us to drive the website security ecosystem towards ever-higher standards, and to serve as the first line of defence for when web certificates are misused. It’s a responsibility we take seriously and we see it as critical to enhancing trust on the internet.

It’s also why we’re so concerned about draft laws under consideration in the EU (Article 45 of the ‘eIDAS regulation’) that would forbid us from applying our security standards to certain certificate authorities and block us from taking action if and when those certificate authorities mis-issue certificates. If adopted in its current form by the EU, Article 45 would be a major step back for security on the internet, because of how it would restrict browser security efforts and because of the global precedent it would set. A broad movement of digital rights organisations; consumer groups; and numerous independent cybersecurity experts (here, here, and here) has begun to raise the alarm and to encourage the EU to change course on Article 45. We are working hard to do so too.

We’re proud of our root certificate store and the role it plays in enhancing trust and security online. It’s part of our contribution to the internet – we’ll continue to invest in it with security updates like this one and work with lawmakers on ensuring legal frameworks continue to support this critical work.

 

Thumbnail photo credit:

Creative Commons Attribution-Share Alike 4.0 International license.
Attribution: Santeri Viinamäki

 

The post Enhancing trust and security on the internet – browsers are the first line of defence appeared first on Open Policy & Advocacy.

SUMO BlogWhat’s up with SUMO – May

Hi everybody,

Q2 is a busy quarter with so many exciting projects in the works. The onboarding project implementation is ongoing, the mobile support project is also going smoothly so far (we have even started to scale up to support the Apple App Store), and we managed to audit our localization process (with the help of our amazing contributors!). Let’s dive in without further ado.

Welcome note and shout-outs

  • Welcome to the Social Support program, Magno Reis and Samuel. They are both long-time contributors on the forum who are spreading their wings to Social Support.
  • Welcome to the world of the SUMO forum to Dropa, YongHan, jeyson1099, simhk, and zianshi17.
  • Welcome to the KB world to kaie, alineee, and rodrigo.bucheli.
  • Welcome to the KB localization to Agehrg4 (ru), YongHan (zh-tw), ibrahimakgedik3 (tr), gabriele.siukstaite (t), apokvietyte (lt), Anokhi (ms), erinxwmeow (ms), and dvyarajan7 (ms). Welcome to the SUMO family!
  • Thanks to the localization contributors who helped me understand their workflow and pain points in the localization process. You shared so much insightful feedback and so many things we would not have understood without your input. I can’t thank everybody enough!
  • Huge shout outs to Kaio Duarte Costa for stepping up as Social Support moderator. He’s been an amazing contributor to the program since 2020, and I believe that he’ll be a great role model for the community. Thank you and congratulations!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • I highly recommend checking out the KB call from May. We talked about many interesting topics, from the KB review queue to a group exercise on writing content for localization.
  • It’s been 2 months since we onboarded Dayana as a Community Support Advocate (read the intro blog post here), and we can’t wait to share more about our learnings and accomplishments!
  • Please read this forum thread and this bug report if you’re experiencing trouble with uploading images to Mozilla Support.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • There’s also the KB call; this one is the recording for the month of May. Find out more about the KB call on this wiki page.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Apr 2022 7,407,129 -1.26%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Mar 2022 pageviews (*) Localization progress (per Apr, 11)(**)
de 7.99% 98%
zh-CN 7.42% 100%
fr 6.27% 87%
es 6.07% 30%
pt-BR 4.94% 54%
ru 4.60% 82%
ja 3.80% 48%
It 2.36% 100%
pl 2.06% 87%
ca 1.67% 0%

* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

-TBD-

 

Top 5 forum contributors in the last 90 days: 

Social Support

Channel Total incoming message Conv interacted Resolution rate
Apr 2022 504 316 75.00%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah Koshy
  2. Christophe Villeneuve
  3. Magno Reis
  4. Md Monirul Alom
  5. Felipe Koji

Play Store Support

Channel (Apr 2022)         Total priority reviews  Priority reviews replied  Total reviews replied
Firefox for Android        1226                    234                       291
Firefox Focus for Android  109                     0                         4
Firefox Klar Android       1                       0                         0

Top 5 Play Store contributors in the past 2 months: 

  • Paul Wright
  • Tim Maks
  • Selim Şumlu
  • Bithiah Koshy

Product updates

Firefox desktop

Firefox mobile

  • TBD

Other products / Experiments

  • TBD

Useful links:

Open Policy & AdvocacyMozilla Meetups: The Building Blocks of a Trusted Internet

Join us on June 9 at 3 PM ET for a virtual conversation on how the digital policy landscape not only shapes, but is also shaped by, the way we move around online and what steps our policymakers need to take to ensure a healthy internet.

View the event here!

The post Mozilla Meetups: The Building Blocks of a Trusted Internet appeared first on Open Policy & Advocacy.

Blog of DataCrash Reporting Data Sprint

Two weeks ago the Socorro Eng/Ops team, in charge of Socorro and Tecken, had its first remote 1-day Data Sprint to onboard folks from ops and engineering.

The Sprint was divided into three main parts, according to the objectives we initially had:

  • Onboard new folks onto the team
  • Establish a rough roadmap for the next 3-6 months
  • Find a more efficient way to work together

The sprint was formatted as a conversation followed by a presentation guided by Will Kahn-Greene, who leads the efforts in maintaining and evolving the Socorro/Tecken platforms. In the end we went through the roadmap document to decide what our immediate future would look like.

 

Finding a more efficient way to work together

We wanted to track our work queue more efficiently and decided that a GitHub project would be a great candidate for the task. It is simple to set up and maintain and has different views that we can lightly customize.

That said, because our main issue tracker is Bugzilla, one slightly annoying thing we still have to do when creating an issue on our GitHub project is to place the full URL to the bug in the issue title. It would be much nicer if we could put an actual link in the title, but GitHub doesn’t support that.

Here’s the link to our work queue: https://github.com/orgs/mozilla-services/projects/16/views/1

 

Onboarding new people to Socorro/Tecken

This was a really interesting part of the day in which we went through different aspects of the crash reporting ecosystem and crash ingestion pipeline.

Story time

The story of Mozilla’s whole Crash Reporting system dates back to 2007, when the Socorro project was created. Since then, Crash Reporting has been an essential part of our products. It is present in all stages, from development to release, and is comprised of an entire ecosystem of libraries and systems maintained by different teams across Mozilla.

Socorro is one of the longer-running projects we have at Mozilla. Along with Antenna and Crash Stats, it comprises the Crash Ingestion pipeline, which is maintained by the socorro-eng team. The team is also in charge of the Symbol and Symbolication Servers, a.k.a. Tecken.

Along with that story we also learned interesting facts about Crash Reporting, such as:

    • Crash Reports are not the same as Crash Pings: both are emitted by the Crash Reporter when Firefox crashes, but Reports go to Socorro and Pings go to the telemetry pipeline.
    • Not all Crash Reports are accepted: the collector throttles crash reports according to a set of rules that can be found here.
    • Crash Reports are pretty big compared to telemetry pings: they’re around 600 KB in aggregate, but stack overflow crashes can be bigger than 25 MB.
    • Crash Reports are reprocessed regularly: whenever something involved in generating crash signatures or crash stacks is changed or fixed, we reprocess the Crash Reports to regenerate their signatures and stacks.

What’s what

There are lots of names involved in Crash Reporting. We went over what most of them mean:

Symbols: A symbol is an entry in a .sym file that maps from a memory location (a byte number) to something that’s going on in the original code. Since binaries don’t contain information about the code such as code lines, function names and stack navigation, symbols are used to enrich minidumps emitted by binaries with such info. This process is called symbolication. More on symbol files: https://chromium.googlesource.com/breakpad/breakpad/+/HEAD/docs/symbol_files.md
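To make the idea more concrete, here is a minimal TypeScript sketch (not Tecken’s or Eliot’s actual code) of what resolving an address against FUNC records from a .sym file looks like; the record and address used in the example are made up for illustration.

```ts
// Minimal sketch of symbolication: map a module-relative address to a
// function name using FUNC records from a Breakpad .sym file.
// Real .sym files also contain MODULE, FILE, PUBLIC and line records, and
// the real symbolication service handles inlines, multiple modules, caching, etc.

interface FuncRecord {
  address: number; // module-relative start address of the function
  size: number;    // size of the function in bytes
  name: string;    // (demangled) function name
}

function parseFuncRecords(symText: string): FuncRecord[] {
  const funcs: FuncRecord[] = [];
  for (const line of symText.split("\n")) {
    // FUNC [m] <address> <size> <parameter_size> <name>, addresses in hex
    if (!line.startsWith("FUNC ")) continue;
    const parts = line.split(" ");
    let i = 1;
    if (parts[i] === "m") i++; // optional "multiple" flag
    funcs.push({
      address: parseInt(parts[i], 16),
      size: parseInt(parts[i + 1], 16),
      name: parts.slice(i + 3).join(" "), // names may contain spaces
    });
  }
  return funcs.sort((a, b) => a.address - b.address);
}

function symbolicate(address: number, funcs: FuncRecord[]): string {
  const hit = funcs.find(f => address >= f.address && address < f.address + f.size);
  return hit ? hit.name : `0x${address.toString(16)} (no symbol found)`;
}

// Made-up example record and address:
const funcs = parseFuncRecords("FUNC a1b2 3c 0 nsDocShell::LoadURI(nsIURI*)\n");
console.log(symbolicate(0xa1c0, funcs)); // -> "nsDocShell::LoadURI(nsIURI*)"
```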

Crash Report: When an application process crashes, the Crash Reporter will submit a Crash Report with metadata annotations (BuildId, ProductName, Version, etc) and minidumps which contain info on the crashed processes.

Crash Signature: Generated for every Crash Report by an algorithm unique to Socorro, with the objective of grouping similar crashes.
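Socorro’s actual signature generation is a curated, rule-driven algorithm; purely to illustrate the idea of grouping crashes by a signature derived from the stack, here is a hypothetical sketch (the frame names and skip list are invented):

```ts
// Illustrative only: build a rough "signature" by skipping frames that show
// up in nearly every crash and joining the first few meaningful ones, so
// that similar crashes land in the same bucket.
const SKIP_FRAMES = [/^RtlUserThreadStart/, /^__libc/, /^mozilla::ipc::/];

function roughSignature(frames: string[], depth = 2): string {
  const meaningful = frames.filter(f => !SKIP_FRAMES.some(re => re.test(f)));
  return meaningful.slice(0, depth).join(" | ") || "(no meaningful frames)";
}

// Two reports with the same interesting top frames get the same signature:
console.log(roughSignature(["RtlUserThreadStart", "js::RunScript", "js::Call"]));
console.log(roughSignature(["js::RunScript", "js::Call", "nsGlobalWindowInner::Run"]));
// both -> "js::RunScript | js::Call"
```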

Minidump: A file created and managed by the Breakpad library. It holds info on a crashed process such as the CPU, register contents, heap, loaded modules, threads, etc.

Breakpad: A set of tools to work with minidump files. It defines the sym file format and includes components to extract information from processes as well as package, submit, and process them. More on Breakpad: https://chromium.googlesource.com/breakpad/breakpad/+/master/docs/getting_started_with_breakpad.md#the-minidump-file-format

A peek at how it works

Will also explained how things work under the hood, and we had a look at the diagrams that show what comprises Tecken and Socorro:

 

Tecken/Eliot

Tecken architecture diagram

Tecken (https://symbols.mozilla.org/) is a Django web application that uses S3 for storage and RDS for bookkeeping.

Eliot (https://symbolication.services.mozilla.com/) is a webapp that downloads sym files from Tecken for symbolication.

 

Socorro

Socorro Architecture Diagram

Socorro has a Crash Report Collector, a Processor, and a web application (https://crash-stats.mozilla.org/) for searching and analyzing crash data. Notice that the Crash Ingestion pipeline processes Crash Reports and exports a safe form of the processed crash to Telemetry.

More on Crash Reports data

The Crash Reporter is an interesting piece of software, since it needs to do its work while the world around it is collapsing. That means a number of unusual things can happen to the data it collects to build a Report. That being said, there’s a good chance the data it collects is OK, and even when it isn’t, it can still be interesting.

A real concern with Crash Report data is how toxic it can be: while some pieces of the data are innocuous, such as ProductName, BuildID and Version, other pieces are highly identifiable, such as URL, UserComments and Breadcrumbs.

Add that to the fact that minidumps contain copies of memory from the crashed processes, which can store usernames, passwords, credit card numbers and so on, and you end up with a very toxic dataset!
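Given how identifiable this data can be, exporting a “safe form” of the processed crash to Telemetry (mentioned above) is essentially an allowlisting exercise. The following is only a sketch of the idea; the field list and helper are hypothetical, and Socorro’s real rules are more nuanced:

```ts
// Hypothetical allowlist approach: only annotations known to be safe are
// exported to Telemetry; URL, UserComments, Breadcrumbs and anything else
// not on the list never leaves the crash-stats side.
const SAFE_ANNOTATIONS = new Set([
  "ProductName", "Version", "BuildID", "ReleaseChannel", "CrashReason",
]);

function toSafeCrash(processed: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(processed)) {
    if (SAFE_ANNOTATIONS.has(key)) safe[key] = value;
  }
  return safe;
}

console.log(toSafeCrash({
  ProductName: "Firefox",
  BuildID: "20220428000000",
  URL: "https://example.com/private-page",       // dropped
  UserComments: "crashed while I was logged in",  // dropped
}));
```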

 

Establishing a roadmap for the next 3-6 months

Another interesting exercise that made the Sprint feel even more productive was going over the Tecken/Socorro roadmap and reprioritizing things. While Will explained why we should do certain things, I took the chance to ask questions and get better context on the decisions we’ve made, our struggles past and present, and where we aim to be.

 

Conclusion

It was super productive to have a full day on which we could focus completely on all things Socorro/Tecken. That series of activities allowed us to improve the way we work, transfer knowledge and prioritize things for the not-so-distant future.

Big shout out to Will Kahn-Greene for organizing and driving this event, and also for the patience to explain things carefully and precisely.

Web Application SecurityUpgrading Mozilla’s Root Store Policy to Version 2.8

In accordance with the Mozilla Manifesto, which emphasizes the open development of policy that protects users’ privacy and security, we have worked with the Mozilla community over the past several months to improve the Mozilla Root Store Policy (MRSP) so that we can now announce version 2.8, effective June 1, 2022. These policy changes aim to improve the transparency of Certificate Authority (CA) operations and the certificates that they issue. A detailed comparison of the policy changes may be found here, and the significant policy changes that appear in this version are:

  • MRSP section 2.4: any matter documented in an audit as a qualification, a modified opinion, or a major non-conformity is also considered an incident and must have a corresponding Incident Report filed in Mozilla’s Bugzilla system;
  • MRSP section 3.2: ETSI auditors must be members of the Accredited Conformity Assessment Bodies’ Council and WebTrust auditors must be enrolled by CPA Canada as WebTrust practitioners;
  • MRSP section 3.3: CAs must maintain links to older versions of their Certificate Policies and Certification Practice Statements until the entire root CA certificate hierarchy operated in accordance with such documents is no longer trusted by the Mozilla root store;
  • MRSP section 4.1: before October 1, 2022, intermediate CA certificates capable of issuing TLS certificates are required to provide the Common CA Database (CCADB) with either the CRL Distribution Point for the full CRL issued by the CA certificate or a JSON array of partitioned CRLs that are equivalent to the full CRL for certificates issued by the CA certificate;
  • MRSP section 5.1.3: as of July 1, 2022, CAs cannot use the SHA-1 algorithm to issue S/MIME certificates, and effective July 1, 2023, CAs cannot use SHA-1 to sign any CRLs, OCSP responses, OCSP responder certificates, or CA certificates;
  • MRSP section 5.3.2: CA certificates capable of issuing working server or email certificates must be reported in the CCADB by July 1, 2022, even if they are technically constrained;
  • MRSP section 5.4: while logging of Certificate Transparency precertificates is not required by Mozilla, it is considered by Mozilla as a binding intent to issue a certificate, and thus, the misissuance of a precertificate is equivalent to the misissuance of a certificate, and CAs must be able to revoke precertificates, even if corresponding final certificates do not actually exist;
  • MRSP section 6.1.1: specific RFC 5280 Revocation Reason Codes must be used under certain circumstances (see blog post Revocation Reason Codes for TLS Server Certificates); and
  • MRSP section 8.4: new unconstrained third-party CAs must be approved through Mozilla’s review process that involves a public discussion.

These changes will provide Mozilla with more complete information about CA practices and certificate status. Several of these changes will require that CAs revise their practices, so we have also sent CAs a CA Communication and Survey to alert them about these changes and to inquire about their ability to comply with the new requirements by the effective dates.

In summary, these updates to the MRSP will improve the quality of information about CA operations and the certificates that they issue, which will increase security in the ecosystem by further enabling Firefox to keep your information private and secure.

The post Upgrading Mozilla’s Root Store Policy to Version 2.8 appeared first on Mozilla Security Blog.

Mozilla Add-ons BlogManifest v3 in Firefox: Recap & Next Steps

It’s been about a year since our last update regarding Manifest v3. A lot has changed since then, not least of which has been the formation of a community group under the W3C to advance cross-browser WebExtensions (WECG).

In our previous update, we announced that we would be supporting MV3 and mentioned Service Workers as a replacement for background pages. Since then, it became apparent that numerous use cases would be at risk if this were to proceed as is, so we went back to the drawing board. We proposed Event Pages in the WECG, a proposal that has been welcomed by the community and supported by Apple in Safari.

Today, we’re kicking off our Developer Preview program to gather feedback on our implementation of MV3. To set the stage, we want to outline the choices we’ve made in adopting MV3 in Firefox, some of the improvements we’re most excited about, and then talk about the ways we’ve chosen to diverge from the model Chrome originally proposed.

Why are we adopting MV3?

When we decided to move to WebExtensions in 2015, it was a long term bet on cross-browser compatibility. We believed then, as we do now, that users would be best served by having useful extensions available for as many browsers as possible. By the end of 2017 we had completed that transition and moved completely to the WebExtensions model. Today, many cross-platform extensions require only minimal changes to work across major browsers. We consider this move to be a long-term success, and we remain committed to the model.

In 2018, Chrome announced Manifest v3, followed by Microsoft adopting Chromium as the base for the new Edge browser. This means that support for MV3, by virtue of the combined share of Chromium-based browsers, will be a de facto standard for browser extensions in the foreseeable future. We believe that working with other browser vendors in the context of the WECG is the best path toward a healthy ecosystem that balances the needs of its users and developers. For Mozilla, this is a long term bet on a standards-driven future for WebExtensions.

Why is MV3 important to improving WebExtensions?

Manifest V3 is the next iteration of WebExtensions, and offers the opportunity to introduce improvements that would otherwise not be possible due to concerns with backward compatibility. MV2 had architectural constraints that made some issues difficult to address; with MV3 we are able to make changes that address them.

One core part of the extension architecture is the background page, which lives forever by design. Due to memory or platform constraints (e.g. on Android), we can’t guarantee this state, and termination of the background page (along with the extension) is sometimes inevitable. In MV3, we’re introducing a new architecture: the background script must be designed to be restartable. To support this, we’ve reworked existing APIs and introduced new ones, enabling extensions to declare how the browser should behave without requiring the background script to be running.
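As a rough sketch of what this looks like for developers (assuming Firefox’s MV3 event-page background declaration, "background": { "scripts": ["background.js"] }, and the "storage" permission), the key habits are to register listeners synchronously at the top level and to keep state in storage rather than in module globals, since the script may be shut down and restarted between events:

```ts
// background.ts: a restartable (event page) background script.
// Nothing important lives in globals; listeners are registered on every
// (re)start so the browser can wake the script when an event fires.
declare const browser: any; // provided by the browser at runtime

browser.runtime.onInstalled.addListener(() => {
  browser.storage.local.set({ pageLoads: 0 });
});

browser.tabs.onUpdated.addListener(async (_tabId: number, changeInfo: any) => {
  if (changeInfo.status !== "complete") return;
  // Read and write state through storage, because the script may have been
  // restarted (and its globals discarded) since the last event.
  const { pageLoads = 0 } = await browser.storage.local.get("pageLoads");
  await browser.storage.local.set({ pageLoads: pageLoads + 1 });
});
```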

Another core part of extensions is content scripts, which interact directly with web pages. We are blocking unsafe coding practices and offering more secure alternatives to improve the baseline security of extensions: string-based code execution has been removed from extension APIs. Moreover, to improve the isolation of data between different origins, cross-origin requests are no longer possible from content scripts unless the destination website opts in via CORS.
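In practice, a content script that used to fetch a cross-origin resource directly would instead ask its background script to do so, since the background script runs in the extension’s own context and is covered by the extension’s host permissions. A hedged sketch of that pattern (URLs and message names are invented):

```ts
// content-script.ts: delegate the cross-origin request to the background script.
declare const browser: any;

async function loadRemoteData(url: string): Promise<unknown> {
  return browser.runtime.sendMessage({ type: "fetch-json", url });
}

// background.ts: performs the fetch on behalf of content scripts.
// Assumes the extension has host permission for the requested URL.
browser.runtime.onMessage.addListener((message: any) => {
  if (message.type === "fetch-json") {
    // Returning a Promise sends the resolved value back as the async response.
    return fetch(message.url).then(response => response.json());
  }
});
```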

User controls for site access

Extensions often need to access user data on websites. While that has enabled extensions to provide powerful features and address numerous user needs, we’ve also seen misuse that impacts users’ privacy.

Starting with MV3, we’ll be treating all site access requests from extensions as optional, and provide users with transparency and controls to make it easier to manage which extensions can access their data for each website.

At the same time, we’ll be encouraging extensions to use models that don’t require permanent access to all websites, by making it easier to grant access for extensions with a narrow scope, or just temporarily. We are continuing to evaluate how to best handle cases, such as privacy and security extensions, that need the ability to intercept or affect all websites in order to fully protect our users.
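For extension developers, the narrow-scope model typically means asking for a site’s data only when the user actually invokes the extension there, via the permissions API. A sketch under those assumptions (the injected file is hypothetical, and the example assumes the "activeTab" and "scripting" permissions):

```ts
// Request access to the current site only when the user clicks the
// extension's toolbar button, instead of asking for <all_urls> at install time.
declare const browser: any;

browser.action.onClicked.addListener(async (tab: any) => {
  // tab.url is readable here because the click grants activeTab.
  const origin = new URL(tab.url).origin + "/*";

  // The request must happen in response to a user action; the user can
  // revoke the grant later, so treat access as temporary.
  const granted = await browser.permissions.request({ origins: [origin] });
  if (!granted) return;

  await browser.scripting.executeScript({
    target: { tabId: tab.id },
    files: ["highlight.js"], // hypothetical content script
  });
});
```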

What are we doing differently in Firefox?

WebRequest

One of the most controversial changes of Chrome’s MV3 approach is the removal of blocking WebRequest, which provides a level of power and flexibility that is critical to enabling advanced privacy and content blocking features. Unfortunately, that power has also been used to harm users in a variety of ways. Chrome’s solution in MV3 was to define a more narrowly scoped API (declarativeNetRequest) as a replacement. However, this will limit the capabilities of certain types of privacy extensions without adequate replacement.

Mozilla will maintain support for blocking WebRequest in MV3. To maximize compatibility with other browsers, we will also ship support for declarativeNetRequest. We will continue to work with content blockers and other key consumers of this API to identify current and future alternatives where appropriate. Content blocking is one of the most important use cases for extensions, and we are committed to ensuring that Firefox users have access to the best privacy tools available.
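A rough sketch of the difference between the two APIs, with an invented filter, is shown below: blocking webRequest runs extension code for every matching request, while declarativeNetRequest registers a rule up front and lets the browser apply it.

```ts
declare const browser: any;

// Blocking webRequest: the listener inspects each request and decides
// programmatically (needs "webRequest", "webRequestBlocking" and host permissions).
browser.webRequest.onBeforeRequest.addListener(
  (details: any) => {
    return details.url.includes("/ads/") ? { cancel: true } : {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"],
);

// declarativeNetRequest: the same intent expressed as a declarative rule,
// applied by the browser without running extension code per request.
// Removing the rule id first makes re-registering the rule idempotent.
browser.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: { urlFilter: "/ads/", resourceTypes: ["script", "image"] },
  }],
});
```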

Event Pages

Chrome’s version of MV3 introduced Background Service Worker as a replacement for the (persistent) Background Page. Mozilla is working on extension Service Workers in Firefox for compatibility reasons, but also because we like that they’re an event-driven environment with defined lifetimes, already part of the Web Platform with good cross-browser support.

We’ve found Service Workers can’t fully support various use cases we consider important, especially around DOM-related features and APIs. Additionally, the worker environment is not as familiar to regular web developers, and our developer community has expressed that completely rewriting extensions can be tedious for thousands of independent developers of existing extensions.

In Firefox, we have decided to support Event Pages in MV3, and our developer preview will not include Service Workers (we’re continuing to work on supporting these for a future release). This will help developers more easily migrate existing persistent background pages to support MV3 while retaining access to all of the DOM-related features available in MV2. We will also support Event Pages in MV2 in an upcoming release, which will further aid migration by allowing developers to transition existing MV2 extensions over a series of releases.

Next Steps for Firefox

In launching our Developer Preview program for Manifest v3, our hope is that authors will test out our MV3 implementation and help us identify gaps or incompatibilities. Work is continuing in parallel, and we expect to launch MV3 support for all users by the end of 2022. As we get closer to completion, we will follow up with more detail on timing and how we will support extensions through the transition.

For more information on the Manifest v3 Developer Preview, please check out the migration guide.  If you have questions or feedback on Manifest v3, we would love to hear from you on the Firefox Add-ons Discourse.

The post Manifest v3 in Firefox: Recap & Next Steps appeared first on Mozilla Add-ons Community Blog.

Web Application SecurityRevocation Reason Codes for TLS Server Certificates

In our continued efforts to improve the security of the web PKI, we are taking a multi-pronged approach to tackling some long-existing problems with revocation of TLS server certificates. In addition to our ongoing CRLite work, we added new requirements to version 2.8 of Mozilla’s Root Store Policy that will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. We also added a new requirement that CA operators provide their full CRL URLs in the CCADB. This will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

Previous Policy Updates

Significant improvements have already been made in the web PKI, including the following changes to Mozilla’s Root Store Policy and the CA/Browser Forum Baseline Requirements (BRs), which reduced risks associated with exposure of the private keys of TLS certificates by reducing the amount of time that the exposure can exist.

  • TLS server certificates issued on or after 1 September 2020 MUST NOT have a Validity Period greater than 398 days.
  • For TLS server certificates issued on or after October 1, 2021, each dNSName or IPAddress in the certificate MUST have been validated within the prior 398 days.

Under those provisions, the maximum validity period and maximum re-use of domain validation for TLS certificates roughly corresponds to the typical period of time for owning a domain name; i.e. one year. This reduces the risk of potential exposure of the private key of each TLS certificate that is revoked, replaced, or no longer needed by the original certificate subscriber.

New Requirements

In version 2.8 of Mozilla’s Root Store Policy we added requirements stating that:

  1. Specific RFC 5280 Revocation Reason Codes must be used under certain circumstances; and
  2. CA operators must provide their full CRL URLs in the Common CA Database (CCADB).

These new requirements will provide a complete accounting of all revoked TLS server certificates. This will enable Firefox to pre-load more complete certificate revocation data, eliminating the need for it to query CAs for revocation information when establishing TLS connections.

The new requirements about revocation reason codes account for the situations that can happen at any time during the certificate’s validity period, and address the following problems:

  • There were no policies specifying which revocation reason codes should be used and under which circumstances.
  • Some CAs were not using revocation reason codes at all for TLS server certificates.
  • Some CAs were using the same revocation reason code for every revocation.
  • There were no policies specifying the information that CAs should provide to their certificate subscribers about revocation reason codes.

Revocation Reason Codes

Section 6.1.1 of version 2.8 of Mozilla’s Root Store Policy states that when a TLS server certificate is revoked for one of the following reasons the corresponding entry in the CRL must include the revocation reason code:

  • keyCompromise (RFC 5280 Reason Code #1)
    • The certificate subscriber must choose the “keyCompromise” revocation reason code when they have reason to believe that the private key of their certificate has been compromised, e.g., an unauthorized person has had access to the private key of their certificate.
  • affiliationChanged (RFC 5280 Reason Code #3)
    • The certificate subscriber should choose the “affiliationChanged” revocation reason code when their organization’s name or other organizational information in the certificate has changed.
  • superseded (RFC 5280 Reason Code #4)
    • The certificate subscriber should choose the “superseded” revocation reason code when they request a new certificate to replace their existing certificate.
  • cessationOfOperation (RFC 5280 Reason Code #5)
    • The certificate subscriber should choose the “cessationOfOperation” revocation reason code when they no longer own all of the domain names in the certificate or when they will no longer be using the certificate because they are discontinuing their website.
  • privilegeWithdrawn (RFC 5280 Reason Code #9)
    • The CA will specify the “privilegeWithdrawn” revocation reason code when they obtain evidence that the certificate was misused or the certificate subscriber has violated one or more material obligations under the subscriber agreement or terms of use.

RFC 5280 Reason Codes that are not listed above shall not be specified in the CRL for TLS server certificates, for reasons explained in the wiki page.
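For reference, the codes above correspond to the following numeric values in a CRL entry’s reasonCode extension. The small helper is purely illustrative of how a subscriber-facing tool might map everyday situations onto the policy’s descriptions (privilegeWithdrawn is omitted because it is chosen by the CA, not the subscriber):

```ts
// RFC 5280 CRLReason values permitted by MRSP 2.8 for TLS server certificates.
enum CRLReason {
  keyCompromise = 1,
  affiliationChanged = 3,
  superseded = 4,
  cessationOfOperation = 5,
  privilegeWithdrawn = 9, // specified by the CA, not requested by the subscriber
}

// Hypothetical mapping from a subscriber's situation to the reason code they
// should request, following the descriptions above.
const reasonForSituation: Record<string, CRLReason> = {
  "private key exposed":         CRLReason.keyCompromise,
  "organization info changed":   CRLReason.affiliationChanged,
  "replacing the certificate":   CRLReason.superseded,
  "domain or site discontinued": CRLReason.cessationOfOperation,
};

console.log(reasonForSituation["replacing the certificate"]); // 4
```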

Conclusion

These new requirements are important steps towards improving the security of the web PKI, and are part of our effort to resolve long-existing problems with revocation of TLS server certificates. The requirements about revocation reason codes will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. The requirement that CA operators provide their full CRL URLs in the CCADB will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

The post Revocation Reason Codes for TLS Server Certificates appeared first on Mozilla Security Blog.

hacks.mozilla.orgImproved Process Isolation in Firefox 100

Introduction

Firefox uses a multi-process model for additional security and stability while browsing: Web Content (such as HTML/CSS and JavaScript) is rendered in separate processes that are isolated from the rest of the operating system and managed by a privileged parent process. This way, the amount of control gained by an attacker that exploits a bug in a content process is limited.

Ever since we deployed this model, we have been working on improving the isolation of the content processes to further limit the attack surface. This is a challenging task since content processes need access to some operating system APIs to properly function: for example, they still need to be able to talk to the parent process. 

In this article, we would like to dive a bit further into the latest major milestone we have reached: Win32k Lockdown, which greatly reduces the capabilities of the content process when running on Windows. Together with two major earlier efforts (Fission and RLBox) that shipped before, this completes a sequence of large leaps forward that will significantly improve Firefox’s security.

Although Win32k Lockdown is a Windows-specific technique, it became possible because of a significant re-architecting of the Firefox security boundaries that Mozilla has been working on for around four years, which allowed similar security advances to be made on other operating systems.

The Goal: Win32k Lockdown

Firefox runs the processes that render web content with quite a few restrictions on what they are allowed to do when running on Windows. Unfortunately, by default they still have access to the entire Windows API, which opens up a large attack surface: the Windows API consists of many parts, for example, a core part dealing with threads, processes, and memory management, but also networking and socket libraries, printing and multimedia APIs, and so on.

Of particular interest for us is the win32k.sys API, which includes many graphical and widget related system calls that have a history of being exploitable. Going back further in Windows’ origins, this situation is likely the result of Microsoft moving many operations that were originally running in user mode into the kernel in order to improve performance around the Windows 95 and NT4 timeframe.

Having likely never been originally designed to run in this sensitive context, these APIs have been a traditional target for hackers to break out of application sandboxes and into the kernel.

In Windows 8, Microsoft introduced a new mitigation named PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY that an application can use to disable access to win32k.sys system calls. That is a long name to keep repeating, so we’ll refer to it hereafter by our internal designation: “Win32k Lockdown“.

The Work Required

Flipping the Win32k Lockdown flag on the Web Content processes – the processes most vulnerable to potentially hostile web pages and JavaScript – means that those processes can no longer perform any graphical, window management, input processing, etc. operations themselves.

To accomplish these tasks, such operations must be remoted to a process that has the necessary permissions, typically the process that has access to the GPU and handles compositing and drawing (hereafter called the GPU Process), or the privileged parent process. 

Drawing web pages: WebRender

For painting the web pages’ contents, Firefox historically used various methods for interacting with the Windows APIs, ranging from using modern Direct3D based textures, to falling back to GDI surfaces, and eventually dropping into pure software mode.

These different options would have taken quite some work to remote, as most of the graphics API is off limits in Win32k Lockdown. The good news is that as of Firefox 92, our rendering stack has switched to WebRender, which moves all the actual drawing from the content processes to WebRender in the GPU Process.

Because the content process no longer needs to interact directly with the platform drawing APIs when WebRender is used, this avoids any Win32k Lockdown-related problems. WebRender itself has been designed partially to be more similar to game engines, and thus be less susceptible to driver bugs.

For the remaining drivers that are just too broken to be of any use, it still has a fully software-based mode, which means we have no further fallbacks to consider.

Webpages drawing: Canvas 2D and WebGL 3D

The Canvas API provides web pages with the ability to draw 2D graphics. In the original Firefox implementation, these JavaScript APIs were executed in the Web Content processes and the calls to the Windows drawing APIs were made directly from the same processes.

In a Win32k Lockdown scenario, this is no longer possible, so all drawing commands are remoted by recording and playing them back in the GPU process over IPC.

Although the initial implementation had good performance, there were nevertheless reports from some sites that experienced performance regressions (the web sites that became faster generally didn’t complain!). A particular pain point is applications that call getImageData() repeatedly: having the Canvas remoted means that GPU textures must now be obtained from another process and sent over IPC.

We compensated for this in the scenario where getImageData is called at the start of a frame, by detecting this and preparing the right surfaces proactively to make the copying from the GPU faster.
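The pattern being optimized looks roughly like the page-side sketch below (ordinary web content code, not Firefox internals): a readback at the top of each animation frame, which with a remoted canvas would otherwise mean a cross-process copy every frame.

```ts
// Page-side pattern that stresses a remoted Canvas 2D: read pixels back at
// the start of every frame, then draw the next frame based on the result.
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 256;
const ctx = canvas.getContext("2d")!;

function frame() {
  // With the canvas living in the GPU process, this readback has to pull the
  // pixel data back over IPC unless the surface was prepared ahead of time.
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);

  ctx.fillStyle = pixels.data[0] > 128 ? "black" : "white";
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```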

Besides the Canvas API to draw 2D graphics, the web platform also exposes an API to do 3D drawing, called WebGL. WebGL is a state-heavy API, so properly and efficiently synchronizing child and parent (as well as parent and driver) takes great care.

WebGL originally handled all validation in Content, but with access to the GPU and the associated attack surface removed from there, we needed to craft a robust validating API between child and parent as well to get the full security benefit.

(Non-)Native Theming for Forms

HTML web pages have the ability to display form controls. While the overwhelming majority of websites provide a custom look and styling for those form controls, not all of them do, and if they do not you get an input GUI widget that is styled like (and originally was!) a native element of the operating system.

Historically, these were drawn by calling the appropriate OS widget APIs from within the content process, but those are not available under Win32k Lockdown.

This cannot easily be fixed by remoting the calls, as the widgets themselves come in an infinite number of sizes, shapes, and styles, can be interacted with, and need to be responsive to user input and dispatch messages. We settled on having Firefox draw the form controls itself, in a cross-platform style.

While changing the look of form controls has web compatibility implications, and some people prefer the more native look – on the few pages that don’t apply their own styles to controls – Firefox’s approach is consistent with that taken by other browsers, probably because of very similar considerations.

Scrollbars were a particular pain point: we didn’t want to draw the main scrollbar of the content window in a different manner than the rest of the UX, since nested scrollbars would show up with different styles, which would look awkward. But, unlike the rather rare non-styled form widgets, the main scrollbar is visible on most web pages, and because it conceptually belongs to the browser UX we really wanted it to look native.

We therefore decided to draw all scrollbars to match the system theme, although it’s a bit of an open question how things should look if even the vendor of the operating system can’t seem to decide what the “native” look is.

Final Hurdles

Line Breaking

With the above changes, we thought we had all the usual suspects that would access graphics and widget APIs in win32k.sys wrapped up, so we started running the full Firefox test suite with win32k syscalls disabled. This caused at least one unexpected failure: Firefox was crashing when trying to find line breaks for some languages with complex scripts.

While Firefox is able to correctly determine word endings in multibyte character streams for most languages by itself, the support for Thai, Lao, Tibetan and Khmer is known to be imperfect, and in these cases, Firefox can ask the operating system to handle the line breaking for it. But at least on Windows, the functions to do so are covered by the Win32k Lockdown switch. Oops!

There are efforts underway to incorporate ICU4X and base all i18n related functionality on that, meaning that Firefox will be able to handle all scripts perfectly without involving the OS, but this is a major effort and it was not clear if it would end up delaying the rollout of win32k lockdown.

We did some experimentation with trying to forward the line breaking over IPC. Initially, this had bad performance, but when we added caching performance was satisfactory or sometimes even improved, since OS calls could be avoided in many cases now.
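Conceptually, the cache is nothing more than memoizing the remoted call per text run, along the lines of the sketch below (the remote function is a hypothetical stand-in for the IPC round trip to the parent process):

```ts
// Memoize remoted line-break results: identical text runs are common, so the
// expensive cross-process call only happens on a cache miss.
type BreakPositions = number[];

const breakCache = new Map<string, BreakPositions>();

async function getLineBreaks(
  textRun: string,
  remoteBreak: (run: string) => Promise<BreakPositions>, // stands in for the IPC call
): Promise<BreakPositions> {
  const cached = breakCache.get(textRun);
  if (cached) return cached;

  const breaks = await remoteBreak(textRun); // slow: crosses the process boundary
  breakCache.set(textRun, breaks);
  return breaks;
}
```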

DLL Loading & Third Party Interactions

Another complexity of disabling win32k.sys access is that so much Windows functionality assumes it is available by default, and specific effort must be taken to ensure the relevant DLLs do not get loaded on startup. Firefox itself for example won’t load the user32 DLL containing some win32k APIs, but injected third party DLLs sometimes do. This causes problems because COM initialization in particular uses win32k calls to get the Window Station and Desktop if the DLL is present. Those calls will fail with Win32k Lockdown enabled, silently breaking COM and features that depend on it such as our accessibility support. 

On Windows 10 Fall Creators Update and later we have a fix that blocks these calls and forces a fallback, which keeps everything working nicely. We measured that not loading the DLLs causes about a 15% performance gain when opening new tabs, adding a nice performance bonus on top of the security benefit.

Remaining Work

As hinted in the previous section, Win32k Lockdown will initially roll out on Windows 10 Fall Creators Update and later. On Windows 8, and unpatched Windows 10 (which unfortunately seems to be in use!), we are still testing a fix for the case where third party DLLs interfere, so support for those will come in a future release.

For Canvas 2D support, we’re still looking into improving the performance of applications that regressed when the processes were switched around. Simultaneously, there is experimentation underway to see if hardware acceleration for Canvas 2D can be implemented through WebGL, which would increase code sharing between the 2D and 3D implementations and take advantage of modern video drivers being better optimized for the 3D case.

Conclusion

Retrofitting a significant change in the separation of responsibilities in a large application like Firefox presents a large, multi-year engineering challenge, but it is absolutely required in order to advance browser security and to continue keeping our users safe. We’re pleased to have made it through and present you with the result in Firefox 100.

Other Platforms

If you’re a Mac user, you might wonder if there’s anything similar to Win32k Lockdown that can be done for macOS. You’d be right, and I have good news for you: we already quietly shipped the changes that block access to the WindowServer in Firefox 95, improving security and speeding process startup by about 30-70%. This too became possible because of the Remote WebGL and Non-Native Theming work described above.

For Linux users, we removed the connection from content processes to the X11 Server, which stops attackers from exploiting the unsecured X11 protocol. Although Linux distributions have been moving towards the more secure Wayland protocol as the default, we still see a lot of users that are using X11 or XWayland configurations, so this is definitely a nice-to-have, which shipped in Firefox 99.

We’re Hiring

If you found the technical background story above fascinating, I’d like to point out that our OS Integration & Hardening team is going to be hiring soon. We’re especially looking for experienced C++ programmers with some interest in Rust and in-depth knowledge of Windows programming.

If you fit this description and are interested in taking the next leap in Firefox security together with us, we’d encourage you to keep an eye on our careers page.

Thanks to Bob Owen, Chris Martin, and Stephen Pohl for their technical input to this article, and for all the heavy lifting they did together with Kelsey Gilbert and Jed Davis to make these security improvements ship.

The post Improved Process Isolation in Firefox 100 appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.12 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12!

Please check out [1] and [2].  Updates forthcoming.

Nothing beats a quick release.  🙂  Kudos to the guys driving these bug fixes.

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.12/

[2] – https://www.seamonkey-project.org/releases/2.53.12

hacks.mozilla.orgCommon Voice dataset tops 20,000 hours

The latest Common Voice dataset, released today, has achieved a major milestone: More than 20,000 hours of open-source speech data that anyone, anywhere can use. The dataset has nearly doubled in the past year.

Why should you care about Common Voice?

  • Do you have to change your accent to be understood by a virtual assistant? 
  • Are you worried that so many voice-operated devices are collecting your voice data for proprietary Big Tech datasets?
  • Are automatic subtitles unavailable for you in your language?

Automatic Speech Recognition plays an important role in the way we can access information; however, of the 7,000 languages spoken globally today, only a handful are supported by most products.

Mozilla’s Common Voice seeks to change the language technology ecosystem by supporting communities to collect voice data for the creation of voice-enabled applications for their own languages. 

Common Voice Dataset Release 

This release wouldn’t be possible without our contributors — from voice donations to initiating their language in our project, to opening new opportunities for people to build voice technology tools that can support every language spoken across the world.

Access the dataset: https://commonvoice.mozilla.org/datasets

Access the metadata: https://github.com/common-voice/cv-dataset 

Highlights from the latest dataset:

  • The new release also features six new languages: Tigre, Taiwanese (Minnan), Meadow Mari, Bengali, Toki Pona and Cantonese.
  • Twenty-seven languages now have at least 100 hours of speech data. They include Bengali, Thai, Basque, and Frisian.
  • Nine languages now have at least 500 hours of speech data. They include Kinyarwanda (2,383 hours), Catalan (2,045 hours), and Swahili (719 hours).
  • Nine languages now have at least 45% of their gender tags as female. They include Marathi, Dhivehi, and Luganda.
  • The Catalan community fueled major growth. Its Project AINA — a collaboration between the Barcelona Supercomputing Center and the Catalan Government — mobilized Catalan speakers to contribute to Common Voice. 
  • More support for community participation in decision making. The Common Voice Language Rep Cohort has contributed feedback and learnings about optimal sentence collection, the inclusion of language variants, and more. 

 Create with the Dataset 

How will you create with the Common Voice Dataset?

Take some inspiration from technologists who are creating conversational chatbots, spoken language identifiers, research papers and virtual assistants with the Common Voice Dataset by watching this talk: 

https://mozilla.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6492f3ae-3a0d-4363-99f6-adc00111b706 

Share with us how you are using the dataset on social media using #CommonVoice, or share on our Community Discourse. 

 

The post Common Voice dataset tops 20,000 hours appeared first on Mozilla Hacks - the Web developer blog.

hacks.mozilla.orgMDN Plus now available in more regions

At the end of March this year, we announced MDN Plus, a new premium service on MDN that allows users to customize their experience on the website.

We are very glad to announce today that it is now possible for MDN users around the globe to create an MDN Plus free account, no matter where they are.

Click here to create an MDN Plus free account*.

The premium version of the service is currently available as follows: in the United States, Canada (since March 24th, 2022), Austria, Belgium, Finland, France, United Kingdom, Germany, Ireland, Italy, Malaysia, the Netherlands, New Zealand, Puerto Rico, Sweden, Singapore, Switzerland, Spain (since April 28th, 2022), Estonia, Greece, Latvia, Lithuania, Portugal, Slovakia and Slovenia (since June 15th, 2022).

We continue to work towards expanding this list even further.

Click here to create an MDN Plus premium account**.

* Now available to everyone

** You will need to subscribe from one of the regions mentioned above to be able to have an MDN Plus premium account at this time

The post MDN Plus now available in more regions appeared first on Mozilla Hacks - the Web developer blog.

SeaMonkeySeaMonkey 2.53.12 Beta 1 is out!

Hi All,

The SeaMonkey Project is pleased to announce the immediate release of 2.53.12 Beta 1.

As it is a beta, please do backup your profile before updating to it.

Please check out [1] and [2].

Updates are slowly being turned on for 2.53.12b1 after I post this blog. The last few times, users had updated to the newest version even before I had posted the blog, which somewhat confused people. This shouldn’t be the case now. (After all, I had posted the blog, *then* I flipped the update bit, then updated this blog with this side note. :))

Best Regards,

:ewong

[1] – https://www.seamonkey-project.org/releases/seamonkey2.53.12b1/

[2] – https://www.seamonkey-project.org/releases/2.53.12b1