Firefox Nightly: These Weeks in Firefox: Issue 105

Highlights

  • Starting from Firefox 96, a new “browser.runtime.getFrameId” method allows an extension content script to retrieve the frameId associated with the WindowProxy of an embedder DOM element (e.g. frame, object, etc.) – Bug 1733104
  • We published the Performance Tools Newsletter for Q3 2021. You can find it here if you are curious about the things we’ve worked on in Q3.
  • We have improved the flow around PDFs being downloaded/opened.
    • PDFs served with Content-Disposition: attachment will now open in PDF.js if it’s available.
    • Clicking the save/download button on Nightly now prompts for a location instead of opening another tab with PDF.js.
  • Are you a volunteer contributor on Windows 10 or later? Good news! mozilla-build works with the latest version of the Windows Terminal, which is a much-improved experience over the old default terminal.
    • You can set it up to run the mozilla-build start-shell script directly: create a new profile in the Windows Terminal settings and set its command line to C:\mozilla-build\start-shell.bat (or wherever your start-shell.bat lives).


Friends of the Firefox team

Introductions/Shout-Outs

  • Introducing Chris Bellini, a new Engineering Manager for the Search / New Tab team!

Resolved bugs (excluding employees)

Fixed more than one bug

  • Ava Katushka
  • Evgenia Kotovich
  • Geoff Lankow (:darktrojan)
  • Mathew Hodson

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of Fission-related changes to the WebExtensions and AddonManager internals, one more of the remaining framescripts has been removed (and we got a small but still good improvement on the Base Content JS perftest \o/) – Bug 1708193
WebExtensions Framework
  • Fixed a leak due to the ExpandedPrincipal and nsCSPContext keeping a strong reference to each other – Bug 1741600
    • This leak only happened with content scripts defined by a “manifest_version: 3” add-on, and so it never actually triggered in release; our tests caught it first.
WebExtension APIs
  • As part of the ongoing ManifestVersion 3 work, William Durand landed the initial bits of the new “browser.scripting” API in Firefox 96 – Bug 1740601
    • This API requires the new “scripting” permission and is currently only allowed in “manifest_version: 3” extension manifests (and is therefore also gated behind the about:config preference “extensions.manifestV3.enabled”)
    • Work on the “scripting” API namespace is tracked by the Bug 1687764 meta bug

Downloads Panel

Fission

  • Rollout to 100% on release is almost done!

Form Autofill

High-Contrast Mode (MSU Capstone project)

Desktop Integrations (Installer & Updater)

Lint, Docs and Workflow

  • Standard8 enabled ESLint rules no-undef and no-unused-vars on xhtml files under dom/.
    • This also uncovered a broken test that was largely not running: its errors were being caught and hidden because the test chained promises without checking for thrown exceptions.

macOS Spotlight

  • Work is underway to support Apple’s Screen Time API. This lets you define time-based and/or parental-control limits for certain webpages. When you’ve reached the limit across all your devices, an overlay will occlude the webpage. Don’t worry: this is an opt-in feature at the OS level, and the overlay can be dismissed.
  • We’re working on fixing issues where the video power consumption improvements from a few weeks ago were actually regressing power consumption on older Macs. Those improvements are disabled on affected Macs while we investigate.

Printing

Picture-in-Picture

Performance

Search and Navigation

  • Mandy Cheang [mcheang] improved copying partial URLs from the Address Bar so that the resulting URL is encoded (note: there is a browser.urlbar.decodeURLsOnCopy pref to change the Address Bar behavior) – Bug 1539511
  • Drew has landed various improvements to Firefox Suggest and enabled the Merino service (a Mozilla-owned server for suggestions) for users who opted in.

Mozilla Addons Blog: New JavaScript syntax support in add-on developer tools

It’s been a year since we last added support for new JavaScript syntax to the add-ons linter. In that time we’ve used it to validate over 150,000 submissions to AMO, totalling hundreds of millions of lines of code. But it has been a year, and with both JavaScript and Firefox constantly and quickly evolving, the list of JavaScript features Firefox supports and what the AMO linter allows have drifted apart.

This drift is not an accident; Firefox and AMO don’t keep the same cadence on supported features, and this is deliberate. Upcoming JavaScript features are spread across different ECMAScript proposal stages, meaning different features are always at different stages of readiness. While Firefox often trials promising new JavaScript features that aren’t “finished” yet (stage 4 in the ECMAScript process) to better test their implementations and drive early adoption, the AMO team takes a different approach intended to minimize the friction developers might face moving their add-ons between browsers. To that end, the AMO team only adds support for “finished”, stage 4 features to the linter.

This hybrid approach works well for everyone; while Firefox continues to push the web ecosystem forward, AMO is making it easier for add-on developers to move laterally within that ecosystem.

Today, we’re happy to announce that our linter has been updated to ESLint v8 for JavaScript validation. This upgrades linter support to ECMAScript 2022 syntax, including features like public field declarations and top-level await that add-on developers will find particularly useful.

If you’d like to know more about how these tools work, and maybe help us improve them, bug reports and new contributors are always welcome. Thank you for being a part of Mozilla, and the add-ons developer community.

The post New JavaScript syntax support in add-on developer tools appeared first on Mozilla Add-ons Community Blog.

Hacks.Mozilla.Org: WebAssembly and Back Again: Fine-Grained Sandboxing in Firefox 95

In Firefox 95, we’re shipping a novel sandboxing technology called RLBox — developed in collaboration with researchers at the University of California San Diego and the University of Texas — that makes it easy and efficient to isolate subcomponents to make the browser more secure. This technology opens up new opportunities beyond what’s been possible with traditional process-based sandboxing, and we look forward to expanding its usage and (hopefully) seeing it adopted in other browsers and software projects.

This technique, which uses WebAssembly to isolate potentially-buggy code, builds on the prototype we shipped last year to Mac and Linux users. Now, we’re bringing that technology to all supported Firefox platforms (desktop and mobile), and isolating five different modules: Graphite, Hunspell, Ogg, Expat and Woff2 [1].

Going forward, we can treat these modules as untrusted code, and — assuming we did it right — even a zero-day vulnerability in any of them should pose no threat to Firefox. Accordingly, we’ve updated our bug bounty program to pay researchers for bypassing the sandbox even without a vulnerability in the isolated library.

The Limits of Process Sandboxing

All major browsers run Web content in its own sandboxed process, in theory preventing it from exploiting a browser vulnerability to compromise your computer. On desktop operating systems, Firefox also isolates each site in its own process in order to protect sites from each other.

Unfortunately, threat actors routinely attack users by chaining together two vulnerabilities — one to compromise the sandboxed process containing the malicious site, and another to escape the sandbox [2]. To keep our users secure against the most well-funded adversaries, we need multiple layers of protection.

Having already isolated things along trust boundaries, the next logical step is to isolate across functional boundaries. Historically, this has meant hoisting a subcomponent into its own process. For example, Firefox runs audio and video codecs in a dedicated, locked-down process with a limited interface to the rest of the system. However, there are some serious limitations to this approach. First, it requires decoupling the code and making it asynchronous, which is usually time-consuming and may impose a performance cost. Second, processes have a fixed memory overhead, and adding more of them increases the memory footprint of the application.

For all of these reasons, nobody would seriously consider hoisting something like the XML parser into its own process. To isolate at that level of granularity, we need a different approach.

Isolating with RLBox

This is where RLBox comes in. Rather than hoisting the code into a separate process, we instead compile it into WebAssembly and then compile that WebAssembly into native code. This doesn’t result in us shipping any .wasm files in Firefox, since the WebAssembly step is only an intermediate representation in our build process.

However, the transformation places two key restrictions on the target code: it can’t jump to unexpected parts of the rest of the program, and it can’t access memory outside of a specified region. Together, these restrictions make it safe to share an address space (including the stack) between trusted and untrusted code, allowing us to run them in the same process largely as we were doing before. This, in turn, makes it easy to apply without major refactoring: the programmer only needs to sanitize any values that come from the sandbox (since they could be maliciously-crafted), a task which RLBox makes easy with a tainting layer.

The first step in this transformation is straightforward: we use Clang to compile Firefox, and Clang knows how to emit WebAssembly, so we simply need to switch the output format for the given module from native code to wasm. For the second step, our prototype implementation used Cranelift. Cranelift is excellent, but a second native code generator added complexity — and we realized that it would be simpler to just map the WebAssembly back into something that our existing build system could ingest.

We accomplished this with wasm2c, which performs a straightforward translation of WebAssembly into equivalent C code, which we can then feed back into Clang along with the rest of the Firefox source code. This approach is very simple, and automatically enables a number of important features that we support for regular Firefox code: profile-guided optimization, inlining across sandbox boundaries, crash reporting, debugger support, source-code indexing, and likely other things that we have yet to appreciate.

Next Steps

RLBox is a big win for us on several fronts: it protects our users from accidental defects as well as supply-chain attacks, and it reduces the need for us to scramble when such issues are disclosed upstream. As such, we intend to continue applying it to more components going forward. Some components are not a good fit for this approach — either because they depend too much on sharing memory with the rest of the program, or because they’re too performance-sensitive to accept the modest overhead incurred — but we’ve identified a number of other good candidates.

Moreover, we hope to see this technology make its way into other browsers and software projects to make the ecosystem safer. RLBox is a standalone project that’s designed to be very modular and easy-to-use, and the team behind it would welcome other use-cases.

Speaking of the team: I’d like to thank Shravan Narayan, Deian Stefan, and Hovav Shacham for their tireless work in bringing this work from research concept to production. Shipping to hundreds of millions of users is hard, and they did some seriously impressive work.

Read more about RLBox and this announcement on the UC San Diego Jacobs School of Engineering website.


[1] Cross-platform sandboxing for Graphite, Hunspell, and Ogg is shipping in Firefox 95, while Expat and Woff2 will ship in Firefox 96.

[2] By using a syscall to exploit a vulnerability in the OS, or by using an IPC message to exploit a vulnerability in a process hosting more-privileged parts of the browser.


The post WebAssembly and Back Again: Fine-Grained Sandboxing in Firefox 95 appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language Blog: Announcing Rust 1.57.0

The Rust team is happy to announce a new version of Rust, 1.57.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.57.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.57.0 on GitHub.

What's in 1.57.0 stable

Rust 1.57 brings panic! to const contexts, adds support for custom profiles to Cargo, and stabilizes fallible reservation APIs.

panic! in const contexts

With previous versions of Rust, the panic! macro was not usable in const fn and other compile-time contexts. Now, this has been stabilized. Together with the stabilization of panic!, several other standard library APIs are now usable in const, such as assert!.

This stabilization does not yet include the full formatting infrastructure, so the panic! macro must be called with either a static string (panic!("...")), or with a single &str interpolated value (panic!("{}", a)) which must be used with {} (no format specifiers or other traits).

It is expected that in the future this support will expand, but this minimal stabilization already enables straightforward compile-time assertions, for example to verify the size of a type:

const _: () = assert!(std::mem::size_of::<u64>() == 8);
const _: () = assert!(std::mem::size_of::<u8>() == 1);
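
Beyond const items, panic! can now also guard invariants inside a const fn. The following is a minimal sketch of that pattern (the midpoint function is our own illustration, not taken from the release notes):

const fn midpoint(lo: usize, hi: usize) -> usize {
    if hi < lo {
        // As of Rust 1.57, panicking is allowed in compile-time contexts.
        panic!("midpoint requires lo <= hi");
    }
    lo + (hi - lo) / 2
}

// Evaluated entirely at compile time; violating the invariant
// would become a compile-time error.
const MID: usize = midpoint(0, 10);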

Cargo support for custom profiles

Cargo has long supported four profiles: dev, release, test, and bench. With Rust 1.57, support has been added for arbitrarily named profiles.

For example, if you want to enable link time optimizations (LTO) only when making the final production build, adding the following snippet to Cargo.toml enables the lto flag when this profile is selected, but avoids enabling it for regular release builds.

[profile.production]
inherits = "release"
lto = true

Note that custom profiles must specify a profile from which they inherit default settings. Once the profile has been defined, Cargo commands which build code can be asked to use it with --profile production. Currently, this will build artifacts in a separate directory (target/production in this case), which means that artifacts are not shared between profiles.

Fallible allocation

Rust 1.57 stabilizes try_reserve for Vec, String, HashMap, HashSet, and VecDeque. This API enables callers to fallibly allocate the backing storage for these types.

Rust will usually abort the process if the global allocator fails, which is not always desirable. This API provides a method for avoiding that abort when working with the standard library collections. However, Rust does not guarantee that the returned memory is actually allocated by the kernel: for example, if overcommit is enabled on Linux, the memory may not be available when its use is attempted.
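
As an illustration (our own sketch, not taken from the release notes), a function can now surface allocation failure to its caller instead of aborting:

use std::collections::TryReserveError;

// Copy a slice into a freshly allocated Vec, reporting allocation
// failure as a Result instead of aborting the process.
fn try_copy(data: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(data.len())?; // the fallible reservation stabilized in 1.57
    buf.extend_from_slice(data);
    Ok(buf)
}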

Stabilized APIs

The following methods and trait implementations were stabilized.

The following previously stable functions are now const.

Other changes

There are other changes in the Rust 1.57.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.57.0

Many people came together to create Rust 1.57.0. We couldn't have done it without all of you. Thanks!

The Mozilla Blog: Pocket’s state-by-state guide to the most popular articles in 2021

We’re just going to say it: it feels a little bit weird to wrap up 2021 because this year feels like three years in one and an extension of 2020 simultaneously. At some point in the near future, 2020 and 2021 will be studied in history books. While we can’t predict what the history books will say, we can analyze what defined this year for us. 

We do just that in Pocket’s Best of 2021 — the most-saved, -read and -shared articles by Pocket readers, spanning culture, science, tech and more. 

As we analyzed the winning articles, we wondered what we might learn if we looked at the data state by state. 

Setting aside the top story worldwide for 2021, Adam Grant’s piece naming that ‘blah’ feeling we felt after 2020, the top story in all but five states was a guide to deleting all of your old online accounts. And most of the five locales that differ — D.C., Maine, New York, North Dakota and Montana — have that story as the second most-saved story. 

We saw a few patterns among top stories across several states. Americans weren’t just deleting old online accounts; they were also trying to strengthen their memory, pondering how the rich avoid income tax and wondering how to be wittier in conversation.

We might all have been languishing, but we were also questioning whether we could improve ourselves — or at least our bank accounts. Some things don’t change, even after two of the strangest years in modern history.

Check below to see the top two stories from your state.

Alabama

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Alaska

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Delta Has Changed the Pandemic Endgame published on The Atlantic 

Arizona

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc 

Arkansas

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. How to be witty and clever in conversation published on Quartz 

California

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Colorado

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc 

Connecticut

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc 

Delaware

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Among the Insurrectionists published on The New Yorker 

District of Columbia 

  1. The Pandemic Has Erased Entire Categories of Friendship published on The Atlantic 
  2. Grief and Conspiracy 20 Years After 9/11  published on The Atlantic 

Florida

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Curious Case of Florida’s Pandemic Response published on The Atlantic 

Georgia

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Hawaii

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. How to Practice published on The New Yorker 

Idaho

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Great Resignation Is Accelerating published on  The Atlantic 

Illinois

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. A Battle Between a Great City and a Great Lake published on The New York Times

Indiana

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Iowa

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret History of the Shadow Campaign That Saved the 2020 Election published on TIME

Kansas

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Kentucky

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Six Morning Routines that Will Make You Happier, Healthier and More Productive published on Scott H. Young

Louisiana

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. How to be witty and clever in conversation published on Quartz

Maine

  1. Delta Has Changed the Pandemic Endgame published on The Atlantic 
  2. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek

Maryland

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Massachusetts

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Michigan

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Minnesota

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

Mississippi

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. How Fit Can You Get From Just Walking? published on GQ

Missouri

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. How to be witty and clever in conversation published on Quartz 

Montana

  1. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc
  2. Scientist Author Busts Myths About Exercise, Sitting And Sleep : Shots – Health News published on NPR

Nebraska

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Delta Has Changed the Pandemic Endgame published on The Atlantic 

Nevada

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

New Hampshire

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Six Morning Routines that Will Make You Happier, Healthier and More Productive published on Scott H. Young

New Jersey

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax published on ProPublica

New Mexico

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

New York

  1. Who Is the Bad Art Friend? published on The New York Times Magazine
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax  published on ProPublica  

North Carolina

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

North Dakota

  1. Inside the Worst-Hit County in the Worst-Hit State in the Worst-Hit Country published on The New Yorker
  2. 5 Questions the Most Interesting People Will Always Ask in Conversations published on Inc

Ohio

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Oklahoma

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. 5 Questions the Most Interesting People Will Always Ask in Conversations published on Inc

Oregon

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Delta Has Changed the Pandemic Endgame published on The Atlantic 

Pennsylvania

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Rhode Island

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Among the Insurrectionists published on The New Yorker 

South Carolina

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Six Morning Routines that Will Make You Happier, Healthier and More Productive published on Scott H. Young

South Dakota

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Tennessee

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax  published on ProPublica  

Texas

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax  published on ProPublica  

Utah

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. 5 Questions the Most Interesting People Will Always Ask in Conversations published on Inc

Vermont

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Among the Insurrectionists published on The New Yorker

Virginia

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax  published on ProPublica  

Washington

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Delta Has Changed the Pandemic Endgame published on The Atlantic 

West Virginia

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. The Six Morning Routines that Will Make You Happier, Healthier and More Productive published on Scott H. Young

Wisconsin

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Wyoming

  1. How to Delete Your Old Online Accounts (and Why You Should) published on How To Geek
  2. Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit published on Inc

Learn more about Pocket’s Best of 2021:

The post Pocket’s state-by-state guide to the most popular articles in 2021 appeared first on The Mozilla Blog.

The Mozilla Blog: Celebrating Pocket’s Best of 2021

Each December, Pocket celebrates the very best of the web — the must-read profiles, thought-provoking essays, and illuminating explainers that Pocket users saved and read the most over the past 12 months. Today, we’re delighted to bring you Pocket’s Best of 2021: more than a dozen collections spotlighting the year’s top articles across culture, technology, science, business, and more. 

We aren’t the only ones putting out Top 10 content lists or Year in Reviews, but we’d argue these lists are different from the rest — a cut above. Pocket readers are a discerning bunch: they gravitate to fascinating long reads that deeply immerse readers in a story or subject; explainers that demystify complex or poorly understood topics; big ideas that challenge us to think and maybe even act differently; and great advice for all facets of life. You’ll find must-read examples of all of these inside these eclectic Best of 2021 collections, from dozens of trustworthy and diverse publications.

The stories people save most to Pocket often provide a fascinating window into what’s occupying our collective attention each year. In 2019, the most-saved article on Pocket examined how modern economic precarity has turned millennials into the burnout generation. In 2020, the most-read article was a probing and prescient examination of how the Covid-19 pandemic might end.

This year, the No. 1 article in Pocket put a name to the chronic sense of ‘blah’ that so many of us felt in 2021 as the uncertainty of the pandemic wore on: languishing. (For months, heads would nod all over Zoom whenever this article came up in conversation.) To mark the end of the year, we asked Adam Grant, the organizational psychologist and bestselling author who wrote the piece, to curate a special Pocket Collection all about how to leave languishing behind in 2021 — and start flourishing in 2022 by breaking free from stagnation and rekindling your spark. 

What you’ll also find in this year’s Best Of package: A journey through some of 2021’s most memorable events and storylines, as told through 12 exemplary articles that Pocket users saved to help them make sense of it all. Plus, recommendations from this year’s top writers on the unforgettable stories they couldn’t stop reading, and a special collection from those of us at Pocket about 2021 lessons we won’t soon forget.

If you haven’t read these articles yet, save them to your Pocket and dig in over the holidays. While you’re at it, join the millions of people discovering the thought-provoking articles we curate in our daily newsletter and on the Firefox and Chrome new tab pages each and every day.

From all of us at Pocket, have a joyous and safe holiday season and a happy — and flourishing — new year.

Carolyn O’Hara is senior director of content discovery at Pocket. 

The post Celebrating Pocket’s Best of 2021 appeared first on The Mozilla Blog.

Support.Mozilla.Org: What’s up with SUMO – November 2021

Hey SUMO folks,

November came with lots of rain, at least in my part of the world. It certainly creates a different vibe. I believe you’ve been experiencing similar weather changes lately, be it snow or rain. Whatever it is, I hope you’re all safe and healthy wherever you are. Oh, and happy Thanksgiving to those of you who celebrate! Sorry for being late with the update this month (maybe it’s better to have it by the end of the month anyway), so let’s just dive into it!

Welcome on board!

  1. Welcome andmagdo to the forum world. He’s been around for a while, but we’d like to make sure he gets a proper call-out here.
  2. Also, welcome to Abhishek and Bithiah, who have joined the Social Support program. We’re excited to have more people on board.

Community news

  • November was intense due to the MR2 release. Luckily, we received good feedback for the overall release. Many people especially like Colorways and want us to keep it as a permanent feature. Many mobile users also enjoyed the customizable homepage and the new inactive tabs feature.
  • Firefox is officially on the Windows Store.
  • Time to say goodbye to Firefox Lockwise. But also a Hello to Firefox Relay Premium!
  • There were no release notes from Kitsune last month, but please read this post to learn more about the website stability update.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in October and November!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Oct 2021 | 3682 | 72.51% | 16.21% | 72.45%
Nov 2021 | 3463 | 73.92% | 15.74% | 74.78%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Sfhowes
  4. Jscher2000
  5. Seburo

KB

KB pageviews (*)

Month | Page views | Vs previous month
Oct 2021 | 8,688,751 | 5.38%
Nov 2021 | 8,306,363 | -4.40%

* The KB pageviews number is the total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. Julie
  5. Wayne Mery

KB Localization

Top 10 locale based on total page views

Locale | Oct 2021 pageviews (*) | Localization progress (per Nov 4) (**)
de | 8.41% | 97%
zh-CN | 6.78% | 98%
fr | 6.68% | 90%
es | 6.09% | 37%
pt-BR | 5.55% | 57%
ru | 4.27% | 95%
ja | 3.87% | 53%
it | 2.39% | 100%
pl | 2.18% | 86%
zh-TW | 1.79% | 5%
* Locale pageviews is the overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Michele Rodaro
  2. Milupo
  3. Artist
  4. Jim Spentzos
  5. Mark Heijl

Social Support

Channel | Total conv (Oct 2021) | Conv interacted (Oct 2021) | Total conv (Nov 2021) | Conv interacted (Nov 2021)
@firefox | 2659 | 434 | 2585 | 672
@FirefoxSupport | 190 | 178 | 289 | 244

Top 5 contributors in Oct-Nov 2021:

  1. Tim Maks
  2. Felipe Koji
  3. Bithiah Koshy
  4. Christophe Villeneuve
  5. Matt Cianfarani

Play Store Support

Channel | Total conv (Oct 2021) | Conv interacted (Oct 2021) | Total conv (Nov 2021) | Conv interacted (Nov 2021)
@firefox | 2659 | 434 | 2585 | 672
@FirefoxSupport | 190 | 178 | 289 | 244

Top 5 contributors in Oct-Nov 2021:

  1. Paul Wright
  2. Selim Şumlu
  3. Matt Cianfarani
  4. Christophe Villeneuve
  5. Christian Noriega

Product updates

Firefox desktop

  • FX Desktop Version 95 (Dec 8)
    • TCP Roll Out/Continuous onboarding
    • Remove about:ion from Firefox
    • Users don’t get interrupted when closing Firefox
    • Picture in Picture toggle button moved to opposite side of video

Firefox mobile

Other products / Experiments

  • Firefox Monitor/Kanary Pilot (Jan)
  • TCP breakage tracking experiment [unconfirmed]

Shout-outs!

  • Kudos to andmagdo for being an awesome forum contributor!
  • Shoutout to Bithiah, who continues to be an awesome contributor on the forum and is now also helping us with Social Support. Thank you so much for your support!
  • Shoutouts to Marcelo, Valery, Daisuke, and Krzysztof for the help with the Windows Store reviews!
  • Thanks for the feedback to everyone involved in the Firefox Suggest contributor thread. Special shoutout to Alice!
  • And thanks to all the forum folks who helped with the FVD incident last week. Thank you so much for being there for Firefox users! Special shoutout to Paul for collaborating with the AMO team and raising the flag to them when it happened.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

This Week In Rust: This Week in Rust 419

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is poem-openapi, a framework to implement OpenAPI services.

llogiq is very pleased with his suggestion.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

Ockam

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

244 pull requests were merged in the last week

Rust Compiler Performance Triage

Many changes landed this week, but overall the result was an improvement on multiple benchmarks, thanks to a number of pull requests dedicated to optimizing certain patterns. We are still seeing a large number of spurious changes due to rustc-perf#1105, which has yet to be addressed.

Triage done by @simulacrum. Revision range: 22c2d9d..1c028783

4 Regressions, 4 Improvements, 9 Mixed; 5 of them in rollups. 41 comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered final comment period this week.
Tracking Issues & PRs
New RFCs

Upcoming Events

Rusty Events between 12/01-12/15 🦀

Online
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tangram

Flaps

CoBloX

Globelise

Bionaut Labs

Massa Labs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The design of the safe/unsafe split means that there is an asymmetric trust relationship between Safe and Unsafe Rust. Safe Rust inherently has to trust that any Unsafe Rust it touches has been written correctly. On the other hand, Unsafe Rust cannot trust Safe Rust without care.

As an example, Rust has the PartialOrd and Ord traits to differentiate between types which can "just" be compared, and those that provide a "total" ordering (which basically means that comparison behaves reasonably).

BTreeMap doesn't really make sense for partially-ordered types, and so it requires that its keys implement Ord . However, BTreeMap has Unsafe Rust code inside of its implementation. Because it would be unacceptable for a sloppy Ord implementation (which is Safe to write) to cause Undefined Behavior, the Unsafe code in BTreeMap must be written to be robust against Ord implementations which aren't actually total — even though that's the whole point of requiring Ord .

Gankra, citing the Rustonomicon, on GitHub
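
As a concrete, hypothetical sketch of that last point (ours, not from the Rustonomicon): an inconsistent Ord can be written entirely in Safe Rust, and a BTreeMap keyed on it may misbehave, but its internal unsafe code must still avoid undefined behavior.

use std::cmp::Ordering;
use std::collections::BTreeMap;

#[derive(PartialEq, Eq)]
struct Sloppy(u32);

impl PartialOrd for Sloppy {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for Sloppy {
    // Not a total order: every value claims to be less than every other.
    fn cmp(&self, _other: &Self) -> Ordering {
        Ordering::Less
    }
}

fn main() {
    let mut map = BTreeMap::new();
    for i in 0..100 {
        map.insert(Sloppy(i), i);
    }
    // Lookups may give wrong answers, but must never be memory-unsafe.
    println!("{:?}", map.get(&Sloppy(0)));
}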

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, marriannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Hacks.Mozilla.Org: Hacks Decoded: Seyi Akiwowo, Founder of Glitch

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work.

Meet Seyi Akiwowo (Shay-ee Aki-wo-wo)

[Image: Seyi Akiwowo]

Seyi Akiwowo’s reputation precedes her. Akiwowo is the founder of Glitch, an organization that seeks to end online abuse, and a graduate of the London School of Economics. She’s delivered talks at TED, the European Parliament, the U.N. and more, and was elected a councillor for the Labour Party in East London — the youngest Black woman ever to do so.

We spoke with Seyi over video chat to learn about what drives her, why she does what she does and what she’d be doing if not battling trolls online for a living. All that in this month’s Hacks: Decoded.

Where do we even begin? What do you consider to be the starting point of your story?

I’d start with my love for the internet. When you’re in this anti-troll, human rights space, you’re pitched as anti-tech and anti-change, and it’s simply not true. It’s because I’m such a lover and fan of the internet that I do what I do. I’m part of the ’90s, Microsoft-PC-with-the-big-back-and-a-CD-drive generation.

I had dial-up internet and spent huge amounts of time on MSN and MySpace. There were times I was home alone, my mum out trying to make ends meet, my dad not around, and the computer was my outlet. I adored all of it. The worldwide web was my friend. It was my connection to the rest of the world, all while in a small council flat in East London. 

It was when I appeared in a video that went viral that I learned what it meant to be a black woman online. I realized, “Oh, there are people that don’t know me and do not like me and are telling me that they don’t like me in very violent ways.” So I’d say my journey starts with the love of the internet and innovation.

How does where you’re from influence what you do now?

I went to a very good university but the mix of social classes felt almost like a class war! Going to university was the first time I ever felt othered. I didn’t feel it growing up. I grew up in Newham in East London and I didn’t really realize I was from a poor area. There’s such beauty and safety in that, but there’s also a glass ceiling without you really realizing it.

I grew up with an entrepreneurial spirit and we just made do with what we had and we just were still excellent with the minimum we had. I see how that translates to nowadays. I can really make £1 of funding go far at Glitch! My working-class bargain hunting roots really helped me be frugal with money.

It’s interesting how when you throw a Black woman into the mix, people don’t know how to act. Off the internet but specifically on the internet too.

It is, something I’ve been thinking about a lot is that the internet doesn’t need to be this bad if we just listen to Black women many many years ago. If we think about some of the Black activists or campaigners or even the Black women that were just minding their business but got forced into this issue because of their lived experience, like myself. Folks at Facebook, Twitter, Google who weren’t listening — you have Sydette and Michelle Ferrier. Michelle is a journalist who, after a hate campaign, started Troll Busters to help other journalists and women.

You’ve got Angry Black Lady on Twitter who I remember learning a lot about her experience on social media and she just wasn’t listened to. I think white men not only have the privilege to make things and get a lot of venture capital and raise a lot of money to make these huge products and break and fail. That’s one. But they’re also privileged in their echo chamber bubble that they didn’t have to listen to Black women. And it was only until it started affecting white middle-class upper-class Hollywood that we started paying closer attention to this issue.

You’re saying something a lot of us know. Black women encounter harms on these platforms that a white guy may not necessarily encounter. And yet, most of the leadership at these companies are not black women, they’re mostly white men — a group of folks who may not realize how bad these problems are. Why do you think that is? When will it change?

I really don’t know and I think it’s a conundrum that isn’t unique to the tech space. You see it everywhere. You see it in conversations in policy discussions about domestic abuse or refugees and you do not have the community that faces it the most, in those conversations.

That’s the whole reason I went into politics because decisions were being made about my community that’s predominantly people of colour, we have such a high transient population, one of the most diverse boroughs in the world, and yet the council did not look like its community. It’s a phenomenon that’s existed in so many places and it’s even worse in tech because you’re seeing such the direct harm it’s having tenfold. 

But it’s everywhere. The erasure and the lack of dignity and respect Black folks and people of colour are given. Issues have to become mainstream enough for people to act on them. Black Lives Matter had to become mainstream enough for people to finally listen. It’s a lesson for all of us. How do we make sure there is someone in the room who is, in relative terms, from a more minoritized community than you?

How do we make sure that we’re allies offline and online? How are we making sure we’re building community and sharing that legacy and our knowledge and our playbooks and our capital? So that more people from minoritized communities come together.

What’s been the most challenging thing about founding and running Glitch?

When you’re a Black founder CEO in a predominantly white charity sector and tech sector, things are just different. I’ve taken meetings where it’s supposed to be a prospective funding meeting about our work and before we can even give the pitch the first thing the prospective funder says is, “If Black lives really matter—” 

[raises eyebrows, confusedly]

Exactly. So he says, “If Black lives really matter, why are you all not getting the vaccine?” It feels like someone is putting you in an ice bucket or flushing your head down a toilet, comments like that, microaggressions too, are a jarring reminder that you don’t belong here. It’s like you’re finally at the table, and someone has banged your head against that very table as if to say “you’re stupid for thinking you can be here.”

I think those are moments that really bungee-cord pull you back to reality, That’s what’s really tough. Really, really, really, tough. And I think I got lost in negative thoughts this summer where I thought, “I don’t belong, I don’t know what I’m doing. This privilege of being CEO — how do I use it?” and overall just a massive loss of confidence. And everything. And I think that’s been really tough.

Wow. I’m still stuck on this “If Black Lives really matter—” guy. 

Seyi, what did you want to be when you grew up? Trolls attacked you because you made a viral video online, so you rose to the occasion and fought back. There are folks out there who don’t grapple with trolls, out there living their truth without a care in the world. What did that look like for you? What did you want to be when you grew up?

I wanted to be a dancer. I wanted to be the next Ciara. I wanted to be in Missy Elliot’s videos. I was like, ‘Move over, sis!’

What’s your favorite Ciara song? 

Goodies.

Classic. 

Back to trolls, what’s a topic regarding online abuse that you wish you saw people talk about more?

The topic of social media whistleblowers and how we should be less worried about making Facebook and Twitter look bad and more about holding them accountable. More specifically, what does this do in the way of changing the system? We shouldn’t have to keep relying on brave individuals who generally tend to be women and women of colour who really put themselves out on the line.

I don’t want this to be the trend where we get small bits of reform. We need to have the media holding companies to account, not looking to make Mark Zuckerberg the bad guy because then the narrative becomes ‘when he leaves everything will be sorted out’ and that is not the case.

Seyi, you have the longest resume I’ve ever seen in my life. What motivates you to keep going?

I’m not, I don’t think I want to “keep going” anymore. I grew my organization by 50% in terms of income and more in terms of staff and diversified our income streams before we hit the two-year mark — during a pandemic! I’m ready to rest, I’m ready to sleep more, I’m ready to do work that is still great with minimum viable effort. That’s the sweet spot I’m looking for. 

You can keep up with Seyi and Glitch’s work right here and support their special Christmas fundraiser for a safe internet here.

The post Hacks Decoded: Seyi Akiwowo, Founder of Glitch appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Performance Blog: Updates to Warm Page Load Tests

Background

We have recently begun the process of updating our warm page load tests to be more representative of real user behavior.

Cold page load is defined as the first page load, just after the initial startup of the browser. Warm page load is any load of a page after the first, once the cache has been populated with some data for the page, i.e. the cache is “warmed up”. The results of these tests can be found on the AreWeFastYet dashboard, also referred to as AWFY. Below is an example of one of our AWFY graphs:

[Image: an AreWeFastYet graph]

Our page load tests currently load the test URL 25 times, and navigate to about:blank in between each iteration.

This is not realistic user behavior, however. Most users will never navigate to about:blank before loading a new URL, and rarely reload the same URL again and again (unless perhaps waiting for tickets to go on sale 🎟).

Why These Changes?

Project Fission is Mozilla’s implementation of site isolation in Firefox.

When Fission is enabled, each site is run in a different process. Loading a sub-page of the same domain will not destroy the process, but navigating to about:blank may destroy it. With the bfcache (back-forward cache) in Fission, the old page may be kept in a frozen state for a while, but not every page can enter the bfcache; pages using unload handlers or WebRTC, for example, cannot.

For these reasons, we are currently updating these tests to navigate to another page on the same site between each iteration. This is more representative of user behavior: load a site, navigate to another page on that site, navigate back to the home page, and repeat. The process termination when navigating to about:blank has been temporarily disabled until all tests have been updated.

Issues Encountered

At the time of writing this post, the update process is around 60% complete: 19 of 33 tests have been updated. We encountered unexpected errors while re-recording several of the tests.

As an example, amazon is the only test we run on mozilla-central with the profiler enabled. (While developers have the ability to run any test with the profiler enabled, we have one site running regularly to detect issues related to this functionality.)

When running the test with a secondary URL and the profiler enabled, the browser successfully loads the secondary URL but crashes when it navigates back to the test URL.

Mitmproxy is the third-party software we use to record page loads. It has recently been updated from version 6 to 7. Several of the issues we encountered when testing with a secondary URL under mitmproxy 6 appear to be resolved under mitmproxy 7.

Going Forward

In the future, we plan to continue making our tests more representative of how users behave. Greg Mierzwinski has begun working on this with his responsiveness tests.

Thanks for reading! For more information on the update to mitmproxy 7, read my earlier post Upgrading Page Load Tests to Use Mitmproxy 7.

/kimberlythegeek

Related Links


Mozilla Privacy Blog: Mozilla files comments on UK Data Protection Consultation

Mozilla recently submitted its comments to a public consultation on reforming the UK’s data protection regime launched by the UK Department for Digital, Culture, Media & Sport. With the public consultation, titled ‘Data: A New Direction’, the UK government set out to re-evaluate the UK’s approach to data protection now that it is no longer bound by the EU’s General Data Protection Regulation (GDPR). We took this opportunity to share our thoughts on data stewardship and the role effective regulation can play in addressing the lopsided power dynamics between large data collectors and users.

For Mozilla, privacy is not optional. It is an integral aspect of our Manifesto, which states that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. This is why privacy is at the core of our product work and why we have long promoted robust data protection in our policy and advocacy work. Further, Mozilla’s Data Futures Lab is exploring alternative approaches to data governance and promoting data stewardship through original research and support to builders.

Our response to the consultation focused on the following themes and recommendations:

  • Data protection and individuals’ control over their data should remain the cornerstones of new legislation: Data privacy should be the bedrock of any law promoting data sharing and increased processing. In principle, the control over their data should lie with data subjects. The key underlying principles of any data protection regulation should include informing and empowering consumers, strong security, and limiting data collection to what is necessary and delivers value.
  • Alternative models of data governance can help shift power: Alternative data governance is a nascent field but has the potential to shift control and value creation back to data subjects and communities. However, considerable work will need to be done to ensure that they don’t duplicate the existing systemic problems of today. In light of this, due attention needs to be paid to several important considerations: consent as the basis for data stewardship; robust security; trust in new governance models; being mindful of legal context and accountability; transparency and notice; and inclusiveness to rectify existing digital inequalities.
  • Collective rights could complement individual data rights: Individual data rights can be a means to correct harms and power asymmetries, but can also fail to account for collective harms where data does not only concern one person but a group of individuals. New legislation should therefore take an expanded account of collective interests and provide mechanisms to address such harms.
  • Data sharing is best encouraged via incentives and legal protections: Public authorities should create incentives for and enable data sharing. In doing so, they should always ensure that individuals’ privacy and agency over their data is protected while preventing government abuse of these powers.

We are looking forward to working with regulators (both in the UK and beyond) as they revise their data protection framework over the coming months, especially around the important issue of data stewardship.

The post Mozilla files comments on UK Data Protection Consultation appeared first on Open Policy & Advocacy.

Mozilla Performance Blog: Performance Sheriff Newsletter (October 2021)

In October there were 303 alerts generated, resulting in 45 regression bugs being filed on average 5.2 days after the regressing change landed.

Welcome to the October 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 2 days
  • 71% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 3 days
  • 79% of valid regressions were associated with bugs within 5 days

[Image: Sheriffing Efficiency (October 2021)]

We had a huge increase in alerts during October, which was primarily due to changes in our tooling. Whenever we re-record our page load tests we need to review and accept a new baseline. Last month we made improvements to how we measure warm page load, and this resulted in an increase in alerts. We’ll likely see this again as we migrate to the latest version of mitmproxy (our server replay tool).

Summary of alerts

Each month we’ll highlight the regressions and improvements found.

Note that whilst we usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst we believe these metrics to be accurate at the time of writing, some of them may change over time.

We would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for October can be found here (for those with access).

Mozilla Privacy BlogMozilla reacts to new EU draft law on political advertising online

The European Commission has just published its draft regulation on the transparency of political advertising online. The draft law is an important step towards increasing the resilience of European democracies for the digital age. Below we give our preliminary reaction to the new rules.

We’ve long championed a healthier ecosystem for political advertising around the world, whether it’s by pushing for stronger commitments in the EU Code of Practice on Disinformation; uncovering the risks associated with undisclosed political influencer advertising on TikTok; supporting efforts to limit political microtargeting; or pushing platforms to effectively implement their Terms of Service during electoral periods. We’re glad to see that in its draft law the European Commission has taken on board many of our insights and recommendations, and those of our allies in the policy community.

Reacting to the publication of the EU Political Advertising regulation, Owen Bennett, Senior Policy Manager at Mozilla, said:

Political advertising is a crucial part of democratic discourse, and the means by which it is designed, delivered, and consumed has been radically transformed by digital technology. While that transformation has brought new opportunities for civic engagement and pluralism, we have seen too many examples around the world of how online political advertising can be a vector for disinformation; electoral interference; and a range of other societal harms.

We’re glad to see the EU respond to this challenge and set out new rules of the road. The draft law complements the platform accountability vision of the DSA, and it doesn’t shirk from tackling novel forms of paid online influence and the risks associated with amplification and microtargeting.

We look forward to working with lawmakers in the European Parliament and EU Council to ensure the final law increases trust in political advertising and enhances the resilience of democracy in the digital age.

The post Mozilla reacts to new EU draft law on political advertising online appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 418

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is rustc_codegen_nvvm, a rustc codegen backend that targets NVIDIA's libnvvm CUDA library.

Thanks to troiganto for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

Artichoke

Ockam

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

284 pull requests were merged in the last week

Rust Compiler Performance Triage

This week, there were a number of cases where the incr-unchanged variants of inflate went up or down by 5% to 6%; we believe these are instances of increased noise in benchmarks documented on rustc-perf#1105. I was tempted to remove these from the report, but it’s non-trivial to reconstruct the report "as if" some benchmark were omitted.

Otherwise, there were some nice wins for performance. For example, PR #90996 more than halved the time to document builds of diesel by revising how we hash ObligationCauseData. If anyone is interested, it might be good to follow up on the effects of PR #90352, "Simplify for loop desugar", where we have hypothesized that the increased compilation time is due to more LLVM optimizations being applied.

Triage done by @pnkfelix. Revision range: 934624fe..22c2d9dd

1 Regression, 3 Improvements, 8 Mixed; 3 of them in rollups

34 comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered final comment period this week.
Tracking Issues & PRs
New RFCs

Upcoming Events

Rusty Events between 11/24-12/08 🦀

Online
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

StackBlitz

Elektron

tangram

Kraken

Maasa Labs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

On the topic of reframing UB, I was reminded of an article about the mechanics of oaths and vows in historical cultures.

When a programmer writes get_unchecked, we can imagine them wanting to promise the compiler that they uphold its preconditions. But since the compiler is normally not so trusting of unproven assertions, the programmer swears an oath that their argument is in bounds.

The compiler, seeing such a solemn commitment, treats the programmer's word as true and optimizes accordingly. The compiler is so thoroughly convinced that it never even entertains the possibility of doubting the programmer's oath.

But if the programmer has sworn falsely, then they might well suffer divine retribution in the form of nasal demons — or worse, subtly baffling program behaviour.

/u/scook0 on /r/rust

Thanks to G. Thorondorsen for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, marriannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Cameron KaiserDo you run Void on your Power Mac?

If so, heads up, because builds for your configuration may be ending soon (along with Void PPC on big-endian platforms generally). If you want this to continue, and you've got the interest, chops or gumption, you can help by becoming a maintainer -- take a look at the Void PPC Github. Most of you are probably running the glibc variant, which will end by January 2023, but if you are running musl-based packages those repos will be taken down by the end of 2021. Don't whine to the maintainer, please: the current matrix is four different repos which all require their own maintenance and builds. Even just 32-bit glibc would probably benefit a whole lot of people and yourself. If this is important to you, there's no time like the present to step up.

The Talospace Project51,552 JavaScript tests can't be wrong

Yeah, so about that OpenPOWER Minimum Viable Product JavaScript JIT for Firefox. This happened (all timings from an unoptimized debug build on my dual-8 Talos II with -j24):

% ./mach jstests --args "--no-ion --no-baseline --blinterp-eager --regexp-warmup-threshold=0" -F -j24

[43359|    0|    0|  614] 100% ======================================>| 529.7s
PASS
% ./mach jstests --args "--no-ion --no-baseline" -F -j24
[43359|    0|    0|  614] 100% ======================================>| 499.0s
PASS
% js/src/jit-test/jit_test.py --args "--no-ion --no-baseline --blinterp-eager --regexp-warmup-threshold=0" -f -j24 obj/dist/bin/js
[8193|   0|   0|   0] 100% ==========================================>| 132.3s
PASSED ALL
% js/src/jit-test/jit_test.py --args "--no-ion --no-baseline" -f -j24 obj/dist/bin/js
[8193|   0|   0|   0] 100% ==========================================>| 133.3s
PASSED ALL

That's a wrap, folks: the MVP, defined as Baseline Interpreter with irregexp and Wasm support for little-endian POWER9, is now officially V. This is the first and lowest of the JIT tiers, but is already a significant improvement; the JavaScript conformance suite executed using the same interpreter with --no-ion --no-baseline --no-blinterp --no-native-regexp took 762.4 seconds (1.53x as long) and one test timed out completely. An optimized build would be even faster.

Currently the code generator makes heavy use of POWER9-specific instructions, as well as VSX to make efficient use of the FPU. There are secondary goals of little-endian POWER8 and big-endian support (including pre-OpenPOWER so your G5 can play too), but these weren't necessary for the MVP, and we'd need someone actually willing to maintain those since I don't run Linux on my G5 or my POWER6 and I don't run any of my OpenPOWER systems big. While we welcome patches for them, they won't hold up primary support for POWER9 little-endian, which is currently the only "tier 1" platform. I note parenthetically this should also work on LE Power10 but as a matter of policy I'm not going to allow any special support for the architecture until IBM gets off their corporate rear end and actually releases the firmware source code. No free work for a chip that isn't!

You should be able to build a JIT-enabled Firefox 86 off of what's in the Github tree now, but my current goal is to pull it up to 91ESR so that it can be issued as patches against a stable branch of Firefox. These patches will be part of my ongoing future status updates for Firefox on OpenPOWER (yes, you'll need to build it yourself, though I'm pondering setting up a Fedora copr at some point). The next phase will be getting Baseline Compiler passing everything, which should be largely done already because of the existing Baseline Interpreter and Wasm support, and then the final Ion JIT stage, which still needs a lot of work. We'll most likely set up a separate tree for it so you can help (ahem). No promises right now but I'd like to see the completed JIT reach the Firefox source tree in time for the next ESR, which is Firefox 102. That's more than you can say for Chrome/Chromium, which so far has refused to accept OpenPOWER-specific work at all.

Mike HommeyAnnouncing git-cinnabar 0.5.8

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It lets you clone, pull, and push from/to remote Mercurial repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.7?

  • Updated git to 2.34.0 for the helper.
  • Python 3.5 and newer are now officially supported. Git-cinnabar will try to use the python3 program by default, but will fall back to python2.7 if that’s where the Mercurial libraries are available. It is possible to pick a specific python with the GIT_CINNABAR_PYTHON environment variable.
  • Fixed compatibility with Mercurial 5.8 and newer.
  • The prebuilt binaries are now optimized on arm64 macOS and Windows.
  • git cinnabar download now properly returns an error code when failing to extract the prebuilt binaries.
  • Pushing to a non-empty Mercurial repository without having pulled at least once from it is now prevented.
  • Replaced the nagging about fsck with a smaller check always happening after pulling.
  • Fail earlier on git fetch hg::url <sha1> (it would properly fetch the Mercurial changeset and its ancestors, but git would fail at the end because the sha1 is not a git sha1; use git cinnabar fetch instead)
  • Minor fixes.

Mozilla Performance BlogUpgrading Page Load Tests to Use Mitmproxy 7

Background

mitmproxy is a third-party tool that we use to record and play back page loads in Firefox to detect performance regressions.

The page load is “recorded”: the page is loaded while mitmproxy is running, and the proxy logs all requests and responses made and saves them to a file.

The page load can then be played back from this file; each request and its response (together referred to as a “flow”) made during the recording is played back without accessing the live site.

Recorded page load tests are valuable for detecting performance regressions in Firefox because they are not dependent on changes to the site we are testing. If we tested using only live sites, it would be much more difficult to tell if a regression was caused by changes in Firefox or changes in the site being tested.

So, as we run these tests over time, we have a history of how Firefox performs when replaying the same recording again and again, helping us to detect performance regressions that may be caused by recent changes to our code base.

Mitmproxy 7 Integration

Recently mitmproxy was updated from version 6 to version 7. Several new features and breaking changes were introduced, so this required a bit more work to get our tests working with mitmproxy 7.

The most notable change is interoperability between HTTP/1 and HTTP/2. In earlier versions of mitmproxy this was not supported, so engineers on the performance team had to do some hacking to be able to record and playback tests that used both HTTP/1 and HTTP/2. Prior to this change, mitmproxy would open a live connection to determine the protocol, which we want to avoid.

We also made a number of changes to the way that mitmproxy performs when recording and playing back.

http_protocol_extractor.py detected the HTTP protocol being used when recording and saved this to a file. This information was used when playing back to set the appropriate protocol in the playback responses.

inject-deterministic.py was (and still is) used when recording page loads to avoid errors caused by non-deterministic JavaScript. (For example, if the name of a resource, such as an image, is based on the date and time the page is loaded, this can cause the image to not load when the recording is played back.)

The biggest changes were made in alternate-server-replay.py. This file was copied and modified from an early version of mitmproxy’s server playback addon. In this file we return only the most recent flow, and will return a 404 instead of killing the flow for any request that does not have a matching response in the recording.

We have been using that file (with some modifications over time) for playing back recordings since mitmproxy version 2.0.2 (🤯) so it withstood the test of time through several versions of mitmproxy.

But, alas, all things must come to an end. This script is not compatible with mitmproxy 7, so I copied the latest version of mitmproxy’s server playback addon and made a few small changes to achieve our desired behaviors.

When playing back recordings, we only want to return the most recent flow. This enables us to record pages requiring log-in. The engineer recording the new page load has to manually log in to the site, but we only want to test the logged-in site when playing back, not the login page.

Netflix without flow order reversed

Netflix with flow order reversed shows the logged-in session

In previous versions of mitmproxy, we accomplished this by returning only the most recent flow in the recording file. In mitmproxy 7 I was able to achieve the same behavior by adding a new option, server_replay_order_reversed, which reverses the flow order if set to true.

With the flow order reversed, I then used mitmproxy’s existing option, server_replay_nopop, so that the flow was not removed after playing, and could be replayed multiple times. Without this option, the content that was loaded was inconsistent between page loads.

The last change I made was to how the option server_replay_kill_extra behaves. When this option is set to true, mitmproxy will kill any requests that do not exist in the recording file. As mentioned above, doing so can leave the browser still waiting for a response, causing the test to fail. Instead, we return a 404 when kill_extra is set, allowing the browser to resolve these requests cleanly.
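
To make these behaviors concrete, below is a minimal sketch of a playback addon in the spirit of the changes described above, assuming mitmproxy 7’s addon API. It is not our actual server playback addon (which matches flows far more carefully); the recording file name and the URL-only matching are illustrative assumptions.

from mitmproxy import http, io

class SimplePlayback:
    """Toy replay addon: newest flow wins, unmatched requests get a 404."""

    def __init__(self, path, reverse_order=True):
        # Load the recorded flows, keeping only HTTP flows that have a response.
        with open(path, "rb") as f:
            self.flows = [fl for fl in io.FlowReader(f).stream()
                          if isinstance(fl, http.HTTPFlow) and fl.response]
        if reverse_order:
            # The effect of server_replay_order_reversed: the most recently
            # recorded flow matches first, so the logged-in version of a page
            # wins over the login page recorded earlier.
            self.flows.reverse()

    def request(self, flow: http.HTTPFlow) -> None:
        for recorded in self.flows:
            # The real addon also compares method, host, headers, etc.
            if recorded.request.url == flow.request.url:
                # Leave the flow in the list (like server_replay_nopop) so the
                # same response can be served on every page load.
                flow.response = recorded.response.copy()
                return
        # Resolve unknown requests cleanly with a 404 instead of killing them,
        # which would leave the browser waiting for a response.
        flow.response = http.Response.make(404, b"", {"content-type": "text/plain"})

addons = [SimplePlayback("recording.mp")]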

Going Forward

We plan to contribute some of this work back to mitmproxy, such as the option to reverse the flow order, and returning a 404 instead of killing unknown/extra requests.

While there are still some outstanding issues regarding the update to mitmproxy 7, as of this writing, mitmproxy 7 and a recording of amazon using mitmproxy 7 have been landed to autoland.

(And there was much rejoicing)

 


Thanks for reading! For more information on our page load tests and the update to mitmproxy 7, check out the resources below.

/kimberlythegeek

Related Links

 

Firefox NightlyThese Weeks in Firefox: Issue 104

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • Welcome Mandy Cheang [:mcheang] to the Search team!

For contributions from November 2nd to November 16th 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug

  • Clinton
  • Evgenia Kotovich
  • John Bieling (:TbSync)
  • Jonas Jenwald [:Snuffleupagus]
  • raquelvargas@gmail.com

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Emilio fixed a bug that was preventing users from interacting with browserAction popups using touch screens – Bug 1696786 (regressed in Firefox 86 by enabling APZ in popups loading a document in a child process, Bug 1493208)
  • Fixed a bug in child process initialization that was preventing permissions from being correctly transmitted from the parent to the child process once a non-http/https blob url is created and new processes spawned – Bug 1738713
    • NOTE: technically this wasn’t an extension-specific issue, but it was easy enough to trigger by extensions using the “browser.contentScripts.register” API method and so it happened to be reported to us as a WebExtensions issue
  • As part of the ongoing work related to the “Manifest Version 3 background service worker”, we have landed some more changes to notify the WebExtensions internals when a background service worker is being spawned, loaded and destroyed – Bug 1728326 (still only enabled on Nightly along with other parts of the “MV3 background service workers” internals)

Fluent

  • flod has written up some excellent documentation on localization best practices that we should all check out.
    • Juicy tidbit: did you know that “Save document?” in English becomes “A bheil thu airson an sgrìobhainn a shàbhaladh” in Gaelic? So be careful with static dimensions for things containing text!

Form Autofill

High-Contrast Mode (MSU Capstone project)

Desktop Integrations (Installer & Updater)

Lint, Docs and Workflow

Password Manager

Performance

Screenshots

 

Firefox Add-on ReviewsThe magic of mouse gestures

Mouse gestures are mouse movement and key combinations that give you the power to customize the way you maneuver around web pages. If your online work requires a fair amount of distinct, repetitive activity—things like rapid page scrolling, opening links in background tabs, closing batches of open tabs, etc.—the right mouse gesture can make a major impact on your task efficiency. Here are a few browser extensions that provide excellent mouse gesture features…

Gesturefy

With 80+ predefined mouse gestures at your disposal, Gesturefy packs serious customization potential. Yet mouse gesture beginners won’t feel overwhelmed. The extension’s intuitive controls make it a great choice for novices and mouse gesture veterans alike.

Localized in 30+ languages, Gesturefy handles all the basics of mouse gesturing beautifully—all the mouse wheel and click variations you could want, plus advanced capabilities like “rocker” gestures (hold one mouse button while clicking the other) and even a way to customize the color and look of your tracing streaks. 

Foxy Gestures

A top notch mouse gesture extension with all of the expected features; what really sets FoxyGestures apart is its uniquely refined user interface.

FoxyGestures makes it easy to scan all the gestures available to you on a single Options page and allows you to design the gesture you want for any assignable action. You can even access chord gestures (pressing two or more mouse buttons in combination).

Foxy Gestures makes it easy to design and store your own custom mouse gestures.

ScrollAnywhere

Once you try ScrollAnywhere you may never go back to using a scrollbar ever again. This isn’t a mouse gesture extension with a bunch of standard gestures baked in, but rather a tool focused on giving you the power to move up and down pages with extreme speed and ease using just your mouse. 

With the press/hold of a single mouse button and the up or down movement of your mouse, you’re free to move around pages wherever your mouse happens to be. A remarkable feature called Momentum lets you flick up and down pages like you would with your finger on a smartphone (i.e. the speed of the page scrolling will correlate to the intensity of your mouse movement). Here’s a short video showing ScrollAnywhere in action.

We hope a mouse gesture extension helps you become a more efficient online task master! Feel free to explore more productivity super boosters on addons.mozilla.org

Niko MatsakisRustc Reading Club, Take 2

Wow! The response to the last Rustc Reading Club was overwhelming – literally! We maxed out the number of potential zoom attendees and I couldn’t even join the call! It’s clear that there’s a lot of demand here, which is great. We’ve decided to take another stab at running the Rustc Reading Club, but we’re going to try it a bit differently this time. We’re going to start by selecting a smaller group to do it a few times and see how it goes, and then decide how to scale up.

The ask

Here is what we want from you. If you are interested in the Rustc Reading Club, sign up on the form below!

Rustc reading club signup form

Start small…

As Doc Jones announced in her post, we’re going to hold our second meeting on December 2, 2021 at 12PM EST (see in your timezone). Read her post for all the details on how that’s going to work! To avoid a repeat of last time, this meeting will be invite only – we’re going to “hand select” about 10-15 people from the folks who sign up, looking for a range of experience and interests. The reason for this is that we want to try out the idea with a smaller group and see how it goes.

…and scale!

Presuming the club is a success, we would love to have more active clubs going on. My expectation is that we will have a number of rustc reading clubs of different kinds and flavors – for example, a recorded club, or a club that is held on Zulip instead of Zoom, or clubs in other languages.1 As we try out new ideas, we’ll make sure to reach out to people who signed up on the Google form, so please do sign up if you are interested!


  1. In fact, if you’re really excited, you don’t need to wait for us – just create a zoom room and invite your friends to read some code! Or leave a message in #rustc-reading-club on zulip, I bet you’d find some takers. 

Karl DubostBrowser regression and tools

Sometimes a new release of a nightly version of a browser creates what we call a regression. How do we find out what exactly broke the code?

Illustration of surgery tools.

What is a regression?

In simplified terms, a regression happens when code that used to work stops working properly after a specific release. For websites, a webpage would stop behaving correctly after updating to a new version of the browser.

Something was working with commit 𝑛 of the browser and it stopped working with commit 𝑛+1.

How do we try to catch regressions before production release?

All browsers have different versions. The production release is the one that most people are using. The one which is advertised for people to download on websites and stores. But there are also beta versions and nightly versions.

The nightly version is a fresh working build with the latest modifications of the day. It's not considered reliable for your main usage. Even if browser implementers try hard to keep them stable, they may break. They may even damage your browser profile. Use them only if you understand the consequences.

How do we find out the exact commit which has broken the browser?

You are now using commit 𝑛+𝑚, which is broken. You want to find the commit 𝑛+1 that broke the code.

You start bisecting. Let's say the breakage happened somewhere between versions 10 and 20 of the code.

  1. Verify this is working with version 10.
  2. Check this is not working with version 20.
  3. Split the range in two and pick 15. Does the bug reproduce? If yes, the issue is between 10 and 15; if no, the issue is between 16 and 20.
  4. Take the new range, then rinse and repeat until you narrow it down to a unique version (the sketch below shows this loop in code).
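
In code, that loop is just a binary search over build numbers. Here is a minimal sketch of the idea in Python, using a hypothetical bug_reproduces(n) helper that would install build n and report whether the bug shows up:

def find_first_broken_build(good, bad, bug_reproduces):
    """Return the first build number where the bug appears.

    good is a build known to work, bad is a build known to be broken, and
    bug_reproduces(n) is a hypothetical helper that installs build n and
    checks whether the bug reproduces there.
    """
    while bad - good > 1:
        mid = (good + bad) // 2
        if bug_reproduces(mid):
            bad = mid    # the regression is in (good, mid]
        else:
            good = mid   # the regression is in (mid, bad]
    return bad

# With the example above (builds 10 to 20, bug introduced in build 16):
print(find_first_broken_build(10, 20, lambda n: n >= 16))  # prints 16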

That can become time-consuming, but there are tools to help with this task. The tool downloads the nightly builds being tested and helps figure out which specific commit broke the code.

Bisection tools

  • Chrome
  • Firefox. Probably the easiest of the three to use: well documented, and there is even a version with a GUI
  • Safari

Your turn! Next time you find a broken Web page (one that was previously working), follow these steps:

  1. Open a bug
  2. Run a regression tool
  3. Give the precise commit where the regression might have happened.

This will greatly speed up a potential fix.

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.

Otsukare!

This Week In RustThis Week in Rust 417

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is starship, a fast featureful customizable UNIX terminal prompt.

Thanks to matchai for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

273 pull requests were merged in the last week

Rust Compiler Performance Triage

A large amount of noise in the comparisons this week, likely due to new probabilistic query hash verification increasing likelihood of changes in each benchmark; solutions are being tracked in rustc-perf#1105.

Otherwise, though, the week largely amounted to a neutral one for performance. There were some regressions, particularly in doc builds, as a result of the addition of portable SIMD. These are, relatively speaking, minor and primarily impact small crates.

Triage done by @simulacrum. Revision range: eee8b9c7..934624f

5 Regressions, 2 Improvements, 6 Mixed; 2 of them in rollups

41 comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Rusty Events between 11/17-12/01 🦀

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

If a normal add is a waffle iron, SIMD add is a double or quadruple waffle iron. You can make 2 or 4 or more waffles at the same time.

In case of waffles it would be called SIMW: Single Iron, Multiple Waffles.

It's not multithreading - because you open and close the waffle iron for all the waffles at the same time.

/u/EarthyFeet on /r/rust

Editors note: Do yourself a favor, click the link and read the whole thread, it's pure gold (chef's kiss).

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, marriannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Support.Mozilla.OrgIntroducing Firefox Relay Premium

If you’re a fan of Firefox Relay, you may have been waiting for the day when you can add more aliases. After a long wait, you can now celebrate because we’re launching Firefox Relay Premium today.

As a refresher, Firefox Relay is a free service available at relay.firefox.com where you’ll get five email aliases to use whenever you sign up for an online account. Today, Firefox Relay is launching as a premium subscription where you can unlock unlimited aliases and additional features.

What will users get when they subscribe to Firefox Relay Premium?

Users who subscribe to Firefox Relay Premium will be able to create an unlimited number of aliases, add a custom subdomain, reply to forwarded emails, and get Premium support. To learn more about Firefox Relay Premium, please read this blog post.

What does this mean for the SUMO community? 

As of today, we have added Firefox Relay to the list of products in SUMO. Users will be able to browse through the Knowledge Base articles and submit their questions directly to our support agent. There’s no traditional AAQ for Firefox Relay Premium. You’re welcome to reply to Firefox Relay questions on social media or the Firefox forum if you’re confident with your answer. Otherwise, please escalate the question to the admins (see here for forum, or here for social support). As contributors, we invite you to help improve our Knowledge Base articles and localize them.

This is a monumental moment for Mozilla in our journey to add more Premium product offerings, and we’re super excited about this release. If you have any questions, please reach out to any of the admins.

 

Keep rocking the helpful web,

Kiki

The Mozilla BlogIntroducing Firefox Relay Premium, allowing more aliases to protect your identity from spammers

Today, Firefox Relay, a privacy-first and free product that hides your real email address to help protect your identity, is available with a new paid Premium service offering. The release comes just in time for the holiday season to help spare your inbox from being inundated with emails from e-commerce sites, especially those sites where you may shop or visit a few times a year.

In real life you have a phone number where family and friends can call and reach out to you directly. You likely have it memorized by heart and it’s something you’ve had for years. In your online life, your email address is like your phone number: a personal and unique identifier. Your email address has become the way we log in and access almost every website, app, newsletter, and hundreds of other interactions we have online every single day. That means your email address is in the hands of hundreds, if not thousands, of third parties. As you think more about your email address and the places it’s being used, Firefox Relay can help protect and limit where it’s being shared.

Firefox Relay is a free service available at relay.firefox.com where you’ll get five email aliases to use whenever you sign up for an online account. Over the last year, the team has been experimenting with Firefox Relay, a smart, easy solution that can preserve the privacy of your email address. Firefox Relay was initially rolled out to a beta phase for early adopters who like to test new products. Feedback from those beta testers helped us improve the free service and add the new paid Premium service that we’re introducing today.

How Firefox Relay works 

Firefox Relay will send and forward your email messages from your alias email addresses to your primary email address. We do not read or keep any of the content in your messages, and all email messages are deleted after they’re sent and delivered to you. 

With Firefox Relay, you’ll get five free email aliases and support for attachments up to 150 KB. You can sign up for Firefox Relay through our site or download it as an add-on. Additionally, we’ve added the ability for labels to be synced across devices. Labels allow you to add information like an account name or a description so it’ll be easier for you to know which sites you are using the alias for. With this new syncing, you’ll be able to see these labels on all your devices, including mobile.

To bring protection to more people, Firefox Relay will now be available in the following languages: Chinese, Dutch, French, English, German, Greek, Italian, Portuguese, Slovak, Spanish, Swedish, Ukrainian and Welsh.

Here’s how Firefox Relay works:

Step 1: Go to relay.firefox.com

Step 2: Sign in with your Firefox Account or sign up for a Firefox Account (this takes less than two minutes and it’s worth it!)

Step 3: Once you’re signed in you can generate up to five free random email aliases to use. If you need more than five email aliases, you can sign up for a Premium paid service.

Step 4: Then, when you sign up for a new online account you can go to the Firefox Relay dashboard to generate an email alias or you can click the Firefox Relay button that may appear in the login box to use one of those email aliases. Then, Firefox Relay will forward emails from the alias to your real inbox, keeping your actual email address hidden.

Sign up through our website or as an add-on

Want more email address aliases? Try Firefox Relay Premium

During the beta testing phase, we heard from many users who wanted more email address aliases. So, we decided to offer a Premium service where subscribers will receive one subdomain alias to create unlimited email aliases, for example coffeestore@yourdomain.mozmail.com or yourfavoriteshoestore@yourdomain.mozmail.com; a summary dashboard of your email aliases; the option to use your email aliases to reply to emails directly; and customer support through our convenient contact form. Premium subscribers will also get the 150 KB attachment support that is currently available to free subscribers.

For a limited time, we will be offering a very low introductory price of $0.99 a month (available in Canada, United States, United Kingdom, Malaysia, Singapore and New Zealand) and 0.99 EUR/1.00 CHF in Europe (Austria, Belgium, France, Germany, Ireland, the Netherlands, Spain and Switzerland).

A summary dashboard of your email aliases

Thank you Firefox community and beta testers 

We appreciate the thousands of beta testers who participated in the early beta testing phase since we started this journey. It’s their voice and impact that have motivated and inspired us to continue to develop this product. Thanks to their support, we’re happy to graduate the Firefox Relay product and provide a Premium offering. 

To learn more about our other Mozilla products, check out these:

The post Introducing Firefox Relay Premium, allowing more aliases to protect your identity from spammers appeared first on The Mozilla Blog.

Mozilla Performance BlogPerformance Tools Newsletter (Q3 2021)

As the Perf-Tools team, we are responsible for the Firefox Profiler. This newsletter gives an overview of the new features and improvements we made in Q3 2021.

This is our second newsletter; you can find the first one here, which covered the first half of 2021. This time we wanted to focus on Q3 only, so it’s less crowded and the new features and improvements are easier to see. I hope you enjoy the work that we’ve done this quarter.

Let’s get started with the highlights.

Network marker improvements

We have had network markers for a while, with both a network track in the timeline and a network chart in the bottom panel. They worked fine, but lacked some functionality, and we had some correctness issues around the markers; for example, sometimes network markers went missing. We’ve worked to improve both the Firefox Profiler analysis page and the correctness of the values in the back-end.
Here are some of the things that we’ve done:

Highlight network markers in both the timeline and the network chart

Previously, network markers in the timeline and network chart were independent: interacting with one didn’t change the other. After this work, when you hover over a network marker, the same marker is highlighted in the other view. Here’s an example:

 

Clicking a request in the network track selects the right line in the network chart

Previously, clicking on the network track didn’t do anything. With this work, if you click on a network request in the network track in the timeline, it will automatically select that network marker in the network chart. This is helpful for quickly finding the network request you are looking for.

New context menu in the network track

We have a context menu for the network chart in the bottom panel, but we didn’t have one in the timeline network tracks. We’ve now added the context menu to the network tracks, and it can be used the same way. We hope that, together with the other network track work, this will make the network track a lot more useful.

Picture of new network track context menu when you press right click on a network request.

Display network markers in the marker chart

We’ve started to show the network markers in the marker chart as well. It could be helpful when you are also looking for other markers and want to align the network markers with them.

Example that shows network markers in marker chart.

Support for network request cancellations

This is one of the improvements that we made in the back-end to make the data more consistent and accurate. Firefox Profiler didn’t support network request cancellations on service workers before. Now you can see whether a network request was canceled, and when.

Profiling overhead reductions

Reducing the overhead of profiling Firefox sleeping threads

Previously, we were doing some costly operations for sampling even when a thread was idle. With this work, we’ve dramatically reduced the overhead of profiling sleeping threads. I’m not going to get into the details of this improvement since our colleague Gerald Squelart wrote a great blog post about it already. You can take a look at it here if you are curious about the optimization he came up with and the implementation details.

Reducing the overhead of Firefox Profiler recording by reducing mutex lockings

Firefox Profiler has an internal mutex. It is used so that multiple threads can modify the same data without data loss or data races. But this mutex brings some overhead: when two threads need to add a marker at the same time, they both need to acquire the lock, and one has to wait for the other.
With this work, we’ve removed the need for mutexes in lots of places, most importantly at marker recording sites.

Rust API for Firefox Profiler

Firefox Profiler has some APIs for various languages like C++, JavaScript, and Java/Kotlin inside the mozilla-central codebase. We also had some hacks around the Rust codebases, but they were added for each Rust project when they were needed and we had lots of code duplication because of it. Also, they weren’t maintained by the Firefox Profiler team and they were prone to misuse. With this work, we’ve created a canonical Rust API for the Firefox Profiler and removed all the code duplications/hacks around the codebase.

We have three main functionalities with this API:

  1. Registering Rust threads:
    1. With this functionality, you can register a Rust thread, so Firefox Profiler can find it and possibly profile it. It’s good to keep in mind that only registering a thread will not make it appear in the profile data. In addition, you need to add it to the “Threads” filter in about:profiling.
      We had some hacks around the thread registration for Servo and WebRender threads, so we could profile them. But they were duplicated and were using the raw FFI bindings.
  2. Adding stack frame labels:
    1. Stack frame labels are useful for annotating a part of the call stack with a category. The category will appear in the various places on the Firefox Profiler analysis page like timeline, call tree tab, flame graph tab, etc.
  3. Adding profiler markers:
    1. Markers are packets of arbitrary data that are added to a profile by the Firefox code, usually to indicate something important happening at a point in time, or during an interval of time.

We also have documentation about this API in the Firefox source docs. Please take a look at it for more details and examples if you are interested. Also, we are going to be writing a blog post about this API soon. Stay tuned!

Show the eTLD+1 of Isolated Web Content Processes as their track name

The timeline of the Firefox Profiler analysis page is crucial for finding the data we are looking for. Inside the timeline, we have the various processes and threads that Firefox registered during the profiling session. When there are lots of tracks in the timeline, it can be hard to figure out which track we are interested in.

Previously, all the web content processes had a single name, “Web Content” (or “Isolated Web Content” when Fission is enabled). That name doesn’t help when you are trying to pick out a specific tab. It was implemented this way because there was no way to determine the tab URL for a specific “Web Content” process. But with Fission, we know precisely which process belongs to which eTLD+1. After this work, we’ve started to show each process’s eTLD+1 as its track name. This way, it is a lot easier to find the track you are looking for.

Here’s an example of before and after:

Before the change, the profiler timeline showed the not-very-helpful “Isolated Web Content” as the track name.

Before

After the change, the profiler timeline shows the eTLD+1 as the track name, which is more helpful.

After

Linux perf importer improvements

Firefox Profiler has had Linux perf support for quite some time. We have this documentation about how to profile with it and how to import it.
Our contributor, Mark Hansen, made some great improvements to the Linux perf support to make it even better. Here is what he improved:

  • Add support for Linux perf profiles with a header.
    • Firefox Profiler can import the profiles directly when the user records a profile with `perf script --header`. Previously it was giving an error and the header had to be removed.
  • Add categories and colors to our Linux perf importer.
    • In the Firefox Profiler, we have various profiling categories for annotating different parts of the code. For example, we have JavaScript, DOM, Network, etc. For the Linux perf profiles, we didn’t have any specific categories, so all the samples were marked as “Other”. With this work, we now have two categories for kernel and non-kernel native code. Here’s a before and after:
      Before, the graph was all gray; now it is more colorful depending on each stack frame’s category. Also, Mark wrote an awesome blog post about this work. You can find it here.

Support for dhat imports

Firefox Profiler now supports imports of dhat memory profiles. “dhat” is a dynamic heap analysis tool that is part of the Valgrind tool suite. It’s useful for examining how programs use their heap allocations. After this work, all you need to do is drag and drop the dhat memory profile file into the Firefox Profiler, and it will automatically import everything and load it for you.

Other updates

  • Localization of the Firefox Profiler
    • We finished the internationalization work of the Firefox Profiler analysis page in H1 with the help of our Outreachy intern, and we have been working with the l10n team to localize the Firefox Profiler.
      In Q3, we’ve enabled 12 locales in total, and we hope to add more once the locales under development reach a certain completion threshold! Here are the locales that we enabled so far:
      de, el, en-GB, es-CL, ia, it, nl, pt-BR, sv-SE, uk, zh-CN, zh-TW.
      If you want to help translate the Firefox Profiler to your language, you can do that by visiting our project on Pontoon.
  • Compare view shows the profile names as the track names
    • Previously the compare view was only showing Profile 1 and Profile 2 as the profile names. Now, it will display the name if it’s provided in the profile link.
  • Create more compact URLs for profiles with lots of tracks
    • Firefox Profiler keeps most of the data persistent in the URL, so when you share the link with someone else, they will see the same thing as you see. But that brings some challenges. Because there is a lot of data to keep track of, the URL sometimes ends up being really long. We are using bitly to shorten the URLs, so you don’t have to worry about long URLs. But when the URL is too long, bitly fails to shorten it and you are stuck with the long URL. With this work, we’ve made the URLs more compact, to ensure that we will never fail to shorten them.
  • Updated “Expand all” menu item to include its shortcut key
    • We have an “Expand all” menu item in the call tree context menu. Its shortcut key is “*”, but that wasn’t really visible before. Now, we show the shortcut on the right side of the menu item, so you can learn it just by looking at the context menu.
      This is implemented by our contributor Duncan Bain. Thanks, Duncan!

"Expand all" context menu item shows the shortcut now.

  • When a window is closed, its screenshot will stop showing up in the timeline at the point of window destruction.
      • Previously, screenshots kept showing as if a window were still open even after it had already been destroyed. Now, we know when a window is destroyed and stop showing its screenshots past that point.

Contributors in Q3 2021

Lots of awesome people contributed to our codebases, both on GitHub and in mozilla-central. We are thankful to all of them! Here’s a list of people who contributed to Firefox Profiler code:

  • Duncan Bain
  • Florian Quèze
  • Gerald Squelart
  • Greg Tatum
  • Julien Wajsberg
  • Mark Hansen
  • Markus Stange
  • Michael Comella
  • Nadinda Rachmat
  • Nazım Can Altınova

And here’s a list of contributors who helped on the localization of Firefox Profiler:

  • Brazilian Portuguese: Marcelo Ghelman
  • British English: Ian Neal
  • Chilean Spanish: ravmn
  • Chinese: Gardenia Liu, hyperlwk, 你我皆凡人
  • Dutch: Mark Heijl
  • German: Michael Köhler
  • Greek: Jim Spentzos
  • Interlingua: Martijn Dekker, Melo46
  • Italian: Francesco Lodolo
  • Kabyle: Selyan Slimane Amiri, ZiriSut
  • Swedish: Andreas Pettersson, Luna Jernberg, Peter Kihlstedt
  • Taiwanese Chinese: Pin-guang Chen
  • Ukrainian: Artem Polivanchuk, Lobodzets, Іhor Hordiichuk

Thanks a lot!

Conclusion

Thanks for reading! If you have any questions or feedback, please feel free to reach out to me on Matrix (@canova:mozilla.org). You can also reach out to our team on the Firefox Profiler channel on Matrix (#profiler:mozilla.org).

If you profiled something and are puzzled by the profile you captured, we also have the Joy of Profiling (#joy-of-profiling:mozilla.org) channel where people share their profiles and get help from people who are more familiar with the Firefox Profiler. In addition to that, we have the Joy of Profiling Open Sessions, where some Firefox Profiler and Performance engineers gather on a Zoom call to answer questions or analyze the profiles you captured. They usually happen every Monday, and you can follow the “Performance Office Hours” calendar to learn more.

Niko MatsakisCTCFT 2021-11-22 Agenda

The next “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place next Monday, on 2021-11-22 at 11am US Eastern Time (click to see in your time zone). Note that this is a new time: we are experimenting with rotating in an earlier time that occurs during the European workday. This post covers the agenda. You’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Agenda

This meeting we’ve invited some of the people working to integrate Rust into the Linux kernel to come and speak. We’ve asked them to give us a feel for how the integration works and help identify those places where the experience is rough. The expectation is that we can use this feedback as an input when deciding what work to pursue and what features to prioritize for stabilization.

  • (5 min) Opening remarks 👋 (nikomatsakis)
  • (40 min) Rust for Linux (ojeda, alex, wedsonaf)
    • The Rust for Linux project is adding Rust support to the Linux kernel. While it is still the early days, there are some areas of the Rust language, library, and tooling where the Rust project might be able to help out - for instance, via stabilization of features, suggesting ways to tackle particular problems, and more. This talk will walk through the issues found, along with examples where applicable.
  • (5 min) Closing (nikomatsakis)

Afterwards: Social Hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Eitan IsaacsonspeechSynthesis.getVoices()

Half of the DOM Web Speech API deals with speech synthesis. There is a method called speechSynthesis.getVoices that returns a list of all the supported voices in the given browser. Your website can use it to choose a nice voice to use, or present a menu to the user for them to choose.

The one tricky thing about the getVoices() method is that the underlying implementation will usually not have a list of voices ready when first called. Since speech synthesis is not a commonly used API, most browsers will initialize their speech synthesis lazily in the background when a speechSynthesis method is first called. If that first call happens to be getVoices(), it will return an empty list. So what will conventional wisdom have you do? Something like this:

function getVoices() {
  let voices = speechSynthesis.getVoices();
  while (!voices.length) {
    voices = speechSynthesis.getVoices()
  }

  return voices;
}

If synthesis is indeed not initialized and first returns an empty list, the page will hang in an infinite CPU-bound loop. This is because the loop is monopolizing the main thread and not allowing synthesis to initialize. Also, an empty voice list is a valid value! For example, Chrome does not have speech synthesis enabled on Linux and will always return an empty list.

So, to get this working, we need to avoid blocking the main thread by making asynchronous calls to getVoices(). We should also limit how many times we attempt to call getVoices() before giving up, in case there are indeed no voices:

async function getVoices() {
  let voices = speechSynthesis.getVoices();
  for (let attempts = 0; attempts < 100; attempts++) {
    if (voices.length) {
      break;
    }

    await new Promise(r => requestAnimationFrame(r));
    voices = speechSynthesis.getVoices();
  }

  return voices;
}

But that method still polls, which isn’t great and is needlessly wasteful. There is another way to do it. You could rely on the voiceschanged DOM event that will be fired once synthesis voices become available. We will also add a timeout to that so our async method returns even if the browser never fires that event.

  async function getVoices() {
    const GET_VOICES_TIMEOUT = 2000; // two second timeout

    let voices = window.speechSynthesis.getVoices();
    if (voices.length) {
      return voices;
    }

    let voiceschanged = new Promise(
      r => speechSynthesis.addEventListener(
        "voiceschanged", r, { once: true }));

    let timeout = new Promise(r => setTimeout(r, GET_VOICES_TIMEOUT));

    // whatever happens first, a voiceschanged event or a timeout.
    await Promise.race([voiceschanged, timeout]);

    return window.speechSynthesis.getVoices();
  }

You’re welcome, Internet!

The Mozilla BlogA Firefox mobile product manager on her favorite corners of the internet

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later, and what sites and forums shaped them.

First up is Vesta Zare, a staff product manager at Firefox Mobile here at Mozilla, on the parts of the internet she can’t stop talking about (and, yes, that includes Firefox).

What is your favorite corner of the internet?

I love exploring podcasts where I learn something new or feel inspired. I also enjoy browsing the Instagram feeds of talented photographers and local artists.

What is an internet deep dive that you can’t wait to jump back into?

I can’t wait to go back and finish this super long yet fascinating blog post on “The history of us” on waitbutwhy.com

What is the one tab you always regret closing?

Just like most people, I sometimes keep tabs open as a way of remembering things I want to get back to and read, research, listen to, or buy. If I accidentally close any tab that I am not yet done with, then I experience momentary regret until I remember that Firefox makes it super easy to access and open a recently closed tab.

What can you not stop talking about on the internet right now?

I love how the internet has made it so easy and accessible for anyone to learn about anything, connect with people globally, be entertained, or find a product. However, there is so much content competing for our attention, and so much noise to sort through, that we often end up just going back to the same apps or sites, prioritizing convenience and familiarity over variety and diversity of content, or even our own privacy. But what if it felt just as convenient and familiar to explore the whole web and not just a tiny slice of it? That’s what Firefox Mobile has set out to create, and I am really excited about some of the initial steps we’ve taken to remove clutter and highlight content that people care about. I look forward to feedback from our users that will help us stay on track for building a personalized and joyful, yet private experience of the whole web.

What was the first online community you engaged with?

I used to write blog posts about film and photography and I remember how exciting it was to see people comment on my posts and engage with them. It was the first time that I experienced the global collective power of the web.

What articles and videos are in your Pocket waiting to be read/watched right now?

There are so many! I love exploring new ideas and perspectives and always found it difficult to keep track of all the articles I wanted to save and read later. Then I was introduced to Pocket, even before joining Mozilla, and it was just the perfect solution for me. Now when I open Pocket, there is usually a good mix of articles about Tech, Entertainment, and Food waiting for me.

If you could create your own corner of the internet what would it look like?

I’m passionate about creating experiences that help people stay in the moment and not feel like they need to multitask all the time, nurturing their creativity and long-term happiness.

Vesta Zare is a staff product manager at Firefox Mobile, where she is currently focused on empowering mobile users to have joyful, diverse and private browsing journeys through the web. After majoring in cognitive science at university, Vesta became passionate about creating human-technology interfaces that support people’s mental models and solve real everyday problems. She has worked on mobile apps that made financial planning less complicated, advocated for safe community engagement platforms, and built optimized workflows for film and media management. Vesta sees a lot of opportunities to innovate within the mobile space, and she likes to stay closely connected with mobile consumers to anticipate their ever-changing needs and create experiences that support them.

The post A Firefox mobile product manager on her favorite corners of the internet appeared first on The Mozilla Blog.

The Mozilla BlogFirefox’s Private Browsing mode upleveled for you

There are plenty of reasons why you might want to keep something you are doing on the web to yourself. You might be looking for a ring for your soon-to-be fiancé, looking up what those mysterious skin rashes could be, or reading a salacious celebrity gossip blog. That’s where Private Browsing mode comes in handy. This year, we upleveled our Private Browsing mode and added new advanced features to it. Before we share more about these new features, we wanted to address some of the misconceptions about Private Browsing.

One of the most common myths about Private Browsing (in any major web browser) is that it makes you anonymous on the Internet. The Private Browsing modes in Chrome, Safari, Edge and Firefox are primarily designed to keep your activity private from other users on the same computer; websites and Internet service providers can still gather information about your visit, even if you are not signed in. To learn more about other common myths, visit our site. You should know, though, that Firefox offers something that other browsers don’t: advanced privacy protections. Read on to learn more about our unique tracking protections.

How we upgraded Private Browsing mode

Firefox’s Private Browsing was built to give you extra protection. We believe it’s vitally important to do what we can to protect your private browsing history from internet tracking companies. These companies’ trackers are ubiquitous, hiding in web pages to follow you around the web and build detailed profiles about you based on your browsing habits. To combat this problem, Firefox introduced Tracker Blocking for Private Browsing Windows. Tracker Blocking prevents tracking content (images and scripts engineered to spy on you) from a long list of tracking companies from being loaded into your browser.

This year, the Firefox team added new privacy protections to Private Browsing Mode and strengthened the ones we have. We were determined to deliver extra strong privacy protections in Private Browsing mode. Here’s a list of the advanced privacy protections we added to Private Browsing:

Total Cookie Protection – Stop cookie tracking with separate cookie jars

Total Cookie Protection stops cookies from tracking you around the web. Total Cookie Protection joins our suite of privacy protections called ETP (Enhanced Tracking Protection). In combining Total Cookie Protection with supercookie protections, Firefox is now armed with very strong protection against cookie tracking. Total Cookie Protection works by maintaining a separate “cookie jar” for each website you visit. Any time a website, or third-party content embedded in a website, deposits a cookie in your browser, that cookie is confined to the cookie jar assigned to that website, such that it is not allowed to be shared with any other website.

<figcaption>Total Cookie Protection works by maintaining a separate “cookie jar” for each website you visit</figcaption>
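
To make the “cookie jar” idea concrete, here is a minimal sketch of a cookie store partitioned by top-level site. It is purely illustrative Rust (the type and site names are invented, and this is not Firefox’s actual implementation): a cookie set by a tracker embedded in one site is invisible when the same tracker is embedded in another.

use std::collections::HashMap;

// Purely illustrative sketch of per-site cookie partitioning;
// not Firefox's actual implementation.
#[derive(Default)]
struct PartitionedCookieStore {
    // One "cookie jar" per top-level site the user visits.
    jars: HashMap<String, HashMap<String, String>>,
}

impl PartitionedCookieStore {
    fn set(&mut self, top_level_site: &str, name: &str, value: &str) {
        self.jars
            .entry(top_level_site.to_string())
            .or_default()
            .insert(name.to_string(), value.to_string());
    }

    fn get(&self, top_level_site: &str, name: &str) -> Option<&String> {
        self.jars.get(top_level_site)?.get(name)
    }
}

fn main() {
    let mut store = PartitionedCookieStore::default();
    // A tracker sets a cookie while embedded in site-a.example...
    store.set("site-a.example", "tracker_id", "abc123");
    // ...but that cookie is confined to site-a.example's jar, so the
    // same tracker embedded in site-b.example cannot read it.
    assert!(store.get("site-b.example", "tracker_id").is_none());
}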

SmartBlock – Ensuring smoother logins, even with Facebook

SmartBlock, our advanced tracker-blocking mechanism, combines a great web browsing experience with robust privacy protection by ensuring that you can still use third-party login buttons, including Facebook’s, to sign in to websites, while providing strong defenses against cross-site tracking. SmartBlock provides local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly, and they allow sites relying on the original scripts to load with their functionality intact. The SmartBlock stand-ins are bundled with Firefox, so the chance of third-party content from the trackers loading and tracking you is very slim, unless you interact with the buttons to sign in to Facebook. Additionally, the stand-ins themselves do not contain any code that would support tracking functionality.

HTTPS by Default – Automatically establish a secure, encrypted connection over HTTPS 

Insecure connections are not only a risk to your online security, they also reveal the full content of the websites you are browsing to anyone who can monitor your internet traffic, including your ISP. In Private Browsing Windows, Firefox will favor secure connections to the web by default for every website you visit. Firefox’s HTTPS by Default policy in Private Browsing Windows represents a major improvement in the way the browser handles insecure web page addresses. As illustrated in the Figure below, whenever you enter an insecure (HTTP) URL in Firefox’s address bar, or you click on an insecure link on a web page, Firefox will now first try to establish a secure, encrypted HTTPS connection to the website. In the cases where the website does not support HTTPS, Firefox will automatically fall back and establish a connection using the legacy HTTP protocol instead:

<figcaption>Firefox will now first try to establish a secure, encrypted HTTPS connection to the website</figcaption>
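
The upgrade-then-fall-back policy described above can be sketched in a few lines. The following is illustrative Rust only, with a stubbed connect function standing in for a real network stack; it is not how Firefox implements HTTPS by Default.

struct Page;
#[derive(Debug)]
struct ConnectError;

// Stubbed network call standing in for the real network stack:
// here we pretend that only HTTPS connections succeed.
fn connect(url: &str) -> Result<Page, ConnectError> {
    if url.starts_with("https://") {
        Ok(Page)
    } else {
        Err(ConnectError)
    }
}

// Rewrite an insecure http:// URL to its https:// equivalent.
fn https_first(url: &str) -> String {
    match url.strip_prefix("http://") {
        Some(rest) => format!("https://{rest}"),
        None => url.to_string(),
    }
}

// Try the secure connection first; only if the site does not
// support HTTPS, fall back to the legacy HTTP URL.
fn load(url: &str) -> Result<Page, ConnectError> {
    connect(&https_first(url)).or_else(|_| connect(url))
}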

You’re still protected even when you’re not in Private Browsing mode

In addition to our protections in Private Browsing, we strive to combat tracking in everyday browsing in Firefox overall, and have brought many protections to normal windows. Our Enhanced Tracking Protection feature blocks many of the worst cookies, fingerprinters and social media tracking cookies by default in all windows.

Anyone familiar with Mozilla knows that caring about your privacy is at the heart of our mission.

For more on Firefox:

Firefox browser privacy features explained

Superhero passwords may be your kryptonite wherever you go online

Latest Firefox release includes Multiple Picture-in-Picture and Total Cookie Protection

The post Firefox’s Private Browsing mode upleveled for you appeared first on The Mozilla Blog.

Mozilla Privacy BlogMozilla submits comments to the California Privacy Protection Agency

This week, Mozilla submitted comments in response to the California Privacy Protection Agency’s Invitation for Preliminary Comments on Proposed Rulemaking Under the California Privacy Rights Act (CPRA).

Mozilla has long been a supporter of data privacy laws that empower people, including the trailblazing California privacy laws, California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA). We welcome the opportunity to offer feedback as California considers how to best evolve its privacy protections, and we support the progress made thus far, particularly as federal efforts languish — but there’s more to do.

Our comments this week focused specifically on Global Privacy Control (GPC), a mechanism for people to tell companies to respect their privacy rights through their browser. Once turned on, GPC sends a signal to the websites that people visit, telling them that the person does not want to be tracked and does not want their data to be sold. Mozilla is experimenting with GPC within Firefox and we think it can play an integral role in making a right to opt-out meaningful and easy to use for consumers.
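
On the receiving end, honoring the signal is simple: the GPC proposal has browsers send a Sec-GPC: 1 request header. A minimal, illustrative Rust check (not from any Mozilla codebase; the function and its header representation are invented for this sketch) might look like this:

// Returns true if a request carries the Global Privacy Control
// signal (the "Sec-GPC: 1" header from the GPC proposal).
fn opted_out_of_sale(headers: &[(String, String)]) -> bool {
    headers.iter().any(|(name, value)| {
        name.eq_ignore_ascii_case("sec-gpc") && value.trim() == "1"
    })
}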

Unfortunately, the enforceability of GPC under CCPA remains ambiguous, with competing interpretations of do-not-sell requirements and with many businesses uncertain about their exact obligations when they receive a signal such as the GPC. The practical impact is that businesses may simply ignore the GPC signal. Mozilla therefore encourages the California AG—and other privacy agencies globally—to expressly require businesses to comply with GPC. Regulators must step in to provide enforcement teeth and to ensure consumers’ choices are honored.

For Mozilla, privacy is not optional. We will continue to work with policymakers and regulators in California and across the globe to advance an Internet that truly treats people’s privacy and security as fundamental.

The post Mozilla submits comments to the California Privacy Protection Agency appeared first on Open Policy & Advocacy.

Firefox NightlyThese Weeks in Firefox: Issue 103

Highlights

  • The current plan is to begin the slow rollout of Fission (Site Isolation in Firefox) next week for users on the release channel
  • We enabled the new downloads experience on Nightly only! (not shipping or on beta with 95)
    • If you have questions, please see the public explainer doc first to see if it’s answered there
    • If you see issues, please use one of the links in that doc to file a bug.
  • Some really great stuff for users on macOS has landed recently in Nightly:
    • Firefox now supports sending webpages via Handoff: when you have a page open in Firefox on your Mac, you will see a prompt to open that page on your other nearby Apple devices (bug 1525788).
    • Starting in Firefox 94, watching fullscreen video on Mac will consume significantly less battery power (meta 1653417).
    • Fonts are now rendered correctly on non-English systems running macOS 12 (bug 1732629).
    • Content process startup is 30-70% faster (bug 1467758).
  • Thanks to :emilio for making the autofill background color configurable

Friends of the Firefox team

For contributions from October 19th to November 1st 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug
  • Itiel
  • jbarson
  • Leslie
  • Mathew Hodson
  • onuohamiriam44
  • onuohaoluebube05
  • Oriol Brufau [:Oriol]
New contributors (🌟 = first patch)
  • 🌟Miriam Onuoha fixed three bugs:
  • lesore0789 (Leslie) fixed four bugs:

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

  • John Bieling contributed a fix to ensure that the about:addons “Languages” category listview points the user to the language section on addons.mozilla.org when all langpacks have been uninstalled and that listview is the currently selected view – Bug 1737875


WebExtensions Framework

  • Fixed an intermittent issue on opening unpacked extensions’ browserAction popups – Bug 1735899 in Firefox 95
    • This issue was actually related to a pre-existing issue, only revealed after we fixed Bug 1706594 (and it was triggered by a stale preloaded stylesheet cached entry, which was stuck in the loading state because of a pre-existing bug in RemoteLazyInputStream::Close)
  • Mathew Hodson got rid of the remaining uses of “ChromeUtils.import(…, null)” in the WebExtensions framework internals – Bug 1531368 (and Bug 1733851, Bug 1733871, Bug 1733883, Bug 1733886).
  • Tomislav landed a fix for a regression related to the sender.url value sent along with the messages sent from a content script using the WebExtensions messaging APIs – Bug 1734984 (originally introduced from Bug 1729395)


WebExtension APIs

  • :dw-dev contributed a fix for the “browserSettings.zoomSiteSpecific.set” WebExtensions API method (which ensures that the controlled browser setting is going to be reset as expected when an extension using this API is uninstalled) – Bug 1735047

Downloads Panel

  • With the new Downloads Panel experience enabled, work is ongoing on the issues that people are finding on Nightly
  • Ava, this past summer’s Outreachy intern, is also continuing to work on download spam protection.

Fission

Form Autofill

Desktop Integrations (Installer & Updater)

Lint, Docs and Workflow

macOS Spotlight

  • Fixed a performance issue affecting users with pinned tabs in fullscreen mode. UI jank occurred when the user moused to the top of the screen to reveal the menu bar (bug 1701929).
  • Users with M1 Macs can now import bookmarks from Safari on startup (bug 1735140).

Password Manager

Search and Navigation

  • Daisuke fixed telemetry for the New Tab Page search field hand-off to the Address Bar, so that search counts are appropriately counted – Bug 1732429
  • Harry fixed the search mode indicator colors in high contrast mode – Bug 1735643
  • Thanks to all the contributors from the community:
    • Simon Farre contributed a patch improving performance of some tokenization in the address bar – Bug 1726853
    • Tanju Brunostar contributed a patch fixing an issue where searching for “.de” or “.com” in the address bar tried to visit a page instead of searching – Bug 1724473
    • Leslie contributed a patch fixing high contrast hover state of the shield/lock icons – Bug 1737054

The Mozilla BlogPersonalize Firefox with colorways

Starting with Firefox version 94, you can personalize your browsing experience with 18 exciting new colorway themes. Each limited-edition colorway has its own bespoke character. Find the color that best fits you from our palette.

Learn more about how to get colorways.

The post Personalize Firefox with colorways appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 416

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Foundation
Project/Tooling Updates
Newsletter
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is chumsky, a friendly parser combinator crate.

Thanks to Jan Riemer for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

296 pull requests were merged in the last week

Rust Compiler Performance Triage

Largely a positive week, despite taking a significant performance hit from turning on incremental compilation verification for a subset of the compiler’s total queries in order to catch incremental compilation bugs more quickly. Luckily, optimizations in bidi detection brought large performance improvements.

Triage done by @rylev. Revision range: 6384dc..eee8b

2 Regressions, 4 Improvements, 4 Mixed; 1 of them in rollups. 45 comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Rusty Events between 11/10-11/24 🦀

Online
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

CoScreen

Polar Sync

Tangram

Toposware

Kraken

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

And even if you could fix all of rustc's soundness holes, or otherwise prevent user code from exploiting them, a soundness bug in any third-party library can also make it possible for malicious crates to trigger arbitrary behavior from safe code.

[...]

This is why we need to emphasize that while Rust’s static analyses are very good at limiting accidental vulnerabilities in non-malicious code, they are not a sandbox system that can place meaningful limits on malicious code.

Matt Brubeck on rust-users

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, marriannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogFirefox: the first major browser to be available in the Windows Store

As of today, Firefox desktop is the first major browser to become available in the Windows Store for Windows 10 and Windows 11 users. Previously, if you were on Windows and wanted to use Firefox, you had to download it from the internet and go through a clunky process from Microsoft. Now that Microsoft has changed its Store policies, choosing Firefox as your desktop browser is even more seamless – and it comes with all the latest Firefox features.

screenshot of Firefox available in the Windows Store

On Windows? Download Firefox directly from the Windows Store

Why choose Firefox?

The core of a web browser is what’s called a “browser engine”. The engine is responsible for loading web pages from sites and displaying them on your screen so that you can see and interact with them. Until recently, Microsoft’s store policies required that all web browsers use the engine that Microsoft had built into its platform, which meant we were unable to ship the Firefox you know and love in the Windows Store. This was not only bad for you but bad for the web, because it meant that the web on Windows 11 would only have the features Microsoft was willing to provide. People deserve choice, and we’re glad there is now an easier option to download Firefox on Windows.

Now that Microsoft has changed their policies, we are finally able to ship Firefox with our industry-leading Gecko engine in the Windows Store. This lets us make your experience of the web joyful, safe, private, and fast with unique features like:

  • Personalizing your experience with seasonal Colorways.

When you choose to use Firefox, you help us advocate for a web that is safer, more private and fast. You signal that you want a choice and the freedom to experience the web on your own terms. We’re excited to make Firefox available in the Windows Store. Try it out!

The post Firefox: the first major browser to be available in the Windows Store appeared first on The Mozilla Blog.

The Mozilla BlogWelcome Eric Muhlheim, our incoming Chief Financial Officer

I am excited to announce that Eric Muhlheim has joined Mozilla Corporation as our Chief Financial Officer (CFO).

As our CFO, Eric will be a key member of our steering committee, reporting to me. He will lead our continued strategy to scale our mission impact by growing and diversifying our revenue through new investments, product offerings and business opportunities that allow us to better serve our users and advance our agenda for a healthier, more joyful internet.

Eric stood out as a candidate because of his deep operational expertise in both developing and leading organizations, strategic planning, and in growing revenue streams through operations, acquisitions and partnerships. He’s passionate about contributing to broader business issues outside of finance and has demonstrated a strong commitment to our mission and values.

“I’ve long admired Mozilla’s mission to shape the internet as a force for the public good, give people more control over their lives online, and build products that deliver on these promises,” said Muhlheim. “People are looking today more than ever for a trusted guide to help them navigate the web with safety and joy, and Mozilla has the perspective, technology, and products to be this guide. I look forward to putting my background and skills to work to help Mozilla achieve greater impact and build a better internet for everyone.”

Most recently, Eric provided strategic financial and operating services as an independent consultant to a variety of early stage and privately-funded startups. Prior to that, he served as Chief Financial and Administrative Officer at BuzzFeed where he oversaw the restructuring of the company to drive impact, creating a unified sales organization and managing Finance, Accounting, HR, Legal, IT, Facilities, and Security as an integrated operational support team. 

Eric started his career at The Walt Disney Company, where he held various leadership roles over more than 15 years, including spending three years as an expatriate in China managing the expansion of Disney English, the company’s China-based learning center business. Following his tenure at Disney, Eric was CFO at Helix Education, a provider of technologies and services to power data-driven higher education growth, and at the programmatic advertising exchange OpenX Technologies.

Eric currently serves on the boards of the Independent Shakespeare Co. of Los Angeles and Temple Emanuel of Beverly Hills. He graduated from Princeton University cum laude in Mathematics and holds an MBA from The Stanford Graduate School of Business. He’s based in Los Angeles, California. 

Please join me in welcoming Eric to Mozilla.

The post Welcome Eric Muhlheim, our incoming Chief Financial Officer appeared first on The Mozilla Blog.

The Mozilla Blog8 Firefox pro tips and tricks for Android and iOS (plus a few more)

With something like 15 billion mobile phones in the world, our collective thumbs are getting a workout from swiping and tapping tiny screens all day. Check out some of our favorite pro tips and tricks for getting the most out of Firefox on your phone and tablet that might also give your thumbs and your brain a break. 

1. Jump back in like you never left

You know that feeling of walking into a room and forgetting why you went there, then you leave and suddenly remember? That happens online, too! The new Firefox mobile experience is here to help. Whenever you open the Firefox mobile app, all your open tabs are intuitively grouped and displayed along with your most recent saves (bookmarks and reading list), searches and favorite sites. Even if you get distracted, Firefox gets you back on track so you can pick right up where you left off.

How to see it: Open a new tab, and jump back in through your home screen. To move things around, scroll to the bottom of your home screen and tap Customize homepage.

2. Take a shortcut to your favorite sites 

Firefox learns what sites you visit most and populates the app’s home screen with shortcuts so you can jump quickly back into those sites. With eight shortcut spots available at the top of the screen, you might also want to pin a few favorite sites yourself like:

How to pin a site: Tap the Settings menu (the three dots in the URL bar), then tap Add to top sites (Android) or Add to Shortcuts (iOS). To remove, rename or open a pin in a private tab, long-press to activate those options. 

3. Add widgets for quick access

On iOS: The iOS Widget feature is perfect for setting up shortcuts to all your favorite quick actions in Firefox — searching the web, private browsing, private web searches and opening a link from your clipboard for example. You can even turn the top news and entertainment sites you cycle through daily into a widget for fast access from your home screen.

On Android: Add a Firefox widget to search with from your Android home screen without needing to first launch the browser. 

4. Tap to reader mode for instant decluttering

Using a mobile phone to read on the go is essential; however, ads, images and other embedded content can make for a chaotic visual experience, especially on small screens. Firefox reader mode is the go-to trick to cut the clutter and get to the text.

How to do it: Tap the icon in your URL bar to turn on reader mode. Tap it again to turn it off. 

5. Send tabs across the room, across town and anywhere you’re logged in

This is a pro move that anyone can pull off. Want to send a tab from your small phone screen to your big computer screen? Beam that tab up and send it on its way! It’s a clever way to move content from one screen to another.

From your phone: Tap the share icon in your menu bar, then Send to Device. If no device is listed, sign into your Firefox account to enable tab sending.

From your computer: Right click on a tab, then select Send Tab to Device.

6. Tap, swipe, share tabs through any app

Speaking of tabs, Firefox makes it easy to share website tabs through other apps (like chats, texts and social media) on your phone, which is so much better than copy/paste.

How to do it: Tap the Firefox share icon (three connected dots), then swipe through your apps to share directly through any of them.

7. Go easy on the eyes with dark mode

Dark mode is handy if your eyes need a break. Browsing in dark mode or light mode is a tap away in Firefox, or you can set it to follow your device mode schedule.

Android modes: Tap into the Settings menu, then hit Customize. 

iOS modes: Firefox will automatically match the iOS mode — dark or light. You can override that in your settings, or enable Night Mode to reduce brightness. 

8. Mask up for private browsing

We don’t need to know everything you do online, because honestly, we’re obsessed with protecting your privacy. Firefox Private Browsing automatically erases info like passwords, cookies and history from your device so that when you close out, you leave no trace. While it won’t make you totally anonymous to websites or your ISP, private browsing mode makes it easier to keep what you do online private from anyone else who happens to use your device. 

How to do it: Tap the mask for instant Private Browsing. The background turns purple to denote Private Browsing mode. Tap the mask again to switch back to regular browsing. 

Plus some Android-only tips

Curate tabs into Collections

Our phones have quickly become central organizers to everything we think about, like puppy training tips, best one pot meals and dream vacation lists. Firefox for Android makes it easy to organize browser tabs into any grouping you want with Collections. 

How to do it: Tap the Settings menu (the three stacked dots) on an open tab to start a Collection, then name it and add tabs to it. You can also send entire tab collections to Firefox on your computer (see the Send tabs tip above) or share them all with other people.

Expand browser features with add-ons

Add-ons add features to Firefox to make browsing faster, safer or just plain fun. To see the vetted Firefox for Android add-ons, tap the three dots, then tap Add-ons.

Clear and clean up your tabs automatically

Like any self-proclaimed tab hoarder, we know that tabs can pile up. Ever notice that the number of open tabs on your phone is ∞? When you hit that, you will have officially crossed over to infinite tabs (which is really just >100). Now you can set Firefox to clean up on your behalf.

How to do it: Go to your Settings and tap Tabs to select the duration that you want to close them automatically. Then, tap back to Delete browsing data on quit to tell Firefox when you want to clear your history, cookies, tabs and more from your device every time you quit the app. 

Get it all from an indie company that puts people before profit

When you download Firefox, you’re choosing to support an independent tech company. Firefox is the only major browser backed by a non-profit fighting to give you more openness, transparency and control of your life online.

The post 8 Firefox pro tips and tricks for Android and iOS (plus a few more) appeared first on The Mozilla Blog.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 94-95)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 94 and 95 Nightly release cycles.

👷🏽‍♀️ JS features

⚡ WebAssembly

  • We landed more changes for Wasm exception support.
  • Executable code for Wasm modules can now be cached in the network cache. We also added gzip compression for this.
  • The fuzzing team integrated the wasm-smith fuzzer in SpiderMonkey.
  • We prototyped various instructions that are part of the Relaxed SIMD proposal.
  • Code allocation failures are now reported to the console.
  • We fixed a performance cliff in the register allocator that caused hangs on certain large Wasm modules.
  • We landed the remaining functionality for Wasm64.
  • Type definitions for Wasm GC support are now properly collected.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We’ve migrated Gecko’s ScriptPreloader to use the new Stencil XDR serialization format.
  • We were then able to remove the legacy, error-prone XDR code and replace the JSScript cloning mechanism with sharing stencils.
  • These changes also allowed us to tighten invariants for scripts with non-syntactic scopes, allowing us to move certain checks from the VM to the bytecode emitter.
  • We optimized string literals to not always require atomization because this can be slow.
  • With these changes, the new Stencil architecture is utilized for all existing scenarios and the error-prone legacy code is now all removed. This unified architecture allows us to continue improving caching and speculation techniques with far less risk of stability or correctness bugs. Congratulations to the team for passing this milestone. 🎉

🚿DOM Streams

We’re moving our implementation of the Streams specification out of SpiderMonkey into the DOM. This lets us take advantage of Gecko’s WebIDL machinery, making it much easier for us to implement this complex specification in a standards-compliant way and stay up-to-date.

A preliminary implementation of ReadableStreams (without integration into other browser specifications) has landed disabled, but it’s a bit too early for people to play with yet.

🧹Garbage Collection

  • We fixed a memory leak involving weak maps. This leak affected some popular websites.
  • We changed permanent atoms and symbols to always be marked; this let us remove checks for this from the marking path.
  • We optimized gray root marking to be incremental. This fixes a source of long GC slices.
  • We fixed the rooting hazard static analysis to handle virtual method calls better. We also parallelized the call graph generation step.
  • We removed some overhead from the gray unmarking code that showed up in hang stacks.
  • We fixed a performance issue where we could collect the nursery even if it’s empty or disabled.

🌍 Unified Intl implementation

Work is underway to unify the Intl (Internationalization) code in SpiderMonkey and the rest of Gecko as a shared mozilla::intl component. This results in less code duplication and will make it easier to migrate from the ICU library to ICU4X in the future.

🗂 ReShape

ReShape is a project to optimize and simplify our object layout and property representation after removing TI. This will help us fix some long-standing issues related to performance, memory usage and code complexity.

  • We optimized object allocation by moving handling of TypedArrays and ArrayBuffers out of the generic allocation path.
  • We were then able to remove the NewObjectCache, saving some memory.
  • We optimized property enumeration for for-in with null/undefined to reuse the same empty iterator.
  • We optimized the generic property enumeration code to do less work in most cases.

📚 Miscellaneous

  • We added a better JSAPI based on templates for Typed Arrays and ArrayBuffers.
  • We are experimenting with suppressing the lazy parser when parsing off-main-thread. This improves page load performance in a number of scenarios.
  • We optimized comparisons with small constant strings to generate specialized JIT code.
  • We optimized comparisons of the form typeof x === "y". This fixes an old bug that was filed almost 10 years ago!
  • We moved the documentation for running our test suites into firefox-source-docs.
  • We optimized some code in the register allocator to avoid iterating over many unrelated registers.
  • We added markers to JIT code generation debug output to make the output easier to read.
  • We started tidying up and enforcing invariants for the context’s exception state.
  • We fixed a performance issue where JS code throwing many exceptions was very slow due to collecting exception stacks.
  • Lukas.bernhard added shape information to our CacheIR Health Report tool.
  • TheIDInside updated the UI for CacheIR Health Report to add a filter for JS opcodes.

Data@MozillaDetecting Internet Outages with Mozilla Telemetry Data

Whenever an internet connection is cut in a country or city, the safety and security of millions of people may be at stake. Documenting outages helps internet access defenders understand when and where they took place even when authorities or service providers may deny them.

When large numbers of Firefox users experience connection failures for any reason, this produces an anomaly in the recorded telemetry data. At the country or city level, this can provide a corroborative signal of whether an outage or intentional shutdown occurred.
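
As a rough illustration of the idea (and not Mozilla’s actual methodology; the function and threshold scheme here are invented), one could flag a region as anomalous when its connection-failure rate jumps well above its recent baseline, for example with a simple z-score test:

// Illustrative z-score anomaly test, not Mozilla's actual method:
// flag today's connection-failure rate as anomalous when it sits
// more than `threshold` standard deviations above the recent mean.
fn is_anomalous(history: &[f64], today: f64, threshold: f64) -> bool {
    if history.is_empty() {
        return false;
    }
    let n = history.len() as f64;
    let mean = history.iter().sum::<f64>() / n;
    let variance = history.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    today - mean > threshold * variance.sqrt()
}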

Several large technology companies, including Google and Cloudflare, publicly share data about outages of consumer-facing products in different ways. But researchers and journalists can usually only home in on the exact nature of an outage by combining data from multiple sources.

Data validation process

Mozilla’s aggregate data for detecting outages is not publicly shared, although it contains no personally identifiable information. In order to first assess the reliability of a dataset designed for this purpose, we invited a group of external researchers to query a dataset of aggregated signals that could be indicative of outages (e.g. the time it takes to perform a TLS handshake).

Researchers from Internet Outage Detection and Analysis (IODA) of the Center for Applied Internet Data Analysis (CAIDA), Open Observatory of Network Interference (OONI), RIPE Network Coordination Center (RIPE NCC), Measurement Lab (M-Lab), Internews and Access Now joined a collaborative effort in 2020 to compare existing data on outages with Mozilla’s dataset. This research took place over several months, anchored by a series of video calls in which researchers took turns sharing their screens to display visual explorations of data.

The main case studies were shutdowns in Belarus, Uganda and Myanmar, each with different time spans and characteristics. Individual research methods varied as teams undertook comparisons of Mozilla’s data with the data they typically use for this purpose.

New report explores the dataset

Today, OONI and IODA/CAIDA are releasing an independent report of their own research into Mozilla’s dataset that highlights their assessment of its advantages and limitations.

It’s called “Investigating Internet shutdowns through Mozilla telemetry” and it dives into the different cases, describing in which ways the dataset confirms or augments insights, and how it could be improved to better serve data researchers, journalists and civil society.

The collaborative research project ended in June 2021 with individual interviews of all participants by Mozilla to gather insights. They voiced unanimous support for facilitating greater access to the dataset to serve digital rights advocates worldwide.

In their report, OONI and IODA/CAIDA also confirmed its potential value to the measurement community if Mozilla’s dataset were to be shared publicly.

“It would be a really great addition to the datasets civil society uses to investigate and confirm internet outages,” says Arturo Filastò, the Project Lead and Co-Founder of OONI.

“The fact that it’s from so many different clients in locations where people are trying to browse the internet means that it’s a good representation of what the internet user experience is like around the world. The breadth and granularity of the data is unlike any other dataset out there,” says Filastò, adding that including mobile telemetry to the dataset would be a great advantage since shutdowns often target mobile networks in areas where desktop connectivity is limited.

With thanks to everyone who took part in or supported the research and interviews:

Simone Basso, Jochai Ben-Avie, Rafael Bezerra Nunes, Georgia Bullen, Jon Camfield, Federico Ceratto, Sage Cheng, Alberto Dainotti, Marianne Díaz Hernández, Michael Droettboom, Arturo Filastò, Georg Fritzsche, Saptarshi Guha, Eric Jjemba, Emily Litka, Lai Yi Ohlsen, Ramakrishna Padmanabhan, Melody Patry, Jan-Erik Rediger, Stephen D. Strowes, Berhan Taye, Hamilton Ulmer, Vasilis Ververis, Maria Xynou, Mingwei Zhang.

This post was co-authored by Solana Larsen, Alessio Placitelli, Udbhav Tiwari.

Niko MatsakisView types for Rust

I wanted to write about an idea that’s been kicking around in the back of my mind for some time. I call it view types. The basic idea is to give a way for an &mut or & reference to identify which fields it is actually going to access. The main use case for this is having “disjoint” methods that don’t interfere with one another.

This is not a proposal (yet?)

To be clear, this isn’t an RFC or a proposal, at least not yet. It’s some early stage ideas that I wanted to document. I’d love to hear reactions and thoughts, as I discuss in the conclusion.

Running example

As a running example, consider this struct WonkaShipmentManifest. It combines a vector bars of ChocolateBars and a list golden_tickets of indices for bars that should receive a golden ticket.

struct WonkaShipmentManifest {
    bars: Vec<ChocolateBar>,
    golden_tickets: Vec<usize>,
}

Now suppose we want to iterate over those bars and put them into their packaging. Along the way, we’ll insert a golden ticket. To start, we write a little function that checks whether a given bar should receive a golden ticket:

impl WonkaShipmentManifest {
    fn should_insert_ticket(&self, index: usize) -> bool {
        self.golden_tickets.contains(&index)
    }
}

Next, we write the loop that iterates over the chocolate bars and prepares them for shipment:

impl WonkaShipmentManifest {
    fn prepare_shipment(self) -> Vec<WrappedChocolateBar> {
        let mut result = vec![];
        for (bar, i) in self.bars.into_iter().zip(0..) {
            let opt_ticket = if self.should_insert_ticket(i) {
                Some(GoldenTicket::new())
            } else {
                None
            };
            result.push(bar.into_wrapped(opt_ticket));
        }
        result
    }
}

Satisfied with our code, we sit back and fire up the compiler and, wait… what’s this?

error[E0382]: borrow of partially moved value: `self`
   --> src/lib.rs:16:33
    |
15  |         for (bar, i) in self.bars.into_iter().zip(0..) {
    |                                   ----------- `self.bars` partially moved due to this method call
16  |             let opt_ticket = if self.should_insert_ticket(i) {
    |                                 ^^^^ value borrowed here after partial move
    |

Well, the message makes sense, but it’s unnecessary! The compiler is concerned because we are borrowing self when we’ve already moved out of the field self.bars, but we know that should_insert_ticket is only going to look at self.golden_tickets, and that value is still intact. So there’s not a real conflict here.

Still, thinking on it more, you can see why the compiler is complaining. It only looks at one function at a time, so how would it know what fields should_insert_ticket is going to read? And, even if it were to look at the body of should_insert_ticket, maybe it’s reasonable to give a warning for future-proofing. Without knowing more about our plans here at Wonka Inc., it’s plausible that future code authors may modify should_insert_ticket to look at self.bars or any other field. This is part of the reason that Rust does its analysis on a per-function basis: checking each function independently gives room for other functions to change, so long as they don’t change their signature, without disturbing their callers.

What we need, then, is a way for should_insert_ticket to describe to its callers which fields it may use and which ones it won’t. Then the caller could permit invoking should_insert_ticket whenever the field self.golden_tickets is accessible, even if other fields are borrowed or have been moved.

An idea

When I’ve thought about this problem in the past, I’ve usually imagined that the list of “fields that may be accessed” would be attached to the reference. But that’s a bit odd, because a reference type &mut T doesn’t itself have any fields. The fields come from T.

So recently I was thinking, what if we had a view type? I’ll write it {place1, ..., placeN} T for now. What it means is “an instance of T, but where only the paths place1...placeN are accessible”. Like other types, view types can be borrowed. In our example, then, &{golden_tickets} WonkaShipmentManifest would describe a reference to WonkaShipmentManifest which only gives access to the golden_tickets field.

Creating a view

We could use some syntax like {place1, ..., placeN} expr to create a view type1. This would be a place expression, which means that it refers to a specific place in memory, and hence can be directly borrowed without creating a temporary. So I can create a view onto self that only has access to golden_tickets like so:

impl WonkaShipmentManifest {
    fn example_a(&mut self) {
        let self1 = &{golden_tickets} self;
        println!("tickets = {:#?}", self1.golden_tickets);
    }
}

Notice the distinction between &self.golden_tickets and &{golden_tickets} self. The former borrows the field directly. The latter borrows the entire struct, but only gives access to one field. What happens if you try to access another field? An error, of course:

impl WonkaShipmentManifest {
    fn example_b(&mut self) {
        let self1 = &{golden_tickets} self;
        println!("tickets = {:#?}", self1.golden_tickets);
        for bar in &self1.bars {
            //      ^^^^^^^^^^
            // Error: self1 does not have access to `bars`
        }
    }
}

Of course, when a view is active, you can still access other fields through the original path, without disturbing the borrow:

impl WonkaShipmentManifest {
    fn example_c(&mut self) {
        let self1 = &{golden_tickets} self;
        
        for bar in &mut self.bars {
            println!("tickets = {:#?}", self1.golden_tickets);
        }
    }
}

And, naturally, that access includes the ability to create multiple views at once, so long as they have disjoint paths:

impl WonkaShipmentManifest {
    fn example_d(&mut self) {
        let self1 = &{golden_tickets} self;
        let self2 = &mut {bars} self;
        
        for bar in &mut self2.bars {
            println!("tickets = {:#?}", self1.golden_tickets);
            bar.modify();
        }
    }
}

View types in methods

As example C in the previous section suggested, we can use a view type in our definition of should_insert_ticket to specify which fields it will use:

impl WonkaShipmentManifest {
    fn should_insert_ticket(&{golden_tickets} self, index: usize) -> bool {
        self.golden_tickets.contains(&index)
    }
}

As a result of doing this, we can successfully compile the prepare_shipment function:

impl WonkaShipmentManifest {
    fn prepare_shipment(self) -> Vec<WrappedChocolateBar> {
        let mut result = vec![];
        for (bar, i) in self.bars.into_iter().zip(0..) {
            //          ^^^^^^^^^^^^^^^^^^^^^
            // Moving out of `self.bars` here....
            let opt_ticket = if self.should_insert_ticket(i) {
                //              ^^^^
                // ...does not conflict with borrowing a
                // view of `{golden_tickets}` from `self` here.
                Some(GoldenTicket::new())
            } else {
                None
            };
            result.push(bar.into_wrapped(opt_ticket));
        }
        result
    }
}

View types with access modes

All my examples so far were with “shared” views through & references. We could of course say that &mut {bars} WonkaShipmentManifest gives mutable access to the field bars, but it might also be nice to have an explicit mut mode, such that you write &mut {mut bars} WonkaShipmentManifest. This is more verbose, but it permits one to give away a mix of “shared” and “mut” access:

impl WonkaShipmentManifest {
    fn add_ticket(&mut {bars, mut golden_tickets} self, index: usize) {
        //              ^^^^  ^^^^^^^^^^^^^^^^^^^
        //              |     mut access to golden-tickets
        //              shared access to bars
        assert!(index < self.bars.len());
        self.golden_tickets.push(index);
    }
}

One could invoke add_ticket even if you had existing borrows to bars:

fn foo() {
    let mut manifest = WonkaShipmentManifest { bars, golden_tickets };
    let bar0 = &manifest.bars[0];
    //         ^^^^^^^^^^^^^^ shared borrow of `manifest.bars`...
    manifest.add_ticket(22);
    //      ^ borrows `self` mutably, but with view
    //        `{bars, mut golden_tickets}`
    println!("debug: {:?}", bar0);
}

View types and ownership

I’ve always shown view types with references, but combining them with ownership makes for other interesting possibilities. For example, suppose I wanted to extend GoldenTicket with some kind of unique serial_number that should never change, along with an owner field that will be mutated over time. For various reasons2, I might like to make the fields of GoldenTicket public:

pub struct GoldenTicket {
    pub serial_number: usize,
    pub owner: Option<String>,
}

impl GoldenTicket {
    pub fn new() -> Self {
        /* ... */
    }
}

However, if I do that, then nothing stops future owners of a GoldenTicket from altering its serial_number:

let mut t = GoldenTicket::new();
t.serial_number += 1; // uh-oh!

The best answer today is to use a private field and an accessor:

pub struct GoldenTicket {
    serial_number: usize,
    pub owner: Option<String>,
}

impl GoldenTicket {
    pub fn new() -> Self {
        /* ... */
    }
    
    pub fn serial_number(&self) -> usize {
        self.serial_number
    }
}

However, Rust’s design kind of discourages accessors. For one thing, the borrow checker doesn’t know which fields are used by an accessor, so if you have code like this, you will now get annoying errors (this has been the theme of this whole post, of course):

let mut t = GoldenTicket::new();
let n = &mut t.owner;
compute_new_owner(n, t.serial_number());

Furthermore, accessors can be kind of unergonomic, particularly for things that are not copy types. Returning (say) an &T from a get can be super annoying.

Using a view type, we have some interesting other options. I could define a type alias GoldenTicket that is a limited view onto the underlying data:

pub type GoldenTicket = {serial_number, mut owner} GoldenTicketData;

pub struct GoldenTicketData {
    pub serial_number: usize,
    pub owner: Option<String>,
    dummy: (),
}

Now if my constructor function only ever creates this view, we know that nobody will be able to modify the serial_number for a GoldenTicket:

impl GoldenTicket {
    pub fn new() -> GoldenTicket {
        /* ... */
    }
}

Obviously, this is not ergonomic to write, but it’s interesting that it is possible.

View types vs privacy

As you may have noticed in the previous example, view types interact with traditional privacy in interesting ways. It seems like there may be room for some sort of unification, but the two are also different. Traditional privacy (pub fields and so on) is like a view type in that, if you are outside the module, you can’t access private fields. Unlike a view, though, you can call methods on the type that do access those fields. In other words, traditional privacy denies you direct access, but permits intermediated access.

View types, in contrast, are “transitive” and apply both to direct and intermediated actions. If I have a view {serial_number} GoldenTicketData, I cannot access the owner field at all, even by invoking methods on the type.
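
To illustrate the difference, here is a small sketch in the post’s proposed (not-yet-real) syntax, so none of it compiles today; the owner_name method is invented for the example. A method that takes plain &self needs access to all fields, so it cannot be called through a narrower view:

impl GoldenTicketData {
    fn owner_name(&self) -> Option<&str> {
        self.owner.as_deref()
    }
}

fn inspect(t: &{serial_number} GoldenTicketData) {
    let n = t.serial_number; // OK: the view grants `serial_number`
    let o = t.owner_name();
    //        ^^^^^^^^^^
    // Error: `owner_name` takes plain `&self`, which requires access
    // to all fields, but this view only grants `serial_number`.
}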

Longer places

My examples so far have only shown views onto individual fields, but there is no reason we can’t have a view onto an arbitrary place. For example, one could write:

struct Point { x: u32, y: u32 }
struct Square { upper_left: Point, lower_right: Point }

let mut s: Square = Square { upper_left: Point { x: 22, y: 44 }, lower_right: Point { x: 66, y: 88 } };
let s_x = &{upper_left.x} s;

to get a view of type &{upper_left.x} Square. Paths like s.upper_left.y and s.lower_right would then still be mutable and not considered borrowed.

View types and named groups

There is another interaction between view types and privacy: view types name fields, but if you have private fields, you probably don’t want people outside your module typing their names, since that would prevent you from renaming them. At the same time, you might like to be able to let users refer to “groups of data” more abstractly. For example, for a WonkaShipmentManifest, I might like users to know they can iterate the bars and, at the same time, check whether a given bar should get a golden ticket:

impl WonkaShipmentManifest {
    pub fn should_insert_ticket(&{golden_tickets} self, index: usize) -> bool {
        self.golden_tickets.contains(&index)
    }
    pub fn iter_bars_mut(&mut {bars} self) -> impl Iterator<Item = &mut ChocolateBar> {
        self.bars.iter_mut()
    }
}

But how should we express that to users without having them name fields directly? The obvious extension is to have some kind of “logical” fields that represent groups of data that can change over time. I don’t know how to declare those groups though.
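
Purely to make the open question concrete, here is one imaginable declaration form. The view keyword and the ticketing group are editor-invented illustrations, not part of the post’s proposal: a public named group maps to private fields, which the module remains free to rename.

pub struct WonkaShipmentManifest {
    bars: Vec<ChocolateBar>,
    golden_tickets: Vec<usize>,
}

// Hypothetical syntax: `ticketing` is a public name for a group of
// private fields; callers write `&{ticketing} self` without ever
// naming `golden_tickets` directly.
pub view ticketing for WonkaShipmentManifest = {golden_tickets};

impl WonkaShipmentManifest {
    pub fn should_insert_ticket(&{ticketing} self, index: usize) -> bool {
        self.golden_tickets.contains(&index)
    }
}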

Groups could be more DRY

Another reason to want named groups is to avoid repeating the names of common sets of fields over and over. It’s easy to imagine that there might be a few fields that some cluster of methods all want to access, and that repeating those names will be annoying and make the code harder to edit.

One positive thing about Rust’s current restrictions is that they have sometimes encouraged me to factor a single large type into multiple smaller ones, where the smaller ones encapsulate a group of logically related fields that are accessed together. On the other hand, I’ve also encountered situations where such refactorings feel quite arbitrary – I have groups of fields that, yes, are accessed together, but which don’t form a logical unit on their own.

As an example of both why this sort of refactoring can be good and bad at the same time, I introduced the cfg field of the MIR Builder type to resolve errors where some methods only accessed a subset of fields. On the one hand, the CFG-related data is indeed conceptually distinct from the rest. On the other, the CFG type isn’t something you would use independently of the Builder itself, and I don’t feel that writing self.cfg.foo instead of self.foo made the code particularly clearer.

View types and fields in traits

Some time back, I had a draft RFC for fields in traits. That RFC was “postponed” and moved to a repo to iterate, but I have never had the time to invest in bringing it back. It has some obvious overlap with this idea of views, and (iirc) I had at some point considered using “fields in traits” as the basis for declaring views. I think I rather like this more “structural” approach, but perhaps traits with fields might be a way to give names to groups of fields that public users can reference. Have to mull on that.

View types and disjoint closure capture

Rust 2021 introduced disjoint closure capture. The idea is that closures capture one reference per path that is referenced, subject to some caveats. One of the things I am very happy with is that this was implemented with virtually no changes to the borrow checker: we basically just tweaked how closures are desugared. Besides saving a bunch of effort on the implementation3, this means that the risk of soundness problems is not increased. This strategy does have a downside, however: closures can sometimes get bigger (though we found experimentally that they rarely do in practice, and sometimes get smaller too).

Closures that access two paths like a.foo and a.bar can get bigger because they capture those paths independently, whereas before they would have just captured a as a whole. Interestingly, using view types offers us a way to desugar those closures without introducing unsafe code. Closures could capture {foo, bar} a instead of the two fields independently. Neat!
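
For concreteness, here is a small example of disjoint capture that is real (edition 2021) Rust; the Inventory type and its fields are invented for illustration:

struct Inventory {
    bars: Vec<u32>,
    tickets: Vec<usize>,
}

fn demo(mut inv: Inventory) {
    // Edition 2021: the closure captures the two disjoint paths
    // `inv.bars` (shared) and `inv.tickets` (mutable) separately,
    // rather than capturing `inv` as a whole as edition 2018 did.
    let mut add_ticket = |i: usize| {
        if i < inv.bars.len() {
            inv.tickets.push(i);
        }
    };

    // Because only `inv.bars` and `inv.tickets` are captured, a
    // shared borrow of `inv.bars` can coexist with the closure.
    let first_bar = &inv.bars[0];
    add_ticket(0);
    println!("{first_bar}, {:?}", inv.tickets);
}

The closure above holds two captures (two pointers); under view types it could instead capture a single {bars, mut tickets} inv, recovering the smaller pre-2021 layout without unsafe code.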

How does this affect learning?

I’m always wary about extending “core Rust” because I don’t want to make Rust harder to learn. However, I also tend to feel that extensions like this one can have the opposite effect: I think that what throws people the most when learning Rust is trying to get a feel for what they can and cannot do. When they hit “arbitrary” restrictions like “cannot say that my helper function only uses a subset of my fields”4, that can often be the most confusing thing of all, because at first people think that they just don’t understand the system. “Surely there must be some way to do this!”

Going a bit further, one of the other challenges with Rust’s borrow checker is that so much of its reasoning is invisible and lacks explicit syntax. There is no way to “hand annotate” the value of lifetime parameters, for example, so as to explore how they work. Similarly, the borrow checker is currently tracking fine-grained state about which paths are borrowed in your program, but you have no way to talk about that logic explicitly. Adding explicit types may indeed prove helpful for learning.

But there must be some risks?

Yes, for sure. One of the best and worst things about Rust is that your public API docs force you to make decisions like “do I want &self or &mut self access for this function?” It pushes a lot of design up front (raising the risk of premature commitment) and makes things harder to change (more viscous). If it became “the norm” for people to document fine-grained information about which methods use which groups of fields, I worry that it would create more opportunities for semver-hazards, and also just make the docs harder to read.

On the other side, one of my observations is that public-facing types don’t want views that often; the main exception is that sometimes it’d be nice to have small accessors (for example, a Vec might like to document that one can read len even when iterating). Most of the time that I find myself frustrated with this particular limitation of Rust, it has to do with private helper functions (similar to the initial example). In those cases, I think that the documentation is actually helpful, since it guides people who are reading and helps them know what to expect from the function.

Conclusion

This concludes our tour of “view types”, a proto-proposal. I hope you enjoyed your ride. Curious to hear what people think! I’ve opened a thread on internals for feedback. I’d love to know if you feel this would solve problems for you, but also how you think it would affect Rust learning – not to mention better syntax ideas.

I’d also be interested to read about related work. The idea here seems likely to have been invented and re-invented numerous times. What other languages, either in academic or industry, have similar mechanisms? How do they work? Educate me!

Footnotes

  1. Yes, this is ambiguous. Think of it as my way of encouraging you to bikeshed something better.

  3. Shout out to the RFC 2229 working group folks, who put in months and months and months of work on this.

  4. Another example is that there is no way to have a struct that has references to its own fields.

The Talospace ProjectFirefox 94 on POWER

Firefox 94 is released. I have little interest in the colourizer, but I do like about:unloads and EGL support on Linux for great WebGL justice even on X11 (I don't use the Wayland Wasteland), at least if you have an AMD/ATI card like the WX7100 Raptor sells as a BTO option. There are also various performance improvements and a fun feature where you can use a different Mozilla VPN server for each separate multi-account container, the latter probably being Firefox's most useful capability right now. The LTO-PGO patch is unchanged from Firefox 93 and the .mozconfigs are unchanged from Firefox 90.

Mozilla Privacy BlogMozilla publishes position paper on the EU Digital Identity Framework

Earlier this year the European Commission unveiled its proposed ‘Digital Identity Framework’, a revision to the 2014 eIDAS regulation. While the draft law includes many welcome provisions on the security and interoperability of digital ID, it also contains a set of provisions that, if adopted, would have a fundamentally negative impact on the website security ecosystem. Our new position paper spells out the risks involved in forcing browsers to support a kind of web certificate known as Qualified Web Authentication Certificates (QWACs), and provides recommendations for lawmakers in the European Parliament and EU Council who are presently amending the draft law.

Web browsers are key user agents in our modern digital world. The web browser helps people visit the sites and services they want to use, and it protects them while they are there. One of the most important ways in which browsers protect users is through website authentication. For instance, if a person wants to visit Europa.eu, the web browser must reliably ensure that the site is actually under control of the owner of the domain ‘Europa.eu’, and not an attacker on the network impersonating the European Commission’s domain. Absent that assurance, users might send passwords, personal details, and other compromising information to the wrong party, putting them at risk of identity theft, fraud, and other privacy interferences.

An insecure website authentication ecosystem would lead to significant harms, both online and off. Put simply, the trust benefits of website authentication and the ecosystem that underpins it are essential for the Digital Single Market, e-government, as well as to protect the public interest work of journalists, politicians, and human rights defenders.

Unfortunately, the draft eIDAS revision would undermine years of advancements in this space. In a nutshell, the revised Article 45 would force browsers to suspend the ‘root store’ policies that are essential for maintaining trust and security online. These rigorous and independent policies and vetting practices underpin a system of online trust that is put into practice every single second, and which is fundamental to ensuring the online security of every person on the planet who uses a browser to navigate the web.

At the same time, the types of website certificates that browsers would be forced to accept, namely QWACs, are based on a flawed certificate architecture that is ill-suited for the security risks users face online today. In the years since the original eIDAS regulation was adopted in 2014, an increasing body of research has illustrated how the certificate architecture that inspired QWACs – namely, extended validation certificates – lulls individuals into a false sense of security that is often exploited for malicious purposes such as phishing and domain impersonation. For that reason, since 2019 no major browser showcases EV certificates directly in the URL address bar.

As such, should the revised Article 45 be adopted as is, Mozilla would no longer be able to honour the security commitments we make to the hundreds of millions of people who use our Firefox browser or any of the other browser and email products that also depend on Mozilla’s Root Program. It would amount to an unprecedented weakening of the website security ecosystem, and undercut the browser community’s ability to push back against authoritarian regimes’ interference with fundamental rights (see here and here for two recent examples).

Fortunately, there is still time to address the problems wrought by this proposal, and our position paper includes recommendations for how lawmakers in the European Parliament and EU Council can amend the relevant provisions. As the discussions on the eIDAS revision heat up in the EU Institutions, we’ll be engaging intensively with lawmakers and the broader community to protect trust and security on the web.


Mozilla Addons BlogAdd-on Policy Changes 2021

From time to time, the Add-ons Team makes changes to its policies in order to provide more clarity for developers, improve privacy and security for users, and to adapt to the evolving needs of the ecosystem. Today we’d like to announce another such update, to make sure the Add-ons developer community is well-prepared before we begin enforcing the updated policies on December 1st, 2021.

In this update, we’ve put a major focus on clarity and accessibility, taking a holistic view of our policies and making them as easy to understand and navigate as possible. While this has resulted in a substantially rewritten and reorganized document, the policy changes are modest and unlikely to surprise anyone. The most notable changes that may require action on the part of add-on developers are as follows:

  • Collecting browsing activity data, such as visited URLs, history, associated page data or similar information, is only permitted as part of an add-on’s primary function. Collecting user data or browsing information secretively remains prohibited.
  • Add-ons that serve the sole purpose of promoting, installing, loading or launching another website, application or add-on are no longer permitted to be listed on addons.mozilla.org.
  • Encryption – standard, in-browser HTTPS – is now always required when communicating with remote services. In the past, this was only required when transporting sensitive information.
  • The section on cookie policies has been removed, and providing a consent experience for accessing cookies is no longer required. Note, however, that if you use cookies to access or collect technical data, user interaction data or personal data, you will still require a consent experience at first run of the add-on.

The remaining changes in the document focus on improving clarity, discoverability and examples. While the policies have not substantially changed, it will be worth your time to review them.

  • If your add-on collects technical data, user interaction data, or personal data, you must show a consent experience at the first run of the add-on. This update improves our description of these requirements, and we encourage you to review both the requirements and our recommended best practices for implementing them.
  • There are certain types of prohibited data collection. We do this to ensure user privacy and to avoid add-ons collecting more information than necessary, and in this update we’ve added a section describing the types of data collection that fall under this requirement.
  • Most add-ons require a privacy policy. For add-ons listed on addons.mozilla.org, the policy must be included in the listing in its full text. We’ve created a section specific to the privacy policy that lays out these requirements in more detail.
  • If your add-on makes use of monetization, the monetization practices must adhere to the data collection requirements in the same way the add-on does. While we have removed duplicate wording from the monetization section, the requirements have not changed and we encourage you to review them as well.

You can preview the policies and ensure your extensions abide by them to avoid any disruption. If you have questions about these updated policies or would like to provide feedback, please post to this forum thread.

Update: The policies are now live, please see the main policy for details.


Mozilla Attack & DefenseFinding and Fixing DOM-based XSS with Static Analysis

Despite all the efforts of fixing Cross-Site Scripting (XSS) on the web, it continuously ranks as one of the most dangerous security issues in software.

In particular, DOM-based XSS is gaining increasing relevance: DOM-based XSS is a form of XSS where the vulnerability resides completely in the client-side code (e.g., in JavaScript). Indeed, more and more web applications implement all of their UI code using front-end web technologies: Single Page Applications (SPAs) are more prone to this vulnerability, mainly because they are more JavaScript-heavy than other web applications. An XSS in an Electron application, however, has the potential to cause even more damage due to the system-level APIs available in the Electron framework (e.g., reading local files and executing programs).

The following article will take a deeper look into Mozilla’s eslint-based tooling to detect and prevent DOM-based XSS and how it might be useful for your existing web applications. The eslint plugin was developed as part of our mitigations against injection attacks in the Firefox browser, for which the user interface is also written in HTML, JavaScript and CSS.

Background: Real world example of DOM-based XSS

Let’s take a moment to look at typical sources of DOM-based XSS first. Imagine a bit of JavaScript (JS) code like this:

let html = `
  <div class="image-box">
    <img class="image"
         src="${imageUrl}"/>
  </div>`;
// (...)
main.innerHTML = html;

You will first notice the variable called html, which constructs a bit of HTML using a JavaScript template string. It also features the inclusion of another variable – imageUrl – to be used in the src attribute. The full html string is then assigned to main.innerHTML.

If we assume that the imageUrl variable is controlled by an attacker – then they might easily break out of the src attribute syntax and enter arbitrary HTML of their choosing to launch an XSS attack.

This example demonstrates how easy it is to accidentally introduce a DOM XSS vulnerability: the application was expecting an image URL, but it also accepts all sorts of strings, which are then parsed into HTML and JavaScript. This is enough to enable XSS attacks.

If we want to avoid these bugs we need to find all instances in which the application parses a string into HTML and then determine if it can be controlled from the outside (e.g., form input, URL parameters, etc).

To do so efficiently, we are required to inspect various patterns in source code. First, let’s look at all assignments to innerHTML or outerHTML. In order not to miss other sources of XSS, we also need to inspect calls to the following functions: insertAdjacentHTML(), document.write(), document.writeln().
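
For example, each of these call-based sinks parses its string argument into HTML just like an innerHTML assignment does (userInput stands in for attacker-controllable data):

// All of these reach the same class of DOM XSS sink:
container.insertAdjacentHTML("beforeend", userInput);
document.write(userInput);
document.writeln(userInput);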

When first trying this ourselves, we at Mozilla used text search with tools like grep or ripgrep, but it did not prove successful: even a complicated search pattern gave us thousands of results, many of them false positives (e.g., assignments of safe, hardcoded strings). We knew we needed something more syntax-aware.
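
For illustration, a naive text search along these lines (an assumed pattern, not the exact one we used) surfaces every syntactic match, safe or not:

rg -n 'innerHTML\s*=|insertAdjacentHTML|document\.write'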

Linting and Static Analysis

Static Analysis is just another way of saying that we want to inspect source code automatically. Our static analysis method builds on existing JavaScript linting with eslint, which provides robust JS source code parsing and supports new JavaScript syntax extensions. Furthermore, the provided plugin API helps us build an automated check with relatively little new code. However, there are some limitations:

Caveats

Since we are scanning the JavaScript source code, there are some things we can not easily do:

  • Static Analysis has almost no visibility into a variable’s content (i.e., harmful, harmless, attacker controlled, hardcoded).
  • In JavaScript, the source code does not tell us a variable’s type (e.g., Number, String, Array, Object).
  • Static Analysis is easily fooled by minification, bundling or obfuscation.

At Mozilla, we managed to accept these limitations because we can build on our existing engineering practices:

  • All proposed patches are going through code review.
  • The repository contains all relevant JavaScript source code (e.g., third-party libraries are vendored in).

The latter point is sometimes hard to guarantee and requires using dependencies through published and versioned libraries. Third-party JavaScript dependencies loaded through <script> elements are therefore out of scope. For a cohesive security posture, the associated security risks need to be mitigated by other means (e.g., using in-browser checks at runtime like CSP). You should validate whether these assumptions also hold true for your project.

How Static Analysis works

To explain the implementation of our eslint plugin, let’s take a look at how JavaScript can be parsed and understood by eslint: A common representation is the so-called Abstract Syntax Tree (AST). Let’s take a look at the AST for a simplified version of our vulnerability from above:

foo.innerHTML = evil:

AssignmentExpression (operator: =)
|-- left: MemberExpression
|   |-- object: Identifier "foo"
|   `-- property: Identifier "innerHTML"
`-- right
    `-- Identifier "evil"

Indeed, the whole line is seen as an assignment, with a left and a right side. The right side is a variable (Identifier) and the left side foo.innerHTML is accessing the property of an object (MemberExpression).

Now let’s look at the AST representation for a case where XSS is not possible, one which just assigns an empty string: foo.innerHTML = "".

AssignmentExpression (operator: =)
|-- left: MemberExpression
|   |-- object: Identifier "foo"
|   `-- property: Identifier "innerHTML"
`-- right
    `-- Literal ""

Did you spot the difference? Again the assignment has a left and right side. But in this case, the right node is of type Literal (i.e., a hardcoded string).

We can use exactly these kinds of differences to understand the basics of our linter plugin: when looking at assignments, all hardcoded strings are considered trustworthy and do not need further static analysis – provided, of course, that all patches are subject to code review before being committed to the source code. Naturally, the plugin has many more syntax expressions to take into account.
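
To make that concrete, here is a deliberately tiny rule sketch in the eslint plugin style – a toy example, not the actual no-unsanitized implementation – that flags innerHTML assignments whose right-hand side is not a string literal:

// toy-no-inner-html.js – minimal eslint rule sketch
module.exports = {
  meta: { type: "problem" },
  create(context) {
    return {
      // eslint calls this visitor for every `x = y` node in the AST
      AssignmentExpression(node) {
        const { left, right } = node;
        const isInnerHTML =
          left.type === "MemberExpression" &&
          left.property.type === "Identifier" &&
          left.property.name === "innerHTML";
        // Hardcoded string literals are considered trustworthy
        if (isInnerHTML && right.type !== "Literal") {
          context.report({
            node,
            message: "Possibly unsanitized assignment to innerHTML",
          });
        }
      },
    };
  },
};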

Bearing in mind that the abstract syntax tree cannot tell us anything about a variable beyond its name, we probably want to allow some other things: in our case, we configured our linter runtime (not the plugin itself) to skip files if they are in the test/ folder, as we do not expect test code to be running on our users’ systems.

We also need to take false positives into account. False positives are detections of code in which the content of the variable is known to be safe through other means. Here, we recommend that our developers use a trusted Sanitizer library that will always return XSS-safe HTML. Essentially, we allow all code on the right side of the assignment as long as it is wrapped in a function call to a known sanitizer, like so:

foo.innerHTML = DOMPurify.sanitize(evil);

We currently recommend using DOMPurify as your sanitizer and our linter allows such calls in our default configuration. In parallel, we are also actively working on specifying and implementing a secure Sanitizer API for the web platform. Either way, as long as our sanitizer function is well implemented, the input data doesn’t have to be.

With all these techniques and decisions in mind, we ended up developing an eslint plugin called eslint-plugin-no-unsanitized, which also contains checks for other typical XSS-related source code fragments like document.write() and is fully configurable in terms of which sanitizers you want to allow.
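
To try it on your own code, the plugin’s two rules can be enabled in an eslint configuration along these lines (see the plugin’s README for the authoritative options, including how to allow additional sanitizers):

// .eslintrc.js
module.exports = {
  plugins: ["no-unsanitized"],
  rules: {
    // flags assignments to innerHTML, outerHTML and similar properties
    "no-unsanitized/property": "error",
    // flags calls like insertAdjacentHTML() and document.write()
    "no-unsanitized/method": "error",
  },
};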

Evaluation & Integration

When we first tried finding XSS in the Firefox UI automatically, we used grep and spotted thousands of potential vulnerabilities. With the eslint plugin, we reduced this number to 34 findings! This reduction enabled us to start a focused manual code audit and resulted in finding only two critical security bugs. Imagine trying to identify those two bugs by going through thousands of potential findings manually.

Eventually, we fully integrated eslint-plugin-no-unsanitized into our CI systems by choosing an iterative approach:

  • We enabled the linter over time and directory by directory.
  • We skip test files.
  • We also had to allow some exceptions for code that violates the linter but was not actually insecure (validated through code audit).

An important note here is that allowing linter violations incurs a risk, so exceptions need to be temporary. It’s still useful to tolerate exceptions during the migration to the linter plugin, but not after. We’ve also seen developers misunderstand the purpose of the linter and try to design their own ways of evading the checks. Our lesson: by controlling the path for exceptions and escalations, we were able to understand developers’ use cases and adapt the tool to find workable solutions for all of them.

Once all code has been integrated, it is on the security & analysis teams to get the number of exceptions down to zero. With all those bugs fixed and most linter violations resolved, we are now running the plugin against all newly submitted Firefox code and have a pretty good handle on XSS issues in our codebase.

Conclusion: You can fix DOM-XSS

Fixing DOM-based XSS across a whole codebase is not easy, but we believe this overview will serve as a useful guide: as a first step, we can highly recommend just using the eslint plugin no-unsanitized as it is and running it against your source code. A dry run will already tell you whether the topic of DOM-based XSS is a problem for you at all. Our integration section showed how you can integrate the linter gradually, based on risk or feasibility. But we also want to note that source code analysis is not a silver bullet: there are notable caveats, and it is useful to complement static analysis with enforcement in the browser and at runtime. Eventually, though, you will be able to get rid of a lot of DOM-XSS.

This is a summary of my presentation of the same title, delivered at Sekurak Mega Hacking Party (June 2021) and JSCamp Barcelona (July 2021). Feel free to reach out, if you want me to talk about web security at your event.

The Mozilla BlogLife is complicated. There’s more than one way to browse to meet your needs with Firefox Focus.

According to researchers, the average person has about 6,200 thoughts throughout the day. That’s a lot of time spent bouncing between random thought bubbles like:

  • Does my cat understand me? 
  • Who is Kaia Gerber’s mom?
  • Best sneaky vegan side dishes
  • Why is everyone talking about bones or no bones?
  • Parkour vs slacklining
  • Cheapest flights to New York City
  • When does the World Cup start?
  • Minecraft hacks
  • What happened to the Firefox logo?

Naturally, you hop on the web to get answers. If you want to keep your random web browsing separate and private, Firefox Focus is THE companion phone app for all those forget-about-it moments online.

Q: Firefox or Firefox Focus?

A: Both! 

Using Firefox as your default mobile browser makes life’s busy moments easier and more efficient with tabs and bookmarks synced between devices, secure password management, streamlined reader mode and a customizable home screen.


No matter what mobile browser you use — Firefox, Chrome or Safari — Focus can work right alongside it. Focus is the ideal companion app for instant privacy so you can do quick searches on the go and then tap the trash button or close the app to make your browser history and searches disappear behind the purple curtain. 


Automatic tracking protection means speed and privacy on mobile

The latest version of Firefox Focus comes with a refreshed, distraction-free design and the same privacy protections you’ve come to expect from us.

The protections dashboard from Firefox for desktop now appears in Focus for your mobile devices. Tap the shield icon to expose how many trackers the Focus app has blocked from snooping on you. Here’s the best part — automatic tracking protection and ad blocking means your pages load faster while your data stays private.

Firefox Focus Tracking Protection screen

Instant privacy that’s also customizable

You can also set Focus as your go-to app for return visits to your favorite sites, going beyond the quick and private searches. Just pin those sites to your Focus home screen, so you can quickly hop back in as soon as you open the app.

Firefox’s famous Enhanced Tracking Protection is enabled automatically in Focus to block known advertising, analytics, and social trackers and scripts. That’s easy to modify by tapping on the shield icon and adjusting the sliders. Even if you do change your privacy settings, you can do so with the knowledge that your browsing history along with any cookies that try to follow you are deleted from your device when you tap the Focus trash can. 

When it comes to quick privacy while browsing the web, tap the Firefox Focus icon on your phone. Go ahead and mull some of 6,200 thoughts that pop into your head each day, and keep it to yourself. With Firefox Focus, your business isn’t our business.


Mozilla Privacy BlogMozilla suggests improvements to Canada’s online harms agenda

Later this year the Canadian government will publish new laws to overhaul how platforms in the country must tackle illegal and harmful content. The government’s desire to intervene is unsurprising – around the world, policymakers and the public are pressing for greater responsibility and accountability on the part of Big Tech. Yet in its proposal that platforms take down more content in ever-shorter periods of time, the government’s approach merely responds to symptoms and not structural causes of online harms. Worse still, the government’s proposal includes some suggested policy ideas that would have the opposite effect of making online spaces healthier and more inclusive. As we seek to advance a better vision for platform accountability across the world, we’re weighing in here with some recommendations on how Canadian lawmakers can use this moment to meaningfully enhance responsibility while protecting rights and competition online.

As detailed in a white paper released in the summer, the government wants content-sharing platforms to monitor their services for certain forms of objectionable content and act on user reports within 24 hours. The new rules will apply to some categories of content that are already illegal under the criminal code (like child abuse material) as well as forms of content that, though not captured under the criminal code, are nonetheless considered harmful when transmitted through online services (e.g. non-consensual nude imagery). A new regulator will police the rules, and companies will be subject to strict retention and reporting requirements.

We understand the desire to act against online harms, but the government’s suggested approach misses the mark. It focuses merely on symptoms of harmful online experiences, and not the structural factors that make those experiences harmful. The government’s approach is underpinned by the implicit belief that it is possible to sanitise the web of objectionable content – companies just need to take down more content, more quickly. Yet the reality is quite different. We know that harmful content experiences often stem from how objectionable content is amplified, targeted, and presented to individuals online (e.g. through content recommender systems; ad microtargeting techniques).

The government’s apparent ‘zero-tolerance’ approach to objectionable content likewise manifests through the proposal that online services must report instances of ‘potentially criminal content’ to national security agencies. This attempt to responsibilise online services is deeply concerning. It will incentivise greater and more invasive monitoring of individuals by platforms (e.g. upload filtering; real-name policies) and have a disparate impact on those individuals and communities who already face structural oppression in the criminal justice system.

On the basis of the above, we believe that policymakers should pursue a systems-level approach to addressing content-related harms. Our vision sees policy as serving to incentivise greater responsibility from companies in how they design and operate their services, and helping to ensure that companies’ business practices do not inadvertently engender or exacerbate content-related harms. As we’ve engaged in these conversations around the world, we’ve built out a vision for what a systems-level approach to addressing online harms and improving platform accountability could look like. As a starting point, we have four recommendations for how Canadian lawmakers should design their upcoming policy intervention:

  • Asymmetry of obligations: The government should avoid one-size-fits-all approaches that put regressive and unnecessary compliance burdens on small and low-risk companies. Many of the most pressing policy issues pertain to types of companies that constitute a small subset of the market in terms of scale and business practices, and the rules should reflect that (e.g. stricter rules for companies of a certain size, scale, or target market).
  • A risk-based approach: Companies should be obligated or incentivised to undertake risk assessments that identify the ways in which their service’s design, operation, and misuse could engender or compound online harms. The rules should likewise oblige or incentivise companies to take steps to reduce the probability and severity of these risks, in a continuous cycle of procedural accountability.
  • Systemic transparency: Some of the most egregious harms in the online ecosystem remain hidden from view, simply because we do not have insight into how platforms shape online experiences. Transparency is a crucial prerequisite for accountability, and so Canadian lawmakers should implement a robust regime that allows regulators and public interest researchers to look under the hood of platforms (e.g. mandating that platforms disclose meaningful data concerning the ads that run on their services).
  • Polycentric oversight: Standing up a dedicated oversight body for content-sharing platforms makes sense in principle, but the devil is in the detail. It’s essential that oversight bodies are well-resourced, staffed with the appropriate technical expertise, and do not serve to undercut judicial processes or safeguards. We also think it’s important that oversight is polycentric (e.g. by incentivising third-party auditing; data access regimes for researchers) to avoid a single point of failure in the oversight function.

We believe an approach that is built around these features is more likely to achieve the government’s objectives, and ensure all Canadians can enjoy an internet experience defined by civil discourse, human dignity, and individual expression. The government is expected to continue consulting on these plans throughout the fall, and we encourage the policy community in Canada and elsewhere to feed into the discussions to help guide lawmakers towards progressive, thoughtful policy paths. For Mozilla this workstream raises issues that are core to our mission, and we’ll continue to follow the discussions closely.



Jan-Erik RedigerThis Week in Glean: Crashes & a buggy Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.


In September I finally landed work to ship Glean through GeckoView. Contrary to what that post said, Fenix did not actually use Glean coming from GeckoView immediately, due to another bug that took us a few more days to land. Shortly after that was shipped in a Fenix Nightly release, we received a crash report (bug 1733757) pointing to code that we haven't touched in a long time. And yet the switch from a standalone Glean library to shipping Glean in GeckoView uncovered a crashing bug that quickly rose to be the top crasher for Fenix for more than a week.

When I picked up that bug after the weekend I was still thinking that this would be just a bug, which we can identify & fix and then get into the next Fenix release. But in data land nothing is ever just a bug.

As I don't own an Android device myself I was initially restricted to the Android emulator, but I was unsuccessful in triggering that bug in my simple use of Fenix or in any tests I wrote. At some point I went as far as leveraging physical devices in the Firebase test lab, still with no success of hitting the bug. Later that week I picked up an Android test device, hoping to find easy steps to reproduce the crash. Thankfully we also have quality assurance people running tests, simply using the browser and reporting back steps to reproduce bugs and crashes. And that's what one of them, Delia, did: provide simple steps to reproduce the crash. Simple here meant: open a website and leave the phone untouched for roughly 8-9 minutes. With those steps I was able to reproduce the crash in both the emulator and my test device.

As I was waiting for the crash to occur I started reading the code again. From the crash report I already knew the exact place where the code panics and thus crashes the application, I just didn't know how it got there. While reading the code around the panic, armed with the knowledge that it takes several minutes to reach it, I finally stumbled upon a clue: the map implementation that was used internally has a maximum capacity of elements it can store. It even documents that it will panic1!

/// The maximum capacity of a [`HandleMap`]. Attempting to instantiate one with
/// a larger capacity will cause a panic.
///
/// [...]
pub const MAX_CAPACITY: usize = (1 << 15) - 1;

I now knew that at some point Glean exhausts the map capacity, slowly but eventually leading to a panic2. Most of Glean is set up to not require dynamically allocating new things other than the data itself, but we never store the data itself within such a map. So where were we using a map, dynamically adding new entries to it and potentially forgetting to remove them after use?

Luckily the stack trace from those crash reports gave me another clue: All crashes were caused by labeled counters. Some instrument-the-code-paths-and-recompile cycles later I was able to pinpoint it to the exact metric that was causing the crash.

With all this information collected I was able to determine that while the panic was happening in a library we use, the problem was that Glean had the wrong assumptions about how the use of labeled counters in Kotlin maps to objects in a Rust map. On every subscript access to a labeled counter (myCounter["label"]) Glean would create a new object, store it in the map and return a handle to this object. The solution was to avoid creating new entries in that map on every access from a foreign language and instead cache created objects by a unique identifier and hand out already-created objects when re-accessed. This fix was implemented in a few lines, accompanied by an extensive commit message as well as tests in all of the 3 major foreign-language Glean SDKs.
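
As an illustration of the caching pattern described above, a minimal Kotlin sketch with made-up names (LabeledCounter, labelCache, createRustSideCounter) – not Glean's actual internals:

import java.util.concurrent.ConcurrentHashMap

class LabeledCounter {
    // Cache the Rust-side handle per label, so repeated subscript accesses
    // reuse the same object instead of allocating a new handle-map entry.
    private val labelCache = ConcurrentHashMap<String, Long>()

    operator fun get(label: String): Long =
        labelCache.getOrPut(label) {
            createRustSideCounter(label) // runs at most once per label
        }

    // Hypothetical FFI call into Rust that allocates a handle-map entry.
    private fun createRustSideCounter(label: String): Long = TODO("FFI call")
}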

That fix was released in Glean v42.0.1, and shortly after in Fenix Nightly and Beta.

The bug was fixed and the crash prevented, but that wasn't the end of it. The code leading to the bug has been in Glean since at least March 2020 and yet we only started seeing crashes with the GeckoView change. What was happening before? Did we miss these crashes for more than a year or were there simply no crashes?

To get a satisfying answer to these questions I had to retrace the issue in older Glean versions. I took older versions of Glean & Fenix from just before the GeckoView-Glean changes, added logging and built myself a Fenix to run in the emulator. And as quickly as that3 logcat spit out the following lines repeatedly:

E  ffi_support::error: Caught a panic calling rust code: "called `Result::unwrap()` on an `Err` value: \"PoisonError { inner: .. }\""
E  glean_ffi::handlemap_ext: Glean failed (ErrorCode(-1)): called `Result::unwrap()` on an `Err` value: "PoisonError { inner: .. }"

No crashes! It panics, but it doesn't crash in old Fenix versions. So why does it crash in new versions? It's a combination of things.

First, Glean data recording happens in a thread to avoid blocking the main thread. Right now that thread is handled on the Kotlin side. A panic in the Rust code base will bubble up the stack and crash the thread, thus poisoning the internal lock of the map, but it will not abort the whole application by itself. Subsequent calls will run on a new thread, handled by the Kotlin side. When using a labeled counter again it will try to acquire the lock, detect that it is poisoned and panic once more. That panic is caught by the FFI layer of Glean and turned into a log message.

Second, the standalone Glean library is compiled with panic=unwind, the default in Rust, which unwinds the stack on panics. If not caught, the runtime will abort the current thread, writing a panic message to the error output. ffi-support however catches it, logs it and returns without crashing or aborting. Gecko on the other hand sets panic=abort. In this mode a panic will immediately terminate the current process (after writing the crash message to the error output), without ever trying to unwind the stack, giving no chance for the support library to catch it. The Gecko crash reporter is able to catch those hard aborts and send them as crash reports. As Glean is now part of the overall Gecko build, all of Gecko's build flags will transitively apply to Glean and its dependencies, too. So when Glean is shipped as part of GeckoView it runs in panic=abort mode, leading to internal panics aborting the whole application.
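
For reference, the panic strategy is the standard Cargo profile setting shown below; Gecko selects the equivalent through its own build flags rather than this exact file (generic sketch):

[profile.release]
panic = "abort"  # the default is "unwind"; Gecko's build selects abort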

That behavior by itself is fine: Glean should only panic in exceptional cases and we'd like to know about them. It's good to know that an application could continue running without Glean working correctly; we won't be able to record and send telemetry data, but at least we're not interrupting someone using the application. However, unless we engineers run into those bugs and see the log, we will not notice them and thus can't fix them. So ultimately this change in (crashing) behavior is acceptable (and wanted) going forward.

After fixing the initial bug and being able to answer why it only started crashing recently my work was still not done. We were likely not recording data in exceptional cases for quite some time, which is not acceptable for a telemetry library. I had to explore our data, estimate how many metrics for how many clients were affected, inform relevant stakeholders and plan further mitigations. But that part of the investigation is a story for another time.

This bug investigation had help from a lot of people. Thanks to Mike Droettboom for coordination, Marissa Gorlick for pushing me to evaluate the impact on data & reaching the right people, Travis Long for help with the Glean code & speedy reviews, Alessio Placitelli for reviews on the Gecko side, Delia Pop for the initial steps to reproduce the bug & Christian Sadilek for help on the Android Components & Fenix side.


Footnotes:

  1. Except that it panics at a slightly different place than the explicit check in the code base would suggest.

  2. That meant I could also tune how quickly it crashed. A smaller maximum capacity means it's reached more quickly, reducing my bug reproduction time significantly.

  3. No, not after 9 minutes, but in just under 3 minutes after tuning the maximum map capacity, see 2.


The Rust Programming Language BlogAnnouncing Rust 1.56.1

The Rust team has published a new point release of Rust, 1.56.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.56.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.56.1 stable

Rust 1.56.1 introduces two new lints to mitigate the impact of a recently disclosed security concern, CVE-2021-42574. We recommend all users upgrade immediately to ensure their codebase is not affected by the security issue.

You can learn more about the security issue in the advisory.

The Rust Programming Language BlogSecurity advisory for rustc (CVE-2021-42574)

This is a lightly edited cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified of a security concern affecting source code containing "bidirectional override" Unicode codepoints: in some cases the use of those codepoints could lead to the reviewed code being different than the compiled code.

This is an issue with how source code may be rendered in certain contexts, and its assigned identifier is CVE-2021-42574. While the issue itself is not a flaw in rustc, we're taking proactive measures to mitigate its impact on Rust developers.

Overview

Unicode has support for both left-to-right and right-to-left languages, and to aid writing left-to-right words inside a right-to-left sentence (or vice versa) it also features invisible codepoints called "bidirectional override".

These codepoints are normally used across the Internet to embed a word inside a sentence of another language (with a different text direction), but it was reported to us that they could be used to manipulate how source code is displayed in some editors and code review tools, leading to the reviewed code being different than the compiled code. This is especially bad if the whole team relies on bidirectional-aware tooling.

As an example, the following snippet (with {U+NNNN} replaced with the Unicode codepoint NNNN):

if access_level != "user{U+202E} {U+2066}// Check if admin{U+2069} {U+2066}" {

...would be rendered by bidirectional-aware tools as:

if access_level != "user" { // Check if admin

Affected Versions

Rust 1.56.1 introduces two new lints to detect and reject code containing the affected codepoints. Rust 1.0.0 through Rust 1.56.0 do not include such lints, leaving your source code vulnerable to this attack if you do not perform out-of-band checks for the presence of those codepoints.

To assess the security of the ecosystem we analyzed all crate versions ever published on crates.io (as of 2021-10-17), and only 5 crates have the affected codepoints in their source code, with none of the occurrences being malicious.

Mitigations

We will be releasing Rust 1.56.1 today, 2021-11-01, with two new deny-by-default lints detecting the affected codepoints, respectively in string literals and in comments. The lints will prevent source code files containing those codepoints from being compiled, protecting you from the attack.

If your code has legitimate uses for the codepoints, we recommend replacing them with the related escape sequence. The error messages will suggest the right escapes to use.
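
In a Rust string literal, the escaped form looks like this (illustrative example):

// escaped form of U+202E, instead of embedding the raw codepoint:
let s = "user\u{202E}";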

If you can't upgrade your compiler version, or your codebase also includes non-Rust source code files, we recommend periodically checking that the following codepoints are not present in your repository and your dependencies: U+202A, U+202B, U+202C, U+202D, U+202E, U+2066, U+2067, U+2068, U+2069.
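
For example, a one-off repository check with ripgrep could look like this (pattern shown for illustration; ripgrep's default regex engine accepts \x{...} codepoint escapes):

rg -n '[\x{202A}-\x{202E}\x{2066}-\x{2069}]' .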

Timeline of events

  • 2021-07-25: we received the report and started working on a fix.
  • 2021-09-14: the date for the embargo lift (2021-11-01) is communicated to us.
  • 2021-10-17: performed an analysis of all the source code ever published to crates.io to check for the presence of this attack.
  • 2021-11-01: embargo lifts, the vulnerability is disclosed and Rust 1.56.1 is released.

Acknowledgments

Thanks to Nicholas Boucher and Ross Anderson from the University of Cambridge for disclosing this to us according to our security policy!

We also want to thank the members of the Rust project who contributed to the mitigations for this issue. Thanks to Esteban Küber for developing the lints, Pietro Albini for leading the security response, and many others for their involvement, insights and feedback: Josh Stone, Josh Triplett, Manish Goregaokar, Mara Bos, Mark Rousskov, Niko Matsakis, and Steve Klabnik.

Appendix: Homoglyph attacks

As part of their research, Nicholas Boucher and Ross Anderson also uncovered a similar security issue identified as CVE-2021-42694 involving homoglyphs inside identifiers. Rust has included mitigations for that attack since Rust 1.53.0. Rust 1.0.0 through Rust 1.52.1 are not affected, due to the lack of support for non-ASCII identifiers in those releases.

Mozilla GFXSwitching the Linux graphics stack from GLX to EGL

Hi there! This is a guest post from Robert Mader, who contributed enormous improvements to Firefox’s graphics stack on Linux.

TL;DR

In the upcoming Firefox 94 release we will enable the EGL backend for a big group of our Linux users. This will increase WebGL performance, reduce resource consumption and make our life as developers easier going forward.

Background

In order to use hardware accelerated APIs like OpenGL with windowing systems like X11 or Wayland there needs to be an interface bringing them together. For OpenGL on X11 most programs use GLX, while its successor, EGL, gets used on Wayland, Android and in the embedded space. While EGL has some major advantages compared to GLX and, in theory, can be used on X11 just as well, its adoption there has been very slow.

I can only speculate why exactly that is, but I think it comes down to the following reasons:

  1. Games and similar applications barely benefit from the switch
  2. Applications and toolkits that would benefit from it often don’t enable hardware accelerated rendering on X11 in the first place, likely because of the bad and complex driver situation in the past
  3. Because of the slow adoption, X11 EGL implementations remained buggy and incomplete → back to 2.

What changed?

Firefox is an application that benefits heavily from hardware acceleration in many areas. However, until recently, software rendering remained the default. It was only this year that Webrender, Firefox’s new rendering engine, finally got enabled for most Linux users.
There is a very long list of developments that made this step easier and thus possible. To name a few:

  1. OpenGL drivers got better
  2. Xorg DDX drivers got better (e.g. the “modesetting” driver becoming the standard for Intel)
  3. Composited desktops became more common
  4. Plugin support (Flash Player) was dropped from Firefox
  5. Webrender made hardware acceleration much more desirable compared to the old OpenGL layers backend
  6. New technologies such as Wayland and DMABUF emerged

The last point was crucial for the topic of the post. When Martin Stránský implemented Wayland hardware acceleration support in Firefox, he could not reuse GLX code, but instead used the Android EGL one. From there, an interesting dynamic started.

Improving the EGL backend and sharing code

Step by step, a number of improvements were made to the EGL/Wayland backend which had effects on other platforms as well:

  1. In order to improve WebGL performance and allow efficient hardware video decoding, Martin implemented zero-copy GPU buffer sharing via DMABUF. This is much easier on EGL than on GLX. And while Firefox did have a similar buffer sharing implementation for X11 (using Xrender), that one was never stable enough to get turned on by default.
  2. I improved the EGL backend to not only support OpenGL ES but also “desktop” OpenGL, making sure it’s not lagging behind the GLX backend.
  3. I went on and made it possible to use the EGL backend on X11 as well.
  4. Martin extended the DMABUF and VAAPI support to X11.
  5. Greg, an independent Wayland contributor, wrote an initial implementation for partial damage on EGL.
  6. Jamie Nicol extended the partial damage support to properly work on Android – and thus on X11 as well.
  7. Greg made sure our GPU detection (and smoke test, from the days when drivers would often crash) works on Wayland without requiring Xwayland to be present, so it no longer requires GLX.

This is just a small selection of examples, but maybe it gives you an idea of what I’m trying to say: more and more code gets shared between Wayland, X11/EGL and Android. This improves code quality, increases available time to spend on features and bugs, reduces the maintenance burden – you name it.

Making EGL the default

Over the last year, more and more users found out about the possibility to use EGL on X11 – likely because it’s a prerequisite for hardware video decoding. Lots of bugs got fixed, in Firefox but also in other components. Now we finally feel ready to let it ride the trains. As of Firefox 94, users using Mesa driver >= 21 will get it by default. Users of the proprietary Nvidia driver will need to wait a little bit longer, as the currently released drivers lack an important extension. However, most likely we’ll be able to enable EGL on the 470 series onwards. DMABUF support (and thus better WebGL performance) requires GBM support and will be limited to the 495 series onwards.
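
If you would like to experiment before your configuration is switched over, Firefox honors an environment variable that forces the EGL backend on X11 – at your own risk on driver versions that are not yet enabled by default:

MOZ_X11_EGL=1 firefox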

Benefits for users

So what exactly can you expect, and why? Mainly:

  1. Improved WebGL performance. Thanks to DMABUF zero-copy buffer sharing, WebGL can be done both sandboxed and without a round-trip to system RAM. WebGL is not only used in obvious places such as games, but also in more subtle ways, e.g. on Google Maps.
  2. Reduced power consumption. With partial damage we don’t need to redraw the whole window any more if only a small part of the content changed. Common examples here are small animations on websites or when loading tabs.
  3. Less bugs. EGL is more modern, much better suited for complex hardware accelerated desktop applications and used on more platforms, compared to GLX.
  4. Hardware video decoding by default is another crucial step closer – in fact for most users it should now be only one preference away (but beware, it still has a couple of bugs).

Special thanks

There is a long list of people who have contributed to this step. To name a few: Martin Stránský, Andrew Osmond, Jamie Nicol, Greg V, Jan Ikenmeyer (Darkspirit), Michel Dänzer, the Firefox GFX team, the Mesa project and contributors, the Nvidia drivers team, the GTK team.

Finally: thanks a lot to all users who filed bugs and helped us fix them!

About the author

Hi, I’m Robert Mader, a free time FOSS contributor, mainly working on Firefox and Mutter/Gnome-Shell.

Cameron KaiserThe current status of DIY TenFourFox

Due to family and work issues my time has been curtailed for all kinds of things, but at this point, at least, there's something for you to work with: as promised, the TenFourFox source code has been updated to use 91ESR for the certificate and security base and the roots pulled up accordingly. I've also got a few security updates loaded and backported a performance tweak intended for Monterey systems but also yields a small boost on any version of Mac OS X. The browser will now be forever "45.41.6" (ESR32 SPR6) with the perpetual name "Rolling Release." This version number will not be revved again without good reason.

So now it's time for you to make your first build (and, if you feel adventurous, find a problem and try to fix it, but let's take baby steps). Officially, we have documentation for that already using MacPorts. A semi-frozen build of MacPorts is what I use on my G5: I have three trees, one being the main testing debug tree which pulls from Github, and then two local subtrees that pull from the local debug tree (created with git clone --shared so that they are about 25% of the size) which I use to make rolling G5-optimized (for my Quad) and 7450-optimized (for my iMac and iBook) builds. I do my work in the debug tree and make sure everything functions properly, then check it in and git pull and gmake -f client.mk build in the optimized subtrees to roll up the changes. When the subtrees are happy too, I'll git push from the main debug tree into Github. I consider this as officially supported a solution as presently exists under the circumstances. The Quad runs TenFourFox directly from the G5 subtree now.
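
For illustration, the three-tree setup looks roughly like this (directory names are invented; point the clone at your own fork if you have one):

git clone https://github.com/classilla/tenfourfox.git tff-debug  # main testing debug tree
git clone --shared tff-debug tff-g5    # G5-optimized subtree (~25% the size)
git clone --shared tff-debug tff-7450  # 7450-optimized subtree
# after committing work in tff-debug, roll up the changes:
cd tff-g5 && git pull && gmake -f client.mk build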

However, MacPorts does have a lot of prerequisites and requires some additional prep time (sometimes many hours) to build the tools from source. Macintosh Garden has an "unofficial TenFourFox toolkit" that contains an Automator workflow, a supervising script and a fully precompiled toolchain. You will have to install Xcode first (2.5 for Tiger, 3.1.4 for Leopard), but that is the only apparent requirement, and multiple users have reported it builds the browser successfully.

One common problem that gets reported on non-G5 systems is the dreaded internal compiler error. However, when the build is restarted, it usually progresses and continues for a while without incident. The problem is likely tied to memory pressure; compilers really thrash memory. If your system hits this a lot and it starts to annoy you, consider removing -j2 from the build flags in whatever .mozconfig you're using (change your copy in .mozconfig, not the master *.mozcfg). This will run only one compiler instance at a time, which is slower but requires less memory, and is more likely to complete the build in one shot without manual intervention.

If you really don't want to build it yourself, however, you do have at least one option: InterWebPPC. This is a modified build of TenFourFox that explicitly removes some features for performance, so it is not equivalent to TenFourFox, and it is not necessarily built on any particular schedule either. It also does not have separate G4/7400 and G4/7450 builds, though this may not be noticeable on your particular system. You can download prebuilt binaries for G3, G4 or G5, or compile it from source using the "unofficial toolkit" above. I haven't seen other downstream builds yet, but if you know of one, plan to make one or are using one, post it in the comments.

There are a couple of other security fixes I'm reviewing, and I'm toying with some GitHub-specific hacks to deal with its dependence on async/await, but these again will not be done on any particular timetable (I'll post here when or if I get around to them). Still, some of you have already built the browser successfully, and if you can build TenFourFox on your Power Mac you can build pretty much anything. Perhaps this might spark some additional development interest ...

Niko MatsakisRustc Reading Club

Ever wanted to understand how rustc works? Me too! Doc Jones and I have been talking and we had an idea we wanted to try. Inspired by the very cool Code Reading Club, we are launching an experimental Rustc Reading Club. Doc Jones posted an announcement on her blog, so go take a look!

The way this club works is pretty simple: every other week, we’ll get together for 90 minutes and read some part of rustc (or some project related to rustc), and talk about it. Our goal is to walk away with a high-level understanding of how that code works. For more complex parts of the code, we may wind up spending multiple sessions on the same code.

We may yet tweak this, but the plan is to follow a “semi-structured” reading process:

  • Identify the modules in the code and their purpose.
  • Look at the type definitions and try to describe their high-level purpose.
  • Identify the most important functions and their purpose.
  • Dig into how a few of those functions are actually implemented.

The meetings will not be recorded, but they will be open to anyone. The first meeting of the Rustc Reading Club will be November 4th, 2021 at 12:00pm US Eastern time. Hope to see you there!

Mozilla Privacy BlogImplementing Global Privacy Control

We’ve taken initial steps in experimenting with the implementation of Global Privacy Control (GPC) in Firefox.

GPC is a mechanism for people to tell websites to respect their privacy rights under the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA) and legislation in other jurisdictions.

At this moment, GPC is a prerelease feature available for experimental use in Firefox Nightly. Once turned on, it sends a signal to the websites users visit telling them that the user doesn’t want to be tracked and doesn’t want their data to be sold. GPC is getting traction both in California and in Colorado. Now that we expect websites to start honoring GPC, we want to start providing this option to Firefox users.

Mozilla was one of the early supporters of the CCPA and the CPRA and, in 2020, we became one of the founding members of Global Privacy Control. We endorsed this concept because it gives people more control over their data online and sets a path for the enforcement of their privacy rights.

Here is how to turn Global Privacy Control on in Firefox Nightly:

1. Type about:config in the URL bar of your Firefox browser.

2. In the search box, type `globalprivacycontrol`.

3. Toggle `privacy.globalprivacycontrol.enabled` to true.

4. Toggle `privacy.globalprivacycontrol.functionality.enabled` to true.

5. You’re all set!

To make sure GPC is turned on in Firefox Nightly, visit https://globalprivacycontrol.org/. The website will flag if the GPC signal has been detected.

“GPC signal not detected” by the globalprivacycontrol.org website = GPC is not on in your browser

“GPC signal detected” by the globalprivacycontrol.org website = GPC is on in your browser
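Under the hood, the signal travels as an HTTP request header, Sec-GPC: 1 (the proposal also exposes a navigator.globalPrivacyControl property to page scripts). For site operators, here is a minimal sketch of how a server might detect it, assuming Flask; the route and response strings are purely illustrative:

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # GPC-enabled browsers send the "Sec-GPC: 1" request header.
    if request.headers.get("Sec-GPC") == "1":
        return "GPC signal detected"
    return "GPC signal not detected"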

The post Implementing Global Privacy Control appeared first on Open Policy & Advocacy.

Mozilla Privacy BlogPowering Local Innovation in the Global South: Prospects and a Way Forward

Mozilla and Omidyar Network are excited to launch a new global initiative, Powering Local Innovation, which aims to deepen the conversation around “local innovation” in different regions of the Global South. Our partner organizations from the Africa region (AfriLabs, Lawyer’s Hub, African Union Development Agency, Thomson Reuters Foundation, Smart Africa) and from India (Hasgeek) will bring together entrepreneurs, technologists, policymakers, private sector leaders and lawyers for creative dialogue about the technology people use in their various regions, both today and in the years to come.

This initiative aims to support new approaches to local innovation that empower and include African stakeholders. Last year, we supported an event organized by Lawyer’s Hub titled “White Capital, Black Founders”. The event examined how racial, nationality and class biases limit the opportunities that foreign companies and investors offer African innovators. It was followed by a pan-African study conducted by AfriLabs to understand the key challenges facing technology entrepreneurs, such as access to capital, legal and policy barriers, cross-border collaboration, and competition relating to closed online platforms. Meanwhile, Indian regulators have increased policy activity aimed at cultivating ‘national champions’ and promoting the idea of a ‘self-reliant’ Indian consumer ecosystem as a challenge to Silicon Valley’s market dominance. Notably, this wave of policy has met resistance from civil society and parts of the start-up community, who argue that these measures could entrench power among a few domestic business elites and thereby perpetuate the problems associated with Silicon Valley business models.

Building on Mozilla’s Reimagine Open project, the Powering Local Innovation initiative will examine whether and how the foundational principles of openness can be applied to innovation ecosystems in the Global South. Mitchell Baker, CEO of the Mozilla Corporation, has argued that the idea of “open” should not be co-opted by systems and companies that do not uphold the principles of access, empowerment and opportunity that made the original open internet successful.

Over the next two years, our partners will carry out regionally targeted activities that build on their organizational strengths. These will include a series of multi-stakeholder discussions, policy hackathons, news articles, research and workshops aimed at addressing the main challenges facing micro, small and medium enterprises and start-up ecosystems in the Africa region and India, and proposing solutions. These activities will advance the distinct goals of the Africa Innovation Mradi, leveraging Mozilla’s role as a steward of the open web to promote models of innovation grounded in the unique needs of users on the African continent and in India.

What our partners are saying about the Powering Local Innovation initiative:

AfriLabs

At AfriLabs we believe in the power of innovators and entrepreneurs to grow solutions to the biggest challenges in their communities. As a continental ecosystem of 292 hubs across 49 African countries, we work to ensure that opportunities, funding, tools for innovation and technical skills are available to these innovators across the continent regardless of gender, age, location, language and other factors. We are excited to work with Mozilla, Omidyar and members of our community to power local innovation and catalyse scalable, bold and inclusive solutions rooted in Africa.

AUDA-NEPAD

The African Union Development Agency (AUDA-NEPAD) is pleased to join the various partners on this initiative, which will contribute to local innovation across the continent. Fostering openness and frank conversations to improve Africa’s innovation ecosystem is key to achieving the Africa We Want, as set out in the AU’s Agenda 2063.

Hasgeek

Hasgeek fosters conversations among technologists to encourage the spread of good ideas and advance the ecosystem as a whole. Hasgeek is a partner in the Powering Local Innovation in the Global South initiative to promote critical thinking about the role of open source software in innovation in India, and about the challenges small and medium enterprises face in adopting, using and leveraging open source software for innovation. Hasgeek will look at experiences on the ground from a practitioner’s perspective.

Lawyers Hub

Lawyers Hub was founded to serve Africa on digital policy and justice innovation by providing innovative policy and technology-driven solutions. We are therefore delighted to take part in Mozilla’s Powering Local Innovation in the Global South initiative, because open, local innovation is a key vehicle for advancing local policy conversations on emerging and systemic issues. This project will steer toward an environment of robust innovation policy, promoting a healthier internet and safeguarding digital rights in the African region.

Omidyar Network

Omidyar Network’s funding for this work involved supporting civil society organizations in Africa, and independent journalists, to strengthen the policy and design of the many technology innovation policies being put forward by African governments. This work supports and expands our portfolio focused on strengthening regulation in the digital economy, improving African institutions’ influence over emerging technology, and tracking digital transformation efforts across Africa.

Thomson Reuters Foundation

“We are delighted to partner with Mozilla and Omidyar Network on this meaningful initiative. Guided by our Trust Principles of accuracy, integrity and freedom from bias, we will use the power of our journalism and our reach to shed light on the under-reported – and little understood – issues of innovation and digital rights across the African continent,” said Antonio Zappulla, CEO of the Thomson Reuters Foundation.

About the Africa Innovation Mradi

The Powering Local Innovation initiative is a project of Mozilla’s Africa Innovation Mradi, a programme that seeks to foster an ecosystem of allies working toward a healthier internet and to promote innovation grounded in the unique needs of users in the African region.

To find out more, or if you have questions, please contact Noémie Hailu, Programme Manager, Africa Innovation Mradi, nhailu@mozilla.com.

Mozilla: www.mozilla.org

The post Powering Local Innovation in the Global South: Prospects and a Way Forward appeared first on Open Policy & Advocacy.

Mozilla Privacy BlogMozilla and Omidyar Network launch new Reimagine Open initiative: Powering Local Innovation in the Global South

Powering Local Innovation in the Global South: Prospects and a Way Forward

Mozilla and the Omidyar Network are thrilled to launch the new global initiative, Powering Local Innovation, focused on deepening the conversation around “local innovation” within different regions in the Global South. Our organizational partners in the Africa region (AfriLabs, Lawyer’s Hub, African Union Development Agency, Thomson Reuters Foundation, Smart Africa), and in India (Hasgeek), will bring together entrepreneurs, technologists, activists, policymakers, private sector leaders and lawyers for creative dialogue around the present and future of user technology innovation in their regions.

This initiative aims to support local innovation approaches that are empowering and inclusive of African stakeholders. Last year, we supported an event convened by Lawyer’s Hub titled “White Capital, Black Founders” that interrogated how race, nationality, and class biases limit the opportunities available to African innovators by foreign companies and investors. This was followed by a pan-African study conducted by AfriLabs on the key challenges faced by African technology entrepreneurs, that range from access to capital, legal and policy barriers, cross border collaboration, to competition as it relates to closed platforms. Meanwhile, Indian regulators have increased policy activity aimed at cultivating ‘national champions’ as well as promoting the idea of a ‘self-reliant’ Indian consumer technology ecosystem as a challenge to Silicon Valley market dominance. Notably, this wave of policy has received push back from both civil society and some sections of the start-up community who argue these moves could potentially entrench concentration of power among select domestic business elites, and risk replicating concerns with Silicon Valley business models.

Expanding on Mozilla’s Reimagine Open project, the Powering Local Innovation initiative will also explore how and if foundational principles of openness can be applied to innovation ecosystems in the Global South. Mitchell Baker, CEO of Mozilla Corporation, has advocated that the concept of “open” should not be co-opted by systems and companies that don’t uphold the principles of access, empowerment, and opportunity that made the original open Internet so successful.

Over the next two years, our partners will implement regionally tailored activities that build on their organizational strengths. These activities will include a series of multi-stakeholder discussions, policy hackathons, news articles, research, and workshops, aimed at tackling key challenges and propose solutions facing the local Micro, Small and Medium Enterprises, and startup ecosystems in the Africa region and India. These activities will further the unique goals of the Africa Innovation Mradi, to leverage Mozilla’s role as stewards of the open web to promote models of innovation that are grounded in the unique needs of users in the African continent and India.

What our partners are saying about the Powering Local Innovation in the Global South initiative:

AfriLabs

At AfriLabs we believe in the power of innovators and entrepreneurs to develop solutions for the biggest challenges in their own communities. As a continental ecosystem of 292 hubs in 49 African countries we work to ensure that opportunities, funding, tools for innovation and technical skills are available to these innovators across the continent regardless of gender, age, location, language and other factors. We are excited to work with Mozilla, Omidyar, and members of our community to power local innovation and catalyse scalable, bold and inclusive solutions rooted in Africa.

AUDA-NEPAD

“The African Union Development Agency (AUDA-NEPAD) is pleased to associate with the various partners on the Mradi initiative which will contribute to local innovation across the continent. Fostering openness and frank conversations to enhance Africa’s innovation ecosystem is key towards attaining the Africa We Want as espoused in AU’s Agenda 2063”

Hasgeek

Hasgeek fosters conversations between technologists to encourage the spread of good ideas and advance the ecosystem as a whole. Hasgeek is a partner in The Powering Local Innovation in the Global South Initiative to promote critical thinking about open source’s role in innovation in India, and the challenges that Small and Medium Enterprises (SMEs) face with respect to adoption and use of open source and leveraging open source for innovation. Hasgeek will look at experiences from the ground, and from practitioners’ lenses.

Lawyers Hub

Lawyers Hub exists to serve Africa on Digital Policy and Justice Innovation by providing innovative policy and technology-driven solutions.  We are therefore elated to be participating in Mozilla’s The Powering Local Innovation in the Global South Initiative Mradi because local open innovation is the key driver to the development of local policy conversations on emerging and systemic issues. This project will culminate into a robust policy  innovation ecosystem, promoting a  healthier internet and safeguarding of digital rights  in the African region.

Omidyar Network

Omidyar Network’s funding of this work was linked to supporting civil society actors in Africa and independent journalists so as to strengthen regulations in the digital economy, improve proposals for emerging technology by African institutions, and track digital transformation efforts in Africa.

Thomson Reuters Foundation

We are thrilled to be partnering with Mozilla and Omidyar Network on this meaningful initiative. Guided by our Trust Principles of accuracy, impartiality and freedom from bias, we will be using the power of our journalism and reach to shed light on the under-reported – and little understood – issues of innovation and digital rights across the African continent,” said Antonio Zappulla, CEO of the Thomson Reuters Foundation.

About the Africa Innovation Mradi

The Powering Local Innovation in the Global South initiative is a project of Mozilla’s Africa Innovation Mradi, a programme that seeks to foster an ecosystem of allies working toward a healthier internet, and promoting innovation grounded in the unique needs of users in the African region.

To find out more / for inquiries, please contact Noémie Hailu, Programme Manager, Africa Innovation Mradi, nhailu@mozilla.com

The post Mozilla and Omidyar Network launch new Reimagine Open initiative: Powering Local Innovation in the Global South appeared first on Open Policy & Advocacy.

Karl DubostWebcompat issues and the bots!

Some ideas and contexts around auto-discovering webcompat issues.


Recently Brian Grinstead asked me:

Are you familiar with this?

to which I answered: yes, since 2018. And I remembered the challenges, so it's probably worth doing a bit of history on identifying webcompat issues. The objectives are often:

  1. How can we massively test websites and their different renderings across browsers?
  2. How can we reduce the human time spent on manually testing sites?
  3. Can we discover the types of issues automatically?

I have been doing webcompat work since October 2010 (when I started working at Opera Software with the amazing Opera devrel team). There's no perfect technique, but there are a couple of things you can try.

Screenshots Comparison

We often associate webcompat issues with sites which are not looking the same in two different browsers. It's a simplistic approximation, but it can help with some types of webcompat issues.

  • Mobile versus desktop

    Some websites will adjust their content depending on the user agent string. They will either deliver specific content, or redirect to a domain which is friendly to mobile or desktop. This can be detected directly from the homepage of the website: you could quickly identify whether a site sends the same design/content to Firefox on Android, Safari on iOS or a Blink browser on Android (a sketch of this check follows this list). This is less and less meaningful, as many websites in the last ten years have switched to responsive design, where the content automatically adjusts to the size of the screen.

  • Rendering Issues

    This is slightly more complex. There might be multiple issues with regard to rendering; I'll talk about the caveats later. This could potentially identify a wrong color, a wrong position of boxes, or a difference in details such as scrollbars or border radii.
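To illustrate the first check, here is a minimal sketch comparing where a site sends desktop and mobile visitors. It assumes the requests library; the user agent strings and URL are only examples:

import requests

DESKTOP_UA = "Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
MOBILE_UA = "Mozilla/5.0 (Android 11; Mobile; rv:94.0) Gecko/94.0 Firefox/94.0"

def final_url(url, ua):
    # Follow any redirects and report where this user agent ends up.
    return requests.get(url, headers={"User-Agent": ua}).url

url = "https://example.com"  # illustrative only
if final_url(url, DESKTOP_UA) != final_url(url, MOBILE_UA):
    print("mobile and desktop visitors end up on different URLs")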

With a simple list of URLs and the WebDriver API, it is possible to fetch websites in Gecko, WebKit and Blink and take a screenshot in each of them. It becomes very easy to test the top 1000 websites in a specific locale, and you can then quickly pick out visually the screenshots which differ.
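As a rough illustration of such a harness, here is a minimal sketch assuming Selenium with geckodriver and chromedriver installed; the URL list is a stand-in:

from selenium import webdriver

URLS = ["https://example.com"]  # stand-in for a top-1000 list

for name, make_driver in [("firefox", webdriver.Firefox), ("chrome", webdriver.Chrome)]:
    driver = make_driver()
    for url in URLS:
        driver.get(url)
        # One screenshot per (browser, site) pair, to be compared later.
        driver.save_screenshot(f"{name}-{url.split('//')[1]}.png")
    driver.quit()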

But we said we wanted to be more effective. We can use a bit of math for this. Let s1 and s2 be the screenshots we want to compare; then we can use a simple library like difflib in Python to compute the similarity of the images.

import difflib

def diff_ratio(s1, s2):
    # quick_ratio() is an upper bound on similarity; 1.0 means identical bytes.
    s = difflib.SequenceMatcher(None, s1, s2)
    return s.quick_ratio()

Then it becomes easy to decide which diff ratio is acceptable for the series of tests we run. After fixing a threshold, this will identify the sites with potential issues. It will not identify the type of issue, and it will not provide a diagnosis.
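For example, with diff_ratio as defined above and the screenshots saved by the harness sketched earlier (the threshold is arbitrary and would need tuning against a hand-labelled sample):

THRESHOLD = 0.95  # arbitrary cut-off, for illustration only

with open("firefox-example.com.png", "rb") as f1, open("chrome-example.com.png", "rb") as f2:
    if diff_ratio(f1.read(), f2.read()) < THRESHOLD:
        print("potential webcompat issue, needs human review")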

And the method has some limitations which are interesting to understand if we want to be effective in pre-filtering the issues.

Some Limitations Around Screenshots Comparison

The screenshots might be different, but that doesn't necessarily mean there is a webcompat issue. Here are some cases:

  • Anti-Tracking Mechanisms

    Every browser has its own strategy with regard to tracking protection. Browsers break websites on purpose to reduce user fingerprinting; hence screenshots of the same site might produce different results.

  • A/B Testing

    Some sites run A/B scenarios in search of a more profitable user experience, sending two different versions of the site to different users. If one browser lands in one pool and the other browser in another pool at the moment of the test, the screenshots will be different.

  • Android/iOS banner for apps

    Testing the rendering of a browser on iOS against a browser on Android will produce different results, as the app banner will display and link to different stores.

  • Dynamic Content (News sites/Social Network)

    There's a big category of websites where the content changes or rotates between each reload. Carousels, ads, news articles, user posts, etc. are all likely to modify the screenshots between two queries in the same browser.

  • Tier 1

    Some sites provide a different experience to different browsers. This one is more subtle to deal with, as it stems from a business decision. Compare, for example, the results of Google Search on Firefox for Android and on Google Chrome: Google Chrome definitely receives the Tier 1 experience, while other browsers receive different content. The diagnosis here is not technical; it is about business priorities.

A Quick Summary About autowebcompat

autowebcompat, which Brian was mentioning, is a nice project from Marco Castelluccio that attempts to auto-detect web compatibility issues. Basically, the code tries to learn whether screenshots for a similar set of interactions in two different browsers produce the same end result; the silver lining is that if there is a difference, there's probably something to understand better. The project used the issues already reported on webcompat.com. In that sense it's already biased by the fact that the issues had already been identified as being different, but it makes it possible to train a model on what creates a webcompat issue.

Training A Bot To Identify Valid Issues

Recently, Ksenia (Mozilla Webcompat team) adjusted BugBug to make it work on GitHub. It helped the webcompat team to move away from the old ML classifier to the BugBug infrastructure.

It identifies already-reported issues and closes the ones which have features similar to previous invalid bugs. Invalid here means not a webcompat issue: some sites are broken in all browsers, and that doesn't constitute a webcompat issue.

Compatipede, Another Project For Auto Webcompat

Compatipede is a project which predates autowebcompat (started in October 2013!) with the intent to identify more parameters and extend the scope of tests.

  • Equal redirects
  • CSS style compatibility
  • Source code compatibility
  • Other custom tests

This was quite interesting as it was trying to explore the unseen issues and avoid the pitfalls of screenshots.

It also had a modular architecture, providing a system of custom plugins to run probes on the payloads sent by the website.

SiteCompTester

In the same spirit as Compatipede, SiteCompTester was an extension which made it possible to target some types of issues and would surface bugs associated with a specific list of known issues. This made it easier to diagnose a website.

Template Extraction Mining

The variability of content may be tamed by using a mechanism such as templatemaker. This is a clever little tool which extracts the common features of a series of texts to produce a template.

So, for a news website, we could imagine running templatemaker with one browser for a couple of days to extract its templates, and doing the same in parallel with another browser. Then we would compare the templates instead of comparing two unique renderings of the website. That would probably make it possible to better understand the variability of certain features. This could be applied to markup, to JavaScript, to HTTP headers. A toy version of this idea is sketched below.
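To avoid misquoting templatemaker's actual API, here is a toy stand-in built on difflib that captures the same idea: keep the spans two captures share, and punch a hole wherever they differ.

import difflib

def common_template(a, b, hole="{}", min_size=4):
    # Keep spans present in both captures; mark variable regions with a hole.
    matcher = difflib.SequenceMatcher(None, a, b)
    parts, last_a = [], 0
    for block in matcher.get_matching_blocks():
        if block.size and block.size < min_size:
            continue  # ignore tiny coincidental matches
        if block.a > last_a:
            parts.append(hole)
        parts.append(a[block.a:block.a + block.size])
        last_a = block.a + block.size
    return "".join(parts)

print(common_template("Breaking: storm hits", "Breaking: vote counts"))
# prints "Breaking: {}"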

Webcompat Auto-Detection Caveats

The issue with auto-detection of webcompat issues is that we don't know what is broken before someone experiences it in real life. The level of interaction required is really delicate.

And that's why the people working on triage and diagnosis in the Mozilla webcompat team are top-notch:

  • Oana and Raul triage issues that often arrive with only a poor description from the reporter.
  • Ksenia, Dennis and Thomas relentlessly diagnose minified, obfuscated code to decipher what is breaking on the current site.

Auto-Discovery Of Webcompat

Auto-discovery may work in very specific use cases where we know what we are trying to identify as an issue. Let's say we have already identified a pattern in one bug and we want to understand to what extent it affects other websites. Then a framework going through sites and searching for this pattern might reveal potential webcompat issues, as sketched below.
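As a concrete (if simplified) example of such a framework, assuming a directory of previously saved page sources and a made-up breakage pattern:

import pathlib, re

# Hypothetical pattern taken from a previously diagnosed bug.
PATTERN = re.compile(r"-webkit-appearance")

for page in pathlib.Path("saved-pages").glob("*.html"):
    if PATTERN.search(page.read_text(errors="ignore")):
        print(f"{page.name}: matches the known breakage pattern")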

Targeted surveys are the key to understanding the priority of some issues.

Otsukare!

Mozilla Security BlogSecuring the proxy API for Firefox add-ons

Add-ons are a powerful way to extend and customize Firefox. At Mozilla, we are committed not only to supporting WebExtensions APIs, but also ensuring the safety and reliability of the ecosystem for the long term.

In early June, we discovered add-ons that were misusing the proxy API, which is used by add-ons to control how Firefox connects to the internet. These add-ons interfered with Firefox in a way that prevented users who had installed them from downloading updates, accessing updated blocklists, and updating remotely configured content.

In total these add-ons were installed by 455k users.

This post outlines the steps we have taken to mitigate this issue as well as provide details of what users should do to check if they are affected. Developers of add-ons that use the proxy API will find some specific instructions below that are required for future submissions.

 

What have we done to address this?

The malicious add-ons were blocked, to prevent installation by other users.

To prevent additional users from being impacted by new add-on submissions misusing the proxy API, we paused approvals for add-ons that used the proxy API until fixes were available for all users.

Starting with Firefox 91.1, Firefox includes changes to fall back to direct connections when an important request (such as one for updates) made via a proxy configuration fails. Ensuring these requests complete successfully helps us deliver the latest important updates and protections to our users. We also deployed a system add-on named “Proxy Failover” (ID: proxy-failover@mozilla.com) with additional mitigations, which has been shipped to both current and older Firefox versions.

 

As a Firefox user, what should I do next?

It is always a good idea to keep Firefox up to date, and if you’re using Windows, to make sure Microsoft Defender is running. Together, Firefox 93 and Defender will make sure you’re protected from this issue.

First, check what version of Firefox you are running. Assuming you have not disabled updates, you should be running at minimum the latest release version, which is Firefox 93 as of today (or Firefox ESR 91.2). If you are not running the latest version and have not disabled updates, you may be affected by this issue. Try updating Firefox first: recent versions of Firefox come with an updated blocklist that automatically disables the malicious add-ons. If that doesn’t work, there are a few ways to fix this:

  • Search for the problematic add-ons and remove them.
    1. Visit the Troubleshooting Information page.
    2. In the Add-ons section, search for one of the following entries:
      Name: Bypass
      ID: {7c3a8b88-4dc9-4487-b7f9-736b5f38b957}
      Name: Bypass XM
      ID: {d61552ef-e2a6-4fb5-bf67-8990f0014957}
      Please make sure the ID matches exactly, as there might be other, unrelated add-ons using those or similar names. If none of those IDs are shown in the list, you are not affected.
      If you find a match, follow these instructions to remove the add-on(s).

 

As a Firefox add-on developer, what should I do next?

Note: The following only applies to add-ons that require the use of the proxy API.

We are asking all developers requiring the proxy API to start including a strict_min_version key in their manifest.json files targeting “91.1” or above as shown in this example:

"browser_specific_settings": {
  "gecko": {
    "strict_min_version": "91.1"
  }
}

Setting this explicitly will help us to expedite review for your add-on; thank you in advance for helping us to keep Firefox users secure.

 

In Summary

We take user security very seriously at Mozilla. Our add-on submission process includes automated and manual reviews that we continue to evolve and improve in order to protect Firefox users.

If you uncover a security vulnerability, please report it via our bug bounty program.

The post Securing the proxy API for Firefox add-ons appeared first on Mozilla Security Blog.

Chris H-CThis Week in Glean: The Three Roles of Data Engagements

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

I’ve just recently started my sixth year working at Mozilla on data and data-adjacent things. In those years I’ve started to notice some patterns in how data is approached, so I thought I’d set them down in a TWiG because Glean’s got a role to play in them.

Data Engagements

A Data Engagement is when there’s a question that needs to engage with data to be answered. Something like “How many bookmarks are used by Firefox users?”.

(No one calls these Data Engagements but me, and I only do because I need to call them _something_.)

I’ve noticed three roles in Data Engagements at Mozilla:

  1. Data Consumer: The Question-Asker. The Temperature-Taker. This is the one who knows what questions are important, and is frustrated without an answer until and unless data can be collected and analysed to provide it. “We need to know how many bookmarks are used to see if we should invest more in bookmark R&D.”
  2. Data Analyst: The Answer-Maker. The Stats-Cruncher. This is the one who can use Data to answer a Consumer’s Question. “Bookmarks are used by Canadians more than Mexicans most of the time, but only amongst profiles that have at least one bookmark.”
  3. Data Instrumentor: The Data-Digger. The Code-Implementor. This one can sift through product code and find the correct place to collect the right piece of data. “The Places database holds many things, we’ll need to filter for just bookmarks to count them.”

(diagrams courtesy of :brizental)

It’s through these three working in concert — The Consumer having a question that the Instrumentor instruments to generate data the Analyst can analyse to return an answer back to the Consumer — that a Data Engagement succeeds.

At Mozilla, Data Engagements succeed very frequently in certain circumstances. The Graphics team answers many deeply-technical questions about Firefox running in the wild to determine how well WebRender is working. The Telemetry team examines the health of the data collection system as a whole. Mike Conley’s old Tab Switcher Dashboard helped find and solve performance regressions in (unsurprisingly) Tab Switching. These go well, and there’s a common thread here that I think is the secret of why: 

In these and the other high-success-rate Data Engagements, all three roles (Consumer, Analyst, and Instrumentor) are embodied by the same person.

It’s a common problem in the industry. It’s hard to build anything at all, but it’s least hard to build something for yourself. When you yourself are the Question-Asker, the Answer-Maker, and the Data-Digger, you don’t often mistakenly dig the wrong data and create an answer that isn’t to the question you had in mind. And when you accidentally do make a mistake (because, remember, this is hard), you can go back in and change the instrumentation, update the analysis, or reword the question.

But when these three roles are in different parts of the org, or different parts of the planet, things get harder. Each role is now trying to speak the others’ languages and infer enough context to do their jobs independently.

In comes the Data Org at Mozilla, which has had great successes to date on the theme of “Making it easier for anyone to be their own Analyst”. Data Democratization. When you’re your own Analyst, there are fewer situations where the roles are disparate: Instrumentors who are their own Analysts know when data won’t be the right shape to answer their own questions, and Consumers who are their own Analysts know when their questions aren’t well-formed.

Unfortunately we haven’t had as much success in making the other roles more accessible. Everyone can theoretically be their own Consumer: curiosity in a data-rich environment is as common as lanyards at an industry conference[1]. Asking _good_ questions is hard, though. Possible, but hard. You could just about imagine someone in a mature data organization becoming able to tell the difference between questions that are important and questions that are just interesting through self-serve tooling and documentation.

As for being your own Instrumentor… that is something that only a small fraction of folks have the patience to do. I (and Mozilla’s Community Managers) welcome you to try: it is possible to download and build Firefox yourself. It’s possible to find out which part of the codebase controls which pieces of UI. It’s… well, it’s more than possible, it’s actually quite pleasant to add instrumentation using Glean… but on the whole, if you are someone who _can_ Instrument Firefox Desktop you probably already have a copy of the source code on your hard drive. If you check right now and it’s not there, then there’s precious little likelihood that will change.

(Unless you come and work for Mozilla, that is.)

So let’s assume for now that democratizing instrumentation is impossible. Why does it matter? Why should it matter that the Consumer is a separate person from the Instrumentor?

Communication

Each role communicates with each other role with a different language:

  • Consumers talk to Instrumentors and Analysts in units of Questions and Answers. “How many bookmarks are there? We need to know whether people are using bookmarks.”
  • Analysts speak Data, Metadata, and Stats. “The median number of bookmarks is, according to a representative sample of Firefox profiles, twelve (confidence interval 99.5%).”
  • Instrumentors speak Data and Code. “There’s a few ways we delete bookmarks, we should cover them all to make sure the count’s correct when the next ping’s sent”

Some more of the Data Org and Mozilla’s greatest successes involve supplying context at the points in a Data Engagement where they’re most needed. We’ve gotten exceedingly good at loading context about data (metadata) to facilitate communication between Instrumentors and Analysts with tools like Glean Dictionary.

Ah, but once again the weak link appears to be the communication of Questions and Answers between Consumers and Instrumentors. Taking the above example, does the number of bookmarks include folders?

The Consumer knows, but the further away they sit from the Instrumentor, the less likely that the data coming from the product and fueling the analysis will be the “correct” one.

(Either including or excluding folders would be “correct” for different cases. Which one do you think was “more correct”?)

So how do we improve this?

Glean

Well, actually, Glean doesn’t have a solution for this. I don’t actually know what the solutions are. I have some ideas. Maybe we should share more context between Consumers and Instrumentors somehow. Maybe we should formalize the act of question-asking. Maybe we should build into the Glean SDK a high-enough level of metric abstraction that instead of asking questions, Consumers learn to speak a language of metrics.

The one thing I do know is that Glean is absolutely necessary to making any of these solutions possible. Without Glean, we have too many systems that are fractally complex for any context to be relevantly shared. How can we talk about sharing context about bookmark counts when we aren’t even counting things consistently[2]?

Glean brings that consistency. And from there we get to start solving these problems.

Expect me to come back to this realm of Engagements and the Three Roles in future posts. I’ve been thinking about:

  • how tooling affects the languages the roles speak amongst themselves and between each other,
  • how the roles are distributed on the org chart,
  • which teams support each role,
  • how Data Stewardship makes communication easier by adding context and formality,
  • how Telemetry and Glean handle the same situations in different ways, and
  • what roles Users play in all this. No model about data is complete without considering where the data comes from.

I’m not sure how many I’ll actually get to, but at least I have ideas.

:chutten

[1] Other rejected similes include “as common as”: maple syrup on Canadian breakfast tables, frustration in traffic, sense isn’t.

[2] Counting is harder than it looks.

Firefox NightlyThese Weeks in Firefox: Issue 102

Highlights

Friends of the Firefox team
Fixed more than one bug

  • Itiel

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

  • Fixed a bug related to the addon descriptions not being localized as expected when switching the browser to a different locale – Bug 1712024
  • Introduced a new “extensions.logging.productaddons.level” about:config pref to control log level related to GMP updates independently from the general AddonManager/XPIProvider logging level – Bug 1733670  

WebExtensions Framework

WebExtension APIs

  • Starting from Firefox 94, a new partitionKey property is being introduced in the cookies API; this new property is meant to help extensions better handle cookies partitioned by the dFPI feature

Downloads Panel

  • Many tests being addressed and fixed with improvements pref enabled (ticket)
  • [kpatenio] New context menu item being worked on for new pref (ticket)

Fission

  • The Fission experiment on Release has concluded, and the data science team is now analyzing the data. So far, nothing has jumped out to us showing stability or performance issues.
  • Barring any serious issues in the data analysis, the plan is to slowly roll out Fission to more release channel users in subsequent releases.

Fluent

Password Manager 

Performance

  • dthayer landed a fix that improves scaling of iframes with Fission enabled
  • mconley helped harry find a solution for a white flash that can occur when a theme is applied to about:home during the first boot
  • Special shout-out to zombie from the WebExtensions team for helping to reduce Base Content JS memory usage by 3-4% on all desktop platforms!
  • We’re starting to get numbers back on how the Fluent migrations have been impacting startup:
    • There appears to be evidence that the localization cycle for the first window is ~32% faster for the 95th percentile of users on Nightly, and ~12% faster for the 75th percentile.
    • Subsequent new windows see localization cycle improvements of ~12% for the 95th percentile
    • TL;DR: Removing DTDs from the main windows has improved startup time and new window opening for some of the slowest machines in our user pool.

Performance Tools

  • Isolated web content processes now display eTLD+1 of their origin in the Firefox Profiler timeline when Fission is enabled.
(Screenshots: before, the Firefox Profiler timeline showed no origin for web content processes; after, with Fission enabled, the eTLD+1 of each isolated process is displayed.)

  • The Gecko Profiler Rust marker API has landed. It is now possible to add a profiler marker from Rust to annotate a part of the code. See the gecko-profiler crate for more information; documentation is also coming soon.

Search and Navigation

  • Daisuke has replaced the DDG icon with a higher-quality one. Bug 1731538
  • Thanks to Antonin Loubiere for contributing a patch to make ESC actually undo changes in the separate search bar instead of doing nothing, similar to how the address bar behaves. Bug 350079

Screenshots

  • Thanks again to module owner Emma whose last day was Friday. Sam Foster will take over as module owner. 
  • niklas is working on bug 1714234, which fixes screenshot test issues when copying an image to the clipboard.

The Rust Programming Language BlogAnnouncing Rust 1.56.0 and Rust 2021

The Rust team is happy to announce a new version of Rust, 1.56.0. This stabilizes the 2021 edition as well. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.56.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.56.0 on GitHub.

What's in 1.56.0 stable

Rust 2021

We wrote about plans for the Rust 2021 Edition in May. Editions are a mechanism for opt-in changes that may otherwise pose backwards compatibility risk. See the edition guide for details on how this is achieved. This is a smaller edition, especially compared to 2018, but there are still some nice quality-of-life changes that require an edition opt-in to avoid breaking some corner cases in existing code. See the new chapters of the edition guide below for more details on each new feature and guidance for migration.

Disjoint capture in closures

Closures automatically capture values or references to identifiers that are used in the body, but before 2021, they were always captured as a whole. The new disjoint-capture feature will likely simplify the way you write closures, so let's look at a quick example:

// 2015 or 2018 edition code
let a = SomeStruct::new();

// Move out of one field of the struct
drop(a.x);

// Ok: Still use another field of the struct
println!("{}", a.y);

// Error: Before 2021 edition, tries to capture all of `a`
let c = || println!("{}", a.y);
c();

To fix this, you would have had to extract something like let y = &a.y; manually before the closure to limit its capture. Starting in Rust 2021, closures will automatically capture only the fields that they use, so the above example will compile fine!

This new behavior is only activated in the new edition, since it can change the order in which fields are dropped. As for all edition changes, an automatic migration is available, which will update your closures for which this matters by inserting let _ = &a; inside the closure to force the entire struct to be captured as before.

Migrating to 2021

The guide includes migration instructions for all new features, and in general transitioning an existing project to a new edition. In many cases cargo fix can automate the necessary changes. You may even find that no changes in your code are needed at all for 2021!

However small this edition appears on the surface, it's still the product of a lot of hard work from many contributors: see our dedicated celebration and thanks tracker!

Cargo rust-version

Cargo.toml now supports a [package] rust-version field to specify the minimum supported Rust version for a crate, and Cargo will exit with an early error if that is not satisfied. This doesn't currently influence the dependency resolver, but the idea is to catch compatibility problems before they turn into cryptic compiler errors.

New bindings in binding @ pattern

Rust pattern matching can be written with a single identifier that binds the entire value, followed by @ and a more refined structural pattern, but this has not allowed additional bindings in that pattern -- until now!

struct Matrix {
    data: Vec<f64>,
    row_len: usize,
}

// Before, we need separate statements to bind
// the whole struct and also read its parts.
let matrix = get_matrix();
let row_len = matrix.row_len;
// or with a destructuring pattern:
let Matrix { row_len, .. } = matrix;

// Rust 1.56 now lets you bind both at once!
let matrix @ Matrix { row_len, .. } = get_matrix();

This actually was allowed in the days before Rust 1.0, but that was removed due to known unsoundness at the time. With the evolution of the borrow checker since that time, and with heavy testing, the compiler team determined that this was safe to finally allow in stable Rust!

Stabilized APIs

The following methods and trait implementations were stabilized.

The following previously stable functions are now const.

Other changes

There are other changes in the Rust 1.56.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.56.0

Many people came together to create Rust 1.56.0 and the 2021 edition. We couldn't have done it without all of you. Thanks!

Mark SurmanExploring better data stewardship at Mozilla

Over the last few years, Mozilla has increasingly turned its attention to the question of ‘how do we build more trustworthy AI?’ Data is at the core of this question. Who has our data? What are they using it for? Do they have my interests in mind, or only their own? Do I trust them?

We decided earlier this year that ‘better data stewardship’ should be one of the three big areas of focus for our trustworthy AI work.

One part of this focus is supporting the growing field of people working on data trusts, data cooperatives and other efforts to build trust and shift power dynamics around data. In partnership with Luminate and Siegel, we launched the Mozilla Data Futures Lab in March as a way to drive this part of the work.

At the same time, we have started to ask ourselves: how might Mozilla itself explore and use some of these new models of responsible data governance? We have long championed more responsible use of data, with everything we do living up to the Mozilla Lean Data Practices. The question is: are there ways we can go further? Are there ways we can more actively engage with people around their data that build trust — and that help the tech industry shift the way it thinks about and uses data?

This post includes some early learning on these questions. The TLDR: 1. the list of possible experiments is compelling — and vast; and 2. we should start small, looking at how emerging data governance models might apply to areas where we already use data in our products and programs.

Digging into more detail: we started looking at these questions in 2020 by asking two leading experts — Sarah Gold from Projects by IF and Sean McDonald from Digital Public — to generate hypothetical scenarios where Mozilla deployed radically new approaches to data governance. These scenarios included three big ideas:

  • Collective Rights Representation: Mozilla could represent the data rights of citizens collectively, effectively forming a ‘data union’. This could include negotiating better terms of service or product improvements, or enforcing rights held under regimes like GDPR or CCPA.
  • Data Donation Trust: As Mozilla projects like Rally, Regrets Reporter and Common Voice demonstrate, there can be great power in citizens coming together to donate and aggregate their data. We could take these platforms further by creating a data trust or coop to actively steward and create collective value from this data over time.
  • Consent Management via a Privacy Assistant: a digital assistant powered by a data trust could mediate between citizens and tech companies, handling real time ‘negotiations’ about how their data is used. This would give users more control — and ultimately more leverage over how individuals and companies manage data.

Other scenarios included Mozilla as a consumer steward, creating and building an advocacy infrastructure platform, or managing an industry association. Sarah and Sean have each written up their work and shared in these blog posts: Bringing better data stewardship to life; and A Couch-to-5K Plan for Digital Governance.

This reflective process was at once exciting and sobering. The ideas are compelling — and include things we might do one day (and that we’re even doing now in small ways). But, by their nature, they are without context, leadership or products. Reading these scenarios, the path from a ‘big data governance idea’ to something real in the world wasn’t at all clear to us.

As Sean pointed out in his post: “There isn’t ‘a’ way to design data governance – as a system or as a commercial offering. Beyond this point, the process relies a lot on context, and the unique value a person or organization brings to a process.”

For me, this was really the key ‘aha’ (even though it should have been obvious). We need to start from the places where we have data and context and leaders — not from big ideas. With this in mind, Mozilla Foundation Data Lead, Jackie Lu, and Data Futures Lab Lead, Champika Fernando, have offered to take over this internal exploration by identifying practical ways Mozilla can improve the ways we collect and use data today.

They will begin this work later this year with a review of data governance practices and open questions within Mozilla Foundation, where our trustworthy AI work is housed. This will include a look at data-centric projects like Common Voice and YouTube Regrets Reporter as well as programs like online campaigning and MozFest that rely heavily on the Foundation’s CRM. This work explores questions like: what would it look like for Mozilla Foundation to more fully “walk the talk” when it comes to data stewardship? And, what kind of processes might we need to put in place, to have our own organization’s use of data be a learning opportunity for how we shift power back to people, and imagine new ways to act collectively through, and with, our data? They are starting to explore those questions and more.

In parallel, the Data Futures Lab and Mozilla’s Insights team will be working on a Legal and Policy Playbook for Builders outlining existing regulatory opportunities that can be leveraged for experimentation across the field in various jurisdictions. While the primary audience of this work is external, we will also look at whether there are ways to apply these practices to the Mozilla Foundation’s work internally.

Personally, I believe that new models of responsible data governance have huge potential to shift technology, society and our economy — much like open source and the open web shifted things 20 years ago. I also think that the path to this shift will be driven by people who just start building things differently, inventing new models and shifting power as they go. I’m hoping that looking at new, more responsible ways to steward data every day will set Mozilla up to again play a significant role in this kind of innovation and change.

This is part one of a four-part series on how we approach data at Mozilla. Read the others here: Part 2; Part 3; Part 4.

The post Exploring better data stewardship at Mozilla appeared first on Mark Surman.

Hacks.Mozilla.Org: Hacks Decoded: Thomas Park, Founder of Codepip

Welcome to our Hacks: Decoded Interview series!

Once a month, Mozilla Foundation’s Xavier Harding speaks with people in the tech industry about where they’re from, the work they do and what drives them to keep going forward. Make sure you follow Mozilla’s Hacks blog to find more articles in this series and make sure to visit the Mozilla Foundation site to see more of our org’s work. 

Meet Thomas Park 

Thomas Park is a software developer based in the U.S. (Philadelphia, specifically). Previously, he was a teacher and researcher at Drexel University and even worked at Mozilla Foundation for a stint. Now, he’s the founder of Codepip, a platform that offers games that teach players how to code. Park has made a couple games himself: Flexbox Froggy and Grid Garden.

We spoke with Thomas over email about coding, his favourite apps and his past life at Mozilla. Check it out below and welcome to Hacks: Decoded.

Where’d you get your start, Thomas? How did you end up working in tech, what was the first piece of code you wrote, what’s the Thomas Park origin story?

The very first piece of code I wrote was in elementary school. We were introduced to Logo, an educational programming language that was used to draw graphics with a turtle (a little cursor that was shaped like the animal). I drew a rudimentary weapon that shot an animated laser beam, with the word “LAZER” misspelled under it.

Afterwards, I took an extremely long hiatus from coding. Dabbled with HyperCard and HTML here and there, but didn’t pick it up in earnest until college.

Post-college, I worked in the distance education department at the Center for Talented Youth at Johns Hopkins University, designing and teaching online courses. It was there I realized how much the technology we used mediated the experience of our students. I also realized how much better the design of this tech should be. That motivated me to go to grad school to study human-computer interaction, with a focus on educational technology. I wrote a decent amount of code to build prototypes and analyze data during my time there.

What is Codepip? What made you want to create it? 

Codepip is a platform I created for coding games that help people learn HTML, CSS, JavaScript, etc. The most popular game is Flexbox Froggy.

Codepip actually has its roots in Mozilla. During grad school, I did an internship with the Mozilla Foundation. At the time, they had a code editor geared toward teachers and students called Thimble. For my internship, I worked with Mozilla employees to integrate a tutorial feature into Thimble.

Anyway, through this internship I got to attend Mozilla Festival. And there I met many people who did brilliant work inside and outside of Mozilla. One was an extremely talented designer named Luke Pacholski. By that time, he had created CSS Diner, a game about CSS selectors. And we got to chatting about other game ideas.

After I returned from MozFest, I worked weekends for about a month to create Flexbox Froggy. I was blown away by the reception, from both beginners who wanted to learn CSS, to more experienced devs curious about this powerful new CSS module called flexbox. To me, this affirmed that coding games could make a good complement to more traditional ways of learning. Since then, I’ve made other games that touch on CSS grid, JS math, HTML shortcuts with Emmet, and more.

Gamified online learning has become quite popular in the past couple of years. What are some old school methods that you still recommend and use?

Consulting the docs, if you can call that old school. I often visit the MDN Web Docs to learn some aspect of CSS or JS. The articles are detailed, with plenty of examples.

On occasion I find myself doing a deep dive into the W3C standards, though navigating the site can be tricky.

Same goes for any third-party library or framework you’re working with — read the docs!

What’s one thing you wish you knew when you first started to code?

I wish I knew git when I first started to code. Actually, I wish I knew git now.

It’s never too early to start version controlling your projects. Sign up for a free GitHub account, install GitHub’s client or learn a handful of basic git commands, and back up your code. You can opt for your code to be public if you’re comfortable with it, private if not. There’s no excuse.

Plus, years down the line when you’ve mastered your craft, you can get some entertainment value from looking back at your old code.

Whose work do you admire right now? Who should more people be paying attention to?

I’m curious how other people answer this. I feel like I’m out of the loop on this one.

But since you asked, I will say that when it comes to web design with high stakes, the teams at Stripe and Apple have been the gold standard for years. I’ll browse their sites and get inspired by the many small, almost imperceptible details that add up to something magical. Or something in your face that blows my mind.

On a more personal front, there’s the art of Diana Smith and Ben Evans, which pushes the boundaries of what’s possible with pure CSS. I love how Lynn Fisher commits to weird side projects. And I admire the approachability of Josh Comeau’s writings on technical subjects.

What’s a part of your journey that many may not realize when they look at your resume or LinkedIn page?

My resume tells a cohesive story that connects the dots of my education and employment. As if there was a master plan that guided me to where I am.

The truth is I never had it all figured out. I tried some things I enjoyed, tried other things which I learned I did not, and discovered whole new industries that I didn’t even realize existed. On the whole, the journey has been rewarding, and I feel fortunate to be doing work right now that I love and feel passionate about. But that took time and is subject to change.

Some beginners may feel discouraged that they don’t have their career mapped out from A to Z, like everyone else seemingly does. But all of us are on our own journeys of self-discovery, even if the picture we paint for prospective employers, or family and friends, is one of a singular path.

What’s something you’ve realized since we’ve been in this pandemic? Tech-related or otherwise?

Outside of tech, I’ve realized how grateful I am for all the healthcare workers, teachers, caretakers, sanitation workers, and food service workers who put themselves at risk to keep things going. At times I got a glimpse of what happens without them and it wasn’t pretty.

Tech-related, the pandemic has accelerated a lot of tech trends by years or even decades. Not everything is as stark as, say, Blockbuster getting replaced by Netflix, but industries are irreversibly changing and new technology is making that happen. It really underscores how in order to survive and flourish, we as tech workers have to always be ready to learn and adapt in a fast-changing world.

Okay a random one — you’re stranded on a desert island with nothing but a smartphone. Which three apps could you not live without?

Assuming I’ll be stuck there for a while, I’d definitely need my podcasts. My podcast app of choice has long been Overcast. I’d load it up with some 99% Invisible and Planet Money. Although I’d probably only need a single episode of Hardcore History to last me before I got rescued.

I’d also have Simplenote for all my note-taking needs. When it comes to notes, I prefer the minimalist, low-friction approach of Simplenote to manage my to-dos and projects. Or count days and nights in this case.

Assuming I have bars, my last app is Reddit. The larger subs get most of the attention, but there are plenty of smaller ones with strong communities and thoughtful discussion. Just avoid the financial investing advice from there.

Last question — what’s next for you?

I’m putting the finishing touches on a new coding game called Disarray. You play a cleaning expert who organizes arrays of household objects using JavaScript methods like push, sort, splice, and map, sparking joy in the homeowner.

And planning for a sequel. Maybe a game about databases…

Thomas Park is a software developer living in Philly. You can keep up with his work right here and keep up with Mozilla on Twitter and Instagram. Tune into future articles in the Hacks: Decoded series on this very blog.

The post Hacks Decoded: Thomas Park, Founder of Codepip appeared first on Mozilla Hacks - the Web developer blog.

Chris H-C: Six-Year Moziversary

I’ve been working at Mozilla for six years today. Wow.

Okay, so what’s happened… I’ve been promoted to Staff Software Engineer. Georg and I’d been working on that before he left, and then, well *gestures at everything*. This means it doesn’t really _feel_ that different to be a Staff instead of a Senior, since I’ve been operating at that level for over a year now, but it’s nice that the title caught up. Next stop: well, actually, I think Staff’s a good place for now.

Firefox On Glean did indeed take my entire 2020 at work, and did complete on time and on budget. Glean is now available to be used in Firefox Desktop.

My efforts towards getting folks to actually _use_ Glean instead of Firefox Telemetry in Firefox Desktop have been mixed. The Background Update Task work went exceedingly well… but when there are 2k pieces of instrumentation, you need project management, and I’m trying my best. Now to “just” get buy-in from the powers that be.

I delivered a talk to Ubisoft (yeah, the video game folks) earlier this year. That was a blast and I’m low-key looking for another opportunity like it. If you know anyone who’d like me to talk their ears off about Data and Responsibility, do let me know.

Blogging’s still low-frequency. I rely on the This Week in Glean rotation to give me the kick to actually write long-form ideas down from time to time… but it’s infrequent. Look forward to an upcoming blog post about the Three Roles in Data Engagements.

Predictions for the future time:

  • There will be at least one Work Week planned if not executed by this time next year. Vaccines work.
  • Firefox Desktop will have at least started migrating its instrumentation to Glean.
  • I will still be spending a good chunk of my time coding, though I expect this trend of spending ever more time writing proposals and helping folks on chat will continue.

And that’s it for me for now.

:chutten

Support.Mozilla.Org: What’s up with SUMO – October 2021

Hey folks,

As we enter October, I hope you’re all pumped up to welcome the last quarter of the year and, basically, to wrap up the projects we have for the remainder of the year. With that spirit, let’s start by welcoming the following folks into our community.

Welcome on board!

  1. Welcome to the support forum crazy.cat, Losa, and Zipyio!
  2. Also, welcome to Ihor from Ukraine, Static_salt from the Netherlands, as well as Eduardo and hcasellato from Brazil. Thanks for your contribution to the KB localization!

Community news

  • If you’ve been hearing about Firefox Suggest and are confused about what exactly it is, please read this contributor forum thread to find out more and join our discussion about it.
  • Last month, we welcomed Firefox Focus into the Play Store Support program. We connected the app to Conversocial so now, Play Store Support contributors should be able to reply to Google Play Store reviews for Firefox Focus from the tool. We also prepared this guideline on how to reply to the reviews.
  • Learn more about Firefox 93 here.
  • Another warm welcome for our new content manager, Abby Parise! She made a quick appearance in our community call last month. So go ahead and watch the call if you haven’t!
  • Check out the following release notes from Kitsune during the previous period:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in September!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel too shy to ask questions during the meeting, feel free to add them on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month      Page views   Vs previous month
Sep 2021   8,244,817    -2.57%

* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. K_alex
  5. Julie

KB Localization

Top 10 locales based on total page views

Locale   Sep 2021 pageviews (*)   Localization progress (per Sep 14) (**)
de       8.13%                    100%
zh-CN    7.56%                    100%
fr       6.59%                    88%
es       6.10%                    39%
pt-BR    5.96%                    60%
ja       3.85%                    54%
ru       3.77%                    100%
it       2.22%                    100%
pl       2.09%                    87%
zh-TW    1.91%                    5%

* Locale pageviews is the overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Michele Rodaro
  3. Jim Spentzos
  4. Valery Ledovskoy
  5. Soucet

Forum Support

Forum stats

Month      Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Sep 2021   2274              85.31%                      24.32%                      65.89%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Seburo
  4. Jscher2000
  5. Sfhowes

Social Support

Twitter stats (Sep 2021)

Channel           Total conv   Conv interacted
@firefox          3318         785
@FirefoxSupport   290          240

Top 5 contributors in Q3 2021

  1. Christophe Villeneuve
  2. Felipe Koji
  3. Andrew Truong

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX Desktop 94 (Nov 2)
    • Monochromatic Themes (Personalize Fx by opting into a polished monochromatic theme from a limited set)
    • Avoid interruptions when closing Firefox
    • Fx Desktop addition to Windows App store
    • Video playback testing on macOS (decrease power consumption during full-screen playback)

Firefox mobile

Major Release 2 Mobile (Nov 2)

Area                Feature                             Android   iOS   Focus
Firefox Home        Jump Back in (Open Tabs)            X         X
                    Recently saved/Reading List         X
                    Recent bookmarks                    X         X
                    Customize Pocket Articles           X
Clutter Free Tabs   Inactive Tabs                       X
Better Search       History Highlights in Awesome bar   X
Themes Settings     Themes Settings                     X
  • Check out Android Beta, which has most of the major feature updates
    • More features to come in FX Android V95/iOS V40 and beyond.

Other products / Experiments

  • Mozilla VPN V2.6 (Oct 20)
    • Multi-Account Containers: When used with Mozilla VPN on, MAC allows for even greater privacy by having separate Wireguard tunnels for each container. This will allow users to have tabs exit through different nodes in the same instance of the browser.
  • Firefox Relay Premium – launch (Oct 27)
    • Unlimited aliases
    • Create your own Domain name

Shout-outs!

  • Thanks to Selim and Chris for helping me with Turkish and Polish keywords for Conversocial.
  • Thanks to Wxie for the help in recognizing other zh-cn locale contributors! Thanks for taking the lead. The team is lucky to have you as a locale leader!
  • Props to Julie for her video experiment in the KB and for sharing the stats with the rest of us. Thanks for bringing more colors to our Knowledge Base!
  • Thanks to Jefferson Scher for straightening out the Firefox Suggest confusion on Reddit. That definitely helps people understand the feature better.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

William Lachance: Learning about Psychological Safety at the Recurse Center

Last summer, I took a 6-week sabbatical from my job to attend a virtual “programmers retreat” at the Recurse Center. I thought I’d write up some notes on the experience, with a particular lens towards what makes an environment suited towards learning, innovation, and personal growth.

Some context: I’m currently working as a software engineer at Mozilla, building out our data pipeline and analysis tooling. I’ve been at my current position for more than 10 years (my “anniversary” actually passed while I was out). I started out as a senior engineer in 2011, and was promoted to staff engineer in 2016. In tech-land, this is a really long tenure at a company. I felt like it was time to take a break from my day-to-day, explore some new ideas and concepts, and hopefully expose myself to a broader group of people in my field.

My original thinking was that I would mostly be spending this time building out an interactive computation environment I’ve been working on called Irydium. And I did quite a bit of that. However, I think the main thing I took away from this experience was some insight on what makes a remote environment for knowledge work really “click”. In particular, what makes somewhere feel psychologically safe, and how this feeling allows us to innovate and do our best work.

While the Recurse Center obviously has different goals than an organization that builds and delivers consumer software, I do think there are some things that it does that could be applied to Mozilla (and, likely, many other tech workplaces).

What is the Recurse Center?

Most succinctly, the Recurse Center is a “writer’s retreat for programmers”. It tries to provide an environment conducive to learning and creativity, an opportunity to refine your craft and learn new things, both from the act of programming itself and from interactions with the other like-minded people attending. The Recurse Center admits a wide variety of people, from those who have only been through a coding bootcamp to those who have been in the industry many years, like myself. The main admission criteria, from what I gather, are curiosity and friendliness.

Once admitted, you do a “batch” — either a mini (1 week), half-batch (6 weeks), or a full batch (12 weeks). I did a half-batch.

How does it work (during a global pandemic)?

The Recurse experience used to be entirely in-person, in a space in New York City - if you wanted to go, you needed to move there at least temporarily. Obviously that’s out the window during a Global Pandemic, and all activities are currently happening online. This was actually pretty ideal for me at this point in my life, as it allowed me to participate entirely remotely from my home in Hamilton, Ontario, Canada (near Toronto).

There are a few elements that make “Virtual RC” tick:

  • A virtual space (pictured below) where you can see other people in your cohort. This is particularly useful when you want to jump into a conference room.
  • A shared “calendar” where people can schedule events, either ad hoc (e.g. a one-off social event, discussing a paper) or on a regular basis (e.g. a reading group)
  • A Zulip chat server (which is a bit like Slack) for ad hoc conversation with people in your cohort and alumni. There are multiple channels, covering a broad spectrum of interests.

Why does it work?

So far, what I’ve described probably sounds a lot like any remote tech workplace during the pandemic… and it sort of is! In some ways, my schedule and life while at Recurse didn’t feel all that different from my normal day-to-day. Wake up in the morning, drink coffee, meditate, work for roughly 8 hours, done. Qualitatively, however, my experience at Recurse felt unusually productive, and I learned a lot more than I expected to: not just the core stuff related to Irydium, but also unexpected new concepts like CRDTs, product design, and even how Visual Studio Code syntax highlighting works.

What made the difference? Certainly, not having the normal pressures of a workplace helps - but I think there’s more to it than that. The way RC is constructed reinforces a sense of psychological safety which I think is key to learning and growth.

What is psychological safety and why should I care?

Psychological safety is a bit of a hot topic these days and there’s a lot of discussion about it in management circles. I think it comes down to a feeling that you can take risks and “put yourself out there” without fear that you’ll be ignored, attacked, or ridiculed.

Why is this important? I would argue, because knowledge work is about building understanding — going from a place of not understanding to understanding. If you’re working on anything at all innovative, there is always an element of the unknown. In my experience, there is virtually always a sense of discomfort and uncertainty that goes along with that. This goes double when you’re working around and with people that you don’t know terribly well (and who might have far more experience than you). Are they going to make fun of you for not knowing a basic concept or for expressing an idea that’s “so wrong I don’t even know where to begin”? Or, just as bad, will you not get any feedback on your work at all?

In reality, except in truly toxic environments, you’ll rarely encounter outright abusive behaviour. But the isolation of remote work can breed similar feelings of disquiet and discomfort over time. My sense, after a year of working “hardcore” remote in COVID times, is that our normal workplace rituals of meetings, “stand ups”, and discussions over Slack don’t provide enough space for a meaningful sense of psychological safety to develop. They’re good enough for measuring progress towards agreed-upon goals but a true sense of belonging depends on less tightly scripted interactions among peers.

How the Recurse environment creates psychological safety

But the environment I described above isn’t that different from a workplace, is it? Speaking from my own experience, my coworkers at Mozilla are all pretty nice people. There’s also many channels for informal discussion at Mozilla, and of course direct messaging is always available (via Slack or Matrix). And yet, I still feel there is a pretty large gap between the two experiences. So what makes the difference? I’d say there were three important aspects of Recurse that really helped here: social rules, gentle prompts, and a closed space.

Social rules

There’s been a lot of discussion about community participation guidelines and standards of behaviour in workplaces. In general, these types of policies target really egregious behaviour like harassment: this is a pretty low bar. They aren’t, in my experience, sufficient to create an environment that actually feels safe.

The Recurse Center goes over and above a basic code of conduct, with four simple social rules:

  • No well-actually’s: corrections that aren’t relevant to the point someone was trying to make (this is probably the rule we’re most heavily conditioned to break).
  • No feigned surprise: acting surprised when someone doesn’t know something.
  • No backseat driving: lobbing advice from across the room (or across the online chat) without really joining or engaging in a conversation.
  • No subtle -isms: subtle expressions of racism, sexism, ageism, homophobia, transphobia and other kinds of bias and prejudice.

These rules aren’t “commandments” and you’re not meant to feel shame for violating them. The important thing is that by being there, the rules create an environment conducive to learning and growth. You can be reasonably confident that you can bring up a question or discussion point (or respond to one) and it won’t lead to a bad outcome. For example, you can expect not to be made fun of for asking what a UNIX socket is (and if you are, you can tell the person doing so to stop). Rather than there being an unspoken rule that everyone should already know everything about what they are trying to do, there is a spoken rule that states it’s expected that they don’t.

Working on Irydium, there’s an infinite number of ways I can feel incompetent: this is a requirement when engaging with concepts that I still don’t feel completely comfortable with: parsers, compilers, WebAssembly… the list goes on. Knowing that I could talk about what I’m working on (or something I’m interested in) and that the responses I got would be constructive and directed at the project, not the person, made all the difference. [1]

Gentle prompts

The thing I loved the most about Recurse were the gentle prompts to engage with other people, talk about your work, and get help. A few that I really enjoyed during my time there:

  • The “checkins” channel. People would post what’s going on with their time at RC, their challenges, their struggles. Often there would be little snippets about people’s lives in there, which built up a feeling of community.
  • Hack & Tell: A weekly event where a group of us would get together in a Zoom room, talk about working on or building something, then rejoin the chat an hour later to show off what we accomplished.
  • Coffee Chats: A “coffee chat” bot at RC would pair you with other people in your batch (or alumni) on a cadence of your choosing. I met so many great people this way!
  • Weekly Presentations: At the end of each week, people would sign up to share something that they were working on or learned.

… and I could go on. What’s important are not the specific activities, but their end effect of building connectedness, creating opportunities for serendipitous collaboration and interaction (more than one discussion group came out of someone’s checkin post on Zulip) and generally creating an environment well-suited to learning.

A (semi) closed space

One of the things that makes the gentle prompts above “work” is that you have some idea of who you’re going to be interacting with. Having some predictability about who’s going to see what you post and engage with you (that they were vetted by RC’s interview process and are committed to the above-mentioned social rules) gives you some confidence to be vulnerable and share things that you might be reluctant to otherwise.

Those who have known me for a while will probably see the above as being a bit of a departure from what I normally preach: throughout my tenure at Mozilla, I’ve constantly pushed the people I’ve worked with to do more work in public. In the case of a product like Firefox, which touches so many people, I think open and transparent practices are absolutely essential to building trust, creating opportunity, and ensuring that our software reflects a diversity of views. I applied the same philosophy to Irydium’s development while I was at the Recurse Center: I set up a public Matrix channel to discuss the project, published all my work on GitHub, and was quite chatty about what I was working on, both in this blog and on Twitter.

The key, I think, is being deliberate about what approach you take when: there is a place for both public and private conversations about what we work on. I’m strongly in favour of open design documents, community calls, public bug trackers and open source in general. But I think it’s also pretty ok to have smaller spaces for learning, personal development, and question asking. I know I strongly appreciated having a smaller group of people that I could talk to about ideas that were not yet fully formed: you can always bring them out into the open later. The psychological risk of working in public can be mitigated by the psychological safety that can be developed within an intentional community.

Bringing it back

Returning to my job, I wondered if it might be possible to bring some of what I described above back to Mozilla. Obviously not everything would be directly transferable: Mozilla has its own mission and goals, and there are pressures that exist in a workplace that do not exist in an environment purely directed at learning. Still, I suspected that there was something we could do here. And that it would be worth doing, not just to improve the felt experience of the people here (though that would be reason enough) but also to get more feedback on our work and create more opportunities for collaboration and innovation.

I felt like trying to do something inside our particular organization (Data Engineering and Data Science) would be the most tractable initial step. I talked a bit about my experience with Will Kahn-Greene (who has been at Mozilla around the same length of time as I have) and we came up with what we called the “Data Neighbourhood” project: a set of grassroots micro-initiatives to increase our connectedness as a group. As an organization directed primarily at serving other parts of Mozilla, most of our team’s communication is directed outward. It’s often hard to know what everyone else is up to, where they’re struggling, and how we could help each other out. Attacking that problem directly seemed like the best place to start.

The first experiment we tried was a “data checkins” channel on Slack, a place for people to talk informally about their work (or life!). I explicitly set it up with a similar set of social rules as outlined above and tried to emphasize that it was a place to talk about how things are going, rather than a place to report status to your manager. After a somewhat slow start (the initial posts were from Will, myself, and a few other people from Data Engineering who had been around for a long time) we’re beginning to see engagement from others, including some newer people I hadn’t interacted with much before. There’s also been a few useful threads of conversations across different sub-teams (for example, a discussion on how we identify distinct versions of Firefox for iOS) that likely would not have happened without the channel.

Since then, others have tried a few other things in the same vein (an ad hoc coffee chat pairing bot, a “writing help” channel) and there are some signs of success. There’s clearly an appetite for new and better ways for us to relate to each other about the work we’re doing, and I’m excited to see how these ideas evolve over time.

I suspect there are limits to how psychologically safe a workplace can ever feel (and some of that is probably outside of any individual’s control). There are dynamics in a workplace which make applying some of Recurse’s practices difficult. In particular, a posture of “not knowing things is o.k.” may not apply perfectly to a workplace where people are hired (and promoted) based on perceived competence and expertise. Still, I think it’s worth investigating what might be possible within the constraints of the system we’re in. There are big potential benefits, for our creative output and our well-being.

Many thanks to Jenny Zhang, Kathleen Beckett, Joe Trellick, Taylor Phebillo, Vaibhav Sagar, and Will Kahn-Greene for reviewing earlier drafts of this post.

  1. This is generally considered best practice inside workplaces as well. For example, see Google’s guide on how to write code review comments

Data@Mozilla: This Week in Glean: Designing a telemetry collection with Glean

Designing a telemetry collection with Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

Whenever I get a chance to write about Glean, I am usually writing about some aspects of working on Glean. This time around I’m going to turn that on its head by sharing my experience working with Glean as a consumer with metrics to collect, specifically with regard to designing a Nimbus health metrics collection. This post is about sharing what I learned from the experience and what I found to be the most important considerations when designing a telemetry collection.

I’ve been helping develop Nimbus, Mozilla’s new experimentation platform, for a while now. It is one of many cross-platform tools written in Rust and it exists as part of the Mozilla Application Services collection of components. With Nimbus being used in more and more products we have a need to monitor its “health”, or how well it is performing in the wild. I took on this task of determining what we would need to measure and designing the telemetry and visualizations because I was interested in experiencing Glean from a consumer’s perspective.

So how exactly do you define the “health” of a software component? When I first sat down to work on this project, I had some vague idea of what this meant for Nimbus, but it really crystallized once I started looking at the types of measurements enabled by Glean. Glean offers different metric types designed to measure anything from text values, to counts of things in multiple ways, to events that show how things occur in the flow of the application. For Nimbus, I knew that we would want to track errors, as well as a handful of numeric measurements like how much memory we used and how long it takes to perform certain critical tasks.

As a starting point, I began thinking about how to record errors, which seemed fairly straightforward. The first thing I had to consider was exactly what it was we were measuring (the “shape” of the data), and what questions we wanted to be able to answer with it. Since we have a good understanding of the context in which each of the errors can occur, we really only wanted to monitor the counts of errors to know if they increase or decrease. Counting things is one of the things Glean is really good at! So my choice of metric type came down to flexibility and organization. Since there are 20+ different errors that are interesting to Nimbus, we could have used a separate counter metric for each of them, but this starts to get a little burdensome when declaring them in the metrics.yaml file: each error would require a separate entry. The other problem with using a separate counter for each error is that it adds a bit of complexity to writing SQL for analysis or a dashboard. A query for analyzing the errors, if the metrics are defined separately, would need each error metric in its select statement, and any new errors that are added would also require the query to be modified to include them.

Instead of distinct counters for each error, I chose to model recording Nimbus errors after how Glean records its own internal errors, by using a LabeledCounterMetric. This means that all errors are collected under the same metric name, but have an additional property that is a “label”. Labels are like sub-categories within that one metric. That makes it a little easier to instrument, first in keeping clutter down in the metrics.yaml file, and maybe making it a little easier to create useful dashboards for monitoring error rates. We want to end up with a chart of errors that lets us see if we start to see an unusual spike or change in the trends, something like this:

A line graph showing multiple colored lines and their changes over time

We expect some small number of errors (these are computers, after all), but we can easily establish a baseline for each type of error, which allows us to configure some alerts if things are too far outside expectations.
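On the instrumentation side, this stays small. Here is a purely hypothetical sketch of what recording one of these errors could look like from Rust (the nimbus_health::error_counts metric and its label are made-up names for illustration, not the real Nimbus instrumentation):

// `error_counts` would be generated from a `labeled_counter` entry
// in metrics.yaml; each error type becomes a label on the one metric.
nimbus_health::error_counts
    .get("invalid_persisted_data") // the label, i.e. the sub-category
    .add(1); // count one occurrence of this error type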

The next set of things I wanted to know about Nimbus was in the area of performance. We want to detect regressions or problems with our implementation that might not show up locally for a developer in a debug build, so we measure these things at scale to see what performance looks like for everyone using Nimbus. Once again, I needed to think about what exactly we wanted to measure, and what sort of questions we wanted to be able to answer with the data. Since the performance data we were interested in was a measurement of time or memory, we wanted to be able to measure samples from a client periodically and then look at how different measurements are distributed across the population. We also needed to consider exactly when and where we wanted to measure these things. For instance, was it more important or more accurate to measure the database size as we were initializing, or deinitializing? Finally, I knew we would be interested in how that distribution changes over time, so we needed some way to represent this by date or by version when we analyzed the data.
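At the instrumentation site, measuring one of those tasks follows Glean’s start/stop pattern for the distribution metric types described next. A hedged sketch, with invented metric and function names:

// Time a critical task; each sample is bucketed into the
// timing distribution automatically.
let timer_id = nimbus_health::init_time.start();
initialize_nimbus(); // hypothetical critical task being measured
nimbus_health::init_time.stop_and_accumulate(timer_id);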

Glean gives us some great metric types to measure samples of things like time and size such as TimingDistributionMetrics and MemoryDistributionMetrics. Both of these metric types allow us to specify a resolution that we care about so that they can “bucket” up the samples into meaningfully sized chunks to create a sparse payload of data to keep things lean. These metric types also provide a “sum” so we can calculate an average from all the samples collected. When we sum these samples across the population, we end up with a histogram like the following, where measurements collected are on the x-axis, and the counts or occurrences of those measurements on the y-axis:

A histogram showing bell curve shaped data

This is a little limited because we can only look at the point in time of the data as a single dimension, whether that’s aggregated by time such as per day/week/year or aggregated on something else like the version of the Nimbus SDK or application. We can’t really see the change over time or version to see if something we added really impacted our performance. Ideally, we wanted to see how Nimbus performed compared to other versions or other weeks. When I asked around for good representations to show something like this, it was suggested that something like a ridgeline chart would be a great visualization for this sort of data:

A ridgeline chart, represented as a series of histograms arranged to form visualization that looks like a mountain ridge

Ridgeline charts give us a great idea of how the distribution changes, but unfortunately I ran into a little setback when I found out that the tools we use don’t currently have a view like that, so I may be stuck in a bit of a compromise until they do. Here is another visualization example, this time with the data stacked on top of each other:

A series of histograms stacked on top of each other

Even though something like this is much harder to read than the ridgeline, we can still see some change from one version to the next; it’s just that picking out the sequence becomes much harder. So I’m still left with a little bit of an issue with representing the performance data the way that we wanted. I think it’s at least something that can be iterated on to be more usable in the future, perhaps using something similar to GLAM’s visualization of percentiles of a histogram.

To conclude, I really learned the value of planning and thinking about telemetry design before instrumenting anything. The most important things to consider when designing a collection are what you are measuring and what questions you will need to answer with the data. Both of those questions can affect not only which metric type you choose to represent your data, but where you want to measure something. Thinking about what questions you want to answer ahead of time lets you make sure that you are measuring the right things to answer those questions. Planning before instrumenting can also help you to choose the right visualizations to make answering those questions easier, as well as being able to add things like alerts for when things aren’t quite right. So, take a little time to think about your telemetry collection ahead of instrumenting metrics, and don’t forget to validate the metrics once they are instrumented to ensure that they are, in fact, measuring what you think and expect. Plan ahead and I promise you, your data scientists will thank you.

Firefox Add-on Reviews: How to choose the right password manager browser extension

All good password managers should, of course, effectively secure passwords; and they all basically do the same thing—you create a single, easy-to-remember master password to access your labyrinth of complex logins. Password managers not only spare you the hassle of remembering a maze of logins; they can also offer suggestions to help make your passwords even stronger. Fortunately there’s no shortage of capable password protectors out there. But with so many options, how do you choose the one that’ll work best for you?

Here are some of our favorite password managers. They all offer excellent password protection, but with distinct areas of strength.

What are the best FREE password manager extensions? 

Bitwarden

With the ability to create unlimited passwords across as many devices as you like, Bitwarden – Free Password Manager is one of the best budget-minded choices. 

Fortunately you don’t have to sacrifice strong security just because Bitwarden is free. The extension provides advanced end-to-end 256-bit AES encryption for extraordinary protection. 

Paid tiers include Team and Business plans that offer additional benefits like priority tech support, self-hosting capabilities, and more.

Roboform Password Manager

Also utilizing end-to-end 256-bit AES encryption, Roboform has a limited but potent feature set. 

A very intuitive interface makes it easy to manage compelling features like… 

  • Sync with 2FA apps (e.g. Google Authenticator)
  • Sync your Roboform data across multiple devices
  • Single-click logins
  • Automatically save new passwords 
  • Handles multi-step logins
  • Strong password generator
  • 7 form-filling templates for common cases (Person, Business, Passport, Address, Credit Card, Bank Account, Car, Custom)
Roboform makes it easy to manage your most sensitive data like banking information.

LastPass Password Manager

If you’re searching for an all-around solid password manager on desktop, LastPass is a worthy consideration. Its free tier supports only one device, but you get a robust feature set if you’re okay with a single-device limitation (price tiers available for multiple device and user plans).

Key features include…

  • Simple, intuitive interface
  • Security Dashboard suggests password improvements and performs Dark Web monitoring to see if any of your vital information has leaked 
  • Multi-factor authentication
  • Save all types of auto-fill forms like credit cards, addresses, banking information, etc.

What are the most professional grade password managers?

1Password

The most full-featured password manager available, 1Password is not only a password manager but a dynamic digital vault system that secures private notes, financial information, auto-fill forms, and more. With a slick, intuitive interface, the extension makes managing your sensitive information a breeze.

The “catch” is that 1Password has no free tier (just a free trial period). But the cost of 1Password may be worth it for folks who want effective password management (end-to-end 256-bit AES encryption) plus a bevy of other great features like…

  • Vaults help you keep your various protected areas (e.g. passwords, financial info, addresses, etc.) segregated so if your 1Password account is set for family or business, it’s easy to grant specific Vault access to certain members. 
  • Watchtower is a collection of security services that alert you to emerging 1Password threats, potentially compromised logins, and more
  • Travel mode is great for international travellers; when Vaults are in Travel mode they’ll automatically become inaccessible when you cross over potentially insecure borders 
  • App support across Mac, iOS, Windows, Android, Linux, and Chrome OS
  • Two-factor authentication for additional protection

From individual to family, team, and full-scale enterprise plans, you can see if 1Password’s pricing tiers are worth it for you. 

MYKI Password Manager & Authenticator

With a unique, decentralized approach to data storage plus other distinct features, MYKI stands apart in many interesting ways from its password manager peers. Do note, however, that MYKI is optimized to work best with a mobile device, should that be a consideration. 

Beautifully designed and easy to use, MYKI handles all the standard stuff you’d expect—it creates and stores strong passwords and has various auto-fill functions. Where MYKI earns distinction is through two key features: 

  1. Local data storage. Your passwords and other personal info only exist on your devices—not some remote cloud service. Reasonable minds may differ on the security benefits of cloud versus local storage, but if you’re concerned about your information existing on a cloud service that could be compromised, you might consider keeping all this critical data within your localized system
  2. Mobile device optimization. While you don’t need a mobile device to use MYKI as a basic password manager, the extension is certainly augmented by a mobile companion; with an integrated iOS or Android device you can… 
    1. Enable two-factor authentication (2FA) for added security
    2. Enable biometric authentication (e.g. iOS Touch ID, Windows Hello, etc.) and avoid a master password

These are some of our favorite browser-based password managers. Feel free to explore more password managers on addons.mozilla.org.


Mozilla Performance Blog: Performance Sheriff Newsletter (September 2021)

In September there were 174 alerts generated, resulting in 23 regression bugs being filed on average 6.4 days after the regressing change landed.

Welcome to the September 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 2 days
  • 80% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 3.5 days
  • 74% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (September 2021)

 

Summary of alerts

Each month we’ll highlight the regressions and improvements found.

Note that whilst we usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst we believe these metrics to be accurate at the time of writing, some of them may change over time.

We would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for September can be found here (for those with access).

Mozilla Localization (L10N): L10n Report: October Edition

October 2021 Report

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New l10n-driver

Welcome eemeli, our new l10n-driver! He will be working on Fluent and Pontoon, and is part of our tech team along with Matjaž. We hope we can all connect soon so you can meet him.

New localizers

Katelem from Obolo locale. Welcome to localization at Mozilla!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

Obolo (ann) locale was added to Pontoon.

New content and projects

What’s new or coming up in Firefox desktop

A new major release (MR2) is coming for Firefox desktop with Firefox 94. The deadline to translate content for this version, currently in Beta, is October 24.

While MR2 is not as content heavy as MR1, there are changes to very visible parts of the UI, like the onboarding for both new and existing users. Make sure to check out the latest edition of the Firefox L10n Newsletter for more details, and instructions on how to test.

What’s new or coming up in mobile

Focus for Android and iOS have gone through a new refresh! This was done as part of our ongoing MR2 work – which has also covered Firefox for Android and iOS. You can read about all of this here.

Many of you have been heavily involved in this work, and we thank you for making this MR2 launch across all mobile products such a successful release globally.

We are now starting our next iteration of MR2 releases. We are still currently working on scoping out the mobile work for l10n, so stay tuned.

One thing to note is that the l10n schedule dates for mobile should now be aligned across product operating systems: one l10n release cycle for all of Android, and another release cycle for all of iOS. As always, Pontoon deadlines remain your source of truth for this.

What’s new or coming up in web projects
Firefox Accounts

The Firefox Accounts team has been working on transitioning from Gettext to Fluent. They are in the middle of migrating server.po to auth.ftl, the component that handles the email feature. Unlike previous migrations, where the localized strings were not part of the plan, this time the team wanted to include them as much as possible. The initial attempt didn’t go as planned due to multiple technical issues. The new auth.ftl file made a brief appearance in Pontoon and is now disabled. They will give it another go after confirming that the identified issues have been addressed and tested.

Legal docs

All the legal docs are translated by our vendor. Some of you have reported translation errors, or that the docs are out of sync with the English source. If you spot any issues (wrong terminology, typos, missing content, to name a few), you can file a bug. Generally we do not encourage localizers to provide translations because of the nature of the content. If the changes are minor, you can create a PR and ask for a peer review to confirm your change before it is merged. If the overall quality is bad, we will request that the vendor change the translators.

Please note, the locale support for legal docs varies from product to product. Starting this year, the number of supported locales also has decreased to under 20. Some of the previously localized docs are no longer updated. This might be the reason you see your language out of sync with the English source.

Mozilla.org

Five more mobile-specific pages were added since our last report. If you need to prioritize them, please give higher priority to the Focus, Index, and Compare pages.

What’s new or coming up in SuMo

Lots of new stuff since our last update here in June. Here are some of the highlights:

  • We’re working on refreshing the onboarding experience in SUMO. The content preparation has mostly done in Q3 and the implementation is expected in this quarter before the end of the year.
  • Catch up on what’s new in our support platform by reading our release notes in Discourse. One highlight of the past quarter is that we integrated Zendesk form for Mozilla VPN into SUMO. We don’t have the capability to detect subscriber at the moment, so everyone can file a ticket now. But we’re hoping to add the capability for that in the future.
  • Firefox Focus has joined our Play Store support efforts. Contributors should now be able to reply to Google Play Store reviews for Firefox Focus from Conversocial. We also created this guideline to help contributors compose replies to Firefox Focus reviews.
  • We welcomed 2 new team members in Q3: Joe, our Support Operations Manager, who is now taking care of the premium customer support experience, and Abby, the new Content Manager, who will be working closely with Fabi and our KB contributors to improve our help content.

You’re always welcome to join our Matrix or the contributor forum to talk more about anything related to support!

What’s new or coming up in Pontoon

Submit your ideas and report bugs via GitHub

We have enabled GitHub Issues in the Pontoon repository and made it the new place for tracking bugs, enhancements and tasks for Pontoon development. At the same time, we have disabled the Pontoon Component in Bugzilla, and imported all open bugs into GitHub Issues. Old bugs are still accessible at their existing URLs. For reporting security vulnerabilities, we’ll use a newly created component in Bugzilla, which allows us to hide security problems from the public until they are resolved.

Using GitHub Issues will make it easier for the development team to resolve bugs via commit messages and put them on a Roadmap, which will also be moved to GitHub soon. We also hope GitHub Issues will make suggesting ideas and reporting issues easier for the users. Let us know if you run into any issues or have any questions!

More improvements to the notification system coming

As part of our H1 effort to better understand how notifications are being used, the following features received the most votes in a localizer survey:

  • Notifications for new strings should link to the group of strings added.
  • For translators and locale managers, get notifications when there are pending suggestions to review.
  • Add the ability to opt-out of specific notifications.

Thanks to eemeli, the first item was resolved back in August. The second feature has also been implemented, which means reviewers now receive weekly notifications about unreviewed suggestions created within the last week. Work on the last item, the ability to opt out of specific notification types, has started.

Newly published localizer facing documentation

We published two new posts in the Localization category on Discourse:

Events

  • Michal Stanke shared his experience as a volunteer in the open source community at the annual International Translation Day event hosted by WordPress! Way to go!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Niko MatsakisDyn async traits, part 6

A quick update to my last post: first, a better way to do what I was trying to do, and second, a sketch of the crate I’d like to see for experimental purposes.

An easier way to roll our own boxed dyn traits

In the previous post I covered how you could create vtables and pair them up with a data pointer to kind of “roll your own dyn”. After I published the post, though, dtolnay sent me this Rust playground link to show me a much better approach, one based on the erased-serde crate. The idea is that instead of making a “vtable struct” with a bunch of fn pointers, we create a “shadow trait” that reflects the contents of that vtable:

use std::future::Future;
use std::pin::Pin;

// erased trait:
trait ErasedAsyncIter {
    type Item;
    fn next<'me>(&'me mut self) -> Pin<Box<dyn Future<Output = Option<Self::Item>> + 'me>>;
}

Then the DynAsyncIter struct can just be a boxed form of this trait:

pub struct DynAsyncIter<'data, Item> {
    pointer: Box<dyn ErasedAsyncIter<Item = Item> + 'data>,
}

We define the “shim functions” by implementing ErasedAsyncIter for all T: AsyncIter:

impl<T> ErasedAsyncIter for T
where
    T: AsyncIter,
{
    type Item = T::Item;
    fn next<'me>(&'me mut self) -> Pin<Box<dyn Future<Output = Option<Self::Item>> + 'me>> {
        // This code allocates a box for the result
        // and coerces into a dyn:
        Box::pin(AsyncIter::next(self))
    }
}

And finally we can implement the AsyncIter trait for the dynamic type:

impl<'data, Item> AsyncIter for DynAsyncIter<'data, Item> {
    type Item = Item;

    type Next<'me>
        = Pin<Box<dyn Future<Output = Option<Item>> + 'me>>
    where
        Item: 'me,
        'data: 'me;

    fn next(&mut self) -> Self::Next<'_> {
        self.pointer.next()
    }
}

Yay, it all works, and without any unsafe code!
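
For completeness, here is a minimal sketch of a constructor (my own addition, not from the playground link) showing how a concrete T: AsyncIter gets boxed into the erased form; the unsized coercion from Box<T> to Box<dyn ErasedAsyncIter> happens at the struct field:

impl<'data, Item> DynAsyncIter<'data, Item> {
    // Hypothetical constructor: box the concrete iterator and let the
    // blanket `ErasedAsyncIter` impl above perform the erasure.
    pub fn new<T>(value: T) -> Self
    where
        T: AsyncIter<Item = Item> + 'data,
    {
        DynAsyncIter {
            pointer: Box::new(value),
        }
    }
}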

What I’d like to see

This “convert to dyn” approach isn’t really specific to async (as erased-serde shows). I’d like to see a decorator that applies it to any trait. I imagine something like:

// Generates the `DynAsyncIter` type shown above:
#[derive_dyn(DynAsyncIter)]
trait AsyncIter {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

But this ought to work with any -> impl Trait return type, too, so long as Trait is dyn safe and implemented for Box<T>. So something like this:

// Generates a `DynSillyIterTools` type analogous to `DynAsyncIter`:
#[derive_dyn(DynSillyIterTools)]
trait SillyIterTools: Iterator {
    // Iterate over the iter in pairs of two items.
    fn pair_up(&mut self) -> impl Iterator<Item = (Self::Item, Self::Item)>;
}

would generate an erased trait that returns a Box<dyn Iterator<Item = (...)>>. Similarly, you could do a trick with taking any impl Foo and passing in a Box<dyn Foo>, so you can support impl Trait in argument position.
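
To make that argument-position trick concrete, here is a rough sketch with illustrative names (Summarize and ErasedSummarize are made up for this example):

trait Summarize {
    fn summarize(&mut self, items: impl Iterator<Item = u32>) -> u32;
}

trait ErasedSummarize {
    // `impl Iterator` in argument position becomes a boxed dyn:
    fn summarize(&mut self, items: Box<dyn Iterator<Item = u32> + '_>) -> u32;
}

impl<T: Summarize> ErasedSummarize for T {
    fn summarize(&mut self, items: Box<dyn Iterator<Item = u32> + '_>) -> u32 {
        // This forwarding works because the standard library implements
        // `Iterator` for `Box<I>` where `I: Iterator + ?Sized`, so the box
        // itself satisfies the `impl Iterator` argument.
        Summarize::summarize(self, items)
    }
}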

Even without impl Trait, derive_dyn would create a more ergonomic dyn to play with.

I don’t really see this as a “long term solution”, but I would be interested to play with it.

Comments?

I’ve created a thread on internals if you’d like to comment on this post, or others in this series.

Niko MatsakisDyn async traits, part 5

If you’re willing to use nightly, you can already model async functions in traits by using GATs and impl Trait — this is what the Embassy async runtime does, and it’s also what the real-async-trait crate does. One shortcoming, though, is that your trait doesn’t support dynamic dispatch. In the previous posts of this series, I have been exploring some of the reasons for that limitation, and what kind of primitive capabilities need to be exposed in the language to overcome it. My thought was that we could try to stabilize those primitive capabilities with the plan of enabling experimentation. I am still in favor of this plan, but I realized something yesterday: using procedural macros, you can ALMOST do this experimentation today! Unfortunately, it doesn’t quite work owing to some relatively obscure rules in the Rust type system (perhaps some clever readers will find a workaround; that said, these are rules I have wanted to change for a while).

Just to be crystal clear: Nothing in this post is intended to describe an “ideal end state” for async functions in traits. I still want to get to the point where one can write async fn in a trait without any further annotation and have the trait be “fully capable” (support both static dispatch and dyn mode while adhering to the tenets of zero-cost abstractions1). But there are some significant questions there, and to find the best answers for those questions, we need to enable more exploration, which is the point of this post.

Code is on github

The code covered in this blog post has been prototyped and is available on github. See the caveat at the end of the post, though!

Design goal

To see what I mean, let’s return to my favorite trait, AsyncIter:

trait AsyncIter {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

The post is going to lay out how we can transform a trait declaration like the one above into a series of declarations that achieve the following:

  • We can use it as a generic bound (fn foo<T: AsyncIter>()), in which case we get static dispatch, full auto trait support, and all the other goodies that normally come with generic bounds in Rust.
  • Given a T: AsyncIter, we can coerce it into some form of DynAsyncIter that uses virtual dispatch. In this case, the type doesn’t reveal the specific T or the specific types of the futures.
    • I wrote DynAsyncIter, and not dyn AsyncIter on purpose — we are going to create our own type that acts like a dyn type, but which manages the adaptations needed for async.
    • For simplicity, let’s assume we want to box the resulting futures. Part of the point of this design though is that it leaves room for us to generate whatever sort of wrapping types we want.

You could write the code I’m showing here by hand, but the better route would be to package it up as a kind of decorator (e.g., #[async_trait_v2]2).

The basics: trait with a GAT

The first step is to transform the trait to have a GAT and a regular fn, in the way that we’ve seen many times:

trait AsyncIter {
    type Item;

    type Next<'me>: Future<Output = Option<Self::Item>>
    where
        Self: 'me;

    fn next(&mut self) -> Self::Next<'_>;
}
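
To make the desugaring concrete, here is a hand-written sketch of an implementation (Countdown is a made-up type, and this is my example, not the post's), using a pinned, boxed future to satisfy the bound on Next:

use std::future::Future;
use std::pin::Pin;

struct Countdown(u32);

impl AsyncIter for Countdown {
    type Item = u32;

    // A pinned box satisfies `Next<'me>: Future<Output = Option<u32>>`:
    type Next<'me>
        = Pin<Box<dyn Future<Output = Option<u32>> + 'me>>
    where
        Self: 'me;

    fn next(&mut self) -> Self::Next<'_> {
        Box::pin(async move {
            if self.0 == 0 {
                None
            } else {
                self.0 -= 1;
                Some(self.0)
            }
        })
    }
}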

Next: define a “DynAsyncIter” struct

The next step is to manage the virtual dispatch (dyn) version of the trait. To do this, we are going to “roll our own” object by creating a struct DynAsyncIter. This struct plays the role of a Box<dyn AsyncIter> trait object. Instances of the struct can be created by calling DynAsyncIter::from with some specific iterator type; the DynAsyncIter type implements the AsyncIter trait, so once you have one you can just call next as usual:

let mut the_iter: DynAsyncIter<u32> = DynAsyncIter::from(some_iterator);
let total = sum_items(&mut the_iter).await;

async fn sum_items(iter: &mut impl AsyncIter<Item = u32>) -> u32 {
    let mut s = 0;
    while let Some(v) = iter.next().await {
        s += v;
    }
    s
}

Struct definition

Let’s look at how this DynAsyncIter struct is defined. It models a Box<dyn AsyncIter> trait object: it has one generic parameter for every ordinary associated type declared in the trait (not including the GATs we introduced for async fn return types). The struct itself has two fields, the data pointer (a box, but in raw form) and a vtable. We don’t know the type of the underlying value, so we’ll use ErasedData for that:

type ErasedData = ();

pub struct DynAsyncIter<Item> {
    data: *mut ErasedData,
    vtable: &'static DynAsyncIterVtable<Item>,
}

For the vtable, we will make a struct that contains a fn for each of the methods in the trait. Unlike the builtin vtables, we will modify the return type of these functions to be a boxed future:

struct DynAsyncIterVtable<Item> {
    drop_fn: unsafe fn(*mut ErasedData),
    next_fn: unsafe fn(&mut *mut ErasedData) -> Box<dyn Future<Output = Option<Item>> + '_>,
}

Implementing the AsyncIter trait

Next, we can implement the AsyncIter trait for the DynAsyncIter type. For each of the new GATs we introduced, we simply use a boxed future type. For the method bodies, we extract the function pointer from the vtable and call it:

impl<Item> AsyncIter for DynAsyncIter<Item> {
    type Item = Item;

    type Next<'me> = Box<dyn Future<Output = Option<Item>> + 'me>;

    fn next(&mut self) -> Self::Next<'_> {
        let next_fn = self.vtable.next_fn;
        unsafe { next_fn(&mut self.data) }
    }
}

The unsafe keyword here is asserting that the safety conditions of next_fn are met. We’ll cover that in more detail later, but in short those conditions are:

  • The vtable corresponds to some erased type T: AsyncIter
  • …and each instance of *mut ErasedData points to a valid Box<T> for that type.

Dropping the object

Speaking of Drop, we do need to implement that as well. It too will call through the vtable:

impl<Item> Drop for DynAsyncIter<Item> {
    fn drop(&mut self) {
        let drop_fn = self.vtable.drop_fn;
        unsafe { drop_fn(self.data); }
    }
}

We need to call through the vtable because we don’t know what kind of data we have, so we can’t know how to drop it correctly.

Creating an instance of DynAsyncIter

To create one of these DynAsyncIter objects, we can implement the From trait. This allocates a box, coerces it into a raw pointer, and then combines that with the vtable:

impl<Item, T> From<T> for DynAsyncIter<Item>
where
    T: AsyncIter<Item = Item>,
{
    fn from(value: T) -> Self {
        let boxed_value = Box::new(value);
        DynAsyncIter {
            data: Box::into_raw(boxed_value) as *mut (),
            vtable: dyn_async_iter_vtable::<T>(), // we’ll cover this fn later
        }
    }
}

Creating the vtable shims

Now we come to the most interesting part: how do we create the vtable for one of these objects? Recall that our vtable was a struct like so:

struct DynAsyncIterVtable<Item> {
    drop_fn: unsafe fn(*mut ErasedData),
    next_fn: unsafe fn(&mut *mut ErasedData) -> Box<dyn Future<Output = Option<Item>> + '_>,
}

We are going to need to create the values for each of those fields. In an ordinary dyn, these would be pointers directly to the methods from the impl, but for us they are “wrapper functions” around the core trait functions. The role of these wrappers is to introduce some minor coercions, such as allocating a box for the resulting future, as well as to adapt from the “erased data” to the true type:

// Safety conditions:
//
// The `*mut ErasedData` is actually the raw form of a `Box<T>`
// that is valid for 'a.
unsafe fn next_wrapper<'a, T>(
    this: &'a mut *mut ErasedData,
) -> Box<dyn Future<Output = Option<T::Item>> + 'a>
where
    T: AsyncIter,
{
    let unerased_this: &mut Box<T> =
        unsafe { &mut *(this as *mut *mut ErasedData as *mut Box<T>) };
    let future: T::Next<'_> = <T as AsyncIter>::next(unerased_this);
    Box::new(future)
}

We’ll also need a “drop” wrapper:

// Safety conditions:
//
// The `*mut ErasedData` is actually the raw form of a `Box<T>`
// and this function is being given ownership of it.
unsafe fn drop_wrapper<T>(this: *mut ErasedData)
where
    T: AsyncIter,
{
    let unerased_this = unsafe { Box::from_raw(this as *mut T) };
    drop(unerased_this); // Execute destructor as normal
}

Constructing the vtable

Now that we’ve defined the wrappers, we can construct the vtable itself. Recall that the From impl called a function dyn_async_iter_vtable::<T>. That function looks like this:

fn dyn_async_iter_vtable<T>() -> &'static DynAsyncIterVtable<T::Item>
where
    T: AsyncIter,
{
    const {
        &DynAsyncIterVtable {
            drop_fn: drop_wrapper::<T>,
            next_fn: next_wrapper::<T>,
        }
    }
}

This constructs a struct with the two function pointers: this struct only contains static data, so we are allowed to return a &'static reference to it.

Done!

And now the caveat, and a plea for help

Unfortunately, this setup doesn’t work quite how I described it. There are two problems:

  • const functions and expressions still have a lot of limitations, especially around generics like T, and I couldn’t get them to work;
  • Because of the rules introduced by RFC 1214, the &'static DynAsyncIterVtable<T::Item> type requires that T::Item: 'static, which may not be true here. This condition perhaps shouldn’t be necessary, but the compiler currently enforces it.

I wound up hacking something terrible that erased the T::Item type to sidestep that requirement, and used Box::leak to get a &'static reference, just to prove out the concept. I’m almost embarrassed to show the code, but there it is.
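
For reference, the leaky workaround looks roughly like this (my reconstruction, not the actual code from the repository); note the 'static bound on T::Item, which is exactly the requirement the hack then had to erase its way around:

fn dyn_async_iter_vtable<T>() -> &'static DynAsyncIterVtable<T::Item>
where
    T: AsyncIter,
    T::Item: 'static,
{
    // Leaks one vtable per call; a real implementation would memoize per
    // erased type. Good enough to prove out the concept.
    Box::leak(Box::new(DynAsyncIterVtable {
        drop_fn: drop_wrapper::<T>,
        next_fn: next_wrapper::<T>,
    }))
}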

Anyway, I know people have done some pretty clever tricks, so I’d be curious to know if I’m missing something and there is a way to build this vtable on Rust today. Regardless, it seems like extending const and a few other things to support this case is a relatively light lift, if we wanted to do that.

Conclusion

This blog post presented a way to implement the dyn dispatch ideas I’ve been talking about using only features that currently exist and are generally en route to stabilization. That’s exciting to me, because it means that we can start to do measurements and experimentation. For example, I would really like to know the performance impact of transitioning from async-trait to a scheme that uses a combination of static dispatch and boxed dynamic dispatch as described here. I would also like to explore whether there are other ways to wrap futures (e.g., with task-local allocators or other smart pointers) that might perform better. This would help inform what kind of capabilities we ultimately need.

Looking beyond async, I’m interested in tinkering with different models for dyn in general. As an obvious example, the “always boxed” version I implemented here has some runtime cost (an allocation!) and isn’t applicable in all environments, but it would be far more ergonomic. Trait objects would be Sized and would transparently work in far more contexts. We can also prototype different kinds of vtable adaptation.

  1. In the words of Bjarne Stroustrup, “What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.” 

  2. Egads, I need a snazzier name than that! 

Jan-Erik RedigerFenix Physical Device Testing

The Firefox for Android (Fenix) project runs extensive tests on every pull request and when merging code back into the main branch.

While many tests run within an isolated Java environment, Fenix also contains a multitude of UI tests. These exercise the full application, including interaction with the UI and other events. Running them requires a running Android emulator or a connected physical Android device. To run these tests in CI, the Fenix team relies on Firebase Test Lab, a cloud-based testing service offering access to a range of physical and virtual devices to run Android applications on.

To speed up development, the automatically scheduled tests associated with a pull request run only on virtual devices. These are quick to spin up, there is essentially no upper limit on how many devices the cloud infrastructure can spawn, and they usually produce the same results as running the tests on a physical device.

But once in a while you encounter a bug that can only be reproduced reliably on a physical device. What do you do if you don’t have access to such a device? Or what if the bug happens only on one specific device type you don’t have?

Then you remember that Firebase Test Lab offers physical devices as well, and that the Fenix repository is well set up to run your tests on those too if needed!

Here's how you change the CI configuration to do this.

NOTE: Do not land a Pull Request that switches CI from virtual to physical devices! Add the pr:do-not-land label and call out that the PR is only there for testing!

By default the Fenix CI runs tests using virtual devices on x86. That's faster when the host is also an x86(_64) system, but most physical devices use the Arm platform. So first we need to instruct CI to run tests on Arm.

Which platform to test on is defined in taskcluster/ci/ui-test/kind.yml. Find the line where it downloads the target.apk produced in a previous step and change it from x86 to arm64-v8a:

  run:
      commands:
-         - [wget, {artifact-reference: '<signing/public/build/x86/target.apk>'}, '-O', app.apk]
+         - [wget, {artifact-reference: '<signing/public/build/arm64-v8a/target.apk>'}, '-O', app.apk]

Then look for the line where it invokes the ui-test.sh and tell it to use arm64-v8a again:

  run:
      commands:
-         - [automation/taskcluster/androidTest/ui-test.sh, x86, app.apk, android-test.apk, '-1']
+         - [automation/taskcluster/androidTest/ui-test.sh, arm64-v8a, app.apk, android-test.apk, '-1']

So far the CI configuration looked for Firebase parameters in automation/taskcluster/androidTest/flank-x86.yml. Now that we've switched the architecture, it will pick up automation/taskcluster/androidTest/flank-arm64-v8a.yml instead.

In that file we can now pick the device we want to run on:

   device:
-   - model: Pixel2
+   - model: dreamlte
      version: 28

You can get a list of available devices by running gcloud locally:

gcloud firebase test android models list

The value from the MODEL_ID column is what you use for the model parameter in flank-arm64-v8a.yml. dreamlte translates to a Samsung Galaxy S8, which is available on Android API version 28.
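
You can also ask gcloud for the details of a single model; for example (the describe subcommand should be available in recent gcloud versions):

gcloud firebase test android models describe dreamlte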

If you only want to run a subset of tests, define the test-targets:

test-targets:
 - class org.mozilla.fenix.glean.BaselinePingTest

Specify an exact test class as above to run tests from just that class.
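
Flank also understands the class Package.Class#method form if you want to narrow things down to a single test method, so something like the following should work (the method name here is made up for illustration):

test-targets:
 - class org.mozilla.fenix.glean.BaselinePingTest#validateBaselinePing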

And that's all the configuration necessary. Save your changes, commit them, then push up your code and create a pull request. Once the decision task on your PR finishes, you will find a ui-test-x86-debug job (yes, x86; we didn't rename the job). Its log file will have details on the test run and contain links to the test run summary. Follow those links for more details, including the logcat output and a video of the test run.


This explanation will eventually move into documentation for Mozilla's Android projects.
Thanks to Richard Pappalardo & Aaron Train for the help figuring out how to run tests on physical devices and for early feedback on the post. Thanks to Will Lachance for feedback and corrections. Any further errors are mine alone.