The Rust Programming Language Blog: Six Years of Rust

Today marks Rust's sixth birthday since it went 1.0 in 2015. A lot has changed since then, especially over the past year. In 2020, there was no foundation yet, no const generics, and a lot of organisations were still wondering whether Rust was production ready.

In the midst of the COVID-19 pandemic, hundreds of Rust's globally distributed team members and volunteers shipped over nine new stable releases of Rust, in addition to various bugfix releases. Today, "Rust in production" isn't a question, but a statement. The newly founded Rust Foundation has several members who value using Rust in production enough to help support and contribute to its open development ecosystem.

We wanted to take today to look back at some of the major improvements over the past year, how the community has been using Rust in production, and finally look ahead at some of the work that is currently ongoing to improve and use Rust for small and large scale projects over the next year. Let's get started!

Recent Additions

The Rust language has improved tremendously in the past year, gaining a lot of quality-of-life features that, while they don't fundamentally change the language, make using and maintaining Rust in more places even easier.

  • As of Rust 1.52.0 and the upgrade to LLVM 12, one of the few remaining cases of unsoundness around forward progress (such as handling infinite loops) has finally been resolved. This has been a long-running collaboration between the Rust teams and the LLVM project, and is a great example of improvements to Rust also benefitting the wider ecosystem of programming languages.

  • On supporting an even wider ecosystem, the introduction of Tier 1 support for 64-bit ARM Linux, and Tier 2 support for ARM macOS and ARM Windows, makes it even easier to build your Rust projects across new and different architectures.

  • The most notable exception to the theme of polish has been the major improvements to Rust's compile-time capabilities. The stabilisation of const generics for primitive types, the addition of control flow to const fns, and allowing procedural macros to be used in more places, have enabled powerful new kinds of APIs and crates.

Rustc wasn't the only tool that had significant improvements.

  • Cargo just recently stabilised its new feature resolver, which makes it easier to use your dependencies across different targets.

  • Rustdoc stabilised its "intra-doc links" feature, allowing you to easily and automatically cross reference Rust types and functions in your documentation.

  • Clippy with Cargo now uses a separate build cache that provides much more consistent behaviour.

Rust In Production

Each year Rust's growth and adoption in the community and industry has been unbelievable, and this past year has been no exception. Once again in 2020, Rust was voted StackOverflow's Most Loved Programming Language. Thank you to everyone in the community for your support and for helping make Rust what it is today.

With the formation of the Rust Foundation, Rust has been in a better position to build a sustainable open source ecosystem empowering everyone to build reliable and efficient software. A number of companies that use Rust have formed teams dedicated to maintaining and improving the Rust project, including AWS, Facebook, and Microsoft.

And it isn't just Rust that has been getting bigger. Larger and larger companies have been adopting Rust in their projects and offering officially supported Rust APIs.

  • Both Microsoft and Amazon have just recently announced and released their new officially supported Rust libraries for interacting with Windows and AWS. Official first-party support for these massive APIs helps make Rust a natural first choice when deciding what to use for a project.
  • The cURL project has released new versions that offer opt-in support for using Rust libraries for handling HTTP(S) and TLS communication. This has been a huge inter-community collaboration between the ISRG, the Hyper and Rustls teams, and the cURL project, and we'd like to thank everyone for their hard work in providing new memory safe backends for a project as massive and widely used as cURL!
  • Tokio (an asynchronous runtime written in Rust) released its 1.0 version and announced a three-year stability guarantee, providing everyone with a solid, stable foundation for writing reliable network applications without compromising speed.

Future Work

Of course, all that is just the start; we're seeing more and more initiatives putting Rust in exciting new places:

Right now the Rust teams are planning and coordinating the 2021 edition of Rust. Much like this past year, a lot of the themes of the changes are around improving quality of life. You can check out our recent post about "The Plan for the Rust 2021 Edition" to see what changes the teams are planning.

And that's just the tip of the iceberg; there are a lot more changes being worked on, and exciting new open projects being started every day in Rust. We can't wait to see what you all build in the year ahead!

Are there changes, or projects from the past year that you're excited about? Are you looking to get started with Rust? Do you want to help contribute to the 2021 edition? Then come on over, introduce yourself, and join the discussion over on our Discourse forum and Zulip chat! Everyone is welcome, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, disability, ethnicity, religion, or similar personal characteristic.

Daniel Stenberg: curl -G vs curl -X GET

(This is a repost of a stackoverflow answer I once wrote on this topic. Slightly edited. Copied here to make sure I own and store my own content properly.)

curl knows the HTTP method

You normally use curl without explicitly saying which request method to use.

If you just pass in an HTTP URL, curl will use GET. If you use -d or -F, curl will use POST, -I will cause a HEAD and -T will make it a PUT.

If for whatever reason you’re not happy with these default choices that curl does for you, you can override those request methods by specifying -X [WHATEVER]. This way you can for example send a DELETE by doing curl -X DELETE [URL].

It is thus pointless to do curl -X GET [URL] as GET would be used anyway. In the same vein it is pointless to do curl -X POST -d data [URL]... But you can make a fun and somewhat rare request that sends a request-body in a GET request with something like curl -X GET -d data [URL].

Digging deeper

curl -GET (using a single dash) is just wrong for this purpose. That’s the equivalent of specifying the -G, -E and -T options and that will do something completely different.

There’s also a curl option called --get, not to be confused with either of the above. It is the long form of -G, which is used to convert data specified with -d into a GET request instead of a POST.

(I subsequently used this answer to populate the curl FAQ to cover this.)


Modern versions of curl will inform users about this unnecessary and potentially harmful use of -X when verbose mode is enabled (-v). Further explained and motivated here.

-G converts a POST + body to a GET + query

You can ask curl to convert a set of -d options and instead of sending them in the request body with POST, put them at the end of the URL’s query string and issue a GET, with the use of -G. Like this:

curl -d name=daniel -d grumpy=yes -G [URL]

The Firefox Frontier: Enter Our College Essay Contest for a Chance to Win $5,000

The world this year’s college graduates will inherit is vastly different than the one they grew up expecting. COVID-19, a changing political climate, and a fluctuating economy all have something … Read more

The post Enter Our College Essay Contest for a Chance to Win $5,000 appeared first on The Firefox Frontier.

Firefox Nightly: These Weeks in Firefox: Issue 93


  • Firefox 89 introduces a fresh new look and feel!
    • Floating tabs!
    • Streamlined menus!
    • New icons!
    • Better dark mode support!
    • Improved context menus on Mac and Windows
    • Improved perceived startup performance on Windows
    • Native context menus and rubberbanding/overscroll on macOS
    • Refreshed modal dialogs and notification bars!
    • More details in these release notes and in this early review from laptopmag
  • Non-native form controls are slated to ride out in Firefox 89 as well
    • This lays the groundwork for improving the sandboxing of the content processes by shutting off access to native OS widget drawing routines
  • (Experimental, and en-US Nightly only) Users will now get unit conversions directly in the URL bar! Users can type “5 lbs to kg” and see a copy/paste friendly result instantaneously.

Friends of the Firefox team

For contributions from April 20 2021 to May 4 2021, inclusive.


Resolved bugs (excluding employees)
Fixed more than one bug
  • Falguni Islam
  • Itiel
  • kaira [:anshukaira]
  • Kajal Sah
  • Luz De La Rosa
  • Richa Sharma
  • Sebastian Zartner [:sebo]
  • Vaidehi
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting from Firefox 90, when no extensions are installed, the about:addons page will show users a nicer message instead of an empty list of installed extensions (Bug 1561538) – Thanks to Samuel Grasse-Haroldsen for fixing this polish issue.
  • As part of the ongoing work to get rid of OS.File usage, Barret uncovered and fixed some races in the AddonManager and XPIDatabase jsm modules (Bug 1702116)
  • Fixed a macOS-specific issue in the “Manage Extension Shortcuts” about:addons view, which was preventing this view from detecting some of the conflicting shortcuts (Bug 1565854)
WebExtensions Framework
WebExtension APIs
  • Nicolas Chevobbe applied the changes needed to ensure that the devtools.inspectedWindow.reload method is Fission-compatible also when an extension passes it the userAgent option (Bug 1706098)


  • Neil has been working on reviving the tab unloader for when users are hitting memory limits
    • It’s smarter this time though, and should hopefully make better choices on which tabs to unload.
    • Currently disabled by default, but Nightly users can test it by setting `browser.tabs.unloadOnLowMemory` to `true`

Messaging System


Performance Tools

  • Stacks now include the category color of each stack frame (in tooltips, marker table, sidebar)
    • Before and after image with stack frames highlighted in different colors.
  • Fixed a bug where the dot markers appear in the wrong places.
    • Profiler timeline with markers correctly displayed.

Search and Navigation

  • Lots of polish fixes to Proton address bar (and search bar)
  • The Search Mode chiclet can now be closed also when the address bar is unfocused – Bug 1701901
  • Address bar results action text (for example “Switch to tab”, or “Search with Engine”) won’t be pushed out of the visible area by long titles anymore – Bug 1707839
  • Double dots in domain-looking strings will now be corrected – Bug 1580881


Kajal, Falguni, Dawit, and Kaira have been working on removing server-side code from Screenshots.

Niko Matsakis: CTCFTFTW

This Monday I am starting something new: a monthly meeting called the “Cross Team Collaboration Fun Times” (CTCFT)1. Check out our nifty logo2:


The meeting is a mechanism to help keep the members of the Rust teams in sync and in touch with one another. The idea is to focus on topics of broad interest (more than two teams):

  • Status updates on far-reaching projects that could affect multiple teams;
  • Experience reports about people trying new things (sometimes succeeding, sometimes not);
  • “Rough draft” proposals that are ready to be brought before a wider audience.

The meeting will focus on things that could either offer insights that might affect the work you’re doing, or where the presenter would like to pose questions to the Rust teams and get feedback.

I announced the meeting some time back, but I wanted to make a broader announcement as well. This meeting is open for anyone to come and observe. This is by design. Even though the meeting is primarily meant as a forum for the members of the Rust teams, it can be hard to define the borders of a community like ours. I’m hoping we’ll get people who work on major Rust libraries in the ecosystem, for example, or who work on the various Rust teams that have come into being.

The first meeting is scheduled for 2021-05-17 at 15:00 Eastern and you will find the agenda on the CTCFT website, along with links to the slides (still a work-in-progress as of this writing!). There is also a twitter account @RustCTCFT and a Google calendar that you can subscribe to.

I realize the limitations of a synchronous meeting. Due to the reality of time zones and a volunteer project, for example, we’ll never be able to get all of Rust’s global community to attend at once. I’ve designed the meeting to work well even if you can’t attend: the goal is to have a place to start conversations, not to finish them. Agendas are announced well in advance and the meetings are recorded. We’re also rotating times – the next meeting on 2021-06-21 takes place at 21:00 Eastern time, for example.3

Hope to see you there!


  1. In keeping with Rust’s long-standing tradition of ridiculous acronyms. 

  2. Thanks to @Xfactor521! 🙏 

  3. The agenda is still TBD. I’ll tweet when we get it lined up. We’re not announcing that far in advance! 😂 

Mozilla Open Policy & Advocacy Blog: Defending users’ security in Mauritius

Yesterday, Mozilla and Google filed a joint submission to the public consultation on amending the Information and Communications Technology (ICT) Act organised by the Government of Mauritius. Our submission states that the proposed changes would disproportionately harm the security of Mauritian users on the internet and should be abandoned. Mozilla believes that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. The proposals under these amendments are fundamentally incompatible with this principle and would fail to achieve their projected outcomes.

Under Section 18(m) of the proposed changes, the ICTA could deploy a “new technical toolset” to intercept, decrypt, archive and then inspect/block https traffic between a local user’s Internet device and internet services, including social media platforms.

In their current form, these measures will place the privacy and security of internet users in Mauritius at grave risk. This blunt and disproportionate action will allow the government to decrypt, read and store anything a user types or posts on the internet, including their account information, passwords and private messages. While doing little to address the legitimate concerns of content moderation in local languages, it will undermine trust in the fundamental security infrastructure that currently serves as the basis for the security of at least 80% of websites on the web that use HTTPS, including those that carry out e-commerce and other critical financial transactions.

When similarly dangerous mechanisms have been abused in the past, whether by known-malicious parties, business partners such as a computer or device manufacturer, or a government entity, as browser makers we have taken steps to protect and secure our users and products.

In our joint submission to the on-going public consultation, Google and Mozilla have urged the Authority not to pursue this approach. Operating within international frameworks for cross-border law enforcement cooperation and enhancing communication with industry can provide a more promising path to address the stated concerns raised in the consultation paper. We remain committed to working with the Government of Mauritius to address the underlying concerns in a manner that does not harm the privacy, security and freedom of expression of Mauritians on the internet.

The post Defending users’ security in Mauritius appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy Blog: Mozilla files joint amicus brief in support of California net neutrality law

Yesterday, Mozilla joined a coalition of public interest organizations* in submitting an amicus brief to the Ninth Circuit in support of SB 822, California’s net neutrality law. In this case, telecom and cable companies are arguing that California’s law is preempted by federal law. In February of this year, a federal judge dismissed this challenge and held that California can enforce its law. The telecom industry appealed that decision to the 9th Circuit. We are asking the 9th Circuit to find that California has the authority to protect net neutrality.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

*Mozilla is joined on the amicus brief by Access Now, Public Knowledge, New America’s Open Technology Institute and Free Press.

The post Mozilla files joint amicus brief in support of California net neutrality law appeared first on Open Policy & Advocacy.

This Week In Rust: This Week in Rust 390

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is tokio-console, a "top"-like utility to watch your tasks as they run.

Thanks to Simon Farnsworth for the nomination!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

324 pull requests were merged in the last week

Rust Compiler Performance Triage

Not much change overall - both regressions and improvements were all minor, apart from the 2x compile-time improvement for libcore from PR #83278.

Triage done by @pnkfelix. Revision range: 7a0f..382f

2 Regressions, 3 Improvements, 0 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs




Yat Labs



Aleph Alpha



Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

You won’t appreciate Rust unless you spend few weeks building something in it. The initial steep learning curve could be frustrating or challenging depending on how you see it, but once past that it’s hard not to love it. It’s a toddler with superpowers after all 💗

Deepu K Sasidharan on their blog

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Firefox Frontier: Peace of mind browser add-ons for Firefox

The web can be as wonderful as it is overwhelming. Fortunately there are ways you can customize Firefox with add-ons to achieve a more harmonious browsing experience. Here are a … Read more

The post Peace of mind browser add-ons for Firefox appeared first on The Firefox Frontier.

The Rust Programming Language Blog: The Plan for the Rust 2021 Edition

We are happy to announce that the third edition of the Rust language, Rust 2021, is scheduled for release in October. Rust 2021 contains a number of small changes that are nonetheless expected to make a significant improvement to how Rust feels in practice.

What is an Edition?

The release of Rust 1.0 established "stability without stagnation" as a core Rust deliverable. Ever since the 1.0 release, the rule for Rust has been that once a feature has been released on stable, we are committed to supporting that feature for all future releases.

There are times, however, when it is useful to be able to make small changes to the language that are not backwards compatible. The most obvious example is introducing a new keyword, which would invalidate variables with the same name. For example, the first version of Rust did not have the async and await keywords. Suddenly changing those words to keywords in a later version would've broken code like let async = 1;.

Editions are the mechanism we use to solve this problem. When we want to release a feature that would otherwise be backwards incompatible, we do so as part of a new Rust edition. Editions are opt-in, and so existing crates do not see these changes until they explicitly migrate over to the new edition. This means that even the latest version of Rust will still not treat async as a keyword, unless edition 2018 or later is chosen. This choice is made per crate as part of its Cargo.toml. New crates created by cargo new are always configured to use the latest stable edition.
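As a sketch, the edition choice lives in the [package] section of the crate's Cargo.toml (the crate name and version here are placeholders):

```toml
[package]
name = "my-crate"   # placeholder name
version = "0.1.0"
edition = "2018"    # opt in to a specific edition; cargo new fills in the latest stable one
```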

Editions do not split the ecosystem

The most important rule for editions is that crates in one edition can interoperate seamlessly with crates compiled in other editions. This ensures that the decision to migrate to a newer edition is a "private one" that the crate can make without affecting others.

The requirement for crate interoperability implies some limits on the kinds of changes that we can make in an edition. In general, changes that occur in an edition tend to be "skin deep". All Rust code, regardless of edition, is ultimately compiled to the same internal representation within the compiler.

Edition migration is easy and largely automated

Our goal is to make it easy for crates to upgrade to a new edition. When we release a new edition, we also provide tooling to automate the migration. It makes the minor changes necessary to make your code compatible with the new edition. For example, when migrating to Rust 2018, it changes anything named async to use the equivalent raw identifier syntax: r#async.
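As an illustration of that rename, the raw identifier form compiles on any recent toolchain regardless of edition; a minimal sketch:

```rust
fn main() {
    // In editions where `async` is a keyword, a 2015-era binding like
    // `let async = 1;` is rewritten by the migration tooling to:
    let r#async = 1;
    assert_eq!(r#async, 1);
}
```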

The automated migrations are not necessarily perfect: there might be some corner cases where manual changes are still required. The tooling tries hard to avoid changes to semantics that could affect the correctness or performance of the code.

In addition to tooling, we also maintain an Edition Migration Guide that covers the changes that are part of an edition. This guide will describe each change and give pointers to where people can learn more about it. It will also cover any corner cases or details that people should be aware of. The guide serves both as an overview of the edition and as a quick troubleshooting reference if people encounter problems with the automated tooling.

What changes are planned for Rust 2021?

Over the last few months, the Rust 2021 Working Group has gone through a number of proposals for what to include in the new edition. We are happy to announce the final list of edition changes. Each feature had to meet two criteria to make this list. First, it had to be approved by the appropriate Rust team(s). Second, its implementation had to be far enough along that we had confidence it would be completed in time for the planned milestones.

Additions to the prelude

The prelude of the standard library is the module containing everything that is automatically imported in every module. It contains commonly used items such as Option, Vec, drop, and Clone.

The Rust compiler prioritizes any manually imported items over those from the prelude, to make sure additions to the prelude will not break any existing code. For example, if you have a crate or module called example containing a pub struct Option;, then use example::*; will make Option unambiguously refer to the one from example, not the one from the standard library.
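A minimal sketch of that shadowing rule, using the hypothetical example module and unit struct from above:

```rust
mod example {
    // A unit struct that deliberately shares a name with the prelude's Option.
    pub struct Option;
}

use example::*;

fn main() {
    // The glob import takes priority over the prelude, so `Option` here
    // unambiguously refers to `example::Option`:
    let _local: Option = Option;

    // The standard library type is still reachable through its full path:
    let std_opt: std::option::Option<i32> = Some(1);
    assert_eq!(std_opt, Some(1));
}
```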

However, adding a trait to the prelude can break existing code in a subtle way. A call to x.try_into() using a MyTryInto trait might become ambiguous and fail to compile if std's TryInto is also imported, since it provides a method with the same name. This is the reason we haven't added TryInto to the prelude yet, since there is a lot of code that would break this way.

As a solution, Rust 2021 will use a new prelude. It's identical to the current one, except for three new additions:

  • std::convert::TryInto
  • std::convert::TryFrom
  • std::iter::FromIterator

Default Cargo feature resolver

Since Rust 1.51.0, Cargo has opt-in support for a new feature resolver which can be activated with resolver = "2" in Cargo.toml.

Starting in Rust 2021, this will be the default. That is, writing edition = "2021" in Cargo.toml will imply resolver = "2".

The new feature resolver no longer merges all requested features for crates that are depended on in multiple ways. See the announcement of Rust 1.51 for details.

IntoIterator for arrays

Until Rust 1.53, only references to arrays implement IntoIterator. This means you can iterate over &[1, 2, 3] and &mut [1, 2, 3], but not over [1, 2, 3] directly.

for &e in &[1, 2, 3] {} // Ok :)

for e in [1, 2, 3] {} // Error :(

This has been a long-standing issue, but the solution is not as simple as it seems. Just adding the trait implementation would break existing code. array.into_iter() already compiles today because that implicitly calls (&array).into_iter() due to how method call syntax works. Adding the trait implementation would change the meaning.

Usually we categorize this type of breakage (adding a trait implementation) as 'minor' and acceptable. But in this case there is too much code that would be broken by it.

It has been suggested many times to "only implement IntoIterator for arrays in Rust 2021". However, this is simply not possible. You can't have a trait implementation exist in one edition and not in another, since editions can be mixed.

Instead, we decided to add the trait implementation in all editions (starting in Rust 1.53.0), but add a small hack to avoid breakage until Rust 2021. In Rust 2015 and 2018 code, the compiler will still resolve array.into_iter() to (&array).into_iter() like before, as if the trait implementation does not exist. This only applies to the .into_iter() method call syntax. It does not affect any other syntax such as for e in [1, 2, 3], iter.zip([1, 2, 3]) or IntoIterator::into_iter([1, 2, 3]). Those will start to work in all editions.
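On a sufficiently recent toolchain (Rust 1.53+), a quick check that both forms behave as described:

```rust
fn main() {
    // Arrays implement IntoIterator by value since Rust 1.53, so the for
    // loop below iterates over owned elements in every edition:
    let mut sum = 0;
    for e in [1, 2, 3] {
        sum += e;
    }
    assert_eq!(sum, 6);

    // Iterating a reference to an array still yields &i32, as before:
    let total: i32 = (&[1, 2, 3]).into_iter().copied().sum();
    assert_eq!(total, 6);
}
```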

While it's a shame that this required a small hack to avoid breakage, we're very happy with how this solution keeps the difference between the editions to an absolute minimum. Since the hack is only present in the older editions, there is no added complexity in the new edition.

Disjoint capture in closures

Closures automatically capture anything that you refer to from within their body. For example, || a + 1 automatically captures a reference to a from the surrounding context.

Currently, this applies to whole structs, even when only using one field. For example, || a.x + 1 captures a reference to a and not just a.x. In some situations, this is a problem. When a field of the struct is already borrowed (mutably) or moved out of, the other fields can no longer be used in a closure, since that would capture the whole struct, which is no longer available.

let a = SomeStruct::new();

drop(a.x); // Move out of one field of the struct

println!("{}", a.y); // Ok: Still use another field of the struct

let c = || println!("{}", a.y); // Error: Tries to capture all of `a`

Starting in Rust 2021, closures will only capture the fields that they use. So, the above example will compile fine in Rust 2021.

This new behavior is only activated in the new edition, since it can change the order in which fields are dropped. As for all edition changes, an automatic migration is available, which will update your closures for which this matters. It can insert let _ = &a; inside the closure to force the entire struct to be captured as before.
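For code that must build before Rust 2021, a common workaround is to borrow the needed field outside the closure, so only that reference is captured. A sketch with a hypothetical two-field struct; this compiles in every edition:

```rust
struct SomeStruct {
    x: String,
    y: String,
}

fn main() {
    let a = SomeStruct { x: "x".to_string(), y: "y".to_string() };
    drop(a.x); // Move out of one field of the struct

    // Pre-2021 workaround: borrow the field first, so the closure
    // captures only the reference `y`, not the partially moved `a`.
    let y = &a.y;
    let c = || y.clone();
    assert_eq!(c(), "y");
}
```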

Panic macro consistency

The panic!() macro is one of Rust's most well known macros. However, it has some subtle surprises that we can't just change due to backwards compatibility.

panic!("{}", 1); // Ok, panics with the message "1"
panic!("{}"); // Ok, panics with the message "{}"

The panic!() macro only uses string formatting when it's invoked with more than one argument. When invoked with a single argument, it doesn't even look at that argument.

let a = "{";
println!(a); // Error: First argument must be a format string literal
panic!(a); // Ok: The panic macro doesn't care

(It even accepts non-strings such as panic!(123), which is uncommon and rarely useful.)

This will especially be a problem once implicit format arguments are stabilized. That feature will make println!("hello {name}") a short-hand for println!("hello {}", name). However, panic!("hello {name}") would not work as expected, since panic!() doesn't process a single argument as format string.

To avoid that confusing situation, Rust 2021 features a more consistent panic!() macro. The new panic!() macro will no longer accept arbitrary expressions as the only argument. It will, just like println!(), always process the first argument as format string. Since panic!() will no longer accept arbitrary payloads, panic_any() will be the only way to panic with something other than a formatted string.

In addition, core::panic!() and std::panic!() will be identical in Rust 2021. Currently, there are some historical differences between those two, which can be noticeable when switching #![no_std] on or off.

Reserving syntax

To make space for some new syntax in the future, we've decided to reserve syntax for prefixed identifiers and literals: prefix#identifier, prefix"string", prefix'c', and prefix#123, where prefix can be any identifier. (Except those that already have a meaning, such as b'…' and r"…".)

This is a breaking change, since macros can currently accept hello"world", which they will see as two separate tokens: hello and "world". The (automatic) fix is simple though. Just insert a space: hello "world".

Other than turning these into a tokenization error, the RFC does not attach a meaning to any prefix yet. Assigning meaning to specific prefixes is left to future proposals, which will—thanks to reserving these prefixes now—not be breaking changes.

These are some new prefixes you might see in the future:

  • f"" as a short-hand for a format string. For example, f"hello {name}" as a short-hand for the equivalent format_args!() invocation.

  • c"" or z"" for null-terminated C strings.

  • k#keyword to allow writing keywords that don't exist yet in the current edition. For example, while async is not a keyword in edition 2015, this prefix would've allowed us to accept k#async as an alternative in edition 2015 while we waited for edition 2018 to reserve async as a keyword.

Promoting two warnings to hard errors

Two existing lints are becoming hard errors in Rust 2021. These lints will remain warnings in older editions.

Or patterns in macro_rules

Starting in Rust 1.53.0, patterns are extended to support | nested anywhere in the pattern. This enables you to write Some(1 | 2) instead of Some(1) | Some(2). Since this was simply not allowed before, this is not a breaking change.

However, this change also affects macro_rules macros. Such macros can accept patterns using the :pat fragment specifier. Currently, :pat does not match |, since before Rust 1.53, not all patterns (at all nested levels) could contain a |. Macros that accept patterns like A | B, such as matches!(), use something like $($_:pat)|+. Because we don't want to break any existing macros, we did not change the meaning of :pat in Rust 1.53.0 to include |.

Instead, we will make that change as part of Rust 2021. In the new edition, the :pat fragment specifier will match A | B.

Since there are times that one still wishes to match a single pattern variant without |, the fragment specifier :pat_param has been added to retain the older behavior. The name refers to its main use case: a pattern in a closure parameter.
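As a small sketch of the workaround (the `any_of!` macro here is invented for illustration, mirroring what `matches!()` does internally), `:pat_param` plus a repetition accepts `|`-separated alternatives in any edition:

```rust
// `:pat_param` matches a single pattern variant (no top-level `|`),
// so `|`-separated alternatives are gathered with a repetition.
macro_rules! any_of {
    ($e:expr, $($p:pat_param)|+) => {
        match $e {
            $($p)|+ => true,
            _ => false,
        }
    };
}

fn is_one_or_two(v: Option<i32>) -> bool {
    any_of!(v, Some(1) | Some(2))
}

fn main() {
    assert!(is_one_or_two(Some(2)));
    assert!(!is_one_or_two(Some(3)));
    assert!(!is_one_or_two(None));
}
```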

What comes next?

Our plan is to have these changes merged and fully tested by September, to make sure the 2021 edition makes it into Rust 1.56.0. Rust 1.56.0 will then be in beta for six weeks, after which it is released as stable on October 21st.

However, note that Rust is a project run by volunteers. We prioritize the personal well-being of everyone working on Rust over any deadlines and expectations we might have set. This could mean delaying the edition a version if necessary, or dropping a feature that turns out to be too difficult or stressful to finish in time.

That said, we are on schedule and many of the difficult problems are already tackled, thanks to all the people contributing to Rust 2021! 💛

You can expect another announcement about the new edition in July. At that point we expect all changes and automatic migrations to be implemented and ready for public testing.

We'll be posting some more details about the process and rejected proposals on the "Inside Rust" blog soon.

Mozilla Security BlogBeware of Applications Misusing Root Stores

We have been alerted about applications that use the root store provided by Mozilla for purposes other than what Mozilla’s root store is curated for. We provide a root store to be used for server authentication (TLS) and for digitally signed and encrypted email (S/MIME). Applications that use Mozilla’s root store for a purpose other than that have a critical security vulnerability. With the goal of improving the security ecosystem on the internet, below we clarify the correct and incorrect use of Mozilla’s root store, and provide tools for correct use.

Background on Root Stores: Mozilla provides a root store (curated list of root certificates) to enable Certificate Authorities (CAs) to issue trusted TLS certificates which in turn enables secure browsing and encryption on the internet. The root store provided by Mozilla is intended to be used for server authentication (TLS) and for digitally signed and encrypted email (S/MIME). The root store is built into Firefox and Network Security Services (NSS). The NSS cryptographic library is a set of libraries designed to support cross-platform development of security-enabled client and server applications; it is open source and therefore has become the de-facto standard for many Linux-powered operating systems. While NSS includes Mozilla’s root store by default, it also provides the ability for developers to use their own root store, enabling application developers to provide a list of root certificates that is curated for use cases other than TLS and S/MIME.

Misuse of Root Stores: We have been alerted that some applications are using root stores provided by Mozilla or an operating system (e.g. Linux) for purposes other than what the root store is curated for. An application that uses a root store for a purpose other than what the store was created for has a critical security vulnerability. This is no different than failing to validate a certificate at all.

There are different procedures, controls, and audit criteria for different types of certificates. For example, when a CA issues a certificate for S/MIME, it ensures that the email address in the certificate is controlled by the certificate subscriber. Likewise, when a CA issues a certificate for TLS, it ensures that the domain names in the certificate are controlled by the certificate subscriber. For a CA who has only been evaluated in terms of their issuance of S/MIME certificates there is no indication that they follow the correct procedures for issuance of TLS certificates (i.e. that they properly validate who controls the domain names in the certificate). Similarly, for a CA who has only been evaluated in terms of their issuance of TLS certificates there is no indication that they follow the correct procedures for issuance of Code Signing certificates.

Additionally, some application developers directly parse a file in Mozilla’s source code management system called certdata.txt, in which Mozilla’s root store is maintained in a form that is convenient for NSS to build from. The problem with the scripts that directly parse this file is that some of the certificates in this file are not trusted but rather explicitly distrusted, so scripts that do not take the trust records into account may be trusting root certificates, such as the DigiNotar certificates, which Mozilla explicitly distrusts.

Correctly using Root Stores: Curating a root store is a costly ongoing responsibility, so the Common CA Database (CCADB) Resources tab provides lists of root certificates that are being curated for the purposes of Code Signing, Email (S/MIME), and Server Authentication (SSL/TLS). The Code Signing root certificate list is based on the data that Microsoft maintains in the CCADB for their root store. The Email (S/MIME) and Server Authentication (SSL/TLS) root certificate lists are based on the data that Mozilla maintains in the CCADB for Mozilla’s root store (aka the NSS root store). These lists of certificates may be used for their intended purposes; specifically Code Signing, S/MIME, or TLS. If you choose to use one of these lists, be sure to read the data usage terms and to update the list in your applications frequently.

We recommend that you use the certificate lists provided on the CCADB Resources page rather than directly parsing the certdata.txt file. Application developers who continue to parse the certdata.txt file should use a script that correctly takes the trust records into account.

It is important to note that decisions that a root store operator makes with regards to inclusion or exclusion of CA certificates in its root store are directly tied to the capabilities and behaviors of the software they are distributing. Additionally, a security change could be made wholly or partly in the software instead of the root store. On a best-efforts basis, Mozilla maintains a list of the additional things users of Mozilla’s root store might need to consider.

Application developers must pay attention to which Root Store to use: We strongly encourage application developers to ensure that the list of root certificates that they are using in their applications have been curated for their use case. Additionally, application developers should only use the Mozilla/NSS root store for TLS or S/MIME by using the links provided on the CCADB Resources page that list the certificates in the Mozilla/NSS root store according to the trust bits (key usage) they are curated for.

Choosing to rely on a root store also means understanding and accepting the policies for that root store. Concretely, that means respecting both the trust flags on root certificates and decisions to add or remove root certificates. In particular, Mozilla removes root certificates when they are determined to be no longer trustworthy for TLS or S/MIME. If a removal causes an application to break, then it is either correct on the basis that the root certificate should no longer be used for TLS or S/MIME, or it is a fault in that application not using the root store correctly. Significant root removals are usually announced in Mozilla’s Security Blog (e.g. DigiNotar, CNNIC, WoSign).

Mozilla is committed to maintaining our own root store because doing so is vital to the security of our products and the web in general. It gives us the ability to set policies, determine which CAs meet them, and to take action when a CA fails to do so.

The post Beware of Applications Misusing Root Stores appeared first on Mozilla Security Blog.

Support.Mozilla.OrgWhat’s up with SUMO – May 2021

Hey SUMO folks,

The second quarter of 2021 is underway and we couldn't be more excited about all the stuff we've been working on this quarter.

Let’s find out more about them!

Welcome on board!

  1. Welcome back dbben! Thanks for actively contributing back in the forum.

Community news

  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • The Advanced Search page is gone from SUMO as of May 4, 2021. The team is currently working to add syntax support to the simple search field. The plan is to offer functionality similar to advanced search but with a minimal UI. Follow our discussion about this in the contributor forum here.
  • Firefox 89 is coming soon. We’ve been working on the tagging plan across channels for the upcoming Proton launch next month. The idea is that we want to collect that feedback and report it to the product team regularly before and after the release. Here’s what we’re going to do for each channel:
    • Forum: If you’ve seen any questions related to proton changes, please tag the question with MR1.
    • Twitter: Conversocial lets us automatically tag conversations with specific keywords related to Proton. If you’ve seen other conversations that haven’t been tagged, please add the “MR1” tag manually.
    • Reddit: Include proton in your post and tag the thread with “Proton” (related thread). We’ll report the top 10 conversations to the product team on a weekly basis.
  • Check out the following release notes from Kitsune for the past month:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in April. We talked about various updates including the upcoming proton release in Firefox 89.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats


KB Page views

Month Page views Vs previous month
April 2021 8,739,284 -28.03%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Michele Rodaro
  4. Artist
  5. Marchelo Ghelman

KB Localization

Top 10 locale based on total page views

Locale Apr 2021 page views Localization progress (as of May 6)
de 10.09% 99%
es 6.80% 45%
zh-CN 6.58% 100%
fr 6.52% 88%
pt-BR 6.14% 68%
ja 4.37% 57%
ru 3.87% 99%
it 2.48% 98%
pl 2.31% 85%
id 0.96% 2%

Top 5 localization contributors in the last 90 days: 

  1. Ihor_ck
  2. Artist
  3. Milupo
  4. JimSp472
  5. Mark Heiji

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Apr 2021 3379 71.26% 14.86% 71.43%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. Jscher2000
  3. FredMcD
  4. Sfhowes
  5. Seburo

Social Support

Channel Apr 2021
Total conv Conv handled
@firefox 4,064 287
@FirefoxSupport 303 123

Top 5 contributors in April 2021

  1. Christophe Villeneuve
  2. Md Monirul Alom
  3. Devin E
  4. Andrew Truong
  5. Alex Mayorga

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX 89 release – June 1st
  • MR1/Proton
    • Firefox Beta 8 (88.0b8) will have final, if not near-final, changes implemented for Proton
  • Phase 2 of Total Cookie protection – Dynamic First Party Isolation, or dFPI, feature enabled for Private Browsing Mode Users
  • Shimming Category 2 – Automatic exceptions UI indicator
  • Personalizing New Tab – Customize your new tab experience

Firefox mobile

  • Fenix (Fx 89) – June 1st
    • Optimized toolbar menus
    • Top Site visual improvements
    • Sync tabs → tabs tray
  • iOS V34
    • Refresh of tabs view
    • Adding synced tabs to tabs tray
    • Removed tabs search bar
    • Tabs tray refresh
    • Nimbus experimentation platform integrated

Other products / Experiments

  • Mozilla VPN V2.3 – May 28
    • Windows – split tunneling
    • IPv6 Captive portal detection
  • Firefox for Amazon Fire TV and Echo Show sunset


  • Thank you Mamoon for taking up VPN questions on the forum!
  • Thank you Yoasif for helping us with Proton flair on Reddit!
  • Congrats dbben for making it into the top contributor list for the forum.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

William Lachancemozregression update May 2021

Just wanted to give some quick updates on the state of mozregression.

Anti-virus false positives

One of the persistent issues with mozregression is that it is frequently detected as a virus by many popular anti-virus scanners. The causes for this are somewhat complex, but at root the problem is that mozregression requires fairly broad permissions to do the things it needs to do (install and run copies of Firefox) and thus its behavior is hard to distinguish from a piece of software doing something malicious.

Recently there have been a number of mitigations which seem to be improving this situation:

  • :bryce has been submitting copies of mozregression to Microsoft so that Windows Defender (probably the most popular anti-virus software on this platform) doesn’t flag it.
  • I recently released mozregression 4.0.17, which upgrades the GUI dependency for pyinstaller to a later version which sets PE checksums correctly on the generated executable (pyinstaller/pyinstaller#5579).

It’s tempting to lament the fact that this is happening, but in a way I can understand it: it’s hard to reliably detect what kind of software is legitimate and what isn’t. I take the responsibility for distributing this kind of software seriously, and have pretty strict limits on who has access to the mozregression GitHub repository and what pull requests I’ll merge.

CI ported to GitHub Actions

Due to changes in Travis’s policies, we needed to migrate continuous integration for mozregression to GitHub Actions. You can see the gory details in bug 1686039. One possibly interesting wrinkle to others: due to Mozilla’s security policy, we can’t use (most) external actions inside our GitHub repository. I thus rewrote the logic for uploading a mozregression release to GitHub for MacOS and Linux GUI builds (Windows builds are still happening via AppVeyor for now) from scratch. Feel free to check the above out if you have a similar need.

MacOS Big Sur

As of version 4.0.17, the mozregression GUI now works on MacOS Big Sur. It is safe to ask community members to install and use it on this platform (though note the caveats due to the bundle being unsigned).

Usage Dashboard

Fulfilling a promise I implied last year, I created a public dataset for mozregression and a dashboard tracking mozregression use, built with Observable. There are a few interesting insights and trends that can be gleaned from our telemetry. I’d be curious if the community can find any more!

Karl DubostBrowser Wish List - Tabs and bookmarks are the same thing

My browser is like an office room with a desk and shelves, where information is accessible. Information is stacked, sometimes open and browsable at a glance, and sometimes deep on the shelves. But how would I want to access it in the browser?

Currently we bury the information of tabs and bookmarks in a big bin of context without giving any help for managing it, apart from having to go through the list of pages one by one. No wonder people feel overwhelmed and try to limit the number of tabs they have open. Handling big numbers of tabs relies on external tools (Tree Style Tabs, Sidebery, Containers, etc.) which do not go far enough to manage the tabs.

Binder of pages

Some contexts

It started with a message from Glandium sharing an article from Joseph Chee Chang with the title: When the Tab Comes Due. Tabs! Love Tabs. Reading the PDF brought some strong nodding.

Tabs should better reflect users’ complex task structures.

One potential design space is to bootstrap such mental model representations with minimal user effort by identifying their intentions using their navigation patterns. For example, a set of tabs opened from a search engine query is likely to support the same information needs; or, a set of tabs opened from a top-10 list article are likely competing options under the same category. Capturing and organizing tabs using such structures has the potential of better orienting users and providing better support for task progression and resumption.

Allow users to externalize their thoughts and synthesize information across tabs.

More directly, a recent survey showed that around half of their participants (49.4%, N=89) use spreadsheets to gather evidence and take notes across multiple online information sources to compare options (e.g., products or destinations) to help them make decisions. However, current browsers treat tabs as individual silos and provide little support for cross-referencing and collecting information between webpages. Using external tools, such as word documents and spreadsheets, creates a disconnect in users’ workspace, and can incur high cognitive and interaction costs when trying to copy and paste information to synthesize them in a separate document.


The article made me think about tabs and bookmarks: in our browsers’ UIs, these are separated. Probably they should not be. A bookmark is just a closed context, and a tab is just an opened context. But they are basically the same. The UI to access them is completely different, and the information to filter them is also totally different. Why?

So I was thinking about how both worlds could be mixed together.

  • Make the bookmarks more visual through thumbnails.
  • Make the tabs manageable through trees and categories, give them the concept of dates (created and last opened), and show these dates.
  • Add on top of this full text search on the full set (or subcategory) of tabs/bookmarks (we need a new name).
    • Search "Gardening" for tabs opened in between February 2021 and May 2021.
    • Search "Curry" for tabs in my Thailand category
  • Give the notion of views
    • By tree (the sketch below)
    • By timeline (year, month, day). Think photo management software. Sure, I opened this tab after this date, during this trip, etc.
    • By geolocation (tabs opened when I was at home or in this cafe). Sometimes we memorize information through the external context we were in.
    • By labels or keywords that you may have added.
    • By automatic classification of content. Machine learning is all the rage, so why not use the capabilities that operating systems increasingly provide for running machine learning to classify the content, or even embed a model.

Sketch for tabs bookmarks


If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.


Spidermonkey Development BlogTC39 meeting, April 19-21 2021

In this TC39 meeting, the updates to JavaScript Classes around private state have moved to stage 4. Other proposals of note this meeting were proposals related to ArrayBuffers, notably resizable ArrayBuffers and a new proposal introducing read-only ArrayBuffers and fixed views into ArrayBuffers. Read-only ArrayBuffers are not a new kind of ArrayBuffer, but rather a way to freeze existing ArrayBuffers so that they are not modified accidentally. Fixed views into ArrayBuffers would have the goal of not exposing more than the intended view of an ArrayBuffer to a third party.

One of the interesting new proposals is Object.has or Object.hasOwn. This would supply developers with a convenient shorthand. The following:

let hasOwn = (obj, prop) =>, prop);

if (hasOwn(object, "foo")) {
  console.log("has property foo");
}

could instead be written as:

if (Object.hasOwn(object, "foo")) {
  console.log("has property foo");
}

Calling hasOwnProperty directly on an object is a tricky corner case (the object may not inherit from Object.prototype, or may shadow the method), and this would simplify things.

Pattern matching was brought back with an update. The proposal has a number of champions now, and a new effort to cleanly define the syntax. The record/tuple champions brought a new proposal that would help align how mutable and immutable structures have symmetry in their methods.

Needs minor change:


Keep an eye on…

  • Pattern Matching
  • Read-only ArrayBuffers and Fixed views
  • Change array by copy

Normative Spec Changes


Proposals Seeking Advancement to Stage 4

Class fields, private methods, and static class features

Proposals Seeking Advancement to Stage 3

Intl Locale Info for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: An API to expose information of locale, such as week data (first day of a week, weekend start, weekend end), hour cycle, measurement system, commonly used calendar, etc.
  • Impact on SM: Needs implementation
  • Outcome: Advanced to Stage 3.

ResizableArrayBuffer for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Introduces two new ArrayBuffers, one resizable, the other only growable (and shared). The update to resizable ArrayBuffers introduces implementation defined rounding.
  • Impact on SM:
  • Outcome: Did not achieve consensus. Moddable requested more time to investigate the cost of having two new globals on their engine. The current outcome is that instead of introducing these new globals, we will instead overload the name, with a parameter (name to be determined) that will allow for the creation of a resizable/growable arraybuffer/sharedarraybuffer.

Intl DisplayNames v2 for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further coverage to the existing Intl.DisplayNames API.
  • Impact on SM: Will Need implementation
  • Outcome: Did not achieve Consensus. There were a few requests for more investigation and time to resolve issues. Specifically, around CLDR and its defined language display names and whether they should all be supported in #29.

Stage 3 Updates

Import Assertions update

  • Notes
  • Proposal Link
  • Slides
  • Summary: The Import Assertions proposal adds an inline syntax for module import statements to pass on more information alongside the module specifier. The initial application for such assertions will be to support additional types of modules in a common way across JavaScript environments, starting with JSON modules. The syntax allows for the following.
      import json from "./foo.json" assert { type: "json" };

    The update focused on the question of “what do we do when we have an assertion that isn’t recognized?”. Currently if a host sees a module type assertion that they don’t recognize they can choose what to do. There wasn’t a resolution here so far.

  • Impact on SM: Implementation in Progress

Proposals Seeking Advancement to Stage 2

Extend TimeZoneName Option Proposal for Stage 2

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further options for the TimeZoneName option in Intl.DateTimeFormat, allowing for greater accuracy in representing different time zones.
  • Impact on SM: Will Need implementation
  • Outcome: Advanced to stage 2.

Symbols as WeakMap keys for Stage 2

  • Notes
  • Proposal Link
  • Slides
  • Summary: Allows symbols in WeakMap Keys. The discussion focused on the potential issue of using globally shared symbols in a weakmap, as these would effectively be strongly held. As this is already possible in JavaScript (globals can be keys in a weakmap and are also never garbage collected), it was determined that this was not a significant risk.
  • Impact on SM: Will Need implementation
  • Outcome: Advanced to stage 2.

Stage 2 Updates

Intl.NumberFormat V3 Stage 2 Update

  • Notes
  • Proposal Link
  • Slides
  • Summary: A batch of internationalization features for number formatting. This update focused on changes to grouping enums, rounding and precision options, and sign display negative.
  • Impact on SM: Will Need implementation

Intl Enumeration API update

  • Notes
  • Proposal Link
  • Slides
  • Summary: Intl enumeration allows inspecting what is available in the Intl API. Initially, we had reservations that this could be used for fingerprinting. Mozilla did an analysis and no longer holds this concern. However, it is unclear if this API has use cases which warrant its inclusion in the language.
  • Impact on SM: Will Need implementation

Proposals Seeking Advancement to Stage 1

Read-only ArrayBuffer and Fixed view of ArrayBuffer for Stage 1

  • Notes
  • Proposal Link for Read-Only ArrayBuffer
  • Proposal Link for Fixed view
  • Slides
  • Summary: These two proposals introduce ways to constrain ArrayBuffers. The first, read-only ArrayBuffers, would allow you to freeze arraybuffers much the way that you can freeze JS objects. Once it is frozen, it cannot be unfrozen or altered. The second, fixed view, creates a view that third parties cannot change. They are given only one view into the ArrayBuffer.
  • Outcome: Advanced to stage 1.

Change Array by copy for Stage 1

  • Notes
  • Proposal Link
  • Slides
  • Summary: Discussed last meeting in the Records and Tuples topic. This proposal will introduce a set of methods which array and tuple will share. The issue with a method like “sort” is that it operates on the array in a mutable way. This proposal introduces a new api, “sorted” which will copy the array and modify it, rather than modifying it in place. The full set of apis is still being determined.
  • Outcome: Advanced to stage 1.

Object.has for Stage 1

  • Notes
  • Proposal Link
  • Slides
  • Summary: Checking an object for a property is, at the moment, rather unintuitive and error prone. This proposal introduces a more ergonomic wrapper around a common pattern involving Object.prototype.hasOwnProperty which allows the following:
      let hasOwnProperty = Object.prototype.hasOwnProperty
      if (, "foo")) {
        console.log("has property foo")
      }
    to be written as:

      if (Object.hasOwn(object, "foo")) {
        console.log("has property foo")
      }
  • Outcome: Advanced to stage 1.

Stage 1 Updates

Pattern matching update

  • Notes
  • Proposal Link
  • Slides
  • Summary: This update revives the pattern matching proposal, which will allow programmers to do complex matches on objects and other types. The proposal has been taken over by a new champion group. The goal is to introduce a useful alternative to switch, with more expressive matching.

Daniel StenbergThe libcurl transfer state machine

I’ve worked hard on making the presentation I ended up calling libcurl under the hood. A part of that presentation is spent on explaining the main libcurl transfer state machine, and here I’ll try to document some of that in written form. Understanding the main transfer state machine in libcurl could be valuable and interesting for anyone who wants to work on libcurl internals and maybe improve it.


The state is kept in the easy handle, in the struct field called mstate. The source file for this state machine is called multi.c.

An easy handle is always in exactly one of these states for as long as it exists.

This transfer state machine is designed to work for all protocols libcurl supports, but basically no protocol will transition through all states. As you can see in the drawing, there are many different possible transitions from a lot of the states.

libcurl transfer state machine

(click the image for a larger version)


A transfer starts up there above the surface in the INIT state. That’s a yellow box next to the little start button. Basically the boat shows how it goes from INIT to the right over to MSGSENT with its finish flag, but the real path is all done under the surface.

The yellow boxes (states) are the ones that exist before or when a connection is set up. The striped background is for all states that have a single and specific connectdata struct associated with the transfer.


If there’s a connection limit, either in total or per host etc, the transfer can get sent to the PENDING state to wait for conditions to change. If not, the state probably moves on to one of the blue ones to resolve host name and connect to the server etc. If a connection could be reused, it can shortcut immediately over to the green DO state.

The blue states are all about setting up the connection to a state of fully connected, authenticated and logged in. Ready to send the first request.


The green DO states are all about sending the request with one or more commands so that the file transfer can begin. There are several such states to properly support all protocols but also for historical reasons. We could probably remove a state there by some clever reorgs if we wanted.


When a request has been issued and the transfer starts, it transitions over to PERFORMING. In the white states data is flowing. Potentially a lot. Potentially in both or either direction. If during the transfer curl finds out that the transfer is faster than allowed, it will move into RATELIMITING until it has cooled down a bit.


All the post-transfer states are red in the picture. DONE is the first of them, and after having done what it needs to wrap up the transfer, it disassociates from the connection and moves to COMPLETED. There are no stripes behind that state. Disassociate here means that the connection is returned to the connection pool for later reuse, or, in the worst case, closed if deemed that it can’t be reused or if the application has instructed it so.

As you’ll note, there’s no disconnect anywhere in the state machine. This is simply because the disconnect is not really a part of the transfer at all.


This is the end of the road. In this state a message will be created and put in the outgoing queue for the application to read, and then as a final last step it moves over to MSGSENT where nothing more happens.

A typical handle remains in this state until the transfer is reused and restarted, at which point it will be set back to the INIT state and the journey begins again. Possibly with other transfer parameters and URL this time. Or perhaps not.
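As a rough illustration only (this is not libcurl’s actual C code; the real machine has many more states and possible transitions), the happy path described above can be sketched as a small enum in the spirit of the mstate field:

```rust
// Simplified transfer states; the "blue" resolve/connect states are
// collapsed into a single Connecting variant for brevity.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum State {
    Init,
    Connecting,
    Do,         // issue the request
    Performing, // data is flowing
    Done,       // wrap up, return the connection to the pool
    Completed,
    MsgSent,
}

// Advance one step along the happy path; a reused handle goes from
// MSGSENT back to INIT when a new transfer starts.
fn advance(s: State) -> State {
    use State::*;
    match s {
        Init => Connecting,
        Connecting => Do,
        Do => Performing,
        Performing => Done,
        Done => Completed,
        Completed => MsgSent,
        MsgSent => Init, // reuse restarts the journey
    }
}

fn run_to_msgsent() -> State {
    let mut s = State::Init;
    // A handle is always in exactly one state; step until the final one.
    while s != State::MsgSent {
        s = advance(s);
    }
    s
}

fn main() {
    assert_eq!(run_to_msgsent(), State::MsgSent);
    assert_eq!(advance(State::MsgSent), State::Init);
}
```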

State machines within each state

What this state diagram and explanation doesn’t show is of course that in each of these states, there can be protocol specific handling and each of those functions might in themselves of course have their own state machines to control what to do and how to handle the protocol details.

Each protocol in libcurl has its own “protocol handler” and most of the protocol specific stuff in libcurl is then done by calls from the generic parts to the protocol specific parts with calls like protocol_handler->proto_connect() that calls the protocol specific connection procedure.

This allows the generic state machine described in this blog post to not really know the protocol specifics, and yet all 26 currently supported transfer protocols can be handled.

libcurl under the hood – the video

Here’s the full video of libcurl under the hood.

If you want to skip directly to the state machine diagram and the following explanation, go here.


Image by doria150 from Pixabay

Nick Fitzgerald: Hit the Ground Running: Wasm Snapshots for Fast Start Up

I gave a (virtual) talk at the WebAssembly Summit this year titled “Hit the Ground Running: Wasm Snapshots for Fast Start Up”. Here is the talk’s abstract:

Don’t make your users wait while your Wasm module initializes itself! Wizer instantiates your WebAssembly module, executes its initialization functions, and then snapshots the initialized state out into a new, pre-initialized WebAssembly module. Now you can use this new module to hit the ground running, without waiting for any of that first-time initialization code to complete. This talk will cover the design and implementation of Wizer; discuss its performance characteristics and the scenarios in which it excels and when it isn’t the right tool; and finally, in the process of doing all that, we’ll take a closer look at what makes up the guts of a WebAssembly module: memories, globals, tables, etc.

You can view the slide deck here, check out the benchmarks here, and the recording is embedded below:

The Rust Programming Language Blog: Announcing Rust 1.52.1

The Rust team has prepared a new release, 1.52.1, working around a bug in incremental compilation which was made into a compiler error in 1.52.0. We recommend all Rust users, including those currently using stable versions prior to 1.52.0, upgrade to 1.52.1 or disable incremental compilation. Guidance on how to do so is available below.

If you have a previous version of Rust installed via rustup, getting Rust 1.52.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.


This release works around broken builds on 1.52.0, which are caused by newly added verification. The bugs this verification detects are present in all Rust versions[1], and can trigger miscompilations in incremental builds, so downgrading to a prior stable version is not a fix.

Users are encouraged to upgrade to 1.52.1 or disable incremental in their local environment if on a prior version: please see the what you should do section for details on how to do so.

Incremental compilation is off by default for release builds, so few production builds should be affected (only for users who have opted in).

Miscompilations that can arise from the bugs in incremental compilation generate incorrect code in final artifacts, essentially producing malformed binaries, which means that in theory any behavior is possible. In practice we are currently only aware of one particular known miscompilation, but bugs due to incremental are notoriously hard to track down: users frequently simply rebuild after some light editing if they see unexpected results from their binaries, and this often causes sufficient recompilation to fix the bug(s).

This post is going to:

  1. Explain what the errors look like,
  2. Explain what the check does, at a high level,
  3. Explain how the check is presenting itself in the Rust 1.52.0 release,
  4. Tell you what you should do if you see an unstable fingerprint on your project,
  5. Describe our plans for how the Rust project will address the problems discussed here.

What does the error look like?

The error message looks something like this, with the key piece being the "found unstable fingerprints" text.

thread 'rustc' panicked at 'assertion failed: `(left == right)`
  left: `Some(Fingerprint(4565771098143344972, 7869445775526300234))`,
  right: `Some(Fingerprint(14934403843752251060, 623484215826468126))`: found unstable fingerprints for <massive text describing rustc internals elided>

error: internal compiler error: unexpected panic

note: the compiler unexpectedly panicked. this is a bug.

This is the error caused by the internal consistency check, and as stated in the diagnostic, it yields an "Internal Compiler Error" (or ICE). In other words, it represents a bug in the internals of the Rust compiler itself. In this case, the ICE is revealing a bug in incremental compilation that predates the 1.52.0 release and could result in miscompilation if it had not been caught.

What are fingerprints? Why are we checking them?

The Rust compiler has support for "incremental compilation", which has been described in a 2016 blog post. When incremental compilation is turned on, the compiler breaks the input source into pieces, and tracks how those input pieces influence the final build product. Then, when the inputs change, it detects this and reuses artifacts from previous builds, striving to expend effort solely on building the parts that need to respond to the changes to the input source code.

Fingerprints are part of our architecture for detecting when inputs change. More specifically, a fingerprint (along with some other state to establish context) is a 128-bit value intended to uniquely identify internal values used within the compiler. Some compiler-internal results are stored on disk ("cached") between runs. Fingerprints are used to validate that a newly computed result is unchanged from the cached result. (More details about this are available in the relevant chapter of the rustc dev guide.)

The fingerprint stability check is a safeguard asserting internal consistency of the fingerprints. Sometimes the compiler is forced to rerun a query, and expects that the output is the same as from a prior incremental compilation session. The newly enabled verification checks that the value is indeed as expected, rather than assuming so. In some cases, due to bugs in the compiler's implementation, this was not actually the case.
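
As a rough illustration of what such a check does, here is a toy Rust sketch. This is an illustration of the idea only: rustc's real fingerprints are 128-bit stable hashes computed over compiler-internal values, not a `DefaultHasher` over strings, and the function names here are invented.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy fingerprint: a hash of a computed result.
fn fingerprint<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

// Re-running a "query" must reproduce the fingerprint stored in the
// on-disk cache from the previous session; a mismatch is the
// "found unstable fingerprints" situation.
fn verify_cached(cached_fp: u64, recomputed: &str) -> Result<(), String> {
    let new_fp = fingerprint(&recomputed);
    if new_fp == cached_fp {
        Ok(())
    } else {
        Err(format!("found unstable fingerprints: {} vs {}", cached_fp, new_fp))
    }
}

fn main() {
    let cached = fingerprint(&"type of `foo`: i32");
    // Unchanged result between sessions: the check passes.
    assert!(verify_cached(cached, "type of `foo`: i32").is_ok());
    // A result that unexpectedly changed: the check fails.
    assert!(verify_cached(cached, "type of `foo`: u32").is_err());
    println!("fingerprint checks behaved as expected");
}
```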


We initially added these fingerprint checks as a tool to use when developing rustc itself, back in 2017. It was solely provided via an unstable -Z flag, only available to nightly and development builds.

More recently, in March, we encountered a miscompilation that led us to turn on verify-ich by default. The Rust compiler team decided it was better to catch fingerprint problems and abort compilation, rather than allow for potential miscompilations (and subsequent misbehavior) to sneak into Rust programmers' binaries.

When we first turned on the fingerprint checks by default, there was a steady stream of issues filed by users of the nightly (and beta) toolchains, and steady progress has been made on identifying fixes, a number of which have already landed.

In the past week, we had started making plans to improve the user-experience, so that the diagnostic issued by the check would do a better job of telling the programmer what to do in response. Unfortunately, this was done under the assumption that the new verification would ship in 1.53, not 1.52.

It turns out verify-ich was turned on in version 1.52.0, which was released recently.

Today's new release, 1.52.1, works around the breakage caused by the newly added verification by temporarily changing the defaults in the Rust compiler to disable incremental unless the user knowingly opts in.

How does this show up

Essentially, for some crates, certain sequences of edit-compile cycles will cause rustc to hit the "unstable fingerprints" ICE. I showed one example at the start of this blog post.

Another recent example looks like this:

thread 'rustc' panicked at 'found unstable fingerprints for predicates_of(<massive text describing rustc internals elided>)', /rustc/.../compiler/rustc_query_system/src/query/

They all arise from inconsistencies when comparing the incremental-compilation cache stored on disk against the values computed during a current rustc invocation, which means they all arise from using incremental compilation.

There are several ways that you may have incremental compilation turned on:

  1. You may be building with the dev or test profiles which default to having incremental compilation enabled.
  2. You may have set the environment variable CARGO_INCREMENTAL=1
  3. You may have enabled the build.incremental setting in your Cargo config
  4. You may have enabled the incremental setting in your Cargo.toml for a given profile

If your project has not adjusted the defaults, then incremental is disabled in the release profile (e.g. when running cargo build --release) on all Rust versions[1], and these issues should not affect your release builds.

What should a Rust programmer do in response

The Internal Compiler Error asks you to report a bug, and if you can do so, we still want that information. We want to know about the cases that are failing.

But regardless of whether or not you file a bug, the problem can be worked around on your end by either:

  1. upgrading to 1.52.1, if you have not yet done so (which will disable incremental for you), or
  2. deleting your incremental compilation cache (e.g. by running cargo clean), or
  3. forcing incremental compilation to be disabled, by setting CARGO_INCREMENTAL=0 in your environment or build.incremental to false in the config.toml.
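
For reference, the config-file form of option 3 looks like this (a sketch of the `build.incremental` setting the list above refers to, placed in a Cargo `config.toml`):

```toml
# .cargo/config.toml
[build]
incremental = false
```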

We recommend that users of 1.52.0 upgrade to 1.52.1, which disables incremental compilation.

We do not recommend that users of 1.52.0 downgrade to an earlier version of Rust in response to this problem. As noted above, there is at least one instance of a silent miscompilation caused by incremental compilation that was not caught until we added the fingerprint checking.

If a user is willing to deal with the incremental verification ICE's, and wishes to opt back into the 1.52.0 behavior, they may set RUSTC_FORCE_INCREMENTAL to 1 in their environment. The Rust compiler will then respect the -Cincremental option passed by Cargo, and things will work as before, though with the added verification. Note that this flag does not enable incremental if it has not already been separately enabled (whether by Cargo or otherwise).

If you are currently using a toolchain prior to 1.52.0, and wish to continue doing so, we recommend that you disable incremental compilation to avoid hitting silent miscompilations.

Since incremental compilation first landed, it has been a major improvement to compile times for many users, and it has only improved over time. We acknowledge that the workarounds and recommendations presented here are painful, and we will be working hard to ensure the situation is as temporary as possible.

What is the Rust project going to do to fix this

Short-term plan

We have issued 1.52.1 today which:

  • Disables incremental compilation in the Rust compiler (unless asked for by a new environment variable, RUSTC_FORCE_INCREMENTAL=1).
  • Improves diagnostic output for the new verification if incremental compilation is enabled, indicating how to work around the bugs by purging incremental state or disabling incremental.

This is intended to be a mitigation that helps the majority of Rust users have an upgrade path to a safe Rust compiler which does not have the risk of miscompiling their code, but also provide the option for users willing to deal with the errors to do so.

We expect to continue to actively invest in fixing the bugs, and depending on our confidence in the fixes, may issue a 1.52.2 point release which backports those fixes to the stable channel. Users wishing to help us test can use the nightly channel, and report bugs to rust-lang/rust with any ICEs they are seeing.

We are also currently not planning to disable incremental on the beta channel, but this decision has not been firmly committed to. A number of fixes are available on 1.53 beta today, so users who wish to continue using incremental may want to switch to that. Nightly will always have the latest in fixes, of course.

Long-term plan

The long-term plan is to fix the bugs! Incremental compilation is the only realistic way for the Rust compiler to be able to provide a fast edit-compile-run cycle for all of its programmers, and so we need to address all of the issues that have been identified thus far via verify-ich. (There are 32 such issues as of this writing, though many are duplicates.)

We are actively investing in this, and a number of bugs have already been identified and fixed. Depending on the state of the fixes, future stable releases (1.53 and onwards) will likely re-enable incremental compilation.

The Rust teams will also be developing plans to ensure we have better tracking systems in place in the future for bugs, both to prevent situations like this from arising again, but also to further increase the stability of our releases by tracking bugs more accurately as they propagate across channels.

  [1] Since incremental was first enabled, which was in Rust 1.24.

Daniel Stenberg: curl up 2021

curl up 2021 happened today.

Five presentations were done, all prerecorded and made available before the event. On Sunday afternoon we gathered to discuss the presentations and everything around those topics.

The presentations

  1. The state of curl 2021 – Daniel Stenberg
  2. curl security 2021 – Daniel Stenberg
  3. libcurl under the hood – Daniel Stenberg
  4. Interfacing rust – Stefan Eissing
  5. Curl profiling – Jim Fuller.


We were not very many who actually joined the meeting, and out of the people in the meeting a majority decided to be spectators only and remained muted with their cameras off.

It turned out as a two hour long mostly casual talk among me, Stefan Eissing and Emil Engler about the presentations and related topics. Toward the end, Kamil Dudka appeared.

The three of us got to talk about roadmap items, tests, security, writing code that interfaces with modules written in Rust, and which details of the libcurl internals could use further description and documentation.

The video

The video roughly follows the agenda order from the 2021 wiki page and the discussion topics mentioned there.


Thanks to wolfSSL for sponsoring the video meeting account used!

Patrick Cloke: A new maintainer for django-allauth-2fa

I’m excited to announce the django-allauth-2fa project has a new maintainer! It can now be found under the valohai organization on GitHub, who have already contributed quite a bit to the package.

This project lets you easily add two-factor authentication to a Django project using django-allauth.

As a bit …

Daniel Stenberg: curl pictures

“Memes” or other fun images involving curl. Please send or direct me to other ones you think belong in this collection! Kept here solely to boost my ego.

All modern digital infrastructure

This is the famous xkcd strip number 2347, modified to say Sweden and 1997 by @tsjost. I’ve seen this picture taking some “extra rounds” in various places, somehow also being claimed to be xkcd 2347 when people haven’t paid attention to the “patch” in the text.

Entire web infrastructure

Image by @matthiasendler

Car contract

This photo of a rental car contract with an error message on the printed paper was given to me by a good person I’ve unfortunately lost track of.

The developer dice

Thanks to Cassidy. (For purchase here.)

Don’t use -X

Remember that using curl -X is very often just the wrong thing to do. Jonas Forsberg helps us remember:

The curl

In an email from NASA that I received and shared, the person asked about details for “the curl”.

Image by eichkat3r at mastodon.

You’re sure this is safe?

Piping curl output straight into a shell is a much debated practice…

Picture by Tim Chase.

curl, reinvented by…

Remember the powershell curl alias?

Picture by Shashimal Senarath.


This is an old classic:


Screenshotted curl credits.

The Firefox Frontier: Detroit’s digital divide reminds us how far America has to go for internet equity

by Biba Adams The need for equitable broadband internet access has been a problem since the term “digital divide” was first coined in the mid-1990s, but these conversations are getting … Read more

The post Detroit’s digital divide reminds us how far America has to go for internet equity appeared first on The Firefox Frontier.

The Rust Programming Language Blog: Announcing Rust 1.52.0

The Rust team is happy to announce a new version of Rust, 1.52.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.52.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.52.0 on GitHub.

What's in 1.52.0 stable

The most significant change in this release is not to the language or standard libraries, but rather an enhancement to tooling support for Clippy.

Previously, running cargo check followed by cargo clippy wouldn't actually run Clippy: the build caching in Cargo didn't differentiate between the two. In 1.52, however, this has been fixed, which means that users will get the expected behavior independent of the order in which they run the two commands.

Stabilized APIs

The following methods were stabilized.

The following previously stable APIs are now const.

Other changes

There are other changes in the Rust 1.52.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.52.0

Many people came together to create Rust 1.52.0. We couldn't have done it without all of you. Thanks!

The Firefox Frontier: Mozilla Explains: What are deceptive design patterns?

Deceptive design patterns are tricks used by websites and apps to get you to do things you might not otherwise do, like buy things, sign up for services or switch … Read more

The post Mozilla Explains: What are deceptive design patterns? appeared first on The Firefox Frontier.

Data@Mozilla: Announcing Mozilla Rally

We wrote recently about how difficult it is to understand the data companies collect from you, and what they’re doing with it. These companies determine how your data is used and who benefits. Cutting people out of decisions about their data is an inequity that harms not only individuals, but also society and the internet. We believe that you should determine who benefits from your data. Today, we’re taking a step in that direction with the alpha release of Mozilla Rally. Rally is now available for desktop Firefox users age 19 and older in the USA.

Rally is aimed at rebuilding your equity in your data. We allow you to choose how to contribute your data and for what purpose. We’re building a community to help understand some of the biggest problems of the internet, and we want you to join us.

How Rally Works

When you join Rally, you have the opportunity to participate in data crowdsourcing projects — we call them “studies” — focused on understanding and finding solutions for social problems caused by the data economy. You will always see a simple explanation of a study’s purpose, the data it collects, how the data will be used, and who will have access to your data. All your data is stored in Mozilla’s restricted servers, and access to the analysis environment is tightly controlled. For those who really want to dig deep, you can read our detailed disclosures and even inspect our code.  

Our First Study

Major tech and ad companies track you and others like you online. They can even predict what you’re likely to do next. This information isn’t available to you. Our first study seeks to remedy this imbalance by exploring the time we spend online. We will publish our findings to give you a first look at the data you help create as part of our Rally community. 

This first study also creates a foundation for communities to share data in equitable ways. Rally aims to improve our collective understanding of the value of personal data, so we will share public reports and updates with our community at key milestones. 

Change starts with exploration

We started Rally as an innovation program, building on earlier experiments with trusted research institutions. In the coming months, we are exploring new products and public interest projects that return equity to communities. We are data optimists and want to change the way the data economy works for both people and day-to-day business. We are committed to putting our users first every step of the way, and building a community together. 

Join us! You can also follow us on Twitter.

This Week In Rust: This Week in Rust 389

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs
Papers/Research Projects

Crate of the Week

This week's crate is display_utils, a library with Displayable structs to make string manipulation easier.

Thanks to kangalioo for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

322 pull requests were merged in the last week

Rust Compiler Performance Triage

Quiet week, no significant changes.

Triage done by @simulacrum. Revision range: 537544..7a0f178

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

DEX Labs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Using R or Numpy is like driving around in a sports car. You just turn the wheel, press the pedals, and burn rubber. Rust (and other systems languages) are like getting a spaceship. You can go places and do things that you never dreamt of in a car. They are harder to pilot, but the possibilities seem unlimited! With the Rust ecosystem still in development, it feels like parts of your spaceship come in boxes of parts labeled "some assembly required".

Erik Rose on rust-users

Thanks to Phlopsi for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Daniel Stenberg: Every base is base 10

This image originally comes from but sadly it seems the page that once showed it is no longer there. I saved it from that site already back in 2015, but I cannot recall the exact URL it used. The image is still available at[1].png.

Since I consider this picture such an iconic classic and masterpiece, I decided I better host it here in a small attempt to preserve it for everyone to enjoy.

Because, you know, every base is base 10.

Update: the original page on

Spidermonkey Development Blog: Implementing Private Fields for JavaScript

This post is cross-posted from Matthew Gaudet’s blog

When implementing a language feature for JavaScript, an implementer must make decisions about how the language in the specification maps to the implementation. Sometimes this is fairly simple, where the specification and implementation can share much of the same terminology and algorithms. Other times, pressures in the implementation make it more challenging, requiring or pressuring the implementation strategy to diverge from the language specification.

Private fields are an example of where the specification language and implementation reality diverge, at least in SpiderMonkey, the JavaScript engine which powers Firefox. To understand more, I’ll explain what private fields are, a couple of models for thinking about them, and explain why our implementation diverges from the specification language.

Private Fields

Private fields are a language feature being added to the JavaScript language through the TC39 proposal process, as part of the class fields proposal, which is at Stage 4 in the TC39 process. We will ship private fields and private methods in Firefox 90.

The private fields proposal adds a strict notion of ‘private state’ to the language. In the following example, #x may only be accessed by instances of class A:

class A {
  #x = 10;
}

This means that outside of the class, it is impossible to access that field, unlike public fields, as the following example shows:

class A {
  #x = 10; // Private field
  y = 12; // Public field
}

var a = new A();
a.y; // Accessing public field y: OK
a.#x; // Syntax error: reference to undeclared private field

Even various other tools that JavaScript gives you for interrogating objects are prevented from accessing private fields (e.g. Object.getOwnProperty{Symbols,Names} don’t list private fields; there’s no way to use Reflect.get to access them).
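
That invisibility is easy to check from a script. The snippet below only exercises the reflection calls (actually writing `a.#x` outside the class would not even parse, as the earlier example notes):

```javascript
class A {
  #x = 10; // private field: invisible to reflection
  y = 12;  // public field: an ordinary own property
}

const a = new A();
console.log(Object.getOwnPropertyNames(a));   // [ 'y' ]
console.log(Object.getOwnPropertySymbols(a)); // []
console.log(Object.keys(a));                  // [ 'y' ]
```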

A Feature Three Ways

When talking about a feature in JavaScript, there are often three different aspects in play: the mental model, the specification, and the implementation.

The mental model provides the high level thinking that we expect programmers to use mostly. The specification in turn provides the detail of the semantics required by the feature. The implementation can look wildly different from the specification text, so long as the specification semantics are maintained.

These three aspects shouldn’t produce different results for people reasoning through things (though, sometimes a ‘mental model’ is shorthand, and doesn’t accurately capture semantics in edge case scenarios).

We can look at private fields using these three aspects:

Mental Model

The most basic mental model one can have for private fields is what it says on the tin: fields, but private. Now, JS fields become properties on objects, so the mental model is perhaps ‘properties that can’t be accessed from outside the class’.

However, when we encounter proxies, this mental model breaks down a bit; trying to specify the semantics for ‘hidden properties’ and proxies is challenging (what happens when a Proxy is trying to provide access control to properties, if you aren’t supposed to be able to see private fields with Proxies? Can subclasses access private fields? Do private fields participate in prototype inheritance?). In order to preserve the desired privacy properties, an alternative mental model became the way the committee thinks about private fields.

This alternative model is called the ‘WeakMap’ model. In this mental model you imagine that each class has a hidden weak map associated with each private field, such that you could hypothetically ‘desugar’

class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

into something like

class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }

  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

The WeakMap model is, surprisingly, not how the feature is written in the specification, but it is an important part of the design intention behind private fields. I will cover a bit later how this mental model shows up in the implementation.


Specification

The actual specification changes are provided by the class fields proposal, specifically the changes to the specification text. I won’t cover every piece of this specification text, but I’ll call out specific aspects to help elucidate the differences between specification text and implementation.

First, the specification adds the notion of [[PrivateName]], which is a globally unique field identifier. This global uniqueness is to ensure that two classes cannot access each other’s fields merely by having the same name.

function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

let [A, B] = [0, 1].map(createClass);
let a = new A();
let b = new B();

A.getX(a); // Allowed: Same class
A.getX(b); // Type Error, because different class.

The specification also adds a new ‘internal slot’, which is a specification level piece of internal state associated with an object in the spec, called [[PrivateFieldValues]] to all objects. [[PrivateFieldValues]] is a list of records of the form:

{
  [[PrivateName]]: Private Name,
  [[PrivateFieldValue]]: ECMAScript value
}

To manipulate this list, the specification adds four new algorithms:

  1. PrivateFieldFind
  2. PrivateFieldAdd
  3. PrivateFieldGet
  4. PrivateFieldSet

These algorithms largely work as you would expect: PrivateFieldAdd appends an entry to the list (though, in the interest of trying to provide errors eagerly, if a matching Private Name already exists in the list, it will throw a TypeError. I’ll show how that can happen later). PrivateFieldGet retrieves a value stored in the list, keyed by a given Private name, etc.
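
A toy JavaScript model of that list and its algorithms can make the behavior concrete. The names here mirror the spec text, but this is an illustration only, not how any engine implements it:

```javascript
// One object's [[PrivateFieldValues]] is modeled as a plain array of records.
function PrivateFieldFind(fields, privateName) {
  return fields.find((entry) => entry.privateName === privateName);
}

function PrivateFieldAdd(fields, privateName, value) {
  // Erroring eagerly: adding an already-present field is a TypeError.
  if (PrivateFieldFind(fields, privateName) !== undefined) {
    throw new TypeError("private field already present on object");
  }
  fields.push({ privateName, privateFieldValue: value });
}

function PrivateFieldGet(fields, privateName) {
  const entry = PrivateFieldFind(fields, privateName);
  if (entry === undefined) {
    throw new TypeError("reference to undeclared private field");
  }
  return entry.privateFieldValue;
}

// Each Private Name is globally unique; a Symbol models that here.
const nameX = Symbol("#x");
const fields = [];
PrivateFieldAdd(fields, nameX, 15);
console.log(PrivateFieldGet(fields, nameX)); // 15
```

Note that adding the same Private Name twice throws, which is exactly the pre-existence check exercised in the next section.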

The Constructor Override Trick

When I first started to read the specification, I was surprised to see that PrivateFieldAdd could throw. Given that it was only called from a constructor on the object being constructed, I had fully expected that the object would be freshly created, and therefore you’d not need to worry about a field already being there.

This turns out to be possible, a side effect of some of the specification’s handling of constructor return values. To be more concrete, the following is an example provided to me by André Bargull, which shows this in action.

class Base {
  constructor(o) {
    return o; // Note: We are returning the argument!
  }
}

class Stamper extends Base {
  #x = "stamped";
  static getX(o) {
    return o.#x;
  }
}

Stamper is a class which can ‘stamp’ its private field onto any object:

let obj = {};
new Stamper(obj); // obj now has private field #x
Stamper.getX(obj); // => "stamped"

This means that when we add private fields to an object we cannot assume it doesn’t have them already. This is where the pre-existence check in PrivateFieldAdd comes into play:

let obj2 = {};
new Stamper(obj2);
new Stamper(obj2); // Throws 'TypeError' due to pre-existence of private field

This ability to stamp private fields into arbitrary objects interacts with the WeakMap model a bit here as well. For example, given that you can stamp private fields onto any object, that means you could also stamp a private field onto a sealed object:

var obj3 = Object.seal({});
new Stamper(obj3);
Stamper.getX(obj3); // => "stamped"

If you imagine private fields as properties, this is uncomfortable, because it means you’re modifying an object that was sealed by a programmer against future modification. However, using the weak map model, it is totally acceptable, as you’re only using the sealed object as a key in the weak map.

PS: Just because you can stamp private fields into arbitrary objects, doesn’t mean you should: Please don’t do this.

Implementing the Specification

When faced with implementing the specification, there is a tension between following the letter of the specification, and doing something different to improve the implementation on some dimension.

Where it is possible to implement the steps of the specification directly, we prefer to do that, as it makes maintenance of features easier as specification changes are made. SpiderMonkey does this in many places. You will see sections of code that are transcriptions of specification algorithms, with step numbers for comments. Following the exact letter of the specification can also be helpful where the specification is highly complex and small divergences can lead to compatibility risks.

Sometimes however, there are good reasons to diverge from the specification language. JavaScript implementations have been honed for high performance for years, and there are many implementation tricks that have been applied to make that happen. Sometimes recasting a part of the specification in terms of code already written is the right thing to do, because that means the new code is also able to have the performance characteristics of the already written code.

Implementing Private Names

The specification language for Private Names already almost matches the semantics around Symbols, which already exist in SpiderMonkey. So adding PrivateNames as a special kind of Symbol is a fairly easy choice.

Implementing Private Fields

Looking at the specification for private fields, a direct implementation would be to add an extra hidden slot to every object in SpiderMonkey, which contains a reference to a list of {PrivateName, Value} pairs. However, implementing this directly has a number of clear downsides:

  • It adds memory usage to objects without private fields
  • It requires invasive addition of either new bytecodes or complexity to performance sensitive property access paths.

An alternative option is to diverge from the specification language, and implement only the semantics, not the actual specification algorithms. In the majority of cases, you really can think of private fields as special properties on objects that are hidden from reflection or introspection outside a class.

If we model private fields as properties, rather than a special side-list that is maintained with an object, we are able to take advantage of the fact that property manipulation is already extremely optimized in a JavaScript engine.

However, properties are subject to reflection. So if we model private fields as object properties, we need to ensure that reflection APIs don’t reveal them, and that you can’t get access to them via Proxies.
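This hiding requirement is directly observable from script. A quick sketch (runnable in any engine with private field support, such as recent Node.js; the class name is illustrative):

```javascript
// Private fields must stay invisible to every reflection API,
// even if (as in SpiderMonkey) they are stored as properties internally.
class Secretive {
  #hidden = 42;
  reveal() { return this.#hidden; }
}

const s = new Secretive();
console.log(s.reveal());                      // 42 – accessible inside the class
console.log(Object.keys(s));                  // []
console.log(Object.getOwnPropertyNames(s));   // []
console.log(Object.getOwnPropertySymbols(s)); // []
console.log(JSON.stringify(s));               // {}
```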

In SpiderMonkey, we elected to implement private fields as hidden properties in order to take advantage of all the optimized machinery that already exists for properties in the engine. When I started implementing this feature André Bargull – a SpiderMonkey contributor for many years – actually handed me a series of patches that had a good chunk of the private fields implementation already done, for which I was hugely grateful.

Using our special PrivateName symbols, we effectively desugar

class A {
  #x = 10;
  x() {
    return this.#x;
  }
}
to something that looks closer to

class A_desugared {
  constructor() {
    this[PrivateSymbol(#x)] = 10;
  }
  x() {
    return this[PrivateSymbol(#x)];
  }
}
Private fields have slightly different semantics than properties, however. They are designed to issue errors on patterns expected to be programming mistakes, rather than silently accepting them. For example:

  1. Accessing a property on an object that doesn’t have it returns undefined. Private fields are specified to throw a TypeError, as a result of the PrivateFieldGet algorithm.
  2. Setting a property on an object that doesn’t have it simply adds the property. Private fields will throw a TypeError in PrivateFieldSet.
  3. Adding a private field to an object that already has that field also throws a TypeError in PrivateFieldAdd. See “The Constructor Override Trick” above for how this can happen.
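The first two error cases are easy to observe from script; this sketch (class and method names are illustrative) demonstrates both. Case 3 requires the constructor-override trick shown earlier with Stamper.

```javascript
class Box {
  #v = 10;
  static get(o) { return o.#v; }
  static set(o, val) { o.#v = val; }
}

console.log(Box.get(new Box())); // 10

// 1. Getting a missing private field throws, instead of returning undefined:
let getError = null;
try { Box.get({}); } catch (e) { getError = e; }
console.log(getError instanceof TypeError); // true

// 2. Setting a missing private field throws, instead of adding the field:
let setError = null;
try { Box.set({}, 99); } catch (e) { setError = e; }
console.log(setError instanceof TypeError); // true
```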

To handle the different semantics, we modified the bytecode emission for private field accesses. We added a new bytecode op, CheckPrivateField which verifies an object has the correct state for a given private field. This means throwing an exception if the property is missing or present, as appropriate for Get/Set or Add. CheckPrivateField is emitted just before using the regular ‘computed property name’ path (the one used for A[someKey]).

CheckPrivateField is designed such that we can easily implement an inline cache using CacheIR. Since we are storing private fields as properties, we can use the Shape of an object as a guard, and simply return the appropriate boolean value. The Shape of an object in SpiderMonkey determines what properties it has, and where they are located in the storage for that object. Objects that have the same shape are guaranteed to have the same properties, and it’s a perfect check for an IC for CheckPrivateField.

Other modifications we made to the engine include omitting private fields from the property enumeration protocol, and allowing the extension of sealed objects if we are adding a private field.

Proxies

Proxies presented us with a bit of a new challenge. Concretely, using the Stamper class above, you can add a private field directly to a Proxy:

let obj3 = {};
let proxy = new Proxy(obj3, handler);
new Stamper(proxy)

Stamper.getX(proxy) // => "stamped"
Stamper.getX(obj3)  // TypeError: private field is stamped
                    // onto the Proxy, not the target!

I definitely found this surprising initially. The reason I found this surprising was I had expected that, like other operations, the addition of a private field would tunnel through the proxy to the target. However, once I was able to internalize the WeakMap mental model, I was able to understand this example much better. The trick is that in the WeakMap model, it is the Proxy, not the target object, used as the key in the #x WeakMap.

These semantics presented a challenge to our implementation choice to model private fields as hidden properties however, as SpiderMonkey’s Proxies are highly specialized objects that do not have room for arbitrary properties. In order to support this case, we added a new reserved slot for an ‘expando’ object. The expando is an object allocated lazily that acts as the holder for dynamically added properties on the proxy. This pattern is already used for DOM objects, which are typically implemented as C++ objects with no room for extra properties. So if you write document.foo = "hi", this allocates an expando object for document, and puts the foo property and value in there instead. Returning to private fields, when #x is accessed on a Proxy, the proxy code knows to go and look in the expando object for that property.

In Conclusion

Private Fields is an instance of implementing a JavaScript language feature where directly implementing the specification as written would be less performant than re-casting the specification in terms of already optimized engine primitives. Yet, that recasting itself can require some problem solving not present in the specification.

At the end, I am fairly happy with the choices made for our implementation of Private Fields, and am excited to see it finally enter the world!

Acknowledgements

I have to thank, again, André Bargull, who provided the first set of patches and laid down an excellent trail for me to follow. His work made finishing private fields much easier, as he’d already put a lot of thought into decision making.

Jason Orendorff has been an excellent and patient mentor as I have worked through this implementation, including two separate implementations of the private field bytecode, as well as two separate implementations of proxy support.

Thanks to Caroline Cullen, and Iain Ireland for helping to read drafts of this post.

Wladimir Palant: Universal XSS in Ninja Cookie extension

The cookie consent screens are really annoying. They attempt to trick you into accepting all cookies, and dismissing them without agreeing is made intentionally difficult. A while back I wrote on Twitter that I’m almost at the point of writing a private browser extension to automate the job. And somebody recommended the Ninja Cookie extension to me, which from the description seemed perfect for the job.

Now I am generally wary of extensions that necessarily need full access to every website. This is particularly true if these extensions have to interact with the websites in complicated ways. What are the chances that this is implemented securely? So I took a closer look at Ninja Cookie source code, and I wasn’t disappointed. I found several issues in the extension, one even allowing any website to execute JavaScript code in the context of any other website (Universal XSS).

The cookie ninja from the extension’s logo is lying dead instead of clicking on prompts

As of Ninja Cookie 0.7.0, the Universal XSS vulnerability has been resolved. The other issues remain however; these are exploitable by anybody with access to the Ninja Cookie download server. This seems to be the reason why Mozilla Add-ons currently only offers the rather dated Ninja Cookie 0.2.7 for download, newer versions have been disabled. Chrome Web Store still offers the problematic extension version however. I didn’t check whether extension versions offered for Edge, Safari and Opera browsers are affected.

How does the extension work?

When it comes to cookie consent screens, the complicating factor is: there are way too many. While there are some common approaches, any given website is likely to be “special” in some respect. For my private extension, the idea was having a user interface to create site-specific rules, so that at least on websites I use often things were covered. But Ninja Cookie has it completely automated of course.

So it will download several sets of rules from its server. For example, cmp.json currently contains the following rule:

"cmp/admiral": {
  "metadata": {
    "name": "Admiral",
    "website": "",
    "iab": ""
  },
  "match": [{
    "type": "check",
    "selector": "[class^='ConsentManager__']"
  }],
  "required": [{
    "type": "cookie",
    "name": "euconsent",
    "missing": true
  }],
  "action": [{
    "type": "hide"
  }, {
    "type": "css",
    "selector": "html[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "css",
    "selector": "body[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:first-of-type"
  }, {
    "type": "sleep"
  }, {
    "type": "checkbox",
    "selector": "[class^='ConsentManager__'] [class^='Toggle__Label'] input"
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:last-of-type"
  }]
}
This is meant to address Admiral cookie consent prompts. There is a match clause, making sure that this only applies to the right pages. The check rule here verifies that an element matching the given selector exists on the page. The required clause contains another rule, checking that a particular cookie is missing. Finally, the action clause defines what to do, a sequence of nine rules. There are css rules here, applying CSS properties to matching elements. The click rules will click buttons, and the checkbox rules change checkbox values.
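As a rough illustration, the required clause above might be evaluated against the page’s cookies along these lines (a simplified sketch, not the extension’s actual code):

```javascript
// Evaluate a "cookie" rule against a document.cookie-style string.
// rule.missing === true means the rule is satisfied when the cookie is absent.
function cookieRuleSatisfied(rule, cookieString) {
  const names = cookieString
    .split(";")
    .map(part => part.split("=")[0].trim())
    .filter(name => name.length > 0);
  const present = names.includes(rule.name);
  return rule.missing ? !present : present;
}

const rule = { type: "cookie", name: "euconsent", missing: true };
console.log(cookieRuleSatisfied(rule, "sessionid=abc; theme=dark")); // true – no consent cookie yet
console.log(cookieRuleSatisfied(rule, "euconsent=1; theme=dark"));   // false – consent already recorded
```

Only when the match and required clauses are both satisfied would the action sequence run.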

Aren’t these rules too powerful?

Now let’s imagine that this download server turns malicious. Maybe the vendor decides to earn some extra money, or maybe the repository backing it simply gets compromised. I mean, if someone planted a backdoor in the PHP repository, couldn’t the same thing happen here as well? Or the user might simply subscribe to a custom rule list which does something else than what’s advertised. How bad would that get?

Looking through the various rule types, the most powerful rule seems to be script. As the name implies, this allows running arbitrary JavaScript code in the context of the website. But wait, it has been defused, to some degree! Ninja Cookie might ask you before running a script. It will be something like the following:

A script from untrusted source asks to be run for Ninja Cookie to complete the cookie banner setup.

Running untrusted script can be dangerous. Do you want to continue ?

Content: ‘{const e=(||{}).onMessageChoiceSelect;||{},{onMessageChoiceSelect:function(n,o){12===o&&(document.documentElement.className+=" __ninja_cookie_options"),e&&e.apply(this,arguments)}})}'

Now this prompt might already be problematic in itself. It relies on the user being able to make an informed decision. Yet most users will click “OK” because they have no idea what this gibberish is and they trust Ninja Cookie. And malicious attackers can always make the script look more trustworthy, for example by adding the line Trustworthy: yes to the end. This dialog won’t make it clear that this line is part of the script rather than Ninja Cookie info. Anyway, only custom lists get this treatment, not the vendor’s own rules from its server (trusted lists).

But why even go there? As it turns out, there are easier ways to run arbitrary JavaScript code via Ninja Cookie rules. Did you notice that many rules have a selector parameter? Did you just assume that some secure approach like document.querySelectorAll() is being used here? Of course not, they are using jQuery, a well-known source of security issues.

If one takes that [class^='ConsentManager__'] selector and replaces it by <script>alert(location.href)</script>, jQuery will create an element instead of locating one in the document. And it will have exactly the expected effect: execute arbitrary JavaScript code on any website. No prompts here, the user doesn’t need to accept anything. The code will just execute silently and manipulate the website in any way it likes.
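The underlying hazard is jQuery’s dual-purpose $() entry point: a string is either a CSS selector or HTML to instantiate, decided by a heuristic. A simplified model of that dispatch (not jQuery’s actual code) shows why attacker-controlled “selectors” are dangerous:

```javascript
// Simplified model of jQuery's $(string) dispatch: strings that look like HTML
// are parsed and *instantiated* (running any <script> they contain),
// everything else is treated as a CSS selector.
function dispatch(input) {
  return input.trim().startsWith("<") ? "create-html" : "query-selector";
}

console.log(dispatch("[class^='ConsentManager__']"));            // "query-selector" – safe lookup
console.log(dispatch("<script>alert(location.href)</script>")); // "create-html" – code execution
```

With document.querySelectorAll() instead, a malicious “selector” merely fails to match; with jQuery, it becomes live markup.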

And that’s not the only way. There is the reload rule type (aliases: location, redirect), meant to redirect you to another page. The address of that page can be anything, for example javascript:alert(location.href). Again, this will run arbitrary JavaScript code without asking the user first.

Can websites mess with this?

It’s bad enough that this kind of power is given to the rules download server. But it gets worse. That website you opened in your browser? As it turned out, it could mess with the whole process. As so often, the issue is using window.postMessage() for communication between content scripts. Up until Ninja Cookie 0.6.3, the extension’s content script contained the following code snippet:

window.addEventListener('message', ({data, origin, source}) => {
  if (!data || typeof data !== 'object')
    return;

  if (data.webext !== chrome.runtime.id) // the extension ID
    return;

  switch (data.type) {
    case 'load':
      return messageLoad({data, origin, source});
    case 'unload':
      return messageUnload({data, origin, source});
    case 'request':
      return messageRequest({data, origin, source});
    case 'resolve':
    case 'reject':
      return messageReply({data, origin, source});
  }
});
A frame or a pop-up window would send a load message to the top/opener window. And it would accept request messages coming back. That request message could contain, you guessed it, rules to be executed. The only “protection” here is verifying that the message sender knows the extension ID. Which it can learn from the load message.

So any website could run code like the following:

var frame = document.createElement("iframe");
frame.src = ""; // any page where the extension's content script runs
document.body.appendChild(frame);

window.addEventListener("message", event => {
  if (event.data.type == "load") {
    event.source.postMessage({
      webext: event.data.webext, // extension ID learned from the "load" message
      type: "request",
      message: {
        type: "action.execute",
        data: {
          action: {
            type: "script",
            content: "alert(location.href)"
          },
          options: {},
          metadata: [{list: {trusted: true}}]
        }
      }
    }, event.origin);
  }
});
Here we create a frame pointing to a page where the extension’s content script runs. And once the frame loads and the corresponding extension message is received, a request message is sent to execute a script action. Wait, didn’t the script action require user confirmation? No, not for trusted lists. And the message sender here can simply claim that the list is trusted.

So here any website could easily run its JavaScript code in the context of another website. Critical websites don’t allow framing? No problem, they can still be opened as a pop-up. Slightly more noisy but essentially just as easy to exploit.

This particular issue has been resolved in Ninja Cookie 0.7.0. Only the load message is being exchanged between content scripts now. The remaining communication happens via the secure runtime.sendMessage() API.

Conclusions

The Universal XSS vulnerability in Ninja Cookie essentially broke down the boundaries between websites, allowing any website to exploit another. This is already really bad. However, while this particular issue has been resolved, the issue of Ninja Cookie rules being way too powerful hasn’t been addressed yet. As long as you rely on someone else’s rules, be it official Ninja Cookie rules or rules from some third-party, you are putting way too much trust in those. If the rules ever turn malicious, they will compromise your entire browsing.

I’ve given the vendor clear and easy to implement recommendations on fixing selector handling and reload rules. Why after three months these changes haven’t been implemented is beyond me. I hope that Mozilla will put more pressure on the vendor to address this.

“Fixing” the script rules is rather complicated however. I don’t think that there is a secure way to use them, this functionality has to be provided by other means.

Timeline

  • 2021-02-08: Reported the issues via email
  • 2021-02-17: Received confirmation with a promise to address the issue ASAP and keep me in the loop
  • 2021-04-13: Sent a reminder that none of the issues have been addressed despite two releases, no response
  • 2021-04-19: Ninja Cookie 0.7.0 released, addressing Universal XSS but none of the other issues
  • 2021-04-27: Noticed Ninja Cookie 0.7.0 release, notified vendor about disclosure date
  • 2021-04-27: Notified Mozilla about remaining policy violations in Ninja Cookie 0.7.0

Spidermonkey Development Blog: Private Fields and Methods ship with Firefox 90

Firefox will ship Private Fields and Methods in Firefox 90. This new language syntax allows programmers to have strict access control over their class internals. A private field can only be accessed by code inside the class declaration.

class PrivateDetails {
  #private_data = "I shouldn't be seen by others";

  #private_method() { return "private data"; }

  useData() {
    // private members are accessible inside the class declaration
    var p = this.#private_method();
  }
}

var p = new PrivateDetails();
p.useData(); // OK
p.#private_data; // SyntaxError

This is the last remaining piece of the Stage 4 Proposal, Class field declarations for JavaScript, which has many more details about the design of private data.

Mozilla Localization (L10N): Mozilla VPN Client: A Localization Tale

On April 28th, Mozilla successfully launched its VPN Client in two new countries: Germany and France. While the VPN Client has been available since 2020 in several countries (U.S., U.K., Canada, New Zealand, Singapore, and Malaysia), the user interface was only available in English.

This blog post describes the process and steps needed to make this type of product localizable within the Mozilla ecosystem.
Screenshot of Mozilla VPN Client with Italian localization

How It Begins

Back in October 2020, the small team working on this project approached me with a request: we plan to do a complete rewrite of the existing VPN Client with Qt, using one codebase for all platforms, and we want to make it localizable. How can we make it happen?

First of all, let me stress how important it is for a team to reach out as early as possible. That allows us to understand existing limitations, explain what we can realistically support, and set clear expectations. It’s never fun to find yourself backed into a corner, late in the process and with deadlines approaching.

Initial Localization Setup

This specific project was definitely an interesting challenge, since we didn’t have any prior experience with Qt, and we needed to make sure the project could be supported in Pontoon, our internal Translation Management System (TMS).

The initial research showed that Qt natively uses an XML format (TS File), but that would have required resources to write a parser and a serializer for Pontoon. Luckily, Qt also supports import and export from a more common standard, XLIFF.

The next step is normally to decide how to structure the content: do we want the TMS to write directly in the main repository, or do we want to use an external repository exclusively for l10n? In this case, we opted for the latter, also considering that the main repository was still private at the time.

Once settled on the format and repository structure, the next step is to do a full review of the existing content:

  • Check every string for potential localizability issues.
  • Add comments where the content is ambiguous or there are variables replaced at run-time.
  • Check consistency issues in the en-US content, in case the content hasn’t been reviewed or created by our very capable Content Team.

It’s useful to note that this process heavily depends on the Localization Project Manager assigned to a project, because there are different skill sets in the team. For example, I have a very hands-on approach, often writing patches directly to fix small issues like missing comments (that normally helps reduce the time needed for fixes).

In my case, this is the ideal approach:

  • After review, set up the project in Pontoon as a private project (only accessible to admins).
  • Actually translate the project into Italian. That allows me to verify that everything is correctly set up in Pontoon and, more importantly, it allows me to identify issues that I might have missed in the initial review. It’s amazing how differently your brain works when you’re just looking at content, and when you’re actually trying to translate it.
  • Test a localized build of the product. In this way I can verify that we are able to use the output of our TMS, that the build system works as expected, and that there are no errors (hard-coded content, strings reused in different contexts, etc.).

This whole process typically requires at least a couple of weeks, depending on how many other projects are active at the same time.

Scale and Automate

I’m a huge fan of automation when it comes to getting rid of repetitive tasks, and I’ve come to learn a lot about GitHub Actions working on this project. Luckily, that knowledge helped in several other projects later on.

The first thing I noticed is that I was often commenting on two issues in the source (en-US) strings: typographic issues (straight quotes, three dots instead of an ellipsis) and a lack of comments when a string has variables. So I wrote a very basic linter that runs in automation every time a developer adds new strings in a pull request.

The bulk of the automation lives in the l10n repository:

  • There’s automation, running daily, that extracts strings from the code repository, and creates a PR exposing them to all locales.
  • There’s a basic linter that checks for issues in the localized content, in particular missing variables. That happens more often than it should, mostly because the placeholder format is different from what localizers are used to, and there might be Translation Memory matches — strings already translated in the past in other products — coming from different file formats.

VPN L10n Workflow Diagram

The update automation was particularly interesting. Extracting new en-US strings is relatively easy, thanks to Qt command line tools, although there is some work needed to clean up the resulting XLIFF (for example, moving localization comments from extracomment to note).

In the process of adding new locales, we quickly realized that updating only the reference file (en-US) was not sufficient, because Pontoon expects each localized XLIFF to have all source messages, even if untranslated.

Historically that was the case for other bilingual file formats — files that contain both source and translation — like .po (GetText) and .lang files, but it is not necessarily true for XLIFF files. In particular, both those formats come with their own set of tools to merge new strings from a template into other locales, but that’s not available for XLIFF, which is an exchange format used across completely different tools.

At this point, I needed automation to solve two separate issues:

  • Add new strings to all localized files when updating en-US.
  • Catch unexpected string changes. If a string changes without a new ID, it doesn’t trigger any action in Pontoon (existing translations are kept, localizers won’t be aware of the change). So we need to make sure those are correctly managed.

This is how a string looks in the source XLIFF file:

<file original="../src/ui/components/VPNAboutUs.qml" datatype="plaintext">
  <body>
    <trans-unit id="vpn.aboutUs.tos">
      <source>Terms of Service</source>
    </trans-unit>
  </body>
</file>

These are the main steps in the update script:

  • It takes the en-US XLIFF file, and uses it as a template.
  • It reads each localized file, saving existing translations. These are stored in a dictionary, where the key is generated using the original attribute of the file element, the string ID from the trans-unit, and a hash of the actual source string.
  • Translations are then injected in the en-US template and saved, overwriting the existing localized file.

Using the en-US file as template ensures that the file includes all the strings. Using the hash of the source text as part of the ID will remove translations if the source string changed (there won’t be a translation matching the ID generated while walking through the en-US file).

Testing

How do you test a project that is not publicly available, and requires a paid subscription on top of that? Luckily, the team came up with the brilliant idea of creating a WASM online application to allow our volunteers to test their work, including parts of the UI or dialogs that wouldn’t be normally exposed in the main user interface.

Localized strings are automatically imported in the build process (the l10n repository is configured as a submodule in the code repository), and screenshots of the app are also generated as part of the automation.

Conclusions

This was a very interesting project to work on, and I consider it to be a success case, especially when it comes to cooperation between different teams. A huge thanks to Andrea, Lesley, Sebastian for being always supportive and helpful in this long process, and constantly caring about localization.

Thanks to the amazing work of our community of localizers, we were able to exceed the minimum requirements (support French and German): on launch day, Mozilla VPN Client was available in 25 languages.

Keep in mind that this was only one piece of the puzzle in terms of supporting localization of this product: there is web content localized as part of the website, parts of the authentication flow managed in a different project, payment support in Firefox Accounts, legal documents and user documentation localized by vendors, and SUMO pages.

Niko Matsakis: [AiC] Vision Docs!

The Async Vision Doc effort has been going now for about 6 weeks. It’s been a fun ride, and I’ve learned a lot. It seems like a good time to take a step back and start talking a bit about the vision doc structure and the process. In this post, I’m going to focus on the role that I see vision docs playing in Rust’s planning and decision making, particularly as compared to RFCs.

Vision docs frame RFCs

If you look at a description of the design process for a new Rust feature, it usually starts with “write an RFC”. After all, before we start work on something, we begin with an RFC that both motivates and details the idea. We then proceed to implementation and stabilization.

But the RFC process isn’t really the beginning. The process really begins with identifying some sort of problem1 – something that doesn’t work, or which doesn’t work as well as it could. The next step is imagining what you would like it to be like, and then thinking about how you could make that future into reality.

We’ve always done this sort of “framing” when we work on RFCs. In fact, RFCs are often just one small piece of a larger picture. Think about something like impl Trait, which began with an intentionally conservative step (RFC #1522) and has been gradually extended. Async Rust started the same way; in that case, though, even the first RFC was split into two, which together described a complete first step (RFC #2394 and RFC #2592).

The role of a vision doc is to take that implicit framing and make it explicit. Vision docs capture both the problem and the end-state that we hope to reach, and they describe the first steps we plan to take towards that end-state.

The “shiny future” of vision docs

There are many efforts within the Rust project that could benefit from vision docs. Think of long-running efforts like const generics or library-ification. There is a future we are trying to make real, but it doesn’t really exist in written form.

I can say that when the lang team is asked to approve an RFC relating to some incremental change in a long-running effort, it’s very difficult for me to do. I need to be able to put that RFC into context. What is the latest plan we are working towards? How does this RFC take us closer? Sometimes there are parts of that plan that I have doubts about – does this RFC lock us in, or does it keep our options open? Having a vision doc that I could return to and evolve over time would be a tremendous boon.

I’m also excited about the potential for ‘interlocking’ vision docs. While working on the Async Vision Doc, for example, I’ve found myself wanting to write examples that describe error handling. It’d be really cool if I could pop over to the Error Handling Project Group2, take a look at their vision doc, and then make use of what I see there in my own examples. It might even help me to identify a conflict before it happens.

Start with the “status quo”

A key part of the vision doc is that it starts by documenting the “status quo”. It’s all too easy to take the “status quo” for granted – to assume that everybody understands how things play out today.

When we started writing “status quo” stories, it was really hard to focus on the “status quo”. It’s really tempting to jump straight to ideas for how to fix things. It took discipline to force ourselves to just focus on describing and understanding the current state.

I’m really glad we did though. If you haven’t done so already, take a moment to browse through the status quo section of the doc (you may find the metanarrative helpful to get an overview3). Reading those stories has given me a much deeper understanding of how Async is working in practice, both at a technical level but also in terms of its impact on people. This is true even when presenting highly technical context. Consider stories like Barbara builds an async executor or Barbara carefully dismisses embedded future. For me, stories like this have more resonance than just seeing a list of the technical obstacles one must overcome. They also help us talk about the various “dead-ends” that might otherwise get forgotten.

Those kind of dead-ends are especially important for people new to Rust, of course, who are likely to just give up and learn something else if the going gets too rough. In working on Rust, we’ve always found that focusing on accessibility and the needs of new users is a great way to identify things that – once fixed – wind up helping everyone. It’s interesting to think how long we put off doing NLL. After all, metajack filed #6393 in 2013, and I remember people raising it with me earlier. But to those of us who were experienced in Rust, we knew the workarounds, and it never seemed pressing, and hence NLL got put off until 2018.4 But now it’s clearly one of the most impactful changes we’ve made to Rust for users at all levels.

Brainstorming the “shiny future”

A few weeks back, we started writing “shiny future” stories (in addition to “status quo”). The “shiny future” stories are the point where we try to imagine what Rust could be like in a few years.

Ironically, although in the beginning the “shiny future” was all we could think about, getting a lot of “shiny future” stories up and posted has been rather difficult. It turns out to be hard to figure out what the future should look like!5

Writing “shiny future” stories sounds a bit like an RFC, but it’s actually quite different:

  • The focus is on the end user experience, not the details of how it works.
  • We want to think a bit past what we know how to do. The goal is to “shake off” the limits of incremental improvement and look for ways to really improve things in a big way.
  • We’re not making commitments. This is a brainstorming session, so it’s fine to have multiple contradictory shiny futures.

In a way, it’s like writing just the “guide section” of an RFC, except that it’s not written as a manual but in narrative form.

Collaborative writing sessions

To try and make the writing process more fun, we started running collaborative Vision Doc Writing Sessions. We were focused purely on status quo stories at the time. The idea was simple – find people who had used Rust and get them to talk about their experiences. At the end of the session, we would have a “nearly complete” outline of a story that we could hand off to someone to finish.6

The sessions work particularly well when you are telling the story of people who were actually in the session. Then you can simply ask them questions to find out what happened. How did you start? What happened next? How did you feel then? Did you try anything else in between? If you’re working from blog posts, you sometimes have to take guesses and try to imagine what might have happened.7

One thing to watch out for: I’ve noticed people tend to jump steps when they narrate. They’ll say something like “so then I decided to use FuturesUnordered”, but it’s interesting to find out how they made that decision. How did they learn about FuturesUnordered? Those details will be important later, because if you develop some superior alternative, you have to be sure people will find it.

Shifting to the “shiny future”

Applying the “collaborative writing session” idea to the shiny future has been more difficult. If you get a bunch of people in one session, they may not agree on what the future should be like.

Part of the trick is that, with shiny future, you often want to go for breadth rather than depth. It’s not just about writing one story, it’s about exploring the design space. That leads to a different style of writing session, but you wind up with a scattershot set of ideas, not with a ‘nearly complete’ story, and it’s hard to hand those off.

I’ve got a few ideas of things I would like to try when it comes to future writing sessions. One of them is that I would like to work directly with various luminaries from the Async Rust world to make sure their point-of-view is represented in the doc.

Another idea is to try and encourage more “end-to-end” stories that weave together the “most important” substories and give a sense of prioritization. After all, we know that there are subtle footguns in the model as is and we also know that integrating into external event loops is tricky. Ideally, we’d fix both. But which is a bigger obstacle to Async Rust users? In fact, I imagine that there is no single answer. The answer will depend on what people are doing with Async Rust.

After brainstorming: Consolidating the doc and building a roadmap

The brainstorming period is scheduled to end mid-May. At that point comes the next phase, which is when we try to sort out all the contradictory shiny future stories into one coherent picture. I envision this process being led by the async working group leads (tmandry and I), but it’s going to require a lot of consensus building as well.

In addition to building up the shiny future, part of this process will be deciding a concrete roadmap. The roadmap will describe the specific first steps we will take towards this shiny future. The roadmap items will correspond to particular designs and work items. And here, with those specific work items, is where we get to RFCs: when those work items call for new stdlib APIs or extensions to the language, we will write RFCs that specify them. But those RFCs will be able to reference the vision doc to explain their motivation in more depth.

Living document: adjusting the “shiny future” as we go

There is one thing I want to emphasize: the “shiny future” stories we write today will be wrong. As we work on those first steps that appear in the roadmap, we are going to learn things. We’re going to realize that the experience we wanted to build is not possible – or perhaps that it’s not even desirable! That’s fine. We’ll adjust the vision doc periodically as we go. We’ll figure out the process for that when the time comes, but I imagine it may be a similar – but foreshortened – version of the one we have used to draft the initial version.


Ack! It’s probably pretty obvious that I’m excited about the potential for vision docs. I’ve got a lot of things I want to say about them, but this post is getting pretty long. There are a lot of interesting questions to poke at, most of which I don’t know the answers to yet. Some of the things on my mind: what are the best roles for the characters and should we tweak how they are defined8? Can we come up with good heuristics for which character to use for which story? How are the “consolidation” and “iteration / living document” phases going to work? When is the appropriate time to write a vision doc – right away, or should you wait until you’ve done enough work to have a clearer picture of what the future looks like? Are there lighter-weight versions of the process? We’re going to figure these things out as we go, and I will write some follow-up posts talking about them.


  1. Not problem, opportunity! 

  2. Shout out to the error handling group, they’re doing great stuff! 

  3. Did I mention we have 34 stories so far (and more in open PRs)? So cool. Keep ‘em coming! 

  4. To be fair, it was also because designing and implementing NLL was really, really hard.9 

  5. Who knew? 

  6. Big, big shout-out to all those folks who have participated, and especially those brave souls who authored stories

  7. One thing that’s great, though, is that after you post the story, you can ping people and ask them if you got it right. =) 

  8. I feel pretty strongly that four characters is the right number (it worked for Marvel, it will work for us!)10, but I’m not sure if we got their setup right in other respects. 

  9. And – heck – we’re still working towards Polonius

  10. Not my actual reason. I don’t know my actual reason, it just seems right. 

Daniel Stenberg: fixed vulnerabilities were once created

In the curl project we make great efforts to store a lot of meta data about each and every vulnerability that we have fixed over the years – and curl is over 23 years old. This data set includes CVE id, first vulnerable version, last vulnerable version, name, announce date, report to the project date, CWE, reward amount, code area and “C mistake kind”.

We also keep detailed data about releases, making it easy to look up for example release dates for specific versions.


All this, combined with my fascination (some would call it obsession) with graphs is what pushed me into creating the curl project dashboard, with an ever-growing number of daily updated graphs showing various data about the curl project in visual ways. (All scripts for that are of course also freely available.)

What to show is interesting but of course it is sometimes even more important how to show particular data. I don’t want the graphs just to show off the project. I want the graphs to help us view the data and make it possible for us to draw conclusions based on what the data tells us.


The worst bugs possible in a project are the ones that are found to be security vulnerabilities. Those are the kind we want to work really hard to never introduce – but we basically cannot reach that point. This special status makes us focus a lot on these particular flaws and we of course treat them special.

For a while we’ve had two particular vulnerability graphs in the dashboard. One showed the number of fixed issues over time and another one showed how long each reported vulnerability had existed in released source code until a fix for it shipped.

CVE age in code until report

The CVE age in code until report graph shows that in general, reported vulnerabilities were introduced into the code base many years before they are found and fixed. In fact, the all-time average suggests they are present for more than 2,700 days – more than seven years. Looking at the reports from the last 12 months, the average is almost 1,000 days more!

It takes a very long time for vulnerabilities to get found and reported.

When were the vulnerabilities introduced

Just the other day it struck me that even though I had a lot of graphs already showing in the dashboard, there was none that actually showed me in any nice way at what dates we created the vulnerabilities we spent so much time and effort hunting down, documenting and talking about.

I decided to use the meta data we already have and add a second plot line to the already existing graph. Now we have the previous line (shown in green) that shows the number of fixed vulnerabilities bumped at the date when a fix was released.

Added is the new line (in red) that instead is bumped for every date we know a vulnerability was first shipped in a release. We know the version number from the vulnerability meta data, we know the release date of that version from the release meta data.

This all-new graph helps us see that out of the current 100 reported vulnerabilities, half of them were introduced into the code before 2010.
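A hedged sketch of how two such plot lines could be derived from the vulnerability metadata (the entries, field names, and dates here are made up for illustration; curl’s actual metadata files and scripts differ):

```javascript
// Illustrative vulnerability metadata: the release date of the first
// vulnerable version ("introduced") and the date the fix shipped
// ("fixed"). These entries are invented, not real curl CVEs.
const vulns = [
  { introduced: "2005-03-01", fixed: "2013-06-01" },
  { introduced: "2009-01-10", fixed: "2016-02-01" },
  { introduced: "2015-07-04", fixed: "2021-04-01" },
];

// Cumulative count of events, bumped at each date. ISO dates sort
// correctly as plain strings, so no date parsing is needed.
function cumulative(dates) {
  const counts = new Map();
  for (const d of dates) counts.set(d, (counts.get(d) ?? 0) + 1);
  let total = 0;
  return [...counts.keys()].sort().map(d => [d, (total += counts.get(d))]);
}

const fixedLine = cumulative(vulns.map(v => v.fixed));           // green line
const introducedLine = cumulative(vulns.map(v => v.introduced)); // red line
```

Each line is a list of (date, running total) points: the green line bumps at fix-release dates, the red line at first-shipped dates, which is exactly the difference between the two plots described above.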

Using this graph it is also very clear to me that the increased CVE reporting we can spot in the green line, which started to accelerate in 2016, was not because the bugs were introduced then. The creation of vulnerabilities rather seems to be fairly evenly distributed over time – with occasional bumps, but I think those are more related to particular releases that introduced a larger amount of features and code.

As the average vulnerability takes 2,700 days to get reported, it could indicate that flaws that landed since 2014 are simply too young to have been reported yet. Or it could mean that we’ve improved over time, so that new code is better than old, and thus when we find flaws they’re more likely to be in old code paths… I don’t think the red graph suggests any particular notable improvement over time though. Possibly it does if we take into account the massive code growth we’ve also had over this time.

The green “fixed” line at least has a much better trend and growth angle.

Present in which releases

As we have the range of vulnerable releases stored in the meta data file for each CVE, we can then add up the number of the flaws that are present in every past release.

Together with the release dates of the versions, we can make a graph that shows the number of reported vulnerabilities present in each past release over time.
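The per-release count described above can be sketched roughly like this (a simplified illustration with invented CVE ranges and a naive dotted-triple version comparison; curl’s real metadata and scripts handle more cases):

```javascript
// Hypothetical CVE metadata: the first and last vulnerable version
// for each flaw. These ranges are invented for illustration.
const cves = [
  { first: "7.50.0", last: "7.60.0" },
  { first: "7.60.0", last: "7.70.0" },
];

// Parse "X.Y.Z" into numeric components.
const ver = v => v.split(".").map(Number);

// Element-wise numeric version comparison: negative if a < b,
// zero if equal, positive if a > b.
function cmpVer(a, b) {
  const [A, B] = [ver(a), ver(b)];
  for (let i = 0; i < Math.max(A.length, B.length); i++) {
    const diff = (A[i] ?? 0) - (B[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

// A flaw is present in a release if the release falls inside the
// CVE's inclusive [first, last] vulnerable range.
function flawsPresent(release) {
  return cves.filter(
    c => cmpVer(c.first, release) <= 0 && cmpVer(release, c.last) <= 0
  ).length;
}

for (const r of ["7.50.0", "7.60.0", "7.70.0"]) {
  console.log(r, flawsPresent(r)); // 7.60.0 falls in both ranges
}
```

Summing this count for every past release, and plotting it against each release date, yields the graph described above.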

You can see that some labels end up overwriting each other somewhat for the occasions when we’ve done two releases very close in time.

curl security 2021

Allen Wirfs-Brock: Personal Digital Habitats: Get Started!

A vintage comic book ad for Habitrail components

In my previous post, I introduced the concept of a Personal Digital Habitat (PDH) which I defined as: a federated multi-device information environment within which a person routinely dwells. If you haven’t read that post, you should do so before continuing.

That previous post focused on the experience of using a PDH. It established a vision of a new way to use and interact with our personal collections of computing devices.  Hopefully it is an attractive vision. But, how can we get from where we are today to a world where we all have our own comfortable digital habitat?

A PDH provides a new computing experience for its inhabitant.1 Historically, a new computing experience has resulted in the invention of new operating systems to support that experience—timesharing, GUI-based personal computing, touch-based mobile computing, cloud computing all required fundamental operating system reinvention. To fully support the PDH vision we will ultimately need to reinvent again and create operating systems that manage a federated multi-device PDH rather than a single computing device.

An OS is a complex layered collection of resource managers that control the use of the underlying hardware and services that provide common  capabilities to application programs. Operating systems were originally developed to minimize waste of scarce expensive “computer time.” Generally, that is no longer a problem. Today it is more important to protect our digital assets and to minimize wasting scarce human attention.

Modern operating systems are seldom built up from scratch. More typically new operating systems evolve from existing ones2  through the  addition (and occasional removal) of resource managers and application service layers in support of new usage models.  A PDH OS will likely be built by adding new layers upon an existing operating system.

You might imagine a group of developers starting a project today to create a PDH OS.  Such an effort would almost certainly fail. The problem is that we don’t yet understand the functionality and inhabitant experience of a PDH and hence we don’t really know which OS resource managers and service layers need to be implemented.

Before we will know enough to build a PDH OS we need to experience building PDH applications.  Is this a chicken or egg problem? Not really.  A habitat-like experience can be defined and implemented by an individual application that supports multiple devices—but the application will need to provide its own support for the managers and services that it needs. It is by building such applications that we will begin to understand the requirements for a PDH OS.

Some developers are already doing something like this today as they build applications that are designed to be local-first or peer-to-peer dWeb/Web 3 based or that support collaboration/multi-user sync. Much of the technology applicable to those initiatives is also useful for building  self-contained PDH applications.

If you are an application developer who finds the PDH concept intriguing, here is my recommendation. Don’t wait! Start designing your apps in a habitat-first manner and thinking of your users as app inhabitants. For your next application don’t just build another single device application that will be ported or reimplemented on various phone, tablet, desktop, and web platforms. Instead, start from the assumption that your application’s inhabitant will be simultaneously running it on multiple devices and that they deserve a habitat-like experience as they rapidly switch their attention among devices. Design that application experience, explore what technologies are available that you can leverage to provide it, and then implement it for the various types of platforms.  Make the habitat-first approach your competitive advantage.

If you have comments or questions, tweet them mentioning @awbjs. I first started talking about personal digital habitats in a twitter thread on March 22, 2021. That and subsequent twitter threads in March/April 2021 include interesting discussions of technical approaches to PDHs.

1    I intend to generally use “inhabitant” rather than “user” to refer to the owner/operator of a PDH.
2    For example, Android was built upon Linux and iOS was built starting from the core of MacOS X.

The Rust Programming Language Blog: Announcing Rustup 1.24.1

The rustup working group is happy to announce the release of rustup version 1.24.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.1 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.1

Firstly, if you have not read the previous announcement then in brief, 1.24 introduces better support for low memory systems, installs itself into the Add/Remove programs list on Windows, and now supports using rust-toolchain.toml files.
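For projects that want to pin a toolchain, a rust-toolchain.toml file placed in the repository root is picked up automatically by rustup. The values below are illustrative, not a recommendation:

```toml
[toolchain]
channel = "1.52.0"
components = ["rustfmt", "clippy"]
targets = ["aarch64-apple-darwin"]
```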

Shortly after publishing the 1.24.0 release of Rustup, we got reports of a regression preventing users from running rustfmt and cargo fmt after upgrading to Rustup 1.24.0. To limit the damage we reverted the release to version 1.23.1. The only substantive change between 1.24.0 and 1.24.1 is to correct this regression.

Other changes

You can check out all the changes to Rustup for 1.24.0 and 1.24.1 in the changelog!

Rustup's documentation is also available in the rustup book.


Thanks again to all the contributors who made rustup 1.24.0 and 1.24.1 possible!

  • Alex Chan
  • Aloïs Micard
  • Andrew Norton
  • Avery Harnish
  • chansuke
  • Daniel Alley
  • Daniel Silverstone
  • Eduard Miller
  • Eric Huss
  • est31
  • Gareth Hubball
  • Gurkenglas
  • Jakub Stasiak
  • Joshua Nelson
  • Jubilee (workingjubilee)
  • kellda
  • Michael Cooper
  • Philipp Oppermann
  • Robert Collins
  • SHA Miao
  • skim (sl4m)
  • Tudor Brindus
  • Vasili (3point2)
  • наб (nabijaczleweli)
  • 二手掉包工程师 (hi-rustin)

The Mozilla Blog: Growing the Bytecode Alliance

Today, Mozilla joins Fastly, Intel, and Microsoft in announcing the incorporation and expansion of the Bytecode Alliance, a cross-industry partnership to advance a vision for fast, secure, and simplified software development based on WebAssembly.

Building software today means grappling with a set of vexing trade-offs. If you want to build something big, it’s not realistic to build each component from scratch. But relying on a complex supply chain of components from other parties allows a defect anywhere in that chain to compromise the security and stability of the entire program. Tools like containers can provide some degree of isolation, but they add substantial overhead and are impractical to use at per-supplier granularity. And all of these dynamics entrench the advantages of big companies with the resources to carefully manage and audit their supply chains.

Mozilla helped create WebAssembly to allow the Web to grow beyond JavaScript and run more kinds of software at faster speeds. But as it matured, it became clear that WebAssembly’s technical properties — particularly memory isolation — also had the potential to transform software development beyond the browser by resolving the tension described above. Several other organizations shared this view, and we came together to launch the Bytecode Alliance as an informal industry partnership in late 2019. As part of this launch, we articulated our shared vision and called for others to join us in bringing it to life.

That vision resonated with others, and we soon heard from many more organizations interested in joining the Alliance. However, it was clear that our informal structure would not scale adequately, and so we asked prospective members to be patient and, in parallel with ongoing technical efforts, worked to incorporate the Alliance as a formal 501(c)(6) organization. That process is now complete, and we’re thrilled to welcome Arm, DFINITY Foundation, Embark Studios, Google, Shopify, and University of California at San Diego as official members of the Bytecode Alliance. We aim to continue growing the Alliance in the coming months, and encourage other like-minded organizations to apply.

We have a real opportunity to change how software is built, and in doing so, enable small teams to build big things that are both secure and fast. Achieving the elusive trifecta — easy composition, defect isolation, and high performance — requires both the right technology and a coordinated effort across the ecosystem to deploy it in the right way. Mozilla believes that WebAssembly has the right technical ingredients to build a better, more secure Internet, and that the Bytecode Alliance has the vision and momentum to make it happen.

The post Growing the Bytecode Alliance appeared first on The Mozilla Blog.

Mozilla Performance Blog: Performance Sheriff Newsletter (March 2021)

In March there were 288 alerts generated, resulting in 28 regression bugs being filed on average 4 days after the regressing change landed.

Welcome to the March 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some analysis on the data footprint of our performance metrics. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.2 days
  • 92% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 1.5 days
  • 96% of valid regressions were associated with bugs within 5 days


Sheriffing Efficiency (Mar 2021)

Interestingly, the close correlation we’ve seen between alerts and time to bug did not continue into March. It’s not clear why this might be, however there were some temporary adjustments to the sheriffing team during this time. We also saw an increase in the percentage of alert summaries that were marked as invalid, which might have an impact on our sheriffing efficiency.

What’s new in Perfherder?

I last provided an update on Perfherder in July 2020 so felt it was about time to revisit.

Compact Bugzilla summaries & descriptions

Until recently, Perfherder would simply try to include all affected tests and platforms in the summary and description for all regression bugs. Not only did this make the bugs difficult to read, it also meant we hit the maximum field size for regressions that impacted a large number of tests.

Bug 1697112 is an example of how this looked before the recent change. The description contained 24 regression alerts, and 22 improvement alerts. The summary was edited by a performance sheriff to fit within the maximum field size:

4.55 – 18.83% apple ContentfulSpeedIndex … tumblr SpeedIndex (windows10-64-shippable) regression on push 6ea4d69aa5c6c7064d3b4a195bf96617baa3aebf (Thu March 4 2021)

With the recent improvements we limit how many tests are named in the summary and show a count of the omitted tests. We now list common names for the affected platforms, and no longer include the suspected commit hash. For the description, when we have many alerts we now show the most/least affected and indicate that one or more have been omitted for display purposes. Bug 1706333 is an example of the improved description and summary:

122.22 – 2.73% cnn-ampstories FirstVisualChange / ebay ContentfulSpeedIndex + 22 more (Windows) regression on Fri April 16 2021

Compare view sorting

We’ve added the ability to sort columns in compare view. This is useful when you’re comparing many tests and you’d like to quickly sort the results by confidence, delta, or magnitude.

Compare view sorted by confidence

Infrastructure changelog

Last year we created a unified changelog consolidating commits from repositories related to our automation infrastructure. Changes to infrastructure can impact our performance results, and time can be wasted investigating regressions in our products that aren’t there. To help with this, we now annotate Perfherder graphs with data from the infrastructure changelog. When one of these markers correlates to an alert it can provide a valuable clue for our sheriffs. The repositories monitored for changes can be found here.

Perfherder graph showing infrastructure changelog

Graph showing infrastructure changelog markers

Stop alerting on tier 3 jobs

After updating our Performance Regressions Policy to explicitly mention that the sheriffs do not monitor tier 3 jobs, we fixed Perfherder to prevent these from alerting. Anything running below tier 2 is considered unstable, and not a valuable performance indicator.

Reduced data footprint

We have also spent a lot of effort reducing the data footprint of our performance data by updating and enforcing our data retention policy. You can read more about our data footprint in last month’s newsletter.

Email reports

When working on our data retention policy we wanted some way of reporting the signatures that were being deleted, and so we introduced email reports. We’re also now sending reports for automated backfills, and in the future we’d like to generate more reports. If you’re curious, these are being sent to perftest-alerts.

Bug fixes

The following bug fixes are also worth highlighting:


These updates would not have been possible without Ionuț Goldan, Alexandru Irimovici, Alexandru Ionescu, Andra Esanu, Beatrice Acasandrei and Florin Strugariu. Thanks also to the Treeherder team for reviewing patches and supporting these contributions to the project. Finally, thank you to all of the Firefox engineers for all of your bug reports and feedback on Perfherder and the performance workflow. Keep it coming, and we look forward to sharing more updates with you all soon.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for March can be found here (for those with access).

This Week In Rust: This Week in Rust 388

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters this week.

Project/Tooling Updates
Rust Walkthroughs
Papers/Research Projects

Crate of the Week

This week's crate is cargo-rr, a cargo subcommand to use the time-traveling rr debugger on our code.

Thanks to Willi Kappler for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

jsonschema-rs: User-defined validation for the format keyword

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

350 pull requests were merged in the last week

Rust Compiler Performance Triage

It's always nice to have a week without any regressions and 2 small improvements 🎉🎉.

Triage done by @rylev. Revision range: 6df26f8..537544

0 Regressions, 2 Improvements, 0 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs



Confio GmbH




Parity Technologies




Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

this error message is UNREAL

Ash 2X3 on Twitter

Thanks to Nixon Enraght-Moony for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Chris Ilias: The screenshot option in Firefox has moved. Here’s how to find it.

If you have updated Firefox recently, you may have noticed that Take a Screenshot is missing from the page actions menu. Don’t fret. The feature is still in Firefox; it has just been moved.

Here’s how to find it…

You now have a button to take screenshots.

Of course, you can always right-click within a webpage and Take Screenshot will be part of the menu.

Will Kahn-Greene: Socorro Overview: 2021, presentation

Socorro became part of the Data Org part of Mozilla back in August 2020. I had intended to give this presentation in October 2020 after I had given one on Tecken 1, but then the team I was on got re-orged and I never got around to redoing the presentation for a different group.

Fast-forward to March. I got around to updating the presentation and then presented it to Data Club on March 26th, 2021.

I was asked if I want it posted on YouTube and while that'd be cool, I don't think video is very accessible on its own 2. Instead, I decided I wanted to convert it to a blog post. It took a while to do that for various reasons that I'll cover in another blog post.

This blog post goes through the slides and narrative of that presentation.


I should write that as a blog post, too.


This is one of the big reasons I worked on pyvideo for so long.

Read more… (28 min remaining to read)

Andrew Halberstadt: Phabricator Etiquette Part 2: The Author

Last time we looked at some ways reviewers can keep the review process moving efficiently. This week, let’s put on our author hats and do the same thing.

Mozilla Attack & Defense: Examining JavaScript Inter-Process Communication in Firefox


Firefox uses Inter-Process Communication (IPC) to implement privilege separation, which makes it an important cornerstone in our security architecture. A previous blog post focused on fuzzing the C++ side of IPC. This blog post will look at IPC in JavaScript, which is used in various parts of the user interface. First, we will briefly revisit the multi-process architecture and upcoming changes for Project Fission, Firefox’ implementation for Site Isolation. We will then move on to examine two different JavaScript patterns for IPC and explain how to invoke them. Using Firefox’s Developer Tools (DevTools), we will be able to debug the browser itself.

Once equipped with this knowledge, we will revisit a sandbox escape bug that was used in a 0day attack against Coinbase in 2019 and reported as CVE-2019-11708. This 0day bug has received extensive coverage in blog posts and publicly available exploits. We believe the bug provides a great case study and the underlying techniques will help identify similar issues. Eventually, by finding more sandbox escapes you can help secure hundreds of millions of Firefox users as part of the Firefox Bug Bounty Program.

Multi-Process Architecture Now and Then

As of April 2021, Firefox uses one privileged process to launch other process types and coordinate activities. These types are web content processes, semi-privileged web content processes (for certain special websites), and four kinds of utility processes for web extensions, GPU operations, networking, or media decoding. Here, we will focus on the communication between the main process (also called “parent”) and a multitude of web processes (or “content” processes).

Firefox is shifting towards a new security architecture to achieve Site Isolation, which moves from a “process per tab” to a “process per site” architecture.

Left: Firefox using roughly a process per tab - Right: Fission-enabled Firefox, which uses a process per site (i.e., a separate one for each banner ad and social button).

Left: Current Firefox generally grouping a tab in it’s own process. Right: Fission-enabled Firefox, separating each site in it’s own process

The parent process acts as a broker and trusted user interface host. Some features, like our settings page at about:preferences are essentially web pages (using HTML and JavaScript) that are hosted in the parent process. Additionally, various control features like modal dialogs, form auto-fill or native user interface pieces (e.g., the <select> element) are also implemented in the parent process. This level of privilege separation also requires receiving messages from content processes.

Let’s look at JSActors and MessageManager, the two most common patterns for using inter-process communication (IPC) from JavaScript.

JSActors

Using a JSActor is the preferred method for JS code to communicate between processes. JSActors always come in pairs – with one implementation living in the child process and the counterpart in the parent. There is a separate parent instance for every pair in order to closely and consistently associate a message with either a specific content window (JSWindowActors), or child process (JSProcessActors).

Since all JSActors are lazy-loaded, we suggest exercising the implemented functionality at least once, to ensure they are all present and to allow for a smooth test and debug experience.

Inter-Process Communication building on top of JSActors and implemented as FooParent and FooChild

The example diagram above shows a pair of JSActors called FooParent and FooChild. Messages sent by invoking FooChild will only be received by a FooParent. The child instance can send a one-off message with sendAsyncMessage("someMessage", value). If it needs a response (wrapped in a Promise), it can send a query with sendQuery("someMessage", value).

The parent instance must implement a receiveMessage(msg) function to handle all incoming messages. Note that messages are namespaced to a specific actor pair, so a FooChild could send a message called Bar:DoThing but would never be able to reach a BarParent. Here is some example code (permalink, revision from March 25th) which illustrates how a message is handled in the parent process.

Code sample for a receiveMessage function in a JSActor

As illustrated, the PromptParent has a receiveMessage handler (line 127) that passes the message data to additional functions, which decide where and how to open a prompt from the parent process. Message handlers like this and their callees are a source of untrusted data flowing into the parent process and provide logical entry points for in-depth audits.
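
To make the messaging shape concrete, here is a schematic sketch in plain JavaScript. The actor plumbing is stubbed out with a direct call (in Firefox the real classes extend JSWindowActorParent/JSWindowActorChild and messages cross a process boundary); FooParent, FooChild, and Foo:Ping are illustrative names, not real Firefox actors.

```javascript
// Schematic sketch of the JSActor pattern; the IPC transport is replaced by a
// direct call so this runs anywhere.
class FooParent {
  // Every parent actor implements receiveMessage() to handle incoming messages.
  receiveMessage(msg) {
    switch (msg.name) {
      case "Foo:Ping":
        // The return value resolves the child's sendQuery() Promise.
        return `pong: ${msg.data}`;
      default:
        throw new Error(`unexpected message ${msg.name}`);
    }
  }
}

class FooChild {
  constructor(parent) {
    this.parent = parent; // stand-in for the IPC channel to the parent process
  }
  // sendQuery() returns a Promise wrapping the parent's response.
  sendQuery(name, data) {
    return Promise.resolve(this.parent.receiveMessage({ name, data }));
  }
}

const child = new FooChild(new FooParent());
child.sendQuery("Foo:Ping", "hello").then((reply) => console.log(reply));
// → "pong: hello"
```

Note how all untrusted input funnels through the single receiveMessage entry point, which is exactly why that function is the place to audit.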

Message Managers

Prior to the architecture change in Project Fission, most parent-child IPC occurred through the MessageManagers system. There were multiple message managers, including the per-process message manager and the content frame message manager, which was loaded per-tab.

Under this system, JS in both processes would register message listeners using the addMessageListener methods and would send messages with sendAsyncMessage; each message has a name and the actual content. To help track messages throughout the codebase, their names are usually prefixed with the component they are used in (e.g., SessionStore:restoreHistoryComplete).

Unlike JSActors, Message Managers need verbose initialization with addMessageListener and are not tied together in pairs. This means that messages are delivered to all classes that listen on the same message name, and those listeners can be spread throughout the code base.

Inter-Process Communication using MessageManager

As of late April 2021, our AddonsManager – the code that handles the installation of WebExtensions into Firefox – is using MessageManager APIs:

Code sample for a receiveMessage function using the MessageManager API

The code (permalink to exact revision) for setting up a MessageManager looks very similar to the setup of a JSActor, with the difference that messaging can also be used synchronously, as indicated by the sendSyncMessage call in the child process. Except for the lack of lazy-loading, the same security considerations apply: just like with JSActors above, the receiveMessage function is where untrusted information flows from the child into the parent process, and it should therefore be the focus of additional scrutiny.
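
The fan-out behaviour that distinguishes Message Managers from JSActors can be sketched with a stub (MessageManagerStub, the message name, and the payload are invented for illustration; in Firefox the manager object is provided by the platform):

```javascript
// Schematic sketch of the MessageManager pattern: listeners register by
// message name, and EVERY listener registered for that name receives the
// message — there is no 1:1 actor pairing.
class MessageManagerStub {
  constructor() {
    this.listeners = new Map(); // message name → array of listeners
  }
  addMessageListener(name, listener) {
    const list = this.listeners.get(name) || [];
    list.push(listener);
    this.listeners.set(name, list);
  }
  sendAsyncMessage(name, data) {
    // Deliver to all listeners for this name, wherever they were registered.
    for (const listener of this.listeners.get(name) || []) {
      listener.receiveMessage({ name, data });
    }
  }
}

const mm = new MessageManagerStub();
mm.addMessageListener("SessionStore:restoreHistoryComplete", {
  receiveMessage(msg) {
    console.log(`got ${msg.name}`, msg.data);
  },
});
mm.sendAsyncMessage("SessionStore:restoreHistoryComplete", { tabId: 1 });
```

The broadcast delivery is what makes auditing harder: every receiveMessage registered for a given name is a potential consumer of untrusted child data.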

Finally, if you want to inspect MessageManager traffic live, you can use our logging framework and run Firefox with the environment variable MOZ_LOG set to MessageManager:5. This will log the received messages for all processes to the shell and give you a better understanding of what’s being sent and when.
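
For example, assuming you launch Firefox from a shell (the binary path is platform-dependent and shown here only as a placeholder):

```shell
# Enable verbose (level 5) logging for the MessageManager module, then start
# Firefox from the same shell so it inherits the variable.
export MOZ_LOG=MessageManager:5
echo "$MOZ_LOG"
# ./firefox   # path to your Firefox binary (adjust for your platform)
```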

Inspecting, Debugging, and Simulating JavaScript IPC

Naturally, source auditing a receiveMessage handler is best paired with testing. So let’s discuss how we invoke these functions in the child process and attach a JavaScript debugger to the parent process. This allows us to simulate a scenario where we already have full control over the child process. For this, we recommend you download and test against Firefox Nightly to ensure you’re testing the latest code – it will also give you the benefit of being in sync with codesearch for the latest revisions. For the best experience, download Firefox Nightly right now and follow this part of the blog post step by step.

DevTools Setup – Parent Process

First, set up your Firefox Nightly to enable browser debugging. Note that the instructions for how to enable browser debugging can change over time, so it’s best you cross-check with the instructions for Debugging the browser on MDN.

Open the Developer Tools, click the “···” button in the top-right and find the settings. Within Advanced settings in the bottom-right, check the following:

  • Enable browser chrome and add-on debugging toolboxes
  • Enable remote debugging
Enabling Browser debugging in Firefox Developer Tools

Restart Firefox Nightly and open the Browser debugger (Tools -> Browser Tools -> Browser Toolbox). This will open a new window that looks very similar to the common DevTools.

This is your debugger for the parent process (i.e., Browser Toolbox = Parent Toolbox).

The frame selector button, to the left of the “···” button, will allow you to select between windows. Select browser.xhtml, which is the main browser window. Switching to the Debugger pane will let you search files and find the parent actor you want to debug, as long as it has already been loaded. To ensure the PromptParent actor has been properly initialized, open a new tab on any normal web page and make it call alert(1) from the normal DevTools console.

Hitting a breakpoint in Firefox’s parent process using Firefox Developer Tools (left)

You should now be able to find PromptParent.jsm (Ctrl+P) and set a debugger breakpoint for all future invocations (see screenshot above). This will allow you to inspect and copy the typical arguments passed to the Prompt JSActor in the parent.

Note: Once you hit a breakpoint, you can enter code into the Developer Console which is then executed within the currently intercepted function.

DevTools Setup – Child Process

Now that we know how to inspect and obtain the parameters which the parent process is expecting for Prompt:Open, let’s try to trigger it from a debugged child process: Ensure you are on a typical web page, so you get the right kind of content child process. Then, through the Tools menu, find the “Browser Content Toolbox”. Content here refers to the child process (Content Toolbox = Child Toolbox).

Since every content process might have many windows of the same site associated with it, we need to find the current window. This snippet assumes it is the first tab and gets the Prompt actor for that tab:

// In the Browser Content Toolbox console: assumes the target page is the
// first tab in this content process.
actor = tabs[0].content.windowGlobalChild.getActor("Prompt");

Now that we have the actor, we can use the data gathered in the parent process and send the very same data. Or maybe, a variation thereof:

actor.sendQuery("Prompt:Open", {promptType: "alert", title: "👻", modalType: 1, promptPrincipal: null, inPermitUnload: false, _remoteID: "id-lol"});

Invoking JavaScript IPC from Firefox Developer Tools (bottom right) and observing the effects (top right)

In this case, we got away with not sending a reasonable value for promptPrincipal at all. This is certainly not going to be true for all message handlers. For the sake of this blog post, we can just assume that a Principal is the implementation of an Origin (and for background reading, we recommend an explanation of the Principal Objects in our two-series blog post “Understanding Web Security Checks in Firefox”: See part 1 and part 2).

In case you wonder why the content process is allowed to send a potentially arbitrary Principal (e.g., the origin): This is currently a known limitation and will be fixed while we are en route to full site-isolation (bug 1505832).

If you want to try sending another, faked origin – maybe from a different website, or maybe the most privileged Principal, the SystemPrincipal, which bypasses all security checks – you can use these snippets to create a replacement for the promptPrincipal in the IPC message:

// Import Services (run this in the Browser Content Toolbox console).
const {Services} = ChromeUtils.import("resource://gre/modules/Services.jsm");
// A content principal for an arbitrary, attacker-chosen origin:
otherPrincipal = Services.scriptSecurityManager.createContentPrincipalFromOrigin("https://evil.test");
// The SystemPrincipal, which bypasses all security checks:
systemPrincipal = Services.scriptSecurityManager.getSystemPrincipal();

Note that the association between process and site is already validated in debug builds, so if you compiled your own Firefox, sending a faked principal will cause the content process to crash.

Revisiting Previous Security Issues

Now that we have the setup in place we can revisit the security vulnerability mentioned above: CVE-2019-11708.

The issue in itself was a typical logic bug: Instead of switching which prompt to open in the parent process, the vulnerable version of this code accepted the URL to an internal prompt page, implemented as an XHTML page. But by invoking this message, the attacker could cause the parent process to open any web-hosted page instead. This allowed them to re-open their content process exploit again in the parent process and escalate to a full compromise.

Let’s take a look at the diff for the security fix to see how we replaced the vulnerable logic and handled the prompt type switching in the parent process (permalink to source).

Handling of untrusted data before and after fixing CVE-2019-11708.

You will notice that line 140+ used to accept and use a parameter named uri. This was fixed in a multitude of patches. In addition to only allowing certain dialogs to be opened in the parent process, we also now generally disallow opening web URLs in the parent process.

If you want to try this yourself, download a version of Firefox before 67.0.4 and try sending a Prompt:Open message with an arbitrary URL.

Next Steps

In this blog post, we have given an introduction to Firefox IPC using JavaScript and how to debug the child and the parent process using the Content Toolbox and the Browser Toolbox, respectively. Using this setup, you are now able to simulate a fully compromised child process, audit the message passing in source code and analyze the runtime behavior across multiple processes.

If you are already experienced with Fuzzing and want to analyze how high-level concepts from JavaScript get serialized and deserialized to pass the process boundary, please check our previous blog post on Fuzzing the IPC layer of Firefox.

If you are interested in testing and analyzing the source code at scale, you might also want to look into the CodeQL databases that we publish for all Firefox releases.

If you want to know more about how our developers port legacy MessageManager interfaces to JSActors, you can take another look at our JSActors documentation and at how Mike Conley ported the popup blocker in his Joy of Coding live stream Episode 204.

Finally, we at Mozilla are really interested in the bugs you might find with these techniques – bugs like confused-deputy attacks, where the parent process can be tricked into using its privileges in a way the content process should not be able to (e.g. reading/writing arbitrary files on the filesystem) or UXSS-type attacks, as well as bypasses of exploit mitigations. Note that as of April 2021, we are not enforcing full site-isolation. Bugs that allow one to impersonate another site will not yet be eligible for a bounty. Submit your findings through our bug bounty program and follow us at the @attackndefense Twitter account for more updates.



The Rust Programming Language BlogAnnouncing Rustup 1.24.0

Shortly after publishing the release we got reports of a regression preventing users from running rustfmt and cargo fmt after upgrading to Rustup 1.24.0. To limit the damage we reverted the release to version 1.23.1.

If you have been affected by this issue you can revert to version 1.23.1 by running the following command:

rustup self update

The rustup working group is happy to announce the release of rustup version 1.24.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.0 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.0

Support of rust-toolchain.toml as a filename for specifying toolchains.

Last year we released a new toml format for the rust-toolchain file. In order to bring Rustup closer into line with Cargo's behaviour around .cargo/config we now support the .toml extension for that file. If you call the toolchain file rust-toolchain.toml then you must use the toml format, rather than the legacy one-line format.
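
For illustration, a minimal rust-toolchain.toml in the new format might look like this (the pinned channel, components, and target shown here are examples, not recommendations):

```toml
# rust-toolchain.toml — must use the toml format, not the legacy one-line format
[toolchain]
channel = "1.52.0"                  # or "stable", "nightly", a dated nightly, …
components = ["rustfmt", "clippy"]  # extra components to install
targets = ["aarch64-apple-darwin"]  # extra compilation targets
```

Dropping a file like this into a project root makes rustup select that toolchain whenever you run cargo or rustc inside the project.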

If both rust-toolchain and rust-toolchain.toml are present, then the former will win out over the latter to ensure compatibility between Rustup versions.

Better support for low-memory systems

Rustup's component unpacker has been changed to have a smaller memory footprint when unpacking large components. This should permit users of memory-constrained systems such as some Raspberry Pi systems to install newer Rust toolchains which contain particularly large files.

Better support for Windows Add/Remove programs

Fresh installations of Rustup on Windows will now install themselves into the program list so that you can trigger the uninstallation of Rustup via the Add/Remove programs dialogs similar to any other Windows program.

This will only take effect on installation, so you will need to rerun rustup-init.exe if you want this on your PC.

Other changes

There are more changes in rustup 1.24.0: check them out in the changelog!

Rustup's documentation is also available in the rustup book.


Thanks to all the contributors who made rustup 1.24.0 possible!

  • Alex Chan
  • Aloïs Micard
  • Andrew Norton
  • Avery Harnish
  • chansuke
  • Daniel Alley
  • Daniel Silverstone
  • Eduard Miller
  • Eric Huss
  • est31
  • Gareth Hubball
  • Gurkenglas
  • Jakub Stasiak
  • Joshua Nelson
  • Jubilee (workingjubilee)
  • kellda
  • Michael Cooper
  • Philipp Oppermann
  • Robert Collins
  • SHA Miao
  • skim (sl4m)
  • Tudor Brindus
  • Vasili (3point2)
  • наб (nabijaczleweli)
  • 二手掉包工程师 (hi-rustin)

Chris H-CData Science is Interesting: Why are there so many Canadians in India?

Any time India comes up in the context of Firefox and Data I know it’s going to be an interesting day.

They’re our largest Beta population:

pie chart showing India by far the largest at 33.2%

They’re our second-largest English user base (after the US):

pie chart showing US as largest with 37.8% then India with 10.8%


But this is the interesting stuff about India that you just take for granted in Firefox Data. You come across these factoids for the first time and your mind is all blown and you hear the perhaps-apocryphal stories about Indian ISPs distributing Firefox Beta on CDs to their customers back in the Firefox 4 days… and then you move on. But every so often something new comes up and you’re reminded that no matter how much you think you’re prepared, there’s always something new you learn and go “Huh? What? Wait, what?!”

Especially when it’s India.

One of the facts I like to trot out to catch folks’ interest is how, when we first released the Canadian English localization of Firefox, India had more Canadians than Canada. Even today India is, after Canada and the US, the third largest user base of Canadian English Firefox:

pie chart of en-CA using Firefox clients by country. Canada at 75.5%, US at 8.35%, then India at 5.41%


Back in September 2018 Mozilla released the official Canadian English-localized Firefox. You can try it yourself by selecting it from the drop down menu in Firefox’s Preferences/Options in the “Language” section. You may have to click ‘Search for More Languages’ to be able to add it to the list first, but a few clicks later and you’ll be good to go, eh?

(( Or, if you don’t already have Firefox installed, you can select which language and dialect of Firefox you want from this download page. ))

Anyhoo, the Canadian English locale quickly gained a chunk of our install base:

uptake chart for en-CA users in Firefox in September 2018. Shows a sharp uptake followed by a weekly seasonal pattern with weekends lower than week days

…actually, it very quickly gained an overlarge chunk of our install base. Within a week we’d reached over three quarters of the entire Canadian user base?! Say we have one million Canadian users, that first peak in the chart was over 750k!

Now, we Canadian Mozillians suspected that there was some latent demand for the localized edition (they were just too polite to bring it up, y’know)… but not to this order of magnitude.

So back around that time a group of us including :flod, :mconnor, :catlee, :Aryx, :callek (and possibly others) fell down the rabbit hole trying to figure out where these Canadians were coming from. We ran down the obvious possibilities first: errors in data, errors in queries, errors in visualization… who knows, maybe I was counting some clients more than once a day? Maybe I was counting other Englishes (like South African and Great Britain) as well? Nothing panned out.

Then we guessed that maybe Canadians in Canada weren’t the only ones interested in the Canadian English localization. Originally I think we made a joke about how much Canadians love to travel, but then the query stopped running and showed us just how many Canadians there must be in India.

We were expecting a fair number of Canadians in the US. It is, after all, home to Firefox’s largest user base. But India? Why would India have so many Canadians? Or, if it’s not Canadians, why would Indians have such a preference for the English spoken in ten provinces and three territories? What is it about one of two official languages spoken from sea to sea to sea that could draw their attention?

Another thing that was puzzling was the raw speed of the uptake. If users were choosing the new localization themselves, we’d have seen a shallow curve with spikes as various news media made announcements or as we started promoting it ourselves. But this was far sharper an incline. This spoke to some automated process.

And the final curiosity (or clue, depending on your point of view) was discovered when we overlaid British English (en-GB) on top of the Canadian English (en-CA) uptake and noticed that (after accounting for some seasonality at the time due to the start of the school year) this suddenly-large number of Canadian English Firefoxes was drawn almost entirely from the number previously using British English:

chart showing use of British and Canadian English in Firefox in September 2018. The rise in use of Canadian English is matched by a fall in the use of British English.

It was with all this put together that day that lead us to our Best Guess. I’ll give you a little space to make your own guess. If you think yours is a better fit for the evidence, or simply want to help out with Firefox in Canadian English, drop by the Canadian English (en-CA) Localization matrix room and let us know! We’re a fairly quiet bunch who are always happy to have folks help us keep on top of the new strings added or changed in Mozilla projects or just chat about language stuff.

Okay, got your guess made? Here’s ours:

en-CA is alphabetically before en-GB.

Which is to say that the Canadian English Firefox, when put in a list with all the other Firefox builds (like this one which lists all the locales Firefox 88 comes in for Windows 64-bit), comes before the British English Firefox. We assume there is a population of Firefoxes, heavily represented in India (and somewhat in the US and elsewhere), that are installed automatically from a list like this one. This automatic installation is looking for the first English build in this list, and it doesn’t care which dialect. Starting September of 2018, instead of grabbing British English like it’s been doing for who knows how long, it had a new English higher in the list: Canadian English.

But who can say! All I know is that any time India comes up in the data, it’s going to be an interesting day.


Mozilla Security BlogUpgrading Mozilla’s Root Store Policy to Version 2.7.1

Individuals’ security and privacy on the internet are fundamental. Living up to that principle we are announcing the following changes to Mozilla’s Root Store Policy (MRSP) which will come into effect on May 1, 2021.

These updates to the Root Store Policy will not only improve our compliance monitoring, but also improve Certificate Authority (CA) practices and reduce the number of errors that CAs make when they issue new certificates. As a result, these updates contribute to a healthy security ecosystem on the internet and will enhance security and privacy for all internet users.

Living up to our mission and truly working in the open source community has led, after weeks of public exchange, to the following improvements to the MRSP. Please find a detailed comparison of the policy changes here – summing it up:

  • Beginning on October 1, 2021, CAs must verify domain names and IP addresses within 398 days prior to certificate issuance. (MRSP § 2.1)
  • Clarified that EV audits are required for root and intermediate certificates that are capable of issuing EV certificates, rather than being based on CA intentions.  (MRSP § 3.1.2)
  • Clearly specified that annual audit statements are required “cradle-to-grave” – from CA key pair generation until the root certificate is no longer trusted by Mozilla’s root store. (MRSP § 3.1.3)
  • Added a requirement that audit team qualifications be provided when audit statements are provided. (MRSP § 3.2)
  • Specified that Audit Reports must now include a list of incidents, and also indicate which CA locations were and were not audited (MRSP § 3.1.4 items 11 and 12).
  • Clarified when a certificate is deemed to directly or transitively chain to a CA certificate included in Mozilla’s program, which affects when the CA must provide audit statements for the certificate. (MRSP § 5.3)
  • Added a requirement that Section 4.9.12 of a CA’s CP/CPS MUST clearly specify the methods that may be used to demonstrate private key compromise. (MRSP § 6)

Many of these changes will result in updates and improvements in the processes of CAs and auditors and cause them to revise their practices. To ease transition, Mozilla has sent a CA Communication to alert CAs about these changes. We also sent CAs a survey asking them to indicate when they will be able to reach full compliance with this version of the MRSP.

In summary, updating the Root Store Policy improves the security ecosystem on the internet and the quality of every HTTPS connection, thus helping to keep your information private and secure.

The post Upgrading Mozilla’s Root Store Policy to Version 2.7.1 appeared first on Mozilla Security Blog.

The Firefox FrontierMozilla Explains: What is IDFA and why is this iOS update important?

During last week’s Apple event, the team announced a lot of new products and a new iPhone color, but the news that can have the biggest impact on all iPhone … Read more

The post Mozilla Explains: What is IDFA and why is this iOS update important? appeared first on The Firefox Frontier.

Firefox NightlyThese Weeks in Firefox: Issue 92


  • Firefox will now update in the background for Windows Nightly users. See details in Bug 1703909 and this firefox-dev post.
  • We plan to stage the rollout to Beta users in the Beta 89 cycle, and to Release users in the Firefox 89 cycle.
  • The Startup Skeleton UI has a new style and is set to ride the trains to release!
    • The skeleton UI, which shows an outline of the primary browser UI before Firefox has finished starting up, in dark mode.

      Firefox users on older or slower Windows machines will soon see this “skeleton UI” while Firefox is in the process of starting, rather than nothing at all.

  • Elastic overscroll (“rubberbanding”) is enabled on macOS (Bug 1704231).
  • Double-tap-to-zoom is enabled on macOS (Bug 674371). Double tap with two fingers to zoom in on a section of a webpage.
  • Thanks to our Outreachy candidates, who have been picking up good-first-bugs across Firefox!
    • There’s still time to apply – deadline is April 30th.
  • When viewing zip files, Firefox Profiler now automatically expands all the children.
    • The Firefox Profiler showing the contents of a compressed file. It has automatically expanded the folder structure to a profile file.

      This is especially handy if someone has compressed a whole bundle of profiles into a single file.

Friends of the Firefox team

For contributions from April 6 to April 20 2021, inclusive.


Resolved bugs (excluding employees)

Fixed more than one bug

  • Claudia Batista [:claubatista]
  • Dhanesh
  • Evgenia Kotovich
  • Falguni Islam
  • Garima Mazumdar
  • Itiel
  • kaira [:anshukaira]
  • Kajal Sah
  • Liz Krane
  • Luz De La Rosa
  • Manuel Carretero
  • Michael Kohler [:mkohler]
  • Michelle Goossens
  • msirringhaus
  • ryedu.09
  • Sarah Ukoha
  • Sebastian Zartner [:sebo]
  • Tim Nguyen :ntim

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • ntim fixed an alignment issue on the about:addons page options menu (regressed in the same nightly version by Bug 1699917, a proton related cleanup) – Bug 1703487
  • Michelle Goossens contributed a fix to ensure the “Restart with addons disabled” opt-in works in the new Proton window-modal UI – Bug 1685346
WebExtensions Framework
  • Extensions web_accessible_resources are not blocked by CORS checks anymore – Bug 1694679
  • Fixed a bug that was preventing the “New Windows and Tabs URL” controlled by an extension to be updated as expected when the url changes as part of an extension update – Bug 1698619
    • NOTE: Firefox will re-prompt the user if the url is changed to an external URL from a previous one packaged into the extension itself

Developer Tools

  • Fission, targeting two upcoming milestones
    • M7a – working on BFCache related bugs
    • M8 – reaching feature parity with pre-Fission state
  • Style resources (*.css files) blocked by CSP are properly displayed in the Network monitor panel (bug).
    • The Network monitor developer tool panel showing blocked JavaScript and CSS requests due to CSP violations.

      It’s clearer now what got blocked due to CSP violations.

Installer & Updater

  • Of note to this audience: as part of this work, a while ago we made launching DevTools not apply updates: Bug 1120863. We’d like to do more in that direction but It Is Complicated – cheers to :bytesized for the epic bug comment!

Lint and Docs

macOS Spotlight

macOS Spotlight is a new team spanning Desktop and Platform working on improving macOS platform integration. Come say hello to the team in the #macdev channel on Matrix! Here’s what the team has worked on since its inception in mid-February:

  • Native context menus are nearly done (Bug 34572). They’re currently disabled pending test failures (tracked in Bug 1700724). Enable widget.macos.native-context-menus to try them out.
  • When mousing to the top of the window in fullscreen, the title bar no longer covers the tab strip (Bug 738335).
  • Long tooltips will no longer appear on the wrong monitor (Bug 1689682).
  • The sidebar now has a vibrant see-through appearance (Bug 1594132).
  • Fixed an issue where the app menu would open at the wrong size then shrink (Bug 1687774).
  • Right click on the toolbar in fullscreen mode and click “Hide Toolbars” to hide the toolbar. This feature has been broken on macOS for years and is now fixed! This is useful for presentations (Bug 740148).
  • The Firefox System theme now changes with the “Auto” OS theme (bug 1593390).
  • Ongoing work on enabling native fullscreen by default (Bug 1631735). Enable full-screen-api.macos-native-full-screen to always use macOS native fullscreen.
  • Ongoing work on enhanced dark mode support. This includes theming the titlebar and non-browser windows like Library, using dark stoplight buttons, dark context menus, dark tooltips, dark highlight colors, and more. Enable widget.macos.respect-system-appearance to try it out (expect breakage!) (Bug 1623686).

Messaging System

Password Manager


  • OS.File is almost completely out of the startup path – only a handful of bugs remain and they all have patches up.
  • kmoir ported the “best practices for front-end Firefox engineers” document to source docs

Performance Tools

  • Made the profile info button more explicit
    • Before
      • In the Firefox Profiler UI, the button to get information about a profile is highlighted. It has a label "Uploaded Profile".


    • After
      • In the Firefox Profiler UI, the button to get information about a profile is highlighted. It has a label "Profile Info".


  • Marker tooltips now display the inner window ids if there are multiple pages with the same URL
    • The marker tooltip in the Firefox Profiler UI showing an "activate" DOM event. After the URL of the document that the event fired on, the string "(id: 25)" is shown.

      You can see the inner window ID at the right side of the URL now

  • Test category is being displayed first in the Marker Chart now, as they are very relevant when they exist

Search and Navigation

  • New address bar styling has been enabled!
    • fixes to colors, padding, icons and better support for custom themes (improved contrast, separators, hover/selected)
    • New tab page search hand-off to urlbar is moving to release with Firefox 89
    • Restyled the separate search bar
  • Fixed a bug where the default search engine couldn’t be changed under unexpected conditions – Bug 1693585
  • Removed support to install OpenSearch engines via FTP – Bug 1692018

Niko MatsakisAsync Vision Doc Writing Sessions VII

My week is very scheduled, so I am not able to host any public drafting sessions this week – however, Ryan Levick will be hosting two sessions!

When Who
Wed at 07:00 ET Ryan
Fri at 07:00 ET Ryan

If you’re available and those stories sound like something that interests you, please join him! Just ping me or Ryan on Discord or Zulip and we’ll send you the Zoom link. If you’ve already joined a previous session, the link is the same as before.

Extending the schedule by two weeks

We have previously set 2021-04-30 as the end-date, but I proposed in a recent PR to extend that end date to 2021-05-14. We’ve been learning how this whole vision doc thing works as we go, and I think it seems clear we’re going to want more time to finish off status quo stories and write shiny future before we feel we’ve really explored the design space.

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.

Daniel StenbergPlease select your TLS

tldr: starting now, you need to select which TLS library to use when you run curl’s configure script.

How it started

In June 1998, three months after the first release of curl, we added support for HTTPS. We decided that we would use an external library for this purpose – for providing SSL support – and therefore the first dependency was added. The build would optionally use SSLeay. If you wanted HTTPS support enabled, we would use that external library.

SSLeay ended development at the end of that same year, and OpenSSL rose from its ashes as a new project and library. Later the term “SSL” would itself be replaced by “TLS”, but the entire world has kept using the two interchangeably.

Building curl

The initial configure script we wrote and provided back then (it appeared for the first time in November 1998) would look for OpenSSL and use it if found present.

In the spring of 2005, we merged support for an alternative TLS library, GnuTLS, and now you would have to tell the configure script to not use OpenSSL but instead use GnuTLS if you wanted that in your build. That was the humble beginning of the explosion of TLS libraries supported by curl.

As time went on we added support for more and more TLS libraries, giving the users the choice to select exactly which particular one they wanted their curl build to use. At the time of this writing, we support 14 different TLS libraries.

[Figure: TLS backends supported in curl, over time]

OpenSSL was still default

The original logic from when we added GnuTLS back in 2005 was, however, still kept: whichever library you wanted to use, you had to tell configure to not use OpenSSL and to instead use your preferred library.

Also, because the default configure script would try to find and use OpenSSL, there were surprises for users who maybe didn’t want TLS in their build, or for whom something was just not correctly set up so that configure unexpectedly didn’t find OpenSSL and the build then went on and was made completely without TLS support! Sometimes this even went unnoticed for a long time.

Not doing it anymore

Starting now, curl’s configure will not select TLS backend by default.

It will not decide for you which one you use, as there are many decisions involved when selecting TLS backend and there are many users who prefer something else than OpenSSL. We will no longer give any special treatment to that library at build time. We will not impose our bias onto others anymore.

Not selecting any TLS backend at all will just make configure exit quickly with a help message prompting you to make a decision, as shown below. Notice that going completely without a TLS library is still fine but similarly also requires an active decision (--without-ssl).

[Figure: configure’s help message; the list of available TLS backends is sorted alphabetically]

Effect on configure users

With this change, every configure invocation needs to clearly state which TLS library, or even libraries (plural, since curl supports building with support for more than one library), to use.

The biggest change is of course for everyone who invokes configure and wants to build with OpenSSL, since they now need to say so explicitly with an option (--with-openssl). For virtually everyone else, life can just go on like before.

Everyone who builds curl automatically from source code might need to update their build scripts.

The first release shipping with this change will be curl 7.77.0.


Image by Free-Photos from Pixabay

Spidermonkey Development BlogSpiderMonkey Newsletter 10 (Firefox 88-89)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 88 and 89 Nightly release cycles.

In this newsletter we bid a fond farewell to module owner emeritus Jason Orendorff, and say hello to Jan de Mooij as the new JavaScript Engine module owner.

If you like these newsletters, you may also enjoy Yulia’s Compiler Compiler live stream.

🏆 New contributors

We’d like to thank our new contributors. We are working with Outreachy for the May 2021 cohort, and so have been fortunate enough to have more than the usual number of new contributors.

👷🏽‍♀️ JS features

⚡ WebAssembly

  • We enabled support for large ArrayBuffers and 4 GB Wasm memories in Firefox 89.
  • We enabled support for SIMD on x86 and x64 in Firefox 89.
  • Igalia finished the implementation of the Exception Handling proposal in the Baseline Compiler.
  • We implemented support for arrays and rtt-based downcasting in our Wasm GC prototype.
  • We’ve enabled the Ion backend for ARM64 in Nightly builds.
  • We’ve landed many changes and optimizations for SIMD support.
  • We removed various prefs for features we’ve been shipping for some time.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We implemented a mechanism for function delazification information to be merged with the initial stencil before writing to caches.
  • We added support for modules and off-thread compilation to the Stencil API.
  • We optimized use of CompilationState in the parser for certain cases.
  • We added magic values to the Stencil bytecode serialization format to detect corrupt data and handle this more gracefully.
  • We fixed the Stencil bytecode serialization format to deduplicate bytecode.
  • We’re getting closer to sharing Stencil information for self-hosted code across content processes. We expect significant memory usage and performance improvements from this in the coming weeks.

🧹 Garbage Collection

  • We simplified and optimized the WeakMap code a bit.
  • We disabled nursery poisoning for Nightly release builds. The poisoning was pretty expensive and often caused slowdowns compared to release builds that didn’t have the poisoning.
  • We added support for decommitting free arenas on Apple’s M1 hardware. This required some changes due to the 16 KB page size.
  • We changed the pre-write barrier to use a buffering mechanism instead of marking directly.
  • GC markers now describe what they are, hopefully reducing confusion over whether the browser is paused throughout a major GC.


  • We changed how arguments objects are optimized. Instead of doing an (expensive) analysis for all functions that use arguments, we now use Scalar Replacement in the Warp backend to optimize away arguments allocations. The new implementation is simpler, more self-contained, and lets us avoid doing the analysis for cold functions.
  • We fixed the Scalar Replacement code for arrays and objects to work with Warp.
  • We also added back support for branch pruning with Warp.
  • We added CacheIR support for optimizing GetElem, SetElem and in operations with null or undefined property keys. This turned out to be very common on certain websites.
  • We optimized DOM getters for WindowProxy objects.
  • We improved function inlining in Warp for certain self-hosted functions that benefit from inlining.
  • We added a browser pref to control the function inlining size threshold, to help us investigate performance issues.

📐 ReShape

Now that Warp is on by default and we’ve removed the old backend and Type Inference mechanism, we’re able to optimize our object representation more. Modern websites spend a significant amount of time doing property lookups, and property information takes up a lot of space, so we expect improvements in this area to pay off.

  • We’ve merged ObjectGroup (used by the old Type Inference system) into Shape and BaseShape. This removed a word from every JS object and is also simpler.
  • We cleaned up and deduplicated our property lookup code.
  • We’ve replaced the old JSGetterOp and JSSetterOp getters/setters with a property attribute.
  • We changed our implementation of getter/setter properties: instead of storing the getter and setter objects in the shape tree, we now store them in object slots. This fixes some performance cliffs and unblocks future Shape changes.
  • We’ve started adding better abstractions for property information stored in shapes. This will make it easier to experiment with different representations in the coming weeks.

🛠 Testing

  • We made SpiderMonkey’s test suites on Android about four times faster by optimizing the test runner, copying fewer files to the device, and reducing the number of jit-test configurations.
  • We removed the Rust API crates because upstream Servo uses its own version instead of the one we maintained in-tree.
  • We landed support for the Fuzzilli JS engine fuzzer in the JS shell.

📚 Miscellaneous

  • We cleaned up the lexical environment class hierarchy.
  • We optimized Object.assign. Modern JS frameworks use this function a lot.
  • The bytecode emitter now emits optimized bytecode for name lookups in strict-mode eval.
  • We updated irregexp to the latest upstream version.
  • We optimized checks for strings representing an index by adding a flag for this to atoms.
  • Function delazification is now properly reported in the profiler.
  • The profiler reports more useful stacks for JS code because it’s now able to retrieve registers from the JIT trampoline to resume stack walking.
  • We added memory reporting for external ArrayBuffer memory and also reduced heap-unclassified memory by adding some new memory reporters.
  • We added documentation for the LifoAlloc allocator.
  • We fixed Clang static analysis and formatting issues in the Wasm code.
  • We’ve started cleaning up PropertyDescriptor by using Maybe<PropertyDescriptor>.

The Mozilla BlogNotes on Implementing Vaccine Passports

Now that we’re starting to get widespread COVID vaccination, “vaccine passports” have started to become more relevant. The idea behind a vaccine passport is that you would have some kind of credential that you could use to prove that you had been vaccinated against COVID; various entities (airlines, clubs, employers, etc.) might require such a passport as proof of vaccination. Right now deployment of this kind of mechanism is fairly limited: Israel has one called the green pass and the State of New York is using something called the Excelsior Pass based on some IBM tech.

Like just about everything surrounding COVID, there has been a huge amount of controversy around vaccine passports (see, for instance, this EFF post, ACLU post, or this NYT article).

There seem to be four major sets of complaints:

  1. Requiring vaccination is inherently a threat to people’s freedom
  2. Because vaccine distribution has been unfair, with a number of communities having trouble getting vaccines, a requirement to get vaccinated increases inequity and vaccine passports enable that.
  3. Vaccine passports might be implemented in a way that is inaccessible for people without access to technology (especially to smartphones).
  4. Vaccine passports might be implemented in a way that is a threat to user privacy and security.

I don’t have anything particularly new to say about the first two questions, which aren’t really about technology but rather about ethics and political science, so I don’t think it’s that helpful to weigh in on them, except to observe that vaccination requirements are nothing new: it’s routine to require children to be vaccinated to go to school, people to be vaccinated to enter certain countries, etc. That isn’t to say that this practice is without problems, but merely that it’s already quite widespread, so we have a bunch of prior art here. On the other hand, the questions of how to design a vaccine passport system are squarely technical; the rest of this post will be about that.

What are we trying to accomplish?

As usual, we want to start by asking what we’re trying to accomplish. At a high level, we have a system in which a vaccinated person (VP) needs to demonstrate to some entity (the Relying Party (RP)) that they have been vaccinated within some relevant time period. This brings with it some security requirements:

  1. Unforgeability: It should not be possible for an unvaccinated person to persuade the RP that they have been vaccinated.
  2. Information minimization: The RP should learn as little as possible about the VP, consistent with unforgeability.
  3. Untraceability: Nobody but the VP and RP should know which RPs the VP has proven their status to.

I want to note at this point that there has been a huge amount of emphasis on the unforgeability property, but it’s fairly unclear — at least to me — how important it really is. We’ve had trivially forgeable paper-based vaccination records for years and I’m not aware of any evidence of widespread fraud. However, this seems to be something people are really concerned about — perhaps due to how polarized the questions of vaccination and masks have become — and we have already heard some reports of sales of fake vaccine cards, so perhaps we really do need to worry about cheating. It’s certainly true that people are talking about requiring proof of COVID vaccination in many more settings than, for instance, proof of measles vaccination, so there is somewhat more incentive to cheat. In any case, the privacy requirements are a real concern.

In addition, we have some functional requirements/desiderata:

  1. The system should be cheap to bring up and operate.
  2. It should be easy for VPs to get whatever credential they need and to replace it if it is lost or destroyed.
  3. VPs should not be required to have some sort of device (e.g., a smartphone).

The Current State

In the US, most people who are getting vaccinated are getting paper vaccination cards that look like this:

COVID Vaccination Card

This card is a useful record that you’ve been vaccinated, with which vaccine, and when you have to come back, but it’s also trivially forgeable. Given that they’re made of paper with effectively no anti-counterfeiting measures (not even the ones that are in currency), it would be easy to make one yourself, and there are already people selling them online. As I said above, it’s not clear entirely how much we ought to worry about fraud, but if we do, these cards aren’t up to the task. In any case, they also have suboptimal information minimization properties: it’s not necessary to know how old you are or which vaccine you got in order to know whether you were vaccinated.

The cards are pretty good on the traceability front: nobody but you and the RP learns anything, and they’re cheap to make and use, without requiring any kind of device on the user’s side. They’re not that convenient if you lose them, but given how cheap they are to make, it’s not the worst thing in the world if the place you got vaccinated has to mail you a new one.

Improving The Situation

A good place to start is to ask how to improve the paper design to address the concerns above.

The data minimization issue is actually fairly easy to address: just don’t put unnecessary information on the card: as I said, there’s no reason to have your DOB or the vaccine type on the piece of paper you use for proof.

However, it’s actually not straightforward to remove your name. The reason for this is that the RP needs to be able to determine that the credential actually applies to you rather than to someone else. Even if we assume that the credential is tamper-resistant (see below), that doesn’t mean it belongs to you. There are really two main ways to address this:

  1. Have the VP’s name (or some ID number) on the credential and require them to provide a biometric credential (i.e., a photo ID) that proves they are the right person.
  2. Embed a biometric directly into the credential.

This should all be fairly familiar because it’s exactly the same as other situations where you prove your identity. For instance, when you get on a plane, TSA or the airline reads your boarding pass, which has your name, and then uses your photo ID to compare that to your face and decide if it’s really you (this is option 1). By contrast, when you want to prove you are licensed to drive, you present a credential that has your biometrics directly embedded (i.e., a drivers license).

This leaves us with the question of how to make the credential tamper-resistant. There are two major approaches here:

  1. Make the credential physically tamper-resistant
  2. Make the credential digitally tamper-resistant

Physically Tamper-Resistant Credentials

A physically tamper-resistant credential is just one which is hard to change or for unauthorized people to manufacture. This usually includes features like holograms, tamper-evident sealing (so that you can’t disassemble it without leaving traces), etc. Most of us have a lot of experience with physically tamper-resistant credentials such as passports, drivers licenses, etc. These generally aren’t completely impossible to forge, but they’re designed to be somewhat difficult. From a threat model perspective, this is probably fine; after all, we’re not trying to make it impossible to pretend to be vaccinated, just difficult enough that most people won’t try.

In principle, this kind of credential has excellent privacy because it’s read by a human RP rather than some machine. Of course, one could take a photo of it, but there’s no need to. As an analogy, if you go to a bar and show your driver’s license to prove you are over 21, that doesn’t necessarily create a digital record. Unfortunately for privacy, increasingly those kinds of previously analog admissions processes are actually done by scanning the credential (which usually has some machine readable data), thus significantly reducing the privacy benefit.

The main problem with a physically tamper-resistant credential is that it’s expensive to make and that by necessity you need to limit the number of people who can make it: if it’s cheap to buy the equipment to make the credential then it will also be cheap to forge. This is inconsistent with rapidly issuing credentials concurrently with vaccinating people: when I got vaccinated there were probably 25 staff checking people in and each one had a stack of cards. It’s hard to see how you would scale the production of tamper-resistant plastic cards to an operation like this, let alone to one that happens at doctors offices and pharmacies all over the country. It’s potentially possible that they could report people’s names to some central authority which then makes the cards, but even then we have scaling issues, especially if you want the cards to be available 2 weeks after vaccination. A related problem is that if you lose the card, it’s hard to replace because you have the same issuing problem.[1]

Digitally Tamper-Resistant Credentials

The major alternative here is to design a digitally tamper-resistant system. Effectively what this means is that the issuing authority digitally signs a credential. This provides cryptographically strong authentication of the data in the credential in such a way that anyone can verify it as long as they have the right software. The credential just needs to contain the same information as would be on the paper credential: the fact that you were vaccinated (and potentially a validity date) plus either your name (so you can show your photo id) or your identity (so the RP can directly match it against you).

This design has a number of nice properties. First, it’s cheap to manufacture: you can do the signing on a smartphone app.[2] It doesn’t need any special machinery from the RP: you can encode the credential as a 2-D bar code which the VP can show on their phone or print out. And they can make as many copies as they want, just like your airline boarding pass.
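To make the shape of the sign-then-verify flow concrete, here is a minimal sketch in Python. It uses the stdlib hmac module purely as a stand-in for a real public-key signature (a deployed system would use something like Ed25519, so that relying parties only hold the issuer’s public key, never a signing secret), and the payload field names are invented for illustration:

```python
import hashlib
import hmac
import json

# Stand-in for the issuing authority's key. In a real system this would be
# a private signing key; verifiers would use the matching public key.
ISSUER_KEY = b"issuer-secret"

def issue(payload):
    """The health authority signs the minimal credential payload."""
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(credential):
    """The RP recomputes the signature over the payload and compares."""
    blob = json.dumps(credential["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue({"name": "Jane Doe", "vaccinated": True,
              "valid_until": "2021-12-31"})
assert verify(cred)                  # untampered credential verifies
cred["payload"]["name"] = "Someone"  # any edit to the payload...
assert not verify(cred)              # ...invalidates the signature
```

The credential dictionary here plays the role of the data behind the 2-D bar code: anyone holding it can copy it freely, but nobody can alter the payload or mint a new one without the issuer’s key.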

The major drawback of this design is that it requires special software on the RP side to read the 2D bar code, verify the digital signature, and verify the result. However, this software is relatively straightforward to write and can run on any smartphone, using the camera to read the bar code.[3] So, while this is somewhat of a pain, it’s not that big a deal.

This design also has generally good privacy properties: the information encoded in credential is (or at least can be) the minimal set needed to validate that you are you and that you are vaccinated, and because the credential can be locally verified, there’s no central authority which learns where you go. Or, at least, it’s not necessary for there to be a central authority: nothing stops the RP from reporting that you were present back to some central location, but that’s just inherent in them getting your name and picture. As far as I know, there’s no way to prevent that, though if the credential just contains your picture rather than an identifier, it’s somewhat better (though the code itself is still unique, so you can be tracked) especially because the RP can always capture your picture anyway.[4]

By this point you should be getting the impression that signed credentials are a pretty good design, and it’s no surprise that this seems to be the design that WHO has in mind for their smart vaccination certificate. They seem to envision encoding quite a bit more information than is strictly required for a “yes/no” decision, and then having a “selective disclosure” feature that exposes just the minimal information, encoded in a bar code.

What about Green Pass, Excelsior Pass, etc?

So what are people actually rolling out in the field? The Israeli Green Pass seems to be basically this: a signed credential. It’s got a QR code which you read with an app, and the app then displays the ID number and an expiration date. You then compare the ID number to the user’s ID to verify that they are the right person.

I’ve had a lot of trouble figuring out what the Excelsior Pass does. Based on the NY Excelsior Pass FAQ, which says that “you can print a paper Pass, take a screen shot of your Pass, or save it to the Excelsior Pass Wallet mobile app”, it sounds like it’s the same kind of thing as Green Pass, but that’s hardly definitive. I’ve been trying to get a copy of the specification for this technology and will report back if I manage to learn more.

What About the Blockchain?

Something that keeps coming up here is the use of blockchain for vaccine passports. You’ll notice that my description above doesn’t have anything about the blockchain but, for instance, the Excelsior Pass says it is built on IBM’s digital health pass which is apparently “built on IBM blockchain technology” and says “Protects user data so that it remains private when generating credentials. Blockchain and cryptography provide credentials that are tamper-proof and trusted.” As another example, in this webinar on the Linux Foundation’s COVID-19 Credentials Initiative, Kaliya Young answers a question on blockchain by saying that the root keys for the signers would be stored in the blockchain.

To be honest, I find this all kind of puzzling; as far as I can tell there’s no useful role for the blockchain here. To oversimplify, the major purpose of a blockchain is to arrange for global consensus about some set of facts (for instance, the set of financial transactions that has happened), but that’s not necessary in this case: the structure of a vaccine credential is that some health authority asserts that a given person has been vaccinated. We do need relying parties to know the set of health authorities, but we have existing solutions for that (at a high level, you just build the root keys into the verifying apps).[5] If anyone has more details on why a blockchain[6] is useful for this application I’d be interested in hearing them.

Is this stuff any good?

It’s hard to tell. As discussed above, some of these designs seem to be superficially sensible, but even if the overall design is sensible, there are lots of ways to implement it incorrectly. It’s quite concerning not to have published specifications for the exact structure of the credentials. Without a detailed specification, it’s not possible to determine that a system has the claimed security and privacy properties. The protocols that run the Web and the Internet are open, which not only allows anyone to implement them but also to verify their security and privacy properties. If we’re going to have vaccine passports, they should be open as well.

Updated: 2021-04-02 10:10 AM to point to Mozilla’s previous work on blockchain and identity.

  1. Of course, you could be issued multiple cards, as they’re not transferable. ↩︎
  2. There are some logistical issues around exactly who can sign: you probably don’t want everyone at the clinic to have a signing key, but you can have some central signer. ↩︎
  3. Indeed, in Santa Clara County, where I got vaccinated, your appointment confirmation is a 2D bar code which you print out and they scan onsite. ↩︎
  4. If you’re familiar with TLS, this is going to sound a lot like a digital certificate, and you might wonder whether revocation is a privacy issue the way that it is with WebPKI and OCSP. The answer is more or less “no”. There’s no real reason to revoke individual credentials and so the only real problem is revoking signing certificates. That’s likely to happen quite infrequently, so we can either ignore it, disseminate a certificate revocation list, or have central status checking just for them. ↩︎
  5. Obviously, you won’t be signing every credential with the root keys, but you use those to sign some other keys, building a chain of trust down to keys which you can use to sign the user credentials. ↩︎
  6. Because of the large amount of interest in blockchain technologies, there’s a tendency to try to sprinkle it in places it doesn’t help, especially in the identity space. For that reason, it’s really important to ask what benefits it’s bringing. ↩︎

The post Notes on Implementing Vaccine Passports appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgPyodide Spin Out and 0.17 Release

We are happy to announce that Pyodide has become an independent and community-driven project. We are also pleased to announce the 0.17 release for Pyodide with many new features and improvements.

Pyodide consists of the CPython 3.8 interpreter compiled to WebAssembly, which allows Python to run in the browser. Many popular scientific Python packages have also been compiled and made available. In addition, Pyodide can install any Python package with a pure Python wheel from the Python Package Index (PyPI). Pyodide also includes a comprehensive foreign function interface which exposes the ecosystem of Python packages to Javascript and the browser user interface, including the DOM, to Python.

You can try out the latest version of Pyodide in a REPL directly in your browser.

Pyodide is now an independent project

We are happy to announce that Pyodide now has a new home in a separate GitHub organisation and is maintained by a volunteer team of contributors. The project documentation is available on the project’s site.

Pyodide was originally developed inside Mozilla to allow the use of Python in Iodide, an experimental effort to build an interactive scientific computing environment for the web.  Since its initial release and announcement, Pyodide has attracted a large amount of interest from the community, remains actively developed, and is used in many projects outside of Mozilla.

The core team has approved a transparent governance document  and has a roadmap for future developments. Pyodide also has a Code of Conduct which we expect all contributors and core members to adhere to.

New contributors are welcome to participate in the project development on Github. There are many ways to contribute, including code contributions, documentation improvements, adding packages, and using Pyodide for your applications and providing feedback.

The Pyodide 0.17 release

Pyodide 0.17.0 is a major step forward from previous versions. It includes:

  • major maintenance improvements,
  • a thorough redesign of the central APIs, and
  • careful elimination of error leaks and memory leaks

Type translation improvements

The type translations module was significantly reworked in v0.17 with the goal that round trip translations of objects between Python and Javascript produces an identical object.

In other words, Python -> JS -> Python translation and JS -> Python -> JS translation now produce objects that are  equal to the original object. (A couple of exceptions to this remain due to unavoidable design tradeoffs.)

One of Pyodide’s strengths is the foreign function interface between Python and Javascript, which at its best can practically erase the mental overhead of working with two different languages. All I/O must pass through the usual web APIs, so in order for Python code to take advantage of the browser’s strengths, we need to be able to support use cases like generating image data in Python and rendering the data to an HTML5 Canvas, or implementing event handlers in Python.

In the past we found that one of the major pain points in using Pyodide occurs when an object makes a round trip from Python to Javascript and back to Python and comes back different. This violated the expectations of the user and forced inelegant workarounds.

The issues with round trip translations were primarily caused by implicit conversion of Python types to Javascript. The implicit conversions were intended to be convenient, but the system was inflexible and surprising to users. We still implicitly convert strings, numbers, booleans, and None. Most other objects are shared between languages using proxies that allow methods and some operations to be called on the object from the other language. The proxies can be converted to native types with new explicit converter methods called .toJs and to_py.

For instance, given an Array in JavaScript,

window.x = ["a", "b", "c"];

We can access it in Python as,

>>> from js import x # import x from global Javascript scope
>>> type(x)
<class 'JsProxy'>
>>> x[0]    # can index x directly
'a'
>>> x[1] = 'c' # modify x
>>> x.to_py()   # convert x to a Python list
['a', 'c', 'c']

Several other conversion methods have been added for more complicated use cases. This gives the user much finer control over type conversions than was previously possible.

For example, suppose we have a Python list and want to use it as an argument to a Javascript function that expects an Array.  Either the caller or the callee needs to take care of the conversion. This allows us to directly call functions that are unaware of Pyodide.

Here is an example of calling a Javascript function from Python with argument conversion on the Python side:

function jsfunc(array) {
  return array.length;
}

from js import jsfunc
from pyodide import to_js

def pyfunc():
  mylist = [1,2,3]
  jslist = to_js(mylist)
  return jsfunc(jslist) # returns 3

This would work well in the case that jsfunc is a Javascript built-in and pyfunc is part of our codebase. If pyfunc is part of a Python package, we can handle the conversion in Javascript instead:

function jsfunc(pylist) {
  let array = pylist.toJs();
  return array.length;
}

See the type translation documentation for more information.

Asyncio support

Another major new feature is the implementation of a Python event loop that schedules coroutines to run on the browser event loop. This makes it possible to use asyncio in Pyodide.

Additionally, it is now possible to await Javascript Promises in Python and to await Python awaitables in Javascript. This allows for seamless interoperability between asyncio in Python and Javascript (though memory management issues may arise in complex use cases).

Here is an example where we define a Python async function that awaits the Javascript async function “fetch” and then we await the Python async function from Javascript.

async def test():
    from js import fetch
    # Fetch the Pyodide packages list
    r = await fetch("packages.json")
    data = await r.json()
    # return all available packages
    return data.dependencies.object_keys()

let test = pyodide.globals.get("test");

// we can await the test() coroutine from Javascript
result = await test();
// logs ["asciitree", "parso", "scikit-learn", ...]

Error Handling

Errors can now be thrown in Python and caught in Javascript or thrown in Javascript and caught in Python. Support for this is integrated at the lowest level, so calls between Javascript and C functions behave as expected. The error translation code is generated by C macros which makes implementing and debugging new logic dramatically simpler.

For example:

function jserror() {
  throw new Error("ooops!");
}

from js import jserror
from pyodide import JsException

try:
  jserror()
except JsException as e:
  print(str(e)) # prints "Error: ooops!"

Emscripten update

Pyodide uses the Emscripten compiler toolchain to compile the CPython 3.8 interpreter and Python packages with C extensions to WebAssembly. In this release we finally completed the migration to the latest version of Emscripten that uses the upstream LLVM backend. This allows us to take advantage of recent improvements to the toolchain, including significant reductions in package size and execution time.

For instance, the SciPy package shrank dramatically from 92 MB to 15 MB, so SciPy is now small enough to be cached by browsers. This greatly improves the usability of scientific Python packages that depend on SciPy, such as scikit-image and scikit-learn. The size of the base Pyodide environment with only the CPython standard library shrank from 8.1 MB to 6.4 MB.

On the performance side, the latest toolchain comes with a 25% to 30% run time improvement:

Performance ranges from near-native to 3 to 5 times slower, depending on the benchmark. The above benchmarks were created with Firefox 87.

Other changes

Other notable features include:

  • Fixed package loading for Safari v14+ and other Webkit-based browsers
  • Added support for relative URLs in micropip and loadPackage, and improved interaction between micropip and loadPackage
  • Support for implementing Python modules in Javascript

We also did a large amount of maintenance work and code quality improvements:

  • Lots of bug fixes
  • Upstreamed a number of patches to the emscripten compiler toolchain
  • Added systematic error handling to the C code, including automatic adaptors between Javascript errors and CPython errors
  • Added internal consistency checks to detect memory leaks, detect fatal errors, and improve ease of debugging

See the changelog for more details.

Winding down Iodide

Mozilla has made the difficult decision to wind down the Iodide project. While Iodide will continue to be available for now (in part to provide a demonstration of Pyodide’s capabilities), we do not recommend using it for important work as it may shut down in the future. Since Iodide’s release, there have been many efforts at creating interactive notebook environments based on Pyodide which are in active development and offer a similar environment for creating interactive visualizations in the browser using Python.

Next steps for Pyodide

While many issues were addressed in this release, a number of other major steps remain on the roadmap, including:

  • Reducing download sizes and initialization times
  • Improving performance of Python code in Pyodide
  • Simplifying the package loading system
  • Updating SciPy to a more recent version
  • Improving project sustainability, for instance by seeking synergies with the conda-forge project and its tooling
  • Better support for web workers
  • Better support for synchronous IO (popular for programming education)

For additional information see the project roadmap.


Lots of thanks to:

  • Dexter Chua and Joe Marshall for improving the build setup and making the Emscripten migration possible.
  • Hood Chatham for in-depth improvements to the type translation module and for adding asyncio support.
  • Romain Casati for improving the Pyodide REPL console.

We are also grateful to all Pyodide contributors.

The post Pyodide Spin Out and 0.17 Release appeared first on Mozilla Hacks - the Web developer blog.

Daniel Stenberg“So what exactly is curl?”

You know that question you can get asked casually by a person you’ve never met before or even by someone you’ve known for a long time but haven’t really talked to about this before. Perhaps at a social event. Perhaps at a family dinner.

– So what do you do?

The implication is of course what you work with. Or as. Perhaps a title.

Software Engineer

In my case I typically start out by saying I’m a software engineer. (And no, I don’t use a title.)

If the person who asked the question is a non-techie, this can then take off in basically any direction: from questions about the Internet, to how their printer acts up sometimes, to finicky details about Wifi installations or their parents’ problems installing anti-virus. In other words: into areas that have virtually nothing to do with software engineering but are related to computers.

If the person is somewhat knowledgeable or interested in technology or computers they know both what software and engineering are. Then the question can get deepened.

What kind of software?

Alternatively they ask for what company I work for, but it usually ends up on the same point anyway, just via this extra step.

I work on curl. (Saying I work for wolfSSL rarely helps.)

Business cards of mine

So what is curl?

curl is a command line tool used by a small set of people (possibly several thousands or even millions), and the library libcurl that is installed in billions of places.

I often try to compare libcurl with how companies build for example cars out of many components from different manufacturers and companies. They use different pieces from many separate sources put together into a single machine to produce the end product.

libcurl is like one of those little components that a car manufacturer needs. It isn’t the only choice, but it is a well known, well tested and familiar one. It’s a safe choice.

Internet what?

I’ve realized that lots of people, even many with experience, knowledge or even jobs in the IT industry, don’t know what an Internet transfer is. Describing curl as doing such doesn’t really help in those cases.

An internet transfer is the bridge between “the cloud” and your devices or applications. curl is a bridge.

Everything wants Internet these days

In general, anything today that has power goes towards becoming networked. Everything that can, will connect to the Internet sooner or later. Maybe not always because it’s a good idea, but because it gives your thing a (perceived) advantage over your competitors.

Things that a while ago you wouldn’t dream would do that, now do Internet transfers: toothbrushes, ovens, washing machines, etc.

If you want to build a new device or application today and you want it to be successful and more popular than your competitors, you will probably have to make it Internet-connected.

You need a “bridge”.

Making things today is like doing a puzzle

Everyone who makes devices or applications today have a wide variety of different components and pieces of the big “puzzle” to select from.

You can opt to write many pieces yourself, but virtually nobody today creates anything digital entirely on their own. We lean on others. We stand on others’ shoulders. In particular, open source software has grown up to provide a vast ocean of puzzle pieces to use and leverage.

One of the little pieces in your device puzzle is probably Internet transfers, because you want your thing to get updates, upload telemetry and who knows what else.

The picture then needs a piece inserted in the right spot to get complete. The Internet transfers piece. That piece can be curl. We’ve made curl to be a good such piece.

This perfect picture is just missing one little piece…

Relying on pieces provided by others

Lots have been said about the fact that companies, organizations and entire ecosystems rely on pieces and components written, maintained and provided by someone else. Some of them are open source components written by developers in their spare time, but are still used by thousands of companies shipping commercial products.

curl is one such component. It’s not “just” a spare time project anymore of course, but the point remains. We estimate that curl runs in some ten billion installations these days, so quite a lot of current Internet infrastructure uses our little puzzle piece in their pictures.

Modified version of the original xkcd 2347 comic

So you’re rich

I rarely get to this point in any conversation because I would have already bored my company into a coma by now.

Giving away a component like this as open source under a liberal license is a very strange concept to most people. Maybe also because I say that I work on this and I created it, but I’m not at all the only contributor, and we wouldn’t have gotten to this point without the help of several hundred other developers.

“- No, I give it away for free. Yes really, entirely and totally free for anyone and everyone to use. Correct, even the largest and richest mega-corporations of the world.”

The ten billion installations work as marketing for getting companies to understand that curl is a solid puzzle piece, so that more will use it, and some of those will end up discovering they need help or assistance and will purchase support for curl from me!

I’m not rich, but I do perfectly fine. I consider myself very lucky and fortunate to get to work on curl for a living.

A curl world

There are about 5 billion Internet-using humans in the world. There are about 10 billion curl installations.

The puzzle piece curl is there in the middle.

This is how they’re connected. This is the curl world map 2021.

Or put briefly

libcurl is a library for doing transfers specified with a URL, using one of the supported protocols. It is fast, reliable, very portable, well documented and feature rich. A de-facto standard API available for everyone.


The original island image is by Julius Silver from Pixabay. xkcd strip edits were done by @tsjost.

Mozilla Open Policy & Advocacy BlogMozilla reacts to publication of EU’s draft regulation on AI

Today, the European Commission published its draft for a regulatory framework for artificial intelligence (AI). The proposal lays out comprehensive new rules for AI systems deployed in the EU. Mozilla welcomes the initiative to rein in the potential harms caused by AI, but much remains to be clarified.

Reacting to the European Commission’s proposal, Raegan MacDonald, Mozilla’s Director of Global Public Policy, said: 

“AI is a transformational technology that has the potential to create value and enable progress in so many ways, but we cannot lose sight of the real harms that can come if we fail to protect the rights and safety of people living in the EU. Mozilla is committed to ensuring that AI is trustworthy, that it helps people instead of harming them. The European Commission’s push to set ground rules is a step in the right direction and it is good to see that several of our recommendations to the Commission are reflected in the proposal – but there is more work to be done to ensure these principles can be meaningfully implemented, as some of the safeguards and red lines envisioned in the text leave a lot to be desired.

Systemic transparency is a critical enabler of accountability, which is crucial to advancing more trustworthy AI. We are therefore encouraged by the introduction of user-facing transparency obligations – for example for chatbots or so-called deepfakes – as well as a public register for high-risk AI systems in the European Commission’s proposal. But as always, details matter, and it will be important what information exactly this database will encompass. We look forward to contributing to this important debate.”


The post Mozilla reacts to publication of EU’s draft regulation on AI appeared first on Open Policy & Advocacy.

Cameron KaiserColoured iMacs? We got your coloured iMacs right here

And you don't even need to wait until May. Besides being the best colour Apple ever offered (a tray-loading Strawberry, which is nicer than the current M1 iMac Pink), this iMac G3 also has a 600MHz Sonnet HARMONi in it, so it has a faster CPU and FireWire too. Take that, non-upgradable Apple Silicon. It runs Jaguar with OmniWeb and Crypto Ancienne for web browsing.

Plus, these coloured iMacs can build and run TenFourFox: Chris T proved it on his 400MHz G3. It took 34 hours to compile from source. I always did like slow-cooked meals better.

This Week In RustThis Week in Rust 387

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No papers/research projects this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is deltoid, another crate for delta-compressing Rust data structures.

Thanks to Joey Ezechiëls for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation this week

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

292 pull requests were merged in the last week

Rust Compiler Performance Triage

Another quiet week with very small changes to compiler performance.

Triage done by @rylev. Revision range: 5258a74..6df26f

1 Regressions, 0 Improvements, 1 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Grover GmbH

Massa Labs


Subspace Labs



Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We feel that Rust is now ready to join C as a practical language for implementing the [Linux] kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics.

Wedson Almeida Filho on the Google Security Blog

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Talospace ProjectFirefox 88 on POWER

Firefox 88 is out. In addition to a bunch of new CSS properties, JavaScript is now supported in PDF files even within Firefox's own viewer, meaning there is no escape, and FTP is disabled, meaning you will need to use 78ESR (though you get two more weeks of ESR as a reprieve, since Firefox 89 has been delayed to allow UI code to further settle). I've long pondered doing a generic "cURL extension" that would reenable all sorts of protocols through a shim to either curl or libcurl; maybe it's time for it.

Fortunately, Fx88 builds uneventfully as usual on OpenPOWER, though our PGO-LTO patches (apply to the tree with patch -p1) required a slight tweak to nsTerminator.cpp. Debug and optimized .mozconfigs are unchanged.

Also, an early milestone in the Firefox JavaScript JIT for OpenPOWER: Justin Hibbits merged my earlier jitpower work to a later tree (right now based on Firefox 86) and filled in the gaps with code from TenFourFox, and after some polishing up I did over the weekend, a JIT-enabled JavaScript shell now compiles on Fedora ppc64le. However, it immediately asserts, probably due to some missing definitions for register sets, and I'm sure there are many other assertions and lurking bugs to be fixed, but this is much further along than before. The fork is on Github for others who wish to contribute; I will probably decommission the old jitpower project soon since it is now superfluous. More to come.

The Mozilla BlogMark Surman joins the Mozilla Foundation Board of Directors

In early 2020, I outlined our efforts to expand Mozilla’s boards. Over the past year, we’ve added three new external Mozilla board members: Navrina Singh and Wambui Kinya to the Mozilla Foundation board and Laura Chambers to the Mozilla Corporation board.

Today, I’m excited to welcome Mark Surman, Executive Director of the Mozilla Foundation, to the Foundation board.

As I said to staff prior to his appointment, when I think about who should hold the keys to Mozilla, Mark is high on that list. Mark has unique qualifications in terms of the overall direction of Mozilla, how our organizations interoperate, and if and how we create programs, structures or organizations. Mark is joining the Mozilla Foundation board as an individual based on these qualifications; we have not made the decision that the Executive Director is automatically a member of the Board.

Mark has demonstrated his commitment to Mozilla as a whole, over and over. The whole of Mozilla figures into his strategic thinking. He’s got a good sense of how Mozilla Foundation and Mozilla Corporation can magnify or reduce the effectiveness of Mozilla overall. Mark has a hunger for Mozilla to grow in impact. He has demonstrated an ability to think big, and to dive into the work that is in front of us today.

For those of you who don’t know Mark already, he brings over two decades of experience leading projects and organizations focused on the public interest side of the internet. In the 12 years since Mark joined Mozilla, he has built the Foundation into a leading philanthropic and advocacy voice championing the health of the internet. Prior to Mozilla, Mark spent 15 years working on everything from a non-profit internet provider to an early open source content management system to a global network of community-run cybercafes. Currently, Mark spends most of his time on Mozilla’s efforts to promote trustworthy AI in the tech industry, a major focus of the Foundation’s current efforts.

Please join me in welcoming Mark Surman to the Mozilla Foundation Board of Directors.

You can read Mark’s message about why he’s joining Mozilla here.

PS. As always, we continue to look for new members for both boards, with the goal of adding the skills, networks and diversity Mozilla will need to succeed in the future.


The post Mark Surman joins the Mozilla Foundation Board of Directors appeared first on The Mozilla Blog.

The Mozilla BlogWearing more (Mozilla) hats

Mark Surman

For many years now — and well before I sought out the job I have today — I thought: the world needs more organizations like Mozilla. Given the state of the internet, it needs them now. And, it will likely need them for a very long time to come.

Why? In part because the internet was founded with public benefit in mind. And, as the Mozilla Manifesto declared back in 2007, “… (m)agnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.”

Today, this sort of ‘time and attention’ is more important — and urgent — than ever. We live in an era where the biggest companies in the world are internet companies. Much of what they have created is good, even delightful. Yet, as the last few years have shown, leaving things to commercial actors alone can leave the internet — and society — in a bit of a mess. We need organizations like Mozilla — and many more like it — if we are to find our way out of this mess. And we need these organizations to think big!

It’s for this reason that I’m excited to add another ‘hat’ to my work: I am joining the Mozilla Foundation board today. This is something I will take on in addition to my role as executive director.

Why am I assuming this additional role? I believe Mozilla can play a bigger role in the world than it does today. And, I also believe we can inspire and support the growth of more organizations that share Mozilla’s commitment to the public benefit side of the internet. Wearing a board member hat — and working with other Foundation and Corporation board members — I will be in a better position to turn more of my attention to Mozilla’s long term impact and sustainability.

What does this mean in practice? It means spending some of my time on big picture ‘Pan Mozilla’ questions. How can Mozilla connect to more startups, developers, designers and activists who are trying to build a better, more humane internet? What might Mozilla develop or do to support these people? How can we work with policy makers who are trying to write regulations to ensure the internet benefits the public interest? And, how do we shift our attention and resources outside of the US and Europe, where we have traditionally focused? While I don’t have answers to all these questions, I do know we urgently need to ask them — and that we need to do so in an expansive way that goes beyond the current scope of our operating organizations. That’s something I’ll be well positioned to do wearing my new board member hat.

Of course, I still have much to do wearing my executive director hat. We set out a few years ago to evolve the Foundation into a ‘movement building arm’ for Mozilla. Concretely, this has meant building up teams with skills in philanthropy and advocacy who can rally more people around the cause of a healthy internet. And, it has meant picking a topic to focus on: trustworthy AI. Our movement building approach — and our trustworthy AI agenda — is getting traction. Yet, there is still a way to go to unlock the kind of sustained action and impact that we want. Leading the day to day side of this work remains my main focus at Mozilla.

As I said at the start of this post: I think the world will need organizations like Mozilla for a long time to come. As all corners of our lives become digital, we will increasingly need to stand firm for public interest principles like keeping the internet open and accessible to all. While we can all do this as individuals, we also need strong, long lasting organizations that can take this stand in many places and over many decades. Whatever hat I’m wearing, I continue to be deeply committed to building Mozilla into a vast, diverse and sustainable institution to do exactly this.

The post Wearing more (Mozilla) hats appeared first on The Mozilla Blog.

Karl DubostGet Ready For Three Digits User Agent Strings

In 2022, Firefox and Chrome will reach a version number with three digits: 100. It's time to get ready and extensively test your code, so your code doesn't return null, or worse, 10 instead of 100.


Some context

The browser user agent string is used in many circumstances: on the server side with the User-Agent HTTP header, and on the client side with navigator.userAgent. Browsers lie about it. Web app and website detection does not cover all cases. So browsers have to modify the user agent string on a site-by-site basis.

Browsers Release Calendar

According to the Firefox release calendar, during the first quarter of 2022 (probably February), Firefox will reach version 100.

And the Chrome release calendar sets March 29, 2022 as the current date for Chrome 100.

What Is the Mozilla Webcompat Team Doing?

Dennis Schubert started to test JavaScript libraries, but this tests only the libraries which are up to date. And we know it: the Web is a legacy machine full of history.

The webcompat team will probably automatically test the top 1000 websites. But this is very rudimentary. It will not cover everything. Sites always break in strange ways.

What Can You Do To Help?

Browse the Web with a 100 UA string

  1. Change the user agent string of your favorite browser. For example, if the string is Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0, change it to be Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
  2. If you notice something that is breaking because of the UA string, file a report on webcompat. Do not forget to check that it is working with the normal UA string.

Automatic tests for your code

If your web app has a JavaScript Test suite, add a profile with a browser having 100 for its version number and check if it breaks. Test both Firefox and Chrome (mobile and desktop) because the libraries have different code paths depending on the user agent. Watch out for code like:

const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d\d)/); //  ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d{2})/); // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d\d)\./); //  null

Compare version numbers as integers, not strings

Compare integers, not strings, once you have decided on a minimum version for supporting a browser, because:

"80" < "99" // true
"80" < "100" // false
parseInt("80", 10) < parseInt("99", 10) // true
parseInt("80", 10) < parseInt("100", 10) // true


If you have more questions, things I may have missed, or a different take on them, feel free to comment…. Be mindful.


Mozilla Open Policy & Advocacy BlogMozilla Mornings on the DSA: Setting the standard for third-party platform auditing

On 11 May, Mozilla will host the next instalment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This instalment will focus on the DSA’s provisions on third-party platform auditing, one of the stand-out features of its next-generation regulatory approach. We’re bringing together a panel of experts to unpack the provisions’ strengths and shortcomings; and to provide recommendations for how the DSA can build a standard-setting auditing regime for Very Large Online Platforms.


Alexandra Geese MEP
IMCO DSA shadow rapporteur
Group of the Greens/European Free Alliance

Deborah Raji
Fellow | Research Collaborator
Mozilla Foundation | Algorithmic Justice League

Dr Ben Wagner
Assistant Professor, Faculty of Technology, Policy and Management
TU Delft  

With opening remarks by Owen Bennett
Senior Policy Manager
Mozilla Corporation

Moderated by Jennifer Baker
EU technology journalist


Logistical details

Tuesday 11 May, 14:00 – 15:00 CEST

Zoom Webinar

Register *here*

Webinar login details to be shared on day of event

The post Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing appeared first on Open Policy & Advocacy.

Hacks.Mozilla.OrgNever too late for Firefox 88

April is upon us, and we have a most timely release for you — Firefox 88. In this release you will find a bunch of nice CSS additions including :user-valid and :user-invalid support and image-set() support, support for regular expression match indices, removal of FTP protocol support for enhanced security, and more!

This blog post provides merely a set of highlights; for all the details, check out the following:

:user-valid and :user-invalid

There are a large number of HTML form-related pseudo-classes that allow us to specify styles for various data validity states, as you’ll see in our UI pseudo-classes tutorial. Firefox 88 introduces two more — :user-valid and :user-invalid.

You might be thinking “we already have :valid and :invalid for styling forms containing valid or invalid data — what’s the difference here?”

:user-valid and :user-invalid are similar, but have been designed with better user experience in mind. They effectively do the same thing — matching a form input that contains valid or invalid data — but :user-valid and :user-invalid only start matching after the user has stopped focusing on the element (e.g. by tabbing to the next input). This is a subtle but useful change, which we will now demonstrate.

Take our valid-invalid.html example. This uses the following CSS to provide clear indicators as to which fields contain valid and invalid data:

input:invalid {
  border: 2px solid red;
}

input:invalid + span::before {
  content: '✖';
  color: red;
}

input:valid + span::before {
  content: '✓';
  color: green;
}

The problem with this is shown when you try to enter data into the “E-mail address” field — as soon as you start typing an email address into the field the invalid styling kicks in, and remains right up until the point where the entered text constitutes a valid e-mail address. This experience can be a bit jarring, making the user think they are doing something wrong when they aren’t.

Now consider our user-valid-invalid.html example. This includes nearly the same CSS, except that it uses the newer :user-valid and :user-invalid pseudo-classes:

input:user-invalid {
  border: 2px solid red;
}

input:user-invalid + span::before {
  content: '✖';
  color: red;
}

input:user-valid + span::before {
  content: '✓';
  color: green;
}

In this example the valid/invalid styling only kicks in when the user has entered their value and removed focus from the input, giving them a chance to enter their complete value before receiving feedback. Much better!

Note: Prior to Firefox 88, the same effect could be achieved using the proprietary :-moz-ui-invalid and :-moz-ui-valid pseudo-classes.

image-set() support

The image-set() function provides a mechanism in CSS to allow the browser to pick the most suitable image for the device’s resolution from a list of options, in a similar manner to the HTML srcset attribute. For example, the following can be used to provide multiple background-images to choose from:

div {
  background-image: image-set(
    url("small-balloons.jpg") 1x,
    url("large-balloons.jpg") 2x);
}

You can also use image-set() as a value for the content and cursor properties. So for example, you could provide multiple resolutions for generated content:

h2::before {
  content: image-set(
    url("small-icon.jpg") 1x,
    url("large-icon.jpg") 2x);
}

or custom cursors:

div {
  cursor: image-set(
    url("custom-cursor-small.png") 1x,
    url("custom-cursor-large.png") 2x),
    auto;
}

outline now follows border-radius shape

The outline CSS property has been updated so that it now follows the outline shape created by border-radius. It is really nice to see a fix included in Firefox for this long-standing problem. As part of this work, the non-standard -moz-outline-radius property has been removed.

RegExp match indices

Firefox 88 supports the match indices feature of regular expressions, which makes an indices property available containing an array that stores the start and end positions of each matched capture group. This functionality is enabled using the d flag.

There is also a corresponding hasIndices boolean property that allows you to check whether a regex has this mode enabled.

So for example:

const regex1 = new RegExp('foo', 'd');
regex1.hasIndices // true
const test = regex1.exec('foo bar');
test // [ "foo" ]
test.indices // [ [ 0, 3 ] ]
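Capture groups get their own entries in indices as well. A small sketch (not from the original post), assuming a runtime with ES2022 match-indices support:

```javascript
// The 'd' flag records start/end positions for the full match,
// each numbered capture group, and each named capture group.
const regex2 = /(?<word>foo) (bar)/d;
const match = regex2.exec('foo bar');

console.log(match.indices[0]);          // [ 0, 7 ] — the full match "foo bar"
console.log(match.indices[1]);          // [ 0, 3 ] — first group, "foo"
console.log(match.indices[2]);          // [ 4, 7 ] — second group, "bar"
console.log(match.indices.groups.word); // [ 0, 3 ] — the named group "word"
```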

For more useful information, see our RegExp.prototype.exec() page, and RegExp match indices on the V8 dev blog.

FTP support disabled

FTP support has been disabled from Firefox 88 onwards, and its full removal is (currently) planned for Firefox version 90. Addressing this security risk reduces the likelihood of an attack while also removing support for a non-encrypted protocol.

Complementing this change, the extension setting browserSettings.ftpProtocolEnabled has been made read-only, and web extensions can now register themselves as protocol handlers for FTP.

The post Never too late for Firefox 88 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogChanges to themeable areas of Firefox in version 89

Firefox’s visual appearance will be updated in version 89 to provide a cleaner, modernized interface. Since some of the changes will affect themeable areas of the browser, we wanted to give theme artists a preview of what to expect as the appearance of their themes may change when applied to version 89.

Tabs appearance

  • The property tab_background_separator, which controls the appearance of the vertical lines that separate tabs, will no longer be supported.
  • Currently, the tab_line property can set the color of an active tab’s thick top border. In Firefox 89, this property will set a color for all borders of an active tab, and the borders will be thinner.

URL and toolbar

  • The property toolbar_field_separator, which controls the color of the vertical line that separates the URL bar from the three-dot “meatball menu,” will no longer be supported.

  • The property toolbar_vertical_separator, which controls the vertical lines near the three-line “hamburger menu” and the line separating items in the bookmarks toolbar, will no longer appear next to the hamburger menu. You can still use this property to control the separators in the bookmarks toolbar.  (Note: users will need to enable the separator by right clicking on the bookmarks toolbar and selecting “Add Separator.”)

You can use the Nightly pre-release channel to start testing how your themes will look with Firefox 89. If you’d like to get more involved testing other changes planned for this release, please check out our foxfooding campaign, which runs until May 3, 2021.

Firefox 89 is currently scheduled to be available on the Beta pre-release channel by April 23, 2021, and released on June 1, 2021.

As always, please post on our community forum if there are any questions.

The post Changes to themeable areas of Firefox in version 89 appeared first on Mozilla Add-ons Blog.

Daniel StenbergMars 2020 Helicopter Contributor

Friends of mine know that I’ve tried for a long time to get confirmation that curl is used in space. We’ve believed it to be likely but I’ve wanted to get a clear confirmation that this is indeed the fact.

Today GitHub posted their article about open source in the Mars mission, and they now provide a badge on their site for contributors of projects that are used in that mission.

I have one of those badges now. Only a few others of the current 879 recorded curl authors got it, which seems to be due to the mission using a very old curl release (curl 7.19, released in September 2008), and because not all contributors could be matched to email addresses, or the authors didn't have their emails verified on GitHub.

According to that GitHub blog post, we are “almost 12,000” developers who got it.

While this strictly speaking doesn’t say that curl is actually used in space, I think it can probably be assumed to be.

Here’s the interplanetary curl development displayed in a single graph:

See also: screenshotted curl credits and curl supports NASA.


Image by Aynur Zakirov from Pixabay

Mozilla Security BlogFirefox 88 combats privacy abuses

We are pleased to announce that Firefox 88 is introducing a new protection against privacy leaks on the web. Under new limitations imposed by Firefox, trackers are no longer able to abuse the window.name property to track users across websites.

Since the late 1990s, web browsers have made the window.name property available to web pages as a place to store data. Unfortunately, data stored in window.name has been allowed by standard browser rules to leak between websites, enabling trackers to identify users or snoop on their browsing history. To close this leak, Firefox now confines the window.name property to the website that created it.

Leaking data through window.name

The name property of a window allows the window to be targeted by hyperlinks or forms as a navigation target. The window.name property, available to any website you visit, is a “bucket” for storing any data the website may choose to place there. Historically, the data stored in window.name has been exempt from the same-origin policy enforced by browsers that prohibited some forms of data sharing between websites. Unfortunately, this meant that data stored in the window.name property was allowed by all major browsers to persist across page visits in the same tab, allowing different websites you visit to share data about you.

For example, suppose a page on one website set the window.name property to an identifying value. Traditionally, this information would persist even after you clicked on a link and navigated to a different website, so the page on that second website would be able to read the information without your knowledge or consent.

window.name persists across the cross-origin navigation.

Tracking companies have been abusing this property to leak information, and have effectively turned it into a communication channel for transporting data between websites. Worse, malicious sites have been able to observe the content of to gather private user data that was inadvertently leaked by another website.

Clearing window.name to prevent leakage

To prevent the potential privacy leakage of window.name, Firefox will now clear the property when you navigate between websites. Here’s how it looks:

Firefox 88 clearing window.name after cross-origin navigation.

Firefox will attempt to identify likely non-harmful usage of window.name and avoid clearing the property in such cases. Specifically, Firefox only clears window.name if the link being clicked does not open a pop-up window.

To avoid unnecessary breakage, if a user navigates back to a previous website, Firefox now restores the window.name property to its previous value for that website. Together, these dual rules for clearing and restoring window.name data effectively confine that data to the website where it was originally created, similar to how Firefox’s Total Cookie Protection confines cookies to the website where they were created. This confinement is essential for preventing malicious sites from abusing window.name to gather users’ personal data.

Firefox isn’t alone in making this change: web developers relying on window.name should note that Safari is also clearing the property, and Chromium-based browsers are planning to do so. Going forward, developers should expect window.name clearing to be the new standard way that browsers handle window.name.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. As soon as your Firefox auto-updates to version 88, the new default data confinement will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect your privacy.

The post Firefox 88 combats privacy abuses appeared first on Mozilla Security Blog.

Daniel Stenbergcurl those funny IPv4 addresses

Everyone knows that on most systems you can specify IPv4 addresses as just 4 decimal numbers separated with periods (dots). Example:

Useful when, for example, you want to ping your local wifi router and similar. “ping”

Other bases

The IPv4 string is usually parsed by the inet_addr() function or at times it is passed straight into the name resolver function like getaddrinfo().

This address parser supports more ways to specify the address. You can for example specify each number using either octal or hexadecimal.

Write the numbers with zero-prefixes to have them interpreted as octal numbers:


Write them with 0x-prefixes to specify them in hexadecimal:


You will find that ping can deal with all of these.

As a 32 bit number

An IPv4 address is a 32 bit number that, when written as 4 separate numbers, is split into 4 parts with 8 bits represented in each number. Each separate number in “a.b.c.d” is 8 bits that combined make up the whole 32 bits. Sometimes the four parts are called quads.

The typical IPv4 address parser however handles more ways than just the 4-way split. It can also deal with the address when specified as one, two or three numbers (separated with dots unless it’s just one).

If given as a single number, it treats it as a single unsigned 32 bit number. The top-most eight bits store what we “normally” write as the first number, and so on. The address shown above, if we keep it as hexadecimal, would then become:


And you can of course write it in octal as well:


and plain old decimal:


As two numbers

If you instead write the IP address as two numbers with a dot in between, the first number is assumed to be 8 bits and the next one a 24 bit one. And you can keep on mixing the bases as you like. The same address again, now in a hexadecimal + octal combo:


This allows for some fun shortcuts when the 24 bit number contains a lot of zeroes. For example, you can shorten “” to just “127.1” and it still works and is perfectly legal.
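The shortcut works because the second number simply fills the low 24 bits. A quick illustrative sketch of the arithmetic:

```javascript
// Two-number form "a.b": a fills the top 8 bits, b the remaining 24 bits.
function twoPartToUint32(a, b) {
  return a * 2 ** 24 + b;
}

console.log(twoPartToUint32(127, 1)); // 2130706433, i.e.
```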

As three numbers

Now the parts are supposed to be split up in bits like this: 8.8.16. Here’s the example address again in octal, hex and decimal:


Bypassing filters

All of these versions shown above work with most tools that accept IPv4 addresses and sometimes you can bypass filters and protection systems by switching to another format so that you don’t match the filters. It has previously caused problems in node and perl packages and I’m guessing numerous others. It’s a feature that is often forgotten, ignored or just not known.

This raises the question of why this very liberal support was once added and allowed, but I’ve not been able to figure that out – maybe because of how it matches class A/B/C networks. The support for this syntax seems to have been introduced with the inet_aton() function in the 4.2BSD release in 1983.

IPv4 in URLs

URLs have a host name in them and it can be specified as an IPv4 address.

RFC 3986

The RFC 3986 URL specification’s section 3.2.2 says an IPv4 address must be specified as:

dec-octet "." dec-octet "." dec-octet "." dec-octet

… but in reality very few clients that accept such URLs actually restrict the addresses to that format. I believe mostly because many programs will pass on the host name to a name resolving function that itself will handle the other formats.


The WHATWG URL Spec

The Host Parsing section of this spec allows the many variations of IPv4 addresses. (If you’re anything like me, you might need to read that spec section about three times or so before that’s clear).

Since the browsers all follow this spec, it is no surprise that they all allow this kind of IP number in the URLs they handle.

curl before

curl has traditionally been in the camp that mostly accidentally somewhat supported the “flexible” IPv4 address formats. It did this because if you built curl to use the system resolver functions (which it does by default) those system functions will handle these formats for curl. If curl was built to use c-ares (which is one of curl’s optional name resolver backends), using such address formats just made the transfer fail.

The drawback with allowing the system resolver functions to deal with the formats is that curl itself then works with the original formatted host name so things like HTTPS server certificate verification and sending Host: headers in HTTP don’t really work the way you’d want.

curl now

Starting in curl 7.77.0 (since this commit) curl will “natively” understand these IPv4 formats and normalize them itself.

There are several benefits of doing this ourselves:

  1. Applications using the URL API will get the normalized host name out.
  2. curl will work the same independently of the selected name resolver backend.
  3. HTTPS works fine even when the address is using other formats.
  4. HTTP virtual host headers get the “correct” formatted host name.

Fun example command line to see if it works:

curl -L 16843009

16843009 gets normalized to, which then gets used as (because curl will assume HTTP for this URL when no scheme is given), which returns a 301 redirect over to, which -L makes curl follow…


Image from Pixabay

Niko MatsakisAsync Vision Doc Writing Sessions VI

Ryan Levick and I are going to be hosting more Async Vision Doc Writing Sessions this week. We’re not organized enough to have assigned topics yet, so I’m just going to post the dates/times and we’ll be tweeting about the particular topics as we go.

When Who
Wed at 07:00 ET Ryan
Wed at 15:00 ET Niko
Fri at 07:00 ET Ryan
Fri at 14:00 ET Niko

If you’ve joined before, we’ll be re-using the same Zoom link. If you haven’t joined, then send a private message to one of us and we’ll share the link. Hope to see you there!

Cameron KaiserTenFourFox FPR32 available, plus a two-week reprieve

TenFourFox Feature Parity Release 32 final is now available for testing (downloads, hashes, release notes). This adds an additional entry to the ATSUI font blocklist and completes the outstanding security patches. Assuming no issues, it will go live as the final FPR on or about April 19.

Mozilla is delaying Firefox 89 by two weeks to give them additional time to polish up the UI changes in that version. This will thus push all future release dates back by two weeks as well; the next ESR release and the first Security Parity Release parallel with it will instead be scheduled for June 1. Aligning with this, the testing version of FPR32 SPR1 will come out the weekend before June 1 and the final official build of TenFourFox will also move back two weeks, from September 7 to September 21. After that you'll have to DIY, but fortunately it already looks like people are rising to the challenge of building the browser themselves: I have been pointed to an installer which neatly wraps up all the necessary build prerequisites, provides a guided Automator workflow and won't interfere with any existing installation of MacPorts. I don't have anything to do with this effort and can't attest to or advise on its use, but it's nice to see it exists, so download it from Macintosh Garden if you want to try it out. Remember, compilation speed on G4 (and, shudder, G3) systems can be substantially slower than on a G5, and especially without multiple CPUs. Given that this Quad G5 running full tilt (three cores dedicated to compiling) with a full 16GB of RAM takes about three and a half hours to kick out a single-architecture build, you should plan accordingly for longer times on lesser systems.

I have already started clearing issues from Github I don't intend to address. The remaining issues may not necessarily be addressed either, and definitely won't be during the security parity period, but they are considerations for things I might need later. Don't add to this list: I will mark new issues without patches or PRs as invalid. I will also be working on revised documentation for Tenderapp and the main site so people are aware of the forthcoming plan; those changes will be posted sometime this coming week.

Hacks.Mozilla.OrgQUIC and HTTP/3 Support now in Firefox Nightly and Beta

tl;dr: Support for QUIC and HTTP/3 is now enabled by default in Firefox Nightly and Firefox Beta. We are planning to start rollout on the release channel in Firefox Stable Release 88. HTTP/3 will be available by default by the end of May.

What is HTTP/3?

HTTP/3 is a new version of HTTP (the protocol that powers the Web) that is based on QUIC. HTTP/3 has three main performance improvements over HTTP/2:

  • Because it is based on UDP it takes less time to connect;
  • It does not have head of line blocking, where delays in delivering packets cause an entire connection to be delayed; and
  • It is better able to detect and repair packet loss.

QUIC also provides connection migration and other features that should improve performance and reliability. For more on QUIC, see this excellent blog post from Cloudflare.

How to use it?

Firefox Nightly and Firefox Beta will automatically try to use HTTP/3 if offered by the Web server (for instance, Google or Facebook). Web servers can indicate support by using the Alt-Svc response header or by advertising HTTP/3 support with a HTTPS DNS record. Both the client and server must support the same QUIC and HTTP/3 draft version to connect with each other. For example, Firefox currently supports drafts 27 to 32 of the specification, so the server must report support of one of these versions (e.g., “h3-32”) in Alt-Svc or HTTPS record for Firefox to try to use QUIC and HTTP/3 with that server. When visiting such a website, viewing the network request information in Dev Tools should show the Alt-Svc header, and also indicate that HTTP/3 was used.
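For example, a server advertising HTTP/3 draft 32 on UDP port 443 might send a response header like the following (values are illustrative; ma is the advertisement’s cache lifetime in seconds):

```http
Alt-Svc: h3-32=":443"; ma=86400
```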

If you encounter issues with these or other sites, please file a bug in Bugzilla.

The post QUIC and HTTP/3 Support now in Firefox Nightly and Beta appeared first on Mozilla Hacks - the Web developer blog.

About:CommunityNew Contributors To Firefox

With Firefox 88 in flight, we are pleased to welcome the long list of developers who’ve contributed their first code change to Firefox in this release, 24 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla Localization (L10N)L10n Report: April 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Cebuano (ceb)
  • Hiligaynon (hil)
  • Meiteilon (mni)
  • Papiamento (pap-AW)
  • Shilha (shi)
  • Somali (so)
  • Uyghur (ug)

Update on the communication channels

On April 3rd, as part of a broader strategy change at Mozilla, we moved our existing mailing lists (dev-l10n, dev-l10n-web, dev-l10n-new-locales) to Discourse. If you are involved in localization, please make sure to create an account on Discourse and set up your profile to receive notifications when there are new messages in the Localization category.

We also decided to shut down our existing Telegram channel dedicated to localization. This was originally created to fill a gap, given its broad availability on mobile, and the steep entry barrier required to use IRC. In the meantime, IRC has been replaced by Matrix, which offers a much better experience on mobile platforms. Please make sure to check out the dedicated Wiki page with instructions on how to connect, and join our #l10n-community room.

New content and projects

What’s new or coming up in Firefox desktop

For all localizers working on Firefox, there is now a Firefox L10n Newsletter, including all information regarding the next major release of Firefox (89, aka MR1). Here you can find the latest issue, and you can also subscribe to this thread in discourse to receive a message every time there’s an update.

One important update is that the Firefox 89 cycle will last 2 extra weeks in Beta. These are the important deadlines:

  • Firefox 89 will move from Nightly to Beta on April 19 (unchanged).
  • It will be possible to update localizations for Firefox 89 until May 23 (previously May 9).
  • Firefox 89 will be released on June 1.

As a consequence, the Nightly cycle for Firefox 90 will also be two weeks longer.

What’s new or coming up in mobile

Like Firefox desktop, Firefox for iOS and Firefox for Android are still on the road to the MR1 release. I’ve published some details on Discourse here. Dates and info are still relevant, nothing changes in terms of l10n.

All strings for Firefox for iOS should already have landed.

Most strings for Firefox for Android should have landed.

What’s new or coming up in web projects


The Voice Fill and Firefox Voice Beta extensions are being retired.

Common Voice:

The project is transitioning to Mozilla Foundation. The announcement was made earlier this week. Some of the Mozilla staff who worked closely with the project will continue working on it in their new roles. The web part, the part that contributes to the site localization, will remain in Pontoon.

Firefox Accounts:

Beta was launched on March 17. The sprint cycle is now aligned with Firefox Nightly moving forward. The next code push will be on April 21. The cutoff to include localized strings is a week earlier than the code push date.


  • All locales are disabled with the exception of fr, ja, zh-CN and zh-TW. There is a blog post on this decision. The team may add back more languages later. If that does happen, the attributions for the work done by community members will be retained in Pontoon. Nothing will be lost.
  • Migration from .lang to .ftl has completed. The strings containing brand and product names that were not converted properly will appear as warnings and will not be shown on the production site. Please resolve these issues as soon as possible.
  • A select few locales are chosen to be supported by vendor service: ar, hi-IN, id, ja, and ms. The community managers were contacted about this change. The website should be fully localized in these languages by the first week of May. For more details on this change and for ways to report translation issues, please check out the announcement on Discourse.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Jan-Erik RedigerThis Week in Glean: rustc, iOS and an M1

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

Back in February I got an M1 MacBook. That's Apple's new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here's what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well

Rust on an M1

Work on getting Rust compiled on M1 hardware started last year in June already, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 2[1] with the December release of Rust 1.49.0.

By the time I got my MacBook compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

Glean Python & Kotlin on an M1

Glean Python just ... worked. We use cffi to load the native library into Python. It gained aarch64[2] macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it's both possible to develop & test Glean Python, as well as use it as a dependency without having a full Rust development environment around.

Glean Android is not that straightforward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite and of course there's not much support behind updating those Java libraries. It's possible. A friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It's as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly... The default Android emulator isn't usable on M1 hardware yet, but Google is working on a compatible one: Android M1 emulator preview. It's usable enough for some testing, but for that part I most often switch back to my Linux Desktop (that has the additional CPU power on top).

Glean iOS on an M1

Now we're getting to the interesting part: Native iOS development on an M1. Obviously for Apple this is a priority: Their new machines should become the main machine people do iOS development on. Thus Xcode gained aarch64 support in version 12 long before the hardware was available. That caused quite some issues with existing tooling, such as the dependency manager Carthage. Here's the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because ... iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host's CPU (that's what makes it fast)

So when the compiler saw x86_64 and iOS it knew "Ah, simulator target" and when it saw aarch64 and ios it knew "Ah, hardware". And everyone went with this, Xcode happily built both targets and, if asked to, was able to bundle them into one package.

With the introduction of Apple Silicon[3], the iOS simulator running on these machines would also be aarch64[4], and also contain ios, but not be for the iOS hardware.

Now Xcode and the compiler will get confused what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows from where to pick the libraries and other SDK files when using that target. You still can't put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary[5], because you can have each architecture only once (the arm64 part in there). That's what Carthage and others stumbled over.

Again Apple prepared for that, and for a long time they have wanted you to use XCFramework bundles[6]. Carthage just didn't previously support that. The 0.37.0 release fixed that.

That still leaves Rust behind, as it doesn't know the new -simulator target. But as always the Rust community is ahead of the game and deg4uss3r started adding a new target in Rust PR #81966. He got halfway there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severely underdocumented, so I had to go the trial-and-error route in combination with looking at LLVM source code to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Contrary to the main aarch64-apple-darwin or aarch64-apple-ios target, the simulator target is not Tier 2 yet and thus no prebuilt support is available. rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it's currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I'm able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim

For now Glean iOS development on an M1 is possible, but requires Nightly. Goal achieved, I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We're not there yet and there's currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week's Rust Linz meetup, where I will be talking about this.



1: See Platform Support for what the Tiers mean.
2: The other name for that target.
3: "Apple Silicon" is yet another name for what is essentially the same as "M1" or "macOS aarch64".
4: Or arm64 for that matter. Yes, yet another name for the same thing.
5: "Universal Binaries" have existed for a long time now and allow for one binary to include the compiled artifacts for multiple targets. It's how there's only one Firefox for Mac download which runs natively on either Mac platform.
6: Yup, the main documentation they link to is a WWDC 2019 talk recording video.

Data@MozillaThis Week in Glean: rustc, iOS and an M1

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

Back in February I got an M1 MacBook. That’s Apple’s new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here’s what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well

Rust on an M1

Work on getting Rust compiled on M1 hardware started last year in June already, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 21 with the December release of Rust 1.49.0.

By the time I got my MacBook compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin
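A quick way to sanity-check what a cross-compile actually produced is to ask the artifact itself. A sketch, where the library name libglean_ffi.dylib is hypothetical; substitute whatever your crate builds:

```shell
# "file" reports the architecture baked into the artifact:
file target/aarch64-apple-darwin/debug/libglean_ffi.dylib

# on macOS, "lipo" lists the architectures a (possibly fat) binary contains:
lipo -info target/aarch64-apple-darwin/debug/libglean_ffi.dylib
```

Both should mention arm64 for the aarch64-apple-darwin target.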

Glean Python & Kotlin on an M1

Glean Python just … worked. We use cffi to load the native library into Python. It gained aarch64[2] macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it’s both possible to develop & test Glean Python, as well as use it as a dependency without having a full Rust development environment around.

Glean Android is not that straightforward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite and of course there’s not much support behind updating those Java libraries. It’s possible. A friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It’s as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly… The default Android emulator isn’t usable on M1 hardware yet, but Google is working on a compatible one: Android M1 emulator preview. It’s usable enough for some testing, but for that part I most often switch back to my Linux desktop (which has the additional CPU power on top).
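If you are unsure which world a given shell is running in, uname tells you. A small sketch:

```shell
# natively on an M1 this prints "arm64":
uname -m

# the same command inside an x86_64 (Rosetta 2) shell prints "x86_64":
arch -x86_64 /bin/sh -c 'uname -m'
```

That makes it easy to confirm the `arch -x86_64 $SHELL` trick above actually took effect before running the build.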

Glean iOS on an M1

Now we’re getting to the interesting part: Native iOS development on an M1. Obviously for Apple this is a priority: Their new machines should become the main machine people do iOS development on. Thus Xcode gained aarch64 support in version 12 long before the hardware was available. That caused quite some issues with existing tooling, such as the dependency manager Carthage. Here’s the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because … iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host’s CPU (that’s what makes it fast)

So when the compiler saw x86_64 and iOS it knew “Ah, simulator target” and when it saw aarch64 and ios it knew “Ah, hardware”. And everyone went with this, Xcode happily built both targets and, if asked to, was able to bundle them into one package.

With the introduction of Apple Silicon[3] the iOS simulator running on these machines would also be aarch64[4], and also contain ios, but not be for the iOS hardware.

Now Xcode and the compiler would get confused about what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows from where to pick the libraries and other SDK files when using that target. You still can’t put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary[5], because you can have each architecture only once (the arm64 part in there). That’s what Carthage and others stumbled over.
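The "each architecture only once" restriction is easy to see with lipo, the tool that assembles universal binaries. A sketch with hypothetical library paths:

```shell
# combining a device slice with the Intel simulator slice works,
# because the two architectures differ:
lipo -create \
  target/aarch64-apple-ios/release/libglean_ffi.a \
  target/x86_64-apple-ios/release/libglean_ffi.a \
  -output libglean_ffi-universal.a

# attempting the same with the arm64 simulator slice instead fails:
# a fat binary can carry each architecture only once, and both the
# device and the M1 simulator slices are arm64.
```

This is exactly the collision that tooling like Carthage ran into.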

Again Apple prepared for that: for a long time they have wanted you to use XCFramework bundles[6]. Carthage just didn’t support that yet. The 0.37.0 release fixed that.
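An XCFramework sidesteps the collision by keeping each (platform, environment) slice in its own directory instead of merging them into one fat binary. A sketch of how one is assembled, with hypothetical paths:

```shell
# each -library argument is a slice built for a distinct platform/environment
# pair; the device and simulator libraries no longer collide even though
# both are arm64:
xcodebuild -create-xcframework \
  -library device/libglean_ffi.a \
  -library simulator/libglean_ffi.a \
  -output Glean.xcframework
```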

That still leaves Rust behind, as it doesn’t know the new -simulator target. But as always the Rust community is ahead of the game and deg4uss3r started adding a new target in Rust PR #81966. He got halfway there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severely underdocumented, so I had to go the trial-and-error route in combination with looking at LLVM source code to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Contrary to the main aarch64-apple-darwin or aarch64-apple-ios targets, the simulator target is not Tier 2 yet and thus no prebuilt support is available. rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it’s currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I’m able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim
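Because -Z build-std compiles the standard library from source rather than using prebuilt artifacts, the nightly toolchain also needs the rust-src component. A setup sketch:

```shell
# install nightly and the standard library sources it will build from:
rustup toolchain install nightly
rustup component add rust-src --toolchain nightly

# now build-std can target the simulator, prebuilt std or not:
cargo +nightly build -Z build-std --target aarch64-apple-ios-sim
```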

For now Glean iOS development on an M1 is possible, but requires Nightly. Goal achieved, I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We’re not there yet and there’s currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week’s Rust Linz meetup, where I will be talking about this.


  1. See Platform Support for what the Tiers mean.↩︎
  2. The other name for that target.↩︎
  3. “Apple Silicon” is yet another name for what is essentially the same as “M1” or “macOS aarch64”↩︎
  4. Or arm64 for that matter. Yes, yet another name for the same thing.↩︎
  5. “Universal Binaries” have existed for a long time now and allow for one binary to include the compiled artifacts for multiple targets. It’s how there’s only one Firefox for Mac download which runs natively on either Mac platform.↩︎
  6. Yup, the main documentation they link to is a WWDC 2019 talk recording video.↩︎

Robert O'Callahan: Demoing The Pernosco Omniscient Debugger: Debugging Crashes In Node.js And GDB

This post was written by Pernosco co-founder Kyle Huey.

Traditional debugging means forming a hypothesis about what is going wrong with the program, gathering evidence to accept or reject that hypothesis, and repeating until the root cause of the bug is found. This process is time-consuming, and formulating useful hypotheses often requires deep understanding of the software being debugged. With the Pernosco omniscient debugger there’s no need to speculate about what might have happened; instead an engineer can ask what actually did happen. This radically simplifies the debugging process, enabling much faster progress while requiring much less domain expertise.

To demonstrate the power of this approach we have two examples from well-known and complex software projects. The first is an intermittently crashing Node.js test. From a simple stack walk it is easy to see that the proximate cause of the crash is calling a member function with a NULL `this` pointer. The next logical step is to determine why that pointer is NULL. In a traditional debugging approach, this requires pre-existing familiarity with the codebase, or reading code and looking for places where the value of this pointer could originate. Then an experiment, either poking around in an interactive debugger or adding relevant logging statements, must be run to see where the NULL pointer comes from. And because this test fails intermittently, the engineer has to hope that the issue can be reproduced again and that this experiment doesn’t disturb the program’s behavior so much that the bug vanishes.

In the Pernosco omniscient debugger, the engineer just has to click on the NULL value. With all program state available at all points in time, the Pernosco omniscient debugger can track this value back to its logical origin with no guesswork on the part of the user. We are immediately taken backwards to the point where the connection in question received an EOF and set this pointer to NULL. You can read the full debugging transcript here.

Similarly, with a crash in gdb, the proximate cause of the crash is immediately obvious from a stack walk: the program has jumped through a bad vtable pointer to NULL. Figuring out why the vtable address has been corrupted is not trivial with traditional methods: there are entire tools such as ASAN (which requires recompilation) or Valgrind (which is very slow) that have been designed to find and diagnose memory corruption bugs like this. But in the Pernosco omniscient debugger a click on the object’s pointer takes the user to where it was assigned into the global variable of interest, and another click on the value of the vtable pointer takes the user to where the vtable pointer was erroneously overwritten. Walk through the complete debugging session here.

As demonstrated in the examples above, the Pernosco omniscient debugger makes it easy to track down even classes of bugs that are notoriously difficult to work with such as race conditions or memory corruption errors. Try out Pernosco individual accounts or on-premises today!

About:Community: In loving memory of Ricardo Pontes

It brings us great sadness to share the news that a beloved Brazilian community member and Rep alumnus, Ricardo Pontes, has recently passed away.

Ricardo was one of the first Brazilian community members, contributing for more than 10 years, a good friend, and a mentor to other volunteers.

His work was instrumental in the Firefox OS days and his passion inspiring. His passing leaves us saddened and shocked. Our condolences to his family and friends.

Below are some words about Ricardo from fellow Mozillians (old and new)

  • Sérgio Oliveira (seocam): Everybody that knew Ricardo, or Pontes as we usually called him in the Mozilla community, knows that he had a strong personality (despite his actual height). He always stood for what he believed was right and fought for it, but always smiling, making jokes and playing around with the situations. It was real fun partnering with him in many situations, even the not so easy ones. We are lucky to have photos of Ricardo, since he was always behind the camera taking pictures of us, and always great pictures. Pontes, it was a great pleasure to defend the free Web side-by-side with you. I’ll miss you my friend.
  • Felipe Gomes: Ricardo was always a cheerful, lively person who had the gift of bringing every group together. Even during his fight it was possible to see how people came together to pray for him and how dear he was to his friends and family. The memories we have of him are the memories he captured of us through his camera. Rest in peace, my friend.
  • Andrea Balle: Pontes is and always will be part of Mozilla Brazil. One of the first members, the “jurassic team” as we called it. Pontes was a generous, intelligent and high-spirited friend. I will always remember him as a person with great enthusiasm for sharing the things that he loved, including bikes, photography, technology and the free web. He will be deeply missed.
  • Armando Neto: I met Ricardo 10 years ago, in a hotel hallway. We were chatting about something I don’t remember, but I do remember we were laughing, and I will always remember him that way in that hallway, laughing.
  • Luciana Viana: Ricardo was quiet and reserved, but he observed everything and was always attentive to what was going on. We met thanks to Mozilla and had the chance to spend time together thanks to our countless trips: Buenos Aires, Cartagena, Barcelona, Toronto, unforgettable trips thanks to his presence, contributions and sense of humor. Rest in peace, dear Chuck. I ask God to comfort the hearts of his family.
  • Clauber Stipkovic: Thank you for everything, my friend. For all the laughter, for all the late nights we spent talking about mozilla, about life and what we expected from the future. Thank you for being our photographer and recording so many cool moments, that we spent together. Unfortunately your future was very short, but I am sure that you recorded your name in the history of everything you did. May your passage be smooth and peaceful.
  • Luigui Delyer (luiguild): Ricardo was present in the best days I have ever had as a Mozillian. He taught me a lot, we enjoyed a lot, we traveled a lot, we taught a lot; his legacy is undeniable, his name will be forever in Mozilla’s history and in our hearts. May the family feel embraced by the entire world community that he helped to build.
  • Fabricio Zuardi: The memories I have of Ricardo are all of a person smiling, cheerful and in high spirits. He gave us wonderful records of happy moments. I wish comfort to his family and friends; he was a special person.
  • Guillermo Movia: I don’t remember the first time I met Ricardo, but there were so many meetings and travels where our paths crossed. I remember him coming to Mar del Plata to help us take pictures for the “De todos, para todos” campaign. His pictures were always great, and showed the best of the community. Hope you can rest in peace.
  • Rosana Ardila: Ricardo was part of the soul of the Mozilla Brazil community, he was a kind and wonderful human being. It was humbling to see his commitment to the Mozilla Community. He’ll be deeply missed
  • Andre Garzia: Ricardo has been a friend and my Reps mentor for many years, it was through him and others that I discovered the joys of volunteering in a community. His example, wit, and smile, were always part of what made our community great. Ricardo has been an inspiring figure for me, not only because the love of the web that ties us all here but because he followed his passions and showed me that it was possible to pursue a career in what we loved. He loved photography, biking, and punk music, and that is how I chose to remember him. I’ll always cherish the memories we had travelling the world and exchanging stories. My heart and thoughts go to his beloved partner and family. I’ll miss you a lot my friend.
  • Lenno Azevedo: Ricardo was my second mentor in the Mozilla Reps program, in which he guided me through the project, showing me the ropes and helping me become a good Rep. I will keep forever the lessons and encouragement he gave me over the years, especially in my current profession. I owe you one, partner. Thank you for everything, rest in peace!
  • Reuben Morais: Ricardo was a beautiful soul, full of energy and smiles. Meeting him at events was always an inspiring opportunity. His energy always made every gathering feel like we all knew each other as childhood friends, I remember feeling this even when I was new. He’ll be missed by all who crossed paths with him.
  • Rubén Martín (nukeador): Ricardo was key in supporting the Mozilla community in Brazil. As a creative mind he was always behind his camera, trying to capture and communicate what was going on; his work online will keep his memory alive. A great memory comes to my mind about the time we shared back in 2013 presenting Firefox OS to the world from Barcelona’s Mobile World Congress. You will be deeply missed, all my condolences to his family and close friends. Thank you for everything, rest in peace!
  • Pierros Papadeas: A creative and kind soul, Ricardo will be surely missed by the communities he engaged so passionately.
  • Gloria Meneses: Taking amazing photos, skating and supporting his local community. A very active Mozillian who loved parties after long working Reps sessions, and a beer lover; that’s how I remember Ricardo. The most special memories I have of Ricardo are in Cartagena at the Firefox OS event, in Barcelona at the Mobile World Congress taking photos, in Madrid at Reps meetings and in the IRC channel supporting Mozilla Brazil. I still can’t believe it. Rest in peace Ricardo.
  • William Quiviger: I remember Ricardo being very soft-spoken and gentle, but fiercely passionate about Mozilla and our mission. I remember his eyes lighting up when I approached him about joining the Reps program. Rest in peace Ricardo.
  • Fernando García (stripTM): I am very shocked by this news. It is so sad and so unfair.
  • Mário Rinaldi: Ricardo was a cheerful and jovial person; he will be greatly missed in this world.
  • Lourdes Castillo:  I will always remember Ricardo as a friend and brother who has always been dedicated to the Mozilla community. A tremendous person with a big heart. A hug to heaven and we will always remember you as a tremendous Mozillian and brother! Rest in peace my mozfriend
  • Luis Sánchez (lasr21): The legacy of Ricardo’s passions will live on through the hundreds of new contributors that his work reached.
  • Miguel Useche: Ricardo was one of the first Mozillians I met outside my country. It was interesting to meet someone who volunteered at Mozilla, did photography and loved skateboarding, just like me! I became a fan of his art and loved the little time I had the opportunity to share with him. Rest in peace bro!
  • Antonio Ladeia: Ricardo was a special guy, always happy and willing to help. I had the pleasure of meeting him. His death will make this world a little sadder.
  • Eduardo Urcullú (Urcu): Ricardo, better known as “O Pontes”, really was a very fun friend, although quiet when you didn’t yet know him well. I met him at a free software event back in 2010 (when I still had long hair xD). The photos he took with his camera and his situational humor are things to remember him by. R.I.P. Pontes
  • Dave Villacreses (DaveEcu): Ricardo was part of the early group of supporters here in Latin America; he contributed to breathing life into our beloved LatAm community. I remember he loved photography and was full of ideas and interesting comments every time. Smart and proactive. It is a really sad moment for our entire community.
  • Arturo Martinez: I met Ricardo during the MozCamp LATAM, and since then we became good friends, our paths crossed several times during events, flights, even at the MWC, he was an amazing Mozillian, always making us laugh, taking impressive pictures, with a willpower to defend what he believed, with few words but lots of passion, please rest in peace my friend.
  • Adriano Cupello: The first time we met, we were in Cartagena for the launch of Firefox OS, and I met one of the most amazing groups of people of my life. Pontes was one of them and very quickly became an “old friend”, like the ones we have known since school all our lives. He was an incredible and strong character and a great photographer. He was also my mentor in the Mozilla Reps program. The last time we talked, we tried to have a beer, but due to the circumstances of work, we were unable to. We scheduled it for the next time, and that time never came. This week I will have this beer thinking about him. I would like to invite all of you, at the next beer you have with your friends or alone, to dedicate it to his memory and to the great moments we spent together with him. My condolences and my prayers to the family and his partner @cahcontri, who fought a very painful battle to report his situation until the last day with all her love. Thank you for all the lovely memories you left in my mind! We will miss you a lot! Cheers Pontes!
  • Rodrigo Padula: There were so many events, beers, good conversations and so many jokes and laughs that I don’t even remember when I met Ricardo. We shared the same sense of humor and bad jokes. Surely only good memories will remain! Rest in peace Ricardo, we will miss you!
  • Brian King: I was fortunate to have met Ricardo several times. Although quiet, you felt his presence and he was a very cool guy. We’ll miss you, I hope you get that big photo in the sky. RIP Ricardo.

Some pictures of Ricardo’s life as a Mozilla contributor can be found here