Daniel Stenberg: What goes into curl?

curl is a command line tool and library for doing Internet data transfers. It has been around for a loooong time (over 23 years) but there is still a flood of new things being added and development being done, to take it further and keep it relevant today and in the future.

I’m the lead developer and head maintainer of the curl project.

How do we decide what goes into curl? And perhaps more importantly, what does not get accepted into curl?

Let’s look at how this works in the curl factory.

Stick to our principles

curl has come this far by being reliable, trusted and familiar. We don’t rock the boat: curl does Internet transfers specified as URLs and it doesn’t parse or understand the content it transfers. That goes for libcurl too.

Whatever we add should stick to these constraints and core principles, at least. Then of course there are more things to consider.

A shortlist of things I personally want to see

I usually have a shortlist of a few features I personally want to work on in the coming months and maybe half year – items I can grab when other things are slow, or when I need a change or a fun thing to work on a rainy day. These items are laid out in the ROADMAP document – which also tends to be updated a little too infrequently…

There’s also the TODO document that lists things we consider could be good to do and KNOWN_BUGS that lists known shortcomings we want to address.

Sponsored works have priority

I’m the lead developer of the curl project but I also offer commercial support and curl services to allow me to work on curl full-time. This means that paying customers can get a “priority lane” into landing new features or bug-fixes in future releases of curl. They still need to suit the project though, we don’t abandon our principles even for money. (Contact me to learn how I can help you get your proposed changes done!)

Keep up with where the world goes

All changes and improvements that help curl keep up with and follow where the Internet protocol community is moving, are considered good and necessary changes. The curl project has always been on the front-lines of protocols and that is where we want to remain. It takes a serious effort.

Asking the community

Every year around the May time frame we do a “user survey” that we try to get as many users as possible to respond to. It asks about user patterns, what’s missing and how things are working.

The results from that work provide good feedback on areas to improve and help us identify features our community think curl lacks etc. (The 2020 survey analysis)

Even outside of the annual survey, discussions on the mailing lists are a good way of getting direct feedback on questions and ideas, and users very often bring up their ideas and suggestions using those channels.

Ideas are easy, code is harder

Actually implementing and providing a feature is a lot harder than just providing the idea. We almost drown among all the good ideas people propose we might or could do one day. What someone thinks is a great idea may therefore still not be implemented very soon – because of the complexity of implementing it, or for lack of time or energy, etc.

But at the same time: oftentimes someone needs to bring the idea or make the suggestion for it to happen.

It needs to exist to be considered

Related to the previous section: code and changes that actually exist, that are provided, are of course much more likely to end up in curl than abstract ideas. If a pull-request comes to curl and the change adheres to our standards and meets the requirements mentioned in this post, then chances are very good that it will be accepted and merged.

As I am currently the only one working on curl professionally (i.e. I get paid to do it), I can rarely count on or assume work submissions from other team members. They usually show up more or less by surprise, which of course is awesome in itself but also makes such work and features very hard to plan for ahead of time. Sometimes people bring new features. Then we deal with them!

Half-baked is not good enough

A decent amount of all pull requests submitted to the project never get merged because they aren’t good enough, and the person who submitted them doesn’t respond properly to feedback and improvement requests, so they never become good enough. Things like documentation and tests are typically just as important as the functionality itself.

Pull requests that are abandoned by the author can of course also get taken over by someone else, but that cannot be expected or relied upon. A person giving up on a pull request is also a strong sign that the desire to get that specific change landed wasn’t that big, and that tells us something.

We don’t accept and merge partial changes that for example lack a crucial part like tests or documentation because we’ve learned the hard way many times over the years that it is just too common that the author then vanishes before completing the work – forcing others to do that work or we have to rip the change out again.

Standards and in-use are preferred properties

At times people suggest we support new protocols or experiments for new things. While that can be considered fun and useful, we typically want both the protocol and the associated URL syntax to already be in use and be somewhat established and preferably even standardized and properly documented in specifications. One of the fundamental core ideas with URLs is that they should mean the same thing for more than one application.

When no compass needle exists, maintain existing direction

Most changes are in line with what we already do and how the products work, so no major considerations are necessary. Only once in a while do we get requests or suggestions that actually challenge the direction or force us to consider what is the right and what is the wrong way.

If the reason and motivation provided is valid and holds up, then we might agree and go in that direction. If we don’t, we discuss the topic and see if we perhaps can change someone’s mind, or “wiggle” the concepts and ideas to see whether we can change the suggestion or see it from a different angle to reconsider. Sometimes we just have to decline and say no: that’s not something we think is in line with curl.

Who decides if it’s fine?

curl is not a democracy, we don’t vote about decisions or what to accept etc.

curl is also not a strict dictatorship where a single leader dictates all truths and facts from above for all subjects to accept and obey.

We’re somewhere in between. We discuss and try to find consensus on what to do and how to do it. The persons who bring the code or experience the actual problems will of course have more to say. Experienced and long-term maintainers’ opinions have more weight in discussions, and they’re free and allowed to merge pull-requests they think are good.

I retain the right to veto stuff, but I very rarely exercise that right.

curl is still a small project. You’ll quickly recognize the same handful of maintainers in all pull-requests, with a long tail of others chipping in here and there. There’s no massive crowd anywhere. That’s also the explanation why sometimes your pull-requests might not get reviewed instantly, but you rather have to wait a while until you get someone’s attention.

If you’re curious to learn how the project is governed in more detail, then check out the governance docs.

How to land code in curl

I’ve done a previous presentation on how to work with the project to get your code landed in curl. Check it out!

Your feedback helps!

Listening to what users want, miss and think is needed going forward is very important to us – even if it sometimes is hard to react immediately, and we often have to bounce things back and forth a little before they can become “curl material”. So, please don’t expect us to immediately implement what you suggest, but please don’t let that stop you from bringing your grand ideas.

And bring your code. We love your code.

Mozilla Open Policy & Advocacy Blog: Mozilla joins call for fifth FCC Commissioner appointment

In a letter sent to the White House on Friday, June 11, 2021, Mozilla joined over 50 advocacy groups and unions asking President Biden and Vice President Harris to appoint the fifth FCC Commissioner. Without a full team of appointed Commissioners, the Federal Communications Commission (FCC) is limited in its ability to move forward on crucial tech agenda items such as net neutrality and on addressing the country’s digital divide.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

In March 2021, we sent a joint letter to the FCC asking for the Commission to reinstate net neutrality as soon as it is in working order. Mozilla has been one of the leading voices in the fight for net neutrality for almost a decade, together with other advocacy groups. Mozilla has defended user access to the internet, in the US and around the world. Our work to preserve net neutrality has been a critical part of that effort, including our lawsuit against the FCC to keep these protections in place for users in the US.

The post Mozilla joins call for fifth FCC Commissioner appointment appeared first on Open Policy & Advocacy.

Daniel Stenberg: Bye bye Travis CI

On the afternoon of October 17, 2013, we merged the first config file ever that would use Travis CI for the curl project, using the nifty integration at GitHub. This was the actual introduction of the entire concept of building and testing the project on every commit and pull request for the curl project. Before this merge happened, we only had our autobuilds: systems run by volunteers that update the code from git maybe once per day, build it, run the tests and then upload all the logs.

Don’t take this the wrong way: the autobuilds are awesome and have helped us make curl what it is. But they rely on individuals to host and admin the machines and to set up the specific configs that are tested.

With the introduction of “proper” CI, the configs that are tested are now also hosted in git, which allows the project members to better control and adjust the tests and configs. Plus, we can run them already on pull-requests, so that we can verify code before merge instead of having to first merge the code to master before the changes can get verified.

Travis provided a free service with a great promise: free for open source. Seriously. Always.

(Promise from the Travis website as recently as late 2020.)

In 2017 we surpassed 10 jobs per commit, all still on Travis.

In early 2019 we reached 30 jobs per commit, and at that time we started to use and spread out the work on more CI services. Travis would still remain as the one we’d lean on the heaviest. It was there and we had custom-written a bunch of jobs for it and it performed well.

Travis even turned some levers for us so that we got more parallel processing power than on the regular open source tier, and we listed them as sponsors of the curl project for their gracious help. This may or may not be related to the fact that I met Josh Kalderimis (co-founder of Travis) in 2019 and we talked about curl’s use of it and them possibly helping us more.

Transition to death

This year, 2021, the curl project runs around 100 CI jobs per commit and PR. 33 of them ran on Travis when we were finally pushed over from travis-ci.org to their new travis-ci.com domain – a transition they’d been advertising for a while but which, in my eyes, was never very clearly explained or motivated.

The new domain also implied new rules and new tiers, we quickly learned. Now we would have to apply to be recognized as an open source project (after 7.5 years of using their services as an open source project). But also, being an open source project was no longer enough to take advantage of their free tier. Among the new requirements on the project was this:

Project must not be sponsored by a commercial company or
organization (monetary or with employees paid to work on the project)

We’re a small independent open source project, but yes I work on curl full-time thanks to companies paying for curl support. I’m paid to work on curl and therefore we cannot meet that requirement.

Not eligible but still there

I’m not sure why, but apparently we still got free “credits” for running CI on Travis. The CI jobs kept working and I think I sighed a little from relief – prematurely, of course, as it only took us a few days into the month of June until we had run out of the free credits. There’s no automatic refill, but we can apparently ask for more. We asked, but many days after having asked we still had no more credits and no CI jobs could run on Travis anymore. CI on Travis at the same level as before would cost more than 249 USD/month. Maybe not so much “it will always be free”.

The 33 jobs on Travis were there for a purpose. They’re prerequisites for us to develop and ship a quality product. Without the CI jobs running, we risk landing bad code. This was not a sustainable situation.

We looked for alternative services and we quickly got several offers of help and assistance.

New service

Friends from both Zuul CI and Circle CI stepped up and helped us start getting CI jobs transitioned from Travis over to their new homes.

On June 14th, 2021, we officially had no more jobs running on Travis.

Visualized as a graph, we can see the Travis jobs “falling off a cliff” with Zuul rising to the challenge:

Services come and go. There’s no need to get hung up on that fact but instead keep moving forward with our eyes fixed on the horizon.

Thank you Travis CI for all those years of excellent service!

Pay up?

Lots of people have commented and think I’m “whining” about Travis CI charging for something that is useful, and that I should rather just pay up. I could probably have gone with that, but I dislike their broken promise and the fact that they no longer consider us open source. I also feel a responsibility to use the funds we get from gracious donors as wisely and economically as possible, and that includes using no-cost or cheap services rather than services charging thousands of dollars per year.

If there really were no other available and viable options, then paying could’ve been an alternative. Now, moving on to something else was the right choice for us.


Image by Gerd Altmann from Pixabay

Niko Matsakis: CTCFT 2021-06-21 Agenda

The second “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place one week from today, on 2021-06-21 (in your time zone)! This post describes the main agenda items for the meeting; you’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Afterwards: Social hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Turbowish and Tokio console

Presented by: pnkfelix and Eliza (hawkw)

Rust programs are known for being performant and correct – but what about when that’s not true? Unfortunately, the state of the art for Rust tooling today can often be a bit difficult. This is particularly true for Async Rust, where users need insights into the state of the async runtime so that they can resolve deadlocks and tune performance. This talk discusses what top-notch debugging and tooling for Rust might look like. One particularly exciting project in this area is tokio-console, which lets users visualize the state of projects built on the tokio library.

Guiding principles for Rust

Presented by: nikomatsakis

As Rust grows, we need to ensure that it retains a coherent design. Establishing a set of “guiding principles” is one mechanism for doing that. Each principle captures a goal that Rust aims to achieve, such as ensuring correctness, or efficiency. The principles give us a shared vocabulary to use when discussing designs, and they are ordered so as to give guidance in resolving tradeoffs. This talk will walk through a draft set of guiding principles for Rust that nikomatsakis has been working on, along with examples of how those principles are enacted through Rust’s language, library, and tooling.

François Marier: How to get a direct WebRTC connection between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box.

The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. What that means, however, is that in some cases your video/audio connection will need to be relayed (using end-to-end encryption) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might be overloaded at some point and drop or delay packets coming through.

Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality

Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity.

Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:

  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).

  • Firefox: Ensure that network.http.referer.spoofSource is set to false in about:config, which it is by default.

  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have

Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on.

My favorite service at the moment is Whereby (formerly Appear.in), so I'm going to use that to connect from two different computers:

  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address ( is set as the DMZ host.


For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals.

Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:

  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true

Then from the name of that pair (N6cxxnrr_OEpeash in the above example) find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them.

In the case of a direct connection, I saw the following on the remote-candidate:

  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx

and the following on local-candidate:

  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx

These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct.

On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:

  • ip shows an IP address belonging to the TURN server
  • candidateType: relay

and the same information as before on the local-candidate.


If you are using Firefox, the debugging page you want to look at is about:webrtc.

Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:

  • ICE State: succeeded
  • Nominated: true
  • Selected: true

then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.
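Both browser families also expose the same statistics programmatically through the standard RTCPeerConnection.getStats() API, so this check can be scripted instead of done by hand. Below is a small sketch of that approach; the helper function name is mine, not part of any library, and it operates on an array of stats objects (which is what you get from `Array.from((await pc.getStats()).values())` in the browser):

```javascript
// Given an array of RTCStats objects, find the nominated, succeeded
// candidate pair and report the candidate types on both ends.
function describeSelectedPair(stats) {
  // Index every stats entry by its id so candidate ids can be resolved.
  const byId = new Map(stats.map((s) => [s.id, s]));

  // The selected pair is the nominated one in "succeeded" (or, on some
  // platforms, "in-progress") state.
  const pair = stats.find(
    (s) =>
      s.type === "candidate-pair" &&
      s.nominated &&
      (s.state === "succeeded" || s.state === "in-progress")
  );
  if (!pair) return null;

  const local = byId.get(pair.localCandidateId);
  const remote = byId.get(pair.remoteCandidateId);

  return {
    // "relay" on either side means the traffic goes through a TURN server.
    relayed:
      local.candidateType === "relay" || remote.candidateType === "relay",
    localType: local.candidateType,
    remoteType: remote.candidateType,
  };
}

// In the browser (pc is an RTCPeerConnection on an active call):
//   const report = await pc.getStats();
//   console.log(describeSelectedPair(Array.from(report.values())));
```

A result with `srflx`/`prflx` types and `relayed: false` corresponds to the direct-connection case described above, while `relay` on either candidate matches the TURN-relayed case.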

Firewall ports to open to avoid using a relay

In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections.

This isn't great and so I decided to tighten that up in two ways by:

  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.

To get the IP range, start with the external IP address of the machine (I’ll use the IP address of my blog in this example) and pass it to the whois command:

$ whois | grep CIDR

To get the list of open UDP ports on siberia, I sshed into it and ran nmap:

$ sudo nmap -sU localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (
Host is up (0.000015s latency).
Not shown: 994 closed ports
631/udp   open|filtered ipp
5060/udp  open|filtered sip
5353/udp  open          zeroconf

Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds

I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):

# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s -p udp --dport 1024:65535 -j ACCEPT

Patrick Cloke: Converting Twisted’s inlineCallbacks to async

Almost a year ago we had a push at Element to convert the remaining instances of Twisted’s inlineCallbacks to use native async/await syntax from Python [1]. Eventually this work got covered by issue #7988 (which is the original basis for this blogpost).

Note that Twisted itself gained some …

Data@Mozilla: ⚠️Danger zone⚠️: handling sensitive data in Glean

Co-authored by Alessio Placitelli and Beatriz Rizental.
(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

🎵 “Precious and fragile things, need special handling […]” 🎵, and that applies to data, too!

Over the years, a number of projects at Mozilla had to handle the collection of sensitive data users explicitly decided to share with us (think, just as an example, things as sensitive as full URLs). Most of the time projects were designed and built over our legacy telemetry systems, leaving developers with the daunting task of validating their implementations, asking for security reviews and re-inventing their APIs.

With the advent of Glean, Mozilla’s Data Org took the opportunity to improve this area, allowing our internal customers to build better data products.



Data collection + Pipeline Encryption = ✨

We didn’t really talk about what we mean by “special handling”, did we?

For data that is generally not sensitive (e.g. the amount of available RAM), after a product using Glean submits a ping, it hits the ingestion pipeline. The communication channel between the Glean client and the ingestion server is HTTPS, which means the channel is encrypted from one end (the client) to the other end (the ingestion server). After the ingestion server is hit, unencrypted pings are routed within our ingestion pipeline and dispatched to the destination tables.

For products requesting pipeline encryption to make sure only specific individuals and pipeline engineers can access the data, the path is slightly different. When enabling them in the ingestion pipeline, an encryption key is provisioned and must be configured in the product using Glean before new pings can be successfully ingested into a data store. From that moment on, all the pings generated by the Glean client will look like this:

"payload": "eyJhbGciOiJFQ0RILUVTI..."

Not a lot of info to route things within the pipeline, right? 🤦

Luckily for our pipeline, all Glean ping submissions conform to the HTTP Edge Specification. By knowing the Glean application id (which maps to the document namespace from the HTTP Edge Specification) and the ping type, the pipeline knows everything it needs to route pings to their destination, look up the decryption keys and decrypt the payload before inserting it into the destination table.

It’s important to note that only a handful of pipeline engineers are authorized to inspect the encrypted payload (and enabled to fix things if they break!) and only an explicit list of individuals, created when enabling the product in the pipeline, is allowed to access the final data within a secure, locked down environment.

How does the ✨magic✨ happen in the Glean SDKs?

As discussed, ping encryption is not a feature required by all products using Glean. From a client standpoint, it is also a feature that has the potential to significantly increase the size of the final Glean SDK because, in most environments, external dependencies are necessary to encrypt the ping payload. Ideally, we should find a way to make it an opt-in feature i.e. only users that actually need it pay the (size) price for it. And so we did.

Ping encryption was the perfect use case to implement a new and long discussed feature in the Glean SDKs: plugins. By implementing the ping encryption as a plugin and not a core feature, we achieve the goal of making it an opt-in feature. This strategy also has the added bonus of keeping the encryption initialization parameters out of the Glean configuration object, win/win.

Since the ping encryption plugin would be the first ever Glean SDK plugin, we needed to figure out our plugin architecture. In a nutshell, the concept we settled for is: plugins are classes that define an action to be performed when a specific Glean event happens. Each event might provide extra context for the action performed by the plugin and might require that the plugin return a modified version of said context. Plugin instances are passed to Glean as initialization parameters.

Let’s put a shape to this, by describing the ping encryption plugin.

  • The ping encryption plugin is registered to the afterPingCollection event.
    •  This event will call a plugin action after the ping is collected, but before it is stored and queued for upload. This event will also provide the collected ping payload as context to the plugin action and requires that the action return a JSON object. Whatever the action returns is what will be saved and queued for upload in place of the original payload. If no plugin is registered to this event, collection happens as usual.
  • The ping encryption plugin action gets the ping payload from this event and returns the encrypted version of that payload.
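The mechanism described above can be sketched in a few lines. This is a simplified illustration of the plugin concept, not the actual Glean SDK internals, and the class and function names are mine (base64 stands in for the real JWE encryption):

```javascript
// A plugin declares which Glean event it hooks into and an action to run.
class Plugin {
  constructor(event) {
    this.event = event;
  }
  action(context) {
    return context;
  }
}

// A hypothetical encryption plugin: registered to "afterPingCollection",
// it receives the collected payload and returns a transformed JSON object.
class FakeEncryptionPlugin extends Plugin {
  constructor() {
    super("afterPingCollection");
  }
  action(payload) {
    // Real Glean produces a JWE; base64 merely stands in for encryption here.
    return { payload: Buffer.from(JSON.stringify(payload)).toString("base64") };
  }
}

// The "event" side: whatever the registered plugin's action returns is
// what gets stored and queued for upload in place of the original payload.
// With no plugin registered, collection happens as usual.
function collectPing(ping, plugin) {
  if (plugin && plugin.event === "afterPingCollection") {
    return plugin.action(ping);
  }
  return ping;
}
```

The key design point this sketch tries to capture is that the core collection code only knows about events, not about encryption: the transformation lives entirely in the opt-in plugin.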

In order to use this plugin, products using Glean need to pass an instance of it to the Glean SDK of their choice during initialization.

import Glean from "@mozilla/glean/webext";
import PingEncryptionPlugin from "@mozilla/glean/plugins/encryption";

// The application id and upload flag below are illustrative.
Glean.initialize("my-app-id", true, {
    plugins: [
      new PingEncryptionPlugin({
        "crv": "P-256",
        "kid": "fancy",
        "kty": "EC",
        "x": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "y": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      })
    ]
});

And that is it. All pings sent from this Glean instance will now be encrypted before they are sent.


Note: The ping encryption plugin is only available on the Glean JavaScript SDK at the moment. Please refer to the Glean book for comprehensive documentation on using the PingEncryptionPlugin.

Limitations and next steps


While the current approach serves the needs of Mozilla’s internal customers, there are some limitations that we are planning to smooth out in the future. For example, in order to be properly routed, products that want to opt-into Glean pipeline encryption will need to use a fixed, common prefix in their application id. Another constraint of the current system is that once a product opts into Pipeline encryption, all the pings are expected to be encrypted: the same product won’t be able to send both pipeline-encrypted and pipeline-unencrypted pings.

One final constraint is that the tooling available in the secure environment is limited to Jupyter notebooks.


The pipeline encryption support in Glean wasn’t built in a day! This major feature is based on incremental work by many Mozillians over the past year (thank you Wesley Dawson, Anthony Miyaguchi, Arkadiusz Komarzewski and anyone else who helped with it!).

And kudos to the first product making use of this neat feature!

Support.Mozilla.Org: What’s up with SUMO – June 2021

Hey SUMO folks,

Welcome to the month of June 2021. A new mark for Firefox with the release of Firefox 89. Lots of excitement and anticipation for the changes.

Let’s see what we’re up to these days!

Welcome on board!

  1. Welcome and thanks to TerryN21 and Mamoon for being active in the forum.

Community news

  • June is the month of Major Release 1 (MR1), commonly known as the Proton release. We have prepared a spreadsheet listing the changes for this release, so you can easily find the workarounds, related bugs, and common responses for each issue. You can join the Firefox 89 discussion in this thread and find out about our tagging plan here.
  • If an advanced topic like pref modification in the about:config is something that you’re interested in, please join our discussion in this community thread. We talked about how we can accommodate this in a more responsible and safer way without harming our normal users.
  • What do you think of supporting Firefox users on Facebook? Join our discussion here.
  • We said goodbye to Joni last month and Madalina has also bid farewell to us in our last community call (though she’ll stay until the end of the quarter). It’s sad to let people go, but we know that changes are normal and expected. We’re grateful for what both Joni and Madalina have done in SUMO and hope the best for whatever comes next for them.
  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • There’s only one update from our dev team in the past month:

Community call

  • Find out what we talked about in our community call in May.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats


KB Page views

Month    | Page views | Vs previous month
May 2021 | 7,601,709  | -13.02%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Michele Rodaro
  4. Underpass
  5. Marchelo Ghelman

KB Localization

Top 10 locales based on total page views

Locale  Apr 2021 page views  Localization progress (as of Jun 3)
de      10.05%               99%
zh-CN   6.82%                100%
es      6.71%                42%
pt-BR   6.61%                65%
fr      6.37%                86%
ja      4.33%                53%
ru      3.54%                95%
it      2.28%                98%
pl      2.17%                84%
zh-TW   1.04%                6%

Top 5 localization contributors in the last 90 days:

  1. Milupo
  2. Artist
  3. Markh2
  4. Soucet
  5. Goudron

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jun 2021 3091 65.97% 13.62% 63.64%

Top 5 forum contributors in the last 90 days:

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Seburo
  5. Databaseben

Social Support

Channel (May 2021)  Total conv  Conv handled
@firefox            4012        212
@FirefoxSupport     367         267

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Md Monirul Alom
  3. Devin E
  4. Andrew Truong
  5. Dayana Galeano

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • Fx 89 / MR1 released (June 1)
    • BIG THANKS to all the contributors who helped with article revisions, localization, and the ongoing MR1 Rapid Feedback Collection reporting
  • Fx 90 (July 13)
    • Background update Agents
    • SmartBlock UI improvements
    • About:third-party addition

Firefox mobile

  • Fx for Android 89 (June 1)
    • Improved menus
    • Redesigned Top Sites
    • Easier access to Synced Tabs
  • Fx for iOS V34 (June 1)
    • Updated Look
    • Search enhancements
    • Tab improvements
  • Fx for Android 90 (July 13th)
    • CC autocomplete

Other products / Experiments

  • Sunset of Firefox Lite (June 1)
    • Effective June 30, this app will no longer receive security or other updates. Get the official Firefox Android app now for a fast, private & safe web browser
  • Mozilla VPN V2.3 (June 8)
    • Captive Portal Alerts
  • Mozilla VPN V2.4 (July 14)
    • Split tunneling for Windows
    • Local DNS: user setting for a local DNS server


  • Thanks to Danny Colin and Monirul Alom for helping with the MR1 feedback collection project! 🙌

If you know anyone that we should feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Useful links:

Mozilla Localization (L10N): L10n Report: June 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

Firefox 89 (MR1)

On June 1st, Mozilla released Firefox 89. That was a major milestone for Firefox, and a lot of work went into this release (internally called MR1, which stands for Major Release 1). The new update was well received (see for example this recent article from ZDNet), and that's also thanks to the amazing work done by our localization community.

For the first time in over a decade, we looked at Firefox holistically, making changes across the board to improve messages, establish a more consistent tone, and modernize some dialogs. This inevitably generated a lot of new content to localize.

Between November 2020 and May 2021, we added 1637 strings (6798 words). As a point of reference, that's almost 14% of the entire browser. What's amazing is that the completion levels didn't fall drastically:

  • Nov 30, 2020: 89.03% translated across all shipping locales, 99.24% for the top 15 locales.
  • May 24, 2021: 87.85% translated across all shipping locales, 99.39% for the top 15 locales.

The completion level across all locales is lower, but that’s mostly due to locales that are completely unmaintained, and that we’ll likely need to drop from release later this year. If we exclude those 7 locales, overall completion increased by 0.10% (to 89.84%).

Once again, thanks to all the volunteers who contributed to this successful release of Firefox.

What’s new or coming up in Firefox desktop

These are the important deadlines for Firefox 90, currently in Beta:

  • Firefox 90 will be released on July 13. It will be possible to update localizations until July 4.
  • Firefox 91 will move to beta on July 12 and will be released on August 10.

Keep in mind that Firefox 91 is also going to be the next ESR version. Once that moves to release, it won’t generally be possible to update translations for that specific version.

Speaking of Firefox 91, we're planning to add a new locale: Scots. Congratulations to the team for making it to release so quickly!

On a final note, expect to see more updates to the Firefox L10n Newsletter, since this has proved to be an important tool to provide more context to localizers, and help them with testing.

What’s new or coming up in mobile

Next l10n deadlines for mobile projects:

  • Firefox for Android v91: July 12
  • Firefox for iOS v34.1: June 9

Once more, we want to thank all the localizers who worked hard for the MR1 (Proton) mobile release. We really appreciate the time and effort spent on helping ensure all these products are available globally (and of course, also on desktop). THANK YOU!

What’s new or coming up in web projects


There are a few strings exposed in Pontoon that do not require translation. Only Mozilla staff with an admin role for the product can see them. The developer of the feature will add a “no need to translate” comment or context to these strings at a later time; we don't know when that will happen. For the time being, please ignore them. Most strings with a source string ID of src/olympia/scanners/templates/admin/* can be ignored, though a handful of strings still fall outside that category.


The project continues to be on hold in Pontoon. The product repository doesn't pick up any changes made in Pontoon, so fr, ja, zh-CN, and zh-TW are read-only for now. The MDN site, however, still maintains the articles localized in these languages, plus ko, pt-BR, and ru.


The websites in the ar, hi-IN, id, ja, and ms languages have been fully localized through a vendor service since our last report. Communities in these languages are encouraged to help promote the sites on various social media platforms to increase downloads, conversions, and new profile creation.

What’s new or coming up in SuMo

Lots of exciting things happening in SUMO in Q2. Here’s a recap of what’s happening:

  • You can now subscribe to Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • We now have release notes for Kitsune in Discourse. The latest one was about the new advanced search syntax, which replaces the former Advanced Search feature.
  • We are trying something new for Firefox 89 by collecting MR1 (Major Release 1) specific feedback from across channels (support forum, Twitter, and Reddit). You can look into how we’re doing it on the contributor thread and learn more about MR1 changes from a list that we put together on this spreadsheet.

As always, feel free to join SUMO Matrix room to discuss or just say hi to the rest of the community.

What’s new or coming up in Pontoon

Since May, we’ve been running experiments in Pontoon to increase the number of users reading notifications. For example, as part of this campaign, you might have seen a banner encouraging you to install the Pontoon Add-on — which you really should do — or noticed a slightly different notification icon in the top right corner of the window.

Recently, we also sent an email to all Pontoon accounts active in the past 2 years, with a link to a survey specifically about further improving notifications. If you haven’t completed the survey yet, or haven’t received the email, you can still take the survey here (until June 20th).

Look out for pilcrows

When a source string includes line breaks, Pontoon will show a pilcrow character (¶) where the line break happens.

This is what the Fluent file looks like:

onboarding-multistage-theme-tooltip-automatic-2 =
    .title =
        Inherit the appearance of your operating
        system for buttons, menus, and windows.

While in most cases the line break is not relevant — it’s just used to make the source file more readable — double check the resource comment: if the line break is relevant, it will be pointed out explicitly.

If the line break is not relevant, you can just put your translation on one line.
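For example, the string above could be translated on a single line (the Italian text below is a hypothetical translation, shown only to illustrate the format):

```
onboarding-multistage-theme-tooltip-automatic-2 =
    .title = Eredita l’aspetto del sistema operativo per pulsanti, menu e finestre.
```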

If you want to preserve the line breaks in your translation, you have a few options:

  • Use SHIFT+ENTER to create a new line while translating.
  • Click the ¶ character in the source: that will create a new line in the position where your cursor currently sits.
  • Use the COPY button to copy the source, then edit it. That’s not really efficient, as your locale might need a line break in a different place.

Do not select the text with your mouse and paste it into the translation field. That would copy the literal ¶ character into the translation, and it would be displayed in the final product, causing bugs.

If you see the ¶ character in the translation field (see red arrow in the image below), it will also appear in the product you are translating, which is most likely not what you want. On the other hand, it’s expected to see the ¶ character in the list of translations under the translation field (green arrow), as it is in the source string and the string list.


  • We held our first Localization Workshop Zoom event on Saturday, June 5th. The next iterations will happen on Friday, June 11th and Saturday, June 12th. We have invited active managers and translators from a subset of locales. If this experience turns out to be useful, we will consider opening it up to an even larger audience with expanded locales.
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

The Mozilla Blog: Privacy analysis of FLoC

In a previous post, I wrote about a new set of technologies, “Privacy Preserving Advertising”, which are intended to allow for advertising without compromising privacy. This post discusses one of those proposals, Federated Learning of Cohorts (FLoC), which Chrome is currently testing. The idea behind FLoC is to make it possible to target ads based on the interests of users without revealing their browsing history to advertisers. We have conducted a detailed analysis of FLoC privacy. This post provides a summary of our findings.

In the current web, trackers (and hence advertisers) associate a cookie with each user. Whenever a user visits a website that has an embedded tracker, the tracker gets the cookie and can thus build up a list of the sites that a user visits. Advertisers can use the information gained from tracking browsing history to target ads that are potentially relevant to a given user’s interests. The obvious problem here is that it involves advertisers learning everywhere you go. 

FLoC replaces this cookie with a new “cohort” identifier which represents not a single user but a group of users with similar interests. Advertisers can then build a list of the sites that all the users in a cohort visit, but not the history of any individual user. If the interests of users in a cohort are truly similar, this cohort identifier can be used for ad targeting. Google has run an experiment with FLoC; based on it, they state that FLoC provides 95% of the per-dollar conversion rate compared to interest-based ad targeting using tracking cookies.

Our analysis shows several privacy issues that we believe need to be addressed:

Cohort IDs can be used for tracking

Although any given cohort is going to be relatively large (the exact size is still under discussion, but these groups will probably consist of thousands of users), that doesn’t mean that they cannot be used for tracking. Because only a few thousand people will share a given cohort ID, if trackers have any significant amount of additional information, they can narrow down the set of users very quickly. There are a number of possible ways this could happen:

Browser Fingerprinting

Not all browsers are the same. For instance, some people use Chrome and some use Firefox; some people are on Windows and others are on Mac; some people speak English and others speak French. Each piece of user-specific variation can be used to distinguish between users. When combined with a FLoC cohort that only has a few thousand users, a relatively small amount of information is required to identify an individual person or at least narrow the FLoC cohort down to a few people. Let’s give an example using some numbers that are plausible. Imagine you have a fingerprinting technique which divides people up into about 8000 groups (each group here is somewhat bigger than a ZIP code). This isn’t enough to identify people individually, but if it’s combined with FLoC using cohort sizes of about 10000, then the number of people in each fingerprinting group/FLoC cohort pair is going to be very small, potentially as small as one. Though there might be larger groups that can’t be identified this way, that is not the same as having a system that is free from individual targeting.

Multiple Visits

People’s interests aren’t constant and neither are their FLoC IDs. Currently, FLoC IDs seem to be recomputed every week or so. This means that if a tracker is able to use other information to link up user visits over time, they can use the combination of FLoC IDs in week 1, week 2, etc. to distinguish individual users. This is a particular concern because it works even with modern anti-tracking mechanisms such as Firefox’s Total Cookie Protection (TCP). TCP is intended to prevent trackers from correlating visits across sites but not multiple visits to one site. FLoC restores cross-site tracking even if users have TCP enabled. 

FLoC leaks more information than you want

With cookie-based tracking, the amount of information a tracker gets is determined by the number of sites it is embedded on. Moreover, a site which wants to learn about user interests must itself participate in tracking the user across a large number of sites, work with some reasonably large tracker, or work with other trackers. Under a permissive cookie policy, this type of tracking is straightforward using third-party cookies and cookie syncing. However, when third-party cookies are blocked (or isolated by site in TCP) it’s much more difficult for trackers to collect and share information about a user’s interests across sites.

FLoC undermines these more restrictive cookie policies: because FLoC IDs are the same across all sites, they become a shared key to which trackers can associate data from external sources. For example, it’s possible for a tracker with a significant amount of first-party interest data to operate a service which just answers questions about the interests of a given FLoC ID. E.g., “Do people who have this cohort ID like cars?”. All a site needs to do is call the FLoC APIs to get the cohort ID and then use it to look up information in the service. In addition, the ID can be combined with fingerprinting data to ask “Do people who live in France, have Macs, run Firefox, and have this ID like cars?” The end result here is that any site will be able to learn a lot about you with far less effort than they would need to expend today.

FLoC’s countermeasures are insufficient

Google has proposed several mechanisms to address these issues.

First, sites have the option of whether or not to participate in FLoC. In the current experiment that Chrome is conducting, sites are included in the FLoC computation if they do ads-type stuff, either “load ads-related resources” or call the FLoC APIs. It’s not clear what the eventual inclusion criteria are, but it seems likely that any site which includes advertising will be included in the computation by default. Sites can also opt-out of FLoC entirely using the Permissions-Policy HTTP header but it seems likely that many sites will not do so.
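For reference, the opt-out uses the `interest-cohort` directive of the Permissions-Policy header; a site that wants to be excluded from cohort computation responds with:

```
Permissions-Policy: interest-cohort=()
```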

Second, Google itself will suppress FLoC cohorts which it thinks are too closely correlated with “sensitive” topics. Google provides the details in this whitepaper, but the basic idea is that they will look to see if the users in a given cohort are significantly more likely to visit a set of sites associated with sensitive categories, and if so they will just return an empty cohort ID for that cohort. Similarly, they say they will remove sites which they think are sensitive from the FLoC computation. These defenses seem like they are going to be very difficult to execute in practice for several reasons: (1) the list of sensitive categories may be incomplete or people may not agree on what categories are sensitive, (2) there may be other sites which correlate to sensitive sites but are not themselves sensitive, and (3) clever trackers may be able to learn sensitive information despite these controls. For instance: it might be the case that English-speaking users with FLoC ID X are no more likely to visit sensitive site type A, but French-speaking users are. 

While these mitigations seem useful, they seem to mostly be improvements at the margins, and don’t address the basic issues described above, which we believe require further study by the community.


FLoC is premised on a compelling idea: enable ad targeting without exposing users to risk. But the current design has a number of privacy properties that could create significant risks if it were to be widely deployed in its current form. It is possible that these properties can be fixed or mitigated (we suggest a number of potential avenues in our analysis), but further work on FLoC should focus on addressing these issues.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

The post Privacy analysis of FLoC appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy Blog: Working in the open: Enhancing privacy and security in the DNS

In 2018, we started pioneering work on securing one of the oldest parts of the Internet, one that had till then remained largely untouched by efforts to make the web safer and more private: the Domain Name System (DNS). We passed a key milestone in that endeavor last year, when we rolled out DNS-over-HTTPS (DoH) technology by default in the United States, thus improving privacy and security for millions of people. Given the transformative nature of this technology and in line with our mission commitment to transparency and collaboration, we have consistently sought to implement DoH thoughtfully and inclusively. Today we’re sharing our latest update on that continued effort.

Between November 2020 and January 2021 we ran a public comment period, to give the broader community who care about the DNS – including human rights defenders; technologists; and DNS service providers – the opportunity to provide recommendations for our future DoH work. Specifically, we canvassed input on our Trusted Recursive Resolver (TRR) policies, the set of privacy, security, and integrity commitments that DNS recursive resolvers must adhere to in order to be considered as default partner resolvers for Mozilla’s DoH roll-out.

We received rich feedback from stakeholders across the world, and we continue to reflect on how it can inform our future DoH work and our TRR policies. As we continue that reflection, we’re today publishing the input we received during the comment period – acting on a commitment to transparency that we made at the outset of the process. You can read the comments here.

During the comment period and prior, we received substantial input on the blocklist publication requirement of our TRR policies. This requirement means that resolvers in our TRR programme must publicly release the list of domains that they block access to. This blocking could be the result of either legal requirements that the resolver is subject to, or a user explicitly consenting to certain forms of DNS blocking. We are aware of the downsides associated with blocklist publication in certain contexts, and one of the primary reasons for undertaking our comment period was to solicit constructive feedback and suggestions on how best to ensure meaningful transparency when DNS blocking takes place. Therefore, while we reflect on the input regarding our TRR policies and solutions for blocking transparency, we are relaxing this requirement: current and prospective TRR partners will no longer be required to publish DNS blocklists.

DoH is a transformative technology. It is relatively new and, as such, is of interest to a variety of stakeholders around the world. As we bring the privacy and security benefits of DoH to more Firefox users, we will continue our proactive engagement with internet service providers, civil society organisations, and everyone who cares about privacy and security in the internet ecosystem.

We look forward to this collaborative work. Stay tuned for more updates in the coming months.

The post Working in the open: Enhancing privacy and security in the DNS appeared first on Open Policy & Advocacy.

Firefox UX: Content design considerations for the new Firefox

How we collaborated on a major redesign to clean up debt and refresh the product.

Image of a Firefox browser window with the application menu opened on the right.

Introducing the redesigned Firefox browser, featuring the Alpenglow theme

We just launched a major redesign of the Firefox desktop browser to 240 million users. The effort was so large that we put our full content design team — all two of us — on the case. Over the course of the project, we updated nearly 1,000 strings, re-architected our menus, standardized content patterns, established new principles, and cleaned up content debt.

Creating and testing language to inform visual direction

The primary goal of the redesign was to make Firefox feel modern. We needed to concretize that term to guide the design and content decisions, as well as to make the measurement of visual aesthetics more objective and actionable.

To do this, we used the Microsoft Desirability Toolkit, which measures people’s attitudes towards a UI with a controlled vocabulary test. Content design worked with our UX director to identify adjectives that could embody what “modern” meant for our product. The UX team used those words for early visual explorations, which we then tested in a qualitative usertesting.com study.

Based on the results, we had an early idea of where the designs were meeting goals and where we could make adjustments.

Image of a word cloud that includes words like ‘clean,’ ‘easy,’ and ‘simple, as well as two comments from research participants about the application menu redesign.

Sampling of qualitative feedback from the visual appeal test with word cloud and participant comments.

Improving way-finding in menus

Over time, our application menu had grown unwieldy. Sub-menus proliferated like dandelions. It was difficult to scan, resulting in high cognitive load. Groupings of items were not intuitive. By re-organizing the items, prioritizing high-value actions, using clear language, and removing icons, the new menu better supports people’s ability to move quickly and efficiently in the Firefox browser.

To finalize the menu’s information architecture, we leveraged a variety of inputs. We studied usage data, reviewed past user research, and referenced external sources like the Nielsen Norman Group for menu design best practices. We also consulted with product managers to understand the historical context of prior decisions.

Before and after images of the redesigned Firefox application menu.

The Firefox application menu, before and after the information architecture redesign.

Image of the redesigned Firefox application menu with annotations about what changed.

Changes made to the Firefox application menu include removing icons, grouping like items together, and reducing the number of sub-menus.

As a final step, we created principles to document the rationale behind the menu redesign so a consistent approach could be applied to other menu-related decisions across the product and platforms.

Image of content design principles for menus, such as ‘Use icons sparingly’ and ‘Write options as verb phrases.’

Content design developed these principles to help establish a consistent approach for other menus in the product.

Streamlining high-visibility messages

Firefox surfaces a number of messages to users while they use the product. Those messages had dated visuals, inconsistent presentation, and clunky copy.

We partnered with our UX and visual designers to redesign those message types using a content-first approach. By approaching the redesign this way, we better ensured the resulting components supported the message needs. Along the way, we were able to make some improvements to the existing copy and establish guidelines so future modals, infobars, and panels would be higher quality.

Cleaning up paper cuts in modal dialogs

A modal sits on top of the main content of a webpage. It’s a highly intrusive message that disables background content and requires user interaction. By redesigning it we made one of the most interruptive browsing moments smoother and more cohesive.

Before and after images of a redesigned Firefox modal dialog. Content decisions are highlighted.

Annotated example of the content decisions in a redesigned Firefox modal dialog.

Defining new content patterns for permissions panels

Permissions panels get triggered when you visit certain websites. For example, a website may request to send you notifications, know your location, or gain access to your camera and microphone. We addressed inconsistencies and standardized content patterns to reduce visual clutter. The redesigned panels are cleaner and more concise.

Before and after images of a redesigned Firefox permissions panel. Content decisions are highlighted.

Annotated example of the content decisions in a redesigned Firefox permissions panel.

Closing thoughts

This major refresh appears simple and somewhat effortless, which was the goal. A large amount of work happened behind the scenes to make that end result possible — a whole lot of auditing, iteration, communication, collaboration, and reviews. As usual, the lion’s share of content design happened before we put ‘pen to paper.’

Like any major renovation project, we navigated big dreams, challenging constraints, tough compromises, and a whole lot of dust. Software is never ‘done,’ but we cleared significant content weeds and co-created a future-forward design experience.

As anyone who has contributed to a major redesign knows, this involved months of collaboration between our user experience team, engineers, and product managers, as well as our partners in localization, accessibility, and quality assurance. We were fortunate to work with such a smart, hard-working group.

This post was originally published on Medium.

Mozilla Attack & Defense: Eliminating Data Races in Firefox – A Technical Report

We successfully deployed ThreadSanitizer in the Firefox project to eliminate data races in our remaining C/C++ components. In the process, we found several impactful bugs and can safely say that data races are often underestimated in terms of their impact on program correctness. We recommend that all multithreaded C/C++ projects adopt the ThreadSanitizer tool to enhance code quality.

What is ThreadSanitizer?

ThreadSanitizer (TSan) is compile-time instrumentation to detect data races according to the C/C++ memory model on Linux. It is important to note that these data races are considered undefined behavior within the C/C++ specification. As such, the compiler is free to assume that data races do not happen and perform optimizations under that assumption. Detecting bugs resulting from such optimizations can be hard, and data races often have an intermittent nature due to thread scheduling.

Without a tool like ThreadSanitizer, even the most experienced developers can spend hours on locating such a bug. With ThreadSanitizer, you get a comprehensive data race report that often contains all of the information needed to fix the problem.

An example of a ThreadSanitizer report, showing where each thread is reading/writing, the location they both access, and where the threads were created. (ThreadSanitizer output for this example program, shortened for the article.)

One important property of TSan is that, when properly deployed, the data race detection does not produce false positives. This is incredibly important for tool adoption, as developers quickly lose faith in tools that produce uncertain results.

Like other sanitizers, TSan is built into Clang and can be used with any recent Clang/LLVM toolchain. If your C/C++ project already uses e.g. AddressSanitizer (which we also highly recommend), deploying ThreadSanitizer will be very straightforward from a toolchain perspective.

Challenges in Deployment

Benign vs. Impactful Bugs

Despite ThreadSanitizer being a very well designed tool, we had to overcome a variety of challenges at Mozilla during the deployment phase. The most significant issue we faced was that it is really difficult to prove that data races are actually harmful at all and that they impact the everyday use of Firefox. In particular, the term “benign” came up often. Benign data races acknowledge that a particular data race is actually a race, but assume that it does not have any negative side effects.

While benign data races do exist, we found (in agreement with previous work on this subject [1] [2]) that data races are very easily misclassified as benign. The reasons for this are clear: It is hard to reason about what compilers can and will optimize, and confirmation for certain “benign” data races requires you to look at the assembler code that the compiler finally produces.

Needless to say, this procedure is often much more time consuming than fixing the actual data race and also not future-proof. As a result, we decided that the ultimate goal should be a “no data races” policy that declares even benign data races as undesirable due to their risk of misclassification, the required time for investigation and the potential risk from future compilers (with better optimizations) or future platforms (e.g. ARM).

However, it was clear that establishing such a policy would require a lot of work, both on the technical side as well as in convincing developers and management. In particular, we could not expect a large amount of resources to be dedicated to fixing data races with no clear product impact. This is where TSan’s suppression list came in handy:

We knew we had to stop the influx of new data races but at the same time get the tool usable without fixing all legacy issues. The suppression list (in particular the version compiled into Firefox) allowed us to temporarily ignore data races once we had them on file and ultimately bring up a TSan build of Firefox in CI that would automatically avoid further regressions. Of course, security bugs required specialized handling, but were usually easy to recognize (e.g. racing on non-thread safe pointers) and were fixed quickly without suppressions.
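As a sketch of how this works (the entries below are hypothetical, not Firefox’s actual list), a TSan suppressions file matches reports by symbol or file name and can be compiled into the binary or passed at runtime:

```
# Hypothetical suppressions file, e.g. passed with
#   TSAN_OPTIONS="suppressions=tsan_suppressions.txt"
# Suppress a known race in one function while a fix is pending:
race:LegacyTelemetry::UpdateCounters
# Suppress all reports originating in an upstream library:
race:third_party/somelib/*
```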

To help us understand the impact of our work, we maintained an internal list of all the most serious races that TSan detected (ones that had side-effects or could cause crashes). This data helped convince developers that the tool was making their lives easier while also clearly justifying the work to management.

In addition to this qualitative data, we also opted for a more quantitative approach: we looked at all the bugs we found over a year and how they were classified. Of the 64 bugs we looked at, 34% were classified as “benign” and 22% as “impactful” (the rest hadn’t been classified).

We knew there was a certain amount of misclassified benign issues to be expected, but what we really wanted to know was: Do benign issues pose a risk to the project? Assuming that all of these issues truly had no impact on the product, are we wasting a lot of resources on fixing them? Thankfully, we found that the majority of these fixes were trivial and/or improved code quality.

The trivial fixes were mostly turning non-atomic variables into atomics (20%), adding permanent suppressions for upstream issues that we couldn’t address immediately (15%), or removing overly complicated code (20%). Only 45% of the benign fixes actually required some sort of more elaborate patch (as in, the diff was larger than just a few lines of code and did not just remove code).
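The first, most common kind of trivial fix usually amounts to a one-line type change. As a minimal, hypothetical sketch (names invented), a plain counter touched by several threads becomes a `std::atomic`:

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Before: `int gCounter;` incremented from several threads (a data race
// that TSan reports, and where increments can be lost).
// After: the same counter as an atomic, which TSan accepts and which
// guarantees no increments are lost.
std::atomic<int> gCounter{0};

int CountInParallel(int aThreads, int aIncrementsPerThread) {
  gCounter.store(0);
  std::vector<std::thread> threads;
  for (int i = 0; i < aThreads; ++i) {
    threads.emplace_back([aIncrementsPerThread] {
      for (int j = 0; j < aIncrementsPerThread; ++j) {
        gCounter.fetch_add(1, std::memory_order_relaxed);
      }
    });
  }
  for (auto& t : threads) {
    t.join();
  }
  return gCounter.load();
}
```

With the plain `int` version, running four threads like this would usually lose some increments; with the atomic, the total is always exact.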

We concluded that benign issues were not a major resource sink, and that their cost was well acceptable for the overall gains that the project provided.

False Positives?

As mentioned in the beginning, TSan does not produce false positive data race reports when properly deployed, which includes instrumenting all code that is loaded into the process and avoiding primitives that TSan doesn’t understand (such as atomic fences). For most projects these conditions are trivial to meet, but larger projects like Firefox require a bit more work. Thankfully this work largely amounted to a few lines in TSan’s robust suppression system.

Instrumenting all code in Firefox isn’t currently possible because it needs to use shared system libraries like GTK and X11. Fortunately, TSan offers the “called_from_lib” feature that can be used in the suppression list to ignore any calls originating from those shared libraries. Our other major source of uninstrumented code was build flags not being properly passed around, which was especially problematic for Rust code (see the Rust section below).
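For illustration, a suppression file covering both cases might contain entries like the following. The directives (`called_from_lib:`, `race:`) are TSan’s own; the library and path patterns here are invented:

```
# Ignore any report whose stack originates in an uninstrumented
# shared system library.
called_from_lib:libglib-2.0.so
called_from_lib:libX11.so

# Permanent suppression for an upstream issue we cannot fix immediately.
race:third_party/some_vendored_library
```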

As for unsupported primitives, the only issue we ran into was the lack of support for fences. Most fences were the result of a standard atomic reference counting idiom which could be trivially replaced with an atomic load in TSan builds. Unfortunately, fences are fundamental to the design of the crossbeam crate (a foundational concurrency library in Rust), and the only solution for this was a suppression.
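The idiom in question looks roughly like this hypothetical sketch (`MOZ_TSAN` is a stand-in for whatever build flag selects the TSan configuration):

```cpp
#include <atomic>
#include <cassert>

// Sketch of the standard atomic reference counting idiom. The release
// path normally pairs a release-ordered decrement with an acquire
// *fence*, which TSan does not understand; under TSan, the fence can be
// replaced by an acquire load with an equivalent synchronization effect.
class RefCounted {
 public:
  explicit RefCounted(bool* aDestroyed) : mDestroyed(aDestroyed) {}
  ~RefCounted() { *mDestroyed = true; }

  void AddRef() { mRefCnt.fetch_add(1, std::memory_order_relaxed); }

  void Release() {
    if (mRefCnt.fetch_sub(1, std::memory_order_release) == 1) {
#ifdef MOZ_TSAN  // hypothetical build flag
      // TSan-friendly variant: an acquire load instead of a fence.
      (void)mRefCnt.load(std::memory_order_acquire);
#else
      std::atomic_thread_fence(std::memory_order_acquire);
#endif
      delete this;
    }
  }

 private:
  std::atomic<int> mRefCnt{1};
  bool* mDestroyed;
};
```

The destruction behavior is identical either way; only the way the acquire ordering is expressed changes.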

We also found that there is a (well known) false positive in deadlock detection that is, however, very easy to spot and does not affect data race detection/reporting at all. In a nutshell, any deadlock report that only involves a single thread is likely this false positive.

The only true false positive we found so far turned out to be a rare bug in TSan itself and was fixed in the tool. However, developers claimed on various occasions that a particular report must be a false positive. In all of these cases, it turned out that TSan was indeed right and the problem was just very subtle and hard to understand. This again confirms that we need tools like TSan to help us eliminate this class of bugs.

Interesting Bugs

Currently, the TSan bug-o-rama contains around 20 bugs. We’re still working on fixes for some of these bugs and would like to point out several particularly interesting/impactful ones.

Beware Bitfields

Bitfields are a handy little convenience for saving space when storing lots of different small values. For instance, rather than having 30 bools taking up 30 bytes, they can all be packed into 4 bytes. For the most part this works fine, but it has one nasty consequence: different pieces of data now alias. This means that accessing “neighboring” bitfields is actually accessing the same memory, and therefore a potential data race.

In practical terms, this means that if two threads are writing to two neighboring bitfields, one of the writes can get lost, as both of those writes are actually read-modify-write operations of all the bitfields:
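A minimal sketch of the situation (field names are hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Two logically separate flags that share the same 4 bytes of storage.
struct Flags {
  uint32_t isInitialized : 1;
  uint32_t hasPendingWork : 1;
  // ... up to 30 more single-bit fields would still fit in these 4 bytes.
};

// `aFlags.isInitialized = 1` compiles to roughly:
//   1. load all 32 bits
//   2. set the isInitialized bit in the loaded value
//   3. store all 32 bits back
// If another thread performs the same read-modify-write on
// hasPendingWork between steps 1 and 3, one of the two updates is
// silently lost.
void MarkInitialized(Flags& aFlags) {
  aFlags.isInitialized = 1;
}
```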

If you’re familiar with bitfields and actively thinking about them, this might be obvious, but when you’re just saying myVal.isInitialized = true you may not think about or even realize that you’re accessing a bitfield.

We have had many instances of this problem, but let’s look at bug 1601940 and its (trimmed) race report:

When we first saw this report, it was puzzling because the two threads in question touch different fields (mAsyncTransformAppliedToContent vs. mTestAttributeAppliers). However, as it turns out, these two fields are both adjacent bitfields in the class.

This was causing intermittent failures in our CI and cost a maintainer of this code valuable time. We find this bug particularly interesting because it demonstrates how hard it is to diagnose data races without appropriate tooling and we found more instances of this type of bug (racy bitfield write/write) in our codebase. One of the other instances even had the potential to cause network loads to supply invalid cache content, another hard-to-debug situation, especially when it is intermittent and therefore not easily reproducible.

We encountered this enough that we eventually introduced a MOZ_ATOMIC_BITFIELDS macro that generates bitfields with atomic load/store methods. This allowed us to quickly fix problematic bitfields for the maintainers of each component without having to redesign their types.
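The real macro is more elaborate, but the core idea can be sketched like this (a simplified, hypothetical version, not the actual MOZ_ATOMIC_BITFIELDS implementation):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// All flag bits live in a single atomic word, and each field is accessed
// through load/store methods. Stores to different bits become atomic
// read-modify-write operations, so concurrent updates to neighboring
// fields can no longer lose each other.
class AtomicFlags {
 public:
  bool Load(unsigned aBit) const {
    return (mBits.load(std::memory_order_relaxed) >> aBit) & 1u;
  }

  void Store(unsigned aBit, bool aValue) {
    if (aValue) {
      mBits.fetch_or(1u << aBit, std::memory_order_relaxed);
    } else {
      mBits.fetch_and(~(1u << aBit), std::memory_order_relaxed);
    }
  }

 private:
  std::atomic<uint32_t> mBits{0};
};
```

The layout stays a single 32-bit word, so adopting this in an existing class doesn’t require redesigning its types.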

Oops That Wasn’t Supposed To Be Multithreaded

We also found several instances of components which were explicitly designed to be single-threaded accidentally being used by multiple threads, such as bug 1681950:

The race itself here is rather simple: we are racing on the same file through stat64, and understanding the report was not the problem this time. However, as can be seen from frame 10, this call originates from the PreferencesWriter, which is responsible for writing changes to the prefs.js file, the central storage for Firefox preferences.

It was never intended for this to be called on multiple threads at the same time and we believe that this had the potential to corrupt the prefs.js file. As a result, during the next startup the file would fail to load and be discarded (reset to default prefs). Over the years, we’ve had quite a few bug reports related to this file magically losing its custom preferences but we were never able to find the root cause. We now believe that this bug is at least partially responsible for these losses.

We think this is a particularly good example of a failure for two reasons: it was a race that had more harmful effects than just a crash, and it caught a larger logic error of something being used outside of its original design parameters.

Late-Validated Races

On several occasions we encountered a pattern that lies on the boundary of benign and that we think merits some extra attention: intentionally reading a value racily, but then later doing checks that properly validate it. For instance, code like:
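A hypothetical sketch of the pattern (names invented; this is not the SQLite code):

```cpp
#include <cassert>
#include <mutex>

// The "late-validated race" pattern: a racy, unsynchronized fast-path
// read, followed by a properly locked check.
struct Cache {
  bool mMaybeHasData = false;  // written by other threads WITHOUT the lock
  std::mutex mLock;
  bool mHasData = false;
  int mData = 0;

  int Get() {
    // Racy read: even though the locked check below "validates" the
    // result, this unsynchronized access is already undefined behavior.
    if (!mMaybeHasData) {
      return -1;  // fast path that skips taking the lock
    }
    std::lock_guard<std::mutex> guard(mLock);
    return mHasData ? mData : -1;  // the "real" check happens here
  }
};
```

The fix is as small as declaring `mMaybeHasData` as `std::atomic<bool>` and reading it with a relaxed load, which on mainstream hardware costs essentially nothing.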

See, for example, this instance we encountered in SQLite.

Please Don’t Do This. These patterns are really fragile and they’re ultimately undefined behavior, even if they generally work right. Just write proper atomic code — you’ll usually find that the performance is perfectly fine.

What about Rust?

Another difficulty that we had to solve during TSan deployment was due to part of our codebase now being written in Rust, which has much less mature support for sanitizers. This meant that we spent a significant portion of our bringup with all Rust code suppressed while that tooling was still being developed.

We weren’t particularly concerned about our Rust code containing a lot of races, but rather about races in C++ code being obfuscated by passing through Rust. Safe Rust makes data races far less likely, although, as we found, not impossible.

The hardest part is the need to rebuild the Rust standard library with TSan instrumentation. On nightly there is an unstable feature, -Zbuild-std, that lets us do exactly that, but it still has a lot of rough edges.

Our biggest hurdle with build-std was that it’s currently incompatible with vendored build environments, which Firefox uses. Fixing this isn’t simple because cargo’s tools for patching in dependencies aren’t designed for affecting only a subgraph (i.e. just std and not your own code). So far, we have mitigated this by maintaining a small set of patches on top of rustc/cargo which implement this well enough for Firefox but need further work to go upstream.

But with build-std hacked into working for us, we were able to instrument our Rust code and were happy to find that there were very few problems! Most of the things we discovered were C++ races that happened to pass through some Rust code and had therefore been hidden by our blanket suppressions.

We did however find two pure Rust races:

The first was bug 1674770, which was a bug in the parking_lot library. This Rust library provides synchronization primitives and other concurrency tools and is written and maintained by experts. We did not investigate the impact, but the issue was a couple of atomic orderings being too weak, and it was fixed quickly by the authors. This is yet another example of how difficult it is to write bug-free concurrent code.

The second was bug 1686158, which was some code in WebRender’s software OpenGL shim. They were maintaining some hand-rolled shared-mutable state using raw atomics for part of the implementation but forgot to make one of the fields atomic. This was easy enough to fix.

Overall Rust appears to be fulfilling one of its original design goals: allowing us to write more concurrent code safely. Both WebRender and Stylo are very large and pervasively multi-threaded, but have had minimal threading issues. The issues we did find were mistakes in the implementations of low-level and explicitly unsafe multithreading abstractions — and those mistakes were simple to fix.

This is in contrast to many of our C++ races, which often involved things being randomly accessed on different threads with unclear semantics, necessitating non-trivial refactorings of the code.


Conclusion

Data races are an underestimated problem. Due to their complexity and intermittency, we often struggle to identify them, locate their cause and judge their impact correctly. In many cases, this is also a time-consuming process, wasting valuable resources. ThreadSanitizer has proven to be not just effective in locating data races and providing adequate debug information, but also to be practical even on a project as large as Firefox.


Acknowledgements

We would like to thank the authors of ThreadSanitizer for providing the tool and in particular Dmitry Vyukov (Google) for helping us with some complex, Firefox-specific edge cases during deployment.

This Week In RustThis Week in Rust 394

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is cargo-sort, a cargo subcommand to sort your Cargo.toml's dependencies and workspace members.

Thanks to jplatte for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

267 pull requests were merged in the last week

Rust Compiler Performance Triage

Some good improvements, and a few regressions. No large changes.

Triage done by @simulacrum. Revision range: 1160cf..a50d721

3 Regressions, 3 Improvements, 1 Mixed; 1 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweede golf

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

As the tradeoffs in software engineering change over time, so does the ideal solution. Some 40 years ago when the first C standards were written down, by people no less competent than those that work on Rust today, the design of the language and the list of behaviours not defined likely made much more sense in context of back then than they do right now. It is not all that unlikely that some years down the line the choices made by Rust won't make all that much of sense as they do today, too.

Simonas on rust-internals

Thanks to Kill The Mule for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Mozilla BlogBecome a better writer with these five extensions for Firefox

Sometimes the hardest thing to write is the first word. It means you’re committed. No looking back now. You can’t leave that lone word just sitting there. Better add a second word, then a third. Now you’re on your way. 

Procrastination can be a major blocker for writers. While putting everything off until the last minute may work for a few thrill seekers out there, for most of us it means the work suffers with little time to reflect on early drafts or an opportunity to improve your phrasing and flow. If you’re looking for help on writing projects, here are some great browser extensions for Firefox that are perfectly suited for writers. 

Focus first

I’ll get started on my paper after a quick detour on Twitter and my newsfeed… you said to yourself two hours ago. 

LeechBlock NG and Block Site have many of the same core functions that effectively allow you to block specific websites entirely, or for certain designated periods, so you can’t even be tempted to turn your attention to clickbait. 

But they each have a few distinct features, too. LeechBlock NG has all sorts of highly customizable ways to restrict yourself—from blocking just portions of certain sites (e.g. you can’t access the YouTube homepage but you can see specific videos) to setting limits on specific days (e.g. no Facebook Monday through Friday) to 60-second delayed access to some websites to stave off instant gratification and make sure you’re not about to make a horrible time-sucking mistake! Block Site has a nice custom redirection feature, so you’re taken to more productive websites whenever you try to visit one of your online weak spots; you can also leave yourself custom messages whenever you try to stray (Get back to work, buddy, you can do it!).

What if your writing project involves online research, so you can’t avoid potentially troublesome news or other information sites? Tranquility Reader removes everything but a web page’s words. Gone are distracting ads, images, tempting links to other stories—everything but the words you want to focus on.

Save all your writing inspiration in one place for later with Pocket

Get Pocket

Improve your grammar and clean up typos

Supported in more than two-dozen languages, LanguageTool is like having your own personal copy editor with you wherever you write on the web. It automatically finds misspellings and awkward phrasing and suggests helpful corrections; it even spots common forehead-slapping mix-ups like you’re/your and there/their.

Time management 

Tomato Clock is a simple but effective tool that helps break up your work sessions into focused “tomato” intervals inspired by the Pomodoro technique. Let’s say you prefer to write in 45-minute chunks before taking a break. The Tomato Clock extension lets you set your preferred work intervals and break times, and uses Firefox’s built-in notification system to alert you when time is up.

The post Become a better writer with these five extensions for Firefox appeared first on The Mozilla Blog.

The Mozilla Blog11 secret tips for Firefox that will make you an internet pro

With Firefox, getting around the internet is fast, straightforward and easy. Now you can go beyond the basics with these secret and not-so-secret tricks that make your internetting experience even more fun. Read on for some of our favorite Firefox features that you may not know about… yet.

1. Send tabs across the room

If you’ve ever been reading an article, recipe or website on your phone and thought it would look better and bigger on your computer, what do you do? Email or text yourself the link, right? Friends, there is a better way to do this with Firefox. Send that tab (or several tabs) to any device where you’re logged into your Firefox account, and it’ll be waiting for you when you get there. It also works in reverse, so you can send a tab from your computer to your phone as well. Pick up where you left off with send tabs in Firefox.

How to do it:

  • From Firefox on your computer: Right-click (PC) or two-finger tap (Mac) on a tab and select Send Tab to Device.
  • From Firefox on your phone: Tap the share icon, which will open up the Send to Device option.
  • Don’t see these options? Sign into your Firefox account on both devices, then they’ll appear. 

2. Search for a needle in a tabstack

Tab hoarders, we see you. Heck, we are you. Don’t ever let anyone shame you for having dozens (and dozens) of open tabs, implying you don’t have it together and can’t find the right one. Instead, dazzle them with this trick. Add a % sign to your URL search to search specifically through all your open tabs, including tabs in different windows. Then you can click over to the already open tab instead of creating a duplicate, not that anyone has ever done that. 

Bonus tip: If you love that tab search trick, try searching through your Bookmarks with * or your History with ^. 

3. Screenshots made simple

Ever need a screenshot, but don’t remember how to do it? Or you snapped one successfully but lost it in the bowels of your hard drive? The built-in Firefox screenshot feature takes all the stress away. Right-click (PC) or two-finger tap (Mac) to call up the Firefox action menu (see tip 11 below) and scroll to Take Screenshot. Bam, you’re screenshotting! 

Bonus tip: Here’s how to add a screenshot button to your Firefox toolbar so it’s at your fingertips.

4. Reopen a closed tab

Tabs are life, and life is in tabs. Closing the wrong one leads to that sinking oh no feeling we know all too well. So we made a fix for that.

How to do it: Type Ctrl + shift + T for PC. Command + shift + T for Mac. Boom, tab instantly resurrected. You can even do that multiple times to reopen multiple closed tabs. 

5. Pocket the best for later

There is so much good stuff to read and watch online that you’re not going to finish the internet in your lifetime. But, you can save the best for later. Click the Pocket icon to save any article, video or page straight from Firefox. Then, when you’ve got some extra time on your hands, it’ll be waiting in the Pocket app (available for Android and iOS) on your phone. How brilliant is that?

Where to get it: Look for the Pocket symbol to the right of your toolbar to get started saving any article, video or page from Firefox.

6. Video multitasking with picture-in-picture

Got things to do and games to watch? One of our most popular features, picture-in-picture, lets you pop out a video from Firefox so that it plays over the other windows on your screen. It’s perfect for watching things on the side — sports, hearings, live news, soothing scenes and even cute animals. Plus, it has multi-mode. 

Where to get it: Hover over any playing video and look for the picture-in-picture icon. Try it:

7. Sample any color with the built-in eyedropper 

This one is for the creators out there. The web is a colorful place, and every color has a code called a HEX value assigned to it. With the eyedropper tool built into Firefox, you can go around the web quickly sampling colors and copying the HEX value to use elsewhere. 

How to use it: Click the main menu in the upper right corner, scroll to More Tools and then to the eyedropper. Have fun sampling!

8. Fuggedaboutit!

There might be times when you want to erase your recent browsing history super fast, and when that time comes, Firefox is here for you with the forget feature. Instead of asking a lot of complex technical questions, forget asks you only one: how much do you want to forget? Once you tell Firefox you want to forget the last 5 minutes, or 2 hours or 24 hours, it takes care of the rest. This is extra handy if you share a computer with friends or family. Or if you forgot to open a private browsing window. Or if you just don’t want your recent browsing history (royal family gossip) reappearing.

Where to get it:

  • Click the main menu in the upper right corner and select More Tools.
  • Next, select Customize Toolbar.
  • Locate the Forget shortcut.
  • Grab it with your mouse and drag it up to the Toolbar.
  • Then click the Done button. 
  • Now the Forget button is at your fingertips everywhere you go in Firefox.

9. Password headache fixer

If you’ve heard it once, you’ve heard it a million times — use 👏 strong 👏 unique 👏 passwords. You might complain that it’s easier said than done, but in Firefox it’s actually so easy to do. With the built-in password generator, you can create and save strong, unique passwords without ever leaving Firefox. No extra downloads, software or sketchy websites needed. Bonus tip: Sync your Firefox account between multiple devices to get your logins and passwords on your mobile.

How to use it: 

  • Right-click in the Password field and select Suggest Strong Password.
  • To find saved passwords on your computer: Click the main menu in the upper right corner and select Passwords.
  • To find saved passwords on your phone, go to your Firefox settings and select Logins and Passwords.

Bonus tip: Set up a Primary Password to guard all those strong, unique passwords.

10. Restore session

Everyone has been through the struggle of restarting their computer or just the browser and losing tabs. It’s soul draining. By default, Firefox starts with a single open window. If you want to get back to where you left off, use the session restore feature that revives everything you had open last time. Tap the main menu in the upper right. Click History, then select Restore Previous Session. Soul restored.

Make it permanent: You can set Firefox to always show the windows and tabs from your previous session each time you start Firefox. Here’s how.

11. The hidden menu

Maybe you know about this one, but for those who do not, it’s essential. You need to know about the hidden Firefox action menu that appears and changes depending on where you open it. When you open the menu on a website, an image or the menu bar, the actions are contextual to what you’re doing at the moment. Smart! Fun fact: The top 25 context menu entries cover 97% of the things people want to do. 

How to use it: Right-click (PC) or two-finger tap (Mac) throughout Firefox.

Now that you’re practically a pro, pass this article on to your friends to share the secret tips of Firefox.

The post 11 secret tips for Firefox that will make you an internet pro appeared first on The Mozilla Blog.

Dennis SchubertWebCompat Tale: CSS Flexbox and the order of things

Have you thought about the order of things recently? Purely from a web development perspective, I mean.

The chances are that you, just like me, usually don’t spend too much time thinking about the drawing order of elements on your site when writing HTML and CSS. And that’s generally fine because things usually just feel right. Consider the following little example:


  #order-demo-one {
    position: relative;
    height: 100px;
    width: 100px;
  }

  #order-demo-one .box {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    top: 0;
  }

  #order-demo-one .first {
    background-color: blue;
  }

  #order-demo-one .second {
    background-color: red;
  }
<section id="order-demo-one">
  <div class="box first"></div>
  <div class="box second"></div>
</section>


You could probably tell, without even looking at the result, that the second box - the red one - should be “on top”, completely covering up the blue box. After all, both boxes have the same size and the same position, but since the second box is placed after the first box, it’s drawn on top of the first one. To me, this feels pretty intuitive.

Let’s add some Flex to it

Now, let’s make things a bit more complicated. If you’re reading this article, I hope you’re at least slightly familiar with CSS Flexbox. And as you might know, flex-items have an order property, which you can use to reorder the items inside a flex container. Here’s the same example as before, but this time inside a flexbox container, with the items reordered. Note that this demo uses the same source as above, but I’m only showing relevant changes here.


  #order-demo-two {
    display: flex;
  }

  #order-demo-two .first {
    order: 2;
  }

  #order-demo-two .second {
    order: 1;
  }
<section id="order-demo-two">
  <div class="box first"></div>
  <div class="box second"></div>
</section>


Okay, now we used order to swap positions of the first and second boxes. And as you can see in the demo … nothing changed. What? This is where things start becoming a bit counter-intuitive because this test case is actually a bit of a trick question.

Here is what the CSS Flexbox spec says about the order property:

  1. A flex container lays out its content in order-modified document order, starting from the lowest numbered ordinal group and going up. Items with the same ordinal group are laid out in the order they appear in the source document.
  2. This also affects the painting order, exactly as if the flex items were reordered in the source document.
  3. Absolutely-positioned children of a flex container are treated as having order: 0 for the purpose of determining their painting order relative to flex items.

(List points added by me; the original is a single block of text.)

Point 1 is what we intuitively know. An element with order: 2 is shown after order: 1. So far, so good. Point 2, however, says that if you specify order, the elements should behave as if they have been reordered in the HTML. For our example above, this should mean that both of these HTML snippets should behave the same:

<section id="order-demo-two">
  <div class="box first"></div>
  <div class="box second"></div>
</section>
<section id="order-demo-two">
  <div class="box second"></div>
  <div class="box first"></div>
</section>

But we can clearly see that that’s not how it works. That is because in the spec text above, point 3 says that if a flex item is absolutely-positioned, it is always treated as having order: 0, so what we define in our CSS doesn’t actually matter.

Flex order, for real this time.

So instead of having the absolutely positioned element as the flex item, let’s build another demo that has the absolute element inside the flex item.


  #order-demo-three {
    display: flex;
    position: relative;
    height: 100px;
    width: 100px;
  }

  #order-demo-three .first {
    order: 2;
  }

  #order-demo-three .second {
    order: 1;
  }

  #order-demo-three .inner {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    top: 0;
  }

  #order-demo-three .first .inner {
    background-color: blue;
  }

  #order-demo-three .second .inner {
    background-color: red;
  }
<section id="order-demo-three">
  <div class="box first">
    <div class="inner"></div>
  </div>
  <div class="box second">
    <div class="inner"></div>
  </div>
</section>


And now, I might have lost you. Because, as of the time of writing this, what you see as the result depends on which browser you read this blog post in. In Firefox, you’ll see the blue box on top; but pretty much everywhere else, the red box will still be on top.

The question now is: who is right? And instead of just telling you the answer, let’s work it out together. Rule 3 from above does not apply here: The flex items are not absolutely positioned, so the order should be taken into consideration. To check if that’s the case, we can look at rule 2: the code should behave the same if we reorder the elements in the HTML. We can build a test for that:


<section id="order-demo-four">
  <div class="box second">
    <div class="inner"></div>
  </div>
  <div class="box first">
    <div class="inner"></div>
  </div>
</section>


(again, the code is the same as the one in #order-demo-three, but I’m just showing the HTML to keep it easier to read)

If you’re reading this in Firefox, then the last two test cases behave the same: they’ll show the blue box. However, if you’re in Chrome, Safari, or Edge, there will be a difference: the first case will show the red box, the second case shows the blue box. If you now think that this is a bug in Blink and WebKit: you are right, and that bug has been known for a while.

Uh, what now?

This might sound like a super weird edge-case, and that’s probably right. It is a weird edge case. But unfortunately, as with pretty much all things that end up on my desk, we discovered this edge-case by investigating real-world breakage. Here, we received a report about the flight date picker on flydubai.com being broken, where in Firefox, there is an advertising banner on top of the picker. That’s caused by what I described here.

The Blink issue I linked earlier was opened in 2016, and there hasn’t been much progress on it since. I’m not saying this to blame the Google folks; I’m just highlighting that changing things like this is sometimes a bit tricky. While there appears to be a consensus that Firefox is right, changing Chrome to match Firefox could break an unknown number of sites, so you have to be careful when pushing such a change.

For now, I decided to go ahead and add a web-platform-test for this, because there is none yet. Currently, there’s also a cross-browser compat effort, “Compat2021”, going on, and CSS Flexbox is one of the key areas everyone wants to work on to make it a bit less of a pain for web developers. Maybe we can get some progress done on this issue as well. I will certainly try!

And with that, I have to end this post. There is no happy end, there isn’t even a certainty on what - if anything - will happen next. Sometimes, that’s the nature of our work. And I think that’s worth sharing, too.

Hacks.Mozilla.OrgImplementing Private Fields for JavaScript

This post is cross-posted from Matthew Gaudet’s blog

When implementing a language feature for JavaScript, an implementer must make decisions about how the language in the specification maps to the implementation. Sometimes this is fairly simple, where the specification and implementation can share much of the same terminology and algorithms. Other times, pressures in the implementation make it more challenging, requiring or pressuring the implementation strategy to diverge from the language specification.

Private fields is an example of where the specification language and implementation reality diverge, at least in SpiderMonkey, the JavaScript engine which powers Firefox. To understand more, I’ll explain what private fields are, a couple of models for thinking about them, and explain why our implementation diverges from the specification language.

Private Fields

Private fields are a language feature being added to the JavaScript language through the TC39 proposal process, as part of the class fields proposal, which is at Stage 4 in the TC39 process. We will ship private fields and private methods in Firefox 90.

The private fields proposal adds a strict notion of ‘private state’ to the language. In the following example, #x may only be accessed by instances of class A:

class A {
  #x = 10;
}

This means that outside of the class, it is impossible to access that field. This is unlike public fields, as the following example shows:

class A {
  #x = 10; // Private field
  y = 12; // Public field
}

var a = new A();
a.y; // Accessing public field y: OK
a.#x; // Syntax error: reference to undeclared private field

Even various other tools that JavaScript gives you for interrogating objects are prevented from accessing private fields (e.g. Object.getOwnProperty{Symbols,Names} don’t list private fields; there’s no way to use Reflect.get to access them).
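For example, in any engine that ships private fields, the usual reflection entry points come back empty for an instance that only has a private field:

```javascript
class Opaque {
  #x = 10; // private field, invisible to reflection
}

const instance = new Opaque();

// None of the reflection APIs report the private field.
console.log(Object.getOwnPropertyNames(instance)); // []
console.log(Object.getOwnPropertySymbols(instance)); // []
console.log(Reflect.ownKeys(instance)); // []
```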

A Feature Three Ways

When talking about a feature in JavaScript, there are often three different aspects in play: the mental model, the specification, and the implementation.

The mental model provides the high-level thinking that we expect programmers to use mostly. The specification in turn provides the detail of the semantics required by the feature. The implementation can look wildly different from the specification text, so long as the specification semantics are maintained.

These three aspects shouldn’t produce different results for people reasoning through things (though, sometimes a ‘mental model’ is shorthand, and doesn’t accurately capture semantics in edge case scenarios).

We can look at private fields using these three aspects:

Mental Model

The most basic mental model one can have for private fields is what it says on the tin: fields, but private. Now, JS fields become properties on objects, so the mental model is perhaps ‘properties that can’t be accessed from outside the class’.

However, when we encounter proxies, this mental model breaks down a bit; trying to specify the semantics for ‘hidden properties’ and proxies is challenging. (What happens when a Proxy is trying to provide access control to properties, if you aren’t supposed to be able to see private fields with Proxies? Can subclasses access private fields? Do private fields participate in prototype inheritance?) In order to preserve the desired privacy properties, an alternative mental model became the way the committee thinks about private fields.

This alternative model is called the ‘WeakMap’ model. In this mental model you imagine that each class has a hidden weak map associated with each private field, such that you could hypothetically ‘desugar’

class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

into something like

class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }
  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

The WeakMap model is, surprisingly, not how the feature is written in the specification, but it is an important part of the design intention behind them. I will cover a bit later how this mental model shows up in practice.
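Both forms are runnable today and, from the outside, behave the same (the names here follow the desugaring above):

```javascript
class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

// The hand-written "WeakMap model" equivalent of class A.
class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }
  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

console.log(new A().g()); // 15
console.log(new A_desugared().g()); // 15
```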


Specification

The actual specification changes are provided by the class fields proposal, specifically the changes to the specification text. I won’t cover every piece of this specification text, but I’ll call out specific aspects to help elucidate the differences between specification text and implementation.

First, the specification adds the notion of [[PrivateName]], which is a globally unique field identifier. This global uniqueness is to ensure that two classes cannot access each other’s fields merely by having the same name.

function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

let [A, B] = [0, 1].map(createClass);
let a = new A();
let b = new B();

A.getX(a); // Allowed: Same class
A.getX(b); // TypeError, because b is an instance of a different class.

The specification also adds a new ‘internal slot’, which is a specification level piece of internal state associated with an object in the spec, called [[PrivateFieldValues]] to all objects. [[PrivateFieldValues]] is a list of records of the form:

{
  [[PrivateName]]: Private Name,
  [[PrivateFieldValue]]: ECMAScript value
}

To manipulate this list, the specification adds four new algorithms:

  1. PrivateFieldFind
  2. PrivateFieldAdd
  3. PrivateFieldGet
  4. PrivateFieldSet

These algorithms largely work as you would expect: PrivateFieldAdd appends an entry to the list (though, in the interest of trying to provide errors eagerly, if a matching Private Name already exists in the list, it will throw a TypeError. I’ll show how that can happen later). PrivateFieldGet retrieves a value stored in the list, keyed by a given Private name, etc.
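A rough JavaScript model of the four algorithms, keeping [[PrivateFieldValues]] as a per-object side list (the function and record names here are mine, not the specification's exact text):

```javascript
// A per-object side list standing in for the [[PrivateFieldValues]] internal slot.
const privateFieldValues = new WeakMap(); // object -> [{ name, value }, ...]

function privateFieldFind(obj, name) {
  const list = privateFieldValues.get(obj) || [];
  return list.find((record) => record.name === name);
}

function privateFieldAdd(obj, name, value) {
  if (privateFieldFind(obj, name) !== undefined) {
    // Eager error: the field is already present on this object.
    throw new TypeError("private field already present");
  }
  const list = privateFieldValues.get(obj) || [];
  list.push({ name, value });
  privateFieldValues.set(obj, list);
}

function privateFieldGet(obj, name) {
  const record = privateFieldFind(obj, name);
  if (record === undefined) throw new TypeError("no such private field");
  return record.value;
}

function privateFieldSet(obj, name, value) {
  const record = privateFieldFind(obj, name);
  if (record === undefined) throw new TypeError("no such private field");
  record.value = value;
}
```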

The Constructor Override Trick

When I first started to read the specification, I was surprised to see that PrivateFieldAdd could throw. Given that it was only called from a constructor on the object being constructed, I had fully expected that the object would be freshly created, and therefore you’d not need to worry about a field already being there.

This turns out to be possible, a side effect of some of the specification’s handling of constructor return values. To be more concrete, the following is an example provided to me by André Bargull, which shows this in action.

class Base {
  constructor(o) {
    return o; // Note: We are returning the argument!
  }
}

class Stamper extends Base {
  #x = "stamped";
  static getX(o) {
    return o.#x;
  }
}

Stamper is a class which can ‘stamp’ its private field onto any object:

let obj = {};
new Stamper(obj); // obj now has private field #x
Stamper.getX(obj); // => "stamped"

This means that when we add private fields to an object we cannot assume it doesn’t have them already. This is where the pre-existence check in PrivateFieldAdd comes into play:

let obj2 = {};
new Stamper(obj2);
new Stamper(obj2); // Throws 'TypeError' due to pre-existence of private field

This ability to stamp private fields into arbitrary objects interacts with the WeakMap model a bit here as well. For example, given that you can stamp private fields onto any object, that means you could also stamp a private field onto a sealed object:

var obj3 = {};
Object.seal(obj3);
new Stamper(obj3);
Stamper.getX(obj3); // => "stamped"

If you imagine private fields as properties, this is uncomfortable, because it means you’re modifying an object that a programmer sealed against future modification. However, using the weak map model, it is totally acceptable, as you’re only using the sealed object as a key in the weak map.

PS: Just because you can stamp private fields into arbitrary objects, doesn’t mean you should: Please don’t do this.
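Seen through the WeakMap model, the sealed-object case stops being surprising: sealing an object doesn't prevent it from being used as a WeakMap key.

```javascript
const hidden = new WeakMap();
const sealed = Object.seal({});

// The sealed object itself is never mutated; it is only a key.
hidden.set(sealed, "stamped");
console.log(hidden.get(sealed)); // "stamped"
console.log(Object.isSealed(sealed)); // true
```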

Implementing the Specification

When faced with implementing the specification, there is a tension between following the letter of the specification, and doing something different to improve the implementation on some dimension.

Where it is possible to implement the steps of the specification directly, we prefer to do that, as it makes maintenance of features easier as specification changes are made. SpiderMonkey does this in many places. You will see sections of code that are transcriptions of specification algorithms, with step numbers for comments. Following the exact letter of the specification can also be helpful where the specification is highly complex and small divergences can lead to compatibility risks.

Sometimes however, there are good reasons to diverge from the specification language. JavaScript implementations have been honed for high performance for years, and there are many implementation tricks that have been applied to make that happen. Sometimes recasting a part of the specification in terms of code already written is the right thing to do, because that means the new code is also able to have the performance characteristics of the already written code.

Implementing Private Names

The specification language for Private Names already almost matches the semantics around Symbols, which already exist in SpiderMonkey. So adding PrivateNames as a special kind of Symbol is a fairly easy choice.

Implementing Private Fields

Looking at the specification for private fields, a direct implementation would be to add an extra hidden slot to every object in SpiderMonkey, which contains a reference to a list of {PrivateName, Value} pairs. However, implementing this directly has a number of clear downsides:

  • It adds memory usage to objects without private fields
  • It requires invasive addition of either new bytecodes or complexity to performance sensitive property access paths.

An alternative option is to diverge from the specification language, and implement only the semantics, not the actual specification algorithms. In the majority of cases, you really can think of private fields as special properties on objects that are hidden from reflection or introspection outside a class.

If we model private fields as properties, rather than a special side-list that is maintained with an object, we are able to take advantage of the fact that property manipulation is already extremely optimized in a JavaScript engine.

However, properties are subject to reflection. So if we model private fields as object properties, we need to ensure that reflection APIs don’t reveal them, and that you can’t get access to them via Proxies.

In SpiderMonkey, we elected to implement private fields as hidden properties in order to take advantage of all the optimized machinery that already exists for properties in the engine. When I started implementing this feature André Bargull – a SpiderMonkey contributor for many years – actually handed me a series of patches that had a good chunk of the private fields implementation already done, for which I was hugely grateful.

Using our special PrivateName symbols, we effectively desugar

class A {
  #x = 10;
  x() {
    return this.#x;
  }
}

to something that looks closer to

class A_desugared {
  constructor() {
    this[PrivateSymbol(#x)] = 10;
  }
  x() {
    return this[PrivateSymbol(#x)];
  }
}

Private fields have slightly different semantics than properties, however. They are designed to issue errors on patterns expected to be programming mistakes, rather than silently accepting them. For example:

  1. Accessing a property on an object that doesn’t have it returns undefined. Private fields are specified to throw a TypeError, as a result of the PrivateFieldGet algorithm.
  2. Setting a property on an object that doesn’t have it simply adds the property. Private fields will throw a TypeError in PrivateFieldSet.
  3. Adding a private field to an object that already has that field also throws a TypeError in PrivateFieldAdd. See “The Constructor Override Trick” above for how this can happen.
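The first two differences are easy to observe with an object that never had the field added:

```javascript
class Holder {
  #x = 1;
  static getX(o) {
    return o.#x;
  }
  static setX(o, v) {
    o.#x = v;
  }
}

// A plain object has never been through Holder's constructor, so both
// the get and the set throw, instead of yielding undefined or silently
// adding a property.
for (const op of [() => Holder.getX({}), () => Holder.setX({}, 2)]) {
  try {
    op();
  } catch (e) {
    console.log(e instanceof TypeError); // true
  }
}
```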

To handle the different semantics, we modified the bytecode emission for private field accesses. We added a new bytecode op, CheckPrivateField which verifies an object has the correct state for a given private field. This means throwing an exception if the property is missing or present, as appropriate for Get/Set or Add. CheckPrivateField is emitted just before using the regular ‘computed property name’ path (the one used for A[someKey]).

CheckPrivateField is designed such that we can easily implement an inline cache using CacheIR. Since we are storing private fields as properties, we can use the Shape of an object as a guard, and simply return the appropriate boolean value. The Shape of an object in SpiderMonkey determines what properties it has, and where they are located in the storage for that object. Objects that have the same shape are guaranteed to have the same properties, and it’s a perfect check for an IC for CheckPrivateField.
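As a loose illustration of why the Shape works as a guard, here is a toy cache in plain JavaScript; the real inline cache operates on engine-internal Shapes and hidden properties, nothing like this sketch:

```javascript
// Toy model: pretend the sorted list of own property names is an
// object's "shape". Objects with the same shape have the same properties.
function shapeOf(obj) {
  return Object.getOwnPropertyNames(obj).sort().join(",");
}

const cache = new Map(); // "shape|field" -> boolean

function checkPrivateField(obj, field) {
  const key = shapeOf(obj) + "|" + field;
  if (cache.has(key)) {
    return cache.get(key); // cache hit: one guard, one boolean answer
  }
  const result = Object.prototype.hasOwnProperty.call(obj, field);
  cache.set(key, result);
  return result;
}

console.log(checkPrivateField({ x: 1 }, "x")); // true
console.log(checkPrivateField({ x: 1 }, "y")); // false
```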

Other modifications we made to the engine include omitting private fields from the property enumeration protocol, and allowing the extension of sealed objects when adding a private field.


Proxies

Proxies presented us with a bit of a new challenge. Concretely, using the Stamper class above, you can add a private field directly to a Proxy:

let obj3 = {};
let proxy = new Proxy(obj3, handler);
new Stamper(proxy)

Stamper.getX(proxy) // => "stamped"
Stamper.getX(obj3)  // TypeError, private field is stamped
                    // onto the Proxy Not the target!

I definitely found this surprising initially. The reason I found this surprising was that I had expected that, like other operations, the addition of a private field would tunnel through the proxy to the target. However, once I was able to internalize the WeakMap mental model, I understood this example much better. The trick is that in the WeakMap model, it is the Proxy, not the target object, that is used as the key in the #x WeakMap.
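The WeakMap model reproduces the Proxy behaviour directly, since WeakMap keys compare by identity and a proxy is a distinct object from its target:

```javascript
const hidden = new WeakMap();
const target = {};
const proxy = new Proxy(target, {});

hidden.set(proxy, "stamped");
console.log(hidden.get(proxy)); // "stamped"
console.log(hidden.has(target)); // false: the key is the proxy, not the target
```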

These semantics presented a challenge to our implementation choice to model private fields as hidden properties however, as SpiderMonkey’s Proxies are highly specialized objects that do not have room for arbitrary properties. In order to support this case, we added a new reserved slot for an ‘expando’ object. The expando is an object allocated lazily that acts as the holder for dynamically added properties on the proxy. This pattern is used already for DOM objects, which are typically implemented as C++ objects with no room for extra properties. So if you write document.foo = "hi", this allocates an expando object for document, and puts the foo property and value in there instead. Returning to private fields, when #x is accessed on a Proxy, the proxy code knows to go and look in the expando object for that property.
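A minimal sketch of the lazy expando pattern (the class and method names here are illustrative, not SpiderMonkey's):

```javascript
// A host object with no room for arbitrary properties; dynamically added
// properties go into a lazily allocated expando object instead.
class SlotlessHost {
  #expando = null; // allocated only when a dynamic property is first added

  setDynamic(key, value) {
    if (this.#expando === null) {
      this.#expando = Object.create(null);
    }
    this.#expando[key] = value;
  }

  getDynamic(key) {
    return this.#expando === null ? undefined : this.#expando[key];
  }
}

const host = new SlotlessHost();
console.log(host.getDynamic("foo")); // undefined: no expando allocated yet
host.setDynamic("foo", "hi");
console.log(host.getDynamic("foo")); // "hi"
```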

In Conclusion

Private Fields is an instance of implementing a JavaScript language feature where directly implementing the specification as written would be less performant than re-casting the specification in terms of already optimized engine primitives. Yet, that recasting itself can require some problem solving not present in the specification.

At the end, I am fairly happy with the choices made for our implementation of Private Fields, and am excited to see it finally enter the world!


Acknowledgements

I have to thank, again, André Bargull, who provided the first set of patches and laid down an excellent trail for me to follow. His work made finishing private fields much easier, as he’d already put a lot of thought into decision making.

Jason Orendorff has been an excellent and patient mentor as I have worked through this implementation, including two separate implementations of the private field bytecode, as well as two separate implementations of proxy support.

Thanks to Caroline Cullen, and Iain Ireland for helping to read drafts of this post, and to Steve Fink for fixing many typos.

The post Implementing Private Fields for JavaScript appeared first on Mozilla Hacks - the Web developer blog.

The Talospace ProjectFirefox 89 on POWER

Firefox 89 was released last week with much fanfare over its new interface, though being the curmudgeon I am I'm less enamoured of it. I like the improvements to menus and doorhangers but I'm a big user of compact tabs, which were deprecated, and even with compact mode surreptitiously enabled the tab bar is still about a third or so bigger than Firefox 88 (see screenshot). There do seem to be some other performance improvements, though, plus the usual lower-level changes, and WebRender is now on by default for all Linux configurations, including for you fools out there trying to run Nvidia GPUs.

The chief problem is that Fx89 may not compile correctly with certain versions of gcc 11 (see bugs 1710235 and 1713968). For Fedora users if you aren't on 11.1.1-3 (the current version as of this writing) you won't be able to compile the browser at all, and you may not be able to compile it fully even then without putting a # pragma GCC diagnostic ignored "-Wnonnull" at the top of js/src/builtin/streams/PipeToState.cpp (I still can't; see bug 1713968). gcc 10 is unaffected. I used the same .mozconfigs and PGO-LTO optimization patches as we used for Firefox 88. With those changes the browser runs well.

While waiting for the updated gcc I decided to see if clang/clang++ could now build the browser completely on ppc64le (it couldn't before), even though gcc remains my preferred compiler as it generates higher-performance objects. The answer is that now it can, and this time it did, merely by substituting clang for gcc in the .mozconfig; but even using the bfd linker it makes a defective Firefox that freezes or crashes outright on startup, and it could not proceed to the second phase of PGO-LTO: the build system aborted with an opaque error -139. So much for that. For the time being I think I'd rather spend my free cycles on the OpenPOWER JavaScript JIT than figuring out why clang still sucks at this.

Some of you will also have noticed the Mac-style pulldown menus in the screenshot, even though this Talos II is running Fedora 34. This comes from firefox-appmenu, which since I build from source is trivial to patch in, and the Fildem global menu GNOME extension (additional tips) paired with my own custom gnome-shell theme. I don't relish adding another GNOME extension that Fedora 35 is certain to break, but it's kind of nice to engage my Mac mouse-le memory and it also gives me a little extra vertical room. You'll notice the window also lacks client-side decorations since I can just close the window with key combinations; this gives me a little extra horizontal tab room too. If you want that, don't apply this particular patch from the firefox-appmenu series and just use the other two .patches.

The Rust Programming Language BlogAnnouncing Rustup 1.24.3

The rustup working group is happy to announce the release of rustup version 1.24.3. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.3 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.3

This patch release focusses on resolving some regressions in behaviour in the 1.24.x series, either on lower-tier platforms or in unusual situations involving very old toolchains.

Full details are available in the changelog!

Rustup's documentation is also available in the rustup book.


Thanks again to all the contributors who made rustup 1.24.3 possible!

  • Alexander (asv7c2)
  • Ian Jackson
  • pierwill
  • 二手掉包工程师 (hi-rustin)
  • Robert Collins
  • Daniel Silverstone

The Mozilla BlogFeel at home on your iPhone and iPad with Firefox

When we set out earlier this year to reimagine Firefox for desktop to be simpler, more modern and faster to use, we didn’t forget about your Apple mobile devices. We use our laptops and computers for work and play, but our tablets and phones sit a lot closer. They’re the first screens we see in the morning, and the last we see before we go to bed.

Firefox is only one of the many apps you use throughout the day, so our browser can’t just look good. It also needs to feel at home. The newest Firefox for iOS and iPadOS represents what we think online life should be: fast, beautiful and private – presented in a package that looks and feels intuitive and natural.

It’s fast

You can already one-tap to go to your top sites, and we’ve brought the same speed to search. Every time Firefox or a new tab opens, the keyboard will now come up, ready for you to start searching right away.

When typing takes too long, you can build search suggestions word by word.

Firefox syncs your bookmarks, history, passwords and more between devices. But when you have many tabs open on your iPhone, iPad and computer, they’re often hard to find. Now you can simply start typing the tab’s name, and Firefox will show the tabs you have opened, no matter where they’re located.

And if you don’t quite know what something is called, you can browse for it easily. Open the main application menu, and all your bookmarks, history, downloads and reading list are there.

It’s beautiful

We’ve rebuilt parts of Firefox in native components, making it feel more iPhone and iPad-like than ever before. You’ll notice design elements that look and work identically to those found in many other apps, so our browser feels instantly familiar. We’ve also taken a major step up in accessibility. Firefox now supports more text sizes and integrates better with screen readers.

Our menus take up less space and work like all other menus you’re used to.


All your tabs, including ones synced from other devices, now appear side by side. And your bookmarks, history, downloads and reading list are browsed just like tabs do. Switching between them is a breeze.


And we’ve given Firefox for iPadOS the same productivity-inspired new tab design, simplified navigation and streamlined menus. Your favourite browser now looks and feels consistent, no matter where you are.

It’s private

The App Store now includes app privacy labels that help you understand our data collection practices.

Here’s what we collect and why:

  • You may choose to share information about feature usage, crashes and other technical data with us. We collect them, but we do not link them to your Firefox account identity! This information helps us improve Firefox performance and stability, and to support our marketing. You can read more about it here.

What’s next for Firefox

We’re excited to share a fast, beautiful and private Firefox experience across all your devices. No matter what device you choose to tackle the web on in the future, Firefox will be there for you. Download the latest version of Firefox for your iOS and iPadOS devices to experience the fresh, clean new look and feel.

The post Feel at home on your iPhone and iPad with Firefox appeared first on The Mozilla Blog.

Daniel StenbergBye bye metalink in curl

In 2012 I wrote a blog post titled curling the metalink, describing how we added support for metalink to curl.

Today, we remove that support again. This is a very drastic move, and I feel obliged to explain it so here it goes! curl 7.78.0 will ship without metalink support.

Metalink problems

There were several issues found that combined led us to this move.

Security problems

We’ve found several security problems and issues involving the metalink support in curl. The issues are not detailed here because they’ve not been made public yet.

When working on these issues, it became apparent to the curl security team that several of the problems are due to the system design, the metalink library API and what the metalink RFC says. They are very hard to fix on the curl side alone.

Unusual use pattern

Metalink usage with curl was only very briefly documented and was not following the “normal” curl usage pattern in several ways, making it surprising and non-intuitive which could lead to further security issues.

libmetalink is abandoned

The metalink library libmetalink was last updated 6 years ago and wasn’t very actively maintained the years before that either. An unmaintained library means there’s a security problem waiting to happen. This is probably reason enough.

XML is heavy

Metalink requires an XML parsing library, which is complex code (even the smaller alternatives) and to this day often gets security updates.

Not used much

Metalink is not a widely used curl feature. In the 2020 curl user survey, only 1.4% of the responders said that they are using it. In the just-closed 2021 survey that number shrunk to 1.2%. Searching the web also shows very few traces of it being used, even with other tools.

The torrent format and associated technology clearly won for downloading large files from multiple sources in parallel.

Violating a basic principle

This change unfortunately breaks command lines that use --metalink. This move goes directly against one of our basic principles as it doesn’t maintain behavior with previous versions. We’re very sorry about this but we don’t see a way out of this pickle that also takes care of users’ security – which is another basic principle of ours. We think the security concern trumps the other concerns.

Possible to bring back?

The list above contains reasons for the removal. At least some of them could be addressed given enough effort and work put into it. If someone is willing to make the necessary investment, I think we could entertain the possibility of bringing the support back in the future. I just don’t think it is very probable.


Image by Ron Porter from Pixabay

The Talospace ProjectProgress on the OpenPOWER SpiderMonkey JIT


% gdb --args obj/dist/bin/js --no-baseline --no-ion --no-native-regexp --blinterp-eager -e 'print("hello world")'
GNU gdb (GDB) Fedora 10.1-14.fc34
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "ppc64le-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
Find the GDB manual and other documentation resources online at:

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from obj/dist/bin/js...
(gdb) run
Starting program: obj/dist/bin/js --no-baseline --no-ion --no-native-regexp --blinterp-eager -e print\(\"hello\ world\"\)
warning: Expected absolute pathname for libpthread in the inferior, but got .gnu_debugdata for /lib64/libpthread.so.0.
warning: Unable to find libthread_db matching inferior's thread library, thread debugging will not be available.
[New LWP 2797069]
[LWP 2797069 exited]
[New LWP 2797070]
[New LWP 2797071]
[New LWP 2797072]
[New LWP 2797073]
[New LWP 2797074]
[New LWP 2797075]
[New LWP 2797076]
[New LWP 2797077]
hello world
[LWP 2797072 exited]
[LWP 2797070 exited]
[LWP 2797074 exited]
[LWP 2797077 exited]
[LWP 2797073 exited]
[LWP 2797071 exited]
[LWP 2797076 exited]
[LWP 2797075 exited]
[Inferior 1 (process 2797041) exited normally]

This may not look like much, but it demonstrates that the current version of the OpenPOWER JavaScript JIT for Firefox can emit machine language instructions correctly (mostly — still more codegen bugs to shake out), handles the instruction cache correctly, handles ABI-compliant calls into the SpiderMonkey VM correctly (the IonMonkey JIT is not ABI-compliant except at those edges), and enters and exits routines without making a mess of the stack. Much of the code originates from TenFourFox's "IonPower" 32-bit PowerPC JIT, though obviously greatly expanded, and there is still ongoing work to make sure it is properly 64-bit aware and takes advantage of instructions available in later versions of the Power ISA. (No more spills to the stack to convert floating point, for example. Yay for VSX!)

Although it is only the lowest level of the JIT, what Mozilla calls the Baseline Interpreter, there is substantial code in common between the Baseline Interpreter and the second-stage Baseline Compiler. Because it has much less overhead compared to Baseline Compiler and to the full-fledged Ion JIT, the Baseline Interpreter can significantly improve page loads all by itself. In fact, my next step might be to get regular expressions and the OpenPOWER Baseline Interpreter to pass the test suite and then drag that into a current version of Firefox for continued work so that it can get banged on for reliability and improve performance for those people who want to build it (analogous to how we got PPCBC running first before full-fledged IonPower in TenFourFox). Eventually full Ion JIT and Wasm support should follow, though those both use rather different codepaths apart from the fundamental portions of the backend which still need to be shaped.

A big shout-out goes to Justin Hibbits, who took TenFourFox's code and merged it with the work I had initially done on JitPower way back in the Firefox 62 days but was never able to finish. With him having done most of the grunt work, I was able to get it to compile and then started attacking the various bugs in it.

Want to contribute? It's on Github. Tracing down bugs is labour-intensive, and involves a lot of emitting trap instructions and single-stepping in the debugger, but when you see those small steps add up into meaningful fixes (man, it was great to see those two words appear) it's really rewarding. I'm happy to give tips to anyone who wants to participate. Once it can pass the test suite at some JIT level, it will be time to forward-port it and if we can get our skates on it might even be possible to upstream it into the next Firefox ESR.

For better or worse, the Web is a runtime. Let's get OpenPOWER workstations running it better.

Mozilla Open Policy & Advocacy BlogThe Van Buren decision is a strong step forward for public interest research online

In a victory for security research and other public interest work, yesterday the U.S Supreme Court held that the Computer Fraud and Abuse Act’s (CFAA) “exceeding authorized access” provision should be narrowly interpreted and cannot be used to criminalize every single violation of a computer-use policy. This is encouraging news for journalists, bug bounty hunters, social science researchers, and many other practitioners who could legitimately access information in a myriad of ways but were at the risk of being prosecuted as criminals.

As we stated in our joint amicus brief to the Court in July 2020, over the years some federal circuit courts had interpreted the CFAA so broadly as to threaten important practices to protect the public, including research and disclosure of software vulnerabilities by those in the security community. The scope of such broad interpretation went beyond security management and has also been used to stifle legitimate public interest research, such as looking into the advertising practices of online platforms, something Mozilla has pushed back against in the past.

In its ruling, the Supreme Court held that authorized access under the CFAA is not exceeded when information is accessed on a computer for a purpose that the system owner considers improper. For example, the ruling clarifies that employees would not violate the CFAA simply by using a work computer to check personal email if that is contrary to the company’s computer use policies. The decision overrules some of the most expansive interpretations of the CFAA and makes it less likely that the law will be used to chill legitimate research and disclosures. The decision does, however, leave some open questions on the role of contractual limits in the CFAA that will likely have to be settled via litigation over the coming years.

However, the net impact of the decision leaves the “exceeding authorized access” debate under the CFAA in a much better place than when it began and should be celebrated as a clear endorsement of the years of efforts by various digital rights organizations to limit its chilling effects with the goal of protecting public interest research, including in cybersecurity.

The post The Van Buren decision is a strong step forward for public interest research online appeared first on Open Policy & Advocacy.

Firefox Nightly – These Weeks in Firefox: Issue 95


  • Firefox 89 released this week! This version includes a major UI redesign and thoughtful touches throughout. See the release notes here.
  • We launched ideas.mozilla.org, a new platform for the Mozilla community to share their feedback and ideas. While technical feedback and bug reports should still go to Bugzilla, please post product feedback and new feature recommendations to ideas.mozilla.org. Product managers, engineers, and designers will be engaging with the community there to refine and prioritize the suggestions!
  • Fission is now rolled out to ~50% of Nightly users. For those who don’t yet have it, you can opt in to Fission on the Nightly, Beta or Release channels by setting `fission.autostart` to `true` and restarting Firefox. You will start to see an [F] in the tab hover tooltip, confirming that Fission is turned on.
  • We released Total Cookie Protection in private browsing mode in Firefox 89.
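For anyone who prefers keeping this opt-in in their profile rather than flipping it in about:config, the same pref can be set from a `user.js` file (a sketch; `user.js` lives in your Firefox profile directory and is read at startup):

```js
// user.js sketch: opt in to Fission, per the note above.
// Remove the line (and restart Firefox) to opt back out.
user_pref("fission.autostart", true);
```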

Two illustrations. On the left, three hands representing three different websites reach into one jar labelled "Cookies". On the right, the three hands are each reaching into a separate jar.

Friends of the Firefox team

For contributions from May 19 to June 1, 2021, inclusive.


Resolved bugs (excluding employees)

Fixed more than one bug
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Fixed a small bug in about:addons that was mistakenly allowing a user to remove an extension locked by an enterprise policy – Bug 1658768
WebExtensions Framework
  • Bug 1709687 – NotFoundError: Could not open the file webext.sc.lz4
    • Barret landed a fix to avoid some log spam introduced by the changes originally landed as part of Bug 1649593
  • The id of the extension that is redirecting an intercepted webRequest is now being stored into the channel properties bag (as we are already doing for the extension id that is blocking an intercepted webRequest) – Bug 1711924
    • The devtools are not currently using this new property set on the channel, so it is not visible anywhere in the UI at the moment; we will file a follow-up to track it as a possible addition to the DevTools network panel.
  • Gijs fixed a bug that was making Firefox show the “external protocol permission prompt” when a URL for a WebExtension-controlled protocol handler was passed on the command line – Bug 1700976 (originally regressed in Firefox 88 by Bug 1678255)
WebExtension APIs
  • In Firefox 89, the webRequest multipart/form-data parser has been updated to better match the spec (as well as how this API behaves in Chrome) – Bug 1697292

Developer Tools

  • Font preview for HTTP font requests (bug, contributed by Sebastian Zartner, :sebo)

The DevTools Network panel. A font request is highlighted. In a subpanel, samples of the downloaded font are shown.


  • Supporting private fields in DevTools (bug)

The DevTools console. The text representation of a JavaScript Object is expanded. Inside, its properties are listed, including one called #myPrivate.


  • Fission
    • Targeting Fission M8 – reaching feature parity with the pre-Fission state. Also focusing on issues related to BFCache changes (behind a pref for now, but will be enabled in Nightly soon – see fission.bfcacheInParent)


  • The Fission team is fixing the last couple of test failures with the new BFCache-in-parent-process architecture. It is now ready for your testing: users can set the `fission.bfcacheInParent` pref to `true` and restart Firefox to enable it.
  • Please file any Fission bugs you encounter using this template.

macOS Spotlight

  • Work continues on supporting native fullscreen. See bug 1631735. Try the behaviour early by enabling full-screen-api.macos-native-full-screen.
  • Work continues on improved dark mode support. See bug 1623686. Try the behaviour early by enabling widget.macos.respect-system-appearance.
  • Fixed a bug where context menus would sometimes not appear on a two-finger click: bug 1710474.
  • Preliminary work to improve high contrast mode on Mac: Bug 1711261 – Address bar and Search bar lack contrast in OSX High Contrast mode.
  • Preliminary work on reducing power consumption when watching videos.

Messaging System

New Tab Page

  • Work continues on remaining proton redesign work. Meta: Bug 1707989.
    • Continuing to file bugs about this work that include some front-end good first bugs. (:amy is happy to mentor bugs)
  • Bug 1712297 – Pinned top sites search shortcuts were left hanging when @search_engine was removed. Fix landed, including tests. Thank you :standard8, :aflorinescu & :dao!

Password Manager


Performance Tools

  • You can now filter markers by their category in the marker chart and marker table.
  • The “no periodic sampling” mode now collects thread CPU usage as well. Previously this section was blank. Example profile.

A usage graph in the Firefox Profiler. It shows CPU usage values.

  • When importing a Linux perf profile into Firefox Profiler, the timeline will look better with less wasted empty space. Example profile.
A Firefox Profiler usage graph. Its lines are erratic.


A Firefox Profiler usage graph. Its lines are smooth.


  • Made various improvements on the activity graph in the timeline. It’s more accurate now.


  • Updating site data clearing to also clear storage partitioned by Total Cookie Protection – Bug 1646215

Search and Navigation

  • Drew added a new nontechnical address bar overview to the source tree documentation, which also explains results composition
  • Drew continues to work on the next Firefox Suggest experiment
  • Daisuke fixed deduplication of search suggestions with the experimental unit conversion address bar provider (browser.urlbar.unitConversion.enabled) – Bug 1711156
  • Daisuke fixed spacing of search shortcut buttons – Bug 1710651
  • Harry fixed the reader view button tooltip – Bug 1712569


  • Kajal Sah started her outreachy internship last week!
  • Kajal is working on a patch to add the screenshot button to the context menu for iframes

Downloads Panel

  • Outreachy intern, Ava Katushka, started with us last week! She’ll be helping us implement fixes and features in the Downloads Panel to make the downloading experience smoother:
    • See meta bug to follow along
    • Work will be behind a pref: browser.download.improvements_to_download_panel (bug 1710929)
  • Ava fixed an issue where downloads telemetry was inflated (bug 1706355).
  • Ava is working on a change so that when a user chooses to open a file with an application on their computer, the file is also saved to the Downloads folder (bug 1710933).

Ryan Harter – Getting Credit for Invisible Work

Last month I gave a talk at csv,conf on "Getting Credit for Invisible Work". The (amazing) csv,conf organizers just published a recording of the talk (slides here). Give it a watch! It's only 20m long (including the Q&A).

Invisible work is a concept I've been trying to …

About:Community – Firefox 89: The New Contributors To MR1

Firefox 89 would not have been possible without our community, and it is a great privilege for us to thank all the developers who contributed their first code change to MR1, 44 of whom were brand new volunteers!

Data@Mozilla – This week in Glean: Glean Dictionary updates

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

Lots of progress on the Glean Dictionary since I made the initial release announcement a couple of months ago. For those coming in late, the Glean Dictionary is intended to be a data dictionary for applications built using the Glean SDK and Glean.js. This currently includes Firefox for Android and Firefox iOS, as well as newer initiatives like Rally. Desktop Firefox will use Glean in the future, see Firefox on Glean (FoG).

Production URL

We’re in production! You can now access the Glean Dictionary at dictionary.telemetry.mozilla.org. The old protosaur-based URL will redirect.

Glean Dictionary + Looker = ❤️

At the end of last year, Mozilla chose Looker as our internal business intelligence tool. Frank Bertsch, Daniel Thorn, Anthony Miyaguchi and others have been building out first-class support for Glean applications inside this platform, and we’re starting to see these efforts bear fruit. Looker’s explores are far easier to use for basic data questions, opening up data-based inquiry to a much larger cross-section of Mozilla.

I recorded a quick example of this integration here:

Note that Looker access is restricted to Mozilla employees and NDA’d volunteers. Stay tuned for more public data to be indexed inside the Glean Dictionary in the future.

Glean annotations!

I did up the first cut of a GitHub-based system for adding annotations to metrics – acting as a knowledge base for things data scientists and others have discovered about Glean Telemetry in the field. This can be invaluable when doing new analysis. A good example of this is the annotation added for the opened as default browser metric for Firefox for iOS, which has several gotchas:

Many thanks to Krupa Raj and Leif Oines for producing the requirements which led up to this implementation, as well as their evangelism of this work more generally inside Mozilla. Last month, Leif and I did a presentation about this at Data Club, which has been syndicated onto YouTube:

Since then, we’ve had a very successful working session with some people in Data Science and have started to fill out an initial set of annotations. You can see the progress in the glean-annotations repository.

Other Improvements

Lots more miscellaneous improvements and fixes have gone into the Glean Dictionary in the last several months: see our releases for a full list. One thing that irrationally pleases me is the new labels Linh Nguyen added last week: colorful and lively, they make it easy to see when a Glean Metric is coming from a library:

Future work

The Glean Dictionary is just getting started! In the next couple of weeks, we’re hoping to:

  • Expand the Looker integration outlined above, as our deploy takes more shape.
  • Work on adding “feature” classification to the Glean Dictionary, to make it easier for product managers and other non-engineering types to quickly find the metrics and other information they need without needing to fully understand what’s in the source tree.
  • Continue to refine the user interface of the Glean Dictionary as we get more feedback from people using it across Mozilla.

If you’re interested in getting involved, join us! The Glean Dictionary is developed in the open using cutting edge front-end technologies like Svelte. Our conviction is that being transparent about the data Mozilla collects helps us build trust with our users and the community. We’re a friendly group and hang out on the #glean-dictionary channel on Matrix.

William Lachance – Glean Dictionary updates

(this is a cross-post from the data blog)


Mozilla Security Blog – Updating GPG key for signing Firefox Releases

Mozilla offers GPG signing to let you verify the integrity of our Firefox builds. GPG signatures for Linux based builds are particularly important, because it allows Linux distributions and other repackagers to verify that the source code they use to build Firefox actually comes from Mozilla.

We regularly rotate our GPG signing subkey — usually every two years — to guard against the unlikely possibility that the key has been leaked without our knowledge. Last week, such a rotation happened, and we switched over to the new signing subkey.

The new GPG subkey’s fingerprint is 14F2 6682 D091 6CDD 81E3 7B6D 61B7 B526 D98F 0353, and will expire on 2023-05-17.

If you are interested in performing a verification of your copy of Firefox, you can fetch the public key from the KEY files in the Firefox 89 release, or directly from below.




The post Updating GPG key for signing Firefox Releases appeared first on Mozilla Security Blog.

This Week In Rust – This Week in Rust 393

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No official blog posts or research papers this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is rust-codegen-gcc, a drop-in replacement for the LLVM-based Rust compiler backend targeting GCC.

Thanks to Josh Triplett for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

255 pull requests were merged in the last week

Rust Compiler Performance Triage

Busy week, with several reverted PRs due to performance regressions, but overall a positive week.

Triage done by @simulacrum. Revision range: cdbe288..1160cf8

3 Regressions, 3 Improvements, 5 Mixed

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs



Tweede golf

Dedalus Healthcare

Yat Labs





Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I recently graduated with my Ph.D., after having worked on 5 different versions of my simulator, written in 4 different languages. The last version, written in pure, safe rust, worked correctly in part because of rust's strong guarantees about what 'safety' means, which I was able to leverage to turn what would normally be runtime errors into compile time errors. That let me catch errors that would normally be days or weeks of debugging into relatively simple corrections. [...] So, once again, thank you to everyone!

Cem Karan on rust-internals

Thanks to Josh Triplett for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Mozilla Blog – Sharing your deepest emotions online: Did 2020 change the future of therapy?

COVID-19 forced the U.S. into a new era of telehealth. Therapists hope it’s here to stay. 


2020 was the worst year of Kali’s life. 

“Going through something like the pandemic and not having family around, feeling out of control—I was on edge all year,” says the 30-year-old Colorado resident. And she wasn’t alone. 

Amid the COVID-19 lockdowns of early 2020, millions of Americans found themselves trapped at home with a backlog of unaddressed mental-health issues. As with so many other things, computer screens and video calls became the only portals out of lockdown—for work, entertainment and, ultimately, real psychological help. 

By spring 2020, nearly all therapy was forced online, a shift that initially worried therapists and patients alike. 

“You do all these years of school to become a psychologist and all the training is in-person,” says Dr. Justin Puder, a therapist and licensed psychologist based in Boca Raton, Florida. “So when you transition to online, you have these doubts—will the connection be as genuine? Will it be as effective?”

But by summer, many therapists, including Puder, were experiencing an unprecedented surge in new-patient inquiries. 

“About six months into the pandemic, I was over capacity, and that was the first time that my private practice had filled up like that,” Puder says. 

Many of Puder’s clients were teens or young adults struggling with the transition to online school and the loss of important milestones like prom or graduation. Other therapists, like Dr. Jeff Rocker in Miami, Florida, saw an influx of Black men seeking therapy after a summer of highly publicized shootings of unarmed Black males at the hands of police. Others still, like K. Michelle Johnson, a sex and relationship coach and therapist based in Denver, Colorado, saw their practices flooded with struggling couples. 

“I think the pandemic created a bit of a pressure cooker for people who were suddenly unable to avoid or escape the issues they’d been having,” Johnson says. 

For Johnson’s clients, the shift to virtual therapy had both pros and cons. One issue was privacy. Virtual therapy nearly always takes place on specialized HIPAA-compliant platforms that ensure a secure connection, but interruptions from nosy roommates, curious children and needy pets are facts of life at home. 

Still, Jennifer Dunkle, a Certified Gottman Couples Therapist who specializes in financial coaching and is based in Fort Collins, Colorado, says many of the parents she works with still prefer online therapy—it freed them to get the help they needed without having to find or pay for childcare, she says.

That new therapy-from-anywhere paradigm has also helped improve access for people in more rural settings. Angela, 31, raises pigs and vegetables with her partner on a farm in southern Colorado. They live about 60 miles from the nearest mental health center. 

With that commute, “Going into therapy could be a three-hour ordeal,” Angela says. “So virtual—I’m all about it.” 

Pocket Joy List Project

The Pocket Joy List Project

The stories, podcasts, poems and songs we always come back to

Virtual therapy also just feels more accessible for some patients. 

Haley, a 26-year-old in Atlanta, Georgia, says the idea of having to meet a therapist face-to-face had always intimidated her. That was compounded by anxiety around having to locate a new office, find parking and be somewhere on time. Conversely, just opening up a laptop from the comfort of her kitchen or bedroom? “That felt so much easier,” she says.  

Puder suspects that there’s another phenomenon at play in the public’s newfound openness to virtual therapy: social media. 

Puder currently has 335,000 followers on his TikTok. Rocker, better known in Miami as the “Celebrity Therapist” for his work with elite athletes and entertainers, has over 62,000 followers on Instagram. Both are part of a new wave of influencer-therapists who build their followings through bite-size mental health tips, often infused with humor and a lot of personality. 

The trend has been enormously effective, Rocker says. 

“People are seeing that there are so many different approaches and ethnicities and cultures within the mental health space,” Rocker says. “People are seeing that they can receive mental health services from people who look like them and talk like them and who they can relate to.” 

Puder also suspects that TikTok’s rise in popularity during the pandemic has helped cultivate a new light-hearted, open, stigma-free approach to mental health, which has in turn encouraged more people to give therapy a try. After all, the difference between watching weekly videos from your favorite therapist and signing up for weekly video chats is an easy leap to make. 

Better yet: According to a 2018 analysis of 64 different trials, internet-delivered therapy is just as effective as face-to-face therapy. And for some patients, it could be even more effective. Johnson reports that when clients can talk to her from a serene setting like a park, they’re able to open up and be more vulnerable than they may have been in person. 

“With virtual therapy, there’s that little bit of separation between me and the client,” she says. “Sometimes that helps them let their guard down a little more.” 

Kali had been going to therapy for years to help her manage her anxiety and depression. Pre-pandemic, she says she never would have considered virtual therapy. But between the political and social unrest and general pandemic anxiety, she found herself making the switch from weekly in-person sessions to weekly online sessions — something she said was “absolutely” critical to her surviving 2020.

“I genuinely have no idea what I would have done without it,” she says. “I don’t know that I would have made it through this year.”

While she looks forward to going back to in-person sessions, Kali says her opinions of virtual therapy have changed.  

“I do feel like that deep connection can still be felt,” she says, “Now when I’m away on a trip, I’m more inclined to keep therapy on the schedule.” 

Johnson doesn’t expect virtual therapy to ever completely replace in-person therapy. After all, in-person sessions are still a better fit for some clients, like those who may be starting deep trauma work, she says. Rocker adds that kids and teens often have trouble focusing on a screen. 

But one thing is certain: Virtual therapy is here to stay, and the events of 2020 have pushed the mental health conversation in America into a new era—one Puder expects to snowball as more people continue to try therapy and talk about it online. 

“When you have the opportunity to be vulnerable and talk about a low you’re in, it’s freeing,” he says. “And I think when people get a taste of that, they’ll continue the conversation.” 

The post Sharing your deepest emotions online: Did 2020 change the future of therapy? appeared first on The Mozilla Blog.

Hacks.Mozilla.Org – Looking fine with Firefox 89

While we’re sitting here feeling a bit frumpy after a year with reduced activity, Firefox 89 has smartened up and brings with it a slimmed down, slightly more minimalist interface.

Along with this new look, we get some great styling features including the forced-colors media feature and better control over how fonts are displayed. The long awaited top-level await keyword for JavaScript modules is now enabled, as well as the PerformanceEventTiming interface, which is another addition to the performance suite of APIs: 89 really has been working out!

This blog post provides merely a set of highlights; for all the details, check out the following:

forced-colors media feature

The forced-colors CSS media feature detects if a user agent restricts the color palette used on a web page. For instance Windows has a High Contrast mode. If it’s turned on, using forced-colors: active within a CSS media query would apply the styles nested inside.

In this example we have a .button class that declares a box-shadow property, giving any HTML element using that class a nice drop-shadow.

If forced-colors mode is active, this shadow would not be rendered, so instead we’re declaring a border to make up for the shadow loss:

.button {
  border: 0;
  padding: 10px;
  box-shadow: -2px -2px 5px gray, 2px 2px 5px gray;
}

@media (forced-colors: active) {
  .button {
    /* Use a border instead, since box-shadow is forced to 'none' in forced-colors mode */
    border: 2px ButtonText solid;
  }
}

Better control for displayed fonts

Firefox 89 brings with it the line-gap-override, ascent-override and descent-override CSS properties. These allow developers more control over how fonts are displayed. The following snippet shows just how useful these properties are when using a local fallback font:

@font-face {
  font-family: web-font;
  src: url("https://example.com/font.woff");
}

@font-face {
  font-family: local-font;
  src: local(Local Font);
  ascent-override: 90%;
  descent-override: 110%;
  line-gap-override: 120%;
}

These new properties help to reduce layout shift when fonts are loading, as developers can better match the intricacies of a local font with a web font. They work alongside the size-adjust property which is currently behind a preference in Firefox 89.

Top-level await

If you’ve been writing JavaScript over the past few years you’ve more than likely become familiar with async functions. Now the await keyword, usually confined to use within an async function, has been given independence and allowed to go it alone. As long as it stays within modules that is.

In short, this means that a JavaScript module whose child module uses top-level await will wait for that child to finish executing before running itself, without blocking other child modules from loading.

Here is a very small example of a module using the Fetch API and specifying await within the export statement. Any module that imports this one will wait for the fetch to resolve before running any code.

// fetch request
const colors = fetch('../data/colors.json')
  .then(response => response.json());

export default await colors;


A new look can’t go unnoticed without mentioning performance. There’s a plethora of Performance APIs, which give developers granular power over their own bespoke performance tests. The PerformanceEventTiming interface is now available in Firefox 89 and provides timing information for a whole array of events. It adds yet another extremely useful feature for developers by cleverly giving information about when a user-triggered event starts and when it ends. A very welcome addition to the new release.
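As a sketch of how the interface is typically consumed (the field names come from the Event Timing API; the reporting callback here is illustrative), a PerformanceObserver subscribed to the "event" entry type receives PerformanceEventTiming entries:

```javascript
// Sketch: subscribe to PerformanceEventTiming entries.
// The "event" entry type is only produced by browsers that implement the
// Event Timing API, so the observe() call is guarded by supportedEntryTypes.
function observeEventTiming(report) {
  if (typeof PerformanceObserver === "undefined") return null;
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      report({
        name: entry.name, // e.g. "click" or "keydown"
        inputDelay: entry.processingStart - entry.startTime, // wait before handlers ran
        duration: entry.duration, // from the event's timestamp to the next paint
      });
    }
  });
  if (PerformanceObserver.supportedEntryTypes.includes("event")) {
    observer.observe({ type: "event", buffered: true });
  }
  return observer;
}

observeEventTiming((timing) => console.log(timing));
```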

The post Looking fine with Firefox 89 appeared first on Mozilla Hacks - the Web developer blog.

Henri Sivonen – Bogo-XML Declaration Returns to Gecko

Firefox 89 was released today. This release (again!) honors a character encoding declaration made via syntax that looks like an XML declaration used in text/html (if there are no other character encoding declarations).

Before HTML parsing was specified, Internet Explorer did not support declaring the encoding of a text/html document using the XML declaration syntax. However, Gecko, WebKit, and Presto did. Unfortunately, I didn’t realize that they did.

When Hixie specified HTML parsing, consistent with IE, he didn’t make the spec sensitive to the XML declaration syntax in a particular way. I am unable to locate any discussion in the WHATWG mailing list archives about whether an encoding declaration made using the XML declaration syntax in text/html should be honored when processing text/html.

When I implemented the specified HTML parsing algorithm in Gecko, I also implemented the internal encoding declaration handling per specification. As a side effect, in Firefox 4, I removed Gecko’s support for the XML declaration syntax for declaring the character encoding in text/html. I don’t recall this having been a knowingly-made decision: The rewrite just did strictly what the spec said.

When WebKit and Presto implemented the specified HTML parsing algorithm, they only implemented the tokenization and tree building parts and kept their old ways of handling character encoding declarations. That is, they continued to honor the XML declaration syntax for declaring the character encoding in text/html. I don’t recall the developers of either engine raising this as a spec issue back then.

The closest the issue came to being raised against the spec was for the wrong reason, which made people push back instead of fixing the spec.

When Blink forked, it inherited WebKit’s behavior. When Microsoft switched from EdgeHTML to Blink, Gecko became the only actively-developed major engine not to support the XML declaration syntax for declaring the character encoding in text/html. Since unlabeled UTF-8 is not automatically detected, this became a Web compatibility issue with pages that declare UTF-8 only via the XML declaration syntax (i.e. without a BOM, a meta, or an HTTP-layer declaration as well).

And that’s why support for declaring the character encoding via the XML declaration syntax came to the HTML spec and back to Gecko.
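For illustration only (this is not Gecko’s implementation), the behavior the post describes boils down to looking for an encoding attribute in XML-declaration-like syntax at the top of the byte stream, consulted only when no BOM, meta, or transport-layer label is present. A minimal sketch:

```javascript
// Sketch (not Gecko's actual code): pull a character encoding out of
// XML-declaration-style syntax at the start of a text/html document.
// Per the discussion above, this only matters when no BOM, meta element,
// or HTTP-layer declaration supplies an encoding.
function xmlDeclEncoding(head) {
  const match = /^<\?xml[^>]*encoding=["']([A-Za-z0-9._-]+)["']/.exec(head);
  return match ? match[1].toLowerCase() : null;
}

console.log(xmlDeclEncoding('<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE html>')); // "utf-8"
```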

What Can We Learn?

  • When the majority of engines share a behavior, we should assume that content is authored with the expectation that the behavior exists. We can’t assume that all content is tested in the one engine that lacks the behavior, even if that engine has the majority market share.

    (In general, the HTML parsing algorithm upheld IE behaviors a bit too much. I regret that I didn’t push for non-IE behavior in tokenization when a less-than sign is encountered inside a tag token.)

  • Instead of just trusting the spec, also check what other engines do.

  • If you aren’t willing to implement what the spec says, you should raise the issue at the standardization forum.

  • If an issue is raised for a bad reason, pay attention to whether there is an adjacent issue that needs fixing for a good reason.

  • “We comply with the spec” is unlikely to be a winning response to a long-standing Web compatibility bug.

The Mozilla BlogHow to capture screenshots instantly with Firefox

Sometimes you need to take a screenshot of something online to save it or share it with someone. Firefox has a built-in feature that makes grabbing a screenshot quick and easy. Here’s how to use the Firefox screenshot feature in your desktop browser:

Use the menu

  1. Right-click (Windows) or two-finger tap (Mac) to call up the Firefox context menu.
  2. Scroll to the Take Screenshot action.
The Firefox screenshot feature shows up in the menu
  3. Drag your mouse around the area of your screen that you want to capture.
  4. Or, select one of the pre-set options:
    • Save visible captures only what you see in your browser window without scrolling.
    • Save full page captures everything on the page, which is handy for screenshotting a full webpage without awkwardly dragging your mouse the whole way down the screen.
  5. Once you’ve selected the area to screenshot, you have two options:
    • Click the copy button to add it to your clipboard, then paste it (Ctrl + V, or Cmd + V on Mac) into a chat, email, document or presentation, even in software other than Firefox.
    • Or download the screenshot image, which is handy for saving and reusing later.

Changed your mind about taking a screenshot? Hit the ESC key to back out.

Add the Firefox screenshot feature to your toolbar

If you love the screenshot feature and want it front and center, you can add it as a button in the space next to the Firefox toolbar. Here’s how:

  1. Click the main menu in the upper right corner and select More Tools.
  2. Next, select Customize Toolbar.
  3. Locate the Screenshot shortcut.
  4. Grab it with your mouse and drag it up to the toolbar.
  5. Then click the Done button.

Now the Screenshot shortcut is at your fingertips everywhere you go in Firefox.

The Firefox screenshot feature is so easy that anyone can use it. You don’t need to remember any shortcuts or download additional software. It’s all built-in and ready to roll with Firefox.

The post How to capture screenshots instantly with Firefox appeared first on The Mozilla Blog.

The Mozilla BlogA fresh new Firefox is here

Even though we’re in the web browser business, we know you don’t go online to look at Firefox; rather, you look through Firefox to get to everything on the open web. In today’s major release, Firefox sports a fresh new design that gets you where you’re going online, fast and distraction-free. And since we’re all about privacy, we’re also expanding integrated privacy protections in Firefox, so you feel safe and free to be yourself online thanks to fewer eyes following you across the web.

It’s all happening starting today in Firefox.

A sleek, clean Firefox design backed by research

Going into the Firefox redesign, our team studied how people interact with the browser, observing their patterns and behaviors. We listened to feedback and gathered ideas from regular people who just want to have an easier experience on the web. We obsessed over distractions, extra clicks and wasted time. The resulting new design is simple, modern and fast and delivers a beautiful experience to support what people do most in Firefox. 

Bright and buoyant throughout

The fresh new Firefox is easy on the eyes, bright and buoyant on screens of all sizes — computers, phones and tablets. A new icon set, crisp typography and thoughtful spacing throughout all reflect a modern aesthetic for 2021.

Streamlined toolbar and menus

The toolbar is naturally where you start every web visit. It’s the place where you type a URL to go somewhere online. After web page content, it’s what you look at most in Firefox. The new toolbar is simplified and clutter-free so you get to the good stuff effortlessly. 

Menus are where key Firefox actions and commands live. We’ve consolidated extra menus into the three-bar menu in the upper right (also reachable by right-clicking) to reduce clutter and be more intuitive. The reorganized, streamlined menus put the best actions quickly at your fingertips.

When privacy protections are engaged in Firefox, the shield icon in the toolbar glows subtly, indicating that we’re working behind the scenes to protect you from nosy trackers. Fun fact: Firefox has blocked more than 6 trillion — that’s trillion with a T — trackers since we rolled out Enhanced Tracking Protection, stopping thousands of companies from viewing your online activity. We’re talking about tracking cookies, social media trackers, fingerprinters, cryptominers and more. Go ahead and click on the shield to see who and what Firefox is blocking… you might be surprised by what you find out.

A new look for tabs

Based on our research, we found out that more than half of you have 4+ tabs open all the time, and some of you have more, a lot more. And we feel that! Tab as much as you like, friends. Tabs got a makeover so they are now gently curved and float above the toolbar. It’s an exciting change that also serves as a reminder that tabs aren’t stationary. So grab those tabs, move them around and organize them as you like. Tabs also got a glow-up to be a touch brighter when active. 

🤫 Shhhhhh…. notifications 

No one likes to be interrupted when they’re in the flow, but if you must be alerted to something, at least it can look good. We’ve updated notifications and alerts of all kinds in Firefox to take up less space for less jarring interruptions. Plus, non-essential alerts and messages have been removed altogether. Media autoplay is turned off by default, so you won’t be interrupted by a random video blasting unexpectedly. Spotting a noisy tab is easy, and unmuting/muting takes just a quick click on the tab itself. 

Expanded privacy protections

Mozilla makes it our mission to put your privacy and security first in the technology we develop. Our goal is for you to worry less every time you go online. The latest Firefox release comes to you with next-level security and privacy that you’ve come to expect from us.

The best private browsing mode out there

All browsers have a private browsing mode, but none match Firefox. The popular Total Cookie Protection moves from the optional strict setting to always-on in private browsing. This feature maintains a separate “cookie jar” for each website you visit while browsing privately. Any time a site deposits a cookie, Firefox locks it up in its own cookie jar so that it can’t be shared with any other website.

An even better Firefox for iOS and Android

The fresh new look covers Firefox everywhere, from desktop browsers to Android and iOS mobile devices. The iOS experience is optimized for iPhone and iPad, with key actions now taking fewer steps for quicker searches, navigation and tab viewing. With refinements in iconography and menu names, the whole browsing experience is more cohesive and harmonious across every platform.

“What inspires us the most are the people that love and use Firefox.”

Firefox design team

Backed by Mozilla, the non-profit that puts people first

Firefox was created by Mozilla as a faster, more private alternative to browsers like Internet Explorer, and now Chrome. Today, our mission-driven company and volunteer community continue to put your privacy above all else. Even as the internet grows and changes, Firefox continues to focus on your right to privacy — we call it the Personal Data Promise: Take less. Keep it safe. No secrets. Your data, your web activity, your life online is protected with Firefox, on your computer, your phone and anywhere you use it.

Things are looking different in 2021

We’re always excited when a new Firefox launches, and when it comes to this major redesign, we’re even more stoked for you to experience it. If you left Firefox behind at some point, this modern approach — inside and out — is designed to win you back and make it your go-to browser.

Keep an eye out for the new look in Firefox for desktop and mobile, rolling out starting today. Download and install for desktop, Android and iOS so you have all the best of Firefox everywhere you browse. 

The post A fresh new Firefox is here appeared first on The Mozilla Blog.

The Mozilla BlogModern, clean new Firefox clears the way to all you need online

We set out in 2021 to reimagine Firefox’s design to be fast, modern and inviting the first time you run it and every day after. We’ve always had your back on privacy, and still do. Now with today’s new Firefox release we’re also bringing you a modern new look designed to streamline and calm things down so you have a fresh new web experience every time you use Firefox.

We’re living in a frenetic time, where people are dealing with tough changes in our daily lives and hard-to-solve problems are popping up everywhere. We think the browser should be a piece of software you can rely on to have your back, one that’s pleasant to look at and works seamlessly with the web.

We’re also on a mission to save you time, whether that’s by making pages load faster, using less memory, or by streamlining everyday use of the browser. Good design is invisible. So if things just work, you don’t really think about it. But a ton of thought has been put into the flow. Our users who have tried the new Firefox have said, “the fact that I was using a new web browser slipped into the background of my consciousness.” And that’s just what we were going for.

Today’s desktop and mobile releases represent the intentional and thoughtful touches we made to give you a safe, calm, and useful experience online. We made these changes with you and your online habits in mind. Check it out for yourself:

Today’s Firefox gets you where you want to go online

Here’s a quick and easy breakdown of what you’ll find:

For starters, we’ve cleaned up the number of things that demand your attention, from prompts and notifications to actions in the menu bar.

  • Simplified unencumbered navigation: The first step to going anywhere online is the toolbar, just type in the URL and press return/enter, and you’re off. In some ways this area serves as your car’s dashboard that you look at every time you get behind the wheel. We kept it simple and focused on these three key areas of the toolbar: 1) Navigation – back, forward and refresh 2) Address Bar – privacy shield (so you know your ambient information is always protected), security lock and where to type in your URL. 3) Frequently Used Settings – reader mode, zoom level and bookmark. Our intent was to make it easier for people to focus on the frequently used items in this area and easily get to where they needed to go.
  • Streamlined clutter-free menus: There are many ways to get to your preferences and settings, and we found that the two most popular were: 1) the hamburger menu – the button on the far right with three equal horizontal lines – and 2) the right-click menu. So, we prioritized the content based on what people clicked on when they visited the menu. We made the labels less cryptic and easier to understand, and we removed some icons so that people could see at a glance where they wanted to go.
  • Productivity-inspired new tab design: Tabs. We use them every day. They signal where you are, but we need them to do more work. Everything from conveying information about what video is playing to where your next Zoom meeting is. It’s no surprise that more than 50% of people have 4 tabs or more open. We redesigned these tabs so that they floated neatly, and we added the visual indicators, like blocking autoplay videos until you’re ready to visit that tab. We detached the tab from the browser to invite you to move, rearrange and pull out tabs into a new window to suit your flow, and organize them so they’re easier for you to find.
  • We shushed notifications: Your web experience shouldn’t be bogged down by a bunch of notifications, and if you have to be given a heads-up on something, it should look good and not be a distraction. We consolidated the panels so you can respond more quickly and get back to why you were online in the first place. We specifically reduced some of the frustration and re-prompting associated with getting in and out of Google Meet meetings. Thanks to this pared-down interface, you can get to all your web calls and meetings with fewer clicks.
  • Fresh, new Firefox for iOS: We also took care to pay attention to the key touches, particularly on Apple devices. The improved iPhone and iPad experience includes a modernized, optimized and differentiated Firefox user interface. We reduced the steps to search in a new tab by automatically popping up the keyboard, emphasizing the ability to do quick searches with the search engine logo, and adding the append feature. You’ll see improved navigation around the app with new tab views, and we moved the synced tabs into the tab tray for better discoverability on any device. We refreshed design elements such as iconography and menu item naming to be consistent across our desktop and Firefox for Android platforms.

17 billion clicks drove us to create a new Firefox

As you can see, for the past couple of months we’ve obsessed over everything from the icons you click to the address bar to the navigation buttons and menus you use. When we embarked on this journey to redesign the browser, we started by taking a closer look at where people were spending their time in the Firefox browser. We needed to know what clicks led to an action and whether people accomplished what they set out to do when they clicked. For a month, we looked closely at the parts of the browser that were “sparking joy” for people, and the parts that weren’t:

We learned there was an opportunity to create a more streamlined environment that gets people where they need to go with fewer clicks and distractions. Our goal was to get people to their destination faster, with the fewest clicks. Based on our user data, we observed how people used Firefox to get to their online destinations:

In one month, there were 17 billion clicks in the Firefox browser. Those clicks concentrated in three major areas of the browser:

  • Tab bar – About 43% of the clicks go to the top portion of the browser, where people can open as many tabs as they want.
  • Navigation bar – About 33% of the clicks go to the area below the tab bar, where people can move forward, backward or refresh, and type the URL in the address bar, as well as use other functions on the navigation bar.
  • Bookmark bar – About 5% of the clicks go to the section where people bookmark their frequently visited places.

Another area we observed was the menu area. Previously, we had three menus: the right-click menu, the meatball menu (it appears as three dots at the end of the address bar), and the hamburger menu (it appears as three parallel lines at the far right of the address bar). The two popular menus were the right-click menu and the three-parallel-lines menu. People love the right-click menu, or intuitively believe that what they need is there. We also saw differences around the world, like:

  • Canada – Gets the crown for efficiency, with an average of 12+ keyboard shortcuts and about 19 right-clicks per user.
  • France – At an average of about 93 clicks per user, French users click around the most within the browser.
  • Great Britain – We always wondered which country had the most tab hoarders: in the UK, about 7% of users have 16 or more tabs open, the highest share.
  • India – The tidiest in how they use browsers, with over 60% of users having three or fewer tabs open.
  • United States – Topped the list of countries whose users like to go into their settings and personalize their browser.

With these insights we wanted to create a place that felt fresh and modern, and kept things like icons and menus flowing in a cohesive way. This meant using simple design and easy-to-understand language to lessen people’s cognitive load, essentially getting them faster to the places they wanted to go. It also meant retiring intrusive alerts and messages to avoid a jarring experience and provide a more calming environment. Throughout this process, our extraordinary team of designers really thought about the colors and iconography and wanted to make them more refined and consistent.

We’ve taken Private Browsing mode to the next level

We’ve designed the new Firefox to ensure that using a browser with industry-leading privacy is always an inviting experience. For those who browse exclusively or occasionally in Private Browsing mode, we take privacy to another level with today’s privacy protections. In February, we rolled out Total Cookie Protection in ETP (Enhanced Tracking Protection) strict mode. This privacy protection stops cookies from tracking you around the web by creating a separate cookie jar for every website. Today, Total Cookie Protection is now available in Private Browsing mode.

What’s next for Firefox

We’re excited to share this new Firefox experience across your devices. We’re planning exciting new features to roll out this year that will build on the modern web redesign this new Firefox delivers. No matter what device you choose to tackle the web in the future, Firefox will be there for you. We’ll share more details once we have them.

You can download the latest version of Firefox for your desktop and mobile devices and get ready for a new look and feel.

The post Modern, clean new Firefox clears the way to all you need online appeared first on The Mozilla Blog.

Mozilla Security BlogFirefox 89 blocks cross-site cookie tracking by default in private browsing

At Mozilla, we believe that your right to privacy is fundamental. Unfortunately, for too long cookies have been used by tracking companies to gather data about you as you browse the web. Today, with the launch of Firefox 89, we are happy to announce that Firefox Private Browsing windows now include our innovative Total Cookie Protection by default. That means: when you open a Private Browsing window, each website you visit is given a separate cookie jar that keeps cookies confined to that site. Cookies can no longer be used to follow you from site to site and gather your browsing history.

What is Total Cookie Protection?

In February of this year we introduced Total Cookie Protection, a new, extra-strong protection against cross-site tracking cookies. Since Firefox 86, Total Cookie Protection has been available for users who have ETP Strict Mode enabled. Now, with Firefox 89, we are extending this same protection to Private Browsing windows.

To recap: a cookie is a small piece of data that websites can ask your browser to store on your computer. Traditionally, browsers have allowed websites to share cookies in what is effectively a single cookie jar. Firefox’s Total Cookie Protection is a sophisticated set of privacy improvements that enforce a simple, revolutionary principle: your browser should not allow the sharing of cookies between websites. This principle is now enforced in Firefox Private Browsing windows by creating a separate cookie jar for every website you visit, as illustrated here:

Previously, third-party cookies were shared between websites. Now, every website gets its own cookie jar so that cookies can’t be used to share data between them. (Illustration: Meghan Newell)

As we described in February, Total Cookie Protection covers not just cookies but a variety of browser technologies that previously were able to be used for cross-site tracking. To ensure a smooth browsing experience, Total Cookie Protection makes occasional exceptions to share cookies between websites when they are needed for cross-site logins or similar cross-site functionality.
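Conceptually, the partitioning amounts to keying cookie storage on the pair of top-level site and cookie host instead of the cookie host alone. The following JavaScript sketch is purely illustrative; the class and method names are mine, not Firefox internals:

```javascript
// Illustrative sketch of per-site cookie partitioning (not Firefox's code).
// Cookies are keyed by (topLevelSite, cookieHost) rather than cookieHost alone.
class PartitionedCookieJar {
  constructor() { this.jars = new Map(); }
  key(topLevelSite, cookieHost) { return `${topLevelSite}|${cookieHost}`; }
  set(topLevelSite, cookieHost, name, value) {
    const k = this.key(topLevelSite, cookieHost);
    if (!this.jars.has(k)) this.jars.set(k, new Map());
    this.jars.get(k).set(name, value);
  }
  get(topLevelSite, cookieHost, name) {
    return this.jars.get(this.key(topLevelSite, cookieHost))?.get(name);
  }
}

// A tracker cookie set while visiting siteA is invisible when the same
// tracker is embedded on siteB:
const jar = new PartitionedCookieJar();
jar.set("siteA.example", "tracker.example", "id", "abc123");
jar.get("siteB.example", "tracker.example", "id"); // undefined
```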

Firefox Private Browsing Windows, now with even more privacy

With the addition of Total Cookie Protection, Firefox’s Private Browsing windows have the most advanced privacy protections of any major browser’s private browsing mode. The following protections are included in Private Browsing windows by default:

If you have Firefox installed, you don’t need to do anything special to benefit from this upgrade to Private Browsing windows. To open a Private Browsing window, click on the Application Menu button (☰) and choose “New Private Window”:

Screenshot of the application menu with New Private Window selected.

Or, if you like keyboard shortcuts, just press Ctrl + Shift + P (Cmd + Shift + P on Mac). When you are done with that private browsing session, you can simply close all your Private Browsing windows. All the cookies and other stored data from the websites you visited will be immediately deleted!

As we continue to strengthen Firefox’s privacy protections, Mozilla is committed to maintaining state-of-the-art performance and a first-class browsing experience. Stay tuned for more privacy advances in the coming months!

Thank you

We are grateful to the many Mozillians who have contributed to or supported this new enhancement to Firefox, including Steven Englehardt, Andrea Marchesini, Tim Huang, Johann Hofmann, Gary Chen, Nihanth Subramanya, Paul Zühlcke, Tanvi Vyas, Anne van Kesteren, Ethan Tseng, Prangya Basu, Wennie Leung, Ehsan Akhgari, Dimi Lee, Selena Deckelmann, Mikal Lewis, Tom Ritter, Eric Rescorla, Olli Pettay, Philip Luk, Kim Moir, Gregory Mierzwinski, Doug Thayer, and Vicky Chin.

The post Firefox 89 blocks cross-site cookie tracking by default in private browsing appeared first on Mozilla Security Blog.

Daniel Stenbergcurl localhost as a local host

When you use the name localhost in a URL, what does it mean? Where does the network traffic go when you ask curl to download http://localhost ?

Is “localhost” just a name like any other, or do you think it implies speaking to your local host on a loopback address?


curl http://localhost

Previously, the name was “resolved” using the standard resolver mechanism into one or more IP addresses, and then curl connected to the first one that worked and got the data from there.

The (default) resolving phase there involves asking the getaddrinfo() function about the name. In many systems, it will return the IP address(es) specified in /etc/hosts for the name. In some systems, things are set up a bit more unusually, causing a DNS query to be sent out over the network to answer the question.

In other words: localhost was not really special, and using this name in a URL worked just like any other name in curl. In most cases on most systems it would resolve to and ::1 just fine, but in some cases it would mean something completely different. Often as a complete surprise to the user…

Starting now

curl http://localhost

Starting in commit 1a0ebf6632f8, to be released in curl 7.78.0, curl now treats the host name “localhost” specially and will use an internal “hard-coded” set of addresses for it – the ones we typically use for the loopback device: and ::1. It cannot be modified by /etc/hosts and it cannot be accidentally or deliberately tricked by DNS resolves. localhost will now always resolve to a local address!
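In pseudo-JavaScript, the new behavior amounts to something like this. This is only an illustrative sketch: curl's actual implementation is in C, and the function names here are made up.

```javascript
// Illustrative sketch of the special case (curl itself is written in C).
function resolveLocalhost(ipv6Enabled = true) {
  // Hard-coded loopback addresses; /etc/hosts and DNS are never consulted.
  return ipv6Enabled ? ["", "::1"] : [""];
}

function resolveHost(name, systemResolve, ipv6Enabled = true) {
  if (name.toLowerCase() === "localhost") {
    return resolveLocalhost(ipv6Enabled);
  }
  return systemResolve(name); // getaddrinfo()-style lookup for everything else
}
```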

Do those kinds of mistakes or modifications really happen? Yes, they do. We’ve seen it, and you can find other projects reporting it as well.

Who knows, it might even be a few microseconds faster than doing the “full” resolve call.

(You can still build curl without IPv6 support at will; on such builds, the ::1 address will of course not be provided for localhost.)

Specs say we can

RFC 6761 is titled Special-Use Domain Names, and its section 6.3 explicitly allows or even encourages this:

Users are free to use localhost names as they would any other domain names.  Users may assume that IPv4 and IPv6 address queries for localhost names will always resolve to the respective IP loopback address.

Followed by

Name resolution APIs and libraries SHOULD recognize localhost names as special and SHOULD always return the IP loopback address for address queries and negative responses for all other query types. Name resolution APIs SHOULD NOT send queries for localhost names to their configured caching DNS server(s).

Mike West at Google also once filed an I-D with even stronger wording, suggesting we should always let localhost be local. That was never turned into an RFC, but it shows a mindset.

(Some) Browsers do it

Chrome has been special-casing localhost this way since 2017, as can be seen in this commit and I think we can safely assume that the other browsers built on their foundation also do this.

Firefox landed their corresponding change during the fall of 2020, as recorded in this bugzilla entry.

Safari (on macOS at least) does however not do this. It rather follows what /etc/hosts says (and presumably DNS if the name is not present in there). I’ve not found any official position on the matter, but I found this source code comment indicating that localhost resolving might change at some point:

// FIXME: Ensure that localhost resolves to the loopback address.

Windows (kind of) does it

For some time now, Windows has resolved “localhost” internally, and it is not present in their /etc/hosts alternative. I believe it is more of a hybrid solution though, as I believe you can put localhost into that file and then have that custom address get used for the name.

Secure over http://localhost

When we know for sure that http://localhost is indeed a secure context (that’s a browser term I’m borrowing, sorry), we can follow the example of the browsers: for example, curl could start honoring cookies with the “secure” property over this host even when the transfer is done over plain HTTP. Previously, secure in that regard has always just meant HTTPS.

This change in cookie handling has not happened in curl yet, but with localhost being truly local, it seems like an improvement we can proceed with.

Can you still trick curl?

When I mentioned this change proposal on Twitter, two of the most common questions in response were:

  1. can’t you still trick curl by routing somewhere else?
  2. can you still use --resolve to “move” localhost?

The answers to both questions are yes.

You can of course commit the most hideous hacks to your system and reroute traffic to somewhere else if you really wanted to. But I’ve never seen or heard of anyone doing it, and it certainly will not be done by mistake. But then you can also just rebuild your curl/libcurl and insert another address than the default as “hardcoded” and it’ll behave even weirder. It’s all just software, we can make it do anything.

The --resolve option is this magic thing to redirect curl operations from the given host to another custom address. It also works for localhost, since curl will check the cache before the internal resolve and --resolve populates the DNS cache with the given entries. (Provided to applications via the CURLOPT_RESOLVE option.)
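Putting the pieces together, the lookup order can be sketched like this. Again, this is illustrative JavaScript with names of my own choosing; curl implements this in C.

```javascript
// Illustrative lookup order: --resolve entries (the DNS cache) are checked
// before the hard-coded localhost addresses, so --resolve can still "move"
// localhost.
function resolve(host, cache, systemResolve) {
  if (cache.has(host)) return cache.get(host);       // --resolve / CURLOPT_RESOLVE
  if (host === "localhost") return ["", "::1"]; // hard-coded loopback
  return systemResolve(host);                        // normal resolver otherwise
}

// e.g. curl --resolve localhost:80: http://localhost
const cache = new Map([["localhost", [""]]]);
resolve("localhost", cache, () => []);     // [""]
resolve("localhost", new Map(), () => []); // ["", "::1"]
```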

What will break?

With a large enough number of users, every single modification or even improvement is likely to trigger something unexpected and undesired on at least one system somewhere. I don’t think this change is an exception. I fully expect this to cause someone to shake their fist in the sky.

However, I believe there are fairly good ways to restore even the most complicated use cases after this change, even if it might take some hands-on work to update the script or application. I still believe this change is a general improvement for the vast majority of use cases and users. That’s also why I haven’t provided any knob or option to toggle off this behavior.


The top photo was taken by me (the symbolism being that there’s a path to take somewhere but we don’t really know where it leads or which one is the right to take…). This curl change was written by me. Mike West provided me the Chrome localhost change URL. Valentin Gosu gave me the Firefox bugzilla link.

Karl DubostGet Ready For Three Digits User Agent Strings

In 2022, Firefox and Chrome will reach a version number with three digits: 100. It's time to get ready and extensively test your code, so it doesn't return null or, worse, 10 instead of 100.

Some contexts

The browser user agent string is used in many circumstances: on the server side with the User-Agent HTTP header, and on the client side with navigator.userAgent. Browsers lie about it. Browser detection by web apps and websites does not cover all cases. So browsers have to modify the user agent string on a site-by-site basis.

Browsers Release Calendar

According to the Firefox release calendar, during the first quarter of 2022 (probably March), Firefox Nightly will reach version 100. That puts the Firefox stable release of version 100 around May 2022 (if the schedule doesn't change until then).

And the Chrome release calendar currently sets the date for Chrome 100 at March 29, 2022.

What Is the Mozilla Webcompat Team Doing?

Dennis Schubert started to test JavaScript libraries, but this tests only the library versions which are up to date. And as we know, the Web is a legacy machine full of history.

The webcompat team will probably automatically test the top 1000 websites. But this is very rudimentary. It will not cover everything. Sites always break in strange ways.

What Can You Do To Help?

Browse the Web with a 100 UA string

  1. Change the user agent string of your favorite browser. For example, if the string is Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0, change it to be Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
  2. If you notice something that is breaking because of the UA string, file a report on webcompat. Do not forget to check that it is working with the normal UA string.

Automatic tests for your code

If your web app has a JavaScript Test suite, add a profile with a browser having 100 for its version number and check if it breaks. Test both Firefox and Chrome (mobile and desktop) because the libraries have different code paths depending on the user agent. Watch out for code like:

const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d\d)/); //  ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d{2})/); // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d\d)\./); //  null
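A pattern that keeps working with three digits captures one or more digits instead of exactly two, for example:

```javascript
const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d+)\./);                  // ["Firefox/100.", "100"]
parseInt(ua_string.match(/Firefox\/(\d+)\./)[1], 10); // 100
```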

Compare version numbers as integer not string

Compare integers, not strings, once you have decided on a minimum version for supporting a browser, because:

"80" < "99" // true
"80" < "100" // false
parseInt("80", 10) < parseInt("99", 10) // true
parseInt("80", 10) < parseInt("100", 10) // true
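Combining the two fixes, a helper along these lines (the function name is my own) extracts the major version as an integer and compares it numerically:

```javascript
// Extract the major version as an integer; returns null when not found.
// Assumes the browser token contains no regex metacharacters.
function majorVersion(uaString, browser = "Firefox") {
  const m = uaString.match(new RegExp(browser + "\\/(\\d+)"));
  return m ? parseInt(m[1], 10) : null;
}

const ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0";
majorVersion(ua);       // 100
majorVersion(ua) >= 80; // true
```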


If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.


Cameron KaiserTenFourFox FPR32 SPR1 available

TenFourFox Feature Parity Release 32 Security Parity Release 1 "32.1" is available for testing (downloads, hashes). There are no changes to the release notes except that Mozilla has lengthened 78ESR by a couple more weeks, so the end of official builds is now extended to October 5, 2021. Assuming no major problems, FPR32.1 will go live Monday evening Pacific time as usual.

The Mozilla BlogBuilding a more privacy preserving ads-based ecosystem

Advertising is central to the internet economy. It funds many free products and services. But it is also very intrusive. It is powered by ubiquitous surveillance and it is used in ways that harm individuals and society. The advertising ecosystem is fundamentally broken in its current form.

Advertising does not need to harm consumer privacy. As a browser maker and as an ethical company driven by a clear mission, we want to ensure that the interests of users are represented and that privacy is a priority. We also benefit from the advertising ecosystem which gives us a unique perspective on these issues.

Every part of the ecosystem has a role to play in strengthening and improving it. That is why we see potential in the debate happening today about the merits of privacy preserving advertising.

As this debate moves forward, there are two principles that should anchor work on this topic to ensure we deliver a better web to consumers.

Consumer Privacy First

Improving privacy for everyone must remain the north star for review of proposals, such as Google’s FLoC and Microsoft’s PARAKEET, and parallel proposals from the ad tech industry. At Mozilla, we will be looking at proposals through this lens, which is always a key factor for any decision about what we implement in Firefox. Parties that aren’t interested in protecting user privacy or in advancing a practical vision for a more private web will slow down the innovation that is possible to achieve and necessary for consumers.

Development in the Open

It is important that proposals are transparently debated and collaboratively developed by all stakeholders through formal processes and oversight at open standards development organizations (“SDOs”). Critical elements of online infrastructure should be developed at SDOs to ensure an interoperable and decentralized open internet. Stakeholder commitment to final specifications and timelines is just as important, because without this, the anticipated privacy benefits to consumers cannot materialize. 

At their core, the proposals currently being debated, along with their testing plans, have important potential to improve how advertising is delivered, but they may also raise privacy and centralization issues that need to be addressed. This is why it’s so critical that this process plays out in the open at SDOs.

We hope that all stakeholders can commit to these two principles. We have a real opportunity now to improve the privacy properties of online advertising—an industry that hasn’t seen privacy improvement in years. We should not squander this opportunity. We should instead draw on the internet’s founding principles of transparency, public participation and innovation to make progress.

For more on this:

How online advertising works today and Privacy-Preserving Advertising

Privacy analysis of FLoC

The post Building a more privacy preserving ads-based ecosystem appeared first on The Mozilla Blog.

The Mozilla BlogThe future of ads and privacy

The modern web is funded by advertisements. Advertisements pay for all those “free” services you love, as well as many of the products you use on a daily basis — including Firefox. There’s nothing inherently wrong with advertising: Mozilla’s Principle #9 states that “Commercial involvement in the development of the internet brings many benefits.” However, that principle goes on to say that “a balance between commercial profit and public benefit is critical” and that’s where things have gone wrong: advertising on the web in many situations is powered by ubiquitous tracking of people’s activity on the web in a way that is deeply harmful to users and to the web as a whole.

Some Background

The ad tech ecosystem is incredibly complicated, but at its heart, the way that web advertising works is fairly simple. As you browse the web, trackers (mostly, but not exclusively, advertisers) follow you around and build up a profile of your browsing history. Then, when you go to a site which wants to show you an ad, that browsing history is used to decide which of the potential ads you actually get shown.

The visible part of web tracking is creepy enough — why are those pants I looked at last week following me around the Internet? — but the invisible part is even worse: hundreds of companies you’ve never heard of follow you around as you browse and then use your data for their own purposes or sell it to other companies you’ve also never heard of. 

The primary technical mechanism used by trackers is what’s called “third party cookies”. A good description of third party cookies can be found here. In short, a cookie is a piece of data that a website stores on your browser and can retrieve later. A third party cookie is a cookie which is set by someone other than the page you’re visiting (typically a tracker). The tracker works with the web site to embed some code from the tracker on their page (often this code is also responsible for showing ads) and that code sets a cookie for the tracker. Every time you go to a page the tracker is embedded on, it sees the same cookie and can use that to link up all the sites you go to.
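To make the mechanism concrete, here is a deliberately simplified toy model (my own illustration, not how browsers actually implement cookies): the cookie jar is keyed by the *cookie’s* origin, not the site you are visiting, so an embedded tracker sees the same identifier everywhere its code runs.

```javascript
// Toy model of third-party cookie tracking (illustration only).
// One entry per cookie origin, so the tracker gets the same id
// back on every site that embeds it, linking your visits.
const jar = new Map(); // cookieOrigin -> value

function visit(site, trackerOrigin) {
  if (!jar.has(trackerOrigin)) {
    // First sighting: the tracker "sets a cookie".
    jar.set(trackerOrigin, `uid-${jar.size + 1}`);
  }
  return { site, trackerSees: jar.get(trackerOrigin) };
}

console.log(visit("news.example", "tracker.example"));
// { site: 'news.example', trackerSees: 'uid-1' }
console.log(visit("shop.example", "tracker.example"));
// { site: 'shop.example', trackerSees: 'uid-1' } — same id, visits linked
```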

Cookies themselves are an important part of the web — they’re what let you log into sites, maintain your shopping carts, etc. However, third party cookies are used in a way that the designers of the web didn’t really intend and unfortunately, they’re now ubiquitous. While they have some legitimate uses, like federated login, they are mostly used for tracking user behavior.

Obviously, this is bad and it shouldn’t be a surprise to anybody who has followed our work in Firefox that we believe this needs to change. We’ve been working for years to drive the industry in a better direction. In 2015 we launched Tracking Protection, our first major step towards blocking tracking in the browser. In 2019 we turned on a newer version of our anti-tracking technology by default for all of our users. And we’re not the only ones doing this.

We believe all browsers should protect their users from tracking, particularly cookie-based tracking, and should be moving expeditiously to do so.

Privacy Preserving Advertising

Although third-party cookies are bad news, now that they are so baked into the web, it won’t be easy to get rid of them. Because they’re a dual-use technology with some legitimate applications, just turning them off (or doing something more sophisticated like Firefox Total Cookie Protection) can cause some web sites to break for users. Moreover, we have to be constantly on guard against new tracking techniques.

One idea that has gotten a lot of attention recently is what’s called “Privacy Preserving Advertising” (PPA) . The basic idea has a long history with systems such as Adnostic, PrivAd, and AdScale but has lately been reborn with proposals from Google, Microsoft, Apple, and Criteo, among others. The details are of course fairly complicated, but the general idea is straightforward: identify the legitimate (i.e., non-harmful) applications for tracking techniques and build alternative technical mechanisms for those applications without threatening user privacy. Once we have done that, it becomes much more practical to strictly limit the use of third party cookies.

This is a generally good paradigm: technology has advanced a lot since cookies were invented in the 1990s and it’s now possible to do many things privately that used to require just collecting user data. But, of course, it’s also possible to use technology to do things that aren’t so good (which is how we got into this hole in the first place). When looking at a set of technologies like PPA, we need to ask:

  1. Are the use cases for the technology actually good for users and for the web?
  2. Do these technologies improve user privacy and security? Are they collecting the minimal amount of data that is necessary to accomplish the task?
  3. Are these technologies being developed in an open standards process with input from all stakeholders?

Because this isn’t just one technology but rather a set of them, we should expect some pieces to be better than others. In particular, ad measurement is a use case that is important to the ecosystem, and we think that getting this one component right can drive value for consumers and engage advertising stakeholders. There’s overlap here with technologies like Prio which we already use in Firefox. On the other hand, we’re less certain about a number of the proposed technologies for user targeting, which have privacy properties that seem hard to analyze. This is a whole new area of technology, so we should expect it to be hard, but that’s also a reason to make sure we get it right.

What’s next?

Obviously, this is just the barest overview. In upcoming posts we’ll provide a more detailed survey of the space, covering the existing situation in more detail, some of the proposals on offer, and where we think the big opportunities are to improve things in both the technical and policy domains.

For more on this:

Building a more privacy preserving ads-based ecosystem

Privacy analysis of FLoC

The post The future of ads and privacy appeared first on The Mozilla Blog.

Sam FosterIdeas on a lower-carbon internet through scheduled downloads and Quality of Service requests

Other titles:

  • The impact of internet use and what we might do about it?
  • Opportunities for powering more internet use with renewables
  • I want this thing, but not until later
  • A story of demand-side prioritization, scheduling and negotiation to take advantage of a fluctuating energy supply.

I recently got interested in how renewable power generation plays into the carbon footprint of internet usage. We need power to run and charge the devices we use to consume internet content, to run the networks that deliver that content to us, and to power the servers and data centers that house those servers.

Powering the internet eats up energy. The power necessary to serve up the files, do the computation, encode and package it all up to send it down the wire to each of the billions of devices making those requests consumes energy on an enormous scale. The process of hosting and delivering content is so power hungry, the industry is driven to large extent by the cost and availability of electricity. Data centers are even described in terms of the power they consume - as a reasonable proxy for the capacity they can supply.

One of the problems we hear about constantly is that the intermittent and relatively unpredictable nature of wind and solar energy means it can only ever make up a portion of a region’s electricity generation capacity. There’s an expectation of always-on power availability; regardless of the weather or time of day, a factory must run, a building must be lit, and if a device requests some internet resource the request must be met immediately. So, we need reliable base load generation to meet most energy demands. Today, that means coal, natural gas, nuclear and hydro generation plants - which can be depended on to supply energy day and night, all year round. Nuclear and hydro are low-carbon, but they can also be expensive and problematic to develop. Wind and solar are much less so, but as long as their output is intermittent they can only form part of the solution for de-carbonizing electricity grids across the world - as long as demand, not supply, is king.

There are lots of approaches to tackling this. Better storage options (PDF) smooth out the intermittency of wind and solar - day to day if not seasonally. Carbon capture and sequestration lower the carbon footprint of fossil fuel power generation - but raise the cost. What if that on-demand, constant availability of those data centers’ capacity was itself a variable? Suppose the client device issuing the request had a way to indicate priority and expected delivery time, would that change the dynamic?

Wind power tends to peak early in the morning, solar in the afternoon. Internet traffic is at its highest during the day and evening, and some - most - is necessarily real-time. But if I’m watching a series on Netflix, the next episode could be downloaded at any time, as long as it’s available by the next evening when I sit down to watch it. And for computational tasks - like compiling some code, running an automated test suite, or encoding video - sometimes you need it as soon as possible, other times it’s less urgent. Communicating priority and scheduling requirements (a.k.a. Quality of Service) from the client through to the infrastructure used to fulfill a request would allow smarter balancing of demand and resources. It would open up the door to better use of less constant (non-baseload) energy sources. The server could defer some tasks when power is least available or most expensive, and process them later when, for example, the sun comes up or the wind blows. Smoothing out spikes in demand would also reduce the need for so-called “peaker” plants - typically natural gas power plants that are spun up to meet excess energy demand.

“Kestler: While intermittent power is a challenge for data center operations, the development of sensors, software tools and network capabilities will be at the forefront of advancing the deployment of renewables across the globe. The modernization of the grid will be dependent on large power consumers being capable of operating in a less stable flow of electrons.”

What’s Ahead for Data Centers in 2021

Google has already experimented with some of this, and it’s a fascinating and encouraging read.

“Results from our pilot suggest that by shifting compute jobs we can increase the amount of lower-carbon energy we consume”

Our data centers now work harder when the sun shines and wind blows

There are clearly going to be hurdles for wide-scale adoption of this kind of strategy, and it’s never going to work for all cases. But with a problem at this scale, a solution that shaves off 1%, or a fraction of 1%, can still translate into huge monetary and carbon savings. So, what would it take? Are there practical steps that we non-data-center-operators can take to facilitate this kind of negotiation between the client and the massive and indifferent upstream infrastructure that supports it?

The low hanging fruit in this scenario is video streaming. It represents an outsized percentage of all internet traffic - and data center load. Netflix alone generates 15% of all global internet traffic. What if even 1% of that could be shifted to be powered entirely by renewable energy, by virtue of deferred processing at the supply side, or scheduled download at the client side? Often it’s the case that when I click to watch a video, I need it right there and then - perhaps it is a live event, or I didn’t know I needed it until that minute. Sometimes not, though. If it were possible to schedule the download, ensuring it was there on my device when I did need it, the benefits would ripple through the whole system - content delivery providers would save money and maybe the grid itself would be able to absorb more intermittent renewable generation.

There are other opportunities and I don’t want to get too hung up on specifics. But the notion of attaching Quality of Service in some way to some requests, to facilitate smarter utilization of seasonal, regional and weather-dependent energy generation fluctuations, seems promising to me. Fundamentally, power demand from worldwide internet traffic is extremely dynamic. We can better meet that demand with equally dynamic low and zero carbon sources if we can introduce patterns and signals at all levels of the system to allow it to plan and adapt.

When I get to the end of a piece like this I’m always left wondering “what is the point?”. Is this just a rant into the void, hoping someone listens? It’s certainly not an actionable plan for change. Writing it down helps me process some of these ideas, and I hope it starts conversations and prompts you to spot these kinds of indirect opportunities to tackle climate change. And if you are in a position to nudge any of this towards really existing in the world, that would be great. I work at Mozilla; we make a web browser and have our own substantial data-center and compute-time bill. I’ll be looking into what change I can help create there.

Some References

I collected a large list of papers and articles as I looked into this. Here’s a smaller list:

Daniel StenbergTaking hyper-curl further

Thanks to funding by ISRG (via Google), we merged the hyper powered HTTP back-end into curl earlier this year as an alternative HTTP/1 and HTTP/2 implementation. Previously, there was only one way to do HTTP/1 and 2 in curl.


Core libcurl functionality can be powered by optional and alternative backends in a way that doesn’t change the API or directly affect the application. This is done by featuring internal APIs that can be implemented by independent components. See the illustration below (click for higher resolution).

This is a slide from Daniel’s libcurl under the hood presentation.

curl 7.75.0 became the first curl release that could be built with hyper. The support for it was labeled “experimental”: while most common and basic use cases were supported, we still couldn’t run the full test suite when built with it, and some edge cases even crashed.

We’ve subsequently fixed a few of the worst flaws, so hyper-powered curl has gradually improved since then.

Going further

Our best friends at ISRG have now once again put up funding, and I’ll spend more work hours on making sure that more (preferably all) tests can run with hyper.

I’ve already started. Right now I’m sitting and staring at test case 154, which does an HTTP PUT using Digest authentication and an Expect: 100-continue header; this test case currently doesn’t work correctly when built to use hyper. I’ll report back in a few weeks and let you know how it goes – and then I don’t mean with just test 154!

Consider yourself invited to join the #curl IRC channel and chat if you want live reports or want to help out!


You too can fund me to do curl work. Get in touch!

Raphael PierzinaEnable Fission tt(c) on more platforms

Last week my coworker Andrew Halberstadt talked me through the process of configuring Firefox CI to run a given test suite with Fission enabled on additional platforms. I am working on a patch to do this for our telemetry integration tests which are set up with mozharness and use treeherder symbol tt(c). Since the process should be close to identical for similar test suites, I decided to summarize what I’ve learned in this post, so next time someone on my team wants to do this, we don’t need to bug Andrew again.

Daniel StenbergGiving away an insane amount of curl stickers

Part 1. The beginning. (There will be at least one more part later on following up the progress.)

On May 18, 2021 I posted a tweet that I was giving away curl stickers for free to anyone who’d submit their address to me. It looked like this:

Every once in a while, when I post a photo that involves curl stickers, a few people ask me where they can get hold of some. I figured it was about time I properly offered “the world” some. I expected maybe 50 or 100 people would take me up on this offer.

The response was totally overwhelming and immediate. Within the first hour 270 persons had already requested stickers. After 24 hours when I closed the form again, 1003 addresses had been submitted. To countries all around the globe. Quite the avalanche.

Assessing the damage

This level of interest put up some challenges I hadn’t planned for. Do I have enough stickers? Suddenly, putting 3 or 5 stickers in each parcel will have a major impact. Getting envelopes and addresses onto them for a thousand deliveries is quite a job! Not to mention the cost. A “standard mail” to outside Sweden using the regular postal service is 24 SEK. That’s ~2.9 USD. Per parcel. Add the extra expenses and we’re at an adventure north of 3,000 USD.

For this kind of volume, I can get a better rate by registering as a “company customer”. It adds some extra work for me, though, and I haven’t worked out the details around this yet.

Let me be clear: I already from the beginning planned to ask for reimbursement from the curl fund for my expenses for this stunt. I would mostly add my work on this for free. Maybe “hire” my daughter for an extra set of hands.


During the time the form was up, we also received 51 donations to Open Collective (as the form mentioned that, and I also mentioned it on Twitter several times). The donated total was 943 USD. The average donation was 18 USD, the largest ones (2) were at 100 USD and the smallest was 2 USD.

Of course some donations might not be related to this and some donations may very well arrive after this form was closed again.

Cleaning up

If I had thought this through better at the beginning, I would not have asked for the address using a free text field like this. People clearly don’t have the same idea of how to do this as I do.

I had to manually go through the addresses to insert newlines, add country names and remove obviously broken addresses. For example, a common pattern was an address consisting of only a 6-8 digit number. I think over 20 addresses were specified like that!

Clearly there’s a lesson to be had there.

After removing obviously bad and broken addresses there were 978 addresses left.


I got postal addresses to 65 different countries. A surprisingly diverse collection I think. The top 10 countries were:

The Netherlands: 24

Countries that were only entered once: Dubai, Iran, Japan, Latvia, Morocco, Nicaragua, Philippines, Romania, Serbia, Thailand, Tunisia, UAE, Ukraine, Uruguay, Zimbabwe

Figuring out the process

While I explicitly said I wouldn’t guarantee that everyone gets stickers, I want to do my best in delivering a few to every single one who asked for them.


I have the best community. Without me saying a word or asking for it, several people raised their hands and volunteered to offload the sending to their countries. I could send one big batch to them and they redistribute within their countries. They would handle US, Czechia, Denmark and Switzerland for me.

But why stop at those four? In my next step I put up a public plea for more volunteers on Twitter and man, I got myself a busy evening; after a few hours I had friends signed up from over 20 countries offering to redistribute stickers within their respective countries. This way, we share the expenses and the workload, and mailing out many smaller parcels within countries is also a lot cheaper than me sending them all individually from Sweden.

After a lot of communications I had an army of helpers lined up.

28 distributors will help me do 724 sticker deliveries to 24 countries. Leaving me to do just the remaining 282 packages to the other 41 countries.

Stickers inventory

I’ve offered “a few” stickers and I decided that means 4.

978 * 4 = 3912

Plus I want to add 10 extra stickers to each distributor, and there are 28 distributors.

3912 + 28 * 10 = 4192

Do I have 4200 curl stickers? I emptied my sticker drawer, put them all on the table and took this photo. All of the curl stickers you see on the photo have been donated to us/me by sponsors. Most of them from Sticker Mule, some of them from XXXX.

I think I might be a little “thin”. Luckily, I have friends that can help me stock up…

(There are some Haxx and wolfSSL stickers on the photo as well, because I figured I should spice up some packages with some of those as well.)


The stickers still haven’t shipped from my place but the plan is to get the bulk of them shipped from me within days. Stay tuned. There will of course be more delays on the route to their destinations, but rest assured that we intend to deliver to all who asked for them!

Will I give away more curl stickers?

Not now, and I don’t have any plans on doing this stunt again very soon. It was already way more than I expected. More attention, more desire and definitely a lot more work!

But at the first opportunity where you meet me physically I will of course give away stickers.

Buy curl stickers?

I’ve started looking into offering stickers for purchase but I’m not ready to make anything public or official yet. Stay tuned and I promise you’ll learn and be told when the sticker shop opens.

If it happens, the stickers will not be very cheap but you should rather see each such sticker as a mini-sponsorship.

Follow up

Stay tuned. I will be back with updates.

Mike TaylorThe hidden meaning of 537.36 in the Chromium User-Agent string

If you’re like me, first of all, very sorry to hear that, but you are probably spending your Friday morning wondering what the meaning of 537.36 is in the Chromium User-Agent string. It appears in two places: AppleWebKit/537.36 and Safari/537.36.

As any serious researcher does, the first place I went to for answers was numeroscop.net, to check out the “Angel Number Spiritual Meaning”.

(I enjoy a good data-collection-scheme-disguised-as-fortune-telling site as much as anyone else, don’t judge me.)

engraving an angel with 2 horns, blowing the numbers 537 and 36

537 means:

“Positive changes in the material aspect will be an extra confirmation that you have made the right choice of a life partner”

And 36 means:

“[Y]es, you are doing everything right, but you are not doing everything that you could do”.

Angels probably use PHP, so let’s assume “.” is the string concatenation operator. Mashing those together, a meaning emerges: “537.36” represents the last shipping version of WebKit before the Blink fork.

Back in 2013 (right after the fork announcement), Ojan Vafai wrote,

“In the short-term we have no plans of changing the UA string. The only thing that will change is the Chrome version number.”

Darin Fisher (former engineering lead for the Chrome Web Platform Team) said the same in the recorded Q&A video (linked from the Developer FAQ).

Assuming Wikipedia is as trustworthy as that “why did I give the Angel Numerology site my email, birthdate, relationship status, and name, and why am I getting so many ads on other sites about healing crystals and clearance specials on hydroxychloroquine??” site, Chrome 27.0.1453 was the last version of Chrome shipping WebKit, which was at 537.36, and Chrome 28.0.1500 was the first version of stable channel release shipping the Blink engine.

So that’s why those numbers are in the User-Agent string. For obvious compatibility reasons, you can’t just remove strings like AppleWebKit/537.36 and Safari/537.36. And that’s why we’ll keep them there, likely frozen forever.

Daniel StenbergQUIC is RFC 9000

The official publication date of the relevant QUIC specifications is: May 27, 2021.

I’ve done many presentations about HTTP and related technologies over the years. HTTP/2 had only just shipped when the QUIC working group had been formed in the IETF and I started to mention and describe what was being done there.

I’ve explained HTTP/3

I started writing the document HTTP/3 explained in February 2018, before the protocol was even called HTTP/3 (and yeah, the document itself was also called something else at first). The HTTP protocol for QUIC was just called “HTTP over QUIC” in the beginning and it took until November 2018 before it got the name HTTP/3. I did my first presentation using HTTP/3 in the title and on slides in early December 2018. My first recorded HTTP/3 presentation was in January 2019 (in Stockholm, Sweden).

In that talk I mentioned that the protocol would be “live” by the summer of 2019, which was an optimistic estimate based on the then current milestones set out by the IETF working group.

I think my optimism regarding the release schedule has kept up but as time progressed I’ve updated that estimation many times…

HTTP/3 – not yet

The first four RFC documents to be ratified and published concern only QUIC, the transport protocol, and not the HTTP/3 parts. The two HTTP/3 documents are also in the queue but are slightly delayed, as they await some prerequisite (“generic” HTTP update) documents to ship first; then the HTTP/3 ones can ship and refer to those other documents.


QUIC is a new transport protocol. It is done over UDP and can be described as being something of a TCP + TLS replacement, merged into a single protocol.

Okay, the title of this blog is misleading. QUIC is actually documented in four different RFCs:

RFC 8999 – Version-Independent Properties of QUIC

RFC 9000 – QUIC: A UDP-Based Multiplexed and Secure Transport

RFC 9001 – Using TLS to Secure QUIC

RFC 9002 – QUIC Loss Detection and Congestion Control

My role: I’m just a bystander

I initially wanted to keep up closely with the working group and follow what happened and participate on the meetings and interims etc. It turned out to be too difficult for me to do that so I had to lower my ambitions and I’ve mostly had a casual observing role. I just couldn’t muster the energy and spend the time necessary to do it properly.

I’ve participated in many of the meetings, I’ve been present in the QUIC implementers slack, I’ve followed lots of design and architectural discussions on the mailing list and in GitHub issues. I’ve worked on implementing support for QUIC and h3 in curl and thanks to that helped out iron issues and glitches in various implementations, but the now published RFCs have virtually no traces of me or my feedback in them.

Mozilla Open Policy & Advocacy BlogAdvancing system-level change with ad transparency in the EU DSA

At Mozilla we believe that greater transparency in the online advertising ecosystem can empower individuals, safeguard advertisers’ interests, and address systemic harms. It’s something we care passionately about, and it’s an ethos that runs through our own marketing work. Indeed, our recent decision to resume advertising on Instagram is underpinned by a commitment to transparency. Yet we also recognise that this issue is a structural one, and that regulation and public policy has an important role to play in improving the health of the ecosystem. In this post, we give an update on our efforts to advance system-level change, focusing on the ongoing discussions on this topic in the EU.

In December 2020 the European Commission unveiled the Digital Services Act, a draft law that seeks to usher in a new regulatory standard for content responsibility by platforms. A focus on systemic transparency is at the core of the DSA, including in the context of online advertising. The DSA’s approach to ad transparency mandates disclosure well above the voluntary standard that we see today (and mirrors the ethos of our new Instagram advertising strategy).

Under the DSA’s approach, so-called ‘Very Large Online Platforms’ must:

  • Disclose the content of all advertisements that run on their services;
  • Disclose the key targeting parameters that are associated with each advertisement; and,
  • Make this disclosure through publicly-available ad archives (our recommendations on how these ad archives should operate can be found here).

The DSA’s ad transparency approach will give researchers, regulators, and advertisers greater insight into the platform-mediated advertising ecosystem, providing a crucial means of understanding and detecting hidden harms. Harms fester when they happen in the dark, and so meaningful transparency in and of the ecosystem can help mitigate them.

Yet at the same time, transparency is rarely an end in itself. And we’re humble enough to know that we don’t have all the answers to the challenges holding back the internet from what it should be. Fortunately, another crucial benefit of advertising transparency frameworks is that they can provide us with the prerequisite insight and evidence-base that is essential for effective policy solutions, in the EU and beyond.

Although the EU DSA is trending in a positive direction, we’re not resting on our laurels. The draft law still has some way to go in the legislative mark-up phase. We’ll continue to advocate for thoughtful and effective policy approaches for advertising transparency, and prototype these approaches in our own marketing work.

The post Advancing system-level change with ad transparency in the EU DSA appeared first on Open Policy & Advocacy.

Mozilla Addons BlogManifest v3 update

Two years ago, Google proposed Manifest v3, a number of foundational changes to the Chrome extension framework. Many of these changes introduce new incompatibilities between Firefox and Chrome. As we previously wrote, we want to maintain a high degree of compatibility to support cross-browser development. We will introduce Manifest v3 support for Firefox extensions, but we will diverge from Chrome’s implementation where we think it matters and our values point to a different solution.

For the last few months, we have consulted with extension developers and Firefox’s engineering leadership about our approach to Manifest v3. The following is an overview of our plan to move forward, which is based on those conversations.

High level changes

  • In our initial response to the Manifest v3 proposal, we committed to implementing cross-origin protections. Some of this work is underway as part of Site Isolation, a larger reworking of Firefox’s architecture to isolate sites from each other. You can test how your extension performs in site isolation on the Nightly pre-release channel by going to about:preferences#experimental and enabling Fission (Site Isolation). This feature will be gradually enabled by default on Firefox Beta in the upcoming months and will start rolling out to a small percentage of release users in Q3 2021.

    Cross-origin requests in content scripts are already restricted by advances in the web platform (e.g. SameSite cookies, CORP) and by privacy features of Firefox (e.g. state partitioning). To support extensions, we are allowing extension scripts with sufficient host permissions to be exempted from these policies. Content scripts won’t benefit from these exemptions, and will eventually have the same kind of permissions as regular web pages (bug 1578405). We will continue to develop APIs to enable extensions to perform cross-origin requests that respect the user’s privacy choices (e.g. bug 1670278, bug 1698863).

  • Background pages will be replaced by background service workers (bug 1578286). This is a substantial change and will continue to be developed over the next few months. We will make a new announcement once we have something that can be tested in Nightly.
  • Promise-based APIs: Our APIs have been Promise-based since their inception using the browser.* namespace and we published a polyfill to offer consistent behavior across browsers that only support the chrome.* namespace. For Manifest v3, we will enable Promise-based APIs in the chrome.* namespace as well.
  • Host permission controls (bug 1711787): Chrome has shipped a feature that gives users control over which sites extensions are allowed to run on. We’re working on our own design that puts users in control, including early work by our Outreachy intern Richa Sharma on a project to give users the ability to decide if extensions will run in different container tabs (bug 1683056). Stay tuned for more information about that project!
  • Code execution: Dynamic code execution in privileged extension contexts will be restricted by default (bug 1687763). A content security policy for content scripts will be introduced (bug 1581608). The existing userScripts and contentScripts APIs will be reworked to support service worker-based extensions (bug 1687761).
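The promise wrapper mentioned above can be sketched in plain JavaScript. This is only an illustration in the spirit of the webextension-polyfill, not its actual implementation; chromeLike below is a stand-in for a callback-style chrome.* API, not the real extension API:

```javascript
// Stand-in for a callback-style chrome.* API (not the real one).
const chromeLike = {
  storage: {
    get(key, callback) {
      // Simulate an async, callback-based API call.
      setTimeout(() => callback({ [key]: 42 }), 0);
    },
  },
};

// Wrap a function whose last argument is a callback into one that
// returns a Promise instead.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve) => fn(...args, (result) => resolve(result)));
}

// Expose a promise-based, browser.*-style surface over the callback API.
const browserLike = {
  storage: { get: promisify(chromeLike.storage.get) },
};

browserLike.storage.get("count").then((result) => {
  console.log(result.count); // 42
});
```

The real polyfill additionally maps chrome.runtime.lastError onto promise rejections; this sketch skips error handling for brevity.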


Google has introduced declarativeNetRequest (DNR) to replace the blocking webRequest API. This impacts the capabilities of extensions that process network requests (including but not limited to content blockers) by limiting the number of rules an extension can use, as well as available filters and actions.

After discussing this with several content blocking extension developers, we have decided to implement DNR and continue maintaining support for blocking webRequest. Our initial goal for implementing DNR is to provide compatibility with Chrome so developers do not have to support multiple code bases if they do not want to. With both APIs supported in Firefox, developers can choose the approach that works best for them and their users.

We will support blocking webRequest until there’s a better solution which covers all use cases we consider important, since DNR as currently implemented by Chrome does not yet meet the needs of extension developers.

You can follow our progress on implementing DNR in bug 1687755.
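As a rough illustration of the difference between the two models, here is a simple “block this host” policy expressed as a DNR rule. The rule shape follows Chrome’s declarativeNetRequest documentation; the hostname and resource types are made-up examples:

```javascript
// A declarative rule: a static description the browser evaluates itself.
const dnrRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com",
    resourceTypes: ["script", "image"],
  },
};

// The blocking webRequest equivalent runs extension code per request,
// which is what gives it more flexibility than a static rule list.
// Registration is shown as a comment because browser.* APIs only exist
// inside an extension:
//
//   browser.webRequest.onBeforeRequest.addListener(
//     () => ({ cancel: true }),
//     { urls: ["*://ads.example.com/*"] },
//     ["blocking"]
//   );

console.log(dnrRule.action.type); // block
```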

Implementation timeline

Manifest v3 is a large platform project, and some parts of it will take longer than others to implement. As of this writing, we are hoping to complete enough work on this project to support developer testing in Q4 2021 and start accepting v3 submissions in early 2022. This schedule may be pushed back or delayed due to unforeseeable circumstances.

We’d like to note that it’s still very early to be talking about migrating extensions to Manifest v3. We have not yet set a deprecation date for Manifest v2 but expect it to be supported for at least one year after Manifest v3 becomes stable in the release channel.

Get involved

We understand that extension developers will need to adapt their extensions to be compatible with Manifest v3, and we would like to make this process as smooth as possible. Please let us know about any pain points you might have encountered when migrating Chrome extensions to Manifest v3, and any suggested mitigations, on our community forum or in relevant issues on Bugzilla.

We are also interested in hearing about specific use cases we should keep in mind so that your extension can remain compatible with both Firefox and Chrome under Manifest v3.

The post Manifest v3 update appeared first on Mozilla Add-ons Blog.

Dennis SchubertWebCompat PSA: Please don't use negative `text-indent`s for hidden labels.

During my work on Web Compatibility at Mozilla, I see many things that break in exciting ways. Sometimes, it’s obvious stuff like flexbox compat issues1, but sometimes, the breakages are a bit surprising. Today, the star of the show is a single CSS instruction:

text-indent: -9999px

When we talk about web compatibility issues, most people think about an elite subset of “well-known” breakages or massive layout issues. They rarely think about innocent-looking things like text-indent. And to be fair, most of the time, neither do we browser people.

This large negative text-indent appears to be a hack, frequently used to “move away” labels next to icons, probably to hide them from view but keep them in the markup for screen readers and similar user agents. Please don’t do that; there are better alternatives for screen readers. Even though a large negative indentation seems like a good solution, the unfortunate reality is that text-indent has some weird cross-browser quirks. Two examples that I stumbled across in the last month:

… and there are a lot more.

text-indent extends the size of an element, but not in a fixed direction: it depends on the direction of text flow. Here’s a quick example:


  #text-indent-demo p {
    text-indent: 100px;
  }

<section id="text-indent-demo">
  <p style="direction: ltr;">one</p>
  <p style="direction: rtl;">two</p>
  <p style="direction: ltr; writing-mode: vertical-lr;">three</p>
  <p style="direction: rtl; writing-mode: vertical-lr;">five</p>
</section>

As you can see, we have the same text-indent: 100px;, but in four different directions depending on the text direction and writing mode. This makes perfect sense if you think about it, but developers can get caught off-guard here, especially if working on a site that later gets translated. Or, well, if browsers misbehave.

On an Israeli site I recently looked at, a large negative text-indent caused the site to be extended to the right, which caused some viewport issues in Firefox for Android because we try to fit everything into your view. Another example is a report about a Romanian news site, where clicking on the social links left a dotted border all across the screen because they extended their buttons 9999px to the left without overflow: hidden‘ing it. In Chrome, this particular case is not noticeable because Chrome does not show focus borders the same way Firefox does, but the issue is still there. There are more examples of things going wrong in unexpected ways, but you get the gist.

While I am only talking about text-indent here, mainly because the text direction dependency adds an interesting twist, note that all methods of “moving something out of the screen to make it invisible” have similar issues. Even if you move things really far away, they still exist inside the document, and they can have unexpected side effects.
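If the goal is to keep a label available to screen readers without displaying it, a safer option than any off-screen trick is the widely used “visually hidden” utility pattern. The class name below is illustrative:

```css
/* Keeps the element in the accessibility tree but collapses it to a
   1px, clipped, non-overflowing box, so it never extends the layout
   in any text direction or writing mode. */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```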

So… please don’t. The web is broken enough already. :)

  1. Spoiler: there soon will be another blog post, about a flexbox issue! Wohoo! 

Mozilla Open Policy & Advocacy BlogMozilla reacts to the European Commission’s guidance on the revision of the EU Code of Practice on Disinformation

Today the European Commission published its guidance for the upcoming revision of the EU Code of Practice on Disinformation. Mozilla was a founding signatory of the Code of Practice in 2018, and we’re happy to see plans materialise for its evolution.

Reacting to the guidance, Raegan MacDonald, Director of Global Policy, Mozilla Corporation said:

“We welcome the Commission’s guidance for the next iteration of the Code of Practice. We’re happy that the revised Code will provide a greater role for organisations with technical and research expertise, and we look forward to harnessing that opportunity to support the various stakeholders.

This guidance outlines a clear vision for how the fight against disinformation can sit within a future-focused and thoughtful policy framework for platform accountability. While we still need to ensure the DSA provides the fundamentals, we see the revised Code as playing an important role in giving practical meaning to transparency and accountability.”

The post Mozilla reacts to the European Commission’s guidance on the revision of the EU Code of Practice on Disinformation appeared first on Open Policy & Advocacy.

Mozilla Performance BlogPerformance Sheriff Newsletter (April 2021)

In April there were 187 alerts generated, resulting in 34 regression bugs being filed on average 6 days after the regressing change landed.

Welcome to the April 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some analysis on our invalid regression alerts and bugs. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.3 days
  • 85% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 3.3 days
  • 85% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (Apr 2021)

Invalid Alerts

Sometimes we have alerts that turn out to be invalid. This usually means there were outliers in the results that triggered an alert, the results are multi-modal, or that the data is too noisy and the magnitude of the change is too small to confidently identify a culprit revision. Here’s an example of where outliers have caused invalid regression alerts:

Perfherder graph showing invalid alerts due to outliers

Perfherder graph showing invalid alerts due to outliers

These invalid alerts are usually identified by the performance sheriffs. They can be an indicator for the quality of our data and our change detection algorithm. If the percentage of invalid alerts increases we’ll be spending more time sheriffing these alerts, and we may want to investigate.

Alerts by Status (April 2021)

Regression Alerts by Status (April 2021)

In April we saw 5 invalid alerts, which equates to 3% of all regression alerts. Over the last 6 months we’ve seen 93 invalid alerts out of 1,371 total alerts, just under 7%.

Invalid Regression Bugs

Occasionally we detect a performance regression, identify the suspected culprit, and open a regression bug only for it to be closed as invalid. There can be a number of reasons for this, but the most likely is that the suspected culprit was incorrect. As our performance sheriffs are not expected to be familiar with all of our performance tests or what might impact them, we rely on the authors of suspected culprits to point out when the performance impact doesn’t make sense. When queried, our sheriffs will trigger additional tests around the regression range and either confirm the original culprit or close the bug as invalid and open a new one. Note that until recently, sheriffs may have used the same bug and simply modified the “regressed by” field. We have changed this to allow us to track the number of invalid bugs over time.

Regression Bugs by Status (April 2021)

Regression Bugs by Status (April 2021)

Note that bugs may have many alerts, and are often resolved some time before the alerts, which explains why there are more open alerts than bugs. Our sheriffs periodically run a query to identify alerts linked to bugs that have been resolved and use this to sanity check and update the alerts as necessary.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for April can be found here (for those with access).

Daniel Stenbergcurl 7.77.0 – 200 OK

Welcome to the 200th curl release. We call it 200 OK. It coincides with us counting more than 900 commit authors and surpassing 2,400 credited contributors in the project. This is also the first release ever in which we thank more than 80 persons in the RELEASE-NOTES for having helped out making it, and we’ve set two new records in the bug-bounty program: the largest single payout ever for a single bug (2,000 USD) and the largest total payout during a single release cycle: 3,800 USD.

This release cycle was only 42 days, two weeks shorter than normal, due to the previous 7.76.1 patch release.

Release Presentation


the 200th release
5 changes
42 days (total: 8,468)

133 bug-fixes (total: 6,966)
192 commits (total: 27,202)
0 new public libcurl function (total: 85)
2 new curl_easy_setopt() option (total: 290)

2 new curl command line option (total: 242)
82 contributors, 44 new (total: 2,410)
47 authors, 23 new (total: 901)
3 security fixes (total: 103)
3,800 USD paid in Bug Bounties (total: 9,000 USD)


We set two new records in the curl bug-bounty program this time as mentioned above. These are the issues that made them happen.

CVE-2021-22901: TLS session caching disaster

This is a use-after-free in the OpenSSL backend code that in the absolute worst case can lead to an RCE, a Remote Code Execution. The flaw was added fairly recently and is very hard to exploit, but you should upgrade or patch immediately.

The issue occurs when TLS session related info is sent from the TLS server when the transfer that previously used it is already done and gone.

The reporter was awarded 2,000 USD for this finding.

CVE-2021-22898: TELNET stack contents disclosure

When libcurl accepts custom TELNET options to send to the server, the input parser was flawed in a way that could be exploited to have libcurl instead send contents from the stack.

The reporter was awarded 1,000 USD for this finding.

CVE-2021-22897: schannel cipher selection surprise

In the Schannel backend code, the cipher selected for a transfer was stored in a static variable. One transfer’s cipher choice could therefore unknowingly apply to other connections, downgrading them to a lower security grade than intended.

The reporter was awarded 800 USD for this finding.


In this release we introduce 5 new changes that might be interesting to take a look at!

Make TLS flavor explicit

As explained separately, the curl configure script no longer defaults to selecting a particular TLS library. When you build curl with configure now, you need to select which library to use. No special treatment for any of them!

No more SSL

curl now has no more traces of support for SSLv2 or SSLv3. Those ancient and insecure SSL versions were already disabled by default by TLS libraries everywhere, but now it’s also impossible to activate them even in special builds. Stripped out from both the curl tool and the library (thus counted as two changes).

HSTS in the build

We introduced HSTS support a while ago, but now we finally remove the experimental label and ship it enabled in the build by default, for everyone to use more easily.

In-memory cert API

We introduce API options for libcurl that allow users to specify certificates in-memory instead of using files in the file system. See CURLOPT_CAINFO_BLOB.

Favorite bug-fixes

Again we manage to perform a large amount of fixes in this release, so I’m highlighting a few of the ones I find most interesting!

Version output

The first line of curl -V output got updated: libcurl now lists OpenLDAP and the version of it used in the build, and the curl tool can additionally list libmetalink and the version of it used in the build!

curl_mprintf: add description

We’ve provided the *printf() clone functions in the API since forever, but we’ve tried to discourage users from using them. Still, now we have a first shot at a man page that clearly describes how they work.

This is important as they’re not quite POSIX compliant, and users who, against our advice, decide to rely on them need to be able to know how they work!

CURLOPT_IPRESOLVE: preventing wrong IP version from being used

This option was made a little stricter than before. Previously, it would be lax about existing connections and prefer reuse instead of resolving again, but starting now this option makes sure to only use a connection with the requested IP version.

This allows applications to explicitly create two separate connections to the same host using different IP versions when desired, which previously libcurl wouldn’t easily let you do.

Ignore SIGPIPE in curl_easy_send

libcurl does its best to ignore SIGPIPE everywhere, and here we identified a spot where we had missed it… We also made sure to enable the ignoring logic when built to use wolfSSL.

Several HTTP/2-fixes

There are no fewer than 6 separate fixes mentioned for the HTTP/2 module in this release. Some address potential memory leaks, and some improve behavior. Possibly the most important one was the move of the transfer-related error code from the connection struct to the transfer struct, since the former was vulnerable to a race condition that could make it wrong. Another related fix is that libcurl no longer forcibly disconnects a connection over which a transfer gets HTTP_1_1_REQUIRED returned.

Partial CONNECT requests

When the CONNECT HTTP request sent to a proxy wasn’t all sent in a single send() call, curl would fail. It is baffling that this bug hasn’t been found or reported earlier but was detected this time when the reporter issued a CONNECT request that was larger than 16 kilobytes…

TLS: add USE_HTTP2 define

There were several remaining bad assumptions that HTTP/2 support in curl relies purely on nghttp2. This is no longer true, as HTTP/2 support can also be provided by hyper.

normalize numerical IPv4 hosts

The URL parser now knows about the special IPv4 numerical formats and parses and normalizes URLs with numerical IPv4 addresses.
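curl has its own URL parser, but the normalization matches what the WHATWG URL standard specifies for numerical IPv4 hosts, so the effect is easy to demonstrate with Node.js, which implements the same rules:

```javascript
// Hex, octal, and single 32-bit number forms of an IPv4 address all
// normalize to the familiar dotted-quad notation.
console.log(new URL("http://0x7f.0.0.1/").hostname); // 127.0.0.1
console.log(new URL("http://0177.0.0.1/").hostname); // 127.0.0.1
console.log(new URL("http://2130706433/").hostname); // 127.0.0.1
```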

Timeout, timed out libssh2 disconnects too

When libcurl (built with libssh2 support) stopped an SFTP transfer because a timeout was triggered, the following SFTP disconnect procedure was subsequently also stopped because of the same timeout and therefore wasn’t allowed to properly clean up everything, leading to a memory-leak!

IRC network switch

We moved the #curl IRC channel to the new network libera.chat. Come join us there!

Next release

On Jul 21, 2021 we plan to ship the next release. The version number for that is not yet decided but we have changes in the pipeline, making a minor version number bump very likely.


7.77.0 release image by Filip Dimitrovski.

This Week In RustThis Week in Rust 392

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No official blog posts, newsletters, or research papers this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is typed-index-collections, a crate that lets you make Vecs with custom-typed indices.

Thanks to Tim for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

No issues were proposed for CfP.

Updates from Rust Core

280 pull requests were merged in the last week

Rust Compiler Performance Triage

A somewhat quiet week. Some PRs had performance runs performed on them, but the changes were merged despite this. Also, we still have issues with certain benchmarks being noisy.

Triage done by @rylev. Revision range: 25a277..cdbe2

2 Regressions, 2 Improvements, 1 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs


Red Hat





Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Ok, you wanted it. Let's go full meta:

This time, there were two crates and one quote, which is not much, but ok. Keep it up, folks!

llogiq on reddit

Thanks to Patrice Peterson for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Niko MatsakisEdition: the song

You may have heard that the Rust 2021 Edition is coming. Along with my daughter Daphne, I have recorded a little song in honor of the occasion! The full lyrics are below – if you feel inspired, please make your own version!1 Enjoy!



Breaking changes where no code breaks.
Sounds impossible, no?
But in the Rust language, you might say that we like to do impossible things.
It isn’t easy.
You may ask, how do we manage such a thing?
That I can tell you in one word… Edition!

Edition, edition… edition!

Who day and night
Is searching for a change
Whatever they can do
So Rust’s easier for you
Who sometimes finds
They have to tweak the rules
And change a thing or two in Rust?

The lang team, the lang team… edition!
The lang team, the lang team… edition!

Who designs the traits that we use each day?
All the time, in every way?
Who updates the prelude so that we can call
The methods that we want no sweat

The libs team, the libs team… edition!
The libs team, the libs team… edition!

Three years ago I changed my code
to Rust twenty eighteen
Some dependencies did not
But they… kept working.

The users, the users… edition!
The users, the users… edition!

And who does all this work
To patch and tweak and fix
Migrating all our code
Each edition to the next

The tooling, the tooling… edition!
The tooling, the tooling… edition!

And here in Rust, we’ve always had our little slogans.
For instance, abstraction… without overhead.
Concurrency… without data races.
Stability… without stagnation.
Hack… without fear.
But we couldn’t do all of those things…
not without…


  1. OMG, that would be amazing. I’ll update the post with any such links I find. 

Mozilla Security BlogUpdates to Firefox’s Breach Alert Policy

Your personal data is yours – and it should remain yours! Unfortunately data breaches that reveal your personal information on the internet are omnipresent these days. In fact, fraudulent use of stolen credentials is the 2nd-most common threat action (after phishing) in Verizon’s 2020 Data Breach Investigations report and highlights the problematic situation of data breaches.

In 2018, we launched Firefox Monitor, which instantly notifies you if your data was involved in a breach and provides guidance on how to protect your personal information online. To expand this protection and help users across the world stay in control of their data and privacy, we then integrated Firefox Monitor alerts directly into Firefox: as part of your daily browsing experience, Firefox instantly notifies you when you visit a site that has been breached.

While sites continue to suffer password breaches, they also leak or lose other types of data. Even though we consider all personal data important, notifying you about every one of these leaks generates noise that’s difficult to act on. The better alternative is to alert you only when it’s critical that you act to protect your data. Hence, the primary change is that Firefox will only show alerts for websites where passwords were exposed in the breach.

In detail, we are announcing an update to our initial Firefox breach alert policy for when Firefox alerts for breached sites:

“Firefox shows a breach alert when a user visits a site where passwords were exposed and added to Have I Been Pwned within the last 2 months.”

To receive the most comprehensive breach alerts, we suggest signing up for Firefox Monitor to check whether your account was involved in a breach. We will keep you informed and will alert you with an email in case your personal data is affected by a data breach. Our continued commitment to protecting your privacy and security from online threats is critical for us and aligns with our mission: individuals’ security and privacy on the internet are fundamental and must not be treated as optional.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. If you aren’t a Firefox user, download Firefox to start benefiting from all the ways that Firefox works to protect your privacy.

The post Updates to Firefox’s Breach Alert Policy appeared first on Mozilla Security Blog.

Patrick Clokecelery-batches 0.5 released!

A new version (v0.5) of celery-batches is available which adds support for Celery 5.1 and fixes storing of results when using the RPC result backend.

As explored previously, the RPC result backend works by having a results queue per client, unfortunately celery-batches was attempting to store the results …

Daniel StenbergThe curl user survey 2021

For the eighth consecutive year we run the annual curl user survey again in 2021. The form just went up and I would love to have you spend 10 minutes of your busy life to tell us how you think curl works, what doesn’t work and what we should do next.

We have no tracking on the website and we have no metrics or usage measurements of the curl tool or the libcurl library. The only proper way we have left to learn how users and people in general think of us and how curl works, is to ask. So this is what we do, and we limit the asking to once per year.

You can also view this from your own “selfish” angle: this is a way for you to submit your input, your opinions and we will listen.

The survey will be up two weeks during which I hope to get as many people as possible to respond. If you have friends you know use curl or libcurl, please have them help us out too!

Take the survey

Yes really, please take the survey!

Bonus: see the extensive analysis of the 2020 user survey. There’s a lot of user feedback to learn from in it.

Firefox NightlyThese Weeks in Firefox: Issue 94


  • On macOS, scrollbars now squish during rubber-banding.
  • We’re working on supporting native fullscreen on macOS. Turn it on by enabling the pref full-screen-api.macos-native-full-screen. This will (among other things) create new fullscreen Spaces for videos. You could, for example, put a fullscreen YouTube video in native Split Screen next to another application.
  • We’re also working on enhanced dark mode support for macOS (Bug 1623686). Enable this by turning on the pref widget.macos.respect-system-appearance. Recent fixes include a dark library window (Bug 1698763), dark page info dialog (Bug 1698754), and a dark “Clear Recent History” window (Bug 1710269).
  • We’ve announced the deprecation of the canvas drawWindow WebExtension method, due to incompatibility with the Fission architecture.
  • about:welcome got major updates for Firefox 89. This includes new animations, icons, and accessibility improvements.

Friends of the Firefox team

For contributions from May 4 to May 18 2021, inclusive.

Resolved bugs (excluding employees)

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Landed some more styling tweaks to make sure about:addons better matches the new Proton UI conventions: Bug 1709464 and Bug 1709655
WebExtensions Framework
  • More Fission-related changes landed in Firefox 90: Bug 1708238
  • Work related to the “manifest_version 3”: Support for new web_accessible_resources manifest property formats – Bug 1696580, Bug 1697334
WebExtension APIs
  • Starting from Firefox 90, extensions will be allowed to use the Cache web API from extension pages to cache HTTP resources (caching an HTTP URL using the Cache web API will still enforce the extension’s host permissions, as it would if the extension were fetching from the same URLs using fetch or XHR) – Bug 1575625 and Bug 1710138
    • Thanks to André Natal for contributing this change as part of his work on the Project Bergamot extension


Lint and Docs

macOS Spotlight

  • Native context menus landed in Firefox 89! This closes the 21-year-old bug 34572.
    • We also fixed a number of follow-up issues, like supporting dark mode context menus on macOS 10.14+.

Messaging System

New Tab Page

  • Accessibility bug fixes for the “personalize” drawer allowing it to operate better with screen readers (Bug 1707022) Thanks to :eeejay for the patches! Also a fix for high contrast mode (Bug 1708248) thanks to :morgan and :thecount
  • Snippets has been disabled in Firefox 89 (Bug 1709984)


  • dthayer has a patch up for review to reduce the UI freezes caused by sending SessionStore data to the SessionFile worker.
  • mconley would like to experiment with the about:home startup cache using Nimbus, and is considering having the startup cache enabled in MR1.1.
  • mconley fixed Bug 1703494 – Remove sync layout flush for hamburger menu opening with proton
  • emalysz landed a patch to provide async support for promise workers, and removed OS.File from PageThumbs.jsm. Only 3 callers of OS.File left during startup!
  • Several BHR improvements:
    • Improved dashboard:
      • It’s possible to navigate to the data of previous days, and to link to a specific day.
      • For hangs with an associated bug, the whiteboard annotation is shown in the top right.
      • When using the filter box, the filtered word is highlighted in the stack on the right side.
    • Better data:
      • (chrome) JS function names are now included in BHR stacks.
      • These label frames are now visible: “ChromeUtils::Import”, “mozJSSubScriptLoader::DoLoadSubScriptWithOptions”, “nsThreadManager::SpinEventLoop”, “Category observer notification”, “Services_Resolve”, “Task”
    • Doug is working on showing annotations (eg. “UserInteracting”, “browser.tabs.opening”) in the dashboard

Performance Tools

Proton / MR1

Search and Navigation

  • Daisuke fixed a bug on Linux where opening new tabs by middle clicking the tabs bar could paste clipboard contents into the urlbar. Bug 1710289
  • Daisuke also fixed a bug where pasting a string ending with a combination of CR and LF could drop the search terms. Bug 1709971
  • Marco landed a patch improving the tooltips and accessible text when adding new OpenSearch engines – Bug 1706334
  • Mark fixed a bug in the separate search bar, where certain characters could be shown encoded in the results panel – Bug 1529220


  • Screenshots now factors in Firefox zoom values
  • We’ve accepted an Outreachy intern who will start next week!

Mozilla Attack & Defense: Browser fuzzing at Mozilla


Mozilla has been fuzzing Firefox and its underlying components for a while. It has proven to be one of the most efficient ways to identify quality and security issues. In general, we apply fuzzing on different levels: there is fuzzing the browser as a whole, but a significant amount of time is also spent on fuzzing isolated code (e.g. with libFuzzer) or whole components such as the JS engine using separate shells. In this blog post, we will talk specifically about browser fuzzing only, and go into detail on the pipeline we’ve developed. This single pipeline is the result of years of work that the fuzzing team has put into aggregating our browser fuzzing efforts to provide consistently actionable issues to developers and to ease integration of internal and external fuzzing tools as they become available.

Diagram showing interaction of systems used in Mozilla's browser fuzzing workflow

Build instrumentation

To be as effective as possible, we make use of different methods of detecting errors. These include sanitizers such as AddressSanitizer (with LeakSanitizer), ThreadSanitizer, and UndefinedBehaviorSanitizer, as well as debug builds that enable assertions and other runtime checks. We also make use of debuggers such as rr and Valgrind. Each of these tools provides a different lens to help uncover specific bug types, but many are incompatible with each other or require their own custom build to function or to provide optimal results. Beyond debugging and error detection, some tools, such as code coverage and libFuzzer, cannot work at all without build instrumentation. Each operating system and architecture combination requires a unique build and may only support a subset of these tools.

Lastly, each variant has multiple active branches, including Release, Beta, Nightly, and Extended Support Release (ESR). The Firefox CI Taskcluster instance builds each of these periodically.

Downloading builds

Taskcluster makes it easy to find and download the latest build to test. We discussed above the number of variants created by different instrumentation types, and we need to fuzz them in automation. Because of the large number of combinations of builds, artifacts, architectures, and operating systems, and the need to unpack each one, downloading is a non-trivial task.

To help reduce the complexity of build management, we developed a tool called fuzzfetch. Fuzzfetch makes it easy to specify the required build parameters and it will download and unpack the build. It also supports downloading specified revisions to make it useful with bisection tools.
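To give a sense of why build management needs tooling at all, here is a small illustrative Python sketch (the field names are hypothetical, not fuzzfetch's actual API) of how the variant space multiplies as each dimension is added:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class BuildSpec:
    """One build variant to fetch and fuzz (field names are illustrative)."""
    branch: str           # e.g. "central", "beta", "esr"
    platform: str         # e.g. "linux64", "win64"
    instrumentation: str  # e.g. "asan", "tsan", "debug"

def enumerate_variants(branches, platforms, instrumentations):
    """Every (branch, platform, instrumentation) combination we may need.

    The number of artifacts grows multiplicatively with each dimension,
    which is why downloading and unpacking them by hand doesn't scale.
    """
    return [BuildSpec(b, p, i)
            for b, p, i in product(branches, platforms, instrumentations)]

variants = enumerate_variants(
    ["central", "beta", "esr"],
    ["linux64", "win64"],
    ["asan", "tsan", "debug"],
)
print(len(variants))  # 18 distinct builds to track (3 * 2 * 3)
```

A tool like fuzzfetch takes a description along these lines and handles the finding, downloading, and unpacking for each combination.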

How we generate the test cases

As the goal of this blog post is to explain the whole pipeline, we won’t spend much time explaining fuzzers. If you are interested, please read “Fuzzing Firefox with WebIDL” and the in-tree documentation. We use a combination of publicly available and custom-built fuzzers to generate test cases.

How we execute, report, and scale

For fuzzers that target the browser, Grizzly manages and runs test cases and monitors for results. Creating an adapter allows us to easily run existing fuzzers in Grizzly.

Simplified Python code for a Grizzly adaptor using an external fuzzer.
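The code in that figure is not reproduced here, but a comparable sketch of the adapter pattern, using toy classes rather than Grizzly's actual API, might look like this: the harness asks the adapter for the next test case, and the adapter delegates to the external fuzzer.

```python
import random

class TestCase:
    """Toy container standing in for the harness's test case object."""
    def __init__(self):
        self.files = {}
    def add_file(self, name, data):
        self.files[name] = data

class RandomTagFuzzer:
    """Toy stand-in for a real external fuzzer that emits HTML snippets."""
    def generate(self):
        tag = random.choice(["div", "span", "iframe", "canvas"])
        return f"<{tag}>payload</{tag}>".encode()

class ExternalFuzzerAdapter:
    """Illustrative adapter: delegate input generation to an external
    fuzzer and attach the result under the entry point the browser
    will be pointed at."""
    NAME = "external-fuzzer"  # how the harness would refer to this adapter

    def __init__(self, fuzzer):
        self.fuzzer = fuzzer  # any object with a generate() -> bytes method

    def generate(self, testcase):
        testcase.add_file("test.html", self.fuzzer.generate())

adapter = ExternalFuzzerAdapter(RandomTagFuzzer())
case = TestCase()
adapter.generate(case)  # case.files["test.html"] now holds a fresh input
```

The value of this shape is that the harness never needs to know anything about the fuzzer beyond the adapter's interface, which is what makes plugging in existing fuzzers easy.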

To make full use of available resources on any given machine, we run multiple instances of Grizzly in parallel.

For each fuzzer, we create containers to encapsulate the configuration required to run it. These exist in the Orion monorepo. Each fuzzer has a configuration with deployment specifics and resource allocation depending on the priority of the fuzzer. Taskcluster continuously deploys these configurations to distribute work and manage fuzzing nodes.

Grizzly Target handles the detection of issues such as hangs, crashes, and other defects. Target is an interface between Grizzly and the browser. Detected issues are automatically packaged and reported to a FuzzManager server. The FuzzManager server provides automation and a UI for triaging the results.

Other, more targeted fuzzers use the JS shell, and libFuzzer-based targets use the fuzzing interface. Many third-party libraries are also fuzzed in OSS-Fuzz. These deserve mention but are outside of the scope of this post.

Managing results

Running multiple fuzzers against various targets at scale generates a large amount of data. These crashes are not suitable for direct entry into a bug tracking system like Bugzilla. We have tools to manage this data and get it ready to report.

The FuzzManager client library filters out crash variations and duplicate results before they leave the fuzzing node. Unique results are reported to a FuzzManager server. The FuzzManager web interface allows for the creation of signatures that help group reports together in buckets to aid the client in detecting duplicate results.
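As a rough illustration of the grouping idea (real FuzzManager signatures are richer and can match on multiple fields with wildcards), bucketing by a stack prefix can be sketched like this:

```python
from collections import defaultdict

def signature(stack, depth=3):
    """Illustrative crash signature: the top few frames of the crashing
    thread's stack. This only demonstrates the grouping concept, not
    FuzzManager's actual signature format."""
    return tuple(stack[:depth])

def bucket(reports):
    """Group crash reports whose signatures match into buckets."""
    buckets = defaultdict(list)
    for report in reports:
        buckets[signature(report["stack"])].append(report)
    return buckets

# Hypothetical reports: 1 and 2 share a top-of-stack, 3 does not.
reports = [
    {"id": 1, "stack": ["mozilla::dom::Notify", "EventLoop", "main"]},
    {"id": 2, "stack": ["mozilla::dom::Notify", "EventLoop", "main"]},
    {"id": 3, "stack": ["js::GC", "Tick", "main"]},
]
grouped = bucket(reports)  # two buckets: {1, 2} and {3}
```

Grouping this way is what lets the client suppress duplicates at the fuzzing node, so only unique results consume reporting and triage time.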

Fuzzers commonly generate test cases that are hundreds or even thousands of lines long. FuzzManager buckets are automatically scanned to queue reduction tasks in Taskcluster. These reduction tasks use Grizzly Reduce and Lithium to apply different reduction strategies, often removing the majority of the unnecessary data. Each bucket is continually processed until a successful reduction is complete. Then an engineer can do a final inspection of the minimized test case and attach it to a bug report. The final result is often used as a crash test in the Firefox test suite.
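The core of a Lithium-style reduction strategy can be sketched in a few lines: repeatedly try to delete chunks of halving size, keeping any deletion after which the test case is still "interesting" (here, still crashes). This is a simplified sketch of one strategy, not Lithium's actual implementation.

```python
def reduce_testcase(lines, still_crashes):
    """Chunk-removal reduction: delete chunks of halving size, keeping
    a deletion whenever the interestingness predicate still holds."""
    chunk = max(1, len(lines) // 2)
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if still_crashes(candidate):
                lines = candidate      # keep the smaller test case
            else:
                i += chunk             # this chunk is needed; move on
        chunk //= 2
    return lines

# Toy example: only the line containing "crash()" is actually needed.
crashing = ["<html>", "setup()", "crash()", "cleanup()", "</html>"]
minimal = reduce_testcase(crashing, lambda ls: "crash()" in ls)
# minimal is ["crash()"]
```

In the real pipeline the predicate is expensive (it launches the browser on the candidate), which is why reduction runs as Taskcluster tasks rather than on the reporting node.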

Animation showing an example testcase reduction using Grizzly

Code coverage of the fuzzer is also measured periodically. FuzzManager is used again to collect code coverage data and generate coverage reports.

Creating optimal bug reports

Our goal is to create actionable bug reports to get issues fixed as soon as possible while minimizing overhead for developers.

We do this by providing:

  • crash information such as logs and a stack trace
  • build and environment information
  • reduced test case
  • Pernosco session
  • regression range (bisections via Bugmon)
  • verification via Bugmon

Grizzly Replay is a tool that forms the basic execution engine for Bugmon and Grizzly Reduce, and makes it easy to collect rr traces to submit to Pernosco. It makes re-running browser test cases easy both in automation and for manual use. It simplifies working with stubborn test cases and test cases that trigger multiple results.

As mentioned, we have also been making use of Pernosco. Pernosco is a tool that provides a web interface for rr traces and makes them available to developers without the need for direct access to the execution environment. It is an amazing tool developed by a company of the same name which significantly helps to debug massively parallel applications. It is also very helpful when test cases are too unreliable to reduce or attach to bug reports. Creating an rr trace and uploading it can make stalled bug reports actionable.

The combination of Grizzly and Pernosco have had the added benefit of making infrequent, hard to reproduce issues, actionable. A test case for a very inconsistent issue can be run hundreds or thousands of times until the desired crash occurs under rr. The trace is automatically collected and ready to be submitted to Pernosco and fixed by a developer, instead of being passed over because it was not actionable.

How we interact with developers

To ensure new features get a proper assessment, the fuzzing team can be reached at fuzzing@mozilla.com or on Matrix. This is also a great way to get in touch for any reason. We are happy to help you with any fuzzing-related questions or ideas. We will also reach out when we receive information about new initiatives and features that we think will require attention. Once fuzzing of a component begins, we communicate mainly via Bugzilla. As mentioned, we strive to open actionable issues or enhance existing issues logged by others.

Bugmon is used to automatically bisect regression ranges. This notifies the appropriate people as quickly as possible and verifies bugs once they are marked as FIXED. Closing a bug automatically removes it from FuzzManager, so if a similar bug finds its way into the code base, it can be identified again.
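The bisection at the heart of this can be sketched as a binary search over an ordered revision list, assuming the first revision is known-good and the last known-bad. Bugmon does this against real builds; here the predicate is a stand-in for "run the test case on this build".

```python
def bisect_regression(revisions, crashes_at):
    """Return (last_good, first_bad) from an ordered revision list,
    assuming revisions[0] is good and revisions[-1] is bad.
    crashes_at(rev) stands in for reproducing the crash on that build."""
    lo, hi = 0, len(revisions) - 1   # lo is known-good, hi is known-bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if crashes_at(revisions[mid]):
            hi = mid                  # crash reproduces: bug is at or before mid
        else:
            lo = mid                  # no crash: bug landed after mid
    return revisions[lo], revisions[hi]

# Toy example: the regression "landed" at revision 73.
revisions = list(range(100))
last_good, first_bad = bisect_regression(revisions, lambda rev: rev >= 73)
# (last_good, first_bad) == (72, 73)
```

Because each probe only needs one build, narrowing a range of thousands of revisions takes on the order of a dozen reproductions, which is what makes automated bisection fast enough to run on every new bug.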

Some issues found during fuzzing will prevent us from effectively fuzzing a feature or build variant. These are known as fuzz-blockers, and they come in a few different forms. These issues may seem benign from a product perspective, but they can block fuzzers from targeting important code paths or even prevent fuzzing a target altogether. Prioritizing these issues appropriately and getting them fixed quickly is very helpful and much appreciated by the fuzzing team.

PrefPicker manages the set of Firefox preferences used for fuzzing. When adding features behind a pref, consider adding it to the PrefPicker fuzzing template to have it enabled during fuzzing. Periodic audits of the PrefPicker fuzzing template can help ensure areas are not missed and resources are used as effectively as possible.

Measuring success

As in other fields, measurement is a key part of evaluating success. We leverage the meta bug feature of Bugzilla to help us keep track of the issues identified by fuzzers. We strive to have a meta bug per fuzzer and for each new component fuzzed.

For example, the meta bug for Domino lists all the issues (over 1100!) identified by this tool. Using this Bugzilla data, we are able to show the impact of our various fuzzers over the years.

Bar graph showing number of bugs reported by Domino over time

Number of bugs reported by Domino over time

These dashboards help evaluate the return on investment of a fuzzer.


There are many components in the fuzzing pipeline. These components are constantly evolving to keep up with changes in debugging tools, execution environments, and browser internals. Developers are always adding, removing, and updating browser features. Bugs are being detected, triaged, and logged. Keeping everything running continuously and targeting as much code as possible requires constant and ongoing efforts.

If you work on Firefox, you can help by keeping us informed of new features and initiatives that may affect or require fuzzing, by prioritizing fuzz-blockers, and by curating fuzzing preferences in PrefPicker. If fuzzing interests you, please take part in the bug bounty program. Our tools are available publicly, and we encourage bug hunting.

Daniel Stenberg: “I could rewrite curl”

Collected quotes and snippets from people publicly sneering at or belittling what curl is, explaining how easy it would be to make a replacement in no time with no effort, or generally not being very helpful.

These are statements made seriously. For all I know, they were not ironic. If you find others to add here, please let me know!

Listen. I’ve been young too once and I’ve probably thought similar things myself in the past. But there’s a huge difference between thinking and saying. Quotes included here are mentioned for our collective amusement.

I can do it in less than a 100 lines


I can do it in a three day weekend

(The yellow marking in the picture was added by me.)


No reason to be written in C

Maybe not exactly in the same category as the two ones above, but still a significant “I know this” vibe:


We sold a curl exploit

Some people deliberately decide to play for the other team.


This isn’t a big deal

It’s easy to say things on Twitter…

This tweet was removed by its author after I and others replied to it so I cannot link it. The name has been blurred on purpose because of this.


Hacker news, Reddit

Hacks.Mozilla.Org: Improving Firefox stability on Linux

Roughly a year ago at Mozilla we started an effort to improve Firefox stability on Linux. This effort quickly became an example of good synergies between FOSS projects.

Every time Firefox crashes, the user can send us a crash report which we use to analyze the problem and hopefully fix it:

A screenshot of a tab that just crashed

This report contains, among other things, a minidump: a small snapshot of the process memory at the time it crashed. This includes the contents of the processor’s registers as well as data from the stacks of every thread.

Here’s what this usually looks like:

If you’re familiar with core dumps, then minidumps are essentially a smaller version of them. The minidump format was originally designed at Microsoft and Windows has a native way of writing out minidumps. On Linux, we use Breakpad for this task. Breakpad originated at Google for their software (Picasa, Google Earth, etc.) but we have forked it, heavily modified it for our purposes, and recently partly rewritten it in Rust.

Once the user submits a crash report, we have a server-side component – called Socorro – that processes it and extracts a stack trace from the minidump. The reports are then clustered based on the top method name of the stack trace of the crashing thread. When a new crash is spotted we assign it a bug and start working on it. See the picture below for an example of how crashes are grouped:

The snapshot of a stack trace as displayed on crash-stats.mozilla.com

To extract a meaningful stack trace from a minidump two more things are needed: unwinding information and symbols. The unwinding information is a set of instructions that describe how to find the various frames in the stack given an instruction pointer. Symbol information contains the names of the functions corresponding to a given range of addresses as well as the source files they come from and the line numbers a given instruction corresponds to.
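Resolving a single address against symbol information is essentially a range lookup: find the function record whose address range contains the instruction pointer. A minimal sketch, with made-up addresses and function names:

```python
import bisect

def build_symbol_table(func_records):
    """func_records: (start_address, size, name) tuples, in the spirit of
    Breakpad-style FUNC records. Returns a structure for fast lookup."""
    records = sorted(func_records)
    starts = [r[0] for r in records]
    return starts, records

def symbolicate(addr, table):
    """Map an instruction address to a function name via range lookup,
    the core step in turning raw frame addresses into readable names."""
    starts, records = table
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0:
        start, size, name = records[i]
        if start <= addr < start + size:
            return name
    return hex(addr)  # no symbol available: fall back to the raw address

# Hypothetical symbol table with two functions.
table = build_symbol_table([
    (0x1000, 0x40, "mozilla::net::HttpChannel::Open"),
    (0x1040, 0x20, "mozilla::dom::Document::Load"),
])
```

The fallback branch is exactly what produces the addresses-only traces shown further below: when no symbol file covers a module, every frame in it degrades to a raw address.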

In regular Firefox releases, we extract this information from the build files and store it into symbol files in Breakpad standard format. Equipped with this information Socorro can produce a human-readable stack trace. The whole flow can be seen below:

A graphicsl representation of our crash reporting flow, from the capture on the client to processing on the server

Here’s an example of a proper stack trace:

A fully symbolicated stack trace

If Socorro doesn’t have access to the appropriate symbol files for a crash the resulting trace contains only addresses and isn’t very helpful:

A stack trace showing raw addresses instead of symbols

When it comes to Linux, things work differently than on other platforms: most of our users do not install our builds, they install the Firefox version that comes packaged for their favourite distribution.

This posed a significant problem when dealing with stability issues on Linux: for the majority of our crash reports, we couldn’t produce high-quality stack traces because we didn’t have the required symbol information. The Firefox builds that submitted the reports weren’t done by us. To make matters worse, Firefox depends on a number of third-party packages (such as GTK, Mesa, FFmpeg, SQLite, etc.). We wouldn’t get good stack traces if a crash occurred in one of these packages instead of Firefox itself because we didn’t have symbols for them either.

To address this issue, we started scraping debug information for Firefox builds and their dependencies from the package repositories of multiple distributions: Arch, Debian, Fedora, OpenSUSE and Ubuntu. Since every distribution does things a little bit differently, we had to write distro-specific scripts that would go through the list of packages in their repositories and find the associated debug information (the scripts are available here). This data is then fed into a tool that extracts symbol files from the debug information and uploads it to our symbol server.

With that information now available, we were able to analyze >99% of the crash reports we received from Linux users, up from less than 20%. Here’s an example of a high-quality trace extracted from a distro-packaged version of Firefox. We haven’t built any of the libraries involved, yet the function names are present, and so are the file and line numbers of the affected code:

A fully symbolicated stack trace including external code

The importance of this cannot be overstated: Linux users tend to be more tech-savvy and are more likely to help us solve issues, so all those reports were a treasure trove for improving stability even for other operating systems (Windows, Mac, Android, etc). In particular, we often identified Fission bugs on Linux first.

The first effect of this newfound ability to inspect Linux crashes is that it greatly sped up our response time to Linux-specific issues, and often allowed us to identify problems in the Nightly and Beta versions of Firefox before they reached users on the release channel.

We could also quickly identify issues in bleeding-edge components such as WebRender, WebGPU, Wayland and VA-API video acceleration; oftentimes providing a fix within days of the change that triggered the issue.

We didn’t stop there: we could now identify distro-specific issues and regressions. This allowed us to inform package maintainers of the problems and have them resolved quickly. For example, we were able to identify a Debian-specific issue only two weeks after it was introduced and fixed it right away. The crash was caused by a modification that Debian made to one of Firefox’s dependencies, which could cause a crash on startup; it’s filed under bug 1679430 if you’re curious about the details.

Another good example comes from Fedora: they had been using their own crash reporting system (ABRT) to catch Firefox crashes in their Firefox builds, but given the improvements on our side they started sending Firefox crashes our way instead.

We could also finally identify regressions and issues in our dependencies. This allowed us to communicate the issues upstream and sometimes even contribute fixes, benefiting both our users and theirs.

For example, at some point, Debian updated the fontconfig package by backporting an upstream fix for a memory leak. Unfortunately, the fix contained a bug that would crash Firefox and possibly other software too. We spotted the new crash only six days after the change landed in Debian sources and only a couple of weeks afterwards the issue had been fixed both upstream and in Debian. We sent reports and fixes to other projects too including Mesa, GTK, glib, PCSC, SQLite and more.

Nightly versions of Firefox also include a tool to spot security-sensitive issues: the probabilistic heap checker. This tool randomly pads a handful of memory allocations in order to detect buffer overflows and use-after-free accesses. When it detects one of these, it sends us a very detailed crash report. Given Firefox’s large user-base on Linux, this allowed us to spot some elusive issues in upstream projects and report them.

This also exposed some limitations in the tools we use for crash analysis, so we decided to rewrite them in Rust largely relying on the excellent crates developed by Sentry. The resulting tools were dramatically faster than our old ones, used a fraction of the memory and produced more accurate results. Code flowed both ways: we contributed improvements to their crates (and their dependencies) while they expanded their APIs to address our new use-cases and fixed the issues we discovered.

Another pleasant side-effect of this work is that Thunderbird now also benefits from the improvement we made for Firefox.

This goes to show how collaboration between FOSS projects not only benefits their users but ultimately improves the whole ecosystem and the broader community that relies on it.

Special thanks to Calixte Denizet, Nicholas Nethercote, Jan Auer and all the others that contributed to this effort!

The post Improving Firefox stability on Linux appeared first on Mozilla Hacks - the Web developer blog.

This Week In Rust: This Week in Rust 391

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters or research papers this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is arraygen, a derive proc macro to generate arrays from structs.

Thanks to José Manuel Barroso Galindo for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

No issues were proposed for CfP.

Updates from Rust Core

333 pull requests were merged in the last week

Rust Compiler Performance Triage

A lot of noise in the benchmark results this week. We are discussing (zulip archive, live zulip) how best to update the benchmark set to eliminate the noisy cases that are bouncing around. Beyond that, some large improvements to a few individual benchmarks.

The memory usage (max-rss) seemed largely flat, except for an upward trend on tuple-stress that indicates 4% more memory usage than a week ago.

Triage done by @pnkfelix. Revision range: 382f..25a2

5 Regressions, 7 Improvements, 2 Mixed; 1 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Protocol Labs

Amazon Web Services

Techno Creatives






Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I often think about Rust as a process and community for developing a programming language, rather than as a programming language itself.

throwaway894345 on hacker news

Thanks to Krishna Sundarram for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Mozilla Open Policy & Advocacy Blog: Mozilla publishes position paper on EU Digital Services Act

In December 2020 the European Commission published the draft EU Digital Services Act. The law seeks to establish a new paradigm for tech sector regulation, and we see it as a crucial opportunity to address many of the challenges holding back the internet from what it should be. As EU lawmakers start to consider amendments and improvements to the draft law, today we’re publishing our substantive perspectives and recommendations to guide those deliberations.

We are encouraged that the draft DSA includes many of the policy recommendations that Mozilla and our allies had advocated for in recent years. For that we commend the European Commission. However, many elements of the DSA are novel and complex, and so there is a need for elaboration and clarification in the legislative mark-up phase. We believe that with targeted amendments the DSA has the potential to serve as an effective, balanced, and future-proof legal framework.

Given the sheer breadth of the DSA, we’re choosing to focus on the elements where we believe we have a unique contribution to make, and where we believe the DSA can constitute a real paradigm shift. That is not to say we don’t have thoughts on the other elements of the proposal, and we look forward to supporting our allies in industry and civil society who are focusing their efforts elsewhere.

Broadly speaking, our position can be summarised as follows:

  • Asymmetric obligations for the largest platforms 
      • We welcome the DSA’s approach of making very large platforms subject to enhanced regulation compared to the rest of the industry, but we suggest tweaks to the scope and definitions.
      • The definition of these so-called Very Large Online Platforms (VLOPs) shouldn’t be based solely on quantitative criteria; qualitative criteria (e.g. taking into account risk) should be considered as well, in anticipation of extraordinary edge cases where a service meets the quantitative VLOP threshold but is in reality very low-risk in nature.
  • Systemic transparency
      • We welcome the DSA’s inclusion of public-facing ad archive APIs and the provisions on access to data for public interest researchers.
      • We call for the advertising transparency elements to take into account novel forms of paid influence, and for the definition of ‘public interest researchers’ to be broader than just university faculty.
  • A risk-based approach to content responsibility
      • We welcome this approach, but suggest more clarification on the types of risks to be assessed and how those assessments are undertaken.
  • Auditing and oversight
      • We welcome the DSA’s third-party auditing requirement, but we provide recommendations on how it can be more than just a tick-box exercise (e.g. through standardisation; clarity on what is to be audited; etc).
      • We reiterate the call for oversight bodies to be well-resourced and staffed with the appropriate technical expertise.

This position paper is the latest milestone in our long-standing engagement on issues of content regulation and platform responsibility in the EU. In the coming months we’ll be ramping up our efforts further, and look forward to supporting EU lawmakers in turning these recommendations into reality.

Ultimately, we firmly believe that if developed properly, the DSA can usher in a new global paradigm for tech regulation. At a time when lawmakers from Delhi to Washington DC are grappling with questions of platform accountability and content responsibility, the DSA is indeed a once-in-a-generation opportunity.

The post Mozilla publishes position paper on EU Digital Services Act appeared first on Open Policy & Advocacy.

Mozilla Security Blog: Introducing Site Isolation in Firefox

When two major vulnerabilities known as Meltdown and Spectre were disclosed by security researchers in early 2018, Firefox promptly added security mitigations to keep you safe. Going forward, however, it was clear that with the evolving techniques of malicious actors on the web, we needed to redesign Firefox to mitigate future variations of such vulnerabilities and to keep you safe when browsing the web!

We are excited to announce that Firefox’s new Site Isolation architecture is coming together. This fundamental redesign of Firefox’s security architecture extends current security mechanisms by creating operating system process-level boundaries for all sites loaded in Firefox for Desktop. Isolating each site into a separate operating system process makes it even harder for malicious sites to read another site’s secret or private data.

We are currently finalizing Firefox’s Site Isolation feature by allowing a subset of users to benefit from this new security architecture on our Nightly and Beta channels, and we plan to roll it out to more of our users later this year. If you are as excited about it as we are and would like to try it out, follow these steps:

To enable Site Isolation on Firefox Nightly:

  1. Navigate to about:preferences#experimental
  2. Check the “Fission (Site Isolation)” checkbox to enable.
  3. Restart Firefox.

To enable Site Isolation on Firefox Beta or Release:

  1. Navigate to about:config.
  2. Set `fission.autostart` pref to `true`.
  3. Restart Firefox.

With this monumental change of secure browser design, users of Firefox Desktop benefit from protections against future variants of Spectre, resulting in an even safer browsing experience. If you aren’t a Firefox user yet, you can download the latest version here and if you want to know all the technical details about Firefox’s new security architecture, you can read it here.

The post Introducing Site Isolation in Firefox appeared first on Mozilla Security Blog.

Hacks.Mozilla.Org: Introducing Firefox’s new Site Isolation Security Architecture

Like any web browser, Firefox loads code from untrusted and potentially hostile websites and runs it on your computer. To protect you against new types of attacks from malicious sites and to meet the security principles of Mozilla, we set out to redesign Firefox on desktop.

Site Isolation builds upon a new security architecture that extends current protection mechanisms by separating (web) content and loading each site in its own operating system process.

This new security architecture allows Firefox to completely separate code originating from different sites and, in turn, defend against malicious sites trying to access sensitive information from other sites you are visiting.

In more detail, whenever you open a website and enter a password, a credit card number, or any other sensitive information, you want to be sure that this information is kept secure and inaccessible to malicious actors.

As a first line of defence Firefox enforces a variety of security mechanisms, e.g. the same-origin policy which prevents adversaries from accessing such information when loaded into the same application.

Unfortunately, the web evolves and so do the techniques of malicious actors. To fully protect your private information, a modern web browser not only needs to provide protections on the application layer but also needs to entirely separate the memory space of different sites – the new Site Isolation security architecture in Firefox provides those security guarantees.

Why separating memory space is crucial

In early 2018, security researchers disclosed two major vulnerabilities, known as Meltdown and Spectre. The researchers exploited fundamental assumptions about modern hardware execution, and were able to demonstrate how untrusted code can access and read memory anywhere within a process’ address space, even in a language as high level as JavaScript (which powers almost every single website).

While band-aid countermeasures deployed by OS, CPU and major web browser vendors quickly neutralized the attacks, they came with a performance cost and were designed to be temporary. Back when the attacks were announced publicly, Firefox teams promptly reduced the precision of high-precision timers and disabled APIs that allowed such timers to be implemented to keep our users safe.
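The timer-precision countermeasure can be illustrated with a tiny sketch (a conceptual model, not Firefox's actual implementation): round every timestamp down to a coarser grid, so the sub-millisecond timing differences that Spectre-style attacks need to measure disappear into the rounding.

```python
def clamp(timestamp_ms, resolution_ms=1.0):
    """Round a high-resolution timestamp down to a coarser grid.

    With only millisecond granularity, two events whose true timings
    differ by microseconds become indistinguishable to the attacker.
    """
    return (timestamp_ms // resolution_ms) * resolution_ms

# Two nearby readings collapse to the same clamped value:
a = clamp(12.3456)  # 12.0
b = clamp(12.9)     # 12.0
```

The trade-off is that legitimate uses of high-resolution timing (profiling, animation scheduling) lose precision too, which is one reason such mitigations were always understood to be temporary band-aids rather than a durable fix.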

Going forward, it was clear that we needed to fundamentally re-architect the security design of Firefox to mitigate future variations of such vulnerabilities.

Let’s take a closer look at the following example which demonstrates how an attacker can access your private data when executing a Spectre-like attack.

Two hand-drawn diagrams, with the first labeled “Without Site Isolation, we might load both of these sites in the same process :( ”. Two browser windows, with the partially visible sites “attacker.com” and “my-bank”, are loaded in the same process - process 16. On top of the banking window, there is a cartoon face that looks happy, personifying the browser. The attacker site window contains a face that is looking at the banking window, with a mischievous smile. In the second diagram, labeled “Attacker.com executes a sophisticated attack”, we see the same two browser windows loaded in process 16 and a one-column table labelled “memory where my-bank’s data is stored in process 16” underneath the banking window. It has two entries: “credit card info” and “login password”. A hand extending from the malicious site reaches toward the table (aka the memory of the second window), signifying that the malicious site is able to access sensitive data belonging to the banking window because it is in the same process. The personified browser character is looking towards the malicious site, and exhibits feelings of concern and worry, with exclamation marks floating around the face.

Without Site Isolation, Firefox might load a malicious site in the same process as a site that is handling sensitive information. In the worst case scenario, a malicious site might execute a Spectre-like attack to gain access to memory of the other site.

Suppose you have two websites open – www.my-bank.com and www.attacker.com. As illustrated in the diagram above, with current web browser architecture it’s possible that web content from both sites ends up being loaded into the same operating system process. To make things worse, using a Spectre-like attack would allow attacker.com to query and access data from the my-bank.com website.

Despite existing security mitigations, the only way to provide memory protections necessary to defend against Spectre-like attacks is to rely on the security guarantees that come with isolating content from different sites using the operating system’s process separation.

Background on Current Browser Architecture

Upon being launched, the Firefox web browser internally spawns one privileged process (also known as the parent process) which then launches and coordinates activities of multiple (web) content processes – the parent process is the most privileged one, as it is allowed to perform any action that the end-user can.

This multi-process architecture allows Firefox to separate more complicated or less trustworthy code into processes, most of which have reduced access to operating system resources or user files. As a consequence, less privileged code needs to ask more privileged code to perform operations that it cannot perform itself.

For example, a content process will have to ask the parent process to save a download because it does not have the permissions to write to disk. Put differently, if an attacker manages to compromise the content process it must additionally (ab)use one of the APIs to convince the parent process to act on its behalf.
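The privilege-separation pattern described above can be sketched in a few lines. This is a conceptual illustration only, not Firefox's real IPC layer; the function and action names here are hypothetical:

```javascript
// Conceptual sketch of the broker pattern (not Firefox's actual IPC API):
// the sandboxed side can only *request* privileged actions, and the
// privileged side validates every request before acting.
const ALLOWED = new Set(["save-download"]);

// Runs with full privileges (the "parent process").
function handleRequest(request) {
  if (!ALLOWED.has(request.action)) {
    // A compromised child can ask for anything; the broker refuses
    // everything outside the small, vetted API surface.
    throw new Error("denied: " + request.action);
  }
  // A real parent would also validate the path, show a file dialog, etc.
  return { ok: true, saved: request.filename };
}

// Runs sandboxed (the "content process"): no direct disk access.
function requestSaveDownload(filename) {
  return handleRequest({ action: "save-download", filename });
}

console.log(requestSaveDownload("report.pdf").ok); // true
```

The key property: even if the content side is fully compromised, the damage is bounded by what the parent is willing to do on its behalf.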

More concretely (as of April 2021), Firefox’s parent process launches a fixed number of processes: eight web content processes, up to two additional semi-privileged web content processes, and four utility processes for web extensions, GPU operations, networking, and media decoding.

While separating content into currently eight web content processes already provides a solid foundation, it does not meet the security standards of Mozilla because it allows two completely different sites to end up in the same operating system process and, therefore, share process memory. To counter this, we are targeting a Site Isolation architecture that loads every single site into its own process.

A hand drawn diagram titled “Loading Sites with Current Browser Architecture”. On the left hand-side, from top to bottom, there are four browser windows with different sites loaded. The first window, www.my-bank.com, is loaded in process 3. The second window is loaded in process 4. The third window is loaded in process 5. The last window with a url - “www.attacker.com” - is loaded in process 3, same as the first window. On the right hand-side of the drawing, there is a table titled “List of Content Processes”. The table contains two columns: “site” and “pid”, which stands for process id. In the table, the first window, my-bank.com, and the last attacker.com window have the same PID.

Without Site Isolation, Firefox does not separate web content into different processes and it’s possible for different sites to be loaded in the same process.

Imagine you open some websites in different tabs: www.my-bank.com, www.getpocket.com, www.mozilla.org and www.attacker.com. As illustrated in the diagram above, it’s entirely possible that my-bank.com and attacker.com end up being loaded in the same operating system process, which would result in them sharing process memory. As we saw in the previous example, with this separation model, an attacker could perform a Spectre-like attack to access my-bank.com’s data.

A hand drawn diagram titled “Loading Subframes With Current Browser Architecture”. There is one browser window drawn. The window, www.attacker.com, embeds a page from a different site, www.my-bank.com. The top level page and the subframe are loaded in the same process - process 3.

Without Site Isolation, the browser will load embedded pages, such as a bank page or an ad, in the same process as the top level document.

While it is straightforward to picture sites being loaded into different tabs, sites can also be embedded into other sites through so-called subframes – if you have ever visited a website that had ads on it, those ads were probably subframes. If you ever had a personal website and embedded a YouTube video with your favourite song, that video was embedded in a subframe.

In a more dangerous scenario, a malicious site could embed a legitimate site within a subframe and try to trick you into entering sensitive information. With the current architecture, if a page contains any subframes from a different site, they will generally be in the same process as the outer tab.

This results in both the page and all of its subframes sharing process memory, even if the subframes originate from different sites. In the case of a successful Spectre-like attack, a top-level site might access sensitive information it should not have access to from a subframe it embeds (and vice-versa) – the new Site Isolation security architecture within Firefox will effectively make it even harder for malicious sites to execute such attacks.

How Site Isolation Works in Firefox

When Site Isolation is enabled in Firefox for desktop, each unique site is loaded in a separate process. In more detail, loading “https://mozilla.org” and also loading “http://getpocket.com” will cause Site Isolation to separate the two sites into their own operating system processes because they are not considered “same-site”.

Similarly, “https://getpocket.com” (note the difference between http and https) will also be loaded into a separate process – so ultimately all three sites will load in different processes.

For the sake of completeness, there are some domains such as “.github.io” or “.blogspot.com” that would be too general to identify a “site”. This is why we use a community-maintained list of effective top level domains (eTLDs) to aid in differentiating between sites.

Since “github.io” is listed as an eTLD, “a.github.io”  and “b.github.io” would load in different processes. In our running examples, websites “www.my-bank.com” and “www.attacker.com” are not considered “same-site” with each other and will be isolated in separate processes.
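The same-site decision described above can be sketched with a toy version of that eTLD lookup. This is a conceptual sketch only, not Firefox's actual implementation; real browsers consult the full community-maintained Public Suffix List rather than the tiny hard-coded set used here:

```javascript
// Toy effective-TLD list; the real list has thousands of entries.
const ETLDS = new Set(["com", "org", "github.io", "blogspot.com"]);

// The "site" is the scheme plus the registrable domain (eTLD + 1 label).
function siteFor(url) {
  const { protocol, hostname } = new URL(url);
  const labels = hostname.split(".");
  for (let i = 1; i < labels.length; i++) {
    const suffix = labels.slice(i).join(".");
    if (ETLDS.has(suffix)) {
      // Found the eTLD; keep it plus the label immediately before it.
      return protocol + "//" + labels.slice(i - 1).join(".");
    }
  }
  return protocol + "//" + hostname;
}

function sameSite(a, b) {
  return siteFor(a) === siteFor(b);
}

// "github.io" is an eTLD, so these are different sites:
console.log(sameSite("https://a.github.io/", "https://b.github.io/")); // false
// Same registrable domain and scheme => same site:
console.log(sameSite("https://www.my-bank.com/", "https://login.my-bank.com/")); // true
// The scheme matters too ("schemeful" same-site):
console.log(sameSite("https://getpocket.com/", "http://getpocket.com/")); // false
```

Sites that compare as different under this rule end up in different operating system processes.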

Two hand-drawn diagrams, with the first labeled “With Site Isolation, we will load these sites in different processes”. It shows two browser windows, one, www.attacker.com, loaded in process 5, and www.my-bank.com loaded in process 16. On top of the banking window, there is a cartoon face that looks happy, personifying the browser. In contrast, the webpage area of the www.attacker.com window contains a face that is looking at the banking window, with a mischievous smile. In the second diagram, labeled “Attacker.com tries to execute a sophisticated attack”, we see the same two browser windows. There is a one-column table labelled “memory where my-bank’s data is stored in process 16” underneath the banking window. It has two entries: “credit card info” and “login password”. A hand extending from the malicious site tries to reach towards the table (aka the memory of the banking window), but is unable to reach it, due to the process boundary. The face of the malicious site is frowning and looks unhappy, while the face representing the browser continues to look happy and carefree. The second window’s data is safe from the malicious site.

With Site Isolation, Firefox loads each site in its own process, thereby isolating their memory from each other, and relies on security guarantees of the operating system.

Suppose now, you open the same two websites: www.attacker.com and www.my-bank.com, as seen in the diagram above. Site isolation recognizes that the two sites are not “same-site” and hence the site isolation architecture will completely separate content from attacker.com and my-bank.com into separate operating system processes.

This process separation of content from different sites provides the memory protections required for a secure browsing experience, making it even harder for sites to execute Spectre-like attacks against our users.

The window, www.attacker.com, embeds a page from a different site, www.my-bank.com. The top level page is loaded in process 3 and the subframe corresponding to the bank site is loaded in process 5. The two sites are, thus, isolated from each other in different operating system processes.

With Site Isolation, Firefox loads subframes from different sites in their own processes.

Just as sites loaded into two different tabs are separated, so are two different sites when one is loaded into a subframe. Let’s revisit the earlier example where pages contained subframes: with Site Isolation, subframes that are not “same-site” with the top level page will load in a different process.

In the diagram above, we see that the page www.attacker.com embeds a page from www.my-bank.com and loads in a different process. Having a top level document and subframes from different sites loaded in their own processes ensures their memory is isolated from each other, yielding profound security guarantees.

Additional Benefits of Site Isolation

With the Site Isolation architecture in place, we are able to bring additional security hardening to Firefox to keep you and your data safe. Besides providing an extra layer of defence against possible security threats, Site Isolation brings other wins:

  • By placing more pages into separate processes, we can ensure that doing heavy computation or garbage collection on one page will not degrade the responsiveness of pages in other processes.
  • Using more processes to load websites allows us to spread work across many CPU cores and use the underlying hardware more efficiently.
  • Due to the finer-grained separation of sites, a subframe or a tab crashing will not affect websites loaded in different processes, resulting in an improved application stability and better user experience.

Going Forward

We are currently testing Site Isolation with a subset of users on the Nightly and Beta desktop browsers and will be rolling it out to more desktop users soon. However, if you want to benefit from the improved security architecture right away, you can enable it by downloading the Nightly or Beta browser from here and following these steps:

To enable Site Isolation on Firefox Nightly:

  1. Navigate to about:preferences#experimental
  2. Check the “Fission (Site Isolation)” checkbox to enable.
  3. Restart Firefox.

To enable Site Isolation on Firefox Beta or Release:

  1. Navigate to about:config.
  2. Set `fission.autostart` pref to `true`.
  3. Restart Firefox.

For technical details on how we group sites and subframes together, you can check out our new process manager tool at “about:processes” (type it into the address bar) and follow the project at  https://wiki.mozilla.org/Project_Fission.

With Site Isolation enabled on Firefox for Desktop, Mozilla takes its security guarantees to the next level and protects you against a new class of malicious attacks by relying on memory protections of OS-level process separation for each site. If you are interested in contributing to Mozilla’s open-source projects, you can help us by filing bugs here if you run into any problems with Site Isolation enabled.


Site Isolation (Project Fission), has been a massive multi-year project. Thank you to all of the talented and awesome colleagues who contributed to this work! It’s a privilege to work with people who are passionate about building the web we want: free, inclusive, independent and secure! In particular, I would like to thank Neha Kochar, Nika Layzell, Mike Conley, Melissa Thermidor, Chris Peterson, Kashav Madan, Andrew McCreight, Peter Van der Beken, Tantek Çelik and Christoph Kerschbaumer for their insightful comments and discussions.  Finally, thank you to Morgan Rae Reschenberg for helping me craft alt-text to meet the high standards of our web accessibility principles and allow everyone on the internet to easily gather the benefits provided by Site Isolation.

The post Introducing Firefox’s new Site Isolation Security Architecture appeared first on Mozilla Hacks - the Web developer blog.

Spidermonkey Development BlogErgonomic Brand Checks will ship with Firefox 90

When programming with Private Fields and methods, it can sometimes be desirable to check if an object has a given private field. While the semantics of private fields allow doing that check by using try...catch, the Ergonomic Brand checks proposal provides a simpler syntax, allowing one to simply write #field in o.

As an example, the following class uses ergonomic brand checks to provide a more helpful custom error.

class Scalar {
  #length = 0;

  add(s) {
    if (!(#length in s)) {
      throw new TypeError("Expected an instance of Scalar");
    }

    this.#length += s.#length;
  }
}

While the same effect could be accomplished with try...catch, it’s much uglier, and also doesn’t work reliably in the presence of private getters which may possibly throw for different reasons.
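For comparison, here is a sketch of that try...catch workaround. The static `isScalar` helper is hypothetical, added here purely for illustration:

```javascript
// Pre-proposal brand check: attempt a private-field access and treat the
// TypeError as "not one of us", instead of writing `#length in s`.
class Scalar {
  #length = 0;

  static isScalar(obj) {
    try {
      obj.#length; // throws TypeError if obj lacks the private field
      return true;
    } catch {
      return false;
    }
  }

  add(s) {
    if (!Scalar.isScalar(s)) {
      throw new TypeError("Expected an instance of Scalar");
    }
    this.#length += s.#length;
  }
}
```

Note the reliability caveat: if the class instead exposed `#length` via a private getter that could itself throw, this check would misreport, whereas `#length in s` answers only the question of whether the field exists.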

This JavaScript language feature proposal is at Stage 3 of the TC39 process, and will ship in Firefox 90.

The Rust Programming Language BlogAnnouncing Rustup 1.24.2

The rustup working group is happy to announce the release of rustup version 1.24.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.24.2 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.24.2

1.24.2 introduces pooled allocations to prevent memory fragmentation issues on some platforms with 1.24.x. We're not entirely sure what aspect of the streamed unpacking logic caused allocator fragmentation, but memory pools are a well known fix that should solve this for all platforms.

Those who were encountering CI issues with 1.24.1 should find them resolved.

Other changes

You can check out all the changes to Rustup for 1.24.2 in the changelog!

Rustup's documentation is also available in the rustup book.

Finally, the Rustup working group are pleased to welcome a new member. Between 1.24.1 and 1.24.2 二手掉包工程师 (hi-rustin) has joined, having already made some excellent contributions.


Thanks again to all the contributors who made rustup 1.24.2 possible!

  • Carol (Nichols || Goulding)
  • Daniel Silverstone
  • João Marcos Bezerra
  • Josh Rotenberg
  • Joshua Nelson
  • Martijn Gribnau
  • pierwill
  • Robert Collins
  • 二手掉包工程师 (hi-rustin)

The Rust Programming Language BlogSix Years of Rust

Today marks Rust's sixth birthday since it went 1.0 in 2015. A lot has changed since then, especially over the past year, and Rust was no exception. In 2020, there was no foundation yet, no const generics, and a lot of organisations were still wondering whether Rust was production ready.

In the midst of the COVID-19 pandemic, hundreds of Rust's globally distributed team members and volunteers shipped over nine new stable releases of Rust, in addition to various bugfix releases. Today, "Rust in production" isn't a question, but a statement. The newly founded Rust foundation has several members who value using Rust in production enough to help continue to support and contribute to its open development ecosystem.

We wanted to take today to look back at some of the major improvements over the past year, how the community has been using Rust in production, and finally look ahead at some of the work that is currently ongoing to improve and use Rust for small and large scale projects over the next year. Let's get started!

Recent Additions

The Rust language has improved tremendously in the past year, gaining a lot of quality-of-life features that, while they don't fundamentally change the language, make using and maintaining Rust in more places even easier.

  • As of Rust 1.52.0 and the upgrade to LLVM 12, one of few cases of unsoundness around forward progress (such as handling infinite loops) has finally been resolved. This has been a long running collaboration between the Rust teams and the LLVM project, and is a great example of improvements to Rust also benefitting the wider ecosystem of programming languages.

  • On supporting an even wider ecosystem, the introduction of Tier 1 support for 64 bit ARM Linux, and Tier 2 support for ARM macOS & ARM Windows, has made Rust an even better place to easily build your projects across new and different architectures.

  • The most notable exception to the theme of polish has been the major improvements to Rust's compile-time capabilities. The stabilisation of const generics for primitive types, the addition of control flow to const fns, and allowing procedural macros to be used in more places have allowed powerful new kinds of APIs and crates to be created.

Rustc wasn't the only tool that had significant improvements.

  • Cargo just recently stabilised its new feature resolver, that makes it easier to use your dependencies across different targets.

  • Rustdoc stabilised its "intra-doc links" feature, allowing you to easily and automatically cross reference Rust types and functions in your documentation.

  • Clippy with Cargo now uses a separate build cache that provides much more consistent behaviour.

Rust In Production

Each year Rust's growth and adoption in the community and industry has been unbelievable, and this past year has been no exception. Once again in 2020, Rust was voted StackOverflow's Most Loved Programming Language. Thank you to everyone in the community for your support, and help making Rust what it is today.

With the formation of the Rust foundation, Rust has been in a better position to build a sustainable open source ecosystem empowering everyone to build reliable and efficient software. A number of companies that use Rust have formed teams dedicated to maintaining and improving the Rust project, including AWS, Facebook, and Microsoft.

And it isn't just Rust that has been getting bigger. Larger and larger companies have been adopting Rust in their projects and offering officially supported Rust APIs.

  • Both Microsoft and Amazon have just recently announced and released their new officially supported Rust libraries for interacting with Windows and AWS. Official first party support for these massive APIs helps make Rust people's first choice when deciding what to use for their project.
  • The cURL project has released new versions that offer opt-in support for using Rust libraries for handling HTTP/s and TLS communication. This has been a huge inter-community collaboration between the ISRG, the Hyper & Rustls teams, and the cURL project, and we'd like to thank everyone for their hard work in providing new memory safe backends for a project as massive and widely used as cURL!
  • Tokio (an asynchronous runtime written in Rust), released its 1.0 version and announced their three year stability guarantee, providing everyone with a solid, stable foundation for writing reliable network applications without compromising speed.

Future Work

Of course, all of that is just the start; we're seeing more and more initiatives putting Rust in exciting new places.

Right now the Rust teams are planning and coordinating the 2021 edition of Rust. Much like this past year, a lot of the themes of the changes are around improving quality of life. You can check out our recent post about "The Plan for the Rust 2021 Edition" to see what changes the teams are planning.

And that's just the tip of the iceberg; there are a lot more changes being worked on, and exciting new open projects being started every day in Rust. We can't wait to see what you all build in the year ahead!

Are there changes, or projects from the past year that you're excited about? Are you looking to get started with Rust? Do you want to help contribute to the 2021 edition? Then come on over, introduce yourself, and join the discussion over on our Discourse forum and Zulip chat! Everyone is welcome, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, disability, ethnicity, religion, or similar personal characteristic.

Firefox NightlyThese Weeks in Firefox: Issue 93


  • Firefox 89 introduces a fresh new look and feel!
    • Floating tabs!
    • Streamlined menus!
    • New icons!
    • Better dark mode support!
    • Improved context menus on Mac and Windows
    • Improved perceived startup performance on Windows
    • Native context menus and rubberbanding/overscroll on macOS
    • Refreshed modals dialogs and notification bars!
    • More details in these release notes and in this early review from laptopmag
  • Non-native form controls are slated to ride out in Firefox 89 as well
    • This lays the groundwork for improving the sandboxing of the content processes by shutting off access to native OS widget drawing routines
  • (Experimental, and en-US Nightly only) Users will now get unit conversions directly in the URL bar! Users can type “5 lbs to kg” and see a copy/paste friendly result instantaneously.

Friends of the Firefox team

For contributions from April 20 2021 to May 4 2021, inclusive.


Resolved bugs (excluding employees)
Fixed more than one bug
  • Falguni Islam
  • Itiel
  • kaira [:anshukaira]
  • Kajal Sah
  • Luz De La Rosa
  • Richa Sharma
  • Sebastian Zartner [:sebo]
  • Vaidehi
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting from Firefox 90, when no extensions are installed, the about:addons page will show users a friendlier message that explicitly directs them to addons.mozilla.org, instead of an empty list of installed extensions (Bug 1561538) – Thanks to Samuel Grasse-Haroldsen for fixing this polish issue.
  • As part of the ongoing work to get rid of OS.File usage, Barret uncovered and fixed some races in the AddonManager and XPIDatabase JSMs (Bug 1702116)
  • Fixed a macOS-specific issue in the “Manage Extension Shortcuts” about:addons view, which was preventing this view from detecting some of the conflicting shortcuts (Bug 1565854)
WebExtensions Framework
WebExtension APIs
  • Nicolas Chevobbe applied the needed changes to ensure that the devtools.inspectedWindow.reload method is Fission-compatible also when an extension passes it the userAgent option (Bug 1706098)


  • Neil has been working on reviving the tab unloader for when users are hitting memory limits
    • It’s smarter this time though, and should hopefully make better choices on which tabs to unload.
    • Currently disabled by default, but Nightly users can test it by setting `browser.tabs.unloadOnLowMemory` to `true`

Messaging System


Performance Tools

  • Stacks now include the category color of each stack frame (in tooltips, marker table, sidebar)
    • Before and after image with stack frames highlighted in different colors.
  • Fixed a bug where the dot markers appear in the wrong places.
    • Profiler timeline with markers correctly displayed.

Search and Navigation

  • Lots of polish fixes to Proton address bar (and search bar)
  • The Search Mode chiclet can now be closed also when the address bar is unfocused – Bug 1701901
  • Address bar results action text (for example “Switch to tab”, or “Search with Engine”) won’t be pushed out of the visible area by long titles anymore – Bug 1707839
  • Double dots in domain-looking strings will now be corrected – Bug 1580881


Kajal, Falguni, Dawit, and Kaira have been working on removing server side code from screenshots

Niko MatsakisCTCFTFTW

This Monday I am starting something new: a monthly meeting called the “Cross Team Collaboration Fun Times” (CTCFT)[1]. Check out our nifty logo[2]:


The meeting is a mechanism to help keep the members of the Rust teams in sync and in touch with one another. The idea is to focus on topics of broad interest (more than two teams):

  • Status updates on far-reaching projects that could affect multiple teams;
  • Experience reports about people trying new things (sometimes succeeding, sometimes not);
  • “Rough draft” proposals that are ready to be brought before a wider audience.

The meeting will focus on things that could either offer insights that might affect the work you’re doing, or where the presenter would like to pose questions to the Rust teams and get feedback.

I announced the meeting some time back to all@rust-lang.org, but I wanted to make a broader announcement as well. This meeting is open for anyone to come and observe. This is by design. Even though the meeting is primarily meant as a forum for the members of the Rust teams, it can be hard to define the borders of a community like ours. I’m hoping we’ll get people who work on major Rust libraries in the ecosystem, for example, or who work on the various Rust teams that have come into being.

The first meeting is scheduled for 2021-05-17 at 15:00 Eastern and you will find the agenda on the CTCFT website, along with links to the slides (still a work-in-progress as of this writing!). There is also a twitter account @RustCTCFT and a Google calendar that you can subscribe to.

I realize the limitations of a synchronous meeting. Due to the reality of time zones and a volunteer project, for example, we’ll never be able to get all of Rust’s global community to attend at once. I’ve designed the meeting to work well even if you can’t attend: the goal is to have a place to start conversations, not to finish them. Agendas are announced well in advance and the meetings are recorded. We’re also rotating times – the next meeting on 2021-06-21 takes place at 21:00 Eastern time, for example.[3]

Hope to see you there!


  1. In keeping with Rust’s long-standing tradition of ridiculous acronyms. 

  2. Thanks to @Xfactor521! 🙏 

  3. The agenda is still TBD. I’ll tweet when we get it lined up. We’re not announcing that far in advance! 😂 

Mozilla Open Policy & Advocacy BlogDefending users’ security in Mauritius

Yesterday, Mozilla and Google filed a joint submission to the public consultation on amending the Information and Communications Technology (ICT) Act organised by the Government of Mauritius. Our submission states that the proposed changes would disproportionately harm the security of Mauritian users on the internet and should be abandoned. Mozilla believes that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. The proposals under these amendments are fundamentally incompatible with this principle and would fail to achieve their projected outcomes.

Under Section 18(m) of the proposed changes, the ICTA could deploy a “new technical toolset” to intercept, decrypt, archive and then inspect/block https traffic between a local user’s Internet device and internet services, including social media platforms.

In their current form, these measures will place the privacy and security of internet users in Mauritius at grave risk. This blunt and disproportionate action would allow the government to decrypt, read and store anything a user types or posts on the internet, including intercepting their account information, passwords and private messages. While doing little to address the legitimate concerns of content moderation in local languages, it would undermine trust in the fundamental security infrastructure that currently serves as the basis for the security of at least 80% of websites on the web that use HTTPS, including those that carry out e-commerce and other critical financial transactions.

When similarly dangerous mechanisms have been abused in the past, whether by known-malicious parties, business partners such as a computer or device manufacturer, or a government entity, as browser makers we have taken steps to protect and secure our users and products.

In our joint submission to the on-going public consultation, Google and Mozilla have urged the Authority not to pursue this approach. Operating within international frameworks for cross-border law enforcement cooperation and enhancing communication with industry can provide a more promising path to address the stated concerns raised in the consultation paper. We remain committed to working with the Government of Mauritius to address the underlying concerns in a manner that does not harm the privacy, security and freedom of expression of Mauritians on the internet.

The post Defending users’ security in Mauritius appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogMozilla files joint amicus brief in support of California net neutrality law

Yesterday, Mozilla joined a coalition of public interest organizations* in submitting an amicus brief to the Ninth Circuit in support of SB 822, California’s net neutrality law. In this case, telecom and cable companies are arguing that California’s law is preempted by federal law. In February of this year, a federal judge dismissed this challenge and held that California can enforce its law. The telecom industry appealed that decision to the 9th Circuit. We are asking the 9th Circuit to find that California has the authority to protect net neutrality.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

*Mozilla is joined on the amicus brief by Access Now, Public Knowledge, New America’s Open Technology Institute and Free Press.

The post Mozilla files joint amicus brief in support of California net neutrality law appeared first on Open Policy & Advocacy.