Mozilla Add-ons Blog: Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today)

Starting December 14, 2023, extensions marked as Android compatible on addons.mozilla.org (AMO) will be openly available to Firefox for Android users.

“We’ve been so impressed with developer enthusiasm and preparation,” said Giorgio Natili, Firefox Director of Engineering. “Just a few weeks ago it looked like we might have a couple hundred Android extensions for launch, but now we can safely say AMO will have 400+ new Firefox for Android extensions available on December 14. We couldn’t be more thankful to our developer community for embracing this exciting moment.”

In anticipation of the launch of open extensions on Android, we just added a link to “Explore all Android extensions” on AMO’s Android page to make it easy to discover new content. And just for fun and to offer a taste of what’s to come, we also released a couple dozen new open extensions for Android. You can find them listed beneath the Recommended Extensions collection on that AMO Android page. Try a few out!

Get your Firefox desktop extension ready for Android

There’s still time to make your desktop extension compatible with Firefox for Android if you want to be part of the December 14 launch. Senior Developer Relations Engineer Simeon Vincent recently hosted two webinars to help developers work through common migration hurdles. Here are recorded webinars from October (an introduction to mobile extension migration) and November (setup, testing, debugging).

Simeon also hosts open “office hours” every Monday and Tuesday for anyone interested in signing up to receive 1:1 guidance on Firefox for Android extension development. Office hours run through December, so be sure to tap Simeon’s expertise while time remains.

“Early Add-opter” t-shirts still available!

Are you a developer planning to make your desktop extension work with Firefox for Android by December 14? Do you like cool free t-shirts? Great! Then email us at firefox-android-addon-support [at] with a link to your extension’s AMO listing page and we’ll follow up with t-shirt order details. Better act fast though, we’ve only got 200 tees total and just a few remain.

The post Open extensions on Firefox for Android debut December 14 (but you can get a sneak peek today) appeared first on Mozilla Add-ons Community Blog.

Niko Matsakis: Project Goals

Lately I’ve been iterating on an idea I call project goals. Project goals are a new kind of RFC that defines a specific goal that a specific group of people hope to achieve in a specific amount of time – for example, “Rusty Spoon Corp proposes to fund 2 engineers full time to stabilize collections that support custom memory allocations by the end of 2023”.

Project goals would also include asks from various teams that are needed to complete the goal. For example, “Achieving this goal requires a dedicated reviewer from the compiler team along with an agreement from the language design team to respond to RFCs or nominated issues within 2 weeks.” The decision of whether to accept a goal would be up to those teams who are being asked to support it. If those teams approve the RFC, it means they agree with the goal, and also that they agree to commit those resources.

My belief is that project goals become a kind of incremental, rolling roadmap, declaring our intent to fix specific problems and then tracking our follow-through (or lack thereof). As I’ll explain in the post, I believe that a mechanism like project goals will help our morale and help us to get shit done, but I also think it’ll help with a bunch of other ancillary problems, such as providing a clearer path to get involved in Rust as well as getting more paid maintainers and contributors.

At the moment, project goals are just an idea. My plan is to author some sample goals to iron out the process and then an RFC to make it official.

Driving a goal in the Rust project is an uncertain process

Rust today has a lot of half-finished features waiting for people to invest time into them. But figuring out how to do so can be quite intimidating. You may have to trawl through GitHub or Zulip threads to figure out what’s going on. Once you’ve done that, you’ll likely have to work through some competing constraints to find a proposed solution. But that stuff isn’t the real problem. The real problem is that, once you’ve invested that time and done that work, you don’t really know whether anyone will care enough about your work to approve it. There’s a good chance you’ll author an RFC, or a PR, and nobody will even respond to it.

Rust teams today often operate in a fairly reactive mode, without clear priorities. The official Rust procedures are almost exclusively ‘push’, and often based on evaluating artifacts, not intentions – people decide on a problem they would like to see solved, and write an RFC or a PR to drive it forward; the teams decide whether to accept that work. But there is no established way to get feedback from the team on whether this is a problem – or an approach to the problem – that would be welcome. And even if the team does theoretically want the work, there is no real promise that they’ll respond, nor accountability when they do not.

We do try to be proactive and talk about our goals. Teams sometimes post lists of aspirations or roadmaps to Inside Rust, for example, and we used to publish annual roadmaps as a project. But these documents have never seemed very successful to me. There is a fundamental tension that is peculiar to open source: the teams are not the ones doing the work. Teams review and provide feedback. Contributors do the work, and ultimately they decide what they will work on (or whether they will do work at all). It’s hard to plan for the kinds of things you will do when you don’t know what resources you have. A more reliable barometer of the Rust project’s priorities has been the personal blogs of the people doing the work, where they talk about the goals they personally plan to drive.

This uncertainty holds back investment

The uncertainty involved in trying to push an idea forward in Rust is a major deterrent for companies thinking about investing in Rust. I hear about this gap from virtually every angle:

  • Imagine you’re a developer who wants to use paid time to work on open source. How do you convince your manager it makes sense? Right now, the best you can do is “I think I can make progress, and besides, it’s the right thing to do!”
  • Imagine you’re a contractor who wants to deliver for a client. They want to pay you to help drive a feature over the finish line – but you can’t be sure if you’re going to be able to deliver, since it will require consensus from a Rust team, and it’s unclear whether it meets their priorities.
  • Imagine you’re a CTO considering whether to adopt Rust for your company. You see that there are gaps in an area, but you don’t know whether that is something the project is actively looking to close, or what.
  • Or maybe you’re a CTO who has adopted Rust and is looking to “give back” to the community by contributing. You want to help deliver support for a feature you need and that you know a lot of people in the community would like, but you can’t figure out how to get started, and you can’t afford to have an engineer or two work on something for months without a return.

But some things work really well and we don’t want to lose those

Rust’s development may be chaotic, but there’s a beauty to it as well. As Mara’s classic blog post put it, “Rust is not a company”. Rust’s current structure allows for a feature to make progress in fits and starts, which means we can accommodate many different interest levels and motivations. Someone who is motivated can author and contribute an RFC, and then disappear. Somebody else can pick up the ball and move the implementation forward. And yet a third person can drive the docs and stabilization over the finish line. This is not only cool to watch, it also means that some features get done that would never be “top priority”. Consider let-else – this is one of the most popular features from the last few years, and yet, compared against core enablers like “async fn in trait”, it clearly takes second place in the priority list. But that’s fine: there are plenty of folks who don’t have the time or expertise to work on async fn in trait, but they can move let-else forward. It’s really important to me that we don’t lose this.

Proposal: project goal RFCs

So, top-down roadmaps are a poor fit for open-source. But working purely bottom-up has its own downsides. What can we do?

My proposal is to form roadmaps, but to do it bottom-up, via a new kind of RFC called a project goal RFC. A regular RFC proposes a solution to a problem. A project goal RFC proposes a plan to solve a particular problem in a particular timeframe. This could be specific, like “stabilize support for async closures in 2024”, or it could be more general, like “land nightly support for managing resource cleanup in async functions in 2024”. What it can’t be is non-actionable, such as “simplify async programming in 2024” or “make async Rust nice in 2024”.

Project goal RFCs are opened by the goal owners, the people proposing to do the work. They are approved by the teams which will be responsible for approving that work.1 The RFC serves as a kind of contract: the owners will drive the work and the team will review that work and/or provide other kinds of support (such as mentorship).

Project goal RFCs are aimed squarely at larger projects

Project goal RFCs are not appropriate for all projects. In fact, they’re not appropriate for most projects. They are meant for larger, flagship projects, the kind where you want to be sure that the project is aligned around the goals before you start investing heavily. Here are some examples where I think project goal RFCs would be useful…

  • The async WG set an “unofficial” project goal of shipping async functions in traits this year (coming Dec 28!). Honestly, setting a goal like this felt a bit uncomfortable, as we didn’t have a means to make it “official and blessed”. I think that would have also helped during the push to stabilization, since we could reference this goal to help make the case for “time to ship”.
  • Goals might also take the shape of internal improvements. The types team is driving a flagship goal to ship a new trait solver. Authoring a project goal RFC would help bring this visibility and would also make it easier to make the case for funding work on this project.
  • I sometimes help to mentor collaborations with people in universities or with Master’s students. Project goals would let us set expectations up front about what work we expect to do during that time.
  • I’d like to drive consensus around the idea of easing tradeoffs with profiles – but I don’t want to start off with an RFC that is going to focus discussion on the details of how profiles are specified. I want to start off by getting alignment around whether to do something like profiles at all. Wearing my Amazon manager hat, having alignment there would also influence whether I allocated some of our team’s bandwidth to work on that. A project goal could be perfect for that.
  • The Foundation has run several project grant programs, and one of the challenges has been trying to choose projects to fund which will be welcomed by the project. As I’ve been saying, we don’t really have a mechanism for making those sorts of decisions.
  • The embedded working group or the Rust For Linux folks have a bunch of pain points. I think it’s been hard for us to manage cooperation between those really important efforts and the other Rust teams. Developing a joint project goal would be a way to highlight needs.
  • Someone who wants to work on Rust at their company could work with a team to develop an official goal that they can show to their manager to get authorized work time.
  • Companies that want to invest in Rust to close gaps could propose project goals. One candidate that comes up a lot is support for custom allocators and collections with fallible allocation; I frequently get asked how a company could help move that work forward. The same mechanism would also allow larger companies to propose goals that they’d like to drive. For example, there was a recent RFC on debugger visualization aimed at better support for debugging Rust in Windows. I could imagine folks from Microsoft proposing some goals in that area.

Anatomy of a project goal RFC

Project goal RFCs need to include enough detail that both the owners and the teams know what they are signing up for. I believe a project goal RFC should answer the following questions:

  • Why is this work important?
  • What work will be done on what timeframe?
    • This should include…
      • milestones you will meet along the way,
      • specific use-cases you plan to address,
      • and guiding principles that will be used during design.
  • Who will be doing the work, and how much time will they have?
  • What support is needed and from which Rust teams?

The list above is intentionally somewhat detailed. Project goal RFCs are not meant to be used for everything. They are meant to be used for goals that are big enough that doing the planning is worthwhile. The planning also helps the owners and the teams set realistic timelines. (My assumption is that the first few project goals we set will be wildly optimistic, and over time we learn to temper our expectations.)

Why is this work important?

Naturally whenever we propose to do something, it is important to explain why this thing is worth doing. A quality project goal will lay out the context and motivation. The goal is for the owners to explain to the team why the team should dedicate their maintenance bandwidth to this feature. It’s also a space for the owners to explain to the world why they feel it’s worth their time to do the work to develop this feature.

What will be done and on what timeframe?

The heart of the project goal is declaring what work is to be done and when it will be done by. It’s important that this “work to be done” is specific enough to be evaluated. For example, “make async nice next year” is not a good goal. Something like “stabilize async closures in 2024” is good. It’s also ok to just talk about the problem to be solved, if the best solution isn’t known yet. For example, “deliver nightly support for managing resource cleanup in async programs in 2025” is a good goal that could be solved by “async drop” but also by some other means.

Scaling work with timeframes and milestones

Goals should always include a specific timeframe, such as “in 2024” or “in 2025”. I think these timeframes will typically be about a year. If the time is too short, then the work is probably not significant enough to call it a goal. But if the timeframe is much longer than a year, then it’s probably best to scale back the “work to be done” to something more intermediate.

Of course, many goals will be part of a bigger project. For example, if one took a goal to deliver nightly support for something in 2024, then the next year, one might propose a goal to stabilize that support.

Ideally, the goal will also include milestones along the way. For example, if the goal is to have something stable in 1 year, it might begin with an RFC after 3 months, then 3 months of impl, 3 months of gaining experience, and 3 months for stabilization.

Pinning things down with use-cases

Unlike a feature RFC, a project goal RFC does not specify a precise design for the feature in question. Even if the project goal is something relatively specific, like “add support for async functions in traits”, there will still be a lot of ambiguity about what counts as success. For example, we decided to stabilize async functions in traits without support for send bounds. This means that some use cases, notably a crate like tower, aren’t supported yet. Does this count as success? To help pin this down, the project goal should include a list of use cases that it is trying to address.

Establishing guiding principles early

Finally, especially when goals involve a fair bit of design leeway, it is useful to lay down some of the guiding principles the goal owners expect to use. I think having discussion around these principles early will really help focus discussions later on. For example, when discussing how dynamic dispatch for async functions in traits should work, Tyler Mandry and I had an early goal that it should “just work” for simple cases but give the ability to customize behavior. But we quickly found that ran smack into Josh’s prioritization of allocation transparency. This conflict was predictable and I think it would have been useful to have had the discussion around these tenets early as a lang team, rather than waiting.2

Who will be doing the work, and how much time will they have?

Part of the goal is specifying who is going to be doing the work. For example, the goal might say “two developers to work at 50% time”. It might also say something more flexible, like “one developer to create quest issues and then mentor a group of volunteers to drive most of the work”. If possible, including specific names is useful too, particularly in more specialized areas. For example, “Ralf Jung and one graduate student will pursue an official set of rules for stacked borrows”.

What support is needed and from which Rust teams?

This section is where the project goal owners make asks of the project. Here are some typical asks that I expect we will have:

  • A dedicated reviewer for PRs to the compiler and an expected SLA of reviews within 3 days (or 1 week, or something).
  • An agreement from the lang team to review and provide feedback on RFCs.
  • Mentorship on some aspect or other.

I think teams should suggest the expected shape of asks and track their resources. For example, the lang team can probably manage only a small number of “prioritized RFCs” at a time, so if there are more project goals, they may have to wait or accept a lower SLA.

Tracking progress

One of the interesting things about project goals is that they give us an immediate roadmap. I would like to see the project author a quarterly report – which means every 12 weeks, or two release cycles. This report would include all the current project goals and updates on their progress. Did they make their declared milestones? If not, why not? Because project goals don’t cover the entirety of the work we do, the report could also include other significant developments. This would be published on the main Rust blog and would let people follow along with Rust development and get a sense for our current trajectory.

One thing I’ve learned, though: you can’t require the goal owners to author that blog post. It would be much better to have a dedicated person or team authoring the blog posts and pinging the goal owners to get those status updates. Preparing an update so that it can be understood by a mass audience is its own sort of skill. Moreover, goal owners will be tempted to put it off, and the updates won’t happen. I think it’s quite important that these project updates happen every quarter, like clockwork, just as our Rust releases do. This is true even if the update has to ship without an update from some goals.

I envision this progress tracking as providing a measure of accountability. When somebody takes a goal, we’ll be able to follow along with their progress. I’ve seen at Amazon and elsewhere that having written down a goal and declared milestones, and then having to say whether you’ve met them, helps to keep teams focused on getting the job done. I often find that I have a job about 95% done but then, in the week before I have to write an update about it, I’m inspired to go and finish that last 5%.

Conclusion: next steps

My next step is that I am going to fashion an RFC making the case for project goals. This RFC will include a template. To try out the idea, I plan to also author an example project goal for “async function in traits” and perhaps some other ongoing or proposed efforts. In truth, I don’t think we need an RFC to do project goals – nothing is stopping us from accepting whatever RFC we want – but I see some value in spelling out and legitimizing the process. I think this probably ought to be approved by the governance council, which is an interesting test for that new group.

There are some follow-up questions worth discussing. One of the ones I think is most interesting is how to manage the quarterly project updates. This deserves a post of its own. The short version of my opinion is that I think it’d be great to have an open source “reporting” team that has the job of authoring this update and others of its ilk. I suspect that this team would work best if we had one or more people paid to participate and to bear the brunt of some of the organizational lift. I further suspect that the Foundation would be a good place for at least one of those people. But this is getting pretty speculative by now and I’d have to make the case to the board and Rust community that it’s a good use for the Foundation budget, which I certainly have not done.

It’s worth noting that I see project goal RFCs as just one piece of a larger puzzle that is giving a bit more structure to our design effort. One thing I think went wrong in prior efforts was that we attempted to be too prescriptive and too “one size fits all”. These days I tend to think that the only thing we must have to add a new feature to stable is an FCP-binding decision from the relevant team(s). All the rest, whether it be authoring a feature RFC or creating a project goal RFC, are steps that make sense for projects of a certain magnitude, but not for everything. Our job then should be to lay out the various kinds of RFCs one can write and when they are appropriate for use, and then let the teams judge how and when to request one.

  1. In theory, anyway. In practice, I imagine that many team maintainers may keep some draft project goal RFCs in their pocket, looking for someone willing to do the work. ↩︎

  2. The question of how to make dyn async traits easy to use and transparent remains unresolved, which is partly why I’m keen on something like profiles. ↩︎

SpiderMonkey Development Blog: SpiderMonkey Newsletter (Firefox 118-121)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 118 to 121 Nightly release cycles.

The team wishes you Happy Holidays!

🚀 Performance

We’re working with other Firefox teams to improve performance for popular web frameworks such as React. This work is largely driven by the Speedometer 3 benchmark that Mozilla is collaborating on with other browser vendors. The Performance Team recently gave a conference talk all about Speedometer 3.

We can’t list all of our improvements here, but the list below covers some of this work.

  • We’ve added JIT optimizations for property accesses involving proxies. As described in this Mozilla Hacks post, this significantly improved performance on the Vue.js 3 framework.
  • We added more optimizations for Object.assign.
  • We’ve changed how some Baseline IC stubs are allocated to use less memory and to be faster.
  • Array destructuring has been optimized with a fast path in the bytecode.
  • We improved JSON parsing to help avoid GC time when parsing very large files.
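Although the changes above live inside the engine, the patterns they speed up are ordinary JavaScript. As a rough illustration (plain JS, not SpiderMonkey internals), these are the kinds of code those `Object.assign` and destructuring optimizations target:

```javascript
// Merging plain objects with Object.assign, a pattern common in
// framework code and now covered by added optimizations.
const defaults = { retries: 3, timeout: 1000 };
const config = Object.assign({}, defaults, { timeout: 500 });

// Array destructuring, now compiled to a bytecode fast path.
const [first, second, ...rest] = [10, 20, 30, 40];

console.log(config.timeout, first, rest.length); // 500 10 2
```

Nothing changes in how you write this code; the engine simply executes it faster.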

We also partially fixed a long-standing performance issue with the DevTools Web Console: if the JS code doesn’t use any of the special console variables, it will now run as fast as regular JS code on a website.

⚡ Wasm GC

We’re shipping WebAssembly GC in Firefox 120! 🎉 This is a large feature that makes it possible for high-level languages to compile to WebAssembly and use the browser’s garbage collector. The Wasm GC proposal adds struct and array types to WebAssembly for this.

If you’re using Firefox 120 or later, you can try this demo of a Kotlin image viewer or this Dart/Flutter demo. Both of these use Wasm GC.

👷🏽‍♀️ Other features

We’re also shipping Wasm tail calls in Firefox 121. This is an important feature for functional languages such as OCaml or Scheme that rely heavily on tail recursion.

We also shipped some new JS features in Firefox 119.

Additionally we will be shipping the Promise.withResolvers proposal in Firefox 121.
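For readers unfamiliar with the proposal: `Promise.withResolvers()` returns a promise together with its resolve and reject functions, which is handy when a promise must be settled from outside its executor. A minimal standalone sketch of the shape it provides (shown as a free function here, not the engine implementation):

```javascript
// Sketch of the behavior of Promise.withResolvers (the real API is a
// static method on Promise; this standalone function mimics it).
function withResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

// Usage: settle a promise from outside its executor.
const { promise, resolve } = withResolvers();
promise.then((v) => console.log(v)); // logs "done" once resolved
resolve("done");
```

This replaces the long-standing "deferred" pattern that many libraries hand-rolled.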

We implemented some features that are still disabled by default.

⏰ Date parsing improvements

The JS language specification does not define which date/time formats have to be accepted or rejected when converting strings to Date objects. This has resulted in a number of web compatibility issues because there are subtle differences between the date parsers of most JS engines.

Vinny Diehl has volunteered to improve compatibility with other browsers. Here are just a few of these changes:

  • We now accept dates with a period after the month.
  • We accept more numeric dashed dates, for example 1-1-2024.
  • We now support milliseconds in more cases.

The release notes for Firefox 121 (and earlier versions) list more cases.
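To illustrate the formats mentioned above (exact acceptance still differs between engines and versions, so treat these strings as examples of the formats in question rather than a cross-browser guarantee):

```javascript
// A numeric dashed date, now accepted for better cross-browser compatibility.
const d1 = new Date("1-1-2024");

// A month name followed by a period; acceptance of this form varies by engine.
const d2 = new Date("Jan. 15, 2024");

// The unambiguous ISO 8601 form (with milliseconds) remains the safe,
// spec-defined choice for all engines.
const d3 = new Date("2024-01-15T12:00:00.500Z");

console.log(d1.getFullYear(), d3.getUTCMilliseconds()); // 2024 500
```

When you control the input, prefer the ISO 8601 form; the parser improvements are about gracefully handling the strings real websites produce.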

Adrian Gaudebert: The ruins of Dawnmaker's lost continent

Today we are releasing a new version of Dawnmaker, with two big changes. The first one is the 2D board, which I talked about in my previous blog post. The second one is a new feature called "Ruins and Rewards". That feature… adds ruins and… changes rewards. Yeah. Pretty good name, right?

As with everything we do in this game, there is a good reason. So let's start with why we're changing things: feedback from our players! (But also feedback from a publisher, and observations from watching people play, but hey, ultimately it's players giving feedback directly or indirectly.)

There are four (4!) issues we're trying to address with this new feature:

  1. The first decision in a run is too complex: you need to choose which buildings you'll want to have for your next game, before you've even got to play with your current buildings.

  2. During a game, there are no decisions impacting the overall run. The game lacks interactions between the micro loop (building a city to secure a region) and the macro loop (improving your tools to reach the last region of the continent).

  3. There are not enough variations between games, the board is always the same (with the small exception of lighthouses).

  4. The game lacks an experience, a moment that is truly exciting, memorable, something that players can enjoy and tell their friends about.


The first thing we decided to add was ruins. It was supposed to be part of a bigger feature that we call "the terrain", where we wanted to add various types of tiles throughout the board, like lakes, mountains, swamps, things like that. To reduce the scope, we chose to add only the ruins for now.

So how do they work? Ruins are scattered on the board when you start a game, but you'll never find one directly next to your Aerostation. Once you've brought light to a tile containing a ruin, you can explore it. It costs you two elders (the action points) and gives you a choice between three options:

The first option unlocks a new building. It is added to your roster of buildings, and you will be able to find it in your market (if you have reached the level of that building). You'll keep it until the end of the run, so it will be available to you in your next games, until you either defeat the final region or lose.

Sometimes you might not be interested in that new building, or you might have more short-term objectives that require resources. The other two options cover that: you can choose to not gain the new building (and you won't have another chance to unlock it during this run) and instead gain materials immediately, or science over the next three turns.

You never know what you'll find in a ruin until you explore it, so there's a bit of mystery there. You also have to make a decision between short-term goals and long-term development, improving the interactions between the micro and macro loops. Finally, we're hoping that exploring a ruin will be a good step towards making the game more exciting and memorable — but we'll need to add more juicy effects to truly make it sensational.


Adding this ruins feature, and especially the part that gives the player new buildings, means that we need to re-think our rewards. Which is a good thing, because as I said in the intro, the first decision of the game (choosing between three packs of buildings) was too hard. It was also confusing players that they did not get to play the buildings they had chosen immediately.

So we've completely changed the rewards you get after securing a region: no more buildings — you need to explore ruins to get them. Instead, you get a choice between replacing a card from your starting deck with a new, better one (chosen amongst three options), or permanently removing a building from your roster — meaning it will not show up in your market anymore. This makes the first decision much simpler. We intend to add more types of rewards later on, but we wanted to playtest this simple version first.

And that's all I have for today! We're hoping this new feature will make Dawnmaker more enjoyable, and we'll be back soon with more good stuff for our game.

This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill out the form. You'll receive regular stories about how we're making this game, the latest news of its development, as well as exclusive access to Dawnmaker's alpha version!

Join our community!

Kartikaya Gupta: AI and capitalism

(No this is not a post about commercialization of AI)

It occurred to me today that there is an interesting parallel between AI and capitalism. They're both tools that can be used to accomplish things very effectively. And they both have "alignment" issues.

Capitalism provides incentives and a free market, and that can result in extremely efficient action because people are trying to maximize profit. However, left unchecked, this can produce all kinds of negative externalities which are not really what we want. This is the alignment issue - we need to be diligent in "aligning" capitalism so that it incentivizes and produces outcomes that are actually what we want. Primarily governments do this with the use of taxes and subsidies - make bad behaviour (produces negative externalities) more expensive, and make good behaviour (produces positive externalities) less expensive. In my view, this is the primary function of a government (at least in a capitalist society). However, I think most people would agree we haven't yet perfected this part of things, and we still suffer from all kinds of negative externalities.

AI/AGI is very similar - it can produce extremely efficient action, but suffers from an alignment problem in that what it does may not be exactly what we want it to do. Lots has already been written about the AI alignment problem.

My main point here is that we, as a society, haven't even figured out how to align capitalism reliably. Which makes me a bit worried about how we're going to handle AGI. We should probably think hard about why we haven't been able to align capitalism perfectly (e.g. maybe because not all subsets of people want the same thing?) and see if those reasons carry over to AGI as well.

The Mozilla Blog: Seven tips to make holiday shopping easier. Really.

Remember when the holidays meant waking up early and running downstairs to open presents? When the only gift list you had to worry about was the one of gifts you were asking for? Last year, it was estimated that adults bought an average of nine presents and spent about $1,500 on holiday gifts. Buying gifts doesn’t just cost a lot of money; it also takes a lot of time — finding the right gift, doing the research to make sure the item you’re buying is a quality product, and figuring out how to get the best deal online without sacrificing all of your personal data is a lot of work. While Mozilla can’t do anything about inflation and rising gift prices, we can help make the process of holiday shopping a bit more enjoyable and help you reclaim some of your time this season. Below are seven tips to help make the holiday season just a bit easier this year.

Hey, once you get your holiday shopping for everyone else done, maybe you will have some extra time to gift yourself exactly what you want this holiday season (you know, in case no one else gets it right).

1. See which reviews are real before you purchase

Almost everyone has been there. You do a lot of research, you read all the comments, you feel like you’ve done everything you can and then when your purchase arrives you realize the product reviews really led you astray. In fact, 82% of consumers have come across a fake customer review in the last year.

Instead of taking those four- and five-star ratings at face value, use the Fakespot add-on to more easily filter out fake reviews, unreliable sellers and counterfeit products. Not only does it cut down on the time you have to spend researching and trying to figure out which reviews are worth your attention, but it hopefully also reduces the number of returns you have to make.

If you want more information, you can also use the Fakespot Analyzer Bar or hit the “analyze reviews” button on the product page of many online retailers so you can make a more informed choice. With this tool you can read the “Pros and Cons” (written by Fakespot’s AI), “Review Highlights,” “Fake Review Analysis,” “Helpful Insights,” “Review Count,” and “Price History.”

2. Shop on the go with Firefox

In a utopian world, things would slow down a bit at the end of the year, so we would have more time for all the holiday tasks, from baking to cleaning to shopping, involved in creating some holiday magic. But alas, the end of the year normally just means that you have to squeeze in the holiday chores on top of everything else. But if you are researching gifts with Firefox, you can share your tabs and history between mobile and desktop.

So if you start researching the best pair of hiking boots for someone on your desktop during a meeting and then need to go run errands, you can use Firefox’s Send Tab feature to send pages from your computer to your other devices, like your iPhone, iPad or Android device. While waiting in line for coffee, you can pull out your phone and pick up exactly where you left off on your desktop.

3. Check all gifts with our *Privacy Not Included holiday guide

It’s estimated almost 199 million adults in the United States will buy tech products or services as a gift during the holiday season. That is a lot of opportunities to accidentally gift a device that doesn’t keep your loved ones’ privacy safe. Before you purchase a new tech product, take a look at our 2023 holiday buyer’s guide. Mozilla’s team looked at over 100 connected products and spent almost a thousand hours reading terms and conditions and doing research to find out which products are safe and which should make everyone wary to bring home. From video doorbells to Bluetooth trackers to any manner of robot, we have the best and worst picks and details on each product so you can shop informed this holiday season.

4. Organize all of your gift ideas in Pocket

How often do you see a product online and think, “oh man, that would be a perfect present for someone, I should remember this later”? If you’re like me, it happens quite often, and, spoiler alert, you don’t remember that gift later. This is where Pocket can help: you can save all the items you see online in real time and come back to them later, even if you don’t have service. You can even customize the tag with the name of the recipient. Whatever keeps you organized and going!

The real trick to keeping your sanity over the holiday season? Treating it as something you train for all year long. Instead of waiting to find all of the gifts you need in the last six weeks of the year, you can use Pocket to save the perfect gifts you see all year long, and come holiday season, all of your ideas will be saved and organized in one place.

How to add a Pocket tag from Firefox: Hit the “Save to Pocket” button in the top right corner of Firefox, then tag your page and Save it. Later, when you open your Pocket app, your Saves will be there, tagged and organized.

5. Avoid shopping spam with Firefox Relay

You know that you should be careful about whom you give your contact information to, but the reality is that most online retailers require an email address to check out. That isn’t necessarily a bad thing: it gives you an easy place to track receipts for post-holiday returns and package tracking information. But you can make it safer by using Firefox Relay to create email aliases to provide to retailers when you check out online.

Email aliases allow you to mask your email while still receiving important updates and shipping notifications directly to your inbox. And if you continue receiving emails long after you’ve unsubscribed, you can turn off that alias so you only receive the emails you want.

Need to sign up for text message alerts to get that extra 20% off? You can also use Firefox Relay phone masking to help protect your privacy and personal information while getting a discount on that perfect gift.

You can get started with Firefox Relay here.

6. Check out faster with secure credit card autofill

Once you have found the perfect gift for everyone on your list and it’s time to pay, Firefox’s credit card autofill means you don’t have to worry about your privacy or spend time typing in your number manually over and over again.

People in Canada, Germany, France, the U.K. and the U.S. can use Firefox’s autofill feature to automatically fill in their saved credit card information in web forms, while still helping to protect their privacy. As a precaution, Firefox does not save your CVV number, and you even have the option to password-protect your credit card data as an additional layer of security.

7. Feeling good about supporting a mission-oriented company

The holidays are not just a time for buying gifts; they are also a time for giving back, shopping at local and small businesses, and coming together to support causes and communities you care about. When you use Mozilla products, you’re not only helping to protect your data, you’re supporting a mission-oriented organization that puts people before profits.

The post Seven tips to make holiday shopping easier. Really. appeared first on The Mozilla Blog.

Firefox Developer Experience: Firefox DevTools Newsletter — 120

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 120 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

Thanks to Calum Smith, who fixed the Inspector to properly display CSS Color 4 formats (for example oklch) (#1859079).

Firefox DevTools Rules view displaying a rule with a color property whose value is `oklch(61.33% 0.264 354.18)`. A fuchsia preview icon is displayed before the value.

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Reliable Debugger

This release comes with a few Debugger fixes and tweaks. First, and most importantly, we found some tricky issues with the implementation of the variable preview and managed to fix them (#1815472). So if you ever stopped using the preview because you felt it was unreliable, give it another go!

We were also aware of buggy behavior when pausing the debugger on the unload event, and are happy to report that this shouldn’t be broken anymore (#1806796).

When a website uses workers, we display them in the Threads panel of the debugger. It might happen that a thread’s execution is paused (because of a breakpoint, a debugger statement or an exception if “Pause on Exception” is enabled), but the others are not. We used to show only a very light pause icon next to the paused thread, but this could easily be missed, so we changed it to a bolder style with an explicit “paused” label (#1838393).

Firefox Debugger close-up of the threads panel. Four threads are displayed, the Main Thread, and then 3 workers. The Main Thread is selected, the second thread has a yellow text and brown background and has a "paused" suffix. The other 2 worker threads are not paused, and are styled as regular threads.

Finally, the Wasm-GC proposal adds many new types and opcodes that can show up in the wasm binary, so we added support for them in the debugger (#1855254)

Pretty Style

The Style Editor makes it easy to read and modify stylesheets. When it detects a minified file, it will automatically pretty print the stylesheet content. There are cases where we fail to detect that a stylesheet isn’t legible, so we added a button in the bottom left corner, like in the Debugger, that will prettify the file when clicked (#1832213).

Firefox Style Editor. A CSS file is selected, and its content is displayed on the right-hand side. At the bottom of the screen, a pink arrow points to the pretty-print button, located in the bottom left corner of the CSS file editor.

Improving DevTools Accessibility

A few months ago, the accessibility team at Mozilla ran an audit of the most used panels in DevTools and handed us a list of the accessibility issues they found. We’re now in the process of fixing those issues, which will span multiple Firefox releases and will hopefully make our tools more accessible. We started with the low-hanging fruit, which fell into a few categories.

A combination of all of those went into fixing issues on the event listeners tooltip (#1843328, #1843330, #1843331), which should be properly keyboard and screen reader accessible now.

This is just the beginning of the project and more fixes and changes are coming in future releases. If you’re curious, check the full list of bugs we plan to fix during this project.

Thank you for reading this and using our tools, see you next month for a new round of exciting updates 🙂

This Week In Rust: This Week in Rust 522

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is rocket, an opinionated web framework that aims to be really ergonomic while still being fast.

Thanks to David Mason for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

369 pull requests were merged in the last week

Rust Compiler Performance Triage

Pretty quiet week, with only a small number of statistically significant changes landing.

Triage done by @simulacrum. Revision range: 173b6e68..4f3da90

1 Regression, 1 Improvement, 1 Mixed; 0 of them in rollups. 60 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-11-22 and 2023-12-20 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If you require it, measure it. That's the simple answer. Everything else is guesswork.

Johannes Lade on rust-users

Thanks to Michael Bryan for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me.

But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post; while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance.

From CVS to DVCS

From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember.

In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to Bitkeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one).

Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems.

Interestingly enough, several other DVCSes existed:

  • SVK, a DVCS built on top of Subversion, allowing users to create local (offline) branches of remote Subversion repositories. It was also known for its powerful merging capabilities. I picked it at some point for my Debian work, mainly because I needed to interact with Subversion repositories.
  • Arch (tla), later known as GNU arch. From what I remember, it was awful to use. You think Git is complex or confusing? Arch was way worse. It was forked as "Bazaar", but the fork was abandoned in favor of "Bazaar-NG", now known as "Bazaar" or "bzr", a much more user-friendly DVCS. The first release of Bzr actually precedes Git's by two weeks. I guess it was too new to be considered by Linus Torvalds for the Linux kernel needs.
  • Monotone, which I don't know much about, but it was mentioned by Linus Torvalds two days before the initial commit of Git. As far as I know, it was too slow for the Linux kernel's needs. I'll note in passing that Monotone is the creation of Graydon Hoare, who also created Rust.
  • Darcs, with its patch-based model, rather than the common snapshot-based model, allowed more flexible management of changes. This approach came, however, at the expense of performance.

In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner.

Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs.

Personal preferences

Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on Alioth, a FusionForge-based collaboration system for Debian developers, similar to SourceForge and the ancestor of Salsa. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well.

I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for).

Other people had also similarly created their own mirrors, with hg-git or with other tools. But none of them were "compatible": their commit hashes were different. Hg-git, for instance, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those others have not been updated in more than a decade.
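To see why even a single extra byte of metadata breaks compatibility between mirrors: a Git commit id is just the SHA-1 of the serialized commit object, so the "--HG--" blocks that hg-git appended to commit messages necessarily produced different hashes. A minimal sketch, where the tree id and author are made-up illustration values:

```python
import hashlib

def git_commit_id(tree, parents, author, committer, message):
    """Compute the SHA-1 a Git commit object would get.

    Git hashes the header "commit <size>\\0" followed by the object
    body, so any extra metadata in the message changes the id.
    """
    body = f"tree {tree}\n"
    for p in parents:
        body += f"parent {p}\n"
    body += f"author {author}\ncommitter {committer}\n\n{message}"
    data = body.encode()
    return hashlib.sha1(b"commit %d\0" % len(data) + data).hexdigest()

tree = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"  # Git's empty tree
author = "Ann Author <ann@example.com> 1700000000 +0000"

plain = git_commit_id(tree, [], author, author, "Bug 12345 - Fix\n")
tagged = git_commit_id(tree, [], author, author,
                       "Bug 12345 - Fix\n\n--HG--\nextra : rebase_source : abc\n")
print(plain == tagged)  # False: one extra metadata block, a different hash
```

Two converters only produce interchangeable mirrors if they serialize every commit byte-for-byte identically, which is exactly what those tools didn't do.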

I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla.

Git incursion

In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, to incorporate changes in a local Git repository into the local Mercurial copy, to then send to Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot of people, probably a few handfuls. On my end, I was still keeping up with Mercurial.

I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side.

Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves. I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to incrementally update. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone.
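The output format such a script targets still exists today: git-fast-import consumes a plain text stream of commit commands. Here is a hypothetical sketch of that stream, with invented branch, author and file data; a real converter would read paths and contents from the Mercurial changelog and manifest instead of a hard-coded dict:

```python
def fast_import_commit(branch, author, message, files):
    """Yield the lines of a git-fast-import 'commit' command.

    `files` maps paths to byte contents; 'inline' data avoids
    emitting separate blob commands for small files.
    """
    msg = message.encode()
    yield f"commit refs/heads/{branch}".encode()
    yield b"committer " + author.encode()
    yield b"data %d" % len(msg)
    yield msg
    for path, content in files.items():
        yield f"M 100644 inline {path}".encode()
        yield b"data %d" % len(content)
        yield content
    yield b""

stream = b"\n".join(fast_import_commit(
    "hg-mirror",
    "Ann Author <ann@example.com> 1700000000 +0000",
    "Imported from Mercurial",
    {"README": b"hello\n"},
))
# Piping `stream` to `git fast-import` inside an empty repository
# would create one commit on refs/heads/hg-mirror.
```

The appeal of this approach is that the converter never needs a Git checkout at all; it just streams text, and git-fast-import does the object writing.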

Boot to Git

In the same time frame, Mozilla ventured into the Mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013.

You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured into setting up its own Git server for greater control and reliability.

But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server.

Git slowly creeping in Firefox build tooling

Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer.

The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to current, from September 2012.

Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.
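The kind of check involved is simple enough to sketch. This is a hypothetical illustration of detecting which VCS holds a checkout and piping its diff to clang-format-diff, not the actual mach code:

```python
import os
import subprocess

def vcs_diff_command(topsrcdir):
    """Guess which VCS holds the checkout and return its diff command.

    A hypothetical sketch of the detection tooling like `mach
    clang-format` has to do; the real mach implementation differs.
    """
    if os.path.isdir(os.path.join(topsrcdir, ".hg")):
        return ["hg", "diff", "-U0"]
    # .git may be a file rather than a directory (worktrees, submodules)
    if os.path.exists(os.path.join(topsrcdir, ".git")):
        return ["git", "diff", "-U0", "HEAD"]
    raise RuntimeError("not a Mercurial or Git checkout")

def clang_format_changed(topsrcdir):
    """Run the VCS diff and feed it to clang-format-diff."""
    diff = subprocess.run(vcs_diff_command(topsrcdir), cwd=topsrcdir,
                          capture_output=True, check=True).stdout
    subprocess.run(["clang-format-diff", "-p1", "-i"],
                   input=diff, cwd=topsrcdir, check=True)
```

Every new tool that shelled out to `hg` this way needed an equivalent `git` branch, which is why Git support trickled in one bug at a time.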

A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016.

From gecko-dev to git-cinnabar

Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands.

Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).

Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence.

Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).

One Firefox repository to rule them all

For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them.

And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though.

I can only suppose that the way Mercurial branches work was not deemed practical. It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.

In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary.

7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository.

Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta or some other branch, depending on which one was last updated.

Hosting is simple, right?

Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones).

And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.

Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, and it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication.

Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).

Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.
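To illustrate the idea, here is a toy model of that have/want negotiation. This is only a sketch of the concept, not Mercurial's or Git's actual wire protocol:

```python
# Toy model of the "have/want" negotiation described above.
# This is an illustration of the concept, not either tool's wire protocol.

def negotiate(parents, client_haves, client_wants):
    """Return the changesets the server must bundle: everything
    reachable from the wants that isn't reachable from the haves."""
    def reachable(heads):
        seen, stack = set(), list(heads)
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(parents.get(node, ()))
        return seen

    return reachable(client_wants) - reachable(client_haves)

# History as a node -> parents mapping: a <- b <- c <- d
history = {"a": [], "b": ["a"], "c": ["b"], "d": ["c"]}
missing = negotiate(history, client_haves={"b"}, client_wants={"d"})
# The client already has a and b, so only c and d need bundling.
```

The expensive part in real life is computing that reachability difference over hundreds of thousands of changesets, for every pull, which is why clones are the worst case.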

"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU-intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We had actually laid the groundwork to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way.
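For the curious, the server side of clonebundles mostly amounts to pointing clients at pre-generated bundles via a manifest. A minimal sketch (the URLs are hypothetical; the general line format is bundle URL followed by attributes, such as BUNDLESPEC, that clients use to pick a suitable bundle):

```
https://cdn.example.com/mozilla-central.zstd-v2.hg BUNDLESPEC=zstd-v2 cdn=true
https://s3.example.com/mozilla-central.gzip-v2.hg BUNDLESPEC=gzip-v2
```

Clients fetch one of these static files over plain HTTP, then only negotiate the small delta since the bundle was generated, which is what takes the CPU-intensive bulk of the clone off the server.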

And these endpoints are regularly abused, causing extra load on your servers (yes, plural, because of course a single server won't handle the load for the number of users of your big repositories). And because your endpoints are abused, you have to close some of them. And I'm not even mentioning the Try repository with its tens of thousands of heads, which brings its own set of problems (and it would have even more heads if we didn't fake-merge them once in a while).

Of course, all the above applies to Git (which only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and it was shut down in 2016.

The growing difficulty of maintaining the status quo

Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin.

Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things).

On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by way of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option.

But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one.

Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago; I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).

I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem.

Taking the leap

I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective.

Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.

It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21-year-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.

You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial went their own way, it's hard to ignore that the landscape has evolved.

And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).

Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git. I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git).

Heck, even Microsoft moved to Git!

With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.

But... GitHub?

I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and Git has been so far.

After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).

Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that).

I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, and literal tens of millions of repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).

"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion.

At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.

So, what's next?

The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not.

While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now:

  • Make sure git is installed. Chances are you already have it.

  • Install git-cinnabar where mach bootstrap would install it.

    $ mkdir -p ~/.mozbuild/git-cinnabar
    $ cd ~/.mozbuild/git-cinnabar
    $ curl -sOL
    $ python3 && rm
  • Add git-cinnabar to your PATH. Make sure to also set that wherever you keep your PATH up-to-date (.bashrc or wherever else).

    $ PATH=$PATH:$HOME/.mozbuild/git-cinnabar
  • Enter your mozilla-central or mozilla-unified Mercurial working copy; we'll do an in-place conversion, so that you don't need to move your mozconfigs, objdirs, and whatnot.

  • Initialize the git repository from GitHub.

    $ git init
    $ git remote add origin
    $ git remote update origin
  • Switch to a Mercurial remote.

    $ git remote set-url origin hg::
    $ git config --local remote.origin.cinnabar-refs bookmarks
    $ git remote update origin --prune
  • Fetch your local Mercurial heads.

    $ git -c cinnabar.refs=heads fetch hg::$PWD refs/heads/default/*:refs/heads/hg/*

    This will create a bunch of hg/<sha1> local branches, not all relevant to you (some come from old branches on mozilla-central). Note that if you're using Mercurial MQ, this will not pull your queues, as they don't exist as heads in the Mercurial repo. You'd need to apply your queues one by one and run the command above for each of them.
    Or, if you have bookmarks for your local Mercurial work, you can use this instead:

    $ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/*:refs/heads/hg/*

    This will create hg/<bookmark_name> branches.

  • Now, make git know what commit your working tree is on.

    $ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))

    This will take a little moment because Git is going to scan all the files in the tree for the first time. On the other hand, it won't touch their content or timestamps, so if you had a build around, it will still be valid, and mach build won't rebuild anything it doesn't have to.

As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:

$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)

At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here; it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post.

If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on, and we can collaboratively iterate on them.

Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow switching from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).

What about git-cinnabar?

With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. That legitimately raises the question of whether it will still be maintained.

I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches).
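For reference, git-cinnabar gates experimental features behind a configuration option; assuming the `cinnabar.experiments` mechanism, enabling the merge-push support mentioned above would look roughly like this in a `.gitconfig` fragment (sketch, not a recommendation):

```
# Hypothetical fragment: opt in to git-cinnabar's experimental
# merge-push support via its experiments mechanism.
[cinnabar]
	experiments = merge
```

Finalizing that support would mean the flag is no longer needed.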

Git-cinnabar started as a Python script; it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day-to-day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, that's also why git-cinnabar has been relatively stagnant feature-wise.

So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either.

Final words

That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)

So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.

To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire.

And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. But hey, that's the tech cycle for you.

The Mozilla BlogMozilla Ventures Invests in Sendmarc, Global Leader in Email and Domain Security

The investment will help Sendmarc provide essential DMARC protection to thousands of customers around the world

(TUESDAY, NOVEMBER 21, 2023) – Today, Mozilla Ventures is announcing an investment in Sendmarc, an industry leader in email and domain cybersecurity.

Sendmarc specializes in preventing email impersonation (i.e. phishing and spoofing), the root cause of more than 90% of cybercrimes. The company’s cutting-edge tools and practices enable organizations to protect their email domains and closely monitor for any attempted abuse. Sendmarc helps customers deploy DMARC, the same email authentication protocol used by NASA and other parts of the U.S. government.
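For context, deploying DMARC amounts to publishing a DNS TXT record under the `_dmarc` label of the protected domain. A minimal sketch, with a hypothetical domain and reporting mailbox:

```
; DNS TXT record (hypothetical domain and reporting address)
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Here `p=reject` tells receiving mail servers to refuse messages that fail authentication, and `rua` is where aggregate abuse reports are sent.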

Sendmarc provides email and domain security protection for thousands of companies across five continents. Customers include Fortune 500 companies, stock exchanges, banks, insurance companies, tech companies, retailers, municipalities, law firms, and law enforcement agencies. Organizations can determine if their domain is vulnerable to impersonation by using Sendmarc’s free Know Your Score tool.

Mozilla Ventures is a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction. The fund’s mission is largely modeled on Mozilla’s “Creating Trustworthy AI” whitepaper. 

Mozilla Ventures isn’t publicizing its investment amount at this time. Sendmarc has previously received investment from Atlantica Ventures, Allan Gray E-Squared Ventures, Fireball Capital, Endeavor Catalyst, 4Di Capital, Kalon Venture Partners, Endeavor Harvest, and Alpha Private Capital.

Says Sacha Matulovich, Co-Founder and Chief Strategy Officer of Sendmarc: “Responsible domain owners must use DMARC to protect their customers, their suppliers, their employees and the public from email impersonation, the most prevalent form of cybercrime. Mozilla’s investment in Sendmarc helps us make this a reality — and sends a strong message to the entire ecosystem of email administrators, MSPs and MSSPs about the importance of email and domain cybersecurity.” 

Says Mohamed Nanabhay, Managing Partner of Mozilla Ventures: “Sendmarc and Mozilla Ventures both see email and domain security as a cornerstone of a healthy internet. With our investment, we’re not just supporting Sendmarc — we’re supporting an internet that’s more resilient to cybercrime.”  

Sendmarc will join a cohort of other mission driven startups that Mozilla Ventures has invested in, including Fiddler, Array Insights, heylogin, Lelapa AI, Themis AI, Block Party, and Rodeo. Mozilla Ventures launched in 2022 with an initial $35 million in funding.

The post Mozilla Ventures Invests in Sendmarc, Global Leader in Email and Domain Security appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter — 120

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 120 release cycle.


Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

Improved RemoteValue serialization

For commands like script.evaluate and script.callFunction, the serialization of remote values has been improved to now also include a proper serialization of the JavaScript types Proxy and Generator.
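As a sketch (command and response abbreviated, with made-up ids and handles), evaluating a generator expression now yields a RemoteValue that reports the actual JavaScript type:

```javascript
// Sketch of a script.evaluate exchange returning a generator.
// Shapes are abbreviated; the id, context, and handle values are made up.
const command = {
  id: 1,
  method: "script.evaluate",
  params: {
    expression: "(function* () { yield 1; })()",
    target: { context: "ctx-1" },
    awaitPromise: false,
  },
};

// With the improved serialization, the result is typed as a generator
// rather than a generic object:
const response = {
  id: 1,
  type: "success",
  result: { type: "generator", handle: "h-1" },
};
```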

HTTP Authentication

In preparation for supporting the network.authRequired event, the authChallenges field was added to the network.responseStarted and network.responseCompleted events. This field holds a list of the authentication challenges present in the HTTP headers returned by the server for 401 responses.
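Assuming the shapes defined by the BiDi network module (all values below are made up), such an event might look like:

```javascript
// Sketch of a network.responseStarted event for a 401 response,
// showing the new authChallenges field. Values are made up.
const event = {
  method: "network.responseStarted",
  params: {
    context: "ctx-1",
    request: { request: "req-1", url: "https://example.com/private" },
    response: {
      url: "https://example.com/private",
      status: 401,
      authChallenges: [{ scheme: "Basic", realm: "private area" }],
    },
  },
};

// A client can use the field to anticipate a network.authRequired event
// once support for that event lands:
const expectsAuth = (event.params.response.authChallenges || []).length > 0;
```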

Mozilla Privacy BlogGlobal Privacy Control Empowers Individuals to Limit Privacy-Invasive Tracking

Global Privacy Control (GPC) is a proposed standard by PrivacyCG that aims to make privacy more accessible to everyone. Available now in Firefox version 120 and soon to be featured in Firefox for Android version 122, a new setting (in Preferences → Privacy & Security) has been introduced that allows users to enable GPC. With this opt-in feature, Firefox, on behalf of our users, can signal to websites to not sell or share user data with third parties.
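Under the GPC proposal, the signal reaches websites as a `Sec-GPC: 1` request header (and as `navigator.globalPrivacyControl` in page scripts). A minimal sketch of how a hypothetical server-side handler might honor it:

```javascript
// Sketch of server-side GPC handling. The "Sec-GPC: 1" header comes
// from the GPC proposal; this handler function is hypothetical.
function honorsOptOut(headers) {
  // GPC defines "Sec-GPC: 1" as the do-not-sell-or-share signal.
  return headers["sec-gpc"] === "1";
}

// In page scripts, the same preference is exposed as
// navigator.globalPrivacyControl === true.
const optedOut = honorsOptOut({ "sec-gpc": "1" });
```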

Mozilla has long invested in technologies to protect the privacy of Firefox users and is a founding member of the GPC standards effort. In 2021, Mozilla took initiative to begin experimenting with implementing Global Privacy Control in Firefox Nightly. Since then, we’ve made a commitment to provide Firefox users with the means to refuse targeted advertising by providing a simpler and more accessible means of signaling privacy preferences to businesses or websites.

Firefox users can now easily enable GPC preferences directly within the privacy & security section in their Firefox settings – expressing user privacy preferences to websites is now as simple as checking off a box! In addition, we’ve also ensured that GPC is enabled in private browsing mode by default.

Enable Global Privacy Control on Firefox with three easy steps:

  1. In the Menu bar at the top of the screen, click Firefox and select Preferences.
  2. In the Privacy & Security panel, scroll down to ‘Website Privacy Preferences’ and click the ‘Tell websites not to sell or share my data’ option.
  3. Close the Settings page. Any changes you’ve made will automatically be saved.

Global Privacy Control is considered legally enforceable in some jurisdictions, such as California via the California Consumer Protection Act (CCPA), and can indicate an opt-out of targeted advertising or elicit a general request to limit the sale or sharing of user personal data in that jurisdiction. We have previously advocated for legislation that would require websites in more jurisdictions to recognize universal opt-out signals as a valid objection to data collection or sharing.

Without GPC, users are forced to repeatedly re-communicate their objection to tracking, leading to consent fatigue even in jurisdictions where users do have basic rights around their data. Once data is collected, most people have no idea how it is stored, shared, or even sold. GPC aims to make it easier for people to express their privacy preferences and exercise their rights.

The post Global Privacy Control Empowers Individuals to Limit Privacy-Invasive Tracking appeared first on Open Policy & Advocacy.

IRL (podcast)Lend Me Your Voice

Big tech’s power over language, means power over people. Bridget Todd talks to AI community leaders paving the way for open voice tech in their own languages and dialects.

In this episode: AI builders and researchers in the US, Kenya and New Zealand who say the languages computers learn to recognize today will be the ones that survive tomorrow — as long as communities and local startups can defend their data rights from big AI companies.

Halcyon Lawrence was a researcher of information design at Towson University in Maryland (via Trinidad and Tobago) who did everything Alexa told her to for a year.*

Keoni Mahelona is a leader of Indigenous data rights and chief technology officer of Te Hiku Media, a Māori community media network with 21 local radio stations in New Zealand. 

Kathleen Siminyu is an AI grassroots community leader in Kenya and a machine learning fellow with Mozilla’s Common Voice working on Kiswahili voice projects. 

IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.

*Sadly, following the recording of this episode, Dr. Halcyon Lawrence passed away. We are glad to have met her and pay tribute to her legacy as a researcher and educator. Thank you, Halcyon. 

Martin ThompsonThoughts on TAG Design Reviews

Before I start on my thoughts, if you work for a W3C member organization, please head to the 2023 TAG Election page. Voting is open until 2023-12-14.

If you are considering how you might like to rank me when voting, read on. I can’t promise that this post will provide much additional context, but it might.

The W3C TAG is a bit of a strange institution. The TAG occupies a position of some privilege due to its standing within the W3C and the long-standing participation and sponsorship of Sir Tim Berners-Lee.

The TAG also has a history marked by notable documents produced under its letterhead. The TAG, through its findings, has been responsible for recognizing and analyzing certain key trends in the evolution of the Web, providing some key pieces of architectural guidance. The TAG also publishes documents with general guidance for people seeking to improve the Web, like design principles and a security and privacy questionnaire.

On a day-to-day basis, however, the TAG provides hands-on guidance to people looking to add new capabilities to the Web, primarily through design reviews. Records of early reviews trace back to 2013 in the TAG repository, but the practice has deeper roots.

The modern review record starts with a meager 5 reviews in the latter half of 2013. More recently, the TAG closed a total of 85 design reviews in 2022[1]. Already, in 2023, there have been 106 design review requests opened.

The function of the TAG as a body primarily focused on reviewing new Web APIs is one that took a while to settle. A key driver of this increase in volume has clearly been the inclusion of TAG review as a formal precondition for shipping Web-visible changes in the Chromium project. Chromium consequently drives a lot of this review load with 73 of the 106 new requests that arrived in 2023 clearly marked as originating from “Google”, “Chromium”, or “Microsoft” as a primary driver or funder of the work[2]. That is nearly 70% of the total review load attributed to Chromium. This is in addition to those design reviews that were initiated on behalf of a W3C group in which Chromium contributors were instrumental in the work.

Obviously, at a rate of more than 2 reviews a week, that’s a fairly major outlay in terms of time for the TAG. Proposals vary in size, but some of them are quite substantial. A good review requires reading lengthy explainers and specifications, filling gaps in understanding by talking to different people, considering alternative options, and building an understanding of the broader context. A proper review for a more substantial proposal can take weeks or even months to research, discuss, and write up.

The TAG is expanding in size this year. An increase to 12 members (8 elected, 4 appointed) does give the TAG more capacity, albeit with added coordination costs reducing efficiency. This is predicated on the idea that reviews are the most important function of the TAG. If that is the case, then adding more capacity seems like a reasonable reaction.

That an action is superficially reasonable is not the standard to apply when making such a decision. As with a design review, an examination of the alternatives is generally illuminating. Once those alternatives are understood, we might again conclude that the proposal on the table is the best possible path, but we do so with a more complete understanding of what opportunities are lost or foreclosed as a result. The AB minutes of the decision do not reflect that process, but then they are only responding to a request from the TAG.

There are several other equally reasonable ways of dealing with increased workload. If reviews are taking too long, it might be possible to find ways to make reviewing easier or faster. Perhaps the TAG has exhausted their options in that area already. Maybe they have looked at selectively rejecting more design review requests. Maybe they have considered finding ways to offload review work onto other bodies, like PING.

From my limited perspective, it is not clear that these avenues of investigation have been fully explored. For instance, I have good experience with the effective directorate system that the IESG uses to greatly alleviate its workload, but I see no evidence of an effort to delegate in a similar fashion.

TAG members each volunteer time from their ordinarily busy day jobs, so any excess load spent on reviewing is time that is not available for higher functions. In addition to review load, the TAG has a role in W3C Councils and other critical procedural functions in the W3C process. Those tasks are generally not easily delegated or dealt with by a subset of TAG members.

I am supportive of efforts to better use the TAG for key procedural functions, like the W3C Council. Those functions make the TAG more important in a good way. The W3C needs people in the role who have good judgment and the experience to inform that judgment.

Along with that, it is important to reserve some space for the TAG to provide technical leadership for the W3C and the Web community as a whole. After time spent on the procedural functions demanded by the process, design reviews have the potential to completely drain any time TAG members have to dedicate to the role, leaving no spare capacity. Ideally, there would be some remaining space for the careful and thoughtful work that leadership demands.

Effective technical leadership depends somewhat on the TAG being exposed to how the platform is evolving. Reviews are a great way to gain some of that exposure, but that does not mean that the TAG needs to review every single proposal.

I don’t have a specific plan yet. If appointed, it will take some time to understand what the role is and what options are available. I consider myself quite capable of performing that sort of review and I expect it would be easy to settle into that function. But I have no intent of letting design reviews dominate my time; the TAG – and the Web – deserves better.

  1. A note on the numbers here: The TAG has a template that they use for design reviews and I have only selected reviews that include the string “requesting a TAG review”, as present in that template. There were other issues closed in this period, some of which are probably also pre-template design reviews, but I haven’t carefully reviewed those. ↩︎

  2. For posterity, this is the search I used: opened_since(2023-01-01) not(opened_since(2024-01-01)) body("requesting a TAG review") body("(?:driving the (?:design|specification)|funded by):\\s+\\[?(?:Microsoft|Google|Chromium)")), using a tool I built in combination with the excellent GitHub issue archival tool that Mike Bishop wrote. ↩︎

Mozilla ThunderbirdJoin Us For November 2023 Thunderbird Community Office Hours

The thunderbird logo (a blue elemental bird curled up and protecting an inner icon shaped like a heart, formed from two hands clasping together)

Please join us for our upcoming Thunderbird community office hours all about Cards View on November 29, 2023 at 18:00 UTC! (Keep reading for joining information.)

A New Era

We are trying out a new format in this edition of the office hours. Previously, we had several sessions with no agenda. This was intended to provide an open forum where anyone could come and ask questions, at a few different times. Since we have seen low engagement, we’re trying something new.

The new office hours format will feature a key Thunderbird guest to talk about the area they specialize in. Going forward, we’ll hold just one monthly session at a time we hope is convenient for the majority of our users around the globe. The session will be recorded and shared with the community afterwards. On air, we will do our best to answer any questions you have. Everyone is welcome to attend and ask questions live. If you cannot make it in person, we encourage you to submit any questions you have in advance by sending them to

Topic: Cards View 

With the substantial UI changes that the new cards view in Thunderbird 115 “Supernova” offers, we thought we would bring in an expert on cards view, Micah Ilbery, to discuss the improvements with you. Micah has been a designer on the Thunderbird team and played a major role in the beautiful cards view we have today. So if you have questions or helpful feedback you would like to share face-to-face with Micah, this is your opportunity! If you can’t make it, please submit any questions you have to the above email address and we will ask them on air for you.

(You can see where Cards View is headed next by looking here.)

Zoom Information

Direct URL To Join:
Meeting ID: 939 2505 4689
Password: 993267

Dial by your location:

  • +1 646 518 9805 US (New York)
  • +1 669 219 2599 US (San Jose)
  • +1 647 558 0588 Canada
  • +33 1 7095 0103 France
  • +49 69 7104 9922 Germany
  • +44 330 088 5830 United Kingdom
  • Find your local number:

The call will be recorded and this post updated with a link to the recording afterwards. 

The post Join Us For November 2023 Thunderbird Community Office Hours appeared first on The Thunderbird Blog.

Firefox NightlyGetting Better Every Day – These Weeks in Firefox: Issue 149


  • The WebExtensions team has been working with other browser vendors to make it easier for extension authors to migrate to Manifest v3
    • To allow extension developers to use the same manifest.json file across multiple browsers, extensions with a manifest.json file that includes both background.service_worker and background.scripts will now load successfully instead of failing at install time: a warning will be logged, and background.scripts will take precedence over the currently unsupported background.service_worker – Bug 1860304
  • The Ubuntu Snap improvements for the browser migration wizard have gotten a green light from QA! It will be riding the trains out in Firefox 120, and hopefully will make it easier for users on Ubuntu to migrate data from other browsers.
  • Some excellent improvements have landed for the Firefox Profiler
  • Nicolas from the DevTools team has been making some great progress on the DevTools Accessibility project (bug, bug, bug, bug). The most recent visible one is that a thick, noticeable focus indicator was added to focusable elements across the toolbox

Friends of the Firefox team


  • Welcome to Nikki Sharpley! Nikki is joining the Information Management team, and will be focusing on Firefox View.

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Ganna
  • Jonas Jenwald [:Snuffleupagus]

New contributors (🌟 = first patch)


Project Updates

Add-ons / Web Extensions

WebExtensions Framework
Addon Manager & about:addons
    • Investigated and fixed failures hit by AddonManager mochitest-browser tests in a11_checks jobs Bug 1859035

Developer Tools

  • Alex added error message when evaluating unknown commands (bug)
  • Alex fixed logging SharedArrayBuffer from Worklets and Workers (bug)
WebDriver BiDi
  • Sasha added support for userActivation parameter in script evaluation (bug)
  • Sasha added support for the origin argument in the browsingContext.captureScreenshot command, allowing a script to, for example, take a screenshot of part of the page that is not visible (bug) …
  • … which also allowed us to remove the now-unnecessary scrollIntoView argument usage in browsingContext.captureScreenshot (bug)
  • Finally, Sasha added a context property on WindowProxy serialized object (bug)
  • Julian fixed an issue where browsingContext.navigate would not return the expected navigation id when wait=none was passed (bug)

ESMification status


Lint, Docs and Workflow

Migration Improvements

  • The last vestiges of the old XUL-based migration dialog have now been removed.
  • In Device Migration land, we’re primed to add helpful email and calendar reminders to our switching-devices SUMO page to make it easier for users to remember to download Firefox on their new devices! Emails are just going through localization, but we aim to have this enabled by the end of the month.

New Tab Page

Performance Tools (aka Firefox Profiler)

  • Test jobs now upload resource use profiles too (similar to build jobs).
    • Example profile of a mochitest job
      • In addition to markers for the names of the tests, there are also markers showing the names of the test folders, making it clear which tests take a long time, when we restart the browser, and whether the time was spent with the CPU busy or idle.
    • Example profile of an xpcshell job
      • This profile makes it easy to see which tests ran in parallel.

Search and Navigation

Enhanced Cross-Platform Suggest, which expands our Firefox Suggest capabilities and brings Suggest to mobile browsers

  • Drew removed adM/wikipedia suggestions from memory. Previously, these suggestions were sticking around even when disabled. 1832198.
  • Drew has added tests and refactors for Firefox Suggest in 1859389 and 1861540.
  • Dao implemented the layout for Firefox Suggest Opt-in Experiment in 1852055.


General improvements
  • Marc Seibert fixed a bug where the container label was not properly displayed in a bookmark result. 1859810.
  • Marc Seibert fixed a bug where part of the URL was crossed out when the security padlock was clicked; this happens when browser.urlbar.trimHttps is enabled. 1860528.
  • Dale and Dao refactored CSS to use `moz-bool-pref` for richSuggestions in 1862704 and 1862930.


Search and SERP (Search Engine Result Page) telemetry
  • Mandy added telemetry to track search service failures. 1849013.
  • James added remote settings listeners for when SearchSERPTelemetry was updated in 1785104.
  • Standard8 extended SERP telemetry config for mobile. 1861676.
  • Standard8 updated Yahoo! Auctions branding. 1861925
  • Stephanie fixed broken marionette tests for search. 1863023


Consolidated Search Configuration
  • This is an initiative that consolidates search engine configurations across our desktop and mobile browsers
  • Mandy and Standard8 have just started rewriting the configuration to address some limitations of the previous configuration, and to allow it to be shared with our mobile products.
  • Mandy added a new application search engine class in 1861080  and 1863360.
  • Standard8 added search engine config schema v2. 1854965
  • Standard8 updated search tests in 1862287 , 1862462 and 1863023.
  • Standard8 updated the new and old search engine selectors to use the right keys. 1862679
  • Standard8 enabled spell check for schemas and docs in search. 1862624.

Hacks.Mozilla.OrgMozilla AI Guide Launch with Summarization Code Example

The Mozilla AI Guide has launched and we welcome you to read through and get acquainted with it. You can access it here.

Our vision is for the AI Guide to be the starting point for every new developer to the space and a place to revisit for clarity and inspiration, ensuring that AI innovations enrich everyday life. The AI Guide’s initial focus begins with language models and the aim is to become a collaborative community-driven resource covering other types of models.

To start, the first few sections in the Mozilla AI Guide go in-depth on the most asked questions about Large Language Models (LLMs). AI Basics covers the concepts of AI, ML, and LLMs, what these concepts mean, and how they are related. This section also breaks down the pros and cons of using an LLM. Language Models 101 builds on the shared knowledge of AI Basics and dives deeper into language models. It answers questions such as “What does ‘training’ an ML model mean?” or “What is the ‘human in the loop’ approach?”

We will jump to the last section on Choosing ML Models and demonstrate in code below what can be done using open source models to summarize certain text. You can access the Colab Notebook here or continue reading:

First Steps with Language Models

Unlike other guides, this one is designed to help you pick the right model for whatever it is you’re trying to do, by:

  • teaching you how to always remain on the bleeding edge of published AI research
  • broadening your perspective on current open options for any given task
  • keeping you from being tied to a closed-source / closed-data large language model (e.g. OpenAI, Anthropic)
  • creating a data-led system for always identifying and using the state-of-the-art (SOTA) model for any particular task

We’re going to home in on “text summarization” as our first task.

So… why are we not using one of the popular large language models?

Great question. Most available LLMs worth their salt can do many tasks, including summarization, but not all of them are good at the specific thing you want them to do. We should figure out how to evaluate whether they actually are.

Also, many of the current popular LLMs are not open, are trained on undisclosed data and exhibit biases. Responsible AI use requires careful choices, and we’re here to help you make them.

Finally, most large LLMs require powerful GPU compute to use. While there are many models that you can use as a service, most of them cost money per API call. That’s unnecessary when some of the more common tasks can be done at good quality with already available open models and off-the-shelf hardware.

Why does using open models matter?

Over the last few decades, engineers have been blessed with being able to onboard by starting with open source projects, and eventually shipping open source to production. This default state is now at risk.

Yes, there are many open models available that do a great job. However, most guides don’t discuss how to get started with them using simple steps and instead bias towards existing closed APIs.

Funding is flowing to commercial AI projects, which have larger budgets than open source contributors to market their work. This inevitably leads to engineers starting with closed source projects and shipping expensive closed projects to production.

Our First Project – Summarization

We’re going to:

  • Find text to summarize.
  • Figure out how to summarize it using the current state-of-the-art open source models.
  • Write some code to do so.
  • Evaluate the quality of the results using relevant metrics.

For simplicity’s sake, let’s grab Mozilla’s Trustworthy AI Guidelines in string form.

Note that in the real world, you will likely have to use other libraries to extract content for any particular file type.

import textwrap

content = """Mozilla's "Trustworthy AI" Thinking Points:

PRIVACY: How is data collected, stored, and shared? Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should enable people to decide how their data is used and what decisions are made with it.

FAIRNESS: We’ve seen time and again how bias shows up in computational models, data, and frameworks behind automated decision making. The values and goals of a system should be power aware and seek to minimize harm. Further, AI systems that depend on human workers should protect people from exploitation and overwork.

TRUST: People should have agency and control over their data and algorithmic outputs, especially considering the high stakes for individuals and societies. For instance, when online recommendation systems push people towards extreme, misleading content, potentially misinforming or radicalizing them.

SAFETY: AI systems can carry high risk for exploitation by bad actors. Developers need to implement strong measures to protect our data and personal security. Further, excessive energy consumption and extraction of natural resources for computing and machine learning accelerates the climate crisis.

TRANSPARENCY: Automated decisions can have huge personal impacts, yet the reasons for decisions are often opaque. We need to mandate transparency so that we can fully understand these systems and their potential for harm."""

Great. Now we’re ready to start summarizing.

A brief pause for context

The AI space is moving so fast that it requires a tremendous amount of catching up on scientific papers each week to understand the lay of the land and the state of the art.

It’s some effort for an engineer who is brand new to AI to:

  • discover which open models are even out there
  • which models are appropriate for any particular task
  • which benchmarks are used to evaluate those models
  • which models are performing well based on evaluations
  • which models can actually run on available hardware

For the working engineer on a deadline, this is problematic. There’s not much centralized discourse on working with open source AI models. Instead there are fragmented X (formerly Twitter) threads, random private groups and lots of word-of-mouth transfer.

However, once we have a workflow to address all of the above, you will have the means to forever be on the bleeding edge of published AI research.

How do I get a list of available open summarization models?

For now, we recommend Huggingface and their large directory of open models broken down by task. This is a great starting point. Note that larger LLMs are also included in these lists, so we will have to filter.

In this huge list of summarization models, which ones do we choose?

We don’t know what any of these models are trained on. For example, a summarizer trained on news articles will perform better on news articles than one trained on Reddit posts.

What we need is a set of metrics and benchmarks that we can use to do apples-to-apples comparisons of these models.

How do I evaluate summarization models?

The steps below can be used to evaluate any available model for any task. It requires hopping between a few sources of data for now, but we will be making this a lot easier moving forward.


  1. Find the most common datasets used to train models for summarization.
  2. Find the most common metrics used to evaluate models for summarization across those datasets.
  3. Do a quick audit on training data provenance, quality and any exhibited biases, to keep in line with Responsible AI usage.

Finding datasets

The easiest way to do this is using Papers With Code, an excellent resource for finding the latest scientific papers by task that also have code repositories attached.

First, filter Papers With Code’s “Text Summarization” datasets by most cited text-based English datasets.

Let’s pick (as of this writing) the most cited dataset — the “CNN/DailyMail” dataset. Usually, “most cited” is one marker of popularity.

Now, you don’t need to download this dataset. But we’re going to review the info Papers With Code has provided to learn more about it for the next step. This dataset is also available on Huggingface.

You want to check 3 things:

  • license
  • recent papers
  • whether the data is traceable and the methods are transparent

First, check the license. In this case, it’s MIT licensed, which means it can be used for both commercial and personal projects.

Next, see if the papers using this dataset are recent. You can do this by sorting Papers in descending order. This particular dataset has many papers from 2023 – great!

Finally, let’s check whether the data is from a credible source. In this case, the dataset was generated by IBM in partnership with the University of Montréal. Great.

Now, let’s dig into how we can evaluate models that use this dataset.

Evaluating models

Next, we look for measured metrics that are common across datasets for the summarization task. BUT, if you’re not familiar with the literature on summarization, you have no idea what those are.

To find out, pick a “Subtask” that’s close to what you’d like to see. We’d like to summarize the Mozilla text we pulled in above, so let’s choose “Abstractive Text Summarization”.

Now we’re in business! This page contains a significant amount of new information.

There are mentions of three new terms: ROUGE-1, ROUGE-2 and ROUGE-L. These are the metrics that are used to measure summarization performance.

There is also a list of models and their scores on these three metrics – this is exactly what we’re looking for.

Assuming we’re looking at ROUGE-1 as our metric, we now have the top 3 models that we can evaluate in more detail. All 3 are close to 50, which is a promising ROUGE score (read up on ROUGE).
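For intuition, ROUGE-1 is based on unigram overlap between a candidate summary and a reference summary. Here is a deliberately minimal sketch of the idea (real evaluations use a proper implementation such as the rouge-score package, with stemming and tokenization rules this toy version skips):

```python
from collections import Counter

def rouge_1(candidate, reference):
    """Simplified ROUGE-1 F1: unigram overlap between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Identical texts score 1.0; unrelated texts score 0.0.
score = rouge_1("the cat sat on the mat", "the cat lay on the mat")
```

A score near 50 on the leaderboards means roughly half the reference summary's unigrams are recovered, which is strong for abstractive summarization.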

Testing out a model

OK, we have a few candidates, so let’s pick a model that will run on our local machines. Many models get their best performance when running on GPUs, but there are many that also generate summaries fast on CPUs. Let’s pick one of those to start – Google’s Pegasus.

# first we install huggingface's transformers library
%pip install transformers sentencepiece

Then we find Pegasus on Huggingface. Note that the datasets Pegasus was trained on include CNN/DailyMail, which bodes well for our summarization task. Interestingly, there’s a variant of Pegasus from Google that’s trained only on our dataset of choice; we should use that.

from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch

# Set the seed; this will help reproduce results. Changing the seed will
# generate new results.
from transformers import set_seed
set_seed(42)  # any fixed value works

# We're using the version of Pegasus specifically trained for summarization
# using the CNN/DailyMail dataset
model_name = "google/pegasus-cnn_dailymail"

# If you're following along in Colab, switch your runtime to a
# T4 GPU or other CUDA-compliant device for a speedup
device = "cuda" if torch.cuda.is_available() else "cpu" 

# Load the tokenizer
tokenizer = PegasusTokenizer.from_pretrained(model_name) 

# Load the model 
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)

# Tokenize the entire content
batch = tokenizer(content, padding="longest", return_tensors="pt").to(device)

# Generate the summary as tokens
summarized = model.generate(**batch)

# Decode the tokens back into text
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]

# Compare
def compare(original, summarized_text):
  print(f"Article text length: {len(original)}\n")
  print(textwrap.fill(summarized_text, 100))
  print(f"Summarized length: {len(summarized_text)}")

compare(content, summarized_text)
Article text length: 1427

Trustworthy AI should enable people to decide how their data is used.<n>values and goals of a system
should be power aware and seek to minimize harm.<n>People should have agency and control over their
data and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.

Summarized length: 320

Alright, we got something! Kind of short though. Let’s see if we can make the summary longer…


# Generate the summary as tokens, with a max_new_tokens
summarized = model.generate(**batch, max_new_tokens=800)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]

compare(content, summarized_text)
Article text length: 1427

Trustworthy AI should enable people to decide how their data is used.<n>values and goals of a system
should be power aware and seek to minimize harm.<n>People should have agency and control over their
data and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.

Summarized length: 320

Well, that didn’t really work. Let’s try a different approach called ‘sampling’. This allows the model to pick the next word at random according to its conditional probability distribution (the probability of each word given the words that came before it).

We’ll also be setting the ‘temperature’. This variable works to control the levels of randomness and creativity in the generated output.
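To see what temperature does mechanically, here's a toy sketch (not Pegasus internals): the logits are divided by the temperature before the softmax, so values below 1 sharpen the next-token distribution and values above 1 flatten it.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw next-token scores (logits) into probabilities.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (more random/creative).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # toy scores for a 3-word vocabulary
cool = softmax(logits, temperature=0.5)  # top token dominates more
hot = softmax(logits, temperature=2.0)   # probabilities more even
assert cool[0] > softmax(logits)[0] > hot[0]
```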

summarized = model.generate(**batch, do_sample=True, temperature=0.8, top_k=0)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data.

Summarized length: 193

Shorter, but the quality is higher. Adjusting the temperature up will likely help.

summarized = model.generate(**batch, do_sample=True, temperature=1.0, top_k=0)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.<n>We need to mandate transparency so that we can fully understand these systems
and their potential for harm.

Summarized length: 325

Now let’s play with one other generation approach called top_k sampling — instead of considering all possible next words in the vocabulary, the model only considers the top ‘k’ most probable next words.

This technique helps to focus the model on likely continuations and reduces the chances of generating irrelevant or nonsensical text.

It strikes a balance between creativity and coherence by limiting the pool of next-word choices, but not so much that the output becomes deterministic.
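As a toy illustration of the idea (not the transformers implementation), top_k filtering keeps only the k most probable candidates and renormalizes before sampling:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable next-token candidates and renormalize,
    so sampling can only choose among them."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

# A toy next-token distribution over a 5-word vocabulary:
probs = [0.5, 0.2, 0.15, 0.1, 0.05]
filtered = top_k_filter(probs, k=2)
# Only the two most probable tokens survive, rescaled to sum to 1.
```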

summarized = model.generate(**batch, do_sample=True, top_k=50)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points look at ethical issues surrounding automated decision
making.<n>values and goals of a system should be power aware and seek to minimize harm.People
should have agency and control over their data and algorithmic outputs.<n>Developers need to
implement strong measures to protect our data and personal security.

Summarized length: 355

Finally, let’s try top_p sampling, also known as nucleus sampling. This is a strategy where the model considers only the smallest set of top words whose cumulative probability exceeds a threshold ‘p’.

Unlike top_k, which considers a fixed number of words, top_p adapts based on the distribution of probabilities for the next word. This makes it more dynamic and flexible. It helps create diverse and sensible text by allowing less probable words to be selected when the most probable ones don’t add up to ‘p’.
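Again as a toy sketch (not the transformers implementation), nucleus selection walks the candidates in probability order until the cumulative mass reaches p:

```python
def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of most-probable tokens whose
    cumulative probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.2, 0.15, 0.1, 0.05]
# With p=0.6 the nucleus is tokens 0 and 1 (0.5 + 0.2 = 0.7 >= 0.6);
# a flatter distribution would admit more tokens for the same p.
nucleus = top_p_filter(probs, p=0.6)
```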

summarized = model.generate(**batch, do_sample=True, top_p=0.9, top_k=50)
summarized_decoded = tokenizer.batch_decode(summarized, skip_special_tokens=True)
summarized_text = summarized_decoded[0]
compare(content, summarized_text)

# saving this for later.
pegasus_summarized_text = summarized_text
Article text length: 1427

Mozilla's "Trustworthy AI" Thinking Points:.<n>People should have agency and control over their data
and algorithmic outputs.<n>Developers need to implement strong measures to protect our data and
personal security.<n>We need to mandate transparency so that we can fully understand these systems
and their potential for harm.

Summarized length: 325

To continue with the code example, see a test with another model, and learn how to evaluate ML model results (a whole other section), click here to view the Python Notebook and click “Open in Colab” to experiment with your own custom code.

Note that this guide will be constantly updated, and new sections on Data Retrieval, Image Generation, and Fine Tuning are coming next.

Developer Contributions Are Vital

Shortly after today’s launch of the Mozilla AI Guide, we will be publishing our community contribution guidelines. These will provide guidance on the types of content developers can contribute and how they can be shared. Get ready to share any great open source AI projects, implementations, and video and audio models.

Together, we can bring together a cohesive, collaborative and responsible AI community.

A special thanks to Kevin Li and Pradeep Elankumaran who pulled this great blog post together.

The post Mozilla AI Guide Launch with Summarization Code Example appeared first on Mozilla Hacks - the Web developer blog.

Spidermonkey Development BlogSpiderMonkey Byte-Sized Architectures

I recently presented on SpiderMonkey and Byte-Sized Architectures at a collaborative meeting the evening before the TC39 plenary in Tokyo. The first part of the presentation is a high-level view of the overall architecture of SpiderMonkey, and gives an idea of how a modern JavaScript engine is put together. In the second part, I talk about byte-sized architectures, a way for teams to build a shared understanding of complicated systems.

The slides and a recording are available.

The Rust Programming Language BlogAnnouncing Rust 1.74.0

The Rust team is happy to announce a new version of Rust, 1.74.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.74.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.74.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.74.0 stable

Lint configuration through Cargo

As proposed in RFC 3389, the Cargo.toml manifest now supports a [lints] table to configure the reporting level (forbid, deny, warn, allow) for lints from the compiler and other tools. So rather than setting RUSTFLAGS with -F/-D/-W/-A, which would affect the entire build, or using crate-level attributes like:

#![forbid(unsafe_code)]
#![deny(clippy::enum_glob_use)]

You can now write those in your package manifest for Cargo to handle:

[lints.rust]
unsafe_code = "forbid"

[lints.clippy]
enum_glob_use = "deny"

These can also be configured in a [workspace.lints] table, then inherited by [lints] workspace = true like many other workspace settings. Cargo will also track changes to these settings when deciding which crates need to be rebuilt.
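For instance, a workspace might centralize lint levels like this (a sketch; the table names follow the Cargo reference):

```toml
# In the workspace's root Cargo.toml:
[workspace.lints.rust]
unsafe_code = "forbid"

# In each member crate's Cargo.toml, opt in to the shared settings:
[lints]
workspace = true
```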

For more information, see the lints and workspace.lints sections of the Cargo reference manual.

Cargo Registry Authentication

Two more related Cargo features are included in this release: credential providers and authenticated private registries.

Credential providers allow configuration of how Cargo gets credentials for a registry. Built-in providers are included for OS-specific secure secret storage on Linux, macOS, and Windows. Additionally, custom providers can be written to support arbitrary methods of storing or generating tokens. Using a secure credential provider reduces risk of registry tokens leaking.

Registries can now optionally require authentication for all operations, not just publishing. This enables private Cargo registries to offer more secure hosting of crates. Use of private registries requires the configuration of a credential provider.

For further information, see the Cargo docs.

Projections in opaque return types

If you have ever received the error that a "return type cannot contain a projection or Self that references lifetimes from a parent scope," you may now rest easy! The compiler now allows mentioning Self and associated types in opaque return types, like async fn and -> impl Trait. This is the kind of feature that gets Rust closer to how you might just expect it to work, even if you have no idea about jargon like "projection".

This functionality had an unstable feature gate because its implementation originally didn't properly deal with captured lifetimes, and once that was fixed it was given time to make sure it was sound. For more technical details, see the stabilization pull request, which describes the following examples that are all now allowed:

struct Wrapper<'a, T>(&'a T);

// Opaque return types that mention `Self`:
impl Wrapper<'_, ()> {
    async fn async_fn() -> Self { /* ... */ }
    fn impl_trait() -> impl Iterator<Item = Self> { /* ... */ }
}

trait Trait<'a> {
    type Assoc;
    fn new() -> Self::Assoc;
}
impl Trait<'_> for () {
    type Assoc = ();
    fn new() {}
}

// Opaque return types that mention an associated type:
impl<'a, T: Trait<'a>> Wrapper<'a, T> {
    async fn mk_assoc() -> T::Assoc { /* ... */ }
    fn a_few_assocs() -> impl Iterator<Item = T::Assoc> { /* ... */ }
}

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

  • As previously announced, Rust 1.74 has increased its requirements on Apple platforms. The minimum versions are now:
    • macOS: 10.12 Sierra (First released 2016)
    • iOS: 10 (First released 2016)
    • tvOS: 10 (First released 2016)

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.74.0

Many people came together to create Rust 1.74.0. We couldn't have done it without all of you. Thanks!

This Week In RustThis Week in Rust 521

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is cargo-msrv, a cargo subcommand to find out the Minimum Supported Rust Version (MSRV) of your crate.

llogiq is a bit worried about not having received suggestions for two weeks in a row, but still offers you his choice.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

364 pull requests were merged in the last week

Rust Compiler Performance Triage

A week dominated by one particular perf improvement that led to huge performance gains: an average of 5% improvement across 121 test cases! The improvement comes from adding an #[inline] hint to the output of #[derive(Debug)], which presumably allows the compiler to more easily do dead-code elimination, reducing binary size and the amount of code that actually needs to be code-genned.

Triage done by @rylev. Revision range: 7b97a5ca..173b6e68


(instructions:u)            | mean  | range           | count
Regressions ❌ (primary)    | 0.4%  | [0.2%, 0.9%]    | 10
Regressions ❌ (secondary)  | 1.9%  | [0.2%, 3.6%]    | 12
Improvements ✅ (primary)   | -5.6% | [-49.2%, -0.1%] | 111
Improvements ✅ (secondary) | -3.5% | [-25.0%, -0.2%] | 155
All ❌✅ (primary)          | -5.1% | [-49.2%, 0.9%]  | 121

2 Regressions, 2 Improvements, 3 Mixed; 3 of them in rollups. 55 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guidelines entered Final Comment Period this week.
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-11-15 - 2023-12-13 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I decided to keep learning Rust because I liked the syntax. I liked the speed. I liked the community. I liked it all. It felt like a breath of fresh air: a syntax more intuitive than Python, JavaScript, or C, yet still faster.

Goren Barak on their blog

Thanks to Goren Barak for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Thunderbird: An Untold History of Thunderbird

The History of Thunderbird: a graphic that depicts the evolution of the Thunderbird logo over the last 20 years.

Selfie of Ryan Sipes in front of trees with yellow and orange leaves. Caption: Ryan Sipes, Product and Business Development Manager

Hi, my name is Ryan Sipes and I run MZLA Technologies Corporation, the subsidiary of the Mozilla Foundation that develops Thunderbird. I have been working on Thunderbird for my day job since November of 2017. It doesn’t seem like that long ago, but looking at the calendar I see that it has been six years this month. A lot has happened in that time and Thunderbird is in a much different place than it was when I started. I’ve seen multiple people online share accounts of “the Thunderbird story,” and each time I’ve thought “that’s really great, but they missed some important parts.” It’s not their fault, we’ve simply never shared the whole story. So today, I thought I’d sit down and write that.

To tell the story correctly, we must go back to 2012. That’s when Thunderbird began to transition from a project that was funded and developed by the Mozilla Corporation, to a community run project. The reasons behind that move were sound and made sense given the state of the project at the time.

One of the biggest issues for Thunderbird throughout its life is that, while it was a well-loved product with over 20 million users, it never had any substantial revenue that could adequately cover its development. So, whoever managed the incredibly large project had to simply eat the cost of developing, maintaining, fixing, and distributing the software. A few attempts were made to make the project sustainable (Mozilla Messaging, for example) but they ultimately didn’t work.

Surviving With A Skeleton Crew

By the time I joined in 2017, Thunderbird lived in the nonprofit Mozilla Foundation and was governed by the Thunderbird Council, an elected group of contributors to the project. At some point, a Thunderbird donation form was set up and donations were high enough to hire three people: a developer, someone to keep the infrastructure running, and a Community Manager (me). The team was way too small for the task at hand (by contrast, we now have 29 people working on Thunderbird). Shortly after being hired, I joined the Thunderbird Council and acted as its Treasurer for many years.

To say that this period was challenging would be an understatement. Thunderbird was being maintained by a skeleton crew. There were long stretches where we couldn’t even get the software to build. On top of that, there was no real organization or guiding vision for where we were trying to go. In fact, there were competing ideas within the community, and a consensus looked difficult to reach.

Throughout the first couple of years that I was around, I was acting as the Treasurer for the project and kept thinking, “we’ve got 20,000,000 users who rely on Thunderbird every day – is an application with that many users really going to just die? We really need to tell our users that we need their support.” 

The project simply wasn’t on a sustainable path, and a few of us suspected it was only a matter of a few years until Thunderbird would be unmaintainable.

Asking For Help

So we defined a vision, made a roadmap, and focused our work. It was still kind of chaotic, but it was more organized chaos. At the same time, I worked with the team on how best to convey the message that we needed the support of our users. It started with updates to the familiar Start Page that shows inside Thunderbird. Then we tried letting folks know when they downloaded the software that “Thunderbird is supported by donations from our users. Please consider giving to keep Thunderbird alive” (not the exact wording, but that was the spirit of the message). 

Eventually, we showed an in-app donation appeal to all our users explaining that we truly need their support if Thunderbird is going to stay around. Each time we tried to be honest about our need, and tasteful about how and when we asked.

Thunderbird's current Start Page, displayed when opening the software. It shows various links to donate, get support, report bugs, and contribute

And… It actually worked! Our users donated to help Thunderbird and we continued to get more organized. Eventually our team grew large enough that it made sense to move to our own organization, the aforementioned MZLA Technologies Corporation. The team working on Thunderbird got way more organized, and we put out many of the fires that made developing Thunderbird so hard. 

I could probably write a book on this part, but trust me: it was really, really tough to solve all these problems. Fortunately, it has all paid off in allowing us to do more in a faster and more efficient way.

Thunderbird Adopts A Puppy

Then we were able to do bigger things. As noted in another blog post, I’d been talking to cketti, who ran the K-9 Mail open source email client project. K-9 Mail was exactly what I imagined Thunderbird on Android would be if it existed. Talking to cketti for a couple of years showed me that the K-9 Mail project also faced a sustainability challenge.

After many Thunderbird Council discussions, we concluded that K-9 Mail and Thunderbird shared common values and that we could better serve our users by adopting the project. K-9 Mail is amazing software, with a long legacy like Thunderbird and I’m excited that soon we will ship Thunderbird for Android based on the awesome foundation the K-9 Mail team has built and continues to improve. 

At the same time, there was a dreaded task to take on. Parts of the Thunderbird codebase are 20 years old. In software time, that’s ancient, like the pyramids are ancient. That old code made fixing long-standing bugs or adding features extraordinarily difficult. Often a small change would break something random in a totally different part of the application, and it would take a lot of time to figure out why. We had to fix this in the areas that were most affecting our work.

Enter the “Supernova” project.

But WHY The Name “Supernova?” 

I named the 115 release Thunderbird “Supernova” after a cosmic explosion because I knew we were going to have to blow some things up and rebuild them. But I also knew the result was going to be beautiful and more powerful for our users. (Quick aside: a supernova often results in a beautiful nebula – search “Crab Nebula” to see what I mean). We rebuilt some really core parts of Thunderbird (a ton of the front-end, the message list, and message display areas to name a few). And in the process, we began a journey of future-proofing Thunderbird so that it can live for another 20 years. 

A photo of the Crab Nebula, courtesy of NASA. Caption: This view of the Crab Nebula in visible light comes from the Hubble Space Telescope and spans 12 light-years. The supernova remnant, located 6,500 light-years away in the constellation Taurus, is among the best-studied objects in the sky. Credits: NASA/ESA/ASU/J. Hester

And to celebrate, we gave Thunderbird a new logo.

Now, we look forward to the future, but email as a standard is under threat. It is becoming less of an open ecosystem and more one of large fiefdoms with high walls.

The biggest providers are making features and tools that only work within their walled garden. They espouse “privacy” while maintaining access to all your messages in their privacy policies. They make it increasingly hard to use clients like Thunderbird because they want you to live in their apps in order to monetize you and your data. 

Next: Thunderbird for Android and iOS

But Thunderbird has an important story to tell. A story of decentralized, privacy-respecting email based on open standards. We’ve always created open source software that anyone can use, contribute to, and extend to their specific needs. 

By the time Q1 2024 rolls around, we’ll have opened the aperture a bit and given folks a new choice on Android. Next year we’re also going to start working on Thunderbird for iOS. In addition, we’re going to develop the tools that give people choices other than the big three. Thunderbird has come a long way these past few years, but we’re not done yet – come and join us as we get ready to do so much more!

The post An Untold History of Thunderbird appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Faster compilation with the parallel front-end in nightly

The Rust compiler's front-end can now use parallel execution to significantly reduce compile times. To try it, run the nightly compiler with the -Z threads=8 option. This feature is currently experimental, and we aim to ship it in the stable compiler in 2024.

Keep reading to learn why a parallel front-end is needed and how it works, or just skip ahead to the How to use it section.

Compile times and parallelism

Rust compile times are a perennial concern. The Compiler Performance Working Group has continually improved compiler performance for several years. For example, in the first 10 months of 2023, there were mean reductions in compile time of 13%, in peak memory use of 15%, and in binary size of 7%, as measured by our performance suite.

However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining.

But there is one piece of large but high-hanging fruit: parallelism. Current Rust compiler users benefit from two kinds of parallelism, and the newly parallel front-end adds a third kind.

Existing interprocess parallelism

When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel. This works well. Try compiling a large Rust program with the -j1 flag to disable this parallelization and it will take a lot longer than normal.

You can visualise this parallelism if you build with Cargo's --timings flag, which produces a chart showing how the crates are compiled. The following image shows the timeline when building ripgrep on a machine with 28 virtual cores.

cargo build --timings output when compiling ripgrep

There are 60 horizontal lines, each one representing a distinct process. Their durations range from a fraction of a second to multiple seconds. Most of them are rustc, and the few orange ones are build scripts. The first twenty processes all start at the same time. This is possible because there are no dependencies between the relevant crates. But further down the graph, parallelism reduces as crate dependencies increase. Although the compiler can overlap compilation of dependent crates somewhat thanks to a feature called pipelined compilation, there is much less parallel execution happening towards the end of compilation, and this is typical for large Rust programs. Interprocess parallelism is not enough to take full advantage of many cores. For more speed, we need parallelism within each process.

Existing intraprocess parallelism: the back-end

The compiler is split into two halves: the front-end and the back-end.

The front-end does many things, including parsing, type checking, and borrow checking. Until this week, it could not use parallel execution.

The back-end performs code generation. It generates code in chunks called "codegen units" and then LLVM processes these in parallel. This is a form of coarse-grained parallelism.
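The size of those chunks is configurable. As a point of reference (this is standard Cargo behavior, not something new to the parallel front-end work), the unit count can be tuned per profile in Cargo.toml; more units mean more parallelism but less cross-unit optimization:

```toml
# Cargo.toml — codegen-units controls how much parallel work LLVM
# gets per crate. Release builds default to 16.
[profile.release]
codegen-units = 16
```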

We can visualize the difference between the serial front-end and the parallel back-end. The following image shows the output of a profiler called Samply measuring rustc as it does a release build of the final crate in Cargo. The image is superimposed with markers that indicate front-end and back-end execution.

Samply output when compiling Cargo, serial

Each horizontal line represents a thread. The main thread is labelled "rustc" and is shown at the bottom. It is busy for most of the execution. The other 16 threads are LLVM threads, labelled "opt cgu.00" through to "opt cgu.15". There are 16 threads because 16 is the default number of codegen units for a release build.

There are several things worth noting.

  • Front-end execution takes 10.2 seconds.
  • Back-end execution takes 6.2 seconds, and the LLVM threads are running for 5.9 seconds of that.
  • The parallel code generation is highly effective. Imagine if all those LLVM threads executed one after another!
  • Even though there are 16 LLVM threads, at no point are all 16 executing at the same time, despite this being run on a machine with 28 cores. (The peak is 14 or 15.) This is because the main thread translates its internal code representation (MIR) to LLVM's code representation (LLVM IR) in serial. This takes a brief period for each codegen unit, and explains the staircase shape on the left-hand side of the code generation threads. There is some room for improvement here.
  • The front-end is entirely serial. There is a lot of room for improvement here.
New intraprocess parallelism: the front-end

The front-end is now capable of parallel execution. It uses Rayon to perform compilation tasks using fine-grained parallelism. Many data structures are synchronized by mutexes and read-write locks, atomic types are used where appropriate, and many front-end operations are made parallel. The addition of parallelism was done by modifying a relatively small number of key points in the code. The vast majority of the front-end code did not need to be changed.
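A minimal single-process sketch of that synchronization pattern (not rustc's actual code, and using fixed chunks on standard-library threads rather than Rayon's work-stealing pool, but the same locking discipline): independent per-item work runs in parallel, a shared table is guarded by a Mutex, and a counter uses an atomic type.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;
use std::thread;

// Map each item to a computed value in parallel. The HashMap is shared and
// mutex-guarded; the progress counter is an atomic, so no lock is needed.
fn parallel_square_map(items: &[u64], workers: usize) -> (HashMap<u64, u64>, usize) {
    let results = Mutex::new(HashMap::new());
    let processed = AtomicUsize::new(0);
    let chunk_size = (items.len() / workers).max(1);

    thread::scope(|s| {
        for chunk in items.chunks(chunk_size) {
            let (results, processed) = (&results, &processed);
            s.spawn(move || {
                for &n in chunk {
                    let cost = n * n; // stand-in for a compilation task
                    results.lock().unwrap().insert(n, cost);
                    processed.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });

    (results.into_inner().unwrap(), processed.into_inner())
}

fn main() {
    let items: Vec<u64> = (0..1_000).collect();
    let (map, count) = parallel_square_map(&items, 8);
    assert_eq!(count, 1_000);
    assert_eq!(map[&10], 100);
}
```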

When the parallel front-end is enabled and configured to use eight threads, we get the following Samply profile when compiling the same example as before.

Samply output when compiling Cargo, parallel

Again, there are several things worth noting.

  • Front-end execution takes 5.9 seconds (down from 10.2 seconds).
  • Back-end execution takes 5.3 seconds (down from 6.2 seconds), and the LLVM threads are running for 4.9 seconds of that (down from 5.9 seconds).
  • There are seven additional threads labelled "rustc" operating in the front-end. The reduced front-end time shows they are reasonably effective, but the thread utilization is patchy, with the eight threads all having periods of inactivity. There is room for significant improvement here.
  • Eight of the LLVM threads start at the same time. This is because the eight "rustc" threads create the LLVM IR for eight codegen units in parallel. (For seven of those threads that is the only work they do in the back-end.) After that, the staircase effect returns because only one "rustc" thread does LLVM IR generation while seven or more LLVM threads are active. If the number of threads used by the front-end was changed to 16 the staircase shape would disappear entirely, though in this case the final execution time would barely change.
Putting it all together

Rust compilation has long benefited from interprocess parallelism, via Cargo, and from intraprocess parallelism in the back-end. It can now also benefit from intraprocess parallelism in the front-end.

You might wonder how interprocess parallelism and intraprocess parallelism interact. If we have 20 parallel rustc invocations and each one can have up to 16 threads running, could we end up with hundreds of threads on a machine with only tens of cores, resulting in inefficient execution as the OS tries its best to schedule them?

Fortunately no. The compiler uses the jobserver protocol to limit the number of threads it creates. If a lot of interprocess parallelism is occurring, intraprocess parallelism will be limited appropriately, and the number of threads will not exceed the number of cores.
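A single-process sketch of the token idea behind the jobserver (the real protocol shares tokens between processes over inherited file descriptors; the `TokenPool` type and the numbers here are invented for illustration): a fixed pool of tokens caps concurrency no matter how many units of work exist.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// A counting semaphore: acquire blocks until a token is free.
struct TokenPool {
    available: Mutex<usize>,
    cv: Condvar,
}

impl TokenPool {
    fn new(tokens: usize) -> Self {
        TokenPool { available: Mutex::new(tokens), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut n = self.available.lock().unwrap();
        while *n == 0 {
            n = self.cv.wait(n).unwrap();
        }
        *n -= 1;
    }
    fn release(&self) {
        *self.available.lock().unwrap() += 1;
        self.cv.notify_one();
    }
    fn available(&self) -> usize {
        *self.available.lock().unwrap()
    }
}

fn main() {
    let pool = Arc::new(TokenPool::new(4)); // pretend the machine has 4 cores
    let running = Arc::new(AtomicUsize::new(0));
    let peak = Arc::new(AtomicUsize::new(0));

    // 32 units of work, but at most 4 run at once.
    let handles: Vec<_> = (0..32)
        .map(|_| {
            let (pool, running, peak) =
                (Arc::clone(&pool), Arc::clone(&running), Arc::clone(&peak));
            thread::spawn(move || {
                pool.acquire();
                let now = running.fetch_add(1, Ordering::SeqCst) + 1;
                peak.fetch_max(now, Ordering::SeqCst);
                thread::sleep(Duration::from_millis(5)); // stand-in for work
                running.fetch_sub(1, Ordering::SeqCst);
                pool.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert!(peak.load(Ordering::SeqCst) <= 4);
}
```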

How to use it

The nightly compiler is now shipping with the parallel front-end enabled. However, by default it runs in single-threaded mode and won't reduce compile times.

Keen users can opt into multi-threaded mode with the -Z threads option. For example:

$ RUSTFLAGS="-Z threads=8" cargo build --release

Alternatively, to opt in from a config.toml file (for one or more projects), add these lines:

[build]
rustflags = ["-Z", "threads=8"]

It may be surprising that single-threaded mode is the default. Why parallelize the front-end and then run it in single-threaded mode? The answer is simple: caution. This is a big change! The parallel front-end has a lot of new code. Single-threaded mode exercises most of the new code, but excludes the possibility of threading bugs such as deadlocks that can affect multi-threaded mode. Even in Rust, parallel programs are harder to write correctly than serial programs. For this reason the parallel front-end also won't be shipped in beta or stable releases for some time.

Performance effects

When the parallel front-end is run in single-threaded mode, compilation times are typically 0% to 2% slower than with the serial front-end. This should be barely noticeable.

When the parallel front-end is run in multi-threaded mode with -Z threads=8, our measurements on real-world code show that compile times can be reduced by up to 50%, though the effects vary widely and depend on the characteristics of the code and its build configuration. For example, dev builds are likely to see bigger improvements than release builds because release builds usually spend more time doing optimizations in the back-end. A small number of cases compile more slowly in multi-threaded mode than single-threaded mode. These are mostly tiny programs that already compile quickly.

We recommend eight threads because this is the configuration we have tested the most and it is known to give good results. Values lower than eight will see smaller benefits, but are appropriate if your hardware has fewer than eight cores. Values greater than eight will give diminishing returns and may even give worse performance.

If a 50% improvement seems low when going from one to eight threads, recall from the explanation above that the front-end only accounts for part of compile times, and the back-end is already parallel. You can't beat Amdahl's Law.
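A quick back-of-the-envelope check of that intuition, with illustrative numbers rather than the post's measurements:

```rust
// Amdahl's Law: overall speedup = 1 / ((1 - p) + p / n), where p is the
// fraction of total work that can run in parallel and n is the thread count.
fn amdahl_speedup(p: f64, n: f64) -> f64 {
    1.0 / ((1.0 - p) + p / n)
}

fn main() {
    // Even if 60% of total compile time were perfectly parallelizable
    // (an assumed figure), eight threads would give barely over 2x overall.
    let s = amdahl_speedup(0.6, 8.0);
    assert!(s > 2.0 && s < 2.2);
    // And a perfectly parallel workload would scale linearly.
    assert_eq!(amdahl_speedup(1.0, 8.0), 8.0);
}
```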

Memory usage can increase significantly in multi-threaded mode. We have seen increases of up to 35%. This is unsurprising given that various parts of compilation, each of which requires a certain amount of memory, are now executing in parallel.


Correctness

Reliability in single-threaded mode should be high.

In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.

The binaries produced by the compiler are expected to be the same no matter which front-end is being used. Any differences will be considered a bug.


If you have any problems with the parallel front-end, please check the issues marked with the "WG-compiler-parallel" label. If your problem does not match any of the existing issues, please file a new issue.

For more general feedback, please start a discussion on the wg-parallel-rustc Zulip channel. We are particularly interested to hear the performance effects on the code you care about.

Future work

We are working to improve the performance of the parallel front-end. As the graphs above showed, there is room to improve the utilization of the threads in the front-end. We are also ironing out the remaining bugs in multi-threaded mode.

We aim to stabilize the -Z threads option and ship the parallel front-end running by default in multi-threaded mode on stable releases in 2024.


Acknowledgments

The parallel front-end has been under development for a long time. It was started by @Zoxc, who also did most of the work for several years. After a period of inactivity, the project was revived this year by @SparrowLii, who led the effort to get it shipped. Other members of the Parallel Rustc Working Group have also been involved with reviews and other activities. Many thanks to everyone involved.

The Mozilla Blog: Mozilla Joins Latest AI Insight Forum

Today, Mozilla Foundation President Mark Surman spoke with members of the US Senate, including Senate Majority Leader Schumer, Senator Rounds, Senator Heinrich and Senator Young, about two questions Mozilla believes are among the most critical we must ask if we’re to chart a better path forward with AI: How can we protect people’s privacy in the AI era? And how can we ensure that those who cause harm through AI can be held accountable — and liable?

At Mozilla, we have a unique vantage point in finding answers to these questions: that of a non-profit foundation and a tech company. As a foundation, we’ve spent the past five years exploring what it takes to make AI trustworthy and, along with nine other philanthropic foundations, have joined Vice President Harris in announcing a $200 million investment in the trustworthy AI ecosystem. As a tech company, we’re investing heavily in leveraging AI in our products and have set up our own AI R&D lab, Mozilla.ai.

As progress in AI accelerates, it is critical that we take action to ensure that the benefits of AI are shared widely across society and to protect people from harm. Binding rules should be a part of this course of action, and privacy, openness, and transparency should be core principles underlying any regulatory framework. 

Open source AI, in particular, faces a significant threat from speculative fears about its potential misuse. Rushing to shut down open source AI could hurt our ability to harness AI’s potential. Abuse is not a problem unique to open source AI – we’ve seen time and time again that proprietary technologies are equally susceptible to abuse. In fact, openness has the potential to play a significant role in promoting competition in AI and large language models – something organizations like AI2, EleutherAI, and Mistral are focused on – and it also allows governments and public interest groups to assess the technology and flag bias, security flaws, and other issues, improving both the quality of these technologies and the oversight of them.

In contemplating new rules for AI, we’ve asked the Senate to consider the following recommendations: 

  1. Incentivize openness and transparency: Open AI ecosystems facilitate scrutiny and help foster an environment where responsibility for AI-driven outcomes can be appropriately attributed. Moreover, openness in AI stimulates innovation by providing the building blocks with which the market can build competitive products. Ensuring that all projects, open source or not, meet minimum criteria of responsible release is different from effectively banning open source approaches due to hypothetical future harms. Openness is not a problem but a core part of the solution that will help a broad group of actors engage core questions in this space, including privacy or liability.
  2. Distribute liability equitably: The complexity of AI systems necessitates a nuanced approach to liability that considers the entire value chain, from data collection to model deployment. Liability should not be concentrated but rather distributed in a manner that reflects how AI is developed and brought to market. Rather than just looking at the deployers of these models, who often might not be in a position to mitigate the underlying causes for potential harms, a more holistic approach would regulate practices and processes across the development ‘stack’.
  3. Champion privacy by default: Privacy legislation must be at the forefront of the AI regulatory framework. The American Data Privacy and Protection Act, endorsed by Mozilla, would represent a significant step towards providing the necessary privacy guarantees that underpin responsible AI. Until Congress passes a federal law, the FTC should push forward its critical Commercial Surveillance and Data Security rulemaking, and existing rules protecting consumers and competition need to be enforced. 
  4. Invest in privacy-enhancing technologies: Investment in privacy-enhancing technologies, with government funding at its heart, is crucial for the development of AI that protects individual privacy — beginning with data collection. Such investment not only aligns with ethical standards but also drives innovation in creating more responsible and trustworthy methodologies for AI development.

At Mozilla, we will continue to fight for and invest in more trustworthy AI. We’ve shared Mark Surman’s full written submission with the Senate, which includes more of Mozilla’s perspective on AI regulation and governance.

The post Mozilla Joins Latest AI Insight Forum appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 520

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub, and archives can be viewed online. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is floem, a native Rust UI library with fine-grained reactivity.

Despite receiving no suggestions, llogiq is reasonably pleased with his choice.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

366 pull requests were merged in the last week

Rust Compiler Performance Triage

A difficult week for triage, due to what appears to be system-level disruption to the measurement apparatus, yielding transient noise (and potentially masking actual problems). The main non-noise performance change was a huge regression to bitmaps introduced by PR #117131, which already has an in-flight fix (PR #117542). The other thing worth noting is that the parallel rustc front-end has been enabled in nightly builds, which introduced some overhead that was expected by wg-parallel-rustc.

Triage done by @pnkfelix. Revision range: 650991d6..7b97a5ca

10 Regressions, 4 Improvements, 3 Mixed; 3 of them in rollups. 68 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-11-08 - 2023-12-06 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

For Binder to continue to meet Android's needs, we need better ways to manage (and reduce!) complexity without increasing the risk.

The biggest change is obviously the choice of programming language. We decided to use Rust because it directly addresses a number of the challenges within Binder that we have faced during the last years.

Alice Ryhl on the Linux Kernel Mailing List

Thanks to Vincent de Phily for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog: A Third Way on AI

Last week was an important moment in the debate about AI, with President Biden issuing an executive order and the UK’s AI Safety Summit convening world leaders.

Much of the buzz around these events made it sound like AI presents us with a binary choice: unbridled optimism, or existential fear. But there was also a third path available, a nuanced, practical perspective that examines the real risks and benefits of AI. 

There have been people promoting this third perspective for years — although GPT-fueled headlines of the past 12 months have often looked past them. They are foundations, think tanks, researchers and activists (including a number of Mozilla fellows and founders) plus the policymakers behind efforts like last year’s Blueprint for an AI Bill of Rights.

We were happy to see the executive order echo many of the ideas that have emerged from this school of thought over the last few years, prioritizing practical, responsible AI governance. The UK Safety Summit started on a very different note, anchored in concerns around existential risks — but also some welcome reframing.

As we look forward from this point, it feels important to highlight three key levers that will help us get closer to responsible AI governance: well-designed regulation, open markets, and open source. Some of these were in the news last week, while others require more attention. Together, they have the potential to help us shape AI in ways that are more trustworthy, empowering and equitable. 


As we saw last week, there is near consensus that AI presents risks and harms, from the immediate (from discrimination to disinformation) to the longer term (which are still emerging and being explored). There’s also a growing consensus that regulation is a part of the solution.

But what exactly this regulation looks like — and what its outcomes should be — is where consensus breaks down. One thing is clear, though: Any regulatory framework should protect people from harm and provide mechanisms to hold companies accountable where they cause it.

The executive order included encouraging elements, balancing the need for a rights-respecting approach to addressing AI’s present risks with exploration of longer-term, more speculative risks. It also acknowledges that the U.S. is still missing critical baseline protections, such as comprehensive privacy legislation that works hand-in-hand with AI-specific rules.

The ideas that dominated the Safety Summit were less encouraging. They reinforced that old binary, either going too far or not far enough. There was a focus on self-regulation by AI companies (which isn’t really governance at all). And there were nods towards the idea of licensing large language models (which would only “increase concentration and may worsen AI risks,” in the words of Sayash Kapoor and Arvind Narayanan).

Open markets 

To Arvind and Sayash’s point, there is a problematic concentration of power in the tech industry. Decisions about AI, like who it most benefits or who is even allowed to access it, are made by a handful of people in just a few corners of the world. The majority of people impacted by this technology don’t get to shape it in any meaningful way. 

Competition is an antidote. AI development not just by big companies, but also smaller ones (and nonprofits, too) has the potential to decentralize power. And government action to hinder monopolies and anti-competitive practices can accelerate this. The executive order takes note, calling on the Federal Trade Commission (FTC) to promote competition and protect small businesses and entrepreneurs. 

It’s important for this work to start now — both by enforcing existing competition law and by greater adoption of ex-ante interventions like the UK’s DMCC bill. The previous decade showed how quickly incumbent players, like social media platforms, acquire or shut down competitors. And it’s already happening again: Anthropic and OpenAI have familiar investors (Google + Amazon and Microsoft, respectively), and once-independent laboratories like DeepMind were long ago acquired (by Google).

Open source

For smaller AI players to thrive in the marketplace, the core building blocks of the technology need to be broadly accessible. This has been a key lever in the past — open-source technologies like Linux and Firefox allowed a diverse set of companies to compete and thrive in the early days of the web.

Open source has a chance to play a role in fueling competition in AI and, more specifically, large language models. This is something organizations like Ai2, EleutherAI, Mistral, and others are focused on. Open source AI also has the potential to strengthen AI oversight, allowing governments and public interest groups to scrutinize the technology and call out bias, security flaws, and other issues. We’ve already seen open source catch critical bugs in tooling used for core AI development. While open source isn’t a panacea — and it can be twisted to further consolidate power if it’s not done right — it has huge potential in helping more people participate in and shape the next era of AI.

It’s important to note that there is a major threat to open source AI emerging: some use the fear of existential risk to propose approaches that would shut down open-source AI. Yes, bad actors could abuse open source AI models — but internet history shows that proprietary technologies are just as likely to be abused. Rushing to shut down open source AI in response to speculative fears, rather than exploring new approaches focused on responsible release, could unnecessarily foreclose our ability to tap into the potential of these technologies. 

Collaboratively dealing with global problems is not a new idea in technology. In fact there are many lessons to learn from previous efforts — from how we dealt with cybersecurity issues like encryption, governed the internet across borders, and worked to counter content moderation challenges like disinformation. What we need to do is take the time to develop a nuanced approach to open source and AI. We are happy to see the EU’s upcoming AI Act exploring these questions, and the recent U.S. executive order instructing the Department of Commerce to collect input on both the risks and benefits of  “dual-use foundation models with widely accessible weights” — in essence, open-source foundation models. This creates a process to develop the kind of nuanced, well-informed approaches we need. 

Which was exactly the goal of the letter on open source and AI safety that we both signed last week — along with over 1,500 others. It was a public acknowledgement that open source and open science are neither a silver bullet nor a danger. They are tools that can be used to better understand risks, bolster accountability, and fuel competition. It also acknowledged that positioning tight and proprietary control of foundational AI models as the only path to safety is naive, and maybe even dangerous.

The letter was just that — a letter. But we hope it’s part of something bigger. Many of us have been calling for AI governance that balances real risks and benefits for years. The signers of the letter include a good collection of these voices — and many new ones, often coming from surprising places. The community of people ready to roll up their sleeves to tackle the thorny problems of AI governance (even alongside people they usually disagree with) is growing. This is exactly what we need at this juncture. There is much work ahead.

The post A Third Way on AI appeared first on The Mozilla Blog.

The Mozilla BlogThe Future of Shopping

It’s clear that online shopping has given consumers more choices than ever, offering remarkable convenience with a few clicks of a button. But there’s a catch. With Fakespot by Mozilla, which uses AI to detect fake reviews and scams, we’ve seen it all when it comes to e-commerce. Counterfeits, fake reviews, review flooding, and nowadays, more GPT-generated fake reviews and fraudulent trends that rapidly explode and recede as trends naturally progress. 

While technologies come and go, in shopping there is an underlying attribute that is fundamental to any transaction on the internet: trust. It’s the cornerstone of successful e-commerce — promising safe transactions, consumer confidence and authenticity. With the speed of innovation and the importance of trust in mind, what does the future of shopping look like?

This is a question that has been on our minds as we’ve developed features such as Fakespot Pros and Cons. Powered by generative AI, Pros and Cons was released last year, long before the public concerns sparked by large language models, primarily driven by ChatGPT’s release late last year. Since its inception, Fakespot has utilized artificial intelligence as a critical part of our platform. We built it to be a trusted guide that saves you time and money by leveraging AI technology that protects consumers from the start of their shopping journey.

It’s right to be skeptical; with most of the AI models we see today, it seems trust and security are always an afterthought. Fast-paced technology releases are emblematic of that, leading to situations where it’s too late to fix the problem once it appears. What’s staggering is that with each iteration in innovation for AI, the amount that we understand about what happens under the hood fades considerably with each release. Sometimes we truly don’t know how a model comes up with its outputs because they operate at dimensions the human mind cannot comprehend. That is concerning. So, for the future of shopping, our solution is this: factor in consumer trust and security at the initial stages of developing product features and models.

That’s where Mozilla – a company that has been pioneering user-first innovations for more than 25 years – leads with trust and security in its AI efforts. This year, Mozilla announced it would commit $30 million to build Mozilla.ai, a startup focused on building a trustworthy, independent, and open-source AI ecosystem. Additionally, Mozilla hosted its first Responsible AI Challenge, which challenged builders to design and defend responsible and trustworthy AI solutions for a wide range of industries. Last month, Mozilla launched its AI Guide, a resource where developers come together to pioneer and drive generative AI innovations.

You can see this in action with Fakespot Chat, a new AI agent we’re testing. Fakespot Chat will guide you as you’re shopping by answering your questions with trust already built-in. This strengthens what we believe shopping should look like, in a world where a paradigm shift is occurring in consumer technology: privacy, safety and openness are imperative to our experiences as individuals on the internet.

Fakespot Chat answers questions you have about a product

How Fakespot Chat works 

Remember the days when you’d go to a physical store and ask the salesperson questions about a particular item before you purchased it? Fakespot Chat is our virtual version of that experience.

Fakespot Chat – which we are currently testing – is Mozilla’s first large language model (LLM). We will be working to improve the accuracy of its responses and would like to get feedback from people who use it. Simply click the thumbs-up button to let us know a response is accurate, or the thumbs-down button if it is inaccurate. This will help improve the model and its responses.

Ultimately, our goal with Fakespot Chat is to reduce your product research time and lead you to better purchasing decisions. This is a free service, currently available for shoppers in the U.S.

Here’s how Fakespot Chat works:

Step 1: Use the Fakespot Analyzer or analyze a product from our extension/add-on

Step 2: If you are using the Fakespot Analyzer, copy and paste the URL of the product you have questions about. If you analyze a product from the extension/add-on, the analysis will automatically start. 

Step 3: After analysis is complete, Fakespot Chat will appear on the right-hand side of an Analysis Page along with our core features such as Fakespot Review Grades, Pros and Cons, and Highlights.  

Step 4: Start asking Fakespot Chat any questions about the product. If available, Fakespot Chat will suggest questions that may be a good place to start your research.

Fakespot’s technology uses sophisticated AI and machine learning (ML) to sort through real and fake reviews to deliver the best answer to your questions. The only data that is collected is the data that you choose to share with us. Moreover, information about your session is only used to improve the functionality of Fakespot Chat for others. As always with Fakespot, we do not require you to create an account to use Fakespot Chat because we don’t need to know who you are or what you are doing.     

Try out Fakespot Chat by activating it at this link; we’re currently rolling it out progressively to users. If you think Fakespot Chat’s answer is right or wrong, you can help us improve the model by submitting feedback. Please share your general feedback and comments about the feature by visiting this link. We look forward to hearing from you as we build towards a better and more trusted shopping experience, together.

Reduce product research time and make better purchases

The post The Future of Shopping appeared first on The Mozilla Blog.

IRL (podcast)Crash Test Dummies

Why does it so often feel like we’re part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life. 

In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow’s AI.

Julia Friedlander is senior manager for automated driving policy at the San Francisco Municipal Transportation Agency; she wants to see AVs regulated based on safety performance data.

Justin-Casimir Braun is a data journalist at Lighthouse Reports who is investigating suspect algorithms for predicting welfare fraud across Europe. 

Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern’ their AI responsibly in practice.

Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University and he brings joy to computer science. 

IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.

The Talospace ProjectFirefox 119 and the next ppc64le JITeration

Although I've been a bit preoccupied lately with a new $DAYJOB which has required me to be remote, let's not bury the (larger) lede: the first iteration of the Firefox/SpiderMonkey ppc64le JIT is being evaluated by Mozilla to determine if the changes are acceptable. Please don't spam the Bugzilla entry with drive-by comments, but if you'd like to observe its progress, you can follow along in bug 1860412.

That doesn't mean, of course, that you can't try it yourself. The current JIT state for 115ESR now supports baseline Wasm as well as full optimizing Ion compilation for regular JS, and passes the complete test suite on Linux. It does not yet support POWER8, nor the optimizing Wasm compiler, so some applications will not run as well as they should (and obnoxiously asm.js code is not JITted at all in this configuration because it relies on the optimizing Wasm compiler, despite the fact it's regular JavaScript — for TenFourFox, which didn't support Wasm otherwise, I hacked JavaScript to simply compile asm.js with regular Ion). However, I do intend to add support for optimized Wasm and later POWER8, and with that said, the testers I've been seeding this with see good improvements for the vast majority of sites and no additional reproducible crashes so far.

If you'd like to give it a shot as well, then apply the new patches numerically and build as we did for Firefox 115, using the .mozconfigs from Firefox 105. For your convenience the JIT patch set already includes the PGO-LTO and WebRTC fixes for that version. If you don't want to roll your own browser (though I highly recommend it), then Dan Horák has you covered with a copr build for Fedora users. However, I don't intend to backport POWER8 or optimizing Wasm support to 115ESR; future work will be done on trunk, assuming Mozilla is fine with the existing changes. Do not post bugs with the ESR JIT to bug 1860412.

Apart from that, the other Firefox news is anticlimactic: Firefox 119 (I did a test build of Fx118 but hadn't tested enough to post about it) builds fine with the WebRTC patch from Fx116 (or --disable-webrtc in your .mozconfig), the PGO-LTO patch from Fx117 and the .mozconfigs from Firefox 105.

The Servo BlogServo announces grant from the NLnet Foundation

We are excited to announce that in July of this year, Servo received an NLnet grant to enhance several aspects of Servo. Under this grant, our primary focus is to:

  • Complete float support in Servo
  • Support more languages in inline layout
  • Add initial <table> support


Supporting floats in Servo has been an ongoing effort since mid-2023. We’ve made significant progress on floats, but there are still some issues that need to be addressed before Servo can boast a fully-compliant implementation of CSS floats.

Our objective is to achieve an average pass rate of over 80% for /css/CSS2/floats/ and /css/CSS2/floats-clear/. You can track the results on our WPT dashboard.

Last week, we surpassed this for the floats tests, with an 82.2% WPT pass rate:


We’re also nearing the milestone for floats-clear, currently at a 73.3% pass rate:

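As an aside, the aggregate percentages reported above can be thought of as the ratio of passing subtests to total subtests; here is a minimal sketch of that computation (the counts below are invented for illustration, and the real dashboard may aggregate differently):

```python
# Hypothetical sketch of a WPT-style pass rate; the per-test subtest
# counts are invented, not real /css/CSS2/floats/ results.

def pass_rate(results):
    """results: list of (subtests_passed, subtests_total) per test file."""
    passed = sum(p for p, _ in results)
    total = sum(t for _, t in results)
    return 100.0 * passed / total if total else 0.0

# e.g. three test files with varying subtest counts
print(f"{pass_rate([(10, 12), (7, 7), (3, 5)]):.1f}%")  # → 83.3%
```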

More languages in inline layout

Servo’s layout engine lacks crucial features for rendering languages that don’t use the Latin alphabet. This includes proper font selection, support for right-to-left scripts, and logical properties. Our aim is to improve Servo’s support for displaying a wider variety of content.

Initial <table> support

HTML tables are an important and widely used feature. Servo’s new layout engine doesn’t support tables yet, which leads to incorrect layout of many web pages. Under this scope, our main focus is to implement initial support for tables in Servo, so that it can render tables used on Wikipedia.

As we progress and achieve these milestones, we’ll cover them in more detail in subsequent blog posts. Stay tuned for more updates!

Support.Mozilla.OrgWhat’s up with SUMO – Q3 2023

Hi everybody,

Sarto here! It’s been a great 4 months! The time really flew by. First and foremost I would like to thank the community here at Mozilla for giving me grace and also showing me how passionate you guys truly are. I’ve worked in a handful of communities in the past but, by far, Mozilla has the most engaged community I’ve come across. The work that you guys put into Mozilla is commendable and valuable. For the community members and contributors that I was able to meet and interact with during my time here, thank you for sharing that passion with me. I’m handing the baton back over to Kiki. Till next time, keep on rocking the helpful web!


Welcome note and shout-outs from Q3

  • Big thanks to Paul who helped investigate 3 different incidents for Firefox in the last 2 weeks. There has been a huge amount of work going on for the CX team this quarter and you being involved in these incidents to help provide forum examples, follow up with users, and help herd some community folks to investigate has been very helpful.
  • Thanks to Jscher2000, Danny Colin, Paul, jonzn4SUSE, Dan, TyDraniu, and Zulqarnainjabbar99 for your input in the thread about UX Pain points leading to users leaving Firefox in the first 30 days.
  • Thank you to everyone who contributed to the release of Firefox 117 for Desktop, as well as all of the contributors who participated in the release thread.
  • Shout out to Paul for his work updating the Browsing history in Firefox – View the websites you have visited article for Firefox v118.
  • Shout out to Mark Heijl for his amazing job getting Dutch article translations (incl. all the Pocket ones) to 100%! And thank you Tim for bringing this to our attention!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in July, August, and September! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to get to know how to join. 
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Check out SUMO Engineering Board to see what the platform team is currently doing and submit a report through Bugzilla if you want to report a bug/request for improvement.

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Jul 2023 6,512,758 3.87%
Aug 2023 7,164,666 10.01%
Sep 2023 6,456,716 -9.88%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Jul 2023 pageviews (*) Aug 2023 pageviews (*) Sep 2023 pageviews (*) Localization progress (per Oct 30)(**)
de 11.09% 11.41% 11.12% 87%
zh-CN 6.98% 7.03% 6.67% 88%
fr 6.16% 5.95% 7.49% 80%
es 5.71% 5.50% 5.84% 23%
ja 4.81% 4.62% 4.84% 35%
ru 3.47% 3.48% 3.55% 84%
pt-BR 3.39% 3.66% 3.39% 43%
it 2.35% 1.98% 2.42% 91%
pl 2.06% 2.05% 1.99% 78%
zh-TW 1.91% 0.92% 2.16% 2%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jul 2023 2,664 76.28% 11.71% 59.24%
Aug 2023 2,853 79.36% 12.72% 49.59%
Sep 2023 2,977 72.93% 11.89% 67.89%

Top 5 forum contributors in the last 90 days: 

Social Support

Month Total tweets Total moderation by contributors Total reply by contributors Respond conversion rate
Jul 2023 317 157 83 52.87%
Aug 2023 237 47 33 70.21%
Sep 2023 192 47 22 46.81%

Top 5 Social Support contributors in the past 3 months: 

  1. Daniel B.
  2. Théo Cannillo
  3. Wim Benes
  4. Ifeoma
  5. Peter Gallwas

Play Store Support*

Month Total reviews Total conv interacted by contributors Total conv replied by contributors
Jul 2023 6,072 191 40
Aug 2023 6,135 185 55
Sep 2023 6,111 75 23
* Firefox for Android only

Top 5 Play Store contributors in the past 3 months: 

  1. Wim Benes
  2. Tim Maks
  3. Damian Szabat
  4. Christophe Villeneuve
  5. Selim Şumlu

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Mozilla Localization (L10N)L10n Report: November 2023 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

On October 24 we shipped Firefox 119 with a brand new locale: Santali (sat). This brings the overall number of locales supported in Firefox release to 102. Congratulations to Prasanta and the other Santali contributors for this huge accomplishment.

In terms of new content to translate, a couple of new features were responsible for most of the new strings over the last months: a new shopping feature (Review Checker), and a redesigned Firefox View page, which now includes more information to support the user (recent browsing, recently closed tabs, tabs from other devices, etc.).

Check your Pontoon notifications for instructions on how to test your localization for the Review Checker in Nightly.

In the current Nightly (121), we also migrated the integrated PDF Viewer to Fluent, finally replacing the unmaintained legacy l10n system (webl10n.js) used in this feature.
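For readers unfamiliar with Fluent, here is a hypothetical sketch of the shape of such a migration; the identifiers and strings below are invented for illustration and are not the actual PDF Viewer messages:

```fluent
# Legacy webl10n.js strings lived in .properties-style files, e.g.:
#   print_button.title=Print the document
# A Fluent (.ftl) message groups the value and its attributes together:
pdfjs-print-button = Print
    .title = Print the document
```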

What’s new or coming up in mobile

Yesterday we officially launched the brand update from “Firefox Accounts” to the more general “Mozilla accounts” – a change you have probably noticed in recent string updates. Please make sure to address these strings so you keep products up to date with the rebranding.

You may have also noticed that a few Android strings have landed for add-ons, specifically to call out that we have hundreds of new extensions. If you would like to have this experiment available in your locale, make sure you go into the Firefox for Android project in Pontoon, and choose the Fenix file. Then search for these string IDs:

  • addon_ga_message_title
  • addon_ga_message_body
  • addon_ga_message_button

You can find these from the search bar, once you are in the Fenix file in Pontoon.

What’s new or coming up in web projects

Mozilla Accounts

In early October Mozilla announced a name change for Firefox accounts, and as of November 1 Firefox accounts is now officially Mozilla accounts. Even before this, starting in September a significant number of new strings and changes related to this name change started making their way to you. Thank you for ensuring that your locales were updated and ready. The majority of locales shipping to production launched with all translations complete and ready for people around the world to use their Mozilla accounts in their own language. This is truly a result of your contributions! Now that these changes are live, please do reach out if you notice anything strange as you go about using your Mozilla account.

Since the last report, a few changes have landed in this project. In addition to the global change from Firefox account(s) to Mozilla account(s), the team also began to simplify the references to third-party brand names. The names are no longer inside a placeholder. This change will make it easier to translate long strings with many brand names, all too common in this project. Only Mozilla brands and product names will be coded in the placeholder. During this transition period, you will see a mixture of both. As we update a page or add a new page, the new approach will be applied.

A few new pages were added too. These are pages with file names ending in “-2023” or “-2”, replacing the older versions which will soon be removed from Pontoon. If you are working on these pages, make sure you are working on the new versions, not the old ones.

Relay Website

In the last report, we shared with you the news of migrating a few pages to mozilla.org. The migration was completed, which resulted in opening up Relay-specific pages to more locales. However, an internal decision has been made that these pages should remain on the current Relay product site and not move to mozilla.org.

We regret that the reversal of this decision came soon after the migration. We are having internal discussions around how we can better communicate changes in the future so that we can minimize the impact to our community volunteers.

The mozilla.org and Relay teams will work closely with the l10n team to migrate the content back to the existing product site. All the work you have done will be stored in Pontoon. The l10n team will make its best effort to preserve the history of each of the translated strings. For the locales that didn’t opt in to the Relay Website project but participated in the localization of the pages on mozilla.org, we encourage you to consider opting in on the Relay project if the community is interested and has the bandwidth.

What’s new or coming up in SUMO

Firefox Review Checker Sprint is happening as we launched Firefox 119. Please check out the sprint wiki to learn more about the details.

The Firefox account transition to Mozilla accounts: what do you need to know as a SUMO contributor?

The content team at SUMO is utilizing Bugzilla to collect content requests from other teams. If you’re contributing to content at SUMO, please check out these best practices for Bugzilla tickets.

What’s new or coming up in Pontoon

Light Theme

We are excited to announce that we have incorporated a light theme into Pontoon. The theme selector is available in two places:

  • Settings Page: Directly select the light theme.
  • User Profile Menu: Click on the profile icon (top right) and choose the light theme.

Newly published localizer facing documentation

We have added documentation on how to use the theme selector feature to access the light theme in the settings page and user profile menu.


We are hosting an L10n Fireside chat mid-November (date and time TBD). It will be live and recorded here. We are interested in your questions and topics! Please submit them in this form, or reach out directly to delphine at mozilla dot com if you prefer.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

We started a series called “Localizer Spotlight” and have published two already. Do you know someone who should be featured there? Let us know here!

Also, is there someone in your l10n community who’s been doing a great job and should appear in this section? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Firefox NightlyI Can :has Browser Improvements – These Weeks in Firefox: Issue 148


  • The platform team has sent an Intent to Ship for the :has selector! Currently targeting Firefox 121.
  • There’s a new item in the content context menu in Nightly that lets you copy URLs to the clipboard, but with known tracking parameters stripped:
    • The content context menu for Firefox is opened over a link to the Wikipedia page for hamsters. An item in the context menu is highlighted: "Copy Link Without Site Tracking"

      No more manual stripping of known tracking parameters!

  • Alex improved the JSTracer in the DevTools console by adding setInterval/setTimeout/requestAnimationFrame callbacks and DOM events in the traces (bug)
    • A trace being displayed inside of the Firefox DevTools console. Several frames in the trace are highlighted as having been entered via requestAnimationFrame.
    • You can test the JSTracer by setting devtools.debugger.features.javascript-tracing to true in about:config, and clicking the tracer icon in the debugger pane.
  • Alex and Hubert are working hard to make the Firefox Debugger much faster
    • Alex optimised our parser worker computations, bringing nice wins (e.g. 5% to 10% faster to open a large file) (bug)
    • Alex updated Babel to benefit from performance improvements made in the library lately, which were validated by DAMP results (e.g. the debugger is more than 10% faster to open a large file) (bug)
    • Hubert deferred some parsing work until needed for the Outline panel, so we don’t have to pay the performance cost upfront (bug)
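The “Copy Link Without Site Tracking” item above can be illustrated with a small sketch; note that the parameter list and code here are ours, for illustration only, not Firefox’s actual implementation or its maintained list of known tracking parameters:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Example tracking parameters only; Firefox's real list is larger
# and maintained internally.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://example.com/page?id=7&utm_source=newsletter"))
# → https://example.com/page?id=7
```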

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • anonymous0000007
  • Ganna
  • Gregory Pappas [:gregp]
  • Itiel
  • Jonas Jenwald [:Snuffleupagus]
  • Masatoshi Kimura [:emk]
  • Mathew Hodson
  • Sebastian [:sebcode]
  • Sebastian Zartner [:sebo]
New contributors (🌟 = first patch)

Project Updates

Developer Tools

  • Contributors
    • Calum Smith fixed the Inspector so it properly displays CSS Color 4 formats (e.g. lab, oklch, …) (bug)
  • Nicolas fixed an issue where Custom formatter hooks were not called with proxy objects (bug)
  • Bomsy and Nicolas fixed a bug where the Debugger tooltip wouldn’t display the actual value for the hovered token (bug)
  • Nicolas is still making progress on his accessibility project, mostly fixing color contrasts, focus indicator and keyboard navigation (bug, bug, bug, bug, progress chart not going down because I’m filing more bugs)
  • Nicolas fixed an issue in the Rule view when selecting a flex container with text-wrap: balance (bug)
WebDriver BiDi
  • Sasha implemented the browsingContext.contextDestroyed event that is emitted when browsing contexts are destroyed (bug)
  • Sasha added the defaultValue field to the browsingContext.userPromptOpened event (bug)
  • Sasha also added support for userActivation parameter in script evaluation (bug)
  • Sasha renamed ViewportOptions to BoxOptions for the browsingContext.captureScreenshot command to be spec-compliant (bug)
  • Julian added authChallenges to response data in network events ( network.responseStarted and network.responseCompleted) (bug)
  • Henrik fixed an issue with serialization of remote values (bug)
  • Henrik added support for serializing and deserializing Window objects in Marionette (bug)
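At the protocol level, consuming the new browsingContext.contextDestroyed event starts with a session.subscribe command. The message shapes below follow the WebDriver BiDi specification, but the id value, context id, and transport handling are simplified assumptions:

```python
import json

# Command sent over the BiDi WebSocket to opt in to the new event.
subscribe_cmd = {
    "id": 1,
    "method": "session.subscribe",
    "params": {"events": ["browsingContext.contextDestroyed"]},
}

# Shape of the event the browser then emits when a tab or frame is destroyed;
# per spec its payload mirrors browsingContext.contextCreated.
example_event = {
    "type": "event",
    "method": "browsingContext.contextDestroyed",
    "params": {
        "context": "ctx-42",          # browsing context id (example value)
        "url": "https://example.com/",
        "children": [],
        "parent": None,
    },
}

print(json.dumps(subscribe_cmd))
```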

ESMification status

  • ESMified status:
    • browser: 88%
    • toolkit: 99%
    • Total:  96.13% (up from 95.72%)
  • #esmification on Matrix

Lint, Docs and Workflow

Migration Improvements

Search and Navigation

  • Daisuke fixed an address bar bug where, when users tried to copy the URL, they would sometimes copy about:blank instead if the page took too much time to load
  • James landed a patch to extend search telemetry so that search counts are now broken down by window type
  • Marc Seibert landed a long series of patches to enable trimming of https and the insecure connection label in the address bar. This is currently enabled only in Nightly.
  • James fixed a bug in our SERP telemetry where we were recording an incorrect number of displayed ads
  • Karandeep updated the desktop configuration for Salidzini, a Latvian search engine
  • Dale fixed a bug so that we now restrict search history deletions by search engine
  • Drew landed a patch that modifies the Firefox Suggest desktop integration so that it now uses Addon and Pocket suggestions from the Suggest Rust component
  • Mark Banner landed a patch that extends and updates our SERP telemetry configuration for mobile
  • Drew landed a patch that fixes dismissals and telemetry for Wikipedia Suggest results coming from the new Rust component

Storybook/Reusable Components

Mozilla Privacy BlogMozilla Mornings: Big tech, big AI? Why keeping markets open matters

Mozilla Mornings in Brussels is back, debating AI competition and open systems

AI dominates global policy discussions, but there’s no consensus on how to act. A topic gaining prominence is how to maintain a competitive market where new players, small businesses, non-profits, and others can access innovative tools.

The significant costs of building advanced AI models could eventually lead to market consolidation. Alternatively, open-source advocates see open-source AI as a means to challenge the existing (or future) concentration of power in a small number of tech companies, and a way to reduce the barriers to developing AI models.

Is this duality the only way forward, or does the reality lie somewhere in the middle? Can policy intervention ensure markets remain open and competitive while allowing AI to reach its full potential?

To discuss these issues, we are delighted to announce that the following speakers will be participating in our panel discussion, moderated by Mark Scott, POLITICO’s Chief Technology Correspondent:

  • Cornelia Kutterer, Senior Researcher, MIAI, University of Grenoble Alpes
  • Max von Thun, Europe Director at Open Markets Institute
  • Connor Dunlop, Europe Policy Lead at Ada Lovelace Institute
  • Gabriele Mazzini, Team Leader for AI Act at European Commission (TBC)

MEP Marcel Kolaja (CZ, Greens/EFA) will be the keynote speaker, with opening remarks by Linda Griffin, VP Global Public Policy at Mozilla.

  • Date: Tuesday 28 November
  • Time: 08:30-10:30 CET
  • Location: Sofitel, Place Jourdan 1, Brussels, 1040

To register, click here.

The post Mozilla Mornings: Big tech, big AI? Why keeping markets open matters appeared first on Open Policy & Advocacy.

Support.Mozilla.OrgMozilla account rename – Changes on the support flows

If you’ve been contributing to the support forum on the Mozilla Support platform, you may be aware of the difficulties of supporting users with Firefox account problems: the lack of safety measures for dealing with PII (Personally Identifiable Information) in the forum, the ambiguity of some security terminology (recovery codes vs. recovery key) and, ultimately, the lack of infrastructure to support users with account recovery issues without having them lose their data.

With the momentum of the Firefox accounts rebrand to Mozilla accounts, the Customer Experience team has prepared a new flow to support this transition, as well as to build the foundation for a better support experience for account holders in the long run.

The new support flows

If you’re contributing to Mozilla Support, here’s what you need to know about the new support flow:

  • Mozilla account specific contact form

Users with Mozilla account issues can now submit their questions to the Mozilla account contact form, which can be accessed from the Get Help fly-out menu. Questions submitted to the Mozilla account contact form will be handled by dedicated support agents who are better equipped to deal with PII and have access to the infrastructure needed to solve more complex cases.

Screenshot of the new fly-out menu in the Mozilla Support platform

  • Login-less support

We also introduced login-less support for account holders who lose access to their account. This type of support can be accessed from the login prompt. Questions submitted from this contact form will also be handled by dedicated support agents.

Screenshot of the new login prompt in the Mozilla Support platform

Implication for the Forum & Social Support contributors

If you’re a forum contributor or you have access to Verint, please help us direct any questions related to Mozilla account to the Mozilla account contact form. We have a forum common response for this called ‘Mozilla account contact form’ and a clipping in Verint called ‘Mozilla account contact form’ that you can use at your convenience.

Mozilla account as a product in SUMO

Technically, we have created a new product for Mozilla account in SUMO, which means that we’ll host future articles related to Mozilla account in this category. However, it won’t be visible as a tile on our product selection page. If you see Firefox account still mentioned in a KB article, or if you see an article that should be moved to the Mozilla account category, please notify the content team. You can also check out this article to learn more about editorial guidelines for Mozilla account in our Knowledge Base.

Implication for the locale teams

You should expect to see many translated articles become outdated due to the update that we’re doing with the English KB articles. Please check the Recent Revisions page to see the articles that we’ve updated as part of this launch.

Frequently asked questions

What to do when encountering users with Mozilla account problems?

Please direct any questions related to Mozilla account to the Mozilla account contact form, unless it can be solved with KB articles.

Does this also include users with Firefox Sync issue?

The login-less contact form is intended for users with login issues, while the signed-in contact form is intended for account-related issues. In short, Firefox Sync is out of scope for now.

Do we support account recovery now?

Account recovery is a complicated process, and we don’t have the infrastructure yet to handle every case. However, that’s part of the scope of this new support infra, and you should direct users with this issue to file a ticket.

If you have other questions about this change, please join our discussion in this forum thread!

Mozilla Addons BlogIs your extension ready for Firefox for Android? Be part of the launch of a new open mobile ecosystem

During the release cycle of Firefox 120, we’ll begin to see the emergence of dozens of new, openly available extensions for Firefox for Android on addons.mozilla.org (AMO). We’re taking a steady approach to opening up the mobile extension ecosystem to ensure Firefox for Android maintains strong performance standards while a vast new array of extensions is utilized for the first time in a mobile environment. If testing continues to progress well, we anticipate unveiling a fully open Firefox for Android extension ecosystem sometime in December. Stay tuned for details.

For developers interested in optimizing desktop extensions for Firefox for Android usage, now’s the perfect time to assess your extension and take necessary steps to make your extension part of the coming first wave of openly available extensions on Firefox for Android.

We anticipate strong interest from users excited to explore all the new ways they can customize Firefox for Android. Current trends indicate we’ll have at least 200+ new Firefox for Android extensions on AMO when open availability debuts in December. And while a couple hundred extensions is more variety than you’ll find on any other mobile browser, it is significantly fewer than the nearly 40,000 desktop Firefox extensions on AMO. So the opportunity for heightened discoverability with new users may be intriguing to some developers.

It’s also a great time for developers who are intrigued at the prospect of creating new ways Firefox for Android users will fundamentally experience the mobile web. Are there browsing problems unique to the mobile environment that web extensions can solve? How can we enhance mobile web experiences with extensions? How can extensions empower mobile users? It’s an open invitation to innovation.

For developers keen to learn more about making their desktop extensions compatible on Firefox for Android, here are some timely resources (in addition to Firefox Add-ons Discourse where you can hit us up anytime with questions)…

Webinar: Setup, testing, debugging 

Time: Wednesday, November 15 at 11am EDT

Senior Developer Relations Engineer Simeon Vincent will host his second webinar dedicated to Firefox for Android extension development and desktop migration. The November 15 session will focus on Firefox for Android development setup steps like getting started with Android Studio, creating a virtual device for QA and getting Firefox Nightly readied for remote debugging.

Register for the livestream!

Check out our first Firefox for Android webinar from October.

Open office hours 

Time: Every Monday, Tuesday

Simeon also hosts weekly open “office hours” for anyone interested in signing up to receive 1:1 guidance on Firefox for Android extension development. These open office hours are only scheduled to run through December, so don’t be shy to tap Simeon’s expertise as you prepare your extension for mobile release.

First 200 Firefox for Android extension developers (to email us) get a free t-shirt!

Sorry to bury the lede, but we’re also giving away this one of a kind “Early Add-opter” t-shirt to the first 200 developers who… 1) make their extension functional on Android; and 2) email us at firefox-android-addon-support [at] with a link to your extension’s AMO listing page. If your extension works as expected on Firefox for Android and you’re one of the first 200 to reach out we’ll be in touch with the t-shirt ordering details.

Can you imagine yourself wearing this t-shirt, just chilling after you’ve made your desktop extension compatible on Firefox for Android? 

The post Is your extension ready for Firefox for Android? Be part of the launch of a new open mobile ecosystem appeared first on Mozilla Add-ons Community Blog.

Mozilla ThunderbirdFix Font Scaling and Density Issue on Thunderbird 115 Upgrade

The Thunderbird Community Support logo

If you have recently upgraded to Thunderbird 115 “Supernova” and noticed a smaller font size or a presentation that feels too compact, there might be an easy solution: you can change the font size globally from the app menu by clicking +/− (see below).

As the GIF above illustrates, here’s how to do it:

  • Change the density to suit your taste: Click ≡ > Density. Then click “Relaxed” to increase the size of UI elements in Thunderbird or click “Compact” to decrease the size.
  • Change the font size to suit your needs: Click ≡ > Font Size. Click + to increase the size of the fonts or click − to decrease the size of the fonts.

If for some reason that does not fix your font resizing problem, then you may have hit a known issue about the font size going back to a smaller size due to some unsupported properties being adjusted either directly or via an add-on.

You can find more detailed info on fixing these technical issues in our support article.

The post Fix Font Scaling and Density Issue on Thunderbird 115 Upgrade appeared first on The Thunderbird Blog.

The Mozilla BlogWhy we’re renaming ‘Firefox accounts’ to ‘Mozilla accounts’

For many Firefox users, a Firefox account has been indispensable. It safely syncs everything from open tabs, bookmarks, history and add-ons to passwords, credit card information and saved addresses across desktop and mobile devices. 

In fact, over the past few years, a Firefox account has grown its support beyond our beloved web browser. It’s now the authentication and account management tool for millions of users across Mozilla’s family of products – all designed to keep people safer and smarter online. So, to reflect this expanding world of Mozilla services, we’ve made the decision to rename “Firefox accounts” to “Mozilla accounts.”

More access for Firefox customers

With this name change, we hope people who love Firefox will continue to support the open-source browser that kickstarted Mozilla’s journey. We also welcome them to explore our growing slate of Mozilla’s people-first products.

Want a more transparent online shopping experience? Try Fakespot. Looking to secure your internet connection? Check out Mozilla VPN. Relay will help protect your phone number and email addresses from spammers, while Monitor can help keep your sensitive data private. And if you hope to find and save the best content online, give our popular Pocket app a try.

A seamless transition (really)

If you’re already a Firefox account customer, no need to create a new account. Or do anything at all.

You can log in with the same email and password, and you’ll find your information saved and secure right where you left it. We’ll continue to send out emails from to keep important communications consistent. Rest assured, our terms of service and privacy notice remain unchanged.

 In other words, you get all the same benefits of a Firefox account — just under the Mozilla name. Check out our support page for additional information.

Using Mozilla products just got easier

You can now sign in across Mozilla’s products with your Google or Apple ID. Using your Google or Apple ID makes it easy to authenticate your identity and recover access to your account. While you can use your Google or Apple ID to log into your Mozilla account, you’ll still need to set a password for your Mozilla account in order to sync your browser bookmarks, history, passwords, open tabs and more. Find out more in our support article.

Beyond our current offerings for a safer and more secure internet, we invite everyone to keep an eye out for more exciting things to come. Whether that’s our investment in trustworthy AI, our exploration into a better social media or our continuing efforts to advocate for ethical tech policies.

The needs of web users are always evolving. We’re right there with you. But have no doubt that one thing will remain constant: Mozilla’s dedication to keep the internet open, accessible and healthy for all.

The post Why we’re renaming ‘Firefox accounts’ to ‘Mozilla accounts’ appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 519

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is silkenweb, a library for building web apps with fine-grained reactivity and a clean separation of logic and UI.

Thanks to henrik for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

408 pull requests were merged in the last week

Rust Compiler Performance Triage

This week we have two sets of results as last week's arrived later than the publish date:

Triage done by @rylev and @simulacrum.

Revision range: b9832e72..650991d

Across both reports:

9 Regressions, 7 Improvements, 5 Mixed; 127 artifact comparisons made in total

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Language Reference
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs
  • No New or Updated RFCs were created this week.
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-11-01 - 2023-11-29 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

After doing a best fit, we found Rust projects were less likely to introduce vulnerabilities than their equivalent C++ projects at all relevant experience levels, but more importantly, we found the effect was most significant for first-time contributors, who were almost two orders of magnitude less likely to contribute vulnerabilities. That is, even though Rust may have a reputation as a harder language to learn, there is a very measurable effect that makes it better for newbies. Reviewers should not have to put as much effort into reviewing code to be confident that someone making their first foray into their project is accidentally adding a vulnerability.

Justin Tracey on

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogQuick as a Fox: Firefox keeps getting faster

Web browsing is a pervasive part of modern life, and the quality of the experience directly affects the quality of your day. When your tasks are disrupted by slow or unresponsive pages, it is frustrating and distracting. As such, performance is a key component of Mozilla’s vision for the web.

To deliver against our vision and enable a better online experience for everyone, we’ve been working hard on making Firefox even faster. We’re extremely happy to report that this has resulted in a significant improvement in speed over the past year.

Improvements on benchmarks

One way to judge browser performance is by using industry benchmarks. We have seen measurable improvements here, specifically around the popular benchmark Speedometer 2.1.  This benchmark measures browser responsiveness by simulating user interactions (such as manipulating a list of to-do items).

Since January 2023, Firefox’s Speedometer score has improved by 50%, a significant performance improvement for our users.

Performance on the web

Yes, benchmarks matter, but it’s worth pointing out they only simulate what a user could experience. It was important for us to verify that the performance improvements were actually being felt by users.

We’ve observed improvements on performance metrics that matter. In particular, pages are appearing 15% faster on average:

It is extremely gratifying to see that the improvements in benchmark scores are actually being felt by Firefox users everywhere. If you’re interested in getting more technical details, check out our recent blog post on Mozilla Hacks

It’s been an exciting year for the Firefox Performance team – and we’re not stopping any time soon. This is a preview of the work we’ve been doing, and we’ll be sharing more technical detailed posts in the next few weeks on Mozilla Hacks. 

Get Firefox

Get the browser that protects what’s important

The post Quick as a Fox: Firefox keeps getting faster appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgDown and to the Right: Firefox Got Faster for Real Users in 2023

One of the biggest challenges for any software is to determine how changes impact user experience in the real world. Whether it’s the processing speed of video editing software or the smoothness of a browsing experience, there’s only so much you can tell from testing in a controlled lab environment. While local experiments can provide plenty of metrics, improvements to those metrics may not translate to a better user experience.

This can be especially challenging with complex client software running third-party code like Firefox, and it’s a big reason why we’ve undertaken the Speedometer 3 effort alongside other web browsers. Our goal is to build performance tests that simulate real-world user experiences so that browsers have better tools to drive improvements for real users on real webpages. While it’s easy to see that benchmarks have improved in Firefox throughout the year as a result of this work, what we really care about is how much those wins are being felt by our users.

In order to measure the user experience, Firefox collects a wide range of anonymized timing metrics related to page load, responsiveness, startup and other aspects of browser performance. Collecting data while holding ourselves to the highest standards of privacy can be challenging. For example, because we rely on aggregated metrics, we lack the ability to pinpoint data from any particular website. But perhaps even more challenging is analyzing the data once collected and drawing actionable conclusions. In the future we’ll talk more about these challenges and how we’re addressing them, but in this post we’d like to share how some of the metrics that are fundamental to how our users experience the browser have improved throughout the year.

Let’s start with page load. First Contentful Paint (FCP) is a better metric for felt performance than the `onload` event. We’re tracking the time it takes between receiving the first byte from the network to FCP. This tells us how much faster we are giving feedback to the user that the page is successfully loading, so it’s a critical metric for understanding the user experience. While much of this is up to web pages themselves, if the browser improves performance across the board, we expect this number to go down.

Graph of the median time between response start and first contentful paint, going from ~250 to ~215. Three distinct areas with a more pronounced slope are visible in mid february, late April and the largest in late July.

Image 1 – Median time from Response Start to First Contentful Paint in milliseconds

We can see that this time improved from roughly 250ms at the start of the year to 215ms in October. This means that a user receives feedback on page loads almost 15% faster than they did at the start of the year. And it’s important to note that this is all the result of optimization work that didn’t even explicitly target pageload.

In order to understand where this improvement is coming from, let’s look at another piece of timing data: the amount of time that was spent executing JavaScript code during a pageload. Here we are going to look at the 95th percentile, representing the most JS heavy pages and highlighting a big opportunity for us to remove friction for users.

A graph of the 95th percentile of JS execution time during pageload. It runs from ~1560 in January 2023 to ~1260 by October 2023. In general it's a steady downward slope with a small downward jump in April and a large downward jump during August.

Image 2 – 95th Percentile of JS execution time during pageload in milliseconds

This shows the 95th percentile improving from ~1560ms at the beginning of the year, to ~1260ms in October. This represents a considerable improvement of 300ms, or almost 20%, and is likely responsible for a significant portion of the reduced FCP times. This makes sense, since Speedometer 3 work has led to significant optimizations to the SpiderMonkey JavaScript engine (a story for another post).

We’d also like to know how responsive pages are after they are loaded. For example, how smooth is the response when typing on the keyboard as I write this blogpost! The primary metric we collect here is the “keypress present latency”; the time between a key being pressed on the keyboard and its result being presented onto the screen. Rendering some text to the screen may sound simple, but there’s a lot going on to make that happen – especially when web pages run main thread JavaScript to respond to the keypress event. Most typing is snappy and primarily limited by hardware (e.g. the refresh rate of the monitor), but it’s extremely disruptive when it’s not. This means it’s important to mitigate the worst cases, so we’ll again look at the 95th percentile.

A graph of the 95th percentile of the keypress present latency. Ranging from January 2023 to October 2023. It hovers fairly steady around 65ms, even seemingly going up a bit between March and May. Before dropping down to about 58-59ms over the course of August and September 2023.

Image 3 – 95th Percentile of the keypress present latency

Once again we see a measurable improvement. The 95th percentile hovered around 65ms for most of the year and dropped to under 59ms after the Firefox 116 and 117 releases in August. A 10% improvement to the slowest keypresses means users are experiencing more instantaneous feedback and fewer disruptions while typing.
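For anyone unfamiliar with the metric, the 95th percentile is simply the latency value that 95% of samples fall under, so it tracks the slowest (most disruptive) keypresses. A quick sketch with made-up latency samples:

```python
import statistics

# Hypothetical keypress present latency samples, in milliseconds
samples = [12, 14, 15, 16, 18, 20, 22, 25, 30, 40,
           45, 50, 55, 58, 60, 62, 64, 66, 70, 120]

# quantiles(n=100) returns the 1st..99th percentile cut points,
# so index 94 is the 95th percentile
p95 = statistics.quantiles(samples, n=100)[94]
print(f"95th percentile: {p95:.1f} ms")  # dominated by the slowest keypresses
```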

We’ve been motivated by the improvements we’re seeing in our telemetry data, and we’re convinced that our efforts this year are having a positive effect on Firefox users. We have many more optimizations in the pipeline and will share more details about those and our overall progress in future posts.

The post Down and to the Right: Firefox Got Faster for Real Users in 2023 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdThunderbird 115 and Signatures Using The Obsolete SHA-1 Algorithm

Several red keys on a light blue background.

As part of our continuing efforts to strengthen the security of Thunderbird, a change was made in version 115.0 that rejects the use of the SHA-1 algorithm in digital signatures of S/MIME emails.

The SHA-1 algorithm is nowadays considered insecure in most contexts, which includes digital signatures, as explained in the related Wikipedia article.

Because of the change in Thunderbird 115, when an affected message is displayed, an invalid signature will be reported.

You can spot such messages by looking at the message source, and search for the text micalg= in the message headers. If it is followed by the text sha-1 or sha1, you should contact your correspondent and ask them to upgrade.
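For illustration, that manual check can be automated. This is a hedged Python sketch, assuming you have the raw message source as a string; the micalg parameter appears on the multipart/signed Content-Type header, and the example header below is made up:

```python
import re

def uses_sha1_signature(message_source: str) -> bool:
    """Return True if the S/MIME signature's digest algorithm is SHA-1."""
    match = re.search(r'micalg="?([\w-]+)"?', message_source, re.IGNORECASE)
    return bool(match) and match.group(1).lower() in ("sha-1", "sha1")

header = ('Content-Type: multipart/signed; '
          'protocol="application/pkcs7-signature"; micalg=sha-1; boundary="b1"')
print(uses_sha1_signature(header))  # True → ask your correspondent to upgrade
```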

Most modern email software that supports S/MIME should already be able to use another hash algorithm, for example SHA-256 is a more modern alternative. It might be necessary to change a setting to enable its use.

The Thunderbird team was recently made aware that the use of SHA-1 is still required in some environments, as some government agencies continue to send out messages based on SHA-1. Recipients of such messages asked for a way to confirm the validity of such signatures, despite the risk that the signature could have been forged.

To accommodate those Thunderbird users, starting with version 115.4.1 a new configuration mechanism will be made available. It can be used to accept S/MIME signatures based on SHA-1. To enable it, use Thunderbird’s settings, access the advanced config editor, search for the setting mail.smime.accept_insecure_sha1_message_signatures and set it to the value true.

Note that changing this setting is not recommended, and if you decide to set it, you should work with your correspondents to get them to change to SHA-256 or newer as soon as possible. Once your correspondents have upgraded, you should revert the setting to false.

Changing the setting will have no effect on the messages that Thunderbird sends. Thunderbird uses SHA-256 when sending digitally signed S/MIME email messages, and has been doing so for several years already.

The Thunderbird team understands that it might seem early to demand the deprecation of insecure algorithms while other software still uses them, given the incompatibilities that some users experience. However, in line with our mission to increase the security of users, we hope that our actions can raise awareness and motivate deployments to upgrade to more secure settings, which they might not otherwise have done.

The post Thunderbird 115 and Signatures Using The Obsolete SHA-1 Algorithm appeared first on The Thunderbird Blog.

Will Kahn-GreeneTecken/Socorro: Code info lookup: retrospective (2023)



6 weeks

  • improved visibility on set of crash reports by fixing symbolication and signatures

  • better understanding of consequences of sampling Firefox / Windows < 8.1 / ESR crash reports


In November 2021, we wrote up a bug in the Tecken product to support downloading symbols files using the code file and code id.

In July 2023, Mozilla migrated users on Windows 7, 8, and 8.1 from the Firefox release channel to the ESR channel. Firefox / Windows / release is sampled by the Socorro collector, so the system only accepts and processes 10% of incoming crash reports. When the users were migrated, their crash reports moved to an unsampled group, so we were then getting 100% of those incoming crash reports. That caused a volume increase of 30k crash reports.
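The sampling behaviour can be pictured as a simple accept/reject rule applied per incoming report. This sketch is purely illustrative: the real Socorro collector rules are configured server-side and match on more fields than product and channel:

```python
import random

SAMPLE_RATE = 0.10  # accept 10% of matching crash reports

def accept_crash_report(product: str, channel: str, rng=random.random) -> bool:
    # Illustrative rule: Firefox release-channel reports are sampled at 10%;
    # everything else (including ESR) is accepted unsampled.
    if product == "Firefox" and channel == "release":
        return rng() < SAMPLE_RATE
    return True

print(accept_crash_report("Firefox", "esr"))  # ESR is unsampled, so: True
```

This is why the channel migration mattered: the same users' crashes moved from the 10% bucket to the always-accept bucket.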

I looked into adding another sampling rule for Firefox / Windows < 8.1 / ESR, but many of the crash reports had a xul module where there wasn't a debug file and debug id in the module list stream in the minidump, so we couldn't get symbols files for them. Because of that, we didn't have much visibility into this group of crash reports.

I looked at [bug 1746940] and worked out how to fix it. I thought it would be relatively straightforward, so I prioritized working on it with the assumption it'd take a week to do.

I hit a bunch of road bumps and it took me 6 weeks to work through several attempts, settle on a final architecture, implement it, test it, and push all the pieces to production. I finished the work on October 24th, 2023.

The end result is a radically reduced number of crash reports where the stackwalker couldn't symbolicate xul.dll addresses because of missing debug file and debug id.

Read more… (14 min remaining to read)

Firefox NightlyIntroducing Mozilla’s Firefox Nightly .deb Package for Debian-based Linux Distributions

Great news for people using Firefox Nightly on Debian-based Linux distributions (such as Debian, Ubuntu, Linux Mint, and others): installing, updating, and testing the latest Firefox Nightly builds just got a lot easier. We’ve set up a new APT repository for you to install Firefox Nightly as a .deb package. These packages are compatible with the same Debian and Ubuntu versions as our traditional binaries. If you’ve previously used our traditional binaries (distributed as .tar.bz2 archives), switching to Mozilla’s APT repository allows Firefox to be installed and updated like any other application. Your feedback is invaluable to us, so don’t hesitate to report any issues you encounter to help us improve the overall experience.

Adopting Mozilla’s Firefox Nightly .deb package offers multiple benefits:

  • you will get better performance thanks to our advanced compiler-based optimizations,
  • you will receive the latest updates as fast as possible because the .deb is integrated into Firefox’s release process,
  • you will get hardened binaries with all security flags enabled during compilation,
  • you will not have to create your own .desktop file,
  • you will be able to continue browsing after upgrading the package.

To set up the APT repository and install the Firefox Nightly .deb package, simply follow these steps:

# Create a directory to store APT repository keys if it doesn't exist:
sudo install -d -m 0755 /etc/apt/keyrings

# Import the Mozilla APT repository signing key:
wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null

# The fingerprint should be 35BAA0B33E9EB396F59CA838C0BA5CE6DC6315A3
gpg -n -q --import --import-options import-show /etc/apt/keyrings/packages.mozilla.org.asc | awk '/pub/{getline; gsub(/^ +| +$/,""); print "\n"$0"\n"}'

# Next, add the Mozilla APT repository to your sources list:
echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee -a /etc/apt/sources.list.d/mozilla.list > /dev/null

# Update your package list and install the Firefox Nightly .deb package:
sudo apt-get update && sudo apt-get install firefox-nightly

And that’s it! You have now installed the latest Firefox Nightly build .deb package on your Debian-based Linux distribution.

For those of you who would like to use Firefox Nightly in a different language than American English, we have also created .deb packages containing the Firefox language packs. To install a specific language pack, replace fr in the example below with the desired language code:

sudo apt-get install firefox-nightly-l10n-fr

To list all the available language packs, you can use this command after adding the Mozilla APT repository and running sudo apt-get update:

apt-cache search firefox-nightly-l10n

We hope this new installation method makes it easier for people on Debian-based Linux distributions to test and provide feedback on the latest Firefox developments. Your participation in the Nightly community plays a critical role in helping us deliver the best possible browser experience.

Following a period of testing, these packages will become available on the beta, esr, and release branches of Firefox.

Thank you for your support, and we look forward to hearing your feedback.

Edit (November 8, 2023): Following community discussions, we have updated the post to highlight that Firefox can continue browsing after an APT upgrade, allowing people to restart at their convenience.

Edit (October 31, 2023): Based on feedback from our readers, we’ve updated the installation steps to align with the latest best practices. Instead of storing the de-armored key in /etc/apt/trusted.gpg.d, the steps now keep the armored signing key in the /etc/apt/keyrings directory.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: September 2023 Progress Report

a dark background with thunderbird and k-9 mail logos centered, with the text "Thunderbird for Android, September 2023 progress report"

Welcome back to your monthly K-9 Mail update! The previous month ended with Mozilla’s All-Hands event in Montreal, Canada. While I used this opportunity for a three-week-long vacation in Canada (it was great, see picture below), Wolf went back to work on your favorite mobile email client as it transforms into Thunderbird for Android.

<figcaption class="wp-element-caption">Algonquin Provincial Park, Ontario, Canada | Photo credit: cketti</figcaption>

Improved account setup

Wolf continued to work on the new and improved account setup code. This mostly involved fixing bugs and improving the internal architecture, so the code will be easier to maintain in the future.

With the switch to the new account setup code, we were able to remove (some of) the old setup code. If you’re a software developer, you know that being able to delete a significant amount of old code is one of the best feelings on the job. If you’re not, just take my word for it.

Wolf also started work on using the new server settings screens when editing the incoming and outgoing server of an existing account. Once that work is complete we’ll be able to delete even more old code.

Unfortunately, none of this work resulted in new screens that we could show off in this progress report. But maybe the following stats can give an idea of how busy Wolf was.

App maintenance

These are some of the more notable bugs we fixed in September.

Vector image as app icon

Some users reported that the splash screen that newer Android versions automatically display shows a blurry app icon. The reason was that we used a bitmap that looked fine as a regular-sized icon, but looked blurry when scaled up, e.g. for the splash screen.

We fixed this by converting the icon into the vector image format supported by Android. To be able to do that we had to remove some details from the icon. But the result is a sharp app icon on the splash screen.

Fixed OAuth 2.0 for Yahoo and AOL

In our new setup code we accidentally broke OAuth 2.0 support for Yahoo and AOL accounts. Apparently some people still use those email providers. So we fixed the bug.

Cleaned up “Return to list after delete” setting

K-9 Mail allows the user to specify what is displayed next after a message has been deleted from the message view screen. Available options are:

  • return to message list
  • show next message
  • show previous message

However, those are not the options a user could select in app settings. There were two preferences: Return to list after delete and Show next message after delete.

During one of our design meetings we quickly decided this is not a great user experience and changed it to one setting with three options.

Since the same behavior is also used after moving a message, we also used this opportunity to change the name of the setting.

Community contributions

In September we merged the following pull requests by external contributors:

Thank you. Your work is greatly appreciated ❤


We didn’t release any beta or stable versions in September. However, that’s an exception. Usually we publish a couple of beta releases a month. If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: September 2023 Progress Report appeared first on The Thunderbird Blog.

Mozilla Privacy BlogGlobal Network Fee Proposals are Troubling. Here are Three Paths Forward.

Today we’re sharing our perspective on the EU’s network fee proposal (aka. “fair share”) that would mandate payments from large Content and Application Providers (“CAPs,” such as YouTube or Netflix) to telecommunications network operators. We believe that our position paper is particularly timely given this week’s EU informal ministerial meeting in León.

Regulators and legislators in the US, Brazil, and India are considering similar policy proposals, and our position on those initiatives is no different.

Our analysis? These proposals would violate network neutrality, a bedrock principle of good internet policy, while enriching billion-dollar-revenue telcos – and, most importantly, they would obscure the real goal of digital inclusion. Here’s our perspective:

  • Digital inclusion should be the focus and priority of policy-makers, rather than the profitability of European telcos. The European Telecommunications Network Operators’ Association (ETNO) has attempted to turn the spotlight on their members with their network fee proposal. Yet any direct payments from CAPs to telcos would be no guarantee of more equitable, inclusive, affordable access for all.
  • Evidence should be transparent and verifiable, whether for or against the network fee proposal. Neither the underlying methodology nor the sources of the evidence supplied by ETNO in support of the network fee proposal are transparent. This is amply illustrated by the fact that some of ETNO’s claims are contradicted by the annual reports of their member operators.
  • ETNO’s claims that their proposal would not violate net neutrality have been rejected by regulators and are not supported by historical or economic evidence. Such mandated payments would effectively grant network operators a termination monopoly, giving them gatekeeper control over content providers’ ability to reach their customers. There is increasing evidence that the biggest telecom operators are already attempting to extract such payments for sufficient connectivity in their network.
  • Finally, many of the concerns raised by network operators are best addressed via competition tools, not network fee payments.

For each of these buckets, we highlight a path forward that stresses public benefit over the “clash of giants” inherent to the network fee debate.

Recently, both the EU and Brazil released the results of their respective network fee consultations. We are heartened to see widespread opposition to the network fee concept. The results of the EU Commission’s consultation in particular, watched by regulators around the world, present a clear catalyst for policymakers to shift their attention to policy proposals which will more clearly benefit the public interest.

Read our full position paper here.

The post Global Network Fee Proposals are Troubling. Here are Three Paths Forward. appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogDropping support for non-canonical downloads


  • We want to improve the reliability and performance of crate downloads.
  • "Non-canonical downloads" (that use URLs containing hyphens or underscores where the published crate name uses the opposite) are blocking these plans.
  • On 2023-11-20 support for "non-canonical downloads" will be disabled.
  • cargo users are unaffected.

What are "non-canonical downloads"?

The "non-canonical downloads" feature allows everyone to download the serde_derive crate not only from its canonical download URL, but also from a URL in which the underscore is replaced with a hyphen (crates.io normalizes underscores and hyphens to be the same for uniqueness purposes, so it isn't possible to publish a crate named serde-derive because serde_derive exists) or in which parts of the crate name use uppercase characters. The same also works vice versa: if the canonical crate name uses hyphens, the download URL can use underscores instead. It even works with any other combination for crates that have multiple such characters (please don't mix them…!).
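As a rough illustration of that normalization rule, here is a sketch in Rust (an invented reimplementation for explanation, not the actual crates.io code):

```rust
// Illustrative sketch: crates.io treats hyphens and underscores as
// equivalent for crate-name uniqueness, and matching is
// case-insensitive. Here we map everything to a canonical form.
fn canonicalize(name: &str) -> String {
    name.chars()
        .map(|c| if c == '-' { '_' } else { c.to_ascii_lowercase() })
        .collect()
}

fn main() {
    // All of these resolve to the same canonical crate name:
    assert_eq!(canonicalize("serde-derive"), "serde_derive");
    assert_eq!(canonicalize("Serde_Derive"), "serde_derive");
    assert_eq!(canonicalize("serde_derive"), "serde_derive");
    println!("ok");
}
```

Serving a non-canonical URL thus requires mapping the requested name back to the canonical one, which is exactly the database lookup described below.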

Why remove it?

Supporting such non-canonical download requests means that the server needs to perform a database lookup for every download request to figure out the canonical crate name. The canonical crate name is then used to construct a download URL and the client is HTTP-redirected to that URL.

While we introduced a caching layer some time ago to address some of the performance concerns, having all download requests go through our backend servers has still started to become problematic, and at the current rate of growth it will not become any easier in the future.

Having to support "non-canonical downloads" however prevents us from using CDNs directly for the download requests, so if we can remove support for non-canonical download requests, it will unlock significant performance and reliability gains.

Who is using "non-canonical downloads"?

cargo always uses the canonical crate name from the package index to construct the corresponding download URLs. If support for non-canonical downloads was removed on the crates.io side, cargo would still work exactly the same as before.

Looking at the request logs, the following user-agents are currently relying on "non-canonical downloads" support:

  • cargo-binstall/1.1.2
  • Faraday v0.17.6
  • Go-http-client/2.0
  • GNU Guile
  • python-requests/2.31.0

Three of these are just generic HTTP client libraries. GNU Guile is apparently a programming language, so most likely this is also a generic user-agent from a custom user program.

cargo-binstall is a tool enabling installation of binary artifacts of crates. The maintainer is already aware of the upcoming change and confirmed that more recent versions of cargo-binstall should not be affected by this change.

We recommend that any scripts relying on non-canonical downloads be adjusted to use the canonical names from the package index, the database dump, or the API instead. If you don't know which data source is best suited for you, we welcome you to take a look at the data access page.
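For scripts that read the package index directly, index files live under paths derived from the lowercased crate name. A small Rust sketch of that documented path layout (the crate names shown are only examples, and ASCII names are assumed):

```rust
// Sketch of the registry index path layout: 1-, 2- and 3-character
// names get special prefixes; longer names are filed under the first
// two characters, then the next two. Names are lowercased (ASCII
// assumed here for simplicity).
fn index_path(name: &str) -> String {
    let n = name.to_lowercase();
    match n.len() {
        0 => panic!("empty crate name"),
        1 => format!("1/{n}"),
        2 => format!("2/{n}"),
        3 => format!("3/{}/{}", &n[..1], n),
        _ => format!("{}/{}/{}", &n[..2], &n[2..4], n),
    }
}

fn main() {
    assert_eq!(index_path("serde_derive"), "se/rd/serde_derive");
    assert_eq!(index_path("rand"), "ra/nd/rand");
    assert_eq!(index_path("syn"), "3/s/syn");
    println!("ok");
}
```

The entry at that path contains the canonical name, which is what download URLs should be built from.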

What is the plan?

  1. Today: Announce the removal of support for non-canonical downloads on the main Rust blog.
  2. 2023-11-20: Disable support for non-canonical downloads and return a migration error message instead, to alert remaining users of this feature of the need to migrate. This still puts load on the application, which must detect that a request is using a non-canonical download URL.
  3. 2023-12-18: Return a regular 404 error instead of the migration error message, allowing us to get rid of (parts of) the database query.

Note that we will still need the database query for download counting purposes for now. We have plans to remove this requirement as well, but those efforts are blocked by us still supporting non-canonical downloads.

If you want to follow the progress on implementing these changes or if you have comments you can subscribe to the corresponding tracking issue. Related discussions are also happening on the Zulip stream.

The Mozilla BlogIntroducing Mozilla’s AI Guide, the developers onboarding ramp to AI

Today, Mozilla announces the availability of its AI Guide, a community-driven resource where developers can come together, ready to pioneer and drive generative AI innovations. In the spirit of a truly open web, Mozilla launches a tool that will evolve just like the world of AI, which is messy and poses complex questions that have yet to be answered.

With the AI Guide, Mozilla’s ambition is clear: empower developers with a choice between open-sourced solutions and alternatives from big tech companies.

“Mozilla’s efforts in AI are more than just technical – they’re a call to action and unity across the currently fragmented open-source AI community,” said Imo Udom, Senior Vice President of Innovation Ecosystems at Mozilla. “We created the AI Guide with the ethos of curating a more accessible and transparent AI space to support developers interested in building innovative and trustworthy technology.”

With over a quarter-century of pioneering user-first innovations, from the Firefox browser to Mozilla VPN, Mozilla has always championed digital empowerment. Mozilla’s venture into the AI space is no different. Mozilla’s first foray into responsible AI started with the Mozilla Foundation which has been pioneering the trustworthy AI space (underscoring our commitment with this whitepaper). 

This year, Mozilla announced it would commit $30 million to build Mozilla.ai, a startup focused on building a trustworthy, independent, and open-source AI ecosystem. Additionally, Mozilla hosted its first Responsible AI Challenge, which challenged builders to design and defend responsible and trustworthy AI solutions for a wide range of industries. Also this year, Mozilla added Fakespot to its product family, which uses AI to find patterns among reviews to sort real reviews from fake ones.

Here’s what you’ll find in Mozilla’s AI Guide

To start, we are releasing three sections that go in-depth on the most asked questions about large language models (LLMs). These sections include AI Basics, Language Models and Choosing ML Models. In the Choosing ML Models section, we give developers a place where they can take all their learnings and apply them in an interactive environment using Google Colab, a digital notebook where developers can combine executable code and rich text. While in the Colab, developers can comment and edit together in real time. More details about the sections are listed below:

  • AI Basics: AI, ML, LLM. What do these concepts mean and how are they related? We delve into developers’ top questions to give readers a shared baseline to these topics. The Mozilla AI Guide breaks it all down with images and looks at the pros and cons of using an LLM. 
<figcaption class="wp-element-caption">Developers’ top questions answered</figcaption>
  • Language Models: As we continue to build on that shared knowledge of AI basics, we will take developers to the next level with language models. This is where we answer more questions like, “What does ‘training’ an ML model mean?” or “What is a ‘human in the loop’ approach?” or “What is temperature?”
<figcaption class="wp-element-caption">Top questions about language models</figcaption>
  • Choosing Machine Learning (ML) models: Now, here comes the fun part, where developers can work with all the information learned thus far. Instead of drowning in jargon and complex terms, we provide interactive tools and exercises to help users see AI in action using our Colab notebooks. 
<figcaption class="wp-element-caption">Interactive tools and exercises to run examples</figcaption>
  • Notable Projects: From front-end solutions to complete LLM solutions, dive into standout initiatives from the AI community. These handpicked projects showcase innovation in action, offering both inspiration and insights.
<figcaption class="wp-element-caption">Projects that serve as examples and inspiration</figcaption>

We plan to launch more sections over the next few months including “Data Retrieval,” “Image Modeling” and “Fine Tuning”.  

Open to developer contributions

The developer community is essential in our mission to use and build AI technology responsibly — meaning technology that prioritizes accountability, user agency, and both individual and collective well-being.

“Our vision for the AI Guide is to be the starting point that every developer can revisit for clarity and inspiration, ensuring that AI innovations enrich everyday life,” said Udom. “With contributions from developers, the AI Guide will become a collaborative community-driven resource, where developers can come together, ready to pioneer and drive generative AI innovations.“

Developers can find community contribution guidelines here. Within the AI Guide, developers can see examples of the type of content they can contribute. From open-source AI projects and implementations to video, audio models, and indispensable learning resources — all are welcome.

Together, let’s forge a cohesive, collaborative, and responsible AI community with Mozilla’s AI Guide.

Mozilla AI Guide

Where developers come together to pioneer and drive AI innovations

Check out Mozilla’s AI Guide

The post Introducing Mozilla’s AI Guide, the developers onboarding ramp to AI appeared first on The Mozilla Blog.

Mozilla Performance BlogNew Features in Mach Try Perf

Since we’ve added mach try perf, quite a few improvements have been made, along with new features added. Below, you’ll find a summary of the most important changes. If mach try perf is something new to you, see this article on Improving the Test Selection Experience with Mach Try Perf.

Standard Workflow

The workflow for using mach try perf can be a bit difficult to follow so we’ve prepared a standard workflow guide to help make the most of this new tool. The guide can be found here.

Mach try perf --alert

Something we’ve wanted to do for a very long time, but have not had the infrastructure and tooling required for, is allowing developers to run performance tests based only on the number (summary ID) of the alert that they are working on. I’m excited to say that we now have this functionality in mach try perf with --alert.

This feature was added by a volunteer contributor, MyeongJun Go (Jun). This was a complex task that required him to make changes on Treeherder, and in our Mozilla-Central code. On the Treeherder side, he added an API call to find the tasks that produced an alert. Then, using this new API call, he made some changes to mach try perf to allow us to run all the tasks that get returned. This new feature can be used like so: ./mach try perf --alert <ALERT-SUMMARY-ID>

The alert summary ID can be found by looking at the Bugzilla alert comment (in this case it’s 39052):

Some more information about this feature can be found here. In the future, the alert summary comment on bugs will include information about how to do this. See bug 1848885 for updates on this work.

Mach try perf --perfcompare-beta

The Performance Tools team is currently working on revamping our CompareView into PerfCompare. This new tool will let us extend the tooling and provide more features to improve developer experience and efficiency when comparing performance changes across different sets of changes. More information on this project can be found in this blog post.

With ./mach try perf --perfcompare-beta, you can test out the beta version of this new interface, and begin providing feedback on it in the Testing :: PerfCompare component.


Lastly, for more complex use cases, we have a new feature called “comparators”. These allow us to customize how the multiple pushes are produced. For instance, one custom comparator that we have is for Speedometer 3 benchmark tests so that we can run a push with one benchmark revision, and a push with another. For example, this command will let you run the Speedometer 3 benchmark on two different revisions:

./mach try perf --no-push --comparator BenchmarkComparator --comparator-args new-revision=c19468aa56afb935753bd7150b33d5ed8d11d1e3 base-revision=a9c96c3bd413a329e4bc0d34ce20f267c9983a93 new-repo= base-repo=

With this feature, we no longer need to make changes to the mozilla-central code to run experiments with new benchmark changes. More information about this can be found here, and the BenchmarkComparator can be found here. In the future, we’ll be using these to do more than 2 pushes, and enable comparisons with multiple preference settings.

Future Work

In the very near future, descriptions of the various categories will be added to mach try perf and displayed under the tasks selected (see bug 1826190); this is being worked on by Jun. We’d also like to make mach try perf compatible with --push-to-lando, which is currently unsupported due to the remote revision requirement; see bug 1836069 for this work.

For any questions, comments, etc. you can find us in the #perftest channel on Element.

The Servo BlogThis month in Servo: CSS filters, testing changes, Tauri, and more!

Servo has had some exciting changes land in our nightly builds over the last month:

  • as of 2023-09-23, ‘@media (resolution)’ queries are now supported (@sagudev, #30406)
  • as of 2023-09-28, the ‘dir’ attribute getter now behaves correctly (@EnnuiL, #30435)
    • this fixes over 12000 subtests in the HTML test suite!

Much of the recent work on Servo has been around upgrading the components we share with Firefox:

  • SpiderMonkey — upgraded from 107 to 115 (@sagudev, mozjs#408, #30379)
  • Stylo — upgrade continues, with another 65 commits now landed in Servo (@Loirooriol, #30421)
  • WebRender — upgraded to May 2021, now fixing regressions and preparing for more breaking changes:
    • as of 2023-09-19, we’ve fixed a scrolling regression in Acid2 and other quirks mode pages (@mrobinson, #30375)
    • as of 2023-09-21, we’ve fixed a major WebGL regression related to tile cache invalidation (@mukilan, #30390)
    • as of 2023-10-04, pinch zoom is now handled in Servo, preparing us for its removal from WebRender (@mrobinson, #30446, #30459)

Sometimes the best source of ideas for improving Servo is to focus on a real-world app. Ennui @EnnuiL is doing exactly that with Cookie Clicker, a 2013 idle game that relies on CSS positioning, transitions, transforms, filters, and 2D canvases.

  • as of 2023-10-05, the CSS ‘drop-shadow()‘ filter is now supported (@EnnuiL, #30439)
  • as of 2023-10-10, CSS filters are now correctly clipped by ‘overflow: hidden’ (@EnnuiL, #30517)
  • as of 2023-10-19, drawImage() on a 2D canvas now uses shared memory for performance (@EnnuiL, #30544)
  • her work continues in #30535, with an analysis of Servo’s performance issues under Cookie Clicker
<figcaption> left: Cookie Clicker as of 2023-10-04
right: Cookie Clicker as of 2023-10-05 </figcaption>

There have also been some changes to our internals that affect contributing to Servo.

Debug assertions are now enabled everywhere except for official nightly releases (@delan, #30509). This includes both debug (-d --dev) and release (-r --release) builds locally, as well as try jobs and most other builds on CI. For more details, see

With debug assertions enabled, you can use debug_assert!() to panic when an invariant is violated, much like you would use DCHECK() in Chromium, or for more complex checks, you can wrap code in #[cfg(debug_assertions)] or if cfg!(debug_assertions) {}. Note that panicking in official releases — where cfg!(debug_assertions) is false — is still verboten in general, and those panics should almost always warn and/or gracefully recover instead.
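To make that concrete, here is a hypothetical Rust sketch of the pattern (the function and invariants are invented for illustration, not Servo code):

```rust
// Hypothetical example of the pattern described above: a cheap
// invariant check that only fires in builds with debug assertions,
// plus graceful recovery so official releases never panic.
fn nth_child(children: &[u32], index: usize) -> Option<u32> {
    // Panics in debug/dev/CI builds if the invariant is violated.
    debug_assert!(index < children.len(), "index out of bounds");
    if cfg!(debug_assertions) {
        // More expensive consistency checks can be gated here,
        // e.g. verifying that the list is sorted.
        assert!(children.windows(2).all(|w| w[0] <= w[1]));
    }
    // In official releases we warn and recover instead of panicking.
    match children.get(index) {
        Some(v) => Some(*v),
        None => {
            eprintln!("warning: nth_child index {index} out of range");
            None
        }
    }
}

fn main() {
    let kids = [1, 2, 3];
    assert_eq!(nth_child(&kids, 1), Some(2));
    println!("ok");
}
```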

Servo has long aimed to become an embeddable web engine, and our next step on this journey will be supported by a grant from NLNet! Over the next few months, we will be collaborating with the developers of Tauri to make Servo available as a webview backend.

Tauri is a framework for building desktop apps that combine a web frontend with a Rust backend, and work is already ongoing to expand it to mobile apps and other backend languages. But unlike say, Electron or React Native, Tauri is both frontend-agnostic and engine-agnostic, allowing you to use any frontend tooling you like and whichever web engine makes the most sense for your users.

At the moment, Tauri supports webkit2gtk (WebKit) on Linux, WebView2 (Chromium) on Windows, and WKWebView (WebKit) on macOS and iOS, in each case leveraging the system webview where possible. With this project to add support for Servo in Tauri, we hope to make embedding Servo easier than ever.

For more details, subscribe to our tracking issue #30593.

This was a big month for Servo at conferences and events too! You can catch up on our recent talks here:

The Rust Programming Language BlogA tale of broken badges and 23,000 features

Around mid-October of 2023 the crates.io team was notified by one of our users that a badge for their crate stopped working. The issue reporter was kind enough to already debug the problem and figured out that the API request that shields.io sends to crates.io was most likely the problem. Here is a quote from the original issue:

This crate makes heavy use of feature flags which bloat the response payload of the API.

Apparently the API response for this specific crate had broken the 20 MB mark and shields.io wasn't particularly happy with this. Interestingly, this crate only had 9 versions published at this point in time. But how do you get to 20 MB with only 9 published versions?

As the quote above already mentions, this crate is using features… a lot of features… almost 23,000! 😱

What crate needs that many features? Well, this crate provides SVG icons for Rust-based web applications… and it uses one feature per icon so that the payload size of the final WebAssembly bundle stays small.
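In a crate's Cargo.toml, that pattern looks roughly like this (the feature names here are invented for illustration):

```toml
# Hypothetical excerpt: one feature per icon, so downstream users only
# compile (and ship in their WebAssembly bundle) the icons they enable.
[features]
default = []
icon-arrow-left = []
icon-arrow-right = []
icon-calendar = []
# …and so on, for thousands of icons
```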

At first glance there should be nothing wrong with this. This seems like a reasonable thing to do from a crate author perspective, and neither cargo nor crates.io were showing any warnings about this. Unfortunately, some of the crates.io internals are not too happy about such a high number of features…

The first problem was already identified by the crate author: the API responses from crates.io are getting veeeery large. Adding to the problem is the fact that the API currently does not paginate the list of published versions. Changing this is obviously a breaking change, so our team had been a bit reluctant to change the behavior of the API in that regard, though this situation has shown that we will likely have to tackle this problem in the near future.

The next problem is that the index file for this crate is also getting large. With 9 published versions it already contains 11 MB of data. And just like the API, there is currently no pagination built into the package index file format.

Now you may ask, why do the package index and cargo need to know about features? Well, the easy answer is: for dependency resolution. Features can enable optional dependencies, so when a dependency feature is used it might influence the dependency resolution. Our initial thought was that we could at least drop all empty feature declarations from the index file (e.g. foo = []), but the cargo team informed us that cargo relies on them being available there too, and so for backwards-compatibility reasons this is not an option.

On the bright side, most Rust users are on cargo versions these days that use the sparse package index by default, which only downloads index files for packages actually being used. In other words: only users of this icon crate need to pay the price for downloading all the metadata. On the flipside, this means users who are still using the git-based index are all paying for this one crate using 23,000 features.

So, where do we go from here? 🤔

While we believe that supporting such a high number of features is conceptually a valid request, with the current implementation details in crates.io and cargo we cannot support this. After analyzing all of these downstream effects from a single crate having that many features, we realized we need some form of restriction on crates.io to keep the system from falling apart.

Now comes the important part: on 2023-10-16 the crates.io team deployed a change limiting the number of features a crate can have to 300 for any new crates/versions being published.

… for now, or at least until we have found solutions for the above problems.

We are aware of a couple of crates that also have legitimate reasons for having more than 300 features, and we have granted them appropriate exceptions to this rule, but we would like to ask everyone to be mindful of these limitations of our current systems.

We also invite everyone to participate in finding solutions to the above problems. The best place to discuss ideas is the Zulip stream, and once an idea is a bit more fleshed out it will then be transformed into an RFC.

Finally, we would like to thank Charles Edward Gagnon for making us aware of this problem. We also want to reiterate that the author and their crate are not to blame for this. It is hard to know of these implementation details when developing crates, so if anything, the blame would be on us, the crates.io team, for not having limits on this earlier. Anyway, we have them now, and now you all know why! 👋

Firefox Developer Experience: Firefox DevTools Newsletter — 119

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 119 Nightly release cycle. You can find the full list of fixed bugs in this release here.

The 119 work was happening during the summer, which is always less busy as people take some time off to enjoy the shiny sun. Luckily for us, we got a lot of contributions from people outside of Mozilla:

  • Zac Svoboda helped us on many fronts:
    • Made the JSON Viewer show the raw data when the response is not valid JSON (#1764897)
    • Fixed some React propTypes in the JSON viewer code (#1852298)
    • Improved the styling of the compatibility panel (#1643843)
    • Fixed the contrast for links across the toolbox (#1673582) and for the Layout flex/grid highlight toggle (#1844071)
    • Corrected a typo in a variable name (#1646638)
  • Sebastian Zartner, a long-time DevTools contributor, also helped us a lot:
    • He improved the Inactive CSS feature, which now reports ignored properties on ::first-letter (#1842175), ::cue (#1849235) and ::placeholder (#1849255) pseudo-elements
    • Made the path of manually added cookies in the Storage Inspector default to the root instead of the URL path (#1748422)
    • Fixed an intermittent test failure (#1850952)
Screenshot: the Firefox DevTools Inspector showing a `::placeholder` CSS rule with a `writing-mode: vertical-lr` property. The property is dimmed, and an info icon with a tooltip explains that `writing-mode` is not supported on `::placeholder` pseudo-elements. Caption: “No writing-mode for ::placeholder elements”
  • Vinny Diehl fixed an issue in the Shape path editor for inset() when it was using pixels on an element sized in percentage (#1853559)
  • LyScott123 improved our Object Inspector to show wrapped primitives (#1695150)
Screenshot: in the Firefox DevTools console, evaluating `Object(123)` shows an object tree rooted at `Number { 123 }`, expanded to reveal a `<primitive value>: 123` node. Caption: “<primitive value> node shows the, well, wrapped primitive value”

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

CSS Custom properties computed value

A few months ago, the talented Ana Tudor complained on Mastodon about the Computed panel not displaying CSS custom properties.

As we don’t like people being sad, we put some work into the computed panel so it also displays custom properties (#1834690).

Screenshot: the Firefox DevTools Inspector’s Computed panel displayed for MDN. After the regular properties (`line-height`, `margin-top`, …), CSS custom properties and their values are listed (for example `--accent-primary: #0085f2`). Caption: “Computed view on MDN, now showing the (many) CSS custom properties”

This is not the first time we’re implementing a feature or fixing a bug after someone tagged us on X/Mastodon. As a matter of fact, we do like to hear from you about what tool you want or what’s broken in the toolbox. So don’t hesitate to file a bug on Bugzilla, or reach out on social media, we might end up implementing your nice idea 🙂

Debugger stability

For this release, we landed a few small but useful bug fixes.

The Debugger would sometimes underline the wrong token for errors, due to some missing information in the JS engine (#1845865). Thanks a lot to the SpiderMonkey team for fixing this!

We fixed a crash in the Debugger panel (#1849946), as well as an issue with the “Map Scopes” feature (#1849987). By the way we started a project to make this feature faster and more robust, so hopefully we’ll share the result of that work in a future newsletter 🙂.

We’re also continuing our efforts to make the Debugger as fast as possible and fixed a few issues (#1851522, #1851566, #1852979, #1853124). We got positive numbers in our performance tracking infrastructure showing the impact of those patches, but as the overall work spans multiple releases, we’ll do a summary of those results in a dedicated blog post later when this work is over.

Thank you for reading this and using our tools, see you next month for a new round of exciting updates 🙂

Firefox Developer Experience: Firefox WebDriver Newsletter — 119

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 119 release cycle.


With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette and geckodriver.


Bug fixes

In Firefox 119, several bugs were simultaneously fixed for WebDriver BiDi and Marionette.

WebDriver BiDi

New: “browsingContext.reload” command

With the browsingContext.reload command, it is now possible to reload the page that is currently displayed within a given browsing context. Note that, unlike in WebDriver classic, there is no longer a limitation to top-level browsing contexts, so even individual frames can now be reloaded. A limitation at the moment is that Firefox will only reload the page from the cache; support for the ignoreCache argument will be added at a later time.
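As a rough sketch, a BiDi client sends this command as a JSON message over the WebSocket connection. The helper below only builds the message; the context id is a made-up example value.

```python
import json

# Minimal sketch of the raw WebDriver BiDi message a client would send
# to reload a browsing context; "frame-5678" is a hypothetical context id.
def build_reload_command(command_id, context):
    return {
        "id": command_id,
        "method": "browsingContext.reload",
        "params": {
            # Can reference any browsing context, not just a top-level one.
            "context": context,
            # Note: Firefox 119 always reloads from the cache; the
            # ignoreCache argument is not supported yet.
        },
    }

message = json.dumps(build_reload_command(1, "frame-5678"))
```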

New: “browsingContext.userPromptClosed” event

Back in Firefox 118 we added the browsingContext.userPromptOpened event, which is emitted whenever a user prompt of type “alert”, “confirm” or “prompt” is opened. With the Firefox 119 release, the browsingContext.userPromptClosed event is now available as well, and is emitted when such a user prompt is closed. The event’s payload contains the context where the dialog is displayed, the accept or dismiss state, as well as the user-entered text in the case of a user prompt of type “prompt”.

This event will also support "beforeunload" type dialogs in the future, but they are not handled at the moment.

New: “browsingContext.navigationStarted” event

browsingContext.navigationStarted is a new event, which gets emitted when a new navigation is started by Firefox. A navigation can be requested by using the browsingContext.navigate or the new browsingContext.reload command, by user interaction with elements within a page, or by JavaScript executed in the page’s context that causes a navigation to a different page. The event’s payload contains the context where the navigation takes place, a unique navigation id, the URL that Firefox navigates to, and a timestamp.
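To make the payload shape concrete, here is a sketch of such an event as a client might receive it, with a tiny helper that summarizes it. The context id, navigation id, URL and timestamp are invented for the example.

```python
# Format a one-line summary from a navigationStarted event payload.
def summarize_navigation_event(event):
    params = event["params"]
    return "{} -> {} (navigation {})".format(
        params["context"], params["url"], params["navigation"]
    )

# Hypothetical event, following the payload description above.
example_event = {
    "type": "event",
    "method": "browsingContext.navigationStarted",
    "params": {
        "context": "top-ctx-1",         # where the navigation takes place
        "navigation": "nav-42",         # unique navigation id
        "url": "https://example.com/",  # destination URL
        "timestamp": 1697040000000,     # when the navigation started
    },
}

summary = summarize_navigation_event(example_event)
```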

New: “script.realmCreated” and “script.realmDestroyed” events

script.realmCreated and script.realmDestroyed are new events that allow WebDriver BiDi clients to monitor the lifetime of JavaScript Realms of a given browsing context. Such a Realm is basically an isolated execution environment with its own unique global object (window). By default, only a single Realm will exist per page, but when using script.evaluate or script.callFunction with the sandbox argument, a new Realm can be created. To get all the existing Realms for a page, you can still use the script.getRealms command. The payload for the script.realmCreated event will contain the realm information of the created Realm, which includes the Realm’s identifier, the browsing context, the Realm’s type and, if relevant, the name of the sandbox. The payload for script.realmDestroyed only contains the Realm’s identifier.
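A quick sketch of the two payloads described above, plus one way a client could pair them up to track live realms; all ids and the sandbox name are invented for illustration.

```python
# Hypothetical script.realmCreated event: full realm info is included.
realm_created = {
    "type": "event",
    "method": "script.realmCreated",
    "params": {
        "realm": "realm-1",       # the Realm's identifier
        "context": "ctx-1",       # the browsing context it belongs to
        "type": "window",         # the Realm's type
        "sandbox": "my-sandbox",  # only present for sandbox realms
    },
}

# Hypothetical script.realmDestroyed event: only the realm id is included.
realm_destroyed = {
    "type": "event",
    "method": "script.realmDestroyed",
    "params": {"realm": "realm-1"},
}

# A client can keep a map of live realms keyed by realm id.
live_realms = {}
live_realms[realm_created["params"]["realm"]] = realm_created["params"]
live_realms.pop(realm_destroyed["params"]["realm"], None)
```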

Bug fixes

Marionette (WebDriver classic)

Bug fixes

IRL (podcast): The Humans in the Machine

They’re the essential workers of AI — yet mostly invisible and exploited. Does it have to be this way? Bridget Todd talks to data workers and entrepreneurs pushing for change.

Millions of people work on data used to train AI behind the scenes. Often, they are underpaid and even traumatized by what they see. In this episode: a company charting a different path; a litigator holding big tech accountable; and data workers organizing for better conditions.

Thank you to Foxglove and Superrr for sharing recordings from the Content Moderators Summit in Nairobi, Kenya in May 2023.

Richard Mathenge helped establish a union for content moderators after surviving a traumatic experience as a contractor in Kenya training OpenAI’s ChatGPT.

Mercy Mutemi is a litigator for digital rights in Kenya who has issued challenges to some of the biggest global tech companies on behalf of hundreds of data workers.

Krista Pawloski is a full-time data worker on Amazon’s Mechanical Turk platform and an organizer with the worker-led advocacy group Turkopticon.

Safiya Husain is the co-founder of Karya, a company in India with an alternative business model to compensate data workers at rates that reflect the high value of the data.

IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders that put people ahead of profit.

Alexandre Poirot: Knowledge versus market -- Sharing versus Ease of use.

This blog post is a continuation of the previous one about the history of editing and publishing in web browsers.

I'm now going to focus on the significant shift in vision about the Web between the first two browsers: WorldWideWeb versus Mosaic.

While the very first browser, WorldWideWeb, defined the web as being editable by default, Mosaic made the browser read-only. All features related to web page editing disappeared in Mosaic.

Later down the road, Netscape 4 re-introduced a web page editor. But as Netscape copied Mosaic's interpretation of the Web, web pages were no longer editable by default. This somewhat divided users into two distinct groups: readers versus authors. The editor in Netscape 4 was an external feature of the browser, opening a distinct window.

An interesting fact is that both WorldWideWeb and Netscape 4 were superseded by Mosaic and Internet Explorer, which focused strictly on a read-only vision of the Web.

WorldWideWeb, the very first browser.

The WorldWideWeb browser and the beginnings of the web were created at CERN, the European Organization for Nuclear Research.

Their browser and the web spread within various research labs and universities. The main audience was scientists and librarians.

This page does a very nice summary of these first usages of the web:

The Web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.

This other page, in French, describes at length how scientists shared information from the 1960s onward. The part about the web ("1.6 Le web (1984-1996)") is also an interesting read.

From these extracts, it isn't clear whether users were really editing web pages from the browser. It looks like the web was mostly meant to query large databases of documents (scientific articles) and information (phonebooks). It sounds like things were already going in the direction of a read-only web.

In any case, we can easily explain why the web was originally restricted to scientists and librarians: this browser only worked on NeXT computers. This was a serious obstacle to reaching a widespread audience, as these computers targeted the higher-education and business markets.

ViolaWWW, the second browser

This browser got very little coverage in the history of browsers, but may have had a significant impact on the future of the web.

Many browsers appeared after WorldWideWeb. This web page archives the list of all of them.

But ViolaWWW was particularly important for three reasons:

  • CERN suggested using this browser instead of WorldWideWeb, and it quickly became the default browser in the lab (source).
  • This browser, while becoming popular for its additional features (scripting and stylesheets), also regressed all the editing aspects. It looks like it only rendered web pages and disallowed any editing.
  • The main author of Mosaic (Marc Andreessen) was shown ViolaWWW just before initiating the Mosaic project (1st source, 2nd source).

ViolaWWW may well have been the specific project that shaped the future of browsers as read-only tools for browsing the web, mostly by inspiring the creation of Mosaic, which also got rid of editing features to focus on reading and browsing the web.

Mosaic, the first widespread browser.

Mosaic was developed at the National Center for Supercomputing Applications (NCSA).

It was the first to reach a very wide audience, up to the mass market. The main difference from past browsers was its compatibility with many hardware platforms and operating systems: it was the very first to support Unix, MacOS and Windows. The team behind it also focused a lot on making it easy to install and use.

This browser sealed this vision of the web by becoming much more popular. Mosaic, like ViolaWWW, really focused on browsing the web. It contained no feature for editing web pages.

Knowledge sharing versus Market and ease of use

Now it may be interesting to compare the vision of the web promoted at CERN/WorldWideWeb versus NCSA/Mosaic.

The CERN described the web in a simple and generic way:

The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents. (source)

The original web at CERN was meant to ease sharing knowledge between scientists. It was probably not intentionally targeting any larger audience.

The web of Mosaic was clearly shifting to a wide audience of ordinary people. But the way they were promoting the Internet was quite different:

Mosaic offers a window into the Internet, presenting content and services to users in a friendly, interactive, point-and-click way. (source)

Mosaic Communications Corporation intends to support companies and accelerate the coming of this new era with tools that ease and advance online communications. (source)

Mosaic was promoting a whole market/ecosystem for the Internet, made of companies providing services to consumers. It was drastically different from CERN's phrasing: "giving universal access to a large universe of documents".

I imagine we could debate at length about these two ways of framing the web, but I would like to instead focus on the most important appeal of Mosaic, which explained its success: The ease of use.

Mosaic surely gained lots of traction thanks to its support of most hardware and operating systems, but it also polished its ease of use. Unfortunately it only focused on browsing and reading the web. But I'm wondering: what if Mosaic had also spent some time, in those early days, helping the first users of the web create and edit their own websites?

Instead, it encouraged companies to build the services. Building the services here meant building the web pages. This ultimately delegated content creation to experts in the early days.

What if Mosaic had focused on the ease of use of web page editing? What if Mosaic had continued along the lines of Tim Berners-Lee's original vision of the web, described over here?

If you think surfing hypertext is cool, that's because you haven't tried writing it.

The Web is universal and so should be able to encompass everything across the range from the very rough scribbled idea on the back of a virtual envelope to a beautifully polished work of art.

A first assumption, by the way, is that you have modeless interface in which browsing and editing are not separate functions. If to edit a page, you have to switch from browsing mode to editing mode, then you have lost already.

That's the vision I'd like to elaborate on in 2023: give the web a second chance to be (almost) fully editable by default.

Note: I published this article long after writing it. I actually wrote it before the release of Marc Andreessen's manifesto, which sparked lots of debate about his vision of tech, like here. This is typically the kind of discussion I find enlightening, but I really wanted to focus on actual, actionable Web features.

Tiger Oakes: The easiest way to set focus on mount in React

Using callback refs to avoid useEffect issues.

Chris H-C: Eight-Year Moziversary

At the end of my post for my seven-year moziversary, I made some predictions about what was to be and now has been the next year of work. And I got them pretty spot-on:

Predictions for the next year of Moz Work:

  • There’ll be another All Hands
  • Glean will continue to gain ground in Firefox Desktop
  • “FOG Migration” will not happen

There was an all hands. It was in Montreal. It was fun to have folks come to a city I knew a little bit (though I’m still sore we didn’t get June 2020’s Toronto All-Hands). Poutine. Bagels. And a cirque-themed closing party.

Glean continued to gain ground on Firefox Desktop. Last year’s post mentioned over 100 Glean probes in Firefox Desktop; the current count as of time of writing is 368. Now, some of this is due to projects our team has undertaken, but a lot of it is organic.

This is despite “FOG Migration” not happening. Firefox leadership remained uninterested in the prospect of migrating data collections to begin being sent in Glean. Though in Montreal there were some conversations that suggest that this might be changing.

So, what have I been up to? Well, I discovered in January that a legacy data collection system (PingCentre) was subject to some data loss that was incompatible with how the data was being used (( You can imagine that data loss would be acceptable for certain things like performance measurement or feedback, so long as you could characterize the loss (e.g. you lose stuff randomly? Only the small numbers? Only the feedback from Ottawa?). It’s less acceptable for retention or revenue. )). By March, replacing PingCentre had become a top-level OKR and I was managing the project.

So this year has been spent growing an appreciation for Project Management as a discipline. I now have more opinions about work tracking than I ever dreamed I’d have (though, no, I’ve not set up Kanban or anything else).

I’ve also continued my practice of basically never saying No to someone who had a question for me. As much as I bemoan the new tendency of questions being asked over direct message instead of in a topic channel where anyone can help, it does bring me no little joy to partner in a data exploration, consult on answering awkward privacy/data questions from contributors, or debug someone’s test file “out loud” so they can follow along. It really is the people that make Mozilla special, so helping them feels like a high calling.

Which is why I find our continued focus on “AI” to be so baffling. So much of “AI” we hear about is dragging the humanity out of the Internet that Mozilla is so keen to protect. We seem to be just as bad as the Valley for using “AI” to mean everything from outstanding work on local machine translation (now available in Firefox 118), to LLMs spouting out incorrect answers when you ask them to explain CSS. I hope we provide some clarity about what we mean when we say “AI”, and draw a thick line between what we’re doing and the grifts being peddled all around us at great cost to truth and the environment.

I understand that we need to be in a business to be able to speak about it. It’s why I’m excited that we’re giving social media some attention. I can’t wait to see what those teams create for the world. But the way everything became “AI” so fast sounds like chasing the hype cycle.

As for me, what do I expect to do? First, I expect to finish up the year by migrating Use Counters to Glean. Then… who knows? Maybe the results of the Events Work Week will exceed expectations and require more investment. Maybe I’ll find another data collection system in Firefox Desktop that’s dropping between 2% and 15% of all data that’ll need replacing. Maybe I’ll finally get to rewrite the IPC layer so it leaves the main thread alone. Yeah, okay maybe not.

Predictions for the next year of moz work:

  • I’ll work on client code in Firefox Desktop
  • I’ll not blog as much as I’d like
  • We continue to support existing collections in all of Legacy Telemetry’s core systems
  • There’ll be an All Hands (safe bet, as Dublin was announced in Montreal), and at least one more will be announced
  • Glean will continue to be used more on Firefox Desktop, and not just because Use Counters will juice the numbers (I will no doubt ascribe this increase to be disproportionately due to the (well-received) Glean talk I (finally) gave to a Firefox Front-End Engineering team)
  • “FOG Migration” will not happen, but new top-down guidance will be handed down expanding the circumstances where Glean is explicitly stated to be the data collection system of choice in Firefox Desktop (and not just because it provides the best API to Legacy Telemetry)
  • We will quietly stop talking about AI so much, in the same way most firms have stopped talking about Web3 this year
  • I will publish the moziversary blog post actually _on_ my moziversary, unlike this year

Let’s see how that pans out.


Niko Matsakis: Idea: "Using Rust", a living document

A few years back, the Async Wg tried something new. We collaboratively authored an Async Vision Doc. The doc began by writing “status quo” stories, written as narratives from our cast of characters, that described how people were experiencing Async Rust at that time and then went on to plan a “shiny future”. This was a great experience. My impression was that authoring the “status quo” stories in particular was really helpful. Discussions at EuroRust recently got me wondering: can we adapt the “status quo” stories to something bigger? What if we could author a living document on the Rust user experience? One that captures what people are trying to do with Rust, where it is working really well for them, and where it could use improvement. I love this idea, and the more I thought about it, the more I saw opportunities to use it to improve other processes, such as planning, public communication, and RFCs. But I’m getting ahead of myself! Let’s dive in.


I think authoring a living document (working title: “Using Rust”) that collects “status quo” stories could be a tremendous resource for the Rust community. I’m curious to hear from folks who might like to be part of a group authoring such a document, especially (but not only) people with experience as product managers, developer advocates, or UX researchers.

Open source is full of ideas, but which to do?

The Rust open-source organization is a raucous, chaotic, and, at its best, joyful environment. People are bubbling with ideas on how to make things better (some better than others). There are also a ton of people who want to be involved, but don’t know what to do. This sounds great, but it presents a real challenge: how do you decide which ideas to do?

The vast majority of ideas for improvement tend to be incremental. They take some small problem and polish it. If I sound disparaging, I don’t mean to be. This kind of polish is absolutely essential. It’s kind of ironic: there’s always been a perception that open source can’t build a quality product, but my experience has often been the opposite. Open source means that people show up out of nowhere with PRs that remove sharp edges. Sometimes it’s an edge you knew was there but didn’t have time to fix; other times it’s a problem you weren’t aware of, perhaps because of the Curse of Knowledge.

But finding those revolutionary ideas is harder. To be clear, it’s hard in any environment, but I think it’s particularly hard in open source. A big part of the problem is that open source has always focused on coding as our basic currency. Discussions tend to orient around specific proposals – that could be as small as a PR or as large as an RFC. But finding a revolutionary idea doesn’t start from coding or from a specific idea.

It all starts with the “status quo”

So how do we go about having more “revolutionary ideas”? My experience is that it begins by deeply understanding the present moment. It’s amazing how often we take the “status quo” for granted. We assume that we know the problems people experience, and we assume that everybody else knows them too. In reality, we only know the problems that we personally experience – and most of the time we are not even fully aware of those!

One thing I remember from authoring the async vision doc is how hard it was to focus on the “status quo” – and how rewarding it was when we did! When you get people talking about the problems they experience, the temptation is to immediately jump to how to fix the problem. But if you resist that, and you force yourself to just document the current state, you’ll find you have a much richer idea of the problem.1 And that richer understanding, in turn, gives rise to better ideas for how to fix it.

Idea: a living “Using Rust” document

So here is my idea: what if we created a living document, working title “Using Rust”, that aims to capture the “status quo” of Rust today:

  • What are people building with Rust?
  • How are people’s Rust experiences influenced by their background (e.g., prior programming experience, native language, etc)?
  • What is working well?
  • What challenges are they encountering?

Just as with the Async Vision Doc, I imagine “Using Rust” would cover the whole gamut of experiences, including not just the language itself but tooling, libraries, etc. Unlike the vision doc, I wouldn’t narrow it to async (though we might start by focusing on a particular domain to prove out the idea).

Like the vision doc, I imagine “Using Rust” would be composed of a series of vignettes, expressed in narrative form, using a similar set of personas2 to the Async Vision Doc (perhaps with variations, like Spanish-speaking Alano instead of Alan).

I personally found the narratives really helpful to get the emotional “heft” of some of the stories. For example, “Alan started trusting the Rust compiler, but then… async” helped drive home the importance of that “if it compiles, it works” feeling for Rust users, as well as the way that panics can undermine it. Even though these are narratives, they can still dive deep into technical details. Researching and writing “Barbara battles buffered streams”, for example, really helped me to appreciate the trickiness of async cancellation’s semantics.3

I don’t think “Using Rust” would ever be finished, nor would I narrow it to one domain. Rather, I imagine it being a living document, one that we continuously revise as Rust changes.

Improving on the async vision doc

The async vision doc experience was great, but I learned a few things along the way that I would do differently now. One of them is that collecting stories is good, but synthesizing them is better (and harder). I also found that people telling you the stories are not always the right ones to author them. Last time, we had a lot of success with people authoring PRs, but many times people would tell a story, agree to author a PR, and then never follow up. This is pretty standard for open source but it also applies a sort of “selection bias” to the stories we got. I would address both of these problems by dividing up the roles. Rust users would just have to tell their stories. There would be a group of maintainers who would record those stories and then go try to author the PRs that integrate into “Using Rust”.

The other thing I learned is that trying to author a single shiny future does not work. It was meant to be a unifying vision for the group, but there are just too many variables at play to reach consensus on that. We should definitely be talking about where we will be in 5 years, but we don’t have to be entirely aligned on it. We just have to agree on the right next steps. My new plan is to integrate the “shiny future” into RFCs, as I describe below.

Maintaining “Using Rust”

In the fullness of time, and presuming it works out well, I think “Using Rust” should be a rust-lang project, owned and maintained by its own team. My working title for this team is the User Research Team, which has the charter of gathering up data on how people use Rust and putting that data into a form that makes it accessible to the rest of the Rust project. But I tend to think it’s better to prove out ideas before creating the team, so I think I would start with an experimental project, and create the team once we demonstrate the concept is working.

Gathering stories

So how would this team go about gathering data? There’s so many ways. When doing the async vision doc, we got some stories submitted by PRs on the repo. We ran writing sessions where people would come and tell us about their experiences.

I think it’s very valuable to have people gather “in depth” data from within specific companies. For the Async Vision Doc, I also interviewed team members, culminating in the “meta-story” “Alan extends an AWS service”. Tyler Mandry and I also met with members from Google, and I recall we had folks from Embark and a few other companies reach out to tell us about their experiences.

Another really cool idea that came from Pietro Albini: set up a booth at various Rust conferences where people can come up and tell you about their stories. Or perhaps we can run a workshop. So many possibilities!

Integrating “Using Rust” with the RFC process

The purpose of an RFC, in my mind, is to lay out a problem and a specific solution to that problem. The RFC is not code. It doesn’t have to be a complete description of the problem. But it should be complete enough that people can imagine how the problem is going to be solved.

Every RFC includes a motivation, but when I read those motivations, I am often a bit at a loss as to how to evaluate them. Clearly there is some kind of problem. But is it important? How does it rank with respect to other problems that users are encountering?

I imagine that the “Using Rust” doc would help greatly here. I’d like to get to the point where the motivation for RFCs is primarily addressing particular stories or aspects of stories within the document. We would then be able to read over other related stories to get a sense for how this problem ranks compared to other problems for that audience, and thus how important the motivation is.

RFCs can also include a section that “retells” the story to explain how it would have played out had this feature been available. I’ve often found that doing this helps me to identify obvious gaps. For example, maybe we are adding a nifty new syntax to address an issue, but how will users learn about it? Perhaps we can add a “note” to the diagnostic to guide them.

Frequently asked questions

Will this help us in cross-team collaboration?

Like any organization, the Rust organization can easily wind up “shipping its org chart”. For example, if I see a problem, as a lang-team member, I may be inclined to ship a language-based solution for it; similarly, I’ve seen that the embedded community works very hard to work within the confines of Rust as it is, whereas sometimes they could be a lot more productive if we added something to the language.

Although they are not a complete solution, I think having a “Using Rust” document will be helpful. Focusing on describing the problem means it can be presented to multiple teams and each can evaluate it to decide where the best solution lies.

What about other kinds of stories?

I’ve focused on stories about Rust users, but I think there are other kinds of stories we might want to include. For example, what about the trials and travails of Alan, Barbara, Grace, and Niklaus as they try to contribute to Rust?

How will we avoid “scenario solving”?

Scenario solving refers to a pattern where a feature is made to target various specific examples rather than being generalized to address a pattern of problems. It’s possible that if we write out user stories, people will design features to target exactly the problems that they read about, rather than observing that a whole host of problems can be addressed via a single solution. That is true, and I think teams will want to watch out for that. At the same time, I think that having access to a full range of stories will make it much easier to see those large patterns and to help identify the full value for a proposal.

What about a project management team?

From time to time there are proposals to create a “project management” team. There are many different shapes for what such a team would do, but the high-level motivation is to help provide “overall guidance” and ensure coherence between the Rust teams. I am skeptical about any idea that sounds like an “overseer” team. I trust the Rust teams to own and maintain their area. But I do think we can all benefit from getting more alignment on the sets of problems to be solved, which I think this “Using Rust” document would help to create. I can also imagine other interesting mechanisms that build on the doc, such as reviewing stories as a group online, or at “unconferences”.

Call to action: get in touch!

I’m feeling pretty excited about this project. I’m contemplating how to go about organizing it. I’m really interested to hear from people who would like to take part as authors and collators of user stories. If you think you’d be interested to participate, please send me an email. I’m particularly interested to hear from people with experience doing this sort of work (e.g., product managers, developer advocates, UX researchers).

  1. If you’re hearing resonance of the wisdom of the Buddha, it was not intentional when I wrote this, but you are not alone. ↩︎

  2. The personas/characters may look simple, but developing that cast of characters took a lot of work. Finding a set that is small enough to be memorable but which captures the essentials is hard work. One key insight was separating out the projects people are building from the characters building them, since otherwise you get a combinatorial explosion. ↩︎

  3. Async cancellation is an area I desperately want to return to! I still think we want some kind of structured-concurrency-like solution. My current thinking is roughly that we want something like moro for task-based concurrency and something like Yosh’s merged streams for handling “expect one of many possible messages”-like scenarios. ↩︎

Alexandre PoirotThe History of editing and publishing in web browsers

Some web browsers used to offer built-in features to edit and publish web pages.

You could edit any web page. Modify the text, the formatting and styling, attach images, link to another page...

After having done these changes, you could publish them to the web server so that others can see your contribution.

I'm going to highlight that this was only possible for a limited period of time, on browsers with a limited audience.

WorldWideWeb (1990-1994)

The Web was originally created at CERN, the European Organization for Nuclear Research, a research lab in Europe.
This is where the very first browser, called "WorldWideWeb", was developed.
The original documentation pages are still available online!
The following quote highlights the read and write capabilities of this browser.

The "WorldWideWeb" application for the NeXT is a prototype Hypertext browser/editor.

The main author of this application, Tim Berners-Lee, also emphasizes the editor aspect in this retrospective:

The first web browser - or browser-editor rather - was called WorldWideWeb [...]

And another time in this note:

If you think surfing hypertext is cool, that's because you haven't tried writing it.

In 2019, CERN organized a project to rebuild WorldWideWeb using today's web technologies. While doing so, they published a website describing in detail the original vision of the Web and its related browser application.

This website also puts a lot of emphasis on the editor side of the browser:

Today it's hard to imagine that web browsers might also be used to create web pages. It turned out that people were quite happy to write HTML by hand—something that Tim Berners-Lee and colleagues never expected. They thought that some kind of user interface would be needed for making web pages and links. That's what the WorldWideWeb browser provided.

You can test this browser on this project web page. It works slightly better on Chrome than on Firefox, but I must warn you, it is quite buggy. There are many cursor issues.

Screenshot of editing of the home page in WorldWideWeb

Nonetheless, it is quite stunning to see how this browser actually works.
You can move the caret anywhere, in all the web pages, and modify the text anywhere.
Do some basic styling, copy and paste text, ...
Exactly like Microsoft Word / Google Docs, but against remote web pages!

But... it had a serious limitation.
While you could edit any page, you could only save your changes to local files.
You could edit, but not publish your changes.

This is mentioned on this documentation page about how to create a new page:

You can edit existing documents using WWW so long as they are files. You cannot normally edit information retrieved from remote databases.

This actually relates to an implementation detail, which was clarified by Tim Berners-Lee:

It would browse http: space and news: and ftp: spaces and local file: space, but edit only in file: space as HTTP PUT was not implemented back then.
source (I will followup about this in another blog post)
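To make the missing piece concrete, here is a rough sketch of the raw HTTP PUT request that a publishing browser could send to upload an edited page back to its server. This is purely illustrative; the host, path, and credentials are placeholders, and real servers need to be configured to accept such uploads:

```python
# Sketch of the raw HTTP PUT request a browser could send to publish a page.
# Host, path, and credentials below are illustrative placeholders.
import base64

def build_put_request(host, path, body, user, password):
    # HTTP Basic authentication encodes "user:password" in base64.
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"PUT {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Authorization: Basic {creds}\r\n"
        f"Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        + body
    )

req = build_put_request("example.com", "/index.html", "<h1>Hi</h1>", "alice", "secret")
print(req.splitlines()[0])  # → PUT /index.html HTTP/1.1
```

Since WorldWideWeb predated HTTP PUT, it had no equivalent of this request to send, which is why edits could only land in local `file:` space.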

None of the later browsers ever re-implemented this behavior: editable by default.

Mosaic (1993-1997)

The second most notable browser, "Mosaic", drastically changed the vision of the Web.
You could only open and browse HTML pages; all the editing features disappeared.
Mosaic also introduced the URL bar, which didn't exist in WorldWideWeb.

Screenshot of view source dialog in Mosaic

You could only view the HTML sources, either internally (via the view source feature) or via an external editor application. Source code

Mosaic had a much greater influence on the long-term future of web browsers. Looking at its UI, you can see that it is very similar to today's browser UI.

Note that you can run Mosaic on Linux! But you have to build it from the sources available on GitHub. The build can easily fail, but see this issue for how to address the failures.

Early Netscape versions up to 2 (1994-1996)

"Netscape" started being released one year after the first version of Mosaic. Netscape took the lead on being the most popular browser, but still didn't reimplement page editing in any way.

Screenshots for Netscape Navigator 2

Netscape 3 (1997)

Netscape 3, via the Gold edition, shipped the "Netscape Editor" feature. Pressing "Ctrl-E" would let you edit any page in it! You could then save your modifications to a local file, but still not publish them to the remote server.
You could edit, but not publish your changes. This was really similar to the behavior of the WorldWideWeb browser, except that pages weren't editable by default: editing had to be done within a distinct application/window.

Screenshot of Editor in Netscape 3 source

Netscape 4 (June 1997-2000)

Netscape 4 started exposing a publish feature while renaming "Netscape Editor" into "Netscape Composer". Screenshot of Composer in Netscape 4 Notice the HTTP -or- FTP upload methods. (I'll follow up about the HTTP upload method in another blog post)
This finally addressed the shortcomings of the WorldWideWeb browser. You could easily publish the changes you had just made on a page, as long as you had the necessary credentials for uploading files to the remote web server.

Unfortunately, this is also the last popular Netscape product. Netscape had 80% market share in 1997, but only 13% in 2000! 1st source 2nd source

Netscape 6 (2000-2002)

Note that Netscape 5 was never released. The version was dropped in favor of Netscape 6.

Surprisingly, Netscape 6 dropped the publish feature from Composer: Screenshot of Composer in Netscape 6 This reverted to the behavior of Netscape 3: you could edit pages locally, but you could no longer publish the changes to the web server.

As mentioned in the Netscape 6 troubleshooting documentation:

Problem: The Editor application does not support the Publish feature. source

Netscape 7 (2002)

Surprisingly again, Netscape 7 revived the publishing feature in Composer: Screenshot of Composer in Netscape 7 But at this point, Netscape was under 4% market share. source

The complex history between Netscape 4, 5, 6 and 7 is probably related to Netscape's move to an open source codebase. This was initiated by the Mozilla project, which started in 1998, one year after the release of Netscape 4. source This may be the reason why version 5 was cancelled and why some features were dropped in Netscape 6.

On the plus side, today we are able to track the development of the publishing feature, which was re-implemented from scratch in the open source codebase. The latest version of Netscape 6 was based on Mozilla. source The feature was tracked by this bugzilla ticket; the first patches landed in Mozilla 0.9.7 (November 2001) and the very last patch landed in Mozilla 1.0 (March 2002). Netscape 7 was later released in August 2002, based on Mozilla 1.0. source

Browsers landscape from 2002 till now

In 2002, "Internet Explorer" already had around 90% market share. And Internet Explorer did not have any editing capabilities.

Screenshot of Internet Explorer 6

A few years later, in 2004, the first "Firefox" version was released, also focused only on browsing and reading the web (like Internet Explorer). "Netscape Composer" was never reintroduced in Firefox. Screenshot of Firefox 1

In 2008, "Chrome" doubled down on stripping down browser features and UI to delegate even more capabilities to the websites. Screenshot of Chrome 1

Bonus: Seamonkey (2006-today)

A browser still exists today, in 2023, with web page editing and publishing, exactly like Netscape 7!

Believe it or not, a group of contributors has been maintaining the original open source codebase of Netscape over the decades!
This browser is SeaMonkey. Like Netscape, it includes a Web browser, but also a mail reader (akin to Thunderbird), a newsgroup reader, IRC chat, and, last but not least, an HTML editor (Composer). This project is still active and released a new version in September.

I encourage everyone to give it a try. It is really amazing to see all this old and complex software still working today. It is also the easiest way to run a Web browser with full editing and publishing support on a modern computer. The icing on the cake is that, as it is based on the latest version of Gecko (the Web engine of Firefox), it benefits from almost the same support for Web standards as Firefox.

Screenshot of Composer on SeaMonkey


Web page editing and publishing features were only exposed to a wide audience through the browser UI for three years (1997 to 2000). The window was actually even shorter than that, as it coincided with the Netscape 4 era, when Netscape's market share fell apart.

I'll investigate in a following blog post how different WorldWideWeb's vision of the Web was compared to that of all subsequent browsers, and the consequences this had on how the Web has been used, starting from Mosaic.

Overview of browser history


source for this diagram

Mozilla Privacy BlogThe Revival We Need: The FCC Takes On Net Neutrality

Today, the US Federal Communications Commission (FCC) took an important step towards restoring net neutrality protections.

At Mozilla, we’ve long defended people’s access to the internet across the globe. Supporting the principle of net neutrality in the US has been a vital piece of this effort, from our lawsuit against the FCC to the call for FCC Chairwoman Rosenworcel to reinstate net neutrality at the federal level, and our support for state level net neutrality laws.

Net neutrality prevents internet service providers (ISPs) from leveraging their market power to slow, block, or prioritize content, ensuring that people online can freely access ideas and services. At the heart of Mozilla’s work on this issue is our belief that the Internet should be a global public resource, open and accessible to all. People everywhere, not just in states or countries that have passed their own net neutrality laws, deserve to have the same control over their online experiences. This openness also enables competition on the web, innovation, and equal opportunity.

Today, the FCC kicked off a renewed effort by voting to begin a rulemaking process on “Safeguarding and Securing the Open Internet.” Next, the public will have an opportunity to weigh in on the proposed rules.

Mozilla, alongside a large community of allies, applauds Chairwoman Rosenworcel and the FCC for taking this vital step, and asking some important questions in the NPRM. We’re eager to see the FCC’s effort advance. Restoring net neutrality is a key part of building a healthier internet that puts people first.

The post The Revival We Need: The FCC Takes On Net Neutrality appeared first on Open Policy & Advocacy.

Firefox NightlyMore WebExtensions! Coming to an Android near you soon – These Weeks in Firefox: Issue 147


  • Extensions process support in Firefox Android is going to ride the Gecko 120 release train (tracked by Bug 1859533). This has been one of the many things blocking full support for WebExtensions on Firefox for Android.
  • pbz added a new button for clearing your session in Private Browsing windows, enabled by default in Nightly
Screenshot of a new button for clearing private sessions on Firefox displayed on the toolbar, as well as a new dialog panel that appears after clicking the toolbar button.

Clear your session at the press of a button!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Ganna
  • Itiel
  • Mathew Hodson
  • Sebastian Zartner [:sebo]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Extensions process support in Firefox Android
    • Most of the work on the extensions process crash handling logic started in Firefox 117 and has been improved in 118, 119 and 120. This process has already been enabled for 100% of Beta and Nightly populations for a while (Nimbus rollouts) as well as on 118 Release for ~1% of the population.
    • This work, initially prioritised for Firefox for Android, has also contributed to handling extension process crashes on Firefox Desktop (Bug 1355239)
WebExtension APIs
  • Fixed the behavior of the match_about_blank / matchAboutBlank content script configuration property: in Gecko < 120, when this flag is set to true, the related content script is implicitly injected into all top-level about:blank pages. Starting with Gecko >= 120, it is only injected into top-level about:blank pages if the extension has also been granted the <all_urls> permission – Bug 1853409
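For illustration, a minimal (hypothetical) manifest fragment showing both settings an extension would need under Gecko >= 120 to keep injecting into top-level about:blank pages; the script file name is a placeholder:

```json
{
  "permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "match_about_blank": true,
      "js": ["content-script.js"]
    }
  ]
}
```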

Addon Manager & about:addons
  • Changed the message bar shown for an add-on incompatible with the current app version into an error message bar (because, as in this case, the user cannot re-enable add-ons marked as “app disabled”) – Bug 1856397

Developer Tools

  • Contributors
    • :sebo added a new CSS warning displayed when `text-wrap: balance` is ignored because the number of lines is greater than 10. (bug)
  • Jamie (:jteh) fixed a bug with the Accessibility Panel in the Browser Toolbox which was unable to inspect popups (eg <input type=date>’s calendar).
  • Hubert (:bomsy) fixed a bug in the Debugger where you would see “undefined” tooltips displayed on top of Debugger inline previews (bug).
  • Alex (:ochameau) has landed several performance improvements for the Debugger. (bug, bug, bug)
  • Nicolas (:nchevobbe) added a pretty print button in the Style Editor, so that you can force pretty printing in case it did not automatically happen. (bug)
Screenshot of a button to force pretty printing in devtools Style Editor.

The button can be found at the bottom of the Style Editor.

  • Nicolas fixed a bug with the “Copy Location” context menu item of the ruleview, which was ignoring sourcemap information (bug)

Migration Improvements

  • The patches to allow users on Ubuntu to import from Chromium when Firefox is installed as a Snap package have landed! This is still undergoing testing from QA, but if all goes well, this should ride out in Firefox 120.
  • We’ve removed the ability to switch back to the old XUL-based migration dialog, in preparation for removing the old dialog altogether after the soft-freeze ends.
  • For users migrating data between devices, we ran an experiment to see if adding adorable illustrations to the wizard on the switching-devices SUMO page would result in more people getting through the process. Interestingly, it seemed to have a small but significant effect! We’ve enabled the illustrations by default.
Illustration of a very adorable fox peeking from a bundle of cardboard boxes.

Inside this box is an adorable fox!

Performance Tools (aka Firefox Profiler)

  • Display nicer time units in the timeline ruler for values that are better expressed in minutes or hours. (PR #4774)

Screenshot of the Firefox Profiler timeline displayed in minutes or hours.

  • Show the hovered time at the top of the timeline

Screenshot of a Firefox Profiler timeline displaying a newly implemented vertical line that appears when you hover over a specific time

We now display a vertical line in the timeline when hovering over markers and the stack chart.

  • Performance improvements
    • Improve call tree and flame graph performance for big profiles
    • Make removing a focus function faster
  • Avoid showing the call tree panel when there’s no sample
  • Custom tracks from markers
  • We can now profile build jobs on Taskcluster!
Screenshot of CI tasks running on treeherder and a newly implemented "Open in Firefox Profiler" button on the page.

Under the “Performance” tab, you can find a button to open Firefox Profiler.

Screenshot of a Firefox Profiler page, after opening it from treeherder.

After pressing the button, a new Firefox Profiler page will appear.

  • You can get a profile of your local build by running `./mach resource-usage`


Search and Navigation

  • Drew (:adw) finished integrating the Rust Suggest component into Desktop: 1851255, 1855884, 1857396, 1855884
  • Drew removed support for individual help and dismiss buttons in a Urlbar result as the functionality is currently accessible from the urlbar results menu: 1827966
  • Dale (:daleharvey) continued work on recent searches, which allows users to see a list of recent searches while the URL bar is in a zero-prefix mode: 1852848, 1858639
  • Marc Seibert (:mseibert) made it such that trimming the URLs in the URL bar only occurs if the directionality doesn’t change to RTL: 1836962
  • Mandy (:mcheang) continued work on Search Service improvements, refactoring initialization and setting up the steps for SearchService to eventually migrate to a new search configuration: 1855084, 1852792
  • Karandeep (:klubana) tweaked the UI of a clipboard suggestion to use a chiclet, different text, and icon: 1858141

Storybook/Reusable Components

  • sclements fixed a bug where the panel-list sub-menu was getting cut off when the browser window isn’t wide enough (Bug 1855827)
  • ganna landed a patch to use moz-message-bar in the unified extensions panel (Bug 1844850)
  • ganna landed a fix for an issue where split buttons in chrome panels had a gap between the buttons (Bug 1854420)
  • emilio fixed an issue where calling setAttribute(“selectedIndex”, 0) on the XUL deck element wasn’t working as expected (Bug 1856189)
  • hjones landed a patch to make the space between the new toggle label and button clickable (Bug 1852827)
  • mstriemer has been working on renaming all of the things from “XUL Widgets” to “UI Widgets”

The Rust Programming Language BlogAnnouncing the New Rust Project Directors

We are happy to announce that we have completed the process to elect new Project Directors.

The new Project Directors are:

They will join Ryan Levick and Mark Rousskov to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.

The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation.

Both of these director groups have equal voting power.

We look forward to working with and being represented by this new group of project directors.

We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published. Finally, we want to share our appreciation for the Project Director Elections Subcommittee for working to design and facilitate running this election process.

This was a challenging decision for a number of reasons.

This was also our first time doing this process and we learned a lot to use to improve it going forward. The Project Director Elections Subcommittee will be following up with a retrospective outlining how well we achieved our goals with this process and making suggestions for future elections. We are expecting another election next year to start a rotating cadence of 2-year terms. Project governance is about iterating and refining over time.

Once again, we thank all who were involved in this process and we are excited to welcome our new Project Directors.

Mozilla Reps CommunityDecommissioning of the Mozilla Reps Program

There is no easy way to do this without bits of sadness, but also accomplishment and happiness while looking back at all the things this program succeeded at building. After much careful consideration and evaluation, with the most profound sense of gratitude, we have decided that it is time to retire the program. The Mozilla Reps program was initially built with the aim of bringing structure to local regional communities and helping people find their way to how they can help Mozilla. In the last few years, we have seen that communities tend to get organized around products and interests not necessarily connected to each other. We believe that it makes sense to go where those communities are and address their needs, and that, with so many alternative options available, a separate program brings less value.

The Mozilla Reps program has been the beating heart of our community, igniting passion, and driving positive change across the globe. You have championed the cause of the open web and Mozilla’s mission like none other through your unwavering commitment and boundless enthusiasm. From organizing inspirational events that brought communities closer together, to educating on web literacy and digital empowerment, your impact has been nothing short of extraordinary. We are incredibly grateful for the dedication and passion that Mozilla Reps have demonstrated in advancing Mozilla’s mission. We want to celebrate the remarkable impact of the Mozilla Reps program through its 12 years of existence, and we will follow up with ways to share your stories, memories, and reflections with us. Together, we will create a tapestry of gratitude, honoring the incredible journey we have shared.

We do express our heartfelt gratitude to all the Mozilla Reps who have dedicated their time, energy, and passion to the program over the years. As we say farewell to this cherished program, we want you to know that your contributions have left an indelible mark on the world. Together, we have built a legacy of change that will resonate far beyond the boundaries of this program. The friendships forged, the knowledge shared, and the dreams kindled will forever remain as beacons of hope guiding us towards a more inclusive, open, and safer internet for all.

What Does This Mean for Current Mozilla Reps?

As of 01/09/2023, Mozilla Reps is a retired program. We will follow up on all transition and closing logistics during this month, but we consider it closed. If you have an activity going on, please close it by the end of the month.

Staying Connected

While the Mozilla Reps program is being decommissioned, we want to assure you that your connections and your direct contributions to product teams stay the same. You can continue your work within Code, L10n, SUMO, AMO, MDN, Thunderbird, Hubs, Common Voice, Connect, and many other projects that we foster under Mozilla.

As we move forward, we want to emphasize that this decision does not diminish the value we place on the contributions of our volunteers. Mozilla remains committed to fostering a strong and vibrant volunteer community, and we will continue to provide alternative opportunities for contribution and collaboration. Its mission remains unchanged. We are still committed to building a better internet for everyone, and we will continue to do so through our other programs and new initiatives. I am more than sure you will champion many of them, with as much impact as you did as Reps. We count on your support as we navigate this change together.

Once a Mozillian, always a Mozillian!

Thank you for all your hard work, creativity, and passion! We’ll be forever impressed by what we achieved as a community!

[This is a repost of the original announcement made by Ioana Chiorean on Discourse]

Mozilla Privacy BlogMozilla Meetups – Code to Conduct in AI: Open Source, Privacy, and More

Register Below!

The AI wave has generated excitement but also debate, as policymakers across the globe grapple with tough policy questions. We’re collectively wondering: How do we approach regulation in the US?  How do we get the right safeguards in place when it comes to privacy and other harms?  How do we treat open source?  How do we create policies that enable a diverse AI landscape that works for everyone?

The event will feature a fireside chat, followed by an expert panel discussion. A happy hour with drinks and light fare will follow.

Date and time: Wednesday, November 15th – event starts @ 4:00PM promptly (doors @ 3:45pm)

Location: The Eaton Hotel, Wild Days – Rooftop, 1201 K St NW, Washington, DC

The post Mozilla Meetups – Code to Conduct in AI: Open Source, Privacy, and More appeared first on Open Policy & Advocacy.

Niko MatsakisEurorust reflections

I’m on the plane back to the US from Belgium now and feeling grateful for having had the chance to speak at the EuroRust conference1. EuroRust was the first Rust-focused conference that I’ve attended since COVID (though not the first conference overall). It was also the first Rust-focused conference that I’ve attended in Europe since…ever, from what I recall.2 Since many of us were going to be in attendance, the types team also organized an in-person meetup which took place for 3 days before the conference itself3. Both the meetup and the conference were great in many ways, and sparked a lot of ideas. I think I’ll be writing blog posts about them for weeks to come, but I thought that to start, I’d write up something general about the conference itself, and some of my takeaways from the experience.

It’s great to talk to people using Rust

When I started on Rust, I figured the project was never going to go anywhere — I mean, come on, we were making a new programming language. What are the odds it’ll be a success? But it still seemed like fun. So I set myself a simple benchmark: I will consider the project a success the first time I see an announcement where somebody built something cool with it, and I didn’t know them beforehand. In those days, everybody using Rust was also hanging out on IRC or on the mailing list.

Well, that turned out to be a touch on the conservative side. These days, Rust has gotten big enough that the core project itself is just a small piece of the action. It’s just amazing to hear all the things people are using Rust for. Just looking at the conference sponsors alone, I loved meeting the Shuttle and Tauri/CrabNebula teams and I got excited about playing with both of them. I had a great time talking to the RustRover team about the possibilities for building custom diagnostics and the ways we could leverage their custom GUI to finally get past the limitations of the terminal when we present error messages. But one of my favorite parts happened on the tram ride home, when I randomly met the maintainer of PyO3. Such a cool project, and definite inspiration for work I’ve been doing lately, like duchess.

Rust teachers everywhere

Speaking of Shuttle and Tauri, both of them are interesting in a particular way: they are empowerment efforts in their own right, and so they attract people whose primary interest is not Rust itself, but rather achieving some other goal (e.g., cloud development, or building a GUI application). It’s cool to see Rust empowering people to build other empowerment apps, but it’s also a fascinating source of data. Both of those projects have started embarking on efforts to teach Rust precisely because that will help grow their userbase. The Shuttle blog has all kinds of interesting articles4; the Tauri folks told me about their efforts to build Rust articles specifically targeting JavaScript and TypeScript programmers, which required careful choice of terminology and concepts.

The whole RustFest idea seems to have really worked

At some point, RustFest morphed from a particular conference into a kind of ‘meta conference’ organization, helping others to organize and run their own events. Looking over the calendar of Rust events in Europe, I have to say, that looks like it’s worked out pretty dang well. Hats off to y’all on that: between EuroRust, RustLab in Italy, Rust Nation in the UK, and probably a bunch more that I’m not aware of, there is plenty going on.

I should also say that meeting the conference organizers at this conference was very nice. Both the EuroRust organizers (Marco and Sarah, from Mainmatter) were great to talk to, and I finally got to meet Ernest (now organizing Rust Nation in the UK), whom I’ve talked to on and off over the years but never met in person.

I do still miss the cozy chats at Rust Belt Rust (RIP), but this new generation of Rust conferences (and their organizers) is pretty rad too. Plus I get to eat good cheese and drink beer outdoors, two things that for reasons unbeknownst to me are all too rare in the United States.

The kids are all right

One of my favorite things about being involved in the Rust project has been watching it sustain and reinvent itself over the years. This year at the conference I got to see the “new generation” of Rust maintainers and contributors — some of them, like @davidtwco, I had met before, back when they were “wanna be” Rust contributors, and they have since gone on to drive core initiatives like the diagnostic translation effort. Others — like @bjorn3, @WaffleLapkin, @Nilstrieb, and even @MaraBos — I had never had a chance to meet before. I love that working on Rust lets you interact with people from all over the world, but there’s nothing like putting a name to a face, and getting to give someone a hug or shake their hand.

But yeah, there’s that thing

So, let me say up front, due to scheduling conflicts, I wasn’t able to attend RustConf this year (or last year, as it happens). But I read Adam Chalmers’ blog post that many people were talking about, and I saw this paragraph…

Rustconf definitely felt sadder and downbeat than my previous visit. Rustconf 2019 felt jubilant. The opening keynote celebrated the many exciting things that had happened over the last year. Non-lexical lifetimes had just shipped, which removed a ton of confusing borrow checker edge cases. Async/await was just a few short months away from being stabilized, unleashing a lot of high-performance, massively-scalable software. Eliza Weisman was presenting a new async tracing library which soon took over the Rust ecosystem. Lin Clark presented about how you could actually compile Rust into this niche thing called WebAssembly and get Rust to run on the frontend – awesome! It felt like Rust had a clear vision and was rapidly achieving its goals. I was super excited to be part of this revolution in software engineering.

…and it made me feel really sad.[5] Rust’s mission has always been empowerment. I’ve always loved the “can do” spirit of Rust, the way we aim high and try to push boundaries in every way we can. To me, the open source org has always been an important part of how we empower.

Developing a programming language, especially a compiled one, is often viewed as the work of “wizards”, just like systems programming. I think Rust proves that this “wizard-like” reputation has more to do with the limitations of the tools we were using than the task itself. But just like Rust has the goal of making systems programming more practical and accessible, I like to think the Rust org helps to open up language development to a wider audience. I’ve seen so many people come to Rust, full of enthusiasm but not so much experience, and use it to launch a new career.

But, if I’m honest, I’ve also seen a lot of people come into Rust full of enthusiasm and wind up burned out and frustrated. And sometimes I think that’s precisely because of our “sky’s the limit” attitude — sometimes we can get so ambitious, we set ourselves up to crash and burn.

Sometimes “thinking big” means getting nowhere

Everybody wants to “think big”. And Rust has always prided itself on taking a “holistic view” of problems — we’ve tried to pay attention to the whole project, not just generating good code, but targeting the whole experience with quality diagnostics, a build system, an easy way to manage which Rust version you want, a package ecosystem, etc. But when we look at all the stuff we’ve built, it’s easy to forget how we got there: incrementally and painfully.

I mean, in Ye Olde Days of Rust, we didn’t even have a borrow checker. Soundness was an aspiration, not a reality. And once we got one, it sucked to use, because the design was still stuck in some ‘old style’ thinking. And even once we had INHTWAMA[6], the error messages were pretty confounding. And once we invented the idea of multiline errors, it wasn’t until late 2018 that we had NLL, which changed the game again. And that’s just the compiler! The story is pretty much the same for every other detail of the language. You used to have to build the compiler with a Makefile that was so complex, I wouldn’t be surprised if it were self-aware.[7]

When I feel burned out, one of the biggest reasons is that I’ve fallen into the trap of thinking too big, doing too much, and as a result I am spread too thin and everything seems impossible. Just look back three years ago: the async working group was driving this crazy project, the Async Vision Doc, and it seemed like we were on top of the world. We recorded all these stories of how async Rust was hard, and we were thinking about how we could solve it. Not surprisingly, we found that these stories were sometimes language problems, but just as often they were library limitations, or gaps in the tooling, or the docs. And so we set out an expansive vision, spawning out a ton of subprojects. And all the time, there was a voice in my head saying, “is this really going to work?”

Well, I’d say the answer is “no”. I mean, we made a lot of progress. We are going to stabilize async functions in traits this year, and that is awesome. We made a bunch of improvements to async usability, most notably cjgillot’s fantastic PR that improves the accuracy of send bounds on futures, preventing a whole ton of false errors (though that work wasn’t really done in coordination with the async wg effort per se, it’s just because cjgillot is out there silently making huge refactors[8]).

And yet, there’s a lot we didn’t do. We don’t have generators. We didn’t yet find a way to make futures smaller. We didn’t really drive to ground the conversation on structured concurrency. We also took a lot longer to do stuff than I hoped. I thought async functions in traits would ship in 2021 — it’s shipping now, but it’s 2023.

Focus, focus, focus; iterate, iterate, iterate

One lesson I take away from the async wg experience is focus, focus, focus and iterate, iterate, iterate. You can (almost) never start too small. I think we were absolutely right that “doing async right” demands addressing all of those concerns, but I think that we overestimated our ability to coordinate them up front, and as a result, things like shipping async fn in traits took longer than they needed to. We are going to get the async shiny future, but we’re going to get it one step at a time.

Also: we’re a lot bigger than we used to be

Still, sometimes I find that when I float ideas, I encounter a reflexive bit of pushback: “sounds great, but who’s going to do it?” On the one hand, that’s the voice of experience, coming back from one too many Think Big plans that didn’t work out. But on the other, sometimes it feels a bit like “old school” thinking to me. Rust is not the dinky little project it used to be, where we all knew everybody. Rust is used by millions of developers and is one of the fastest-growing languages today; it powers the cloud and it’s quite possibly in your kernel. In many ways, the open source org hasn’t caught up with this growth: I’d still like to see more companies hiring dedicated teams of Rust developers, or giving their employees paid time to work on Rust[9]. But I think that growth is coming, especially if we work harder at harnessing it, and I am very excited about what that can mean.

Nothing succeeds like success

Now I know that when we talk about burnout, we’re also talking about other kinds of drama. Maybe you think that things like ‘working iteratively’ and having more people or resources are not going to help when the problem is conflicts between people or organizations. And you’re not wrong, it’s not going to solve all conflict. But I also think that an awful lot of conflict ultimately comes out of zero-sum, scarcity-oriented thinking, or from feeling disempowered to achieve the goals you set out to do. To help with burnout, we need to do better at a number of things, including, I think, helping each other to practice empathy and manage conflict more productively[10], but I think we also need to do better at shipping product.

Don’t be afraid to fail — you got this

One of my favorite conversations from the whole conference happened after the conference itself. I was in the midst of pitching Jack Huey on some of the organizational ideas that I’m really excited about right now, which I think can help bring the Rust project closer to being the empowering, inclusive open-source project it aspires to be. Jack wasn’t sure if they were going to work. “But”, he said, “what the heck, let’s try it! I mean, what have we got to lose? If it doesn’t work, we’ll learn something, and do something else.”[11] Hell yes.

  1. As I usually do, I’ve put my slides online. If you’re curious, take a look! If you see a typo, maybe open a PR. The speaker notes have some of the “soundtrack”, though not all of it. ↩︎

  2. Somehow, I never made it to a RustFest. ↩︎

  3. You can find the agenda here. It contains links to the briefing documents that we prepared in advance, along with loose notes that we took during the discussions. I expect we’ll author a blog post covering the key developments on the Inside Rust blog. ↩︎

  4. Including one I can’t wait to read about OAuth – I tried to understand Github’s docs on OAuth and just got completely lost. ↩︎

  5. Side note, but I think Rust 2024 is shaping up to be another hugely impactful edition. There’s a very good chance we’ll have async functions in traits, type alias impl trait, and polonius, each of which is a massive usability and expressiveness win. I’m hoping we’ll also get improved temporary lifetimes in the new edition, eliminating the “blocking bugs” identified as among the most common in real-world Rust programs. And of course the last few years have already seen let-else, scoped threads, cargo add, and a variety of other changes. Gonna be great! ↩︎

  6. INHTWAMA was the rather awkward (and inaccurate) acronym that we gave to the idea of “aliasing xor mutation” — i.e., the key principle underlying Rust’s borrow checker. The name comes from a blog post I wrote called “Imagine never hearing the phrase aliasable, mutable again”, which @pcwalton incorrectly remembered as “Imagine never hearing the words aliasable, mutable again”, and hence shortened to INHTWAMA. I notice now though that this acronym was also frequently mutated to IMHTWAMA which just makes no sense at all. ↩︎

  7. I learned a lot from reading Rust’s Makefile in the early days. I had no idea you could model function calls in make with macros. Brilliant. I’ve always deeply admired Graydon’s Makefile wizardry there, though it occurs to me now that I never checked the git logs – maybe it was somebody else! I’ll have to go look later. ↩︎

  8. Side note, but more often than not, I think cjgillot’s approaches are not going to work. And so far I’m 0 for 2 on this, he’s always been right. To paraphrase Brendan Eich, “always bet on cjgillot”. ↩︎

  9. And I have some thoughts on how we can do better at encouraging them! More on that in some later posts. ↩︎

  10. One of the biggest lessons for me in my personal life has been realizing that not telling people when I feel upset is not necessarily being kind to them and certainly not kind to myself. It seems like avoiding conflict, but it can actually lead to much larger conflicts down the line. ↩︎

  11. Full confession, this quote is made up out of thin air. I have no memory of what words he used. But this is what he meant! ↩︎

Hacks.Mozilla.Org
Built for Privacy: Partnering to Deploy Oblivious HTTP and Prio in Firefox

Protecting user privacy is a core element of Mozilla’s vision for the web and the internet at large. In pursuit of this vision, we’re pleased to announce new partnerships with Fastly and Divvi Up to deploy privacy-preserving technology in Firefox.

Mozilla builds a number of tools that help people defend their privacy online, but the need for these tools reflects a world where companies view invasive data collection as necessary for building good products and making money. A zero-sum game between privacy and business interests is not a healthy state of affairs. Therefore, we dedicate considerable effort to developing and advancing new technologies that enable businesses to achieve their goals without compromising peoples’ privacy. This is a focus of our work on web standards, as well as in how we build Firefox itself.

Building an excellent browser while maintaining a high standard for privacy sometimes requires this kind of new technology. For example: we put a lot of effort into keeping Firefox fast. This involves extensive automated testing, but also monitoring how it’s performing for real users. Firefox currently reports generic performance metrics like page-load time but does not associate those metrics with specific sites, because doing so would reveal peoples’ browsing history. These internet-wide averages are somewhat informative but not particularly actionable. Sites are constantly deploying code changes and occasionally those changes can trigger performance bugs in browsers. If we knew that a specific site got much slower overnight, we could likely isolate the cause and fix it. Unfortunately, we lack that visibility today, which hinders our ability to make Firefox great.

This is a classic problem in data collection: We want aggregate data, but the naive way to get it involves collecting sensitive information about individual people. The solution is to develop technology that delivers the same insights while keeping information about any individual person verifiably private.

In recent years, Mozilla has worked with others to advance two such technologies — Oblivious HTTP and the Prio-based Distributed Aggregation Protocol (DAP) — towards being proper internet standards that are practical to deploy in production. Oblivious HTTP works by routing encrypted data through an intermediary to conceal its source, whereas DAP/Prio splits the data into two shares and sends each share to a different server [1]. Despite their different shapes, both technologies rely on a similar principle: By processing the data jointly across two independent parties, they ensure neither party holds the information required to reveal sensitive information about someone.
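The share-splitting half of this (DAP/Prio) is easy to sketch. The Python toy below shows additive secret sharing only; the modulus is an arbitrary illustrative choice, and the real protocol adds validity proofs and a proper finite field, so treat this as a sketch of the principle rather than of the protocol itself:

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus, not the field the real protocol uses

def split(value):
    """Split one measurement into two additive shares mod P.
    Each share on its own is uniformly random and reveals nothing."""
    share_a = secrets.randbelow(P)
    share_b = (value - share_a) % P
    return share_a, share_b

# Each client splits its measurement; server A sees only the first shares,
# server B only the second. Neither server can recover any individual value.
values = [3, 1, 4, 1, 5]
shares = [split(v) for v in values]
sum_a = sum(a for a, _ in shares) % P
sum_b = sum(b for _, b in shares) % P

# Combining only the per-server sums reveals the aggregate, nothing more.
aggregate = (sum_a + sum_b) % P
print(aggregate)  # 14, the sum of all measurements
```

The key property is that each server learns only a sum of random-looking numbers; the sensitive aggregate appears only when the two servers' partial sums are combined.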

We therefore need to partner with another independent and trustworthy organization to deploy each technology in Firefox. Having worked for some time to develop and validate both technologies in staging environments, we’ve now taken the next step to engage Fastly to operate an OHTTP relay and Divvi Up to operate a DAP aggregator. Both Fastly and ISRG (the nonprofit behind Divvi Up and Let’s Encrypt) have excellent reputations for acting with integrity, and they have staked those reputations on the faithful operation of these services. So even in a mirror universe where we tried to persuade them to cheat, they have a strong incentive to hold the line.

Our objective at Mozilla is to develop viable alternatives to the things that are wrong with the internet today and move the entire industry by demonstrating that it’s possible to do better. In the short term, these technologies will help us keep Firefox competitive while adhering to our longstanding principles around sensitive data. Over the long term, we want to see these kinds of strong privacy guarantees become the norm, and we will continue to work towards such a future.


[1] Each approach is best-suited to different scenarios, which is why we’re investing in both. Oblivious HTTP is more flexible and can be used in interactive contexts, whereas DAP/Prio can be used in situations where the payload itself might be identifying.


The post Built for Privacy: Partnering to Deploy Oblivious HTTP and Prio in Firefox appeared first on Mozilla Hacks - the Web developer blog.


The Mozilla Blog
‘Reclaim Expression’: An immersive installation that puts you at the center of the internet

Concept art shows a person looking at strips of paper that read “Reclaim Expression.” Caption: The installation “Reclaim Expression” was created by the Liganova Horizon team, led by Sebastian Kraus, in collaboration with Christine Mayerhofer for Mozilla’s Reclaim the Internet event in Berlin, Oct. 12 to 16, 2023.

Mozilla’s Reclaim the Internet event at the Alte Münze in Berlin, happening Oct. 12 to 16, features an immersive journey that invites people to act, build and choose to reimagine our digital future. The journey includes three art installations where visitors can explore how reclaiming the internet will help us take back expression, inspiration and wonder online.

Below is a preview of the installation “Reclaim Expression,” created by the Liganova Horizon team, led by Sebastian Kraus, in collaboration with Christine Mayerhofer. You can click the following links to read about the other two installations: “Reclaim Inspiration” and “Reclaim Wonder.” Reserve free tickets to all three exhibits here.

Imagine the internet with you at the center, rather than big tech and its profit margins. What would it feel like?

That was the assignment for digital artist Christine Mayerhofer and Liganova Horizon’s executive creative director, Sebastian Kraus. They embarked on building an interactive art installation that embodies Mozilla’s mission to empower everyone so that we, as individuals, can shape the internet and our own experiences online. 

Visitors who step into the exhibit, titled “Reclaim Expression,” are handed the reins to craft the room’s aesthetic by uploading personal images from their camera rolls. Through projection mapping and reactive soundscapes, people’s contributions transform the space in real time.

“We wanted to create a digital landscape that’s interactively shapeable,” Sebastian said. “Every time you participate, you bring in an expression, the whole landscape will change.”

Sebastian, who doesn’t consider himself a traditional artist but has worked in creative branding for the last two decades, found the project challenging but less restrictive than what a traditional advertising career typically allows. He was inspired by his first internet experience, in 1995: “It was quite a mess with our first computer. It took ages for a website to load with a 56k [dial-up] modem. But even then, I got a glimpse of what the internet could be. I connected with other students and worked on school projects online. It was a time of excitement and discovery.”

“Maybe people will laugh. Maybe they’ll feel connected to other humans. … They should be able to feel that it’s their own, live experience.”

Christine Mayerhofer

Sebastian’s experience in creative advertising complemented Christine’s extensive background in immersive art. She first started manipulating light to create large installations as a college student, saying that light – even when produced by a machine – made her feel alive. Christine hopes visitors to the exhibit feel the same way. “I tried to make it emotional,” she said. “Maybe people will laugh. Maybe they’ll feel connected to other humans. I want them to interpret [the installation] themselves and not need an explanation. They should be able to feel that it’s their own, live experience.”

Like Sebastian, Christine’s first memory of using the internet involved connection with others. “I remember one holiday, when I was 14, waking up early every day just to check if someone’s online. All I did was chat with people.”

Threading their early online experiences to the project at hand, Sebastian and Christine created something that reflects what they both hope for the future of the internet: a space where people can express their individuality while connecting with others to build something unique for everyone. Corporations may be the ones who build the online platforms we use, but people – once we recognize and wield our power – have the ability to make technology more transparent, responsible and inclusive.

“The internet is fluid, growing all the time,” Christine said. “People can come together, take care of this tool and handle it with honesty, so that it serves and remains free for everybody.”

Sebastian added, “You will be surprised what kind of nice things will happen if we all collaborate and work together.”



The post ‘Reclaim Expression’: An immersive installation that puts you at the center of the internet appeared first on The Mozilla Blog.

Firefox Nightly
Developments Aplenty for 120 – These Weeks in Firefox: Issue 146


Friends of the Firefox team

Resolved bugs (excluding employees)

New contributors (🌟 = first patch)


Project Updates


  • A11y team is working on activating a11y_checks on CI – these are Tier 2 jobs and would not cause backouts but will prevent adding any new inaccessible controls (like buttons that are not focusable or are not labeled). At the moment there are about 900 test failures in central, so we’re adding fail-if clauses in test manifests to further investigate reported labels and focus concerns. There are false positives and special cases and we will be fine-tuning the tests and filing bugs when controls are confirmed to be not accessible.
  • How can you prepare? Ensure that if any UI you’re working on can be clicked, it can be focused with a keyboard and it has an accessible name (use Accessibility Inspector, a keyboard, and/or a screen reader to check it). If in doubt, ask any questions in #Accessibility room in Matrix.

Developer Tools

  • Contributors
    • Zac Svoboda (:zacnomore) tweaked the color we use for links in DevTools so they are accessible (bug)
    • Vinny Diehl fixed the Shape path editor when inset() uses pixels on an element which is sized in percentage (bug)
  • Yury updated wasmparser library to 5.8.0, which adds support for Wasm-GC proposal in the debugger (bug)
  • Alex also made the project search persist on reload (bug)
  • Hubert also improved debugger performance by lazy loading information for the Quick open panel (bug)
  • Hubert fixed a Debugger crash caused by a race condition between page load and adding a breakpoint (bug)
  • Nicolas made paused thread styling in the Threads panel more noticeable (bug)
  • The accessibility team ran an audit of the most important panels (Inspector, Console, Debugger) and reported a list of issues. Nicolas will dedicate 6 weeks to fixing them (at least all the P2s). A few bugs are already resolved (bug, bug, bug, bug, bug, bug, progress chart)

WebDriver BiDi

  • Julian added the browsingContext.navigationStarted event, which is emitted whenever a navigation to another document occurs (bug).
  • Sasha implemented the script.realmCreated event, which is emitted when a realm is available as a target for script evaluation (bug).
  • Sasha added support for the serialization of generator and proxy (bug).
  • Henrik released a new geckodriver version: 0.49.0, including support for all the Web Authentication extension commands (bug).

ESMification status

  • XPCOMUtils.defineLazyModuleGetter has now been removed.
  • Support for Cu.import has been removed from most of ESLint. A rule remains to disallow its use.
  • ESMified status:
    • browser: 87%
    • toolkit: 99%
    • Total:  95.72% (up from 95.55%)
  • #esmification on Matrix

Lint, Docs and Workflow

Migration Improvements

  • We’ve got patches up to add the capability of importing from other browsers when Firefox is installed as a Snap package on Ubuntu Linux. Going through review now, and we’re hoping to have this capability available in Firefox 120.
  • We did a quick audit and check via Telemetry events, and it seems like we no longer have any usage of the legacy migration wizard out in the wild. We’re going to remove the pref to re-enable it soon, and then remove the old wizard altogether shortly after.
  • For device migration, we have designs from UX to add calendar and email reminders to the wizard on SUMO. This ticket is our first foray into this work.

Search and Navigation

  • Marco fixed a bug where the address bar’s “Remove from history” command was not applying to adaptive history results. This was fixed in 1844771
  • Dao has refactored a lot of complicated UrlbarView CSS to use nested CSS @ 1853911, 1853918, 1854082 and more
  • Drew has worked on integrating the new cross platform rust component for suggest results @ 1854060
  • Daisuke refactored UrlbarProviderSearchTips to avoid main thread I/O @ 1620576
  • Stephanie is working on adding categorisation logic to search telemetry @ 1846368
  • Mark has replaced the SearchService ‘init-complete’ notification with a promise API @ 1832704
  • Preparing for Sarah and Sam to take over ownership of the Session Restore module

Storybook/Reusable Components

  • mconley landed a patch to use moz-toggle in the about:newtab personalization panel (Bug 1812135)
  • hjones updated our Storybook mach commands so you can now use ./mach storybook to start the Storybook server and launch the site in a local build of Firefox (Bug 1818007)
    • you can run ./mach storybook --no-open if you don’t want to spin up a local build

Adrian Gaudebert
Removing Dawnmaker's 3rd dimension

If you've played Dawnmaker in the last 6 months, you will hopefully have noticed that the board was rendered in 3D. After receiving a lot of questions and criticism about that feature, we decided a few months ago to redo the whole board rendering in 2D. After two months of work, I am proud to announce that Dawnmaker will soon have a new version containing only two dimensions! Today we're going to dig deeper into the why and how of this transition.

The point of 2D

The debate over doing 2D or 3D had been a long discussion internally. What turned the tide was the Game Camp: we showed the game to a few publishers, who almost unanimously criticized the fact that the game was in 3D. One question that really stuck with me was: what value is 3D bringing to the game? We honestly struggled to answer that question, and so went back to the drawing board, wondering what it would take to switch the game to a 2D rendering engine, how long it would take, and what long-term benefits it would bring.

There were 3 main reasons why we decided to do 2D. First, over the course of the rest of the game, it would cost us less to get to the level of quality we wanted to reach. It is easier to do pretty assets in 2D than it is in 3D: in 2D, most of the rendering is done outside of the game, whereas in 3D the game itself has to do a lot of work to render pretty things. Doing 2D removes a lot of programming work, from complex rendering pipelines to optimizations.

The second reason is that doing 2D is intrinsically less uncertain. Rendering things in two dimensions is a lot easier than it is in three, and programming is also a lot simpler. Doing 2D reduces the potential for bugs, inconsistencies between different computers, operating systems, etc.

The last reason is that we intend to release Dawnmaker on mobile phones. And phones are a lot less powerful than PCs. Doing 3D on a phone requires another level of optimizations, especially regarding graphics rendering. We'll have different problems for mobile platforms with our 2D rendering (I'll get back to that), but we expect they will be a lot easier to manage than having to deal with advanced 3D rendering techniques for smaller devices.

Rebuilding the board

So we moved our game to a 2D rendering system. What did it mean? First, a lot of re-programming features of the game: showing the board, animating things like the creation of a building, redoing the Smog entirely (but in a better way, so that's good). It also meant rethinking our asset production pipeline. With 3D rendering, Alexis could work on a building in 3D, export it with its animations, and we could quite simply import it into the game and render it. With a 2D rendering system, it gets a bit more complex. Animations require a large number of sprites (or images, same thing) that you show one after the other to create movement, just like a movie. But loading many images can eat up a lot of memory really fast. So we have to be smart about how we do animations: we cannot have 200 sprites for each building of the game — we intend to have about 150 buildings in the final game, meaning about 30,000 images to load. That would be too much for many devices, including computers. Instead, we are going to split the buildings into separate elements: the base of the building, and a few animations that we'll reuse on several different buildings. This way, we intend to have one unique sprite for each building, and a few dozen animations, vastly reducing the size of the game and the memory load.
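A quick back-of-the-envelope calculation shows why the naive approach doesn't fly. Only the building and sprite counts come from the text; the 256x256 RGBA frame size and the 36-animations-of-20-frames split are assumptions for illustration:

```python
# Naive approach: a full set of animation frames for every building.
buildings = 150
frames_per_building = 200
naive_sprites = buildings * frames_per_building
print(naive_sprites)  # 30000 images, matching the post's estimate

bytes_per_frame = 256 * 256 * 4  # assumed: 256x256, uncompressed RGBA
naive_bytes = naive_sprites * bytes_per_frame
print(round(naive_bytes / 2**30, 1))  # about 7.3 GiB

# Shared-animation approach: one base sprite per building, plus a few
# dozen reusable animations (assume 36 animations of ~20 frames each).
shared_sprites = buildings + 36 * 20
shared_bytes = shared_sprites * bytes_per_frame
print(round(shared_bytes / 2**20))  # about 218 MiB
```

Even with these rough assumptions, sharing animations across buildings cuts the image count by more than an order of magnitude, which is the difference between "impossible on a phone" and "manageable".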

OK, that was a bit technical; my apologies to those of you who don't care much about that level of detail. Here's a treat to thank you for reading our newsletter: a glimpse of the new 2D board! We've changed the artistic direction a bit, using this opportunity to improve some textures and decorations, like grass and trees.

Dawnmaker board in 3D, Oct. 2023

The board in 3D, as currently available in our demo.

Dawnmaker board in 2D, Oct. 2023

A glimpse of the new 2D board, which will be released in a future version of the demo.


This piece was initially sent out to the readers of our newsletter! Want to join in on the fun? Head over to Dawnmaker's presentation page and fill out the form. You'll receive monthly stories about how we're making this game, plus the latest news of its development.

Join our community!

IRL (podcast): With AIs Wide Open

Are today’s large language models too hot to handle? Bridget Todd digs into the risks and rewards of open sourcing the tech that makes ChatGPT talk.

In their competitive rush to release powerful LLMs to the world, tech companies are fueling a controversy about what should and shouldn’t be open in generative AI.

In this episode, we meet open source research communities who have stepped up to develop more responsible machine learning alternatives.

David Evan Harris worked at Meta to make AI more responsible and now shares his concerns about the risks of open large language models for disinformation and more. 

Abeba Birhane is a Mozilla advisor and cognitive scientist who calls for openness to facilitate independent audits of large datasets sourced from the internet.

Sasha Luccioni is a researcher and climate lead at Hugging Face who says open source communities are key to developing ethical and sustainable machine learning.

Andriy Mulyar is co-founder and CTO of Nomic, the startup behind the open source chatbot GPT4All, an offline and private alternative to ChatGPT.

IRL: Online Life is Real Life is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd talks to AI builders who put people ahead of profit.

Patrick Cloke: Handling GitHub Notifications


This was originally written for some coworkers and assumes a mostly GitHub-based workflow. It has been lightly edited to be more readable, but if your organization doesn’t use GitHub the way we do, it might not apply well.

GitHub can generate a lot of notifications, which can be difficult to follow; this post documents some of my process for keeping up with them! For reference, I subscribe to:

  1. All notifications for the repositories I work in somewhat frequently;
  2. Only releases and security alerts for repositories which might affect me (e.g. upstream repositories);
  3. Other issues that might be related to the project I’m working on (e.g. bugs in upstream projects).

I also watch a bunch of open source projects and have some of my own projects. (These are mostly Twisted or Celery related.)

I generally enjoy having some idea of “everything” going on in my team (in enough detail to know what people are generally working on).

To avoid being overwhelmed by notifications I only subscribe to specific issues for repositories from other teams or projects. These are usually:

  • Things that personally annoy me (and I want to see fixed);
  • Things that are directly related to or blocking my work;

For reference, I currently watch 321 repositories, although most of my notifications probably come from < 20 repositories. I also have 32 repositories with custom notification rules — those are set to only releases & security alerts. (And I have 1 muted repository.) [1]

When / how

I tend to do the following daily:

  • Catch-up on notifications in the morning (takes ~15 - 45 minutes for GitHub, chat, e-mail, etc.).
  • Check notifications a few times during the day (between meetings, after lunch, while tests run, etc.).

Each time I check notifications I quickly triage each notification by skimming the title to see if I’m interested (sometimes the title is enough info!). From this I do one of several things:

  • Open any issue in a separate tab to come back to if I need to read more (or potentially take action). I usually skim the update, leaving it open if I need to respond, closing the tab if I don’t.
  • “Mark as read” if I know it does not require anything from me:
    • A review someone else is handling (unless it is a bit of code I’m keen to understand, or know is tricky and feel some ownership over).
    • The title contains enough information that I don’t need to read the issue (e.g. a colleague filing a follow-up issue).
    • Obvious support requests, unless I’m the maintainer. [2]
    • Random MSCs / matrix-doc issues that I don’t care about.
  • Unsubscribing if I’m not interested in following the issue (e.g. an open source project is re-doing their CI). This was key for me watching other projects that I only somewhat care about.
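The triage pass above can be sketched programmatically as well. This is a purely illustrative Python sketch — the skip list is my own hypothetical example, not an official set of reasons — using the thread shape GitHub's REST API returns from GET /notifications:

```python
# Hypothetical triage pass over GitHub notification threads.
# "reason" and "unread" mirror fields from GET /notifications.
SKIP_REASONS = {"ci_activity", "security_alert"}  # illustrative, not exhaustive

def triage(notifications):
    """Partition notifications into ones worth opening and ones to mark as done."""
    to_open, done = [], []
    for n in notifications:
        if not n["unread"] or n["reason"] in SKIP_REASONS:
            done.append(n)
        else:
            to_open.append(n)
    return to_open, done

sample = [
    {"id": "1", "reason": "review_requested", "unread": True},
    {"id": "2", "reason": "ci_activity", "unread": True},
    {"id": "3", "reason": "mention", "unread": False},
]
to_open, done = triage(sample)
print([n["id"] for n in to_open])  # only the review request needs a look
```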

I use both Thunderbird and the GitHub website (specifically the unread notifications view) to go through notifications. Note that the website has quick buttons on the right which I use frequently: “Done” and “Unsubscribe” (there is also “Save” — which I do not use, I mark as unread if I need to come back). It can also be useful to “Mark as done” an entire repository for projects I follow out of vague interest, but don’t have time to read at the moment.

“Open unread” is useful to get everything into separate tabs for later processing (and to avoid waiting for GitHub to load). I usually use it when I have < 10 notifications left for a particular repository.

I usually attempt to go through notifications that I know I won’t have to respond to first, as they can be quickly processed and reduce the overwhelming number of notifications.


This workflow refers to using GitHub with Mozilla Thunderbird (via Fastmail) and Mozilla Firefox, but none of it is particular to those applications; it can be adapted to others.


If you use GitHub for both work and other personal / open source projects it can be helpful to route your work notifications to a separate email address. (This is a good idea regardless for security & intellectual property concerns.)

Your default email can be configured on the Notifications page and separation by organization can be configured on the Custom routing page. Under “Subscriptions” on the Notification page, I have both “Watching” and “Participating, @mentions and custom” set to notify on both GitHub & email.

You may also want to tweak your “Customize email preferences”. I have the following enabled:

  • “Pull Request reviews”
  • “Comments on Issues and Pull Requests”
  • “Include your own updates” — this sounds weird, but you only need to lose a massive comment on GitHub once to want a copy of it in your inbox. (I automatically mark them as read, see below.)

I disable “Pull Request pushes” because I don’t find it useful, although you will still get these via the website.


I have two mail rules set up in Fastmail to move all GitHub email to a separate folder and to mark my own emails as read: [3]

  1. From: Patrick Cloke <>: 1. Mark as read 2. Move to “GitHub”
  2. From email address: 1. Move to “GitHub”

Similar filters can be set up on other mail services, e.g. Google Mail:

  1. Matches: from:(Patrick Cloke <>) 1. Skip Inbox 2. Mark as read 3. Apply label: “GitHub”
  2. Matches: from:( 1. Skip Inbox 2. Apply label: “GitHub”

You can also check for more ways to filter GitHub emails.

Mozilla Thunderbird

For all of my folders I use threads (View > Sort By > Threaded) and only view threads which have unread messages (View > Threads > Threads with Unread).

Other things that are useful:

  • Enable “Automatically mark messages as read”, but with a short delay (I have “After displaying for” set to 1 second). (This lets you move through messages quickly using the keyboard or shortcuts without marking them all by mistake.)
  • Add GitHub to the exceptions list under “Allow remote content in messages”; this can also be added when viewing an email from GitHub. (This will mark the notification as read on the GitHub website automatically.)

I sort my threads by date, oldest first, so I can just press the “n” hotkey to move through messages quickly. I also use the message pane to have some context on remaining unread messages per thread, but it should work fine without that. If you decide you don’t care about the rest of the thread, “r” marks it as read. Note that reading any message in a thread will mark the entire issue or pull request as done on the website. I find this extremely efficient for going through a small number of notifications quickly.

I very much wish there was a way to sync the read status of notifications from GitHub back to Thunderbird. Lacking that, I tend to mark the entire folder as read (Shift+C) if I’ve caught up on the website. [4]

Mozilla Firefox

I use a few GitHub related extensions which can help:


Hopefully some of this is helpful, please let me know if you have any questions or thoughts!

[1]In August 2021 I was watching 263 repositories and had 18 repositories with custom notification settings.
[2]My team rotates through who is the first line of contact for incoming community requests, releases, etc.
[3]I have similar filters set up for GitLab, Sentry, etc.
[4]You could probably do this with a Thunderbird extension, but I’ve failed to find time to look into it.

Mozilla Addons Blog: Changes to Android extension signing

We recently identified a bug in the (AMO) external API that caused all signing requests to mark extension submissions as being Android compatible. A fix for this bug will be pushed on Thursday, October 12th.

When the fix lands, the signing endpoint will stop marking extensions as Android compatible by default, and will instead check the extension’s manifest.json for a property in "browser_specific_settings" named "gecko_android". If present, that object’s "strict_min_version" and "strict_max_version" properties will be used to set the Firefox for Android minimum and maximum values on AMO.
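For instance, a manifest declaring an explicit Android compatibility range might include something like this (the version numbers are illustrative):

```json
{
  "browser_specific_settings": {
    "gecko_android": {
      "strict_min_version": "113.0",
      "strict_max_version": "*"
    }
  }
}
```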

This change also affects community tools that send signing requests to AMO using the web API. This includes, but is not limited to:

What do I need to do?

To continue marking your extension as Android compatible on AMO, ensure that your manifest.json file includes a "browser_specific_settings.gecko_android" object. You can declare the minimum browser version supported using the "strict_min_version" property of this object.

To stop marking your extension as Android compatible on AMO, ensure that your manifest.json file does not include a "browser_specific_settings.gecko_android" object.

For example, to signal that your extension works in Firefox for Android, you would include the following snippet in your extension’s manifest.

  "browser_specific_settings": {
    "gecko_android": {}
  }

You may also want to check the version compatibility settings for your extension on AMO.

The post Changes to Android extension signing appeared first on Mozilla Add-ons Community Blog.

Alexandre Poirot: Declarative Web Component to replace build-time HTML templates

Recently I moved away from Jekyll to build this blog (see more).
While doing so I also moved away from the traditional HTML templates.
Instead I started using a "single file declarative web component".
The nice outcome is that the HTML page now mostly contains the text content of the blog post!
Do not hesitate to view the source of this page :)

This idea of a "single file Web Component" actually comes from Tomasz Jakut (CKEditor), whose very simple JavaScript loader is described over there.

"Single File"

In one file you can bundle the HTML, the CSS and the JavaScript for a given Web Component.
This is handy as you only have one file to register.
On this blog, all the HTML pages displaying a blog post use a single Web Component that implements the blog design/template.
Instead of a build-step processing tool duplicating the template into every single HTML page, the browser engine uses this single Web Component to display all the blog posts the same way.

Here is an overview of this Web Component.
You can see the header with the blog image, the navigation links, the footer, and finally, in the middle of this, a <slot> to define where the blog post content should be put.

  <header></header>
  <nav role="navigation">
    <ul>
      <li><a href="/">Index</a></li>
      <li><a href="/archives/">Archives</a></li>
      <li><a href="/resume/">About me/Resume</a></li>
    </ul>
  </nav>
  <div id="content"><slot>ARTICLE</slot></div>
  <footer><p>Copyright &copy; 2023 - Alexandre Poirot</p></footer>
  <style>
    header { background-image: url("/images/header.png"); }
    nav { background: black; color: white; }
  </style>


This refers to the Declarative-Shadow-DOM and Declarative-Custom-Elements-Strawman proposals... in some way.
The idea is being able to load it from the HTML page, without JavaScript.

On this web site, the Web Component used on all blog post pages is registered like this:

<link rel="component" href="/blog-article.wc">

It will implement the <blog-article> DOM element used in the HTML page. Unfortunately, as this isn't part of any implemented standard, I'm using Tomasz's naive JavaScript loader to make this work.
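Purely to illustrate the mechanism, a naive loader for <link rel="component"> could look something like this. This is a sketch under my own assumptions (including how the tag name is derived), not Tomasz's actual code:

```javascript
// Derive the tag name from the component file name,
// e.g. "/blog-article.wc" -> "blog-article" (an assumed convention).
function tagNameFromHref(href) {
  return href.split("/").pop().replace(/\.wc$/, "");
}

// Fetch each <link rel="component"> file and register a custom element
// whose shadow DOM is the file's contents (HTML, CSS and all).
async function loadComponents() {
  for (const link of document.querySelectorAll('link[rel="component"]')) {
    const html = await (await fetch(link.href)).text();
    customElements.define(tagNameFromHref(link.href), class extends HTMLElement {
      connectedCallback() {
        // The <slot> inside the fetched markup receives the page's content.
        this.attachShadow({ mode: "open" }).innerHTML = html;
      }
    });
  }
}
```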

Example of a blog post HTML page

The traditional header of any HTML page in 2023:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">

The blog post title followed by the blog title.

  <title>Using the fediverse/Mastodon for comments on blogs - Techno Barje</title>

Then, Tomasz's JS loader, which implements the support for <link rel=component>.

  <script src="/loader.js"></script>

The declaration of the <blog-article> Web Component

  <link rel="component" href="/blog-article.wc">

The single stylesheet for the whole blog, followed by the end of the </head> section.

  <link href="/document.css" rel="stylesheet" type="text/css">

Now this is where it becomes interesting.
The <blog-article> component implements the overall blog design/template, so that the HTML page can focus only on the content specific to that particular HTML file:

  • The blog post title and link to it,
  • Its publish date,
  • The actual content of the blog post.

  <blog-article>
    <div class="entry-title">
      <h1><a href="/post/2023/10/05/fediverse-for-comments-on-blogs/">Using the fediverse/Mastodon for comments on blogs</a></h1>
    </div>

    <time datetime="2023-10-05T10:00:00.000Z" pubdate>Oct 05, 2023</time>

    ... The content of a blog post ...
  </blog-article>

And that's it. We close the </html> right after.


My hope is that by simplifying HTML files down to their bare text content, we can revive direct editing of HTML files!

In 2023, everyone is still using either:

  • Wordpress/Medium/ to publish content when you don't want to care about the hosting side of things,
  • Jekyll/Hugo/ or more and more custom build scripts for the tech-savvy who are at ease running command lines and managing the (self) hosting.

Except for a few web survivalists, I've not seen anyone edit HTML pages to publish text. HTML is now some kind of assembly language, only generated or at best assembled by programs.

I'll keep blogging about this topic, as this Declarative Web Component trick is only one small thing. We can do much more to get back to the roots of the editable web.

Tantek Çelik: More Thoughtful Reading & Writing on the Web

Ben Werdmuller recently published an inspiring and thought-provoking blog post: “Subscribing to the blogs of people I follow on Mastodon”. Beyond the insights and excellent developer how-to in his post, I believe it points to something larger: a fundamental thoughtfulness difference between writing rapid short-form posts (whether tweets or toots) and medium or longer form writing (on blogs or journals), and the impact of that difference on readers: that the act of reading more thoughtful writing nudges & reinforces a reader into a more thoughtful state of mind.

If you have not read Derek Powazek’s watershed blog post “The Argument Machine”, I highly recommend you do so. In the nearly ten years since his post, Derek’s hypothesis of Twitter’s user interface design being the ultimate machine to create & amplify disputes has been repeatedly demonstrated.

Derek’s post predated Mastodon’s release by nearly three years. Ironically, by replicating much of Twitter’s user experience, Mastodon has in many ways also replicated its Argument Machine effects, except distributed across more servers.

I’ve witnessed numerous otherwise rational, well-intentioned individuals write reactive posts on Mastodon, exactly what the Twitter-like interface encourages. Quick emotional responses rather than slower, more thoughtful posts and replies.

I’ve seen the artificial urgency of tweets & toots bleed over into emotional essays on public mailing lists. New participants join a list and immediately make entitled demands. Fearful bordering on paranoid assumptions are used to state assertions of “facts” without citations. Arguments are made that appeal to emotion (argumentum ad passiones) rather than reasoning from principles and shared values.

Implicit in Ben’s post, “Subscribing to the blogs of people” (emphasis mine), is a preference for reading longer form writing, published on a site a human owns & identifies with (a la #indieweb), neither silo nor someone else’s garage.

The combination of taking more time (as longer form writing encourages) and publishing on a domain associated with your name, your identity, enables & incentivizes more thoughtful writing. More thoughtful writing elevates the reader to a more thoughtful state of mind.

There is also a self-care aspect to this kind of deliberate shift. Ben wrote that he found himself “craving more nuance and depth” among “quick, in-the-now status updates”. I believe this points to a scarcity of thoughtfulness in such short form writings. Spending more time reading thoughtful posts not only alleviates such scarcity, it can also displace the artificial sense of urgency to respond when scrolling through soundbite status updates.

When I returned from #W3CTPAC, I made a list of all the thoughts, meetings, and sessions that I wanted to write up and publish as blog posts to capture my experiences, perspectives, and insights beyond any official minutes.

Yet due to distractions such as catching up on short form posts, it took me over a week to write up even a summary of my TPAC week, never mind the queue of per-topic notes I wanted to write up. To even publish that I had to stop and cut off reading short form posts, as well as ignore (mostly postpone) numerous notifications.

There’s a larger connection here between thoughtful reading, and finding, restoring, and rebuilding the ability to focus, a key to thoughtful writing. It requires not only reducing time spent on short form reading (and writing), but also reducing notifications, especially push notifications. That insight led me to wade into and garden the respective IndieWeb wiki pages for notifications, push notifications, and document a new page for notification fatigue. That broader topic of what to do about notifications is worth its own blog post (or a few), and a good place to end this post.

Thanks again Ben for your blog post. May we spend more time reading & writing such thoughtful posts.

The Rust Programming Language Blog: Announcing Rust 1.73.0

The Rust team is happy to announce a new version of Rust, 1.73.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.73.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.73.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.73.0 stable

Cleaner panic messages

The output produced by the default panic handler has been changed to put the panic message on its own line instead of wrapping it in quotes. This can make panic messages easier to read, as shown in this example:

fn main() {
    let file = "ferris.txt";
    panic!("oh no! {file:?} not found!");
}
Output before Rust 1.73:
thread 'main' panicked at 'oh no! "ferris.txt" not found!', src/
Output starting in Rust 1.73:
thread 'main' panicked at src/
oh no! "ferris.txt" not found!

This is especially useful when the message is long, contains nested quotes, or spans multiple lines.

Additionally, the panic messages produced by assert_eq and assert_ne have been modified, moving the custom message (the third argument) and removing some unnecessary punctuation, as shown below:

fn main() {
    assert_eq!("🦀", "🐟", "ferris is not a fish");
}
Output before Rust 1.73:
thread 'main' panicked at 'assertion failed: `(left == right)`
 left: `"🦀"`,
right: `"🐟"`: ferris is not a fish', src/
Output starting in Rust 1.73:
thread 'main' panicked at src/
assertion `left == right` failed: ferris is not a fish
 left: "🦀"
right: "🐟"

Thread local initialization

As proposed in RFC 3184, LocalKey<Cell<T>> and LocalKey<RefCell<T>> can now be directly manipulated with get(), set(), take(), and replace() methods, rather than jumping through a with(|inner| ...) closure as needed for general LocalKey work. LocalKey<T> is the type of thread_local! statics.

The new methods make common code more concise and avoid running the extra initialization code for the default value specified in thread_local! for new threads.

use std::cell::Cell;

thread_local! {
    static THINGS: Cell<Vec<i32>> = Cell::new(Vec::new());
}

fn f() {
    // before:
    THINGS.with(|i| i.set(vec![1, 2, 3]));
    // now:
    THINGS.set(vec![1, 2, 3]);

    // ...

    // before:
    let v = THINGS.with(|i| i.take());
    // now:
    let v: Vec<i32> = THINGS.take();
}

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.73.0

Many people came together to create Rust 1.73.0. We couldn't have done it without all of you. Thanks!