The Mozilla Blog: The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online

woman sitting in a library holding a large white chess knight piece.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Jacque Aye, the author behind “Diary of a Sad Black Woman.” She talks about blogging culture, writing fiction for “perpetually sighing adults” and Lily Allen’s new album.

What is an internet deep dive that you can’t wait to jump back into?

Right now, I’m deep diving into Lily Allen’s newest album! Not for the gossip, although there’s plenty of that to dive into, but for the psychology behind it all. I appreciate creatives who share so vulnerably but in nuanced and honest ways. Sharing experiences is what makes us feel human, I think. The way she outlined falling in love, losing herself, struggling with insecurities, and feeling numb was so relatable to me. Now, would I share as many details? Probably not. But I do feel her.

What was the first online community you engaged with?

Blogger. I was definitely a Blogger baby, and I used to share my thoughts and outfits there, the same way I currently share on Substack. I sometimes miss those times and my little oversharing community. Most people didn’t really have personal brands then, so everything felt more authentic, anonymous and free.

What is the one tab you always regret closing?

Substack! I always find the coolest articles, save the tab, then completely forget I meant to read it, ahhhh.

What can you not stop talking about on the internet right now?

I post about my books online to an obsessive and almost alarming degree, ha. I’ve been going on and on about my weird, whimsical, and woeful novels, and people seem to resonate with that. I describe my work as Lemony Snicket meets a Boots Riley movie, but for perpetually sighing adults. I also never, ever shut up about my feelings. You can even read my diary online. For free. On Substack.

If you could create your own corner of the internet, what would it look like?

I feel super lucky to have my own little corner of the internet! In my corner, we love wearing cute outfits, listening to sad girl music, watching Tim Burton movies, and reading about flawed women going through absurd trials.

What articles and/or videos are you waiting to read/watch right now?

I can’t wait to settle in and watch Knights of Guinevere! It looks so, so good, and I adore the creator.

What is your favorite corner of the internet?

This will seem so random, but right now, besides Substack, I’m really loving Threads. People are so vulnerable on there, and so willing to share personal stories and ask for help and advice. I love any space where I can express the full range of my feelings… and also share my books and outfits, ha.

How do you imagine the next version of the internet supporting creators who lead with emotion and care?

I really hope the next version of the internet reverts back to the days of Blogger and Tumblr. Where people could design their spaces how they see fit, integrate music and spew their hearts out without all the judgment.


Jacque Aye is an author and writes “Diary of a Sad Black Woman” on Substack. As a woman who suffers from depression and social anxiety, she’s made it her mission to candidly share her experiences with the hopes of helping others dealing with the same. This extends into her fiction work, where she pens tales about woeful women trying their best, with a surrealist, magical touch. Inspired by authors like Haruki Murakami, Sayaka Murata, and Lemony Snicket, Jacque’s stories are dark, magical, and humorous with a hint… well, a bunch… of absurdity.

The post The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online appeared first on The Mozilla Blog.

The Mozilla Blog: Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it

Illustration of Firefox browser showing menu options for Current, AI, and Private windows with glowing effects.

We recently shared how we are approaching AI in Firefox — with user choice and openness as our guiding principles. That’s because we believe AI should be built like the internet — open, accessible, and driven by choice — so that users and the developers helping to build it can use it as they wish, help shape it and truly benefit from it.

In Firefox, you’ll never be locked into one ecosystem or have AI forced into your browsing experience. You decide when, how or whether to use it at all. You’ve already seen this approach in action through some of our latest features like the AI chatbot in the sidebar for desktop or Shake to Summarize on iOS. 

Now, we’re excited to invite you to help shape the work on our next innovation: an AI Window. It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. It’s completely opt-in and you have full control — if you try it and find it’s not for you, you can choose to switch it off.

As always, we’re building in the open — and we want to build this with you. Starting today, you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback. 


We’re building a better browser, not an agenda

We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice — either use AI all the time or don’t use it at all.

We’re focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it’s useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.

Regardless of your choice, with Firefox, you’re in control. 

You can continue using Firefox as you always have for the most customizable experience, or switch from classic to Private Window for the most private browsing experience. And now, with AI Window, you have the option to opt in to our most intelligent and personalized experience yet — providing you with new ways to interact with the web.

Why is investing in AI important for Firefox?

With AI becoming a more widely adopted interface to the web, the principles of transparency, accountability, and respect for user agency are critical to keeping it free, open, and accessible to all. As an independent browser, we are well positioned to uphold these principles.

While others are building AI experiences that keep you locked in a conversational loop, we see a different path — one where AI serves as a trusted companion, enhancing your browsing experience and guiding you outward to the broader web.

We believe standing still while technology moves forward doesn’t benefit the web or humanity. That’s why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less.

Help us shape the future of the web 

Our success has always been driven by our community of users and developers, and we’ll continue to rely on you as we explore how AI can serve the web — without ever losing focus on our commitment to build what matters most to our users: a Firefox that remains fast, secure and private. 

Join us by contributing to open-source projects and sharing your ideas on Mozilla Connect.

The post Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it appeared first on The Mozilla Blog.

Mozilla Privacy Blog: Behind the Manifesto: The Survivors of the Open Web

Welcome to the blog series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents Mozilla’s commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology. 

 

The internet wasn’t always a set of corporate apps and walled gardens. In its early days, it was a place of experimentation — a digital commons where anyone could publish, connect, and build without asking permission. That openness depended on invisible layers of technology that allowed the web to function as a true public space. Layers such as browser engines, open standards, and shared protocols are the scaffolding that made the internet free, creative, and interoperable.

In 2013, there were five major browser engines. Now, only three remain: Apple’s WebKit, Google’s Blink, and Mozilla’s Gecko (which powers Firefox). In a world of giants, Gecko fights not for dominance, but for an internet that is open and accessible to all.

In an era of consolidation, a thriving and competitive browser engine ecosystem is critical. But sadly, browser engines are subject to the same trends towards concentration. As we’ve lost competitors, we lose more than a piece of code. We lose choice, perspectives, and ideas about how the web works.

So, how do we drive competition in browser engines and more widely across the web? How do we promote policies that protect people and encourage meaningful choice? How do we contend with AI as both a disruptor and an impetus for innovation? Can competition interventions protect the open web? What’s the impact of landmark antitrust cases for consumers and the future technology landscape?

These aren’t new questions for Mozilla. They’re the same questions that have shaped our mission for more than 20 years, and the ones we continue to ask today. Our recent Mozilla Meetup in Washington D.C., a panel-style event and happy hour, brought these debates to the forefront.

On October 8th, we convened leading minds in tech policy to explore the future of competition and its role in saving the open web. Before a standing-room-only audience, the panelists discussed browser competition, leading antitrust legislation, landmark cases currently under review, and AI’s impact. Their insights underscored a critical point: the same questions about access, agency and choice that defined parts of the early internet are just as pressing in today’s digital ecosystem, shaping our continued pursuit of an open and diverse web. Below are a few takeaways.

On today’s competition landscape:

Luke Hogg, Director, Technology Policy, Foundation for American Innovation:

“Antitrust is back. One of the emerging lessons of the last year in antitrust cases and competition policy is that with these big questions being answered, the results do tend to be bipartisan. Antitrust is a cross-partisan issue.”

On the United States v. Google LLC search case: 

Kush Amlani, Director, Global Competition & Regulation, Mozilla:

“One of our key concerns was ensuring that search competition didn’t come at the expense of browser competition. And the payments to independent browsers were not banned, and that was obviously granted by the judge…What’s next is really how the remedies are implemented, and how effective they are. And the devil is going to be in the detail, in terms of how useful is this data? How much can third parties benefit from syndicating search results?” 

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The search case is set up as being pro-divestiture or anti-divestiture, but it’s really about what is going to work. Divestiture aligns with what was requested. If you leave Chrome under Google, you have to build in surveillance and monitoring in the market to make sure their behavior aligns. If you divest, it becomes independent and can operate on its own without the need for monitoring. In the end, do you think that would be an effective remedy to open the market to reentry? Or do you think there is another option?”

On the impact of AI: 

Amba Kak, Co-Executive Director, AI Now Institute:

“AI has upended the market and changed technology, but it’s also true Big Tech, in many ways, has been training for this very disruption for the last ten years. 

In the early 2010s, key resources — data, compute, talent — were already concentrated within a few players due to regulatory inaction. It’s important to understand that this trajectory of AI aligning with the incentives of Big Tech isn’t an accident, it’s by design.”

On the timing of this fight for the open web:

Alissa Cooper, Executive Director, Knight-Georgetown Institute:

“The difference now [as opposed to previous fights for the web] is that we have a lot of experience. We know what the open world and open web look like. In some ways, this is an advantage. The difference now is the unbelievable amount of corporate power involved. There needs to be a field where new businesses can enter. Without it, we are fighting the last war.”

 

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla’s policy priorities.

 

The post Behind the Manifesto: The Survivors of the Open Web appeared first on Open Policy & Advocacy.

The Mozilla Blog: Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress

Today, Mozilla is thrilled to join the Digital Public Goods Alliance (DPGA) as its newest member. The DPGA is a UN-backed initiative that seeks to advance open technologies and ensure that technology is put to use in the public interest and serves everyone, everywhere — like Mozilla’s Common Voice, which has been recognized as a Digital Public Good (DPG). This announcement comes on the heels of a big year of digital policy-making globally, where Mozilla has been at the forefront in advocating for open source AI across Europe, North America and the UK. 

The DPGA is a multi-stakeholder initiative with a mission to accelerate the attainment of the Sustainable Development Goals (SDGs) “by facilitating the discovery, development, use of and investment in digital public goods.” Digital public goods are open-source technology, open data, open and transparent AI models, open standards and open content that adhere to privacy, the do-no-harm principle, and other best practices. 

This is deeply aligned with Mozilla’s mission. It creates a natural opportunity for collaboration and shared advocacy in the open ecosystem, with allies and like-minded builders from across the globe. As part of the DPGA’s Annual Roadmap for 2025, Mozilla will focus on three work streams: 

  1. Promoting DPGs in the Open Source Ecosystem: Mozilla has long championed open-source, public-interest technology as an alternative to profit-driven development. Through global advocacy, policy engagement, and research, we highlight the societal and economic value of open source, especially in AI. Through our work in the DPGA, we’ll continue pushing for better enabling conditions and funding opportunities for open-source, public-interest technology. 
  2. DPGs and Digital Commons: Mozilla develops and maintains a range of open source projects through our various entities. These include Common Voice, a digital public good with over 33,000 hours of multilingual voice data, and applications like the Firefox web browser and Thunderbird email client. Mozilla also supports open-source AI through our product work, including that of Mozilla.ai, and through our venture fund, Mozilla Ventures.
  3. Funding Open Source & Public Interest Technology: Grounded by our own open source roots, Mozilla will continue to fund open source technologies that help to untangle thorny sociotechnical issues. We’ve fueled a broad and impactful portfolio of technical projects. Beginning in the Fall of 2025, we will introduce our latest grantmaking program: an incubator that will help community-driven projects find “product-community fit” in order to attain long-term sustainability.

We hope to use our membership to share research, tooling, and perspectives with a like-minded audience and partner with the DPGA’s diverse community of builders and allies. 

“Open source AI and open data aren’t just about tech,” said Mark Surman, president of Mozilla. “They’re about access to technology and progress for people everywhere. As a double bottom line, mission-driven enterprise, Mozilla is proud to be part of the DPGA and excited to work toward our joint mission of advancing open-source, trustworthy technology that puts people first.” 

To learn more about DPGA, visit https://digitalpublicgoods.net

The post Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress  appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 625

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is automesh, a crate for high-performance automatic mesh generation in Rust.

Thanks to Michael R. Buche for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

409 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly quiet week, with the majority of changes coming from the standard library work towards removal of Copy specialization (#135634).

Triage done by @simulacrum. Revision range: 35ebdf9b..055d0d6a

3 Regressions, 1 Improvement, 7 Mixed; 3 of them in rollups. 37 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-12 - 2025-12-10 🦀

Virtual
Africa
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Making your unsafe very tiny is sort of like putting caution markings on the lethally strong robot arm with no proximity sensors, rather than on the door into the protective cage.

Stephan Sokolow on lobste.rs

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, ericseppanen, extrawurst, U007D, mariannegoldin, bdillo, opeolluwa, bnchi, KannanPalani57, tzilist

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Developer Experience: Firefox WebDriver Newsletter 145

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 145 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.

In Firefox 145, a new contributor landed two patches in our codebase. Thanks to Khalid AlHaddad for the following fixes:

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Niko Matsakis: Just call clone (or alias)

Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I’m calling “just call clone (or alias)”. This proposal specializes the clone and alias methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in move closures where needed.

The goal of this proposal is to simplify the user’s mental model: whenever you see an error like “use of moved value”, the fix is always the same: just call clone (or alias, if applicable). This model is aiming for the balance of “low-level enough for a Kernel, usable enough for a GUI” that I described earlier. It’s also making a statement, which is that the key property we want to preserve is that you can always find where new aliases might be created – but that it’s ok if the fine-grained details around exactly when the alias is created are a bit subtle.

The proposal in a nutshell

Part 1: Closure desugaring that is aware of clones and aliases

Consider this move future:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(async move {
        //                   ---- move future
        manage_io(cx.io_system.alias(), cx.request_name.clone());
        //        --------------------  -----------------------
    });
    ...
}

Because this is a move future, this takes ownership of cx.io_system and cx.request_name. Because cx is a borrowed reference, this will be an error unless those values are Copy (which they presumably are not). Under this proposal, capturing aliases or clones in a move closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            //     --------------------  -----------------------
            //     capture alias/clone respectively

            manage_io(cx.io_system.alias(), cx.request_name.clone());
        }
    );
    ...
}

Part 2: Last-use transformation

Now, this result is inefficient – there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to alias or clone that are not needed to satisfy the borrow checker and remove them. This code would therefore become:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            manage_io(cx.io_system, cx.request_name);
            //        ------------  ---------------
            //        converted to moves
        }
    );
    ...
}

The last-use transformation would apply beyond closures. Given an example like this one, which clones id even though id is never used later:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id.clone());
    //                                       ----------
    //                                       unnecessary
    send_request(request)
}

the user would get a warning like so [1]:

warning: unnecessary `clone` call will be converted to a move
 --> src/main.rs:7:40
  |
8 |     let request = Request::ProcessIdentifier(id.clone());
  |                                              ^^^^^^^^^^ unnecessary call to `clone`
  |
  = help: the compiler automatically removes calls to `clone` and `alias` when not
    required to satisfy the borrow checker
help: change `id.clone()` to `id` for greater clarity
  |
8 -     let request = Request::ProcessIdentifier(id.clone());
8 +     let request = Request::ProcessIdentifier(id);
  |

and the code would be transformed so that it simply does a move:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id);
    //                                       --
    //                                   transformed
    send_request(request)
}

Mental model: just call “clone” (or “alias”)

The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call clone (or alias). It doesn’t matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).
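
As a concrete sketch of the kind of error this mental model targets (a hedged variant of the Request::ProcessIdentifier example from above, with the error text abbreviated):

fn send_process_identifier_request(id: String) -> String {
    let request = Request::ProcessIdentifier(id);
    //                                       -- `id` moved here
    send_request(request);
    id
    // error[E0382]: use of moved value: `id`
    // the fix is always the same: pass `id.clone()` instead
}

The same fix applies when the second use happens inside a move closure or future, which is what the desugaring described in Part 1 makes possible.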

I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like & in more-or-less at random as they try to develop a firm mental model – this is where the “keep calm and call clone” joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.

Experienced users can trust the compiler to get it right

But the real question is how this works for experienced users. I’ve been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:

“What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.”

The first half is clearly satisfied. If you don’t call clone or alias, this proposal has no impact on your life.

The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form. This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I’d like to hear if there are things I’m overlooking.

The last-use transformation codifies a widespread intuition, that clone is never necessary

I think most users would expect that changing message.clone() to just message is fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that make clone significant in unusual ways would be more annoying to use in the new Rust edition, and I expect they would ultimately wind up getting changed so that “significant clones” have another name. I think this is a good thing.

Frequently asked questions

I think I’ve covered the key points. Let me dive into some of the details here with a FAQ.

Can you summarize all of these posts you’ve been writing? It’s a lot to digest!

I get it, I’ve been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:

  • I believe our goal should be to focus first on a design that is “low-level enough for a Kernel, usable enough for a GUI”.
    • The key part here is the word enough. We need to make sure that low-level details are exposed, but only those that truly matter. And we need to make sure that it’s ergonomic to use, but it doesn’t have to be as nice as TypeScript (though that would be great).
  • Rust’s current approach to Clone fails both groups of users;
    • calls to clone are not explicit enough for kernels and low-level software: when you see something.clone(), you don’t know that is creating a new alias or an entirely distinct value, and you don’t have any clue what it will cost at runtime. There’s a reason much of the community recommends writing Arc::clone(&something) instead.
    • calls to clone, particularly in closures, are a major ergonomic pain point, this has been a clear consensus since we first started talking about this issue.

I then proposed a set of three changes to address these issues, authored in individual blog posts:

  • First, we introduce the Alias trait (originally called Handle). The Alias trait introduces a new method alias that is equivalent to clone but indicates that this will be creating a second alias of the same underlying value.
  • Second, we introduce explicit capture clauses, which lighten the syntactic load of capturing a clone or alias, make it possible to declare up-front the full set of values captured by a closure/future, and will support other kinds of handy transformations (e.g., capturing the result of as_ref or to_string).
  • Finally, we introduce the just call clone proposal described in this post. This modifies closure desugaring to recognize clones/aliases and also applies the last-use transformation to replace calls to clone/alias with moves where possible.

What would it feel like if we did all those things?

Let’s look at the impact of each set of changes by walking through the “Cloudflare example”, which originated in this excellent blog post by the Dioxus folks:

let some_value = Arc::new(something);

// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
    do_something_with(_some_value);
});

// task 2:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
  	do_something_else_with(_some_a, _some_b, _some_c)
});

As the original blog post put it:

Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say “lol get gud,” but the engineers on this team were the sharpest people I’ve ever worked with. Cloudflare is all-in on Rust. They’re willing to throw money at codebases like this. Nuclear fusion won’t be solved with Rust if this is how sharing state works.

Applying the Alias trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to clone are alias calls, and you don’t have the awkward _some_value and _some_a variables. However, the code is still pretty verbose:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value);
});

// task 2:  listen for dns connections
tokio::task::spawn(async move(
    self.some_a.alias(),
    self.some_b.alias(),
    self.some_c.alias(),
) {
  	do_something_else_with(self.some_a, self.some_b, self.some_c)
});

Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to alias reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to self.some_a.alias() will actually occur when the future is created and not when it is awaited:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

// task 2:  listen for dns connections
tokio::task::spawn(async move {
  	do_something_else_with(
        self.some_a.alias(),
        self.some_b.alias(),
        self.some_c.alias(),
    )
});

I’m worried that the execution order of calls to alias will be too subtle. How is this “explicit enough for low-level code”?

There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:

tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

this gets desugared to a call to alias when the future is created (not when it is awaited). Using the explicit form:

tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value)
});

I can definitely imagine people getting confused at first – “but that call to alias looks like it’s inside the future (or closure), how come it’s occurring earlier?”

Yet, the code really seems to preserve what is most important: when I search the codebase for calls to alias, I will find that an alias is created for this task. And for the vast majority of real-world examples, the distinction of whether an alias is created when the task is spawned versus when it executes doesn’t matter. Look at this code: the important thing is that do_something_with is called with an alias of some_value, so some_value will stay alive as long as do_something_with is executing. It doesn’t really matter how the “plumbing” worked.

What about futures that conditionally alias a value?

Yeah, good point, those kind of examples have more room for confusion. Like look at this:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value.alias());
    }
});

In this example, there is code that uses some_value with an alias, but only under if false. So what happens? I would assume that indeed the future will capture an alias of some_value, in just the same way that this future will move some_value, even though the relevant code is dead:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value);
    }
});

Can you give more details about the closure desugaring you imagine?

Yep! I am thinking of something like this:

  • If there is an explicit capture clause, use that.
  • Else:
    • For non-move closures/futures, no changes from today:
      • Categorize usage of each place and pick the “weakest option” that is available:
        • by ref
        • by mut ref
        • moves
    • For move closures/futures, we would change the rules:
      • Categorize usage of each place P and decide whether to capture that place…
        • by clone/alias, if there is at least one call to P.clone() or P.alias() and all other usage of P requires only a shared ref (reads)
        • by move, if there are no calls to P.clone() or P.alias() or if there are usages of P that require ownership or a mutable reference
      • Capture by clone/alias when a place a.b.c is only used via shared references, and at least one of those is a clone or alias.
        • For the purposes of this, accessing a “prefix place” a or a “suffix place” a.b.c.d is also considered an access to a.b.c.

An example that shows one of the edge cases:

let closure = move || {
    if consume {
        x.foo(); // assuming `foo` takes `x` by value, this use forces capture by move
    }
};

Why not do something similar for non-move closures?

In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:

let f = async {
    //  ----- NOT async move
    self.some_a.alias()
};

do_something_else(self.some_a.alias());
//                ----------- later use succeeds

f.await;

This future does not need to take ownership of self.some_a to create an alias, so it will just capture a reference to self.some_a. That means that later uses of self.some_a can still compile, no problem. If this had been a move closure, however, that code above would currently not compile.

There is an edge case where you might get an error, which is when you are moving:

let f = async {
    self.some_a.alias()
};

do_something_else(self.some_a);
//                ----------- move!

f.await;

In that case, you can make this an async move closure and/or use an explicit capture clause.
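
A sketch of what that might look like, reusing the strawman async move(…) capture notation from earlier (illustrative only, not final syntax):

let f = async move(self.some_a.alias()) {
    //             ------------------- capture an alias when the future is created
    self.some_a
};

do_something_else(self.some_a);
//                ----------- ok: the original is untouched and can still be moved

f.await;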

Can you give more details about the last-use transformation you imagine?

Yep! We would during codegen identify candidate calls to Clone::clone or Alias::alias. After borrow check has executed, we would examine each of the callsites and check the borrow check information to decide:

  • Will this place be accessed later?
  • Will some reference potentially referencing this place be accessed later?

If the answer to both questions is no, then we will replace the call with a move of the original place.

Here are some examples:

fn borrow(message: Message) -> String {
    let method = message.method.to_string();

    send_message(message.clone());
    //           ---------------
    //           would be transformed to
    //           just `message`

    method
}
fn borrow(message: Message) -> String {
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `message.method` is
    //           referenced later

    message.method.to_string()
}
fn borrow(message: Message) -> String {
    let r = &message;

    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `r` may reference
    //           `message` and is used later.

    r.method.to_string()
}

Why are you calling it the last-use transformation and not optimization?

In the past, I’ve talked about the last-use transformation as an optimization – but I’m changing terminology here. This is because, typically, an optimization is supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. The transformation would be a mechanical transformation performed by the compiler in a deterministic fashion.

Would the transformation “see through” references?

I think yes, but in a limited way. In other words I would expect

Clone::clone(&foo)

and

let p = &foo;
Clone::clone(p)

to be transformed in the same way (replaced with foo), and the same would apply to more levels of intermediate usage. This would kind of “fall out” from the MIR-based optimization technique I imagine. It doesn’t have to be this way, we could be more particular about the syntax that people wrote, but I think that would be surprising.

On the other hand, you could still fool it e.g. like so

fn identity<T>(x: &T) -> &T { x }

identity(&foo).clone()

Would the transformation apply across function boundaries?

The way I imagine it, no. The transformation would be local to a function body. This means that one could write a force_clone method like the following, which “hides” the clone in a way that it will never be transformed away (this is an important capability for edition transformations!):

fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}

Won’t the last-use transformation change behavior by making destructors run earlier?

Potentially, yes! Consider this example, written using explicit capture clause notation and written assuming we add an Alias trait:

async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) {
            //     ---------- alias here
            process(tx).await
        }
    });

    do_something_unrelated().await;
}

The precise timing when Sender values are dropped can be important – when all senders have dropped, the Receiver will start returning None when you call recv. Before that, it will block waiting for more messages, since those tx handles could still be used.

So, in process_and_stuff, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:

  • Without the transformation, there are two aliases: the original tx and the one being held by the future. So the receiver will only start returning None when do_something_unrelated has finished and the task has completed.
  • With the transformation, the call to tx.alias() is removed, and so there is only one alias – tx, which is moved into the future, and dropped once the spawned task completes. This could well be earlier than in the previous code, which had to wait until both process_and_stuff and the new task completed.

Most of the time, running destructors earlier is a good thing. That means lower peak memory usage, faster responsiveness. But in extreme cases it could lead to bugs – a typical example is a Mutex<()> where the guard is being used to protect some external resource.

How can we change when code runs? Doesn’t that break stability?

This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.

The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it’s just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will apply automatic transformations to those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where it will always occur) or the newer rules (where it may be transformed into a move).

Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?

In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It’s a bit of a pain, but I think we can live with it – and I’m unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.

Isn’t it weird to do this after borrow check?

This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:

let p: *const T = &*some_place;

let q: T = some_place.clone();
//         ---------- assuming `some_place` is
//         not used later, becomes a move

unsafe {
    do_something(p);
    //           -
    // This now refers to a stack slot
    // whose value is uninitialized.
}

Note though that, in this case, there would be a lint identifying that the call to some_place.clone() will be transformed to just some_place. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.

Shouldn’t we use a keyword for this?

When I originally had this idea, I called it “use-use-everywhere” and, instead of writing x.clone() or x.alias(), I imagined writing x.use. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I’ve changed my mind for a few reasons.

First, Santiago Pastorino gave strong pushback that x.use was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means – in contrast, if they see method calls, they will likely not even notice something strange is going on.

The second reason though was TC, who argued, in the lang-team meeting, that all the arguments for why it should be ergonomic to alias a ref-counted value in a closure applied equally well to clone, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses the concern I’ve heard with the Alias trait, which is that there are things you want to ergonomically clone but which don’t correspond to “aliases”. True.

In general I think that clone (and alias) are fundamental enough to how Rust is used that it’s ok to special case them. Perhaps we’ll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.

What about “deferred ref-counting”?

One point that I’ve raised from time-to-time is that I would like a solution that gives the compiler more room to optimize ref-counting to avoid incrementing ref-counts in cases where it is obvious that those ref-counts are not needed. An example might be a function like this:

fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

This function requires ownership of an alias to a ref-counted value but it doesn’t actually do anything but read from it. A caller like this one…

use_data(source.alias())

…doesn’t really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a &:

fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

so that the caller can do use_data(&source) – this then allows the callee to write rc.alias() in the case that it wants to take ownership.

I’ve basically decided to punt on addressing this problem. I think folks that are very performance sensitive can use &Arc and the rest of us can sometimes have an extra ref-count increment, but either way, the semantics for users are clear enough and (frankly) good enough.


  1. Surprisingly to me, clippy::pedantic doesn’t have a dedicated lint for unnecessary clones. This particular example does get a lint, but it’s a lint about taking an argument by value and then not consuming it. If you rewrite the example to create id locally, clippy does not complain.

The Mozilla Blog: Firefox expands fingerprint protections: advancing towards a more private web

With Firefox 145, we’re rolling out major privacy upgrades that take on browser fingerprinting — a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you’re in private browsing. These protections build on Mozilla’s long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.

Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup — ranging from your time zone to your operating system settings — that together create a “fingerprint” identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can continuously identify you invisibly, allowing bad actors to track you without your knowledge or consent. Online fingerprinting is able to track you for months, even when you use any browser’s private browsing mode.

Protecting people’s privacy has always been core to Firefox. Since 2020, Firefox’s built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.

Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren’t in the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is reduced by half.

How we built stronger defenses

Drawing from a global analysis of how real people’s browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.

How Firefox protects you

These fingerprinting protections work on multiple layers, building on Firefox’s already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.

Beyond blocking trackers, Firefox also limits the information it makes available to websites — a privacy-by-design approach — that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer.  But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.  

Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.

Recent Firefox releases have tackled the next-largest leaks of user information used by online fingerprinters. This ranges from strengthening the font protections to preventing websites from getting to know your hardware details like the number of cores your processor has, the number of simultaneous fingers your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.

Our research shows these improvements cut the percentage of users seen as unique by almost half.

Firefox’s new protections are a balance of disrupting fingerprinters while maintaining web usability. More aggressive fingerprinting blocking might sound better, but is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox’s approach is to target the most leaky fingerprinting vectors (the tricks and scripts used by trackers) while preserving functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without downgrading your browsing experience. More details are available about both the specific behaviors and how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don’t get in your way.

What’s next for your privacy

If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox’s fingerprinting protections marks an important milestone in our mission to deliver: smart privacy protections that work automatically — no further extensions or configurations needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.


The post Firefox expands fingerprint protections: advancing towards a more private web appeared first on The Mozilla Blog.

The Rust Programming Language Blog: Announcing Rust 1.91.1

The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.91.1

Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.

Linker and runtime errors on Wasm

Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute allows you to customize the Wasm module name an extern block refers to:

#[link(wasm_import_module = "hello")]
extern "C" {
    pub fn world();
}

Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates.
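
For illustration, the problematic shape looked roughly like this (the crate, module, and function names here are hypothetical):

// in crate_a
#[link(wasm_import_module = "module_a")]
extern "C" {
    pub fn get_value() -> i32;
}

// in crate_b
#[link(wasm_import_module = "module_b")]
extern "C" {
    pub fn get_value() -> i32; // same symbol name, different Wasm module
}

Under the regression, the two imports could be conflated, producing the linker errors or runtime misbehavior described above.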

Rust 1.91.1 fixes the regression. More details are available in issue #148347.

Cargo target directory locking broken on illumos

Cargo relies on locking the target/ directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the Unsupported error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.

Cargo 1.91.0 switched from custom code interacting with the OS APIs to the File::lock standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned Unsupported on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.
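
The fallback described above amounts to something like the following sketch (simplified, not Cargo’s actual code); due to the oversight, the Unsupported branch was always taken on illumos, so locking was silently skipped:

use std::fs::File;
use std::io::ErrorKind;

fn lock_or_skip(lock_file: &File) -> std::io::Result<bool> {
    match lock_file.lock() {
        // Lock acquired; concurrent Cargo invocations will wait on it.
        Ok(()) => Ok(true),
        // The filesystem reports locking as unsupported: proceed without the lock.
        Err(e) if e.kind() == ErrorKind::Unsupported => Ok(false),
        // Any other error is propagated.
        Err(e) => Err(e),
    }
}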

Rust 1.91.1 fixes the oversight in the standard library by enabling the File::lock family of functions on illumos, indirectly fixing the Cargo regression.

Contributors to 1.91.1

Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!

The Mozilla Blog: Introducing early access for Firefox Support for Organizations

Multiple Firefox logos forming a curved trail on a dark background.

Increasingly, businesses, schools, and government institutions deploy Firefox at scale for security, resilience, and data sovereignty. With Firefox and the Extended Support Release (ESR), organizations have fine-grained administrative and orchestration control over the browser’s behavior using policies. Today, we’re opening early access to Firefox Support for Organizations, a new program that begins operation in January 2026.

What Firefox Support for Organizations offers

Support for Organizations is a dedicated offering for teams who need private issue triage and escalation, defined response times, custom development options, and close collaboration with Mozilla’s engineering and product teams.

  • Private support channel: Access a dedicated support system where you can open private help tickets directly with expert support engineers. Issues are triaged by severity level, with defined response times and clear escalation paths to ensure timely resolution.
  • Discounts on custom development: Paid support customers get discounts on custom development work for integration projects, compatibility testing, or environment-specific needs. With custom development as a paid add-on to support plans, Firefox can adapt with your infrastructure and third-party updates.
  • Strategic collaboration: Gain early insight into upcoming development and help shape the Firefox Enterprise roadmap through direct collaboration with Mozilla’s team.

Support for Organizations adds a new layer of help for teams and businesses that need confidential, reliable, and customized levels of support. All Firefox users will continue to have full access to existing public resources, including documentation, the knowledge base, and community forums, and we’ll keep improving those for everyone in the future. Support plans will help us better serve users who rely on Firefox for business-critical and sensitive operations.

Get in touch for early access

If these levels of support sound interesting for your organization, get in touch using our inquiry form and we’ll get back to you with more information.


Firefox Support for Organizations

Get early access

The post Introducing early access for Firefox Support for Organizations appeared first on The Mozilla Blog.

The Mozilla BlogUnder the hood: How Firefox suggests tab groups with local AI

Browser popup showing the “Create tab group” menu with color options and AI tab suggestions button.

Background

Mozilla launched Tab Grouping in early 2025, allowing tabs to be arranged and grouped with persistent labels. It was the most requested feature in the history of Mozilla Connect. While tab grouping provides a great way to manage tabs and reduce tab overload, it can be a challenge to locate which tabs to group when you have many open.

We sought to improve this workflow by providing an AI tab grouping feature that enables two key capabilities:

  • Suggesting a title for a tab group when it is created by the user.
  • Suggesting tabs from the current window to be added to a tab group.

Of course, we wanted this to work without sending any of your data to Mozilla, so we used our local Firefox AI runtime and built an efficient model that delivers the feature entirely on your own device. The feature is opt-in and downloads two small ML models the first time the user clicks to run it.

Group title suggestion

Understanding the problem

Suggesting titles for grouped tabs is a challenge because it is hard to understand user intent when tabs are first grouped. Based on our interviews when we started the project, we found that while tab groups sometimes have generic names like ‘Shopping’ or ‘Travel’, over half the time users chose specific terms such as the name of a video game, a friend, or a town. We also found group names to be extremely short – one or two words.

Diagram showing Firefox tab information processed by a generative AI model to label topics like Boston Travel

Generating a digest of the group

To address these challenges, we adopt a hybrid methodology that combines a modified TF-IDF–based textual analysis with keyword extraction. We identify terms that are statistically distinctive to the titles of pages within a tab group compared to those outside it. The three most prominent keywords, along with the full titles of three randomly selected pages, are then combined to produce a concise digest representing the group, which is used as input for the subsequent stage of processing using a language model.
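As a rough sketch of this stage (written in Rust for brevity; the shipped implementation lives in Firefox and its exact TF-IDF weighting differs), the idea is to score terms by how distinctive they are to the group’s titles compared to the rest of the open tabs, then combine the top keywords with a few full titles:

use std::collections::HashMap;

// Hypothetical helper illustrating the digest idea, not the shipped code.
fn group_digest(group_titles: &[&str], other_titles: &[&str]) -> String {
    // Term frequencies over a set of titles.
    let tf = |titles: &[&str]| -> HashMap<String, f64> {
        let mut counts = HashMap::new();
        let mut total = 0.0;
        for title in titles {
            for word in title.split_whitespace() {
                *counts.entry(word.to_lowercase()).or_insert(0.0) += 1.0;
                total += 1.0;
            }
        }
        counts.values_mut().for_each(|c| *c /= total.max(1.0));
        counts
    };

    let in_group = tf(group_titles);
    let outside = tf(other_titles);

    // Rank terms by in-group frequency relative to their frequency elsewhere.
    let mut scored: Vec<(String, f64)> = in_group
        .into_iter()
        .map(|(word, freq)| {
            let out = outside.get(&word).copied().unwrap_or(0.0);
            (word, freq / (out + 0.01))
        })
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

    // Three most distinctive keywords plus three sample titles
    // (the real feature samples the titles randomly).
    let keywords: Vec<String> = scored.into_iter().take(3).map(|(w, _)| w).collect();
    let samples: Vec<&str> = group_titles.iter().take(3).copied().collect();
    format!("{} | {}", keywords.join(", "), samples.join(" ; "))
}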

Generating the label

The digest string is used as input to a generative model that returns the final label. We used a T5-based encoder-decoder model (flan-t5-base) that was fine-tuned on over 10,000 example situations and labels.

One of the key challenges in developing the model was generating the training data samples to tune the model without any user data. To do this, we defined a set of user archetypes and used an LLM API (OpenAI GPT-4) to create sample pages for a user performing various tasks. This was augmented by real page titles from the publicly available common crawl dataset. We then used the LLM to suggest short titles for those use cases. The process was first done at a small scale of several hundred group names. These were manually corrected and curated, adjusting for brevity and consistency. As the process scaled up, the initial 300 group names were used as examples passed to the LLM so that the additional examples created would meet those standards.  

Shrinking things down

We need to get the model small enough to run on most computers. Once the initial model was trained, it was compressed into a smaller model using a process known as knowledge distillation. For distillation, we tuned a t5-efficient-tiny model on the token probability outputs of our teacher flan-t5-base model. Midway through the distillation process we also removed two encoder transformer layers and two decoder layers to further reduce the number of parameters.

Finally, the model parameters were quantized from floating point (4 bytes per parameter) to 8-bit integers. In the end, this reduction process shrank the model from 1 GB to 57 MB, with only a modest reduction in accuracy.

Suggesting tabs 

Understanding the problem

For tab suggestions, we identified a few ways people prefer to group their tabs. Some people prefer grouping by domain, to easily access all their work documents, for instance. Others might prefer grouping all their tabs together when they are planning a trip. Others still might prefer separating their “work” and “personal” tabs.

Our initial approach to suggesting tabs was based on semantic similarity: tabs that are topically similar to the group are suggested.

Browser pop-up suggesting related tabs for a Boston trip using AI-based grouping

Identifying topically similar tabs

We first convert tab titles to a feature vector locally using a MiniLM embedding model. Embedding models are trained so that similar content produces vectors that are close together in embedding space. Using a similarity measure such as cosine similarity, we can assign how similar one tab title or URL is to another.

The similarity score between an anchor tab chosen by the user and a candidate tab is a linear combination of the candidate’s similarity to the anchor tab’s group title (if present), to the anchor tab title, and to the anchor URL. From this score we compute a similarity probability, and tabs whose probability exceeds a threshold are suggested as part of the group.

P(t_i ∈ group | t_a) = σ( w_g · sim(t_i, g_a) + w_t · sim(t_i, t_a) + w_u · sim(u_i, u_a) )

where,
w_g, w_t, w_u are the weights,
t_i is the candidate tab title,
t_a is the anchor tab title,
g_a is the anchor group title,
u_i is the candidate URL,
u_a is the anchor URL,
sim(·, ·) is the similarity between embeddings (e.g. cosine similarity), and
σ is the sigmoid function
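A minimal sketch of this scoring, with illustrative weights and helper names that are not the actual Firefox code:

// Embeddings are assumed to be unit-normalized MiniLM vectors, so the
// dot product equals cosine similarity.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// Probability that a candidate tab belongs to the anchor tab's group.
/// The weights and bias are placeholders, not the shipped values.
fn group_probability(
    candidate_title: &[f32],
    candidate_url: &[f32],
    anchor_title: &[f32],
    anchor_url: &[f32],
    anchor_group_title: Option<&[f32]>,
    weights: &[f32; 4], // [w_group, w_title, w_url, bias]
) -> f32 {
    let group_sim = anchor_group_title
        .map(|g| cosine(candidate_title, g))
        .unwrap_or(0.0);
    let title_sim = cosine(candidate_title, anchor_title);
    let url_sim = cosine(candidate_url, anchor_url);
    sigmoid(weights[0] * group_sim + weights[1] * title_sim + weights[2] * url_sim + weights[3])
}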

Optimizing the weights

In order to find the weights, we framed the problem as a classification task, where we calculate the precision and recall based on the tabs that were correctly classified given an anchor tab. We used synthetic data generated by OpenAI based on the user archetypes above.

We initially used a clustering approach to establish a baseline and switched to a logistic regression when we realized that weighting the group-title, title, and URL features differently improved our metrics.

Bar chart comparing DBScan and Logistic Regression by precision, recall, and F1 performance metrics

Using logistic regression, there was an 18% improvement against the baseline.

Performance

While the median number of tabs for people using the feature is relatively small (~25), there are some “power” users whose tab count reaches the thousands. This would cause the tab grouping feature to take uncomfortably long. 

This was part of the reason why we switched from a clustering based approach to a linear model. 

Using our performance framework, we found that the p99 latency of running logistic regression, compared to a clustering-based method such as KMeans, improved by 33%.

Bar chart comparing KMeans and Logistic Regression using percentile metrics p50, p95, and p99

Future work here involves improving the F1 score. This could mean adding a time-related component as part of the inference (we are more likely to group tabs together that we’ve opened at the same time) or using an embedding model fine-tuned for our use case.

Thanks for reading

All of our work is open source. If you are a developer, feel free to peruse our model-training source code or view our topic model on Hugging Face.

Feel free to try the feature and let us know what you think!

Take control of your internet

Download Firefox

The post Under the hood: How Firefox suggests tab groups with local AI appeared first on The Mozilla Blog.

Wladimir PalantAn overview of the PPPP protocol for IoT cameras

My previous article on IoT “P2P” cameras couldn’t go into much detail on the PPPP protocol. However, there is already lots of security research on and around that protocol, and I have a feeling that there is way more to come. There are pieces of information on the protocol scattered throughout the web, yet each one approaches it from a very specific, narrow angle. This is my attempt at creating an overview so that other people don’t need to start from scratch.

While the protocol can in principle be used by any kind of device, so far I’ve only seen network-connected cameras. It isn’t really peer-to-peer as advertised but rather relies on central servers, yet the protocol allows the bulk of the data to be transferred via a direct connection between the client and the device. It’s hard to tell how many users there are, but there are lots of apps, and I’m sure that I haven’t found all of them.

There are other protocols with similar approaches being used for the same goal. One is used by ThroughTek’s Kalay Platform which has the interesting string “Charlie is the designer of P2P!!” in its codebase (32 bytes long, seems to be used as “encryption” key for some non-critical functionality). I recognize both the name and the “handwriting”; it looks like the PPPP protocol’s designer found a new home here. Yet PPPP still seems to be more popular than the competition, thanks to it being the protocol of choice for cheap low-end cameras.

Disclaimer: Most of the information below has been acquired by analyzing public information as well as reverse engineering applications and firmware, not by observing live systems. Consequently, there can be misinterpretations.

Update (2025-11-07): Added the App2Cam Plus app to the table, representing a number of apps which all seem to belong to the ABUS Smartvest Wireless Alarm System.

Update (2025-11-07): This article originally grouped Xiaomi Home together with Yi apps. This was wrong: Xiaomi uses a completely different protocol to communicate with their PPPP devices. A brief description of this protocol has been added.

The general design

The protocol’s goal is to serve as a drop-in replacement for TCP. Rather than establish a connection to a known IP address (or a name to be resolved via DNS), clients connect to a device identifier. The abstraction is supposed to hide away how the device is located (via a server that keeps track of its IP address), how a direct communication channel is established (via UDP hole punching) or when one of multiple possible fallback scenarios is being used because direct communication is not possible.

The protocol is meant to be resilient, so there are usually three redundant servers handling each network. When a device or client needs to contact a server, it sends the same message to all of them and doesn’t care which one will reply. Note: In this article “network” generally means a PPPP network, i.e. a set of servers and the devices connecting to them. While client applications typically support multiple networks, devices are always associated with a specific one determined by their device prefix.

For what is meant to be a transport layer protocol, PPPP has some serious complexity issues. It encompasses device discovery on the LAN via UDP broadcasts, UDP communication between device/client and the server, and a number of (not exactly trivial) fallback solutions. It also features multiple “encryption” algorithms (more correctly described as obfuscation) as well as network management functionality.

Paul Marrapese’s Wireshark Dissector provides an overview of the messages used by the protocol. While it isn’t quite complete, a look into the pppp.fdesc file shows roughly 70 different message types. It’s hard to tell how all these messages play together as the protocol has not been designed as a state machine. The protocol implementation uses its previous actions as context to interpret incoming messages, but it has little indication as to which messages are expected when. Observing a running system is essential to understanding this protocol.

The complicated message exchange required to establish a connection between a device and a client has been described by Elastic Security Labs. They also provide the code of their client which implements that secret handshake.

I haven’t seen any descriptions of how the fallback approaches work when a direct connection cannot be established. Neither could I observe these fallbacks in action, presumably because the network I observed didn’t enable them. There are at least three such fallbacks: UDP traffic can be relayed by a network-provided server, it can be relayed by a “supernode” which is a device that agreed to be used as a relay, and it can be wrapped in a TCP connection to the server. The two centralized solutions incur significant costs for the network owners, rendering them unpopular. And I can imagine the “supernode” approach to be less than reliable with low-end devices like these cameras (it’s also a privacy hazard but this clearly isn’t a consideration).

I recommend going through the CS2 sales presentation to get an idea of how the protocol is meant to work. Needless to say, it doesn’t always work as intended.

The network ports

I could identify the following network ports being used:

  • UDP 32108: broadcast to discover local devices
  • UDP 32100: device/client communication to the server
  • TCP 443: client communication to the server as fallback

Note that while port 443 is normally associated with HTTPS, here it was apparently only chosen to fool firewalls. The traffic is merely obfuscated, not really encrypted.

The direct communication between the client and the device uses a random UDP port. In my understanding the ports are also randomized when this communication is relayed by a server or supernode.

The device IDs

The canonical representation of a device ID looks like this: ABC-123456-VWXYZ. Here ABC is a device prefix. While a PPPP network will often handle more than one device prefix, mapping a device prefix to a set of servers is supposed to be unambiguous. This rule isn’t enforced across different protocol variants however, e.g. the device prefix EEEE is assigned differently by CS2 and iLnk.

The six digit number following the device prefix allows distinguishing different devices within a prefix. It seems that vendors can choose these numbers freely – some will assign them to devices sequentially, others go by some more complicated rules. A comment on my previous article even claims that they will sometimes reassign existing device IDs to new devices.

The final part is the verification code, meant to prevent enumeration of devices. It is generated by some secret algorithm and allows distinguishing valid device IDs from invalid ones. At least one such algorithm got leaked in the past.

Depending on the application a device ID will not always be displayed in its canonical form. It’s pretty typical for the dashes to be removed for example, in one case I saw the prefix being shortened to one letter. Finally, there are applications that will hide the device ID from the user altogether, displaying only some vendor-specific ID instead.
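For illustration, splitting a canonical device ID into its parts is straightforward; the helper below is hypothetical and only checks the basic shape described above:

/// Split "ABC-123456-VWXYZ" into (prefix, number, verification code).
fn parse_device_id(id: &str) -> Option<(&str, &str, &str)> {
    let mut parts = id.splitn(3, '-');
    let prefix = parts.next()?;
    let number = parts.next()?;
    let check = parts.next()?;
    // The middle part is a six digit number in the canonical form.
    if number.len() == 6 && number.chars().all(|c| c.is_ascii_digit()) {
        Some((prefix, number, check))
    } else {
        None
    }
}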

The protocol variants

So far I could identify at least four variants of this protocol – if you count HLP2P, which is questionable. These protocol implementations differ significantly and aren’t really compatible. A number of apps can work with different protocol implementations, but they generally do so by embedding multiple client libraries.

Variant | Typical client library names | Typical functions
CS2 Network | libPPCS_API.so, libobject_jni.so, librtapi.so | PPPP_Initialize, PPPP_ConnectByServer
Yi Technology | PPPP_API.so, libmiio_PPPP_API.so | PPPP_Initialize, PPPP_ConnectByServer
iLnk | libvdp.so, libHiChipP2P.so | XQP2P_Initialize, XQP2P_ConnectByServer, HI_XQ_P2P_Init
HLP2P | libobject_jni.so, libOKSMARTPPCS.so | HLP2P_Initialize, HLP2P_ConnectByServer

CS2 Network

The Chinese company CS2 Network is the original developer of the protocol. Their implementation can sometimes be recognized without even looking at any code, just by their device IDs. The letters A, I, O and Q are never present in the verification code; there are only 22 valid letters here. The same seems to apply to the Yi Technology fork, however, which is generally very similar.

The other giveaway is the “init string” which encodes network parameters. Typically these init strings are hardcoded in the application (sometimes hundreds of them) and chosen based on device prefix, though some applications retrieve them from their servers. These init strings are obfuscated, with the function PPPP_DecodeString doing the decoding. The approach is typical for CS2 Network: a lookup table filled with random values and some random algebraic operations to make things seem more complex. The init strings look like this:

DRFTEOBOJWHSFQHQEVGNDQEXFRLZGKLUGSDUAIBXBOIULLKRDNAJDNOZHNKMJO:SECRETKEY

The part before the colon decodes into:

127.0.0.1,192.168.1.1,10.0.0.1,

This is a typical list of three server IPs. No, the trailing comma isn’t a typo; it is required for correct parsing. Host names are occasionally used in init strings, but this is uncommon. With CS2 Network generally distrusting DNS from the looks of it, they probably recommend that vendors sidestep it. The “secret” key behind the colon is optional and activates encryption of the transferred data, which is better described as obfuscation. Unlike the server addresses, this part isn’t obfuscated.

Yi Technology

The Xiaomi spinoff Yi Technology appears to have licensed the code of the CS2 Network implementation. They made some moderate changes to it, but it is still very similar to the original. For example, they still use the same code to decode init strings, merely with a different lookup table. Consequently, the same init string as above would look slightly different here:

LZERHWKWHUEQKOFUOREPNWERHLDLDYFSGUFOJXIXJMASBXANOTHRAFMXNXBSAM:SECRETKEY

As can be seen from Paul Marrapese’s Wireshark Dissector, the Yi Technology fork added a bunch of custom protocol messages and extended two messages presumably to provide forward compatibility. The latter is a rather unusual step for the PPPP ecosystem where the dominant approach seems to be “devices and clients connecting to the same network always use the same version of the client library which is frozen for all eternity.”

There is another notable difference: this PPPP implementation doesn’t contain any encryption functionality. There seems to be some AES encryption performed at the application layer (which is the proper way to do it), though I didn’t look too closely.

iLnk

The protocol fork developed by Shenzhen Yunni Technology iLnkP2P seems to have been developed from scratch. The device IDs for legacy iLnk networks are easy to recognize because their verification codes only consist of the letters A to F. The algorithm generating these verification codes is public knowledge (CVE-2019-11219), so we know that these are letters taken from an MD5 hex digest. New iLnk networks appear to have verification codes that can contain all Latin letters; some new algorithm has replaced the compromised one here. Maybe they use Base64 digests now?

An iLnk init string can be recognized by the presence of a dash:

ATBBARASAXAOAQAOAQAOARBBARAZASAOARAWAYAOARAOARBBARAQAOAQAOAQAOAR-$$

The part before the dash decodes into:

3;127.0.0.1;192.168.1.1;10.0.0.1

Yes, the first list entry has to specify how many server IPs there are. The decoding approach (function HI_DecStr or XqStrDec depending on the implementation) is much simpler here; it’s a kind of Base26 encoding. The part after the dash can encode additional parameters related to validation of device IDs, but typically it will be $$, indicating that it is omitted and network-specific device ID validation can be skipped. As far as I can tell, iLnk networks always send all data as plain text; there is no encryption functionality of any kind.
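Judging from the example above, the encoding appears to map each character c to the letter pair 'A' + (c - 32) / 26 followed by 'A' + (c - 32) % 26. The following sketch is my interpretation rather than the original HI_DecStr code, but it reproduces the decoded server list from the init string shown earlier:

/// Decode the letter-pair encoding used before the dash in iLnk init strings.
fn decode_ilnk(encoded: &str) -> Option<String> {
    let bytes = encoded.as_bytes();
    if bytes.len() % 2 != 0 {
        return None;
    }
    let mut out = String::with_capacity(bytes.len() / 2);
    for pair in bytes.chunks(2) {
        let hi = pair[0].checked_sub(b'A')? as u32;
        let lo = pair[1].checked_sub(b'A')? as u32;
        out.push(char::from_u32(hi * 26 + lo + 32)?);
    }
    Some(out)
}

fn main() {
    let encoded = "ATBBARASAXAOAQAOAQAOARBBARAZASAOARAWAYAOARAOARBBARAQAOAQAOAQAOAR";
    // Prints "3;127.0.0.1;192.168.1.1;10.0.0.1"
    println!("{}", decode_ilnk(encoded).unwrap());
}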

Going through the code, the network-level changes in the iLnk fork are extensive, with only the most basic messages shared with the original PPPP protocol. Some message types clash; for example, MSG_DEV_MAX uses the same type as MSG_DEV_LGN_CRC in the CS2 implementation. This fork also introduces new magic numbers: while PPPP messages normally start with 0xF1, some messages here start with 0xA1 and one, for some reason, with 0xF2.

Unfortunately, I haven’t seen any comprehensive analysis of this protocol variant yet, so I’ll just list the message types along with their payload sizes. For messages with 20 bytes payloads it can be assumed that the payload is a device ID. Don’t ask me why two pairs of messages share the same message type.

Message | Message type | Payload size
MSG_HELLO | F1 00 | 0
MSG_RLY_PKT | F1 03 | 0
MSG_DEV_LGN | F1 10 | IPv4: 40, IPv6: 152
MSG_DEV_MAX | F1 12 | 20
MSG_P2P_REQ | F1 20 | IPv4: 36, IPv6: 152
MSG_LAN_SEARCH | F1 30 | 0
MSG_LAN_SEARCH_EXT | F1 32 | 0
MSG_LAN_SEARCH_EXT_ACK | F1 33 | 52
MSG_DEV_UNREACH | F1 35 | 20
MSG_PUNCH_PKT | F1 41 | 20
MSG_P2P_RDY | F1 42 | 20
MSG_RS_LGN | F1 60 | 28
MSG_RS_LGN_EX | F1 62 | 44
MSG_LST_REQ | F1 67 | 20
MSG_RLY_HELLO | F1 70 | 0
MSG_RLY_HELLO_ACK | F1 71 | 0
MSG_RLY_PORT | F1 72 | 0
MSG_RLY_PORT_ACK | F1 73 | 8
MSG_RLY_PORT_EX_ACK | F1 76 | 264
MSG_RLY_REQ_EX | F1 77 | 288
MSG_RLY_REQ | F1 80 | IPv4: 40, IPv6: 160
MSG_HELLO_TO_ACK | F1 83 | 28
MSG_RLY_RDY | F1 84 | 20
MSG_SDEV_LGN | F1 91 | 20
MSG_MGM_ADMIN | F1 A0 | 160
MSG_MGM_DEVLIST_CTRL | F1 A2 | 20
MSG_MGM_HELLO | F1 A4 | 4
MSG_MGM_MULTI_DEV_CTRL | F1 A6 | variable
MSG_MGM_DEV_DETAIL | F1 A8 | 24
MSG_MGM_DEV_VIEW | F1 AA | 4
MSG_MGM_RLY_LIST | F1 AC | 12
MSG_MGM_DEV_CTRL | F1 AE | 24
MSG_MGM_MEM_DB | F1 B0 | 264
MSG_MGM_RLY_DETAIL | F1 B2 | 24
MSG_MGM_ADMIN_LGOUT | F1 BA | 4
MSG_MGM_ADMIN_CHG | F1 BC | 164
MSG_VGW_LGN | F1 C0 | 24
MSG_VGW_LGN_EX | F1 C0 | 24
MSG_VGW_REQ | F1 C3 | 20
MSG_VGW_REQ_ACK | F1 C4 | 4
MSG_VGW_HELLO | F1 C5 | 0
MSG_VGW_LST_REQ | F1 C6 | 20
MSG_DRW | F1 D0 | variable
MSG_DRW_ACK | F1 D1 | variable
MSG_P2P_ALIVE | F1 E0 | 0
MSG_P2P_ALIVE_ACK | F1 E1 | 0
MSG_CLOSE | F1 F0 | 0
MSG_MGM_DEV_LGN_DETAIL_DUMP | F1 F4 | 12
MSG_MGM_DEV_LGN_DUMP | F1 F4 | 12
MSG_MGM_LOG_CTRL | F1 F7 | 12
MSG_SVR_REQ | F2 10 | 0
MSG_DEV_LV_HB | A1 00 | 20
MSG_DEV_SLP_HB | A1 01 | 20
MSG_DEV_QUERY | A1 02 | 20
MSG_DEV_WK_UP_REQ | A1 04 | 20
MSG_DEV_WK_UP | A1 06 | 20

HLP2P

While I’ve seen a few apps with HLP2P code and the corresponding init strings, I am not sure whether these are still used or are merely leftovers from some past adventure. All these apps primarily use networks that rely on other protocol implementations.

HLP2P init strings contain a dash preceded by just three letters. These three letters are ignored, and I am unsure about their significance as I’ve only seen one variant:

DAS-0123456789ABCDEF

The decoding function is called from the HLP2P_Initialize function and uses the most elaborate approach of all. The hex-encoded part after the dash is decrypted using AES-CBC, where the key and initialization vector are derived from a zero-filled buffer via some bogus MD5 hashing. The decoded result is a list of comma-separated parameters like:

DCDC07FF,das,10000001,a+a+a,127.0.0.1-192.168.1.1-10.0.0.1,ABC-CBA

The fifth parameter is a list of server IP addresses and the sixth appears to be the list of supported device prefixes.

On the network level HLP2P is an oddity here. Despite trying hard to provide the same API as other PPPP implementations, including concepts like init strings and device IDs, it appears to be a TCP-based protocol (connecting to server’s port 65527) with little resemblance to PPPP. UDP appears to be used for local broadcasts only (on port 65531). I didn’t spend too much time on the analysis however.

“Encryption”

The CS2 implementation of the protocol is the only one that bothers with encrypting data, though their approach is better described as obfuscation. When encryption is enabled, the function P2P_Proprietary_Encrypt is applied to all outgoing and the function P2P_Proprietary_Decrypt to all incoming messages. These functions take the encryption key (which is visible in the application code as an unobfuscated part of the init string) and mash it into four bytes. These four bytes are then used to select values from a static table that the bytes of the message should be XOR’ed with.

There is at least one public implementation of this “encryption”, though that one chose to skip the “key mashing” part and simply took the resulting four bytes as its key. A number of articles mention having implemented this algorithm; it’s not really complicated.

The same obfuscation is used unconditionally for TCP traffic (TCP communication on port 443 as fallback). Here each message header contains two random bytes. The hex representation of these bytes is used as key to obfuscate message contents.

All *_CRC messages like MSG_DEV_LGN_CRC have an additional layer of obfuscation, performed by the functions PPPP_CRCEnc and PPPP_CRCDec. Unlike P2P_Proprietary_Encrypt which is applied to the entire message including the header, PPPP_CRCEnc is only applied to the payload. As normally only messages exchanged between the device and the server are obfuscated in this way, the corresponding key tends to be contained only in the device firmware and not in the application. Here as well the key is mashed into four bytes which are then used to generate a byte sequence that the message (extended by four + signs) is XOR’ed with. This is effectively an XOR cipher with a static key which is easy to crack even without knowing the key.
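To illustrate why a static XOR keystream offers so little protection (a generic sketch, not the actual PPPP_CRCEnc code): one known plaintext/ciphertext pair reveals the keystream, which can then be reused to decrypt any other message, assuming the keystream repeats or is long enough.

/// Recover the keystream from a known plaintext/ciphertext pair.
fn recover_keystream(known_plain: &[u8], known_cipher: &[u8]) -> Vec<u8> {
    known_plain.iter().zip(known_cipher).map(|(p, c)| p ^ c).collect()
}

/// XOR data against a keystream; with a static key this both "encrypts"
/// and decrypts.
fn xor_apply(data: &[u8], keystream: &[u8]) -> Vec<u8> {
    data.iter()
        .zip(keystream.iter().cycle())
        .map(|(d, k)| d ^ k)
        .collect()
}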

“Secret” messages

The CS2 implementation of the protocol contains a curiosity: two messages starting with 338DB900E559 being processed in a special way. No, this isn’t a hexadecimal representation of the bytes – it’s literally the message contents. No magic bytes, no encryption, the messages are expected to be 17 bytes long and are treated as zero-terminated strings.

I tried sending 338DB900E5592B32 (with a trailing zero byte) to a PPPP server and, surprisingly, received a response (non-ASCII bytes are represented as escape sequences):

\x0e\x0ay\x07\x08uT_ChArLiE@Cs2-NeTwOrK.CoM!

This response was consistent for this server, but another server of the same network responded slightly differently:

\x0e\x0ay\x07\x08vT_ChArLiE@Cs2-NeTwOrK.CoM!

A server from a different network which normally encrypts all communication also responded:

\x17\x06f\x12fDT_ChArLiE@Cs2-NeTwOrK.CoM!

It doesn’t take a lot of cryptanalysis knowledge to realize that an XOR cipher with a constant key is being applied here. Thanks to my “razor sharp deduction” I could conclude that the servers are replying with their respective names and these names are being XOR’ed with the string CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!. Yes, likely the very same Charlie already mentioned at the start of this article. Hi, Charlie!
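Based on these observations, decoding a server’s reply is a one-liner; this sketch simply XORs the response with the constant string and trims the zero padding:

/// Decode the reply to the "secret" status message.
fn decode_server_reply(reply: &[u8]) -> String {
    const KEY: &[u8] = b"CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!";
    reply
        .iter()
        .zip(KEY.iter().cycle())
        .map(|(b, k)| b ^ k)
        .take_while(|&b| b != 0)
        .map(|b| b as char)
        .collect()
}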

I didn’t risk sending the other message, not wanting to shut down a server accidentally. But maybe Shodan wants to extend their method of detecting PPPP servers: their current approach only works when no encryption is used, yet this message seems to get replies from all CS2 servers regardless of encryption.

Applications

Once a connection between the client and the device is established, MSG_DRW messages are exchanged in both directions. The messages will be delivered in order and retransmitted if lost, giving application developers something resembling a TCP stream if you don’t look too closely. In addition, each message is tagged with a channel ID, a number between 0 and 7. It looks like channel IDs are universally ignored by devices and are only relevant in the other direction. The idea seems to be that a client receiving a video stream should still be able to send commands to the device and receive responses over the same connection.

The PPPP protocol doesn’t make any recommendations about how applications should encode their data within that stream, and so application developers came up with a number of wildly different application-level protocols. As a rule of thumb, all devices and clients on a particular PPPP network will always speak the same application-level protocol, though there might be slight differences in the supported capabilities. Different networks can share the same protocol, allowing them to be supported within the same application. Usually, there will be multiple applications implementing the same application-level protocol and working with the same PPPP networks, but I haven’t yet seen any application supporting different application-level protocols.

This allows grouping the applications by their application-level protocol. Applications within the same group are largely interchangeable; the same devices can be accessed from any of them. This doesn’t necessarily mean that everything will work correctly, as there might still be subtle differences. E.g. an application meant for visual doorbells probably accesses somewhat different functionality than one meant for security cameras even if both share the same protocol. Also, devices might be tied to the cloud infrastructure of a specific application, rendering them inaccessible to other applications working with the same PPPP network.

Fun fact: it is often very hard to know up front which protocol your device will speak. There is a huge thread with many spin-offs where people are attempting to reverse engineer A9 Mini cameras so that these can be accessed without an app. This effort is being massively complicated by the fact that all these cameras look basically the same, yet depending on the camera one out of at least four extremely different protocols could be used: HDWifiCamPro variant of SHIX JSON, YsxLite variant of iLnk binary, JXLCAM variant of CGI calls, or some protocol I don’t know because it isn’t based on PPPP.

The following is a list of PPPP-based applications I’ve identified so far, at least the ones with noteworthy user numbers. Mind you, these numbers aren’t necessarily indicative of the number of PPPP devices – some applications listed only use PPPP for some devices, likely using other protocols for most of their supported devices (particularly the ones that aren’t cameras). I try to provide a brief overview of the application-level protocol in the footnotes. Disclaimer: These applications tend to support a huge number of device prefixes in theory, so I mostly chose the “typical” ones based on which ones appear in YouTube videos or GitHub discussions.

Application | Typical device prefixes | Application-level protocol
Xiaomi Home | XMSYSGB | JSON (MISS) [1]
Kami Home, Yi Home, Yi iot | TNPCHNA TNPCHNB TNPUSAC TNPUSAM TNPXGAC | binary [2]
Tuya - Smart Life, Smart Living | TUYASA | binary (Tuya SDK) [3]
365Cam, CY365, Goodcam, HDWifiCamPro, PIX-LINK CAM, VI365, X-IOT CAM | DBG DGB DGO DGOA DGOC DGOE NMSA PIXA PIZ | JSON (SHIX) [4]
Eye4, O-KAM Pro, Veesky | EEEE VSTA VSTB VSTC VSTD VSTF VSTJ | CGI calls [5]
CamHi, CamHipro | AAFF EEEE MMMM NNNN PPPP SSAA SSAH SSAK SSAT SSSS TTTT | binary [6]
CloudEdge, ieGeek Cam | ECIPCM | binary (Meari SDK) [7]
YsxLite | BATC BATE PTZ PTZA PTZB TBAT | binary (iLnk) [8]
FtyCamPro | FTY FTYA FTYC FTZ FTZW | binary (iLnk) [9]
JXLCAM | ACCQ BCCA BCCQ CAMA | CGI calls [10]
LookCam | BHCC FHBB GHBB | JSON [11]
HomeEye, LookCamPro, StarEye | AYS AYSA TUT | JSON (SHIX) [12]
minicam | CAM888 | CGI calls [13]
App2Cam Plus | CGAG CMAG CTAI WGAG | binary (Jsw SDK) [14]

  1. Each message starts with a 4 byte command ID. The initial authorization messages (command IDs 0x100 and 0x101) contain plain JSON data. Other messages contain ChaCha20-encrypted data: first an 8 byte nonce, then the ciphertext. The encryption key is negotiated in the authorization phase. The decrypted plaintext again starts with a 4 byte command ID, followed by JSON data. There is even some Chinese documentation of this interface, though it is rather underwhelming. ↩︎

  2. The device-side implementation of the protocol is available on the web. This doesn’t appear to be reverse engineered; it’s rather the source code of the real thing, complete with Chinese comments. No idea who published this or why; I found it linked by people who develop their own changes to the stock camera firmware. The extensive tnp_eventlist_msg_s structure being sent and received here supports a large number of commands. ↩︎

  3. Each message is preceded by a 16 byte header: 78 56 34 12 magic bytes, request ID, command ID, payload size. This is a very basic interface exposing merely 10 commands, most of which are requesting device information while the rest control video/audio playback. As Tuya SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  4. Messages are preceded by an 8 byte binary header: 06 0A A0 80 magic bytes, then four bytes payload size (there is a JavaScript-based implementation); a small framing sketch follows these footnotes. The SHIX JSON format is a translation of this web API interface: /check_user.cgi?user=admin&pwd=pass becomes {"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}. The pro and cmd fields are redundant, representing a command both as a string and as a number. ↩︎

  5. The binary message headers are similar to the ones used by apps like 365Cam: 01 0A 00 00 magic bytes, four bytes payload size. The payload is however a web request loosely based on this web API interface: GET /check_user.cgi?loginuse=admin&loginpas=pass&user=admin&pwd=pass. Yes, user name and password are duplicated, probably because not all devices expect loginuse/loginpas parameters? You can see in this article what the requests look like. ↩︎

  6. Each message is preceded by a 24 byte header starting with the magic bytes 99 99 99 99, the payload size and the command ID. The other 12 bytes of the header are unused. Not trusting PPPP, CamHi encrypts the payload using AES. It looks like the encryption key is an MD5 hash of a string containing the user name and password among other things. Somebody published some initial insights into the application code. ↩︎

  7. Each message is preceded by a 52 byte header starting with the magic bytes 56 56 50 99. The bulk of this header is taken up by an authentication token: a SHA1 hex digest hashing the username (always admin), device password, sequence number, command ID and payload size. The implemented interface provides merely 14 very basic commands, essentially only exposing access to recordings and the live stream, so the payload, even where present, is something trivial like a date. As Meari SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  8. The commands and their binary representation are contained within libvdp.so, which is the iLnk implementation of the PPPP protocol. Each message is preceded by a 12 byte header starting with the 11 0A magic bytes. The commands are two bytes long, with the higher byte indicating the command type: 2 for SD card commands, 3 for A/V commands, 4 for file commands, 5 for password commands, 6 for network commands, 7 for system commands. ↩︎

  9. While the FtyCamPro app handles different networks than YsxLite, it relies on the same libvdp.so library, meaning that the application-level protocol should be the same. It’s possible that some commands are interpreted differently however. ↩︎

  10. The protocol is very similar to the one used by VStarcam apps like O-KAM Pro. The payload has only one set of credentials however, the parameters user and pwd. It’s also a far more limited and sometimes different set of commands. ↩︎

  11. Each message is wrapped in binary data: a prefix starting with A0 AF AF AF before it, and the bytes F4 F3 F2 F1 after. For some reason the prefix length seems to differ depending on whether the message is sent to the device (26 bytes) or received from it (25 bytes). I don’t know what most of it is, yet everything but the payload length at the end of the prefix seems irrelevant. This Warwick University paper has some info on the JSON payload. It’s particularly notable that the password sent along with each command isn’t actually being checked. ↩︎

  12. LookCamPro & Co. share significant amounts of code with the SHIX apps like 365Cam, they implement basically the same application-level protocol. There are differences in the supported commands however. It’s difficult to say how significant these differences are because all apps contain significant amounts of dead code, defining commands that are never used and probably not even supported. ↩︎

  13. The minicam app seems to use almost the same protocol as VStarcam apps like O-KAM Pro. It handles other networks however. Also, a few of the commands seem different from the ones used by O-KAM Pro, though it is hard to tell how significant these incompatibilities really are. ↩︎

  14. Each message is preceded by a 4 byte header: 3 bytes payload size, 1 byte I/O type (1 for AUTH, 2 for VIDEO, 3 for AUDIO, 4 for IOCTRL, 5 for FILE). The payload starts with a type-specific header. If I read the code correctly, the first 16 bytes of the payload are encrypted with AES-ECB (unpadded) while the rest is sent unchanged. There is an “xor byte” in the payload header which is changed with every request, seemingly to avoid generating identical ciphertexts. Payloads smaller than 16 bytes are not encrypted. I cannot see any initialization of the encryption key beyond filling it with 32 zero bytes, which would mean that this entire mechanism is merely obfuscation. ↩︎
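To make footnote 4’s framing concrete, here is a sketch of wrapping a SHIX JSON command in the 8 byte header described there. The byte order of the size field is an assumption on my part, not something I verified.

/// Frame a SHIX JSON command: 06 0A A0 80 magic bytes, then the payload
/// size, then the JSON payload itself.
fn frame_shix(json: &str) -> Vec<u8> {
    let mut msg: Vec<u8> = vec![0x06, 0x0A, 0xA0, 0x80];
    // Endianness of the size field is assumed here.
    msg.extend_from_slice(&(json.len() as u32).to_be_bytes());
    msg.extend_from_slice(json.as_bytes());
    msg
}

fn main() {
    let cmd = r#"{"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}"#;
    let framed = frame_shix(cmd);
    println!("{} bytes", framed.len());
}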

Niko MatsakisBut then again...maybe alias?

Hmm, as I re-read the post I literally just posted a few minutes ago, I got to thinking. Maybe the right name is indeed Alias, and not Share. The rationale is simple: alias can serve as both a noun and a verb. It hits that sweet spot of “common enough you know what it means, but weird enough that it can be Rust Jargon for something quite specific”. In the same way that we talk about “passing a clone of foo” we can talk about “passing an alias to foo” or an “alias of foo”. Food for thought! I’m going to try Alias on for size in future posts and see how it feels.

Niko MatsakisBikeshedding `Handle` and other follow-up thoughts

There have been two major sets of responses to my proposal for a Handle trait. The first is that the Handle trait seems useful but doesn’t cover all the cases where one would like to be able to ergonomically clone things. The second is that the name doesn’t seem to fit with our Rust conventions for trait names, which emphasize short verbs over nouns. The TL;DR of my response is that (1) I agree, and this is why I think we should work to make Clone ergonomic as well as Handle; and (2) I agree with that too, which is why I think we should find another name. At the moment I prefer Share, with Alias coming in second.

Handle doesn’t cover everything

The first concern with the Handle trait is that, while it gives a clear semantic basis for when to implement the trait, it does not cover all the cases where calling clone is annoying. In other words, if we adopt Handle and make creating new handles very ergonomic, but calling clone remains painful, there will be a temptation to use Handle where it is not appropriate.

In one of our lang team design meetings, TC raised the point that, for many applications, even an “expensive” clone isn’t really a big deal. For example, when writing CLI tools and things, I regularly clone strings and vectors of strings and hashmaps and whatever else; I could put them in an Rc or Arc but I know it just doesn’t matter.

My solution here is simple: let’s make solutions that apply to both Clone and Handle. Given that I think we need a proposal that allows for handles that are both ergonomic and explicit, it’s not hard to say that we should extend that solution to include the option for clone.

The explicit capture clause post already fits this design. I explicitly chose a design that allows users to write move(a.b.c.clone()) or move(a.b.c.handle()), and hence works equally well (or equally not well…) with both traits.

The name Handle doesn’t fit the Rust conventions

A number of people have pointed out Handle doesn’t fit the Rust naming conventions for traits like this, which aim for short verbs. You can interpret handle as a verb, but it doesn’t mean what we want. Fair enough. I like the name Handle because it gives a noun we can use to talk about, well, handles, but I agree that the trait name doesn’t seem right. There was a lot of bikeshedding on possible options but I think I’ve come back to preferring Jack Huey’s original proposal, Share (with a method share). I think Alias and alias is my second favorite. Both of them are short, relatively common verbs.

I originally felt that Share was a bit too generic and overly associated with sharing across threads – but then I at least always call &T a shared reference [1], and an &T would implement Share, so it all seems to work well. Hat tip to Ariel Ben-Yehuda for pushing me on this particular name.
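For the sake of concreteness, here is a hypothetical sketch of what such a trait could look like; neither the name nor the signature is settled, and nothing like this exists in the standard library today:

use std::sync::Arc;

trait Share {
    /// Produce another handle (alias) to the same underlying value,
    /// analogous to what Arc::clone does today.
    fn share(&self) -> Self;
}

impl<T> Share for Arc<T> {
    fn share(&self) -> Self {
        Arc::clone(self)
    }
}

impl<'a, T> Share for &'a T {
    fn share(&self) -> Self {
        *self
    }
}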

Coming up next

The flurry of posts in this series has been an attempt to survey all the discussions that have taken place in this area. I’m not yet aiming to write a final proposal – I think what will come out of this is a series of multiple RFCs.

My current feeling is that we should add the Hand^H^H^H^H, uh, Share trait. I also think we should add explicit capture clauses. However, while explicit capture clauses are clearly “low-level enough for a kernel”, I don’t really think they are “usable enough for a GUI”. The next post will explore another idea that I think might bring us closer to that ultimate ergonomic and explicit goal.


  1. A lot of people say immutable reference but that is simply inaccurate: an &Mutex is not immutable. I think that the term shared reference is better. ↩︎

This Week In RustThis Week in Rust 624

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is dioxus, a framework for building cross-platform apps.

Thanks to llogiq for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly positive week. We saw a great performance win implemented by #148040 and #148182, which optimizes crates with a lot of trivial constants.

Triage done by @kobzol.

Revision range: 23fced0f..35ebdf9b

Summary:

(instructions:u) | mean | range | count
Regressions ❌ (primary) | 0.8% | [0.1%, 2.9%] | 22
Regressions ❌ (secondary) | 0.5% | [0.1%, 1.7%] | 48
Improvements ✅ (primary) | -2.8% | [-16.4%, -0.1%] | 102
Improvements ✅ (secondary) | -1.9% | [-8.0%, -0.1%] | 51
All ❌✅ (primary) | -2.1% | [-16.4%, 2.9%] | 124

4 Regressions, 6 Improvements, 7 Mixed; 7 of them in rollups. 36 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only) Language Reference Leadership Council

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-05 - 2025-12-03 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If someone opens a PR introducing C++ to your Rust project, that code is free as in "use after"

Predrag Gruevski on Mastodon

Thanks to Brett Witty for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Add-on ReviewsSupercharge your productivity with a Firefox extension

With more work and education happening online you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right Firefox extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Raindrop.io

Organize anything you find on the web with Raindrop.io — news articles, videos, PDFs, and more.

Raindrop.io makes it simple to gather clipped web content by subject matter and organize with ease by applying tags, filters, and in-app search. This extension is perfectly suited for projects that require gathering and organizing lots of mixed media.

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize it all into shareable topics or collections.

With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.

Evernote Web Clipper

Similar to Gyazo and Raindrop.io, Evernote Web Clipper offers a kindred feature set — clip, save, and share web content — albeit with some nice user interface distinctions. 

Evernote makes it easy to annotate images and articles for collaborative projects. It also has a strong internal search feature, allowing you to look for specific words and phrases that might appear across scattered collections of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Notefox

Wouldn’t it be great if you could leave yourself little sticky notes anywhere you wanted around the web? Well now you can with Notefox.

Leave notes on specific web pages or entire domains. You can access all your notes from a central repository so everything is easy to find. The extension also includes a helpful auto-save feature so you’ll never lose a note.

Print Edit WE

If you need to save or print an important web page — but it’s mucked up with a bunch of unnecessary clutter like ads, sidebars, and other peripheral distractions — Print Edit WE lets you easily remove those unwanted elements.

Along with a host of great features like the option to save web pages as either HTML or PDF files, automatically delete graphics, and the ability to alter text or add notes, Print Edit WE also provides an array of productivity optimizations like keyboard shortcuts and mouse gestures. This is the ideal productivity extension for any type of work steeped in web research and cataloging.

Focus! Focus! Focus!

Anti-distraction and decluttering extensions can provide a major boon for online workers and students… 

Block Site 

Do you struggle avoiding certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits. 

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster. 

Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities — like blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), setting restrictions on predetermined days (e.g. no Twitter on weekends), or adding a 60-second delay before you can access certain websites, giving you time to reconsider that potential productivity-killing decision. 

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (which is also time customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Time Tracker

See how much time you spend on every website you visit. Time Tracker provides a granular view of your web habits.

If you find you’re spending too much time on certain websites, Time Tracker offers a block site feature to break the bad habit.

Tabby – Window & Tab Manager

Are you overwhelmed by lots of open tabs and windows? Need an easy way to overcome desktop chaos? Tabby – Window & Tab Manager to the rescue.

Regain control of your ever-sprawling open tabs and windows with an extension that lets you quickly reorganize everything. Tabby makes it easy to find what you need in a chaotic sea of open tabs — you can search by word or phrase for what you’re looking for, or use Tabby’s visual preview feature to see little thumbnail images of your open tabs without actually navigating to them. And whenever you need a clean slate but want to save your work, you can save and close all of your open tabs with a single mouse click and return to them later.

Access all of Tabby’s features in one convenient pop-up.

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away — no more distracting images, ads, tempting links to related stories, nothing — just the words you’re there to read. That’s Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later, customizable font size and colors, add annotations to saved pages, and more. 

Checker Plus for Gmail

Stop wasting time bouncing between the web and your Gmail app. Checker Plus for Gmail puts your inbox and more right into Firefox’s toolbar so it’s with you wherever you go on the internet.

See email notifications, read, reply, delete, mark as ‘read’ and more — all within a convenient browser pop-up.

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions that can help with productivity — everything from ways to organize tons of open tabs to translation tools to bookmark managers and more. 

Chris H-CTen-Year Moziversary

I’m a few days late publishing this, but this October marks the tenth anniversary of my first day working at Mozilla. I’m on my third hardware refresh (a Dell XPS which I can’t recommend), still just my third CEO, and now 68 reorgs in.

For something as momentous as breaking into two-digit territory, there’s not really much that’s different from last year. I’m still trying to get Firefox Desktop to use Glean instead of Legacy Telemetry and I’m still not blogging nearly as much as I’d like. Though, I did get promoted earlier this year. I am now a Senior Staff Software Engineer, which means I’m continuing on the journey of doing fewer things myself and instead empowering other people to do things.

As for predictions, I was spot on about FOG Migration actually taking off a little — in fact, quite a lot. All data collection in Firefox Desktop now either passes through Glean to get to Legacy Telemetry, has Glean mirroring alongside it, or has been removed. This is in large part thanks to Florian Quèze and his willingness to stop asking when we could start and just migrate the codebase. Now we’re working on moving the business data calculations onto Glean-sent data, and getting individual teams to change over too. If you’re reading this and were looking for an excuse to remove Legacy Telemetry from your component, this is your excuse.

My prediction that there’d be an All Hands was wrong. Mozilla Leadership has decided that the US is neither a place they want to force people to travel to nor is it a place they want to force people to travel out of (and then need to attempt to return to) in the current political climate. This means that business gatherings of any size are… complicated. Some teams have had simultaneous summits in cities both within and without the US. Some teams have had one or the other side call in virtually from their usual places of work. And our team… well, we’ve not gathered at all. Which is a bummer, since we’ve had a few shuffles in the ranks and it’d be good to get us all in one place. (I will be in Toronto with some fellow senior Data Engineering folks before the end of the year, but that’s the extent of work travel.) I’m broadly in favour of removing the requirement and expectation of travel over the US border — too many people have been disappeared in too many ways. We don’t want to make anyone feel as though they have to risk it. But it seems as though we’re also leaning away from allowing people to risk it if they want to, which is a level of paternalism that I didn’t want to see.

I did have one piece of “work” travel in that I attended CSV Conf in Bologna, Italy. Finally spent my Professional Development budget, and wow what a great investment. I learned so much and had a great time, and that was despite the heat and humidity (goodness, Italy. I was in your North (ish). In September. Why you gotta 30degC me like this?). I’m on the lookout for other great conferences to attend in 2026, so if you know any, get in touch.

My prediction that I’d still be three CEOs in because the search for a new one wouldn’t have completed by now: spot on. Ditto on executing my hardware refresh, though I’m still using a personal monitor at work. I should do something about that.

My prediction that we’d stop putting AI in everything has partially come true. There’s been a noticeable shift away from “Put genAI in it and find a problem for it to (maybe) solve” towards “If you find a problem that genAI can help with, give it a try.” You wouldn’t notice it, necessarily, looking at feature announcements for Firefox, as quite a lot of the integration infrastructure all landed in the past couple of months, making headlines. My feelings on LLMs and genAI have gained layers and nuance since last year. They’re still plagiarism machines that are illegally built by the absolute worst people in ways that worsen the climate catastrophe and entrench existing inequalities. But now they’ve apparently become actually useful in some ways. I’ve read reports from very senior developers about use cases that LLMs have been able to assist with. They are narrow use cases — you must only use it to work on components you understand well, you must only use it on tasks you would do yourself if you had the time and energy — but they’re real. And that means my usual hard line of “And even if you ignore the moral, ethical, environmental, economic, and industry concerns about using LLMs: they don’t even work” no longer applies. And in situations like a for-profit corporation led by people from industry… ignoring the moral, ethical, environmental, economic, and industry concerns is de rigueur.

Add these to the sorta-kinda-okay things LLMs can do like natural language processing and aiding in training and refinement of machine translation models, and it looks as though we’re figuring out the “reheat the leftovers” and “melt butter and chocolate” use cases for these microwave ovens.

It still remains to be seen if, after the bubble pops, these nuclear-powered lake-draining art-stealing microwaves will find a home in many kitchens. I expect the fully-burdened cost will be awfully prohibitive for individuals who just want it to poorly regurgitate Wikipedia articles in a chat interface. It might even be too spicy for enterprises who think (likely erroneously) that they confer some instantaneous and generous productivity multiplier. Who knows.

All I know is that I still don’t like it. But I’ll likely find myself using one before the end of the year. If so, I intend to write up the experience and hopefully address my blogging drought by publishing it here.

Another thing that happened this year that I alluded to in last year’s post was the Google v DOJ ruling in the US. Well, the first two rulings anyway. Still years of appeal to come, but even the existing level of court seemed to agree that the business model that allows Mozilla to receive a bucketload of dollabux from Google for search engine placement in Firefox (aka, the thing that supplies most of my paycheque) should not be illegal at this time. Which is a bit of a relief. One existential threat to the business down… for now.

But mostly? This year has been feeling a little like 2016 again. Instead of The Internet of Things (IoT, where the S stands for Security), it’s genAI. Instead of Mexico and Muslims it’s Antifa and Trans people. The Jays are in the postseason again. Shit’s fucked and getting worse. But in all that, someone still has to rake the leaves and wash the dishes. And if I don’t do it, it won’t get done.

With that bright spot highlighted, here are my predictions for the new year:

  • I will requisition a second work monitor so I stop using personal hardware for work things.
  • FOG Migration (aka the Instrumentation Consolidation Project) will not fully remove all of Legacy Telemetry by this time next year. There’s evidence of cold feet on the “change business metrics to Glean-sent data” front, and even if there weren’t, there’s such a long tail that there’s no doubt something load-bearing that’d delay things to Q4 2025. I _am_, however, predicting that FOG Migration will no longer be all-encompassing work — I will have a chance to do something else with my time.
  • I predict that one of the things I will do with that extra time is, since MoCo insists on a user population measurement KPI, push for a sensible user population measurement. Measuring the size of the user population by counting distinct _profiles_ we’ve _received_ a data packet from on a day (not that the data was collected on that day)? We can do better.
  • I don’t think there’s going to be an All Hands next year. If there is, I’d expect it to be Summit style: multiple cities simultaneously, with video links. Fingers crossed for Toronto finally getting its chance. Though I suppose if the people of the US rose up and took back their country, or if the current President should die, that could change the odds a little. Other US administrations saw the benefit of freedom of movement, regardless of which side of the aisle.
  • Maybe the genAI bubble will have burst? Timing these things is impossible, even if it weren’t the first time in history that this much of the US’ (and world’s) economy is inflating it. The sooner it bursts, the better, as it’s only getting bigger. (I suppose an alternative would be for the next shiny thing to happen along and the interest in genAI to dwindle more slowly with no single burst, just a bunch of crashes. Like blockchain/web3/etc. In that case a slower diminishing would be better than a sooner burst.)
  • I predict that a new MoCo CEO will have been found, but not yet sworn in by this time next year. I have no basis for this prediction: vibes only.

To another year of supporting the Mission!

:chutten

Mozilla Localization (L10N): L10n Report: November Edition 2025

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

What’s new or coming up in Firefox desktop

Firefox Backup

Firefox backup is a new feature being introduced in Firefox 145, currently testable in Beta and Nightly behind a preference flag. See here for instructions on how to test this feature.

This feature allows users to save a backup of their Firefox data to their local device at regular intervals, and later use that backup to restore their browser data or migrate their browser to a new device. One of the use cases is for current Windows 10 users who may be migrating to a new Windows 11 device. The user can save their Firefox backup to OneDrive and, after setting up their new device, install Firefox and restore their browsing data from the backup saved in OneDrive.

This is an alternative to using the sync functionality in combination with a Mozilla account.

Settings Redesign

Coming up in future releases, the current settings menu is being re-organized and re-designed to be more user friendly and easier to understand. New strings will be rolling out with relative frequency, but they can’t be viewed or tested in Beta or Nightly yet. If you encounter anything where you need additional context, please feel free to use the request context button in Pontoon or drop into our localization matrix channel where you can get the latest updates and engage with your fellow localizers from around the world.

What’s new or coming up in mobile

Here’s what’s been going on in Firefox for Android land lately: you may have noticed strings landing for the Toolbar refresh and the tab tray layout, as well as for a homepage revamp. All of this work is ongoing, so expect to see more strings landing soon!

On the Firefox for iOS side, there have been improvements to Search along with a revamp of the menu and tab tray. Ongoing work continues on the Translations feature integration, the homepage revamp, and the toolbar refresh.

More updates coming soon — stay tuned!

What’s new or coming up in web projects

AMO and AMO Frontend

The team has been working on identifying and removing obsolete strings to minimize unnecessary translation effort, especially for locales that are still catching up. Recently they removed an additional 160 or so strings.

To remain in production, a locale must have both projects at or above 80% completion. If only one project meets the threshold, neither will be enabled. This policy helps prevent users from unintentionally switching between their preferred language and English. Please review your locale to confirm both projects are localized and in good standing.

If a locale already in production falls below the threshold, the team will be notified. Each month, they will review the status of all locales and manually add or remove them from production as needed.

Mozilla accounts

The Mozilla accounts team has been working on the ability to customize surfaces for the various projects that rely on Mozilla accounts for account management, such as sync, Mozilla VPN, and others. This customization applies only to a predetermined set of pages (such as sign-in, authentication, etc.) and emails (sign-up confirmation, sign-in verification code, etc.) and is managed through a content management system. This CMS process bypasses the typical build process, and as a result changes are shown in production within a very short time frame (within minutes). Each customization requires an instance of a string, even if that value hasn’t changed, so this can result in a large number of identical strings being created.

This project will be managed in a new “Mozilla accounts CMS” project within Pontoon instead of the main “Mozilla accounts” project. We are doing this for a couple of reasons:

  • To reduce or eliminate the need to translate duplicate strings: In most cases it’s best to have different strings to allow for translation adjustments depending on context, however due to the nature of this project, identical strings for the same page element (e.g. “button”) will use a single translation. For example, all buttons with the text “Sign in” will only require a single translation. This has reduced the number of strings requiring translation by over 50% already, and will reduce the number of additional strings in the future.
  • To enable pretranslation: Important note – this only applies to locales that have opted-in to the pretranslation feature. Due to the CMS string process skipping the normal build cycle and being exposed to production near instantaneously, there’s a high likelihood that untranslated strings may be shown in English before teams have the chance to translate. If a locale has opted in for pretranslation, then the “Mozilla accounts CMS” project will have pretranslation enabled by default and show pretranslated strings until the team has a chance to review and update strings. If your locale has decided not to use the pretranslation feature, then nothing will change and translated strings will be displayed once your team has them translated and approved in Pontoon.

Newly published localizer facing documentation

We’ve recently updated our testing instructions for Firefox for Android and for Firefox for iOS! If you spot anything that could be improved, please file an issue — we’d love your feedback.

Friends of the Lion

Image by Elio Qoshi

  • We’ve started a new blog series spotlighting amazing contributors from Mozilla’s localization community. The first one features Selim of the Turkish community.
  • A second localizer spotlight was published! This time, meet Bogo, a long-time contributor to Bulgarian projects.

Want to learn more from your fellow contributors? Who would you like to be featured? You are invited to nominate the next candidate!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Privacy Blog: Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act

The Digital Fairness Act (DFA) is a defining opportunity to modernise Europe’s consumer protection framework for the digital age. Mozilla welcomes the European Commission’s ambition to ensure that digital environments are fair, open, and respectful of user autonomy.

As online environments are increasingly shaped by manipulative design, pervasive personalization, and emerging AI systems, traditional transparency and consent mechanisms are no longer sufficient. The DFA must therefore address how digital systems are designed and operated – from interface choices to system-level defaults and AI-mediated decision-making.

Mozilla believes the DFA, if designed in a smart way, will complement existing legislation (such as GDPR, DSA, DMA, AI Act) by closing long-recognized legal and enforcement gaps. When properly scoped, the DFA can simplify the regulatory landscape, reduce fragmentation, and enhance legal certainty for innovators, while also enabling consumers to exercise their choices online and bolstering overall consumer protection. Ensuring effective consumer choice is at the heart of contestable markets, encouraging innovation and new entry.

Policy recommendations

1. Recognize and outlaw harmful design practices at the interface and system levels.

  • Update existing rules to ensure that manipulative and deceptive patterns at both interface and system architecture levels are explicitly banned.
  • Extend protection beyond “dark patterns” to include AI-driven and agentic systems that steer users toward outcomes they did not freely choose.
  • Introduce anti-circumvention and burden-shifting provisions requiring platforms to demonstrate the fairness of their design and user-interaction systems.
  • Harmonize key definitions and obligations across the different legislative instruments within consumer, competition, and data protection law.

2. Establish substantive fairness standards for personalization and online advertising.

  • Prohibit exploitative or manipulative personalization based on sensitive data or vulnerabilities.
  • Guarantee simple, meaningful opt-outs that do not degrade service quality.
  • Require the use of privacy-preserving technologies (PETs) and data minimisation by design in all personalization systems.
  • Mandate regular audits to assess fairness and detect systemic bias or manipulation across the ad-tech chain.

3. Strengthen centralized enforcement and cooperation across regulators. 

  • Adopt the DFA as a Regulation and introduce centralized enforcement to ensure consistent application across Member States.
  • Create formal mechanisms for cross-regulator coordination among consumer, data protection, and competition authorities.
  • Update the “average consumer” standard to reflect real behavioral dynamics online, ensuring protection for all users, not just the hypothetical rational actor.

A strong, harmonized DFA would modernize Europe’s consumer protection architecture, strengthen trust, and promote a fairer, more competitive digital economy. By closing long-recognized legal gaps, it would reinforce genuine user choice, simplify compliance, enhance legal certainty, and support responsible innovation.

You can read our position in more detail here.

The post Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act appeared first on Open Policy & Advocacy.

The Rust Programming Language Blog: Announcing Rust 1.91.0

The Rust team is happy to announce a new version of Rust, 1.91.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.91.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.91.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.91.0 stable

aarch64-pc-windows-msvc is now a Tier 1 platform

The Rust compiler supports a wide variety of targets, but the Rust Team can't provide the same level of support for all of them. To clearly mark how supported each target is, we use a tiering system:

  • Tier 3 targets are technically supported by the compiler, but we don't check whether their code builds or passes the tests, and we don't provide any prebuilt binaries as part of our releases.
  • Tier 2 targets are guaranteed to build and we provide prebuilt binaries, but we don't execute the test suite on those platforms: the produced binaries might not work or might have bugs.
  • Tier 1 targets provide the highest support guarantee, and we run the full test suite on those platforms for every change merged in the compiler. Prebuilt binaries are also available.

Rust 1.91.0 promotes the aarch64-pc-windows-msvc target to Tier 1 support, bringing our highest guarantees to users of 64-bit ARM systems running Windows.

Add lint against dangling raw pointers from local variables

While Rust's borrow checking prevents dangling references from being returned, it doesn't track raw pointers. With this release, we are adding a warn-by-default lint on raw pointers to local variables being returned from functions. For example, code like this:

fn f() -> *const u8 {
    let x = 0;
    &x
}

will now produce a lint:

warning: a dangling pointer will be produced because the local variable `x` will be dropped
 --> src/lib.rs:3:5
  |
1 | fn f() -> *const u8 {
  |           --------- return type of the function is `*const u8`
2 |     let x = 0;
  |         - `x` is part of the function and will be dropped at the end of the function
3 |     &x
  |     ^^
  |
  = note: pointers do not have a lifetime; after returning, the `u8` will be deallocated
    at the end of the function because nothing is referencing it as far as the type system is
    concerned
  = note: `#[warn(dangling_pointers_from_locals)]` on by default

Note that the code above is not unsafe, as it itself doesn't perform any dangerous operations. Only dereferencing the raw pointer after the function returns would be unsafe. We expect future releases of Rust to add more functionality helping authors to safely interact with raw pointers, and with unsafe code more generally.

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Platform Support

Refer to Rust’s platform support page for more information on Rust’s tiered platform support.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.91.0

Many people came together to create Rust 1.91.0. We couldn't have done it without all of you. Thanks!

Mozilla Privacy Blog: California’s Opt Me Out Act is a Win for Privacy

It’s no secret that privacy and user empowerment have always been core to Mozilla’s mission.

Over the years, we’ve consistently engaged with policymakers to advance strong privacy protections. We were thrilled when the California Consumer Privacy Act (CCPA) was signed into law, giving people the ability to opt out and send a clear signal to websites that they don’t want their personal data tracked or sold. Despite this progress, many browsers and operating systems still failed to make these controls available or offer the tools to do so without third-party support. This gap is why we’ve pushed time and time again for additional legislation to ensure people can easily exercise their privacy rights online.

Last year, we shared our disappointment when California’s AB 3048 was not signed into law. This bill was a meaningful step toward empowering consumers. When it failed to pass, we urged policymakers to continue efforts to advance similar legislation, to close gaps and strengthen enforcement.

We can’t stress this enough: Legislation must prioritize people’s privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information.

That’s why we joined allies to support AB 566, the California Opt Me Out Act, mandating that browsers include an opt-out setting so Californians can easily communicate their privacy preferences. Earlier this month, we were happy to see it pass and Governor Newsom sign it into law.

Mozilla has long advocated for easily accessible universal opt-out mechanisms; it’s a core feature built into Firefox through our Global Privacy Control (GPC) mechanism. By requiring browsers to provide tools like GPC, California is setting an important precedent that brings us closer to a web where privacy controls are consistent, effective, and easy to use.
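
For a sense of what the signal looks like in practice, here is a minimal sketch of how a site might read GPC. It is illustrative only: navigator.globalPrivacyControl and the “Sec-GPC: 1” request header are the real signal surfaces, while the helper name userOptedOut and what a site does with the result are assumptions of ours, not part of any specification.

// A minimal sketch of reading the Global Privacy Control signal.
// What a site does once it sees the signal is entirely up to the site.

// Client side: Firefox exposes the preference as a boolean on navigator.
const gpcOptOut = navigator.globalPrivacyControl === true;
if (gpcOptOut) {
  console.log("Honoring GPC: do not sell or share this user's data");
}

// Server side: the same preference arrives on every request as a header.
function userOptedOut(requestHeaders) {
  return requestHeaders["sec-gpc"] === "1";
}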

We hope to see similar steps in other states and at the federal level, to advance meaningful privacy protections for everyone online – the issue is more urgent than ever. We remain committed to working alongside policymakers across the board to ensure it happens.

The post California’s Opt Me Out Act is a Win for Privacy appeared first on Open Policy & Advocacy.

Mozilla Addons Blog: New Recommended Extensions arrived, thanks to our community curators

Every so often we host community-driven curatorial projects to select new Firefox Recommended Extensions. By gathering a diverse group of community contributors who share a passion for the open web and add-ons, we aim to identify new Recommended Extensions that meet Mozilla’s “highest standards of security, functionality, and user experience.”

Earlier this year we concluded yet another successful curatorial project spanning six months. We evaluated dozens of worthy nominations. Those that received the highest marks for functionality and user experience were then put through a technical review process to ensure they adhere to Add-on Policies and our industry-leading security standards. A few candidates are still working their way through the final stages of review, but most of the new batch of Recommended Extensions are now live on AMO (addons.mozilla.org) and we wanted to share the news. So, without further ado, here are some exciting new additions to the program…

Yomitan is a dictionary extension uniquely suited for learning new languages (20+). An interactive pop-up provides not only word definitions but audio pronunciation guidance as well, plus other great features tailored for understanding foreign languages.

Power Thesaurus is another elite language tool that provides a vast world of synonyms just a mouse click away (antonyms too!).

Power Thesaurus brings a world of words into Firefox.

PhotoShow is a fabulous tool for any photophile. Just hover over images to instantly enlarge their appearance with an option to download in high-def. Works with 300+ top websites.

Simple Gesture for Android provides a suite of touch gestures like page scrolling, back and forth navigation, tab management, and more.

Immersive Translate is a feature-packed translation extension. Highlights include translations across mediums like web, PDF, eBooks, even video subtitles. Works great on both Firefox desktop and Android.

Time Tracker offers key insights into your web habits. Track the time you spend on websites — with an option to block specific sites if you find they’re stealing too much of your time.

Checker Plus for Gmail makes it easy to stay on top of your Gmail straight from Firefox’s toolbar. See email notifications, read, reply, delete, mark as read and more — without clicking away from wherever you are on the web.

YouTube Search Fixer de-clutters the YouTube experience by removing distracting features like Related Videos, For You, People Also Watched, Shorts — all that stuff intended to rabbit hole your attention. It’s completely customizable, so you’re free to tweak YouTube to taste.

YouTube Search Fixer puts you in control of what you see.

Notefox lets you leave notes to yourself on any website (per page or domain wide). It’s a simple, ideal tool for deep researchers or anyone who needs to leave themselves helpful notes around the web.

Sink It for Reddit features a bunch of “quality of life improvements” as its developer puts it, including color coded comments, content muting, adaptive dark mode, and more.

Raindrop.io helps you save and organize anything you find on the web. This is a tremendous tool for clipping articles, videos, even PDFs — and categorizing them by topic.

Show Video Controls for Firefox is a beloved feature for watchers of WebM formatted videos. The extension automatically enables video controls (volume/mute, play/pause, full screen, etc.).

Chrome Mask is a clever little extension designed to “mask” Firefox as the Chrome browser to websites that otherwise try to block or don’t want to support Firefox.

Congratulations to all of the developers! You’ve built incredible features that will be appreciated by millions of Firefox users.

Finally, a huge thank you to the Firefox Recommended Extensions Advisory Board who contributed their time and talent helping curate all these new Recommended extensions. Shout outs to Amber Shumaker, C. Liam Brown, Cody Ortt, Danny Colin, gsakel, Lewis, Michael Soh, Paul, Rafi Meher, and Rusty (Rusty Zone on YouTube).

We’re planning another curatorial project sometime in 2026, so if you’re the developer of a Firefox extension you believe meets the criteria to become a Recommended extension, or you’re the user of an extension you feel deserves consideration for the program, please email us nominations at amo-featured [at] mozilla [dot] org.

The post New Recommended Extensions arrived, thanks to our community curators appeared first on Mozilla Add-ons Community Blog.

This Week In Rust: This Week in Rust 623

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research

Crate of the Week

This week's crate is tower-resilience, a library offering resilience features for tower.

Thanks to Josh Rotenberg for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

463 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly negative week, coming almost entirely from adding sizedness bounds in #142712. Other than that, we got a nice win for async code from state transform optimization in #147493 and quite a few smaller improvements from codegen optimization in #147890.

Triage done by @panstromek. Revision range: 4068bafe..23fced0f

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.7%   [0.2%, 3.7%]      113
Regressions ❌ (secondary)    0.5%   [0.1%, 1.7%]      75
Improvements ✅ (primary)    -0.4%   [-0.7%, -0.2%]    3
Improvements ✅ (secondary)  -2.3%   [-20.8%, -0.1%]   30
All ❌✅ (primary)            0.7%   [-0.7%, 3.7%]     116

2 Regressions, 2 Improvements, 7 Mixed; 2 of them in rollups. 42 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Compiler Team (MCPs only)
Leadership Council

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-10-29 - 2025-11-26 🦀

Virtual
Africa
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Petition to add an unwise keyword in Rust

James Logan on hachyderm.io

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Thunderbird: Mobile Progress Report: September-October 2025

A Brief Self-Introduction

Hello community, it’s a pleasure to be here and take part in a product I’ve used for many years, now with a focus on Mobile. I am Jon Bott, and I am the new Engineering Manager for the Thunderbird Mobile teams. I am passionate about native mobile development and am excited to help move both mobile apps forward.

Refining our Roadmaps

For now, as we develop, we are refining the roadmap and making more concrete plans for iOS Thunderbird’s Alpha release in a couple of months, and finalizing our initial pass at the Account Drawer on Android (planned for release in the next beta). We also have Notification and Message List improvements under development.

Carpaccio

As a mobile product, we’ve gone through several changes over the last year or so, from large annual releases to our more recent monthly beta and release process. Our next step is to start sizing our features so they fit better into that monthly cadence. You’ll see the benefits of this over the next few months as we simplify our planning and process, breaking large features into smaller, more frequently delivered pieces. This is based on the Carpaccio method of breaking features down into thin slices, with the goal of delivering usable features to our users more quickly and focusing on an iterative process that lets us take feedback from the community sooner on a feature’s experience and design. Not everything will fit this model, of course, but more will go out sooner as we carve away at our larger goals for the platforms.

Stay Tuned

Over the next few weeks we’ll update our timelines and roadmaps to show which pieces we have high confidence in delivering over the next few months, along with a 50,000-foot (15,000-meter) view of the larger pieces we hope to tackle in the next year. Ultimately, our goal is to reduce your pain points more quickly and keep adding polish to Thunderbird’s mobile experience.

Progress with Thunderbird iOS

We are excited to show the progress we are making in getting the iOS app up and running. Some things are connected, others have sample data for now, but it helps us move quickly and start to share what the UI will be like moving forward. Here are the actual screens we’ve coded up:

____

Jon Bott

Manager, Mobile Apps

The post Mobile Progress Report: September-October 2025 appeared first on The Thunderbird Blog.

Spidermonkey Development Blog: Who needs Graphviz when you can build it yourself?

We recently overhauled our internal tools for visualizing the compilation of JavaScript and WebAssembly. When SpiderMonkey’s optimizing compiler, Ion, is active, we can now produce interactive graphs showing exactly how functions are processed and optimized.

You can play with these graphs right here on this page. Simply write some JavaScript code in the test function and see what graph is produced. You can click and drag to navigate, ctrl-scroll to zoom, and drag the slider at the bottom to scrub through the optimization process.

As you experiment, take note of how stable the graph layout is, even as the sizes of blocks change or new structures are added. Try clicking a block's title to select it, then drag the slider and watch the graph change while the block remains in place. Or, click an instruction's number to highlight it so you can keep an eye on it across passes.

 

Example iongraph output

We are not the first to visualize our compiler’s internal graphs, of course, nor the first to make them interactive. But I was not satisfied with the output of common tools like Graphviz or Mermaid, so I decided to create a layout algorithm specifically tailored to our needs. The resulting algorithm is simple, fast, produces surprisingly high-quality output, and can be implemented in less than a thousand lines of code. The purpose of this article is to walk you through this algorithm and the design concepts behind it.

Read this post on desktop to see an interactive demo of iongraph.

Background

As readers of this blog already know, SpiderMonkey has several tiers of execution for JavaScript and WebAssembly code. The highest tier is known as Ion, an optimizing SSA compiler that takes the most time to compile but produces the highest-quality output.

Working with Ion frequently requires us to visualize and debug the SSA graph. Since 2011 we have used a tool for this purpose called iongraph, built by Sean Stangl. It is a simple Python script that takes a JSON dump of our compiler graphs and uses Graphviz to produce a PDF. It is perfectly adequate, and very much the status quo for compiler authors, but unfortunately the Graphviz output has many problems that make our work tedious and frustrating.

The first problem is that the Graphviz output rarely bears any resemblance to the source code that produced it. Graphviz will place nodes wherever it feels will minimize error, resulting in a graph that snakes left and right seemingly at random. There is no visual intuition for how deeply nested a block of code is, nor is it easy to determine which blocks are inside or outside of loops. Consider the following function, and its Graphviz graph:

function foo(n) {
  let result = 0;
  for (let i = 0; i < n; i++) {
    if (!!(i % 2)) {
      result = 0x600DBEEF;
    } else {
      result = 0xBADBEEF;
    }
  }

  return result;
}

Counterintuitively, the return appears before the two assignments in the body of the loop. Since this graph mirrors JavaScript control flow, we’d expect to see the return at the bottom. This problem only gets worse as graphs grow larger and more complex.

The second, related problem is that Graphviz’s output is unstable. Small changes to the input can result in large changes to the output. As you page through the graphs of each pass within Ion, nodes will jump left and right, true and false branches will swap, loops will run up the right side instead of the left, and so on. This makes it very hard to understand the actual effect of any given pass. Consider the following before and after, and notice how the second graph is almost—but not quite—a mirror image of the first, despite very minimal changes to the graph’s structure:

None of this felt right to me. Control flow graphs should be able to follow the structure of the program that produced them. After all, a control flow graph has many restrictions that a general-purpose tool would not be aware of: they have very few cycles, all of which are well-defined because they come from loops; furthermore, both JavaScript and WebAssembly have reducible control flow, meaning all loops have only one entry, and it is not possible to jump directly into the middle of a loop. This information could be used to our advantage.

Beyond that, a static PDF is far from ideal when exploring complicated graphs. Finding the inputs or uses of a given instruction is a tedious and frustrating exercise, as is following arrows from block to block. Even just zooming in and out is difficult. I eventually concluded that we ought to just build an interactive tool to overcome these limitations.

How hard could layout be?

I had one false start with graph layout, with an algorithm that attempted to sort blocks into vertical “tracks”. This broke down quickly on a variety of programs and I was forced to go back to the drawing board—in fact, back to the source of the very tool I was trying to replace.

The algorithm used by dot, the typical hierarchical layout mode for Graphviz, is known as the Sugiyama layout algorithm, from a 1981 paper by Sugiyama et al. As an introduction, I found a short series of lectures that broke down the Sugiyama algorithm into 5 steps:

  1. Cycle breaking, where the direction of some edges is flipped in order to produce a DAG.
  2. Leveling, where vertices are assigned into horizontal layers according to their depth in the graph, and dummy vertices are added to any edge that crosses multiple layers.
  3. Crossing minimization, where vertices on a layer are reordered in order to minimize the number of edge crossings.
  4. Vertex positioning, where vertices are horizontally positioned in order to make the edges as straight as possible.
  5. Drawing, where the final graph is rendered to the screen.

A screenshot from the lectures, showing the five steps above

These steps struck me as surprisingly straightforward, and provided useful opportunities to insert our own knowledge of the problem:

  • Cycle breaking would be trivial for us, since the only cycles in our data are loops, and loop backedges are explicitly labeled. We could simply ignore backedges when laying out the graph.
  • Leveling would be straightforward, and could easily be modified to better mimic the source code. Specifically, any blocks coming after a loop in the source code could be artificially pushed down in the layout, solving the confusing early-exit problem.
  • Permuting vertices to reduce edge crossings was actually just a bad idea, since our goal was stability from graph to graph. The true and false branches of a condition should always appear in the same order, for example, and a few edge crossings is a small price to pay for this stability.
  • Since reducible control flow ensures that a program’s loops form a tree, vertex positioning could ensure that loops are always well-nested in the final graph.

Taken all together, these simplifications resulted in a remarkably straightforward algorithm, with the initial implementation being just 1000 lines of JavaScript. (See this demo for what it looked like at the time.) It also proved to be very efficient, since it avoided the most computationally complex parts of the Sugiyama algorithm.

iongraph from start to finish

We will now go through the entire iongraph layout algorithm. Each section contains explanatory diagrams, in which rectangles are basic blocks and circles are dummy nodes. Loop header blocks (the single entry point to each loop) are additionally colored green.

Be aware that the block positions in these diagrams are not representative of the actual computed layout position at each point in the process. For example, vertical positions are not calculated until the very end, but it would be hard to communicate what the algorithm was doing if all blocks were drawn on a single line!

Step 1: Layering

We first sort the basic blocks into horizontal tracks called “layers”. This is very simple; we just start at layer 0 and recursively walk the graph, incrementing the layer number as we go. As we go, we track the “height” of each loop, not in pixels, but in layers.

We also take this opportunity to vertically position nodes “inside” and “outside” of loops. Whenever we see an edge that exits a loop, we defer the layering of the destination block until we are done layering the loop contents, at which point we know the loop’s height.

A note on implementation: nodes are visited multiple times throughout the process, not just once. This can produce a quadratic explosion for large graphs, but I find that an early-out is sufficient to avoid this problem in practice.

The animation below shows the layering algorithm in action. Notice how the final block in the graph is visited twice, once after each loop that branches to it, and in each case, the block is deferred until the entire loop has been layered, rather than processed immediately after its predecessor block. The final position of the block is below the entirety of both loops, rather than directly below one of its predecessors as Graphviz would do. (Remember, horizontal and vertical positions have not yet been computed; the positions of the blocks in this diagram are hardcoded for demonstration purposes.)

Implementation pseudocode
function layerBlock(block, layer = 0) {
  // Omitted for clarity: special handling of our "backedge blocks"

  // Early out if the block would not be updated
  if (layer <= block.layer) {
    return;
  }

  // Update the layer of the current block
  block.layer = Math.max(block.layer, layer);

  // Update the heights of all loops containing the current block
  let header = block.loopHeader;
  while (header) {
    header.loopHeight = Math.max(header.loopHeight, block.layer - header.layer + 1);
    header = header.parentLoopHeader;
  }

  // Recursively layer successors
  for (const succ of block.successors) {
    if (succ.loopDepth < block.loopDepth) {
      // Outgoing edges from the current loop will be layered later
      block.loopHeader.outgoingEdges.push(succ);
    } else {
      layerBlock(succ, layer + 1);
    }
  }

  // Layer any outgoing edges only after the contents of the loop have
  // been processed
  if (block.isLoopHeader()) {
    for (const succ of block.outgoingEdges) {
      layerBlock(succ, layer + block.loopHeight);
    }
  }
}

Step 2: Create dummy nodes

Any time an edge crosses a layer, we create a dummy node. This allows edges to be routed across layers without overlapping any blocks. Unlike in traditional Sugiyama, we always put downward dummies on the left and upward dummies on the right, producing a consistent “counter-clockwise” flow. This also makes it easy to read long vertical edges, whose direction would otherwise be ambiguous. (Recall how the loop backedge flipped from the right to the left in the “unstable layout” Graphviz example from before.)

In addition, we coalesce any edges that are going to the same destination by merging their dummy nodes. This heavily reduces visual noise.
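
To make this concrete, here is a simplified sketch in the spirit of the other “Implementation pseudocode” blocks in this post. The data shapes (layer, nodes, isDummy, dst, segments) are illustrative rather than the actual iongraph structures, and the backedge handling and left/right placement rules described above are omitted.

// A simplified sketch of dummy-node creation with coalescing; data shapes are
// illustrative, not the real iongraph structures.
function createDummyNodes(layers, edges) {
  for (const edge of edges) {
    const { src, dst } = edge;
    const segments = [];
    let prev = src;
    // Add one dummy per layer the edge crosses between src and dst.
    for (let layer = src.layer + 1; layer < dst.layer; layer++) {
      // Coalesce: reuse a dummy on this layer that already heads to dst.
      let dummy = layers[layer].nodes.find(n => n.isDummy && n.dst === dst);
      if (!dummy) {
        dummy = { isDummy: true, dst, layer };
        layers[layer].nodes.push(dummy);
      }
      segments.push({ from: prev, to: dummy });
      prev = dummy;
    }
    segments.push({ from: prev, to: dst });
    edge.segments = segments; // the edge is now drawn as these short segments
  }
}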

Step 3: Straighten edges

This is the fuzziest and most ad-hoc part of the process. Basically, we run lots of small passes that walk up and down the graph, aligning layout nodes with each other. Our edge-straightening passes include:

  • Pushing nodes to the right of their loop header to “indent” them.
  • Walking a layer left to right, moving children to the right to line up with their parents. If any nodes overlap as a result, they are pushed further to the right.
  • Walking a layer right to left, moving parents to the right to line up with their children. This version is more conservative and will not move a node if it would overlap with another. This cleans up most issues from the first pass.
  • Straightening runs of dummy nodes so we have clean vertical lines.
  • “Sucking in” dummy runs on the left side of the graph if there is room for them to move to the right.
  • Straightening out any edges that are “nearly straight”, according to a chosen threshold. This makes the graph appear less wobbly. We do this by repeatedly “combing” the graph upward and downward, aligning parents with children, then children with parents, and so on.

It is important to note that dummy nodes participate fully in this system. If for example you have two side-by-side loops, straightening the left loop’s backedge will push the right loop to the side, avoiding overlaps and preserving the graph’s visual structure.

We do not reach a fixed point with this strategy, nor do we attempt to. I find that if you continue to repeatedly apply these particular layout passes, nodes will wander to the right forever. Instead, the layout passes are hand-tuned to produce decent-looking results for most of the graphs we look at on a regular basis. That said, this could certainly be improved, especially for larger graphs which do benefit from more iterations.
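
As a flavor of what one of these passes looks like, here is a hedged sketch of the second pass listed above (walking a layer left to right, lining children up with their parents, and pushing right on overlap). The field names (parents, x, width) are illustrative, and the real iongraph passes are more involved.

// Sketch of one straightening pass: align each node with its first parent,
// pushing right so nodes on the layer never overlap. Field names are
// illustrative; the real passes are more involved.
function alignWithParents(layer, spacing = 30) {
  let minX = -Infinity;
  for (const node of layer.nodes) { // walk the layer left to right
    let x = node.x;
    const parent = node.parents && node.parents[0];
    if (parent) {
      x = Math.max(x, parent.x); // line up under the parent if possible
    }
    x = Math.max(x, minX);       // but never overlap the node to the left
    node.x = x;
    minX = node.x + node.width + spacing;
  }
}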

At the end of this step, all nodes have a fixed X-coordinate and will not be modified further.

Step 4: Track horizontal edges

Edges may overlap visually as they run horizontally between layers. To resolve this, we sort edges into parallel “tracks”, giving each a vertical offset. After tracking all the edges, we record the total height of the tracks and store it on the preceding layer as its “track height”. This allows us to leave room for the edges in the final layout step.

We first sort edges by their starting position, left to right. This produces a consistent arrangement of edges that has few vertical crossings in practice. Edges are then placed into tracks from the “outside in”, stacking rightward edges on top and leftward edges on the bottom, creating a new track if the edge would overlap with or cross any other edge.

The diagram below is interactive. Click and drag the blocks to see how the horizontal edges get assigned to tracks.

Implementation pseudocode
function trackHorizontalEdges(layer) {
  const TRACK_SPACING = 20;

  // Gather all edges on the layer, and sort left to right by starting coordinate
  const layerEdges = [];
  for (const node of layer.nodes) {
    for (const edge of node.edges) {
      layerEdges.push(edge);
    }
  }
  layerEdges.sort((a, b) => a.startX - b.startX);

  // Assign edges to "tracks" based on whether they overlap horizontally with
  // each other. We walk the tracks from the outside in and stop if we ever
  // overlap with any other edge.
  const rightwardTracks = []; // [][]Edge
  const leftwardTracks = [];  // [][]Edge
  nextEdge:
  for (const edge of layerEdges) {
    const trackSet = edge.endX - edge.startX >= 0 ? rightwardTracks : leftwardTracks;
    let lastValidTrack = null; // []Edge | null

    // Iterate through the tracks in reverse order (outside in)
    for (let i = trackSet.length - 1; i >= 0; i--) {
      const track = trackSet[i];
      let overlapsWithAnyInThisTrack = false;
      for (const otherEdge of track) {
        if (edge.dst === otherEdge.dst) {
          // Assign the edge to this track to merge arrows
          track.push(edge);
          continue nextEdge;
        }

        const al = Math.min(edge.startX, edge.endX);
        const ar = Math.max(edge.startX, edge.endX);
        const bl = Math.min(otherEdge.startX, otherEdge.endX);
        const br = Math.max(otherEdge.startX, otherEdge.endX);
        const overlaps = ar >= bl && al <= br;
        if (overlaps) {
          overlapsWithAnyInThisTrack = true;
          break;
        }
      }

      if (overlapsWithAnyInThisTrack) {
        break;
      } else {
        lastValidTrack = track;
      }
    }

    if (lastValidTrack) {
      lastValidTrack.push(edge);
    } else {
      trackSet.push([edge]);
    }
  }

  // Use track info to apply offsets to each edge for rendering.
  const tracksHeight = TRACK_SPACING * Math.max(
    0,
    rightwardTracks.length + leftwardTracks.length - 1,
  );
  let trackOffset = -tracksHeight / 2;
  for (const track of [...rightwardTracks.toReversed(), ...leftwardTracks]) {
    for (const edge of track) {
      edge.offset = trackOffset;
    }
    trackOffset += TRACK_SPACING;
  }
}

Step 5: Verticalize

Finally, we assign each node a Y-coordinate. Starting at a Y-coordinate of zero, we iterate through the layers, repeatedly adding the layer’s height and its track height, where the layer height is the maximum height of any node in the layer. All nodes within a layer receive the same Y-coordinate; this is simple and easier to read than Graphviz’s default of vertically centering nodes within a layer.

Now that every node has both an X and Y coordinate, the layout process is complete.

Implementation pseudocode
function verticalize(layers) {
  let layerY = 0;
  for (const layer of layers) {
    let layerHeight = 0;
    for (const node of layer.nodes) {
      node.y = layerY;
      layerHeight = Math.max(layerHeight, node.height);
    }
    layerY += layerHeight;
    layerY += layer.trackHeight;
  }
}

Step 6: Render

The details of rendering are out of scope for this article, and depend on the specific application. However, I wish to highlight a stylistic decision that I feel makes our graphs more readable.

When rendering edges, we use a style inspired by railroad diagrams. These have many advantages over the Bézier curves employed by Graphviz. First, straight lines feel more organized and are easier to follow when scrolling up and down. Second, they are easy to route (vertical when crossing layers, horizontal between layers). Third, they are easy to coalesce when they share a destination, and the junctions provide a clear indication of the edge’s direction. Fourth, they always cross at right angles, improving clarity and reducing the need to avoid edge crossings in the first place.
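
As an illustration only (not the actual renderer), such an edge can be emitted as a short orthogonal SVG path built from the coordinates computed in the earlier steps. The edge fields referenced here (startX, endX, offset, src, dst) are assumed names, not iongraph's real ones.

// Sketch of a railroad-style edge as an SVG path string: down from the source
// block, across along the edge's assigned track, down into the destination.
function railroadEdgePath(edge) {
  const startY = edge.src.y + edge.src.height;      // bottom of the source block
  const endY = edge.dst.y;                          // top of the destination block
  const trackY = (startY + endY) / 2 + edge.offset; // horizontal run, shifted by its track
  return [
    `M ${edge.startX} ${startY}`, // leave the source
    `V ${trackY}`,                // straight down to the track
    `H ${edge.endX}`,             // straight across, crossing others at right angles
    `V ${endY}`,                  // straight down into the destination
  ].join(" ");
}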

Consider the following example. There are several edge crossings that may traditionally be considered undesirable—yet the edges and their directions remain clear. Of particular note is the vertical junction highlighted in red on the left: not only is it immediately clear that these edges share a destination, but the junction itself signals that the edges are flowing downward. I find this much more pleasant than the “rat’s nest” that Graphviz tends to produce.

Examples of railroad-diagram edges

Why does this work?

It may seem surprising that such a simple (and stupid) layout algorithm could produce such readable graphs, when more sophisticated layout algorithms struggle. However, I feel that the algorithm succeeds because of its simplicity.

Most graph layout algorithms are optimization problems, where error is minimized on some chosen metrics. However, these metrics seem to correlate poorly to readability in practice. For example, it seems good in theory to rearrange nodes to minimize edge crossings. But a predictable order of nodes seems to produce more sensible results overall, and simple rules for edge routing are sufficient to keep things tidy. (As a bonus, this also gives us layout stability from pass to pass.) Similarly, layout rules like “align parents with their children” produce more readable results than “minimize the lengths of edges”.

Furthermore, by rejecting the optimization problem, a human author gains more control over the layout. We are able to position nodes “inside” of loops, and push post-loop content down in the graph, because we reject this global constraint-solver approach. Minimizing “error” is meaningless compared to a human maximizing meaning through thoughtful design.

And finally, the resulting algorithm is simply more efficient. All the layout passes in iongraph are easy to program and scale gracefully to large graphs because they run in roughly linear time. It is better, in my view, to run a fixed number of layout iterations according to your graph complexity and time budget, rather than to run a complex constraint solver until it is “done”.

By following this philosophy, even the worst graphs become tractable. Below is a screenshot of a zlib function, compiled to WebAssembly, and rendered using the old tool.

spaghetti nightmare!!

It took about ten minutes for Graphviz to produce this spaghetti nightmare. By comparison, iongraph can now lay out this function in 20 milliseconds. The result is still not particularly beautiful, but it renders thousands of times faster and is much easier to navigate.

better spaghetti

Perhaps programmers ought to put less trust into magic optimizing systems, especially when a human-friendly result is the goal. Simple (and stupid) algorithms can be very effective when applied with discretion and taste.

Future work

We have already integrated iongraph into the Firefox profiler, making it easy for us to view the graphs of the most expensive or impactful functions we find in our performance work. Unfortunately, this is only available in specific builds of the SpiderMonkey shell, and is not available in full browser builds. This is due to architectural differences in how profiling data is captured and the flags with which the browser and shell are built. I would love for Firefox users to someday be able to view these graphs themselves, but at the moment we have no plans to expose this to the browser. However, one bug tracking some related work can be found here.

We will continue to sporadically update iongraph with more features to aid us in our work. We have several ideas for new features, including richer navigation, search, and visualization of register allocation info. However, we have no explicit roadmap for when these features may be released.

To experiment with iongraph locally, you can run a debug build of the SpiderMonkey shell with IONFLAGS=logs; this will dump information to /tmp/ion.json. This file can then be loaded into the standalone deployment of iongraph. Please be aware that the user experience is rough and unpolished in its current state.

The source code for iongraph can be found on GitHub. If this subject interests you, we would welcome contributions to iongraph and its integration into the browser. The best place to reach us is our Matrix chat.


Thanks to Matthew Gaudet, Asaf Gartner, and Colin Davidson for their feedback on this article.

Will Kahn-Greene: Open Source Project Maintenance 2025

Every October, I do a maintenance pass on all my projects. At a minimum, that involves dropping support for whatever Python version is no longer supported and adding support for the most recently released Python version. While doing that, I go through the issue tracker, answer questions, and fix whatever I can fix. Then I release new versions. Then I think about which projects I should deprecate and figure out a deprecation plan for them.

This post covers the 2025 round.

TL;DR


Mozilla Attack & Defense: Firefox Security & Privacy Newsletter 2025 Q3

Welcome to the Q3 2025 edition of the Firefox Security and Privacy newsletter!

Security and Privacy on the web are the cornerstones of Mozilla’s manifesto, and they influence how we operate and build our products. Following are the highlights of our work from Q3 2025, grouped into the following categories:

  • Firefox Product Security & Privacy, showcasing new Security & Privacy Features and Integrations in Firefox.
  • Firefox for Enterprise, highlighting security & privacy updates for administrative features, like Enterprise policies.
  • Core Security, outlining Security and Hardening efforts within the Firefox Platform.
  • Web Security and Standards, allowing websites to better protect themselves against online threats.

Preface

Note: Some of the bugs linked below might not be accessible to the general public and restricted to specific work groups. We de-restrict fixed security bugs after a grace-period, until the majority of our user population have received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.

Firefox Product Security & Privacy

  • As a follow-up to our last newsletter, Firefox has won a “Speedrunner” Award from the Trend Micro Zero Day Initiative for being consistently fast to patch security vulnerabilities. This is the second consecutive year in which Firefox has been recognized for the speedy delivery of security updates.
  • Protecting against Fingerprinting-based tracking: With Firefox 143, we’ve introduced new defenses against online fingerprinting. Our analysis of the most frequently exploited user data shows that it’s possible to significantly lower the success rate of fingerprinting attacks, without compromising a user’s browsing experience. Specifically, Firefox now standardizes how it reports device attributes such as CPU core count, screen size, and touch input capabilities. By unifying these values across our entire user base, we cut the share of Firefox users who appear unique to fingerprinting scripts from roughly 35% to just 20%.
  • Strict Tracking Protection with web compatibility in mind: When users set Firefox’s tracking protection to strict, we already warn them that stricter blocking may result in missing content or broken websites. As of Firefox 142, we are providing a list of exceptions that may help unbreak popular websites without compromising the protection. The list of exceptions is transparently shared on https://etp-exceptions.mozilla.org/.
  • DoH on Android: We have landed opt-in support for DoH on Android in Firefox 143. The opt-in is available in the Firefox preferences UI, where Firefox for Android users can enable DoH with the Increased or Max Protection settings to prevent network observers from tracking their browsing behaviour.
  • Improved TLS Error Pages: We improved non-overridable TLS error pages to provide more context for end users. Starting in Firefox 140, these pages include more information on why a connection was blocked, highlighting that Firefox is not causing the problem but rather that the website has a security problem and that Firefox is keeping the user safe.
  • SafeBrowsing v5: Firefox Nightly now supports the SafeBrowsing v5 protocol, which protects against threats like phishing or malware sites, in preparation for the upcoming decommissioning of the SafeBrowsing v4 servers.
  • Private Downloads in Private Browsing: When downloading a file in Private Browsing mode, Firefox 143 now asks whether to keep or delete the files after that session ends. You can adjust this behavior in Settings, if desired.
  • Improved Video sharing: As of Firefox 143, the browser permission dialog will now show a preview of the selected Video camera, making it much easier to see and decide what is being shared before providing camera permissions to a page.

Firefox for Enterprise

  • Updated Enterprise Policy for Tracking Protection: The EnableTrackingProtection policy has been updated to allow you to set the category to either strict or standard. When the category is set using this policy, the user cannot change it. The EnableTrackingProtection policy has also been updated to allow you to control the blocking of suspected fingerprinters. For more information, see this SUMO page.
  • Improved Control over SVG, MathML, WebGL, CSP reporting and Fingerprinting Protection: The Preferences policy has been updated to allow setting the preferences mathml.disabled, svg.context-properties.content.enabled, svg.disabled, webgl.disabled, webgl.force-enabled, xpinstall.enabled, and security.csp.reporting.enabled as well as prefs beginning with privacy.baselineFingerprintingProtection or privacy.fingerprintingProtection.

Core Security

  • CRLite on Desktop and Mobile: CRLite is a faster, more reliable and privacy-protecting certificate revocation check mechanism, as compared to the traditional OCSP (Online Certificate Status Protocol). CRLite is available in Desktop versions since Firefox 142 and on Firefox for Android in Firefox 145. Read details on CRLite in the blogpost: CRLite: Fast, private, and comprehensive certificate revocation checking in Firefox.
  • Supporting Certificate Compression in QUIC: Certificate compression reduces the size of certificate chains during a Transport Layer Security (TLS) handshake, which improves performance by lowering latency and bandwidth consumption. The three compression algorithms zlib, brotli, and zstd are available in QUIC starting with Firefox 143.

Web Security & Standards

  • Improved Cache removal: When a website uses the "cache" directive of the Clear-Site-Data response header, Firefox 141 now also clears the back/forward cache (bfcache). This allows a site to ensure that private session details can be removed, even if a user uses the browser back button. (bug 1930501).
  • Easy URL Pattern Matching: The URL Pattern API is fully supported as of Firefox 142, enabling you to match and parse URLs using a standardized pattern syntax. (bug 1731418).

Going Forward

As a Firefox user, you will automatically receive all of the security and privacy improvements mentioned above through Firefox’s automatic updates. If you aren’t a Firefox user yet, you can download Firefox for a fast and safe browsing experience while supporting Mozilla’s mission of a healthy, safe and accessible web for everyone.

Thanks to everyone who helps make Firefox and the open web more secure and privacy-respecting.

See you next time with the Q4 2025 Report!
- Firefox Security and Privacy Teams.

The Rust Programming Language BlogProject goals for 2025H2

On Sep 9, we merged RFC 3849, declaring our goals for the second half of 2025 (2025H2) -- well, the last 3 months, at least, since "yours truly" ran a bit behind getting the goals program organized.

Flagship themes

In prior goals programs, we had a few major flagship goals, but since many of these goals were multi-year programs, it was hard to see what progress had been made. This time we decided to organize things a bit differently. We established four flagship themes, each of which covers a number of more specific goals. These themes cover the goals we expect to be the most impactful and constitute our major focus as a Project for the remainder of the year. The four themes identified in the RFC are as follows:

  • Beyond the &, making it possible to create user-defined smart pointers that are as ergonomic as Rust's built-in references &.
  • Unblocking dormant traits, extending the core capabilities of Rust's trait system to unblock long-desired features for language interop, lending iteration, and more.
  • Flexible, fast(er) compilation, making it faster to build Rust programs and improving support for specialized build scenarios like embedded usage and sanitizers.
  • Higher-level Rust, making higher-level usage patterns in Rust easier.
"Beyond the &"
GoalPoint of contactTeam(s) and Champion(s)
Reborrow traitsAapo Alasuutaricompiler (Oliver Scherer), lang (Tyler Mandry)
Design a language feature to solve Field ProjectionsBenno Lossinlang (Tyler Mandry)
Continue Experimentation with Pin ErgonomicsFrank Kingcompiler (Oliver Scherer), lang (TC)

One of Rust's core value propositions is that it's a "library-based language"—libraries can build abstractions that feel built-in to the language even when they're not. Smart pointer types like Rc and Arc are prime examples, implemented purely in the standard library yet feeling like native language features. However, Rust's built-in reference types (&T and &mut T) have special capabilities that user-defined smart pointers cannot replicate. This creates a "second-class citizen" problem where custom pointer types can't provide the same ergonomic experience as built-in references.

The "Beyond the &" initiative aims to share the special capabilities of &, allowing library authors to create smart pointers that are truly indistinguishable from built-in references in terms of syntax and ergonomics. This will enable more ergonomic smart pointers for use in cross-language interop (e.g., references to objects in other languages like C++ or Python) and for low-level projects like Rust for Linux that use smart pointers to express particular data structures.

"Unblocking dormant traits"
Goal | Point of contact | Team(s) and Champion(s)
Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer)
In-place initialization | Alice Ryhl | lang (Taylor Cramer)
Next-generation trait solver | lcnr | types (lcnr)
Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey)
SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types

Rust's trait system is one of its most powerful features, but it has a number of longstanding limitations that are preventing us from adopting new patterns. The goals in this category unblock a number of new capabilities:

  • Polonius will enable new borrowing patterns, and in particular unblock "lending iterators" (see the sketch just after this list). Over the last few goal periods, we have identified an "alpha" version of Polonius that addresses the most important cases while being relatively simple and optimizable. Our goal for 2025H2 is to implement this algorithm in a form that is ready for stabilization in 2026.
  • The next-generation trait solver is a refactored trait solver that unblocks better support for numerous language features (implied bounds, negative impls, the list goes on) in addition to closing a number of existing bugs and sources of unsoundness. Over the last few goal periods, the trait solver went from being an early prototype to being in production use for coherence checking. The goal for 2025H2 is to prepare it for stabilization.
  • The work on evolving trait hierarchies will make it possible to refactor some parts of an existing trait into a new supertrait so they can be used on their own. This unblocks a number of features where the existing trait is insufficiently general, in particular stabilizing support for custom receiver types, a prior Project goal that wound up blocked on this refactoring. This will also make it safer to provide stable traits in the standard library while preserving the ability to evolve them in the future.
  • The work to expand Rust's Sized hierarchy will permit us to express types that are neither Sized nor ?Sized, such as extern types (which have no size) or the types from Arm's Scalable Vector Extension (which have a size that is known at runtime but not at compilation time). This goal builds on RFC #3729 and RFC #3838, authored in previous Project goal periods.
  • In-place initialization allows creating structs and values that are tied to a particular place in memory. While useful directly for projects doing advanced C interop, it also unblocks expanding dyn Trait to support async fn and -> impl Trait methods, as compiling such methods requires the ability for the callee to return a future whose size is not known to the caller.
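
As a concrete illustration of the borrowing pattern mentioned in the Polonius item above, here is the classic "get or insert" case (a sketch, not code from the Polonius implementation). The commented-out version is sound but rejected by today's borrow checker, because the shared borrow returned on the early-return path is treated as live for the whole function; a Polonius-style analysis accepts it, while today you have to fall back to a workaround such as the double lookup shown.

use std::collections::HashMap;

fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    // What one would like to write (rejected today, accepted by Polonius):
    //
    //     if let Some(value) = map.get(&0) {
    //         return value;
    //     }
    //     map.insert(0, String::from("default"));
    //     map.get(&0).unwrap()
    //
    // The workaround that compiles today: look up twice.
    if !map.contains_key(&0) {
        map.insert(0, String::from("default"));
    }
    map.get(&0).unwrap()
}

fn main() {
    let mut map = HashMap::new();
    println!("{}", get_or_insert(&mut map));
}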
"Flexible, fast(er) compilation"
GoalPoint of contactTeam(s) and Champion(s)
build-stdDavid Woodcargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
Promoting Parallel Front EndSparrow Licompiler
Production-ready cranelift backendFolkert de Vriescompiler, wg-compiler-performance

The "Flexible, fast(er) compilation" initiative focuses on improving Rust's build system to better serve both specialized use cases and everyday development workflows:

"Higher-level Rust"
Goal | Point of contact | Team(s) and Champion(s)
Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett)
Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis)

People generally start using Rust for foundational use cases, where the requirements for performance or reliability make it an obvious choice. But once they get used to it, they often find themselves turning to Rust even for higher-level use cases, like scripting, web services, or even GUI applications. Rust is often "surprisingly tolerable" for these high-level use cases -- except for some specific pain points that, while they impact everyone using Rust, hit these use cases particularly hard. We plan two flagship goals this period in this area:

  • We aim to stabilize cargo script, a feature that allows single-file Rust programs that embed their dependencies (see the sketch after this list), making it much easier to write small utilities, share code examples, and create reproducible bug reports without the overhead of full Cargo projects.
  • We aim to finalize the design of ergonomic ref-counting and to finalize the experimental impl feature so it is ready for beta testing. Ergonomic ref-counting makes it less cumbersome to work with ref-counted types like Rc and Arc, particularly in closures.
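
For a flavor of what cargo script enables (a sketch based on the current nightly implementation; the exact embedded-manifest syntax and invocation may still change before stabilization), a single file can carry its own dependencies:

#!/usr/bin/env cargo
---
[dependencies]
rand = "0.8"
---

// No Cargo.toml, no src/ directory: the manifest lives in the
// frontmatter above. On a recent nightly this runs with:
//     cargo +nightly -Zscript hello.rs
fn main() {
    let n: u8 = rand::random();
    println!("Hello from a single-file Rust program! Random byte: {n}");
}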

What to expect next

For the remainder of 2025 you can expect monthly blog posts covering the major progress on the Project goals.

Looking at the broader picture, we have now done three iterations of the goals program, and we want to judge how it should be run going forward. To start, Nandini Sharma from CMU has been conducting interviews with various Project members to help us see what's working with the goals program and what could be improved. We expect to spend some time discussing what we should do and to be launching the next iteration of the goals program next year. Whatever form that winds up taking, Tomas Sedovic, the Rust program manager hired by the Leadership Council, will join me in running the program.

Appendix: Full list of Project goals.

Read the full slate of Rust Project goals.

The full slate of Project goals is as follows. These goals all have identified points of contact who will drive the work forward as well as a viable work plan.

Invited goals. Some of the goals below are "invited goals", meaning that for that goal to happen we need someone to step up and serve as a point of contact. To find the invited goals, look for the "Help wanted" badge in the table below. Invited goals have reserved capacity for teams and a mentor, so if you are someone looking to help Rust progress, they are a great way to get involved.

Goal | Point of contact | Team(s) and Champion(s)
Develop the capabilities to keep the FLS up to date | Pete LeVasseur | bootstrap (Jakub Beránek), lang (Niko Matsakis), opsem, spec (Pete LeVasseur), types
Getting Rust for Linux into stable Rust: compiler features | Tomas Sedovic | compiler (Wesley Wiser)
Getting Rust for Linux into stable Rust: language features | Tomas Sedovic | lang (Josh Triplett), lang-docs (TC)
Borrow checking in a-mir-formality | Niko Matsakis | types (Niko Matsakis)
Reborrow traits | Aapo Alasuutari | compiler (Oliver Scherer), lang (Tyler Mandry)
build-std | David Wood | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
Prototype Cargo build analysis | Weihang Lo | cargo (Weihang Lo)
Rework Cargo Build Dir Layout | Ross Sullivan | cargo (Weihang Lo)
Prototype a new set of Cargo "plumbing" commands | Help Wanted | cargo
Stabilize cargo-script | Ed Page | cargo (Ed Page), compiler, lang (Josh Triplett), lang-docs (Josh Triplett)
Continue resolving cargo-semver-checks blockers for merging into cargo | Predrag Gruevski | cargo (Ed Page), rustdoc (Alona Enraght-Moony)
Emit Retags in Codegen | Ian McCormack | compiler (Ralf Jung), opsem (Ralf Jung)
Comprehensive niche checks for Rust | Bastian Kersting | compiler (Ben Kimock), opsem (Ben Kimock)
Const Generics | Boxy | lang (Niko Matsakis)
Ergonomic ref-counting: RFC decision and preview | Niko Matsakis | compiler (Santiago Pastorino), lang (Niko Matsakis)
Evolving trait hierarchies | Taylor Cramer | compiler, lang (Taylor Cramer), libs-api, types (Oliver Scherer)
Design a language feature to solve Field Projections | Benno Lossin | lang (Tyler Mandry)
Finish the std::offload module | Manuel Drehwald | compiler (Manuel Drehwald), lang (TC)
Run more tests for GCC backend in the Rust's CI | Guillaume Gomez | compiler (Wesley Wiser), infra (Marco Ieni)
In-place initialization | Alice Ryhl | lang (Taylor Cramer)
C++/Rust Interop Problem Space Mapping | Jon Bauman | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay), opsem
Finish the libtest json output experiment | Ed Page | cargo (Ed Page), libs-api, testing-devex
MIR move elimination | Amanieu d'Antras | compiler, lang (Amanieu d'Antras), opsem, wg-mir-opt
Next-generation trait solver | lcnr | types (lcnr)
Implement Open API Namespace Support | Help Wanted | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)
Promoting Parallel Front End | Sparrow Li | compiler
Continue Experimentation with Pin Ergonomics | Frank King | compiler (Oliver Scherer), lang (TC)
Stabilizable Polonius support on nightly | Rémy Rakic | types (Jack Huey)
Production-ready cranelift backend | Folkert de Vries | compiler, wg-compiler-performance
Stabilize public/private dependencies | Help Wanted | cargo (Ed Page), compiler
Expand the Rust Reference to specify more aspects of the Rust language | Josh Triplett | lang-docs (Josh Triplett), spec (Josh Triplett)
reflection and comptime | Oliver Scherer | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)
Relink don't Rebuild | Jane Lusby | cargo, compiler
Rust Vision Document | Niko Matsakis | leadership-council
rustc-perf improvements | James | compiler, infra
Stabilize rustdoc doc_cfg feature | Guillaume Gomez | rustdoc (Guillaume Gomez)
Add a team charter for rustdoc team | Guillaume Gomez | rustdoc (Guillaume Gomez)
SVE and SME on AArch64 | David Wood | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras), types
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support | Jakob Koschel | bootstrap, compiler, infra, project-exploit-mitigations
Type System Documentation | Boxy | types (Boxy)
Unsafe Fields | Jack Wrenn | compiler (Jack Wrenn), lang (Scott McMurray)

The Mozilla BlogBetter search suggestions in Firefox

We’re working on a new feature to display direct results in your address bar as you type, so that you can skip the results page and get to the right site or answer faster.

Every major browser today supports a feature known as “search suggestions.” As you type in the address bar, your chosen search engine offers real-time suggestions for searches you might want to perform.

A Firefox browser window with a gray gradient background. The Google search bar shows “mozilla.” Google suggestions below include “mozilla firefox,” “mozilla thunderbird,” “mozilla careers,” “mozilla vpn,” and “mozilla foundation.”

This is a helpful feature, but these suggestions always take you to a search engine results page, not necessarily the information or website you’re ultimately looking for. This is ideal for the search provider, but not always best for the user.

For example, flight status summaries on a search results page are convenient, but it would be more convenient to show that information directly in the address bar:

A Firefox browser window with an orange gradient background. The Google search bar shows “ac 8170.” The result displays an Air Canada flight from Victoria (YYJ) to Vancouver (YVR), showing departure and arrival times and that it’s “In flight” or “On time.”

Similarly, people commonly search for a website when they don’t know or remember the exact URL. Why not skip the search?

A Firefox browser window with a green gradient background. The Google search bar shows “mdn.” Below, the top result is “Mozilla Developer Network — Your blueprint for a better internet,” with Google suggestions like “mdn web docs,” “mdn array,” and “mdn fetch.”

Another common use case is searching for recommendations, where Firefox can show highly relevant results from sources around the web:

A Firefox browser window with a gradient pink-to-purple background. The Google search bar shows the query “bike repair boston.” Below it, Google suggestions and a featured result for “Ballantine Bike Shop” appear, showing address, rating, and hours.

The truth is, browser address bars today are largely a conduit to your search engine. And while search engines are very useful, a single and centralized source for finding everything online is not how we want the web to work. Firefox is proudly independent, and our address bar should be too.

We experimented with the concept several years ago, but didn’t ship it1 because we have an extremely high standard for privacy and weren’t satisfied with any design that would send your raw queries directly to us. Even though these are already sent to your search engine, Firefox is built on the principle that even Mozilla should not be able to learn what you do online. Unlike most search engines, we don’t want to know who’s searching for what, and we want to enable anyone in the world to verify that we couldn’t know even if we tried.

We now have the technical architecture to meet that bar. When Firefox requests suggestions, it encrypts your query using a new protocol we helped design called Oblivious HTTP. The encrypted request goes to a relay operated by Fastly, which can see your IP address but not the text. Mozilla can see the text, but not who it came from. We can then return a result directly or fetch one from a specialized search service. No single party can connect what you type to who you are.

A simple black-and-white diagram with three rounded rectangles labeled “Firefox,” “Relay (Operated by Fastly),” and “Mozilla.” Double arrows connect them, showing a two-way flow between Firefox ↔ Relay ↔ Mozilla.

Firefox will continue to show traditional search suggestions for all queries and add direct results only when we have high confidence they match your intent. As with search engines, some of these results may be sponsored to support Firefox, but only if they’re highly relevant, and neither we nor the sponsor will know who they’re for. We expect this to be useful to users and, hopefully, help level the playing field by allowing Mozilla to work directly with independent sites rather than mediating all web discovery through the search engine.

Running this at scale is not trivial. We need the capacity to handle the volume, and servers close to people, to avoid introducing noticeable latency. To keep things smooth, we are starting in the United States and will evaluate expanding into other geographies as we learn from this experience and observe how the system performs. The feature is still in development and testing and will roll out gradually over the coming year.2


1 We did ship an experimental version that users could enable in settings, as well as a small set of locally-matched suggestions in some regions. Unfortunately, the former had too little reach to be worth building features for, and the latter had very poor relevance and utility due to technical limitations (most notably, the size of the local database).

2 Where the feature is available, you can disable it by unchecking “Retrieve suggestions as you type” in the “Search” pane in Firefox settings. If this box is not yet available in your version of Firefox, you can pre-emptively disable it by setting browser.urlbar.quicksuggest.online.enabled to false in about:config.

Take control of your internet

Download Firefox

The post Better search suggestions in Firefox appeared first on The Mozilla Blog.

Firefox NightlyExtensions UI Improvements and More – These Weeks in Firefox: Issue 191

Highlights

  • As part of improvements to the extensions panel, an empty state UI has been introduced to help users to understand why their installed extensions may not be listed in the panel (e.g. when opening a private browsing window or enabling permanent private browsing mode).
The Firefox Extensions UI panel encouraging users to find more extensions.

Empty state shown when no extensions are currently installed.

The Firefox Extensions panel UI explaining why no extensions are displayed in private browsing mode.

Empty state shown when extensions are already installed but not allowed to access private browsing tabs.

A Firefox extension popup during the installation process with a checkbox enabled for the option "Allow extension to run in private windows"

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Khalid AlHaddad
  • Kyler Riggs [:kylr]
  • Michael van Straten [:michael]
  • Pier Angelo Vendrame

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

WebExtension APIs
  • Thanks to the enhancement contributed by Jim Gong, starting from Firefox 146 the browsingData.remove API will also allow extensions to clear the sessionStorage WebAPI data – Bug 1886894
  • Valentin Gosu introduced masque proxy support to the WebExtensions proxy API in Firefox 145 – Bug 1988988
  • Investigated and fixed a crash triggered by storing deeply nested JSON data in the storage.sync WebExtensions API backend (a regression introduced in Firefox 135 as a side effect of changes to the storage.sync backend in Bug 1888472). The fix landed in Firefox 145 and has been uplifted to Firefox 144 Beta, Firefox 143.0.3 release and Firefox ESR 140.0.3 – Bug 1989840
  • Landed a new Glean probe to assess the real-world impact of the storage.local API IndexedDB corruption issues in the underlying SQLite data store (investigated as part of Bug 1979997 and Bug 1885297)
    • NOTE: A new hidden boolean about:config pref, extensions.webextensions.keepStorageOnCorrupted.storageLocal, controls the new behavior that automatically resets the storage.local IndexedDB database when the Bug 1979997 corruption issue is detected, and prevents browser.storage.local.clear API calls from failing when the Bug 1885297 corrupted key is hit.
    • NOTE: We intend to keep the auto-reset behaviors disabled by default for a few more Nightly cycles so we can review the new telemetry before enabling them on all channels (follow-up tracked by Bug 1992973).

DevTools

Lint, Docs and Workflow

Search and Navigation

  • Address Bar
    • Drew enabled the Important Dates feature in Germany, France and Italy for English locales. Bug 1992811
    • Dale made the new redesigned Identity panel show the expected icon for local files. Bug 1989844
    • Dharma landed new search onboarding strings to be used in Nimbus experiments. Bug 1982132
  • Places
  • Search
    • Pier Angelo Vendrame fixed origin attribute use for OpenSearch and engine icons. Bug 1987600, Bug 1993166
    • Florian optimized searchconfig xpcshell tests to use a lot less CPU time.

Mike TaylorA new, new logo for the W3C

In an effort to pivot this site into a full on graphic design side business after 2 blog posts about logos in a row (hit me up exclusively on FB to request a consultation), I thought I would reveal my new, new logo for the W3C.

It turns out they recently launched a new one, but some folks don’t love it. As an artist, it’s not my job to critique other art, but instead to offer my own compelling vision for the web.

a shitty drawing of a w, the word three spelled out, and followed by a period and the letter c

I shouldn’t have to explain why I went with the classic dark blue and asparagus colors—that much is obvious. And of course, turning c into a file extension as a reminder that NCSA Mosaic was written in C (I didn’t go with WorldWideWeb because that was written in Objective C and .m kinda messes it all up).

Mozilla Localization (L10N)Localizer spotlight: Bogo

About you

My name is Bogomil but people call me Bogo, and I am a translator for the Bulgarian locale. I think I got involved with the Mozilla project back in 2005 when I wrote a small search add-on/script. I became more active around 2008-2009 and have stayed involved, with just a few gaps, until this day.

I am European. I was born in Bulgaria, but I have been living for a long time in the Czech Republic. Bulgarian is my main language, but sometimes I contribute to localization projects in Turkish, Romanian, Macedonian and Czech.

Q&A

Q: What inspired you to join the Mozilla localization community?

A: As I mentioned here I decided to start localizing software because I knew some people had trouble using it in other languages. I believe everyone deserves the right to use software in a language they understand which helps them to get the maximum value out of it. As for Mozilla in particular I believe in the mission and this is the most efficient way for me to contribute.

Q: How do you solve challenges like bugs or workflow hiccups, especially when collaborating virtually?

A: Since we are a small team for the Bulgarian localizations we are almost always in sync on how to translate the strings. We are following some basic rules, such as using a common dictionary and instructions on how to localize software in Bulgarian (shared across multiple FOSS projects), set 15+ years ago and that are still relevant. When we have a conflict, I usually count on the team managers to share their wisdom, because they have a bit more knowledge than the rest of us.

Q: Which projects or new product features were you most excited about this year, and why?

A: In the last year I contributed mainly to the Thunderbird project. The items that are most exciting to me are:

  • That finally we decided to remove the word “Junk” and replace it with “Spam”, I think this is self-explanatory 🙂
  • The new Account Hub which improves significantly the consumer’s experience and their onboarding into the beautiful world of the free email. Free as in Freedom.
  • I am also excited about all the things in the roadmap to come.

Q: What tips, tools, or habits help you succeed as a localizer?

A: If you look at my Pontoon profile, you will see that for the last 2 months I contributed every day. I find this habit very useful for me, because it keeps me focused on my goal for consistently improving the localized experience.

Another item is that I like to provide a better experience to the mobile users. I often test and fix labels in Thunderbird for Android which, even translated correctly, are too long for a mobile phone UI.

And lastly, I love to engage with the community and ask them for help when we finish a section or a product. Last year we asked the Bulgarian community to help us validate a localization available in the beta version and we got some very helpful feedback.

Something fun

Q: Could you share a few fun or unexpected facts about yourself that people might not know?

  • I ran for the European Parliament in 2009 with the intention to fight for our digital rights.
  • I was in almost every media outlet in the world in 2012 when I bought the data of millions of users for $5! This is the Forbes article.
  • I am a heavy metal fan and you can find me in underground clubs, enjoying bands you have never heard of.
  • Apart from technology I am an artist – I produced and performed my own theater play and shot a movie in Prague.
  • I realized my dream to have an opening talk at FOSDEM. I was opening the Sunday session… but still!

Mozilla ThunderbirdYour Workflow, Supercharged

Extensions make Thunderbird truly yours, moving at your pace and reflecting your priorities. Thunderbird’s flexibility means you can tailor the app to how you actually work. We’ll cover tools for efficiency, consistency, and visibility so every send is faster and better informed; your future self will thank you.

Clippings

We’ve all been there, retyping the same line for the hundredth time and wondering if there’s a better way. Clippings lets you save text once and reuse it anywhere you compose in Thunderbird. You can organize by folders, apply color labels, and search by name with autocomplete, so the right text is always a couple of keystrokes away.

When you paste a clipping, you can include fill‑in prompts for names, dates, or custom notes, and even keep simple HTML formatting and images when needed. It’s like a spellbook for your inbox–summon, swap, send. 

Below is a quick glance at how Clippings can help you: 

  • Save and paste reusable snippets anywhere you write—no more repeat typing.
  • Include prompts for names, dates, or custom notes; HTML and inline images.
  • Organize with folders and labels; find snippets fast with autocomplete.
  • Paste instantly with keyboard shortcuts; import, export, or sync your library.
Link to Thunderbird Add-on library.




With the content process streamlined, now for a sign‑off that keeps your tone on track.

Signature Switch

We rotate hats as we write: buttoned‑up for clients, warm for teammates, and careful punctuation for legal. Signature Switch helps you with that. Keep multiple signatures, and swap them in with a click or shortcut right from the composer. Turn a signature off entirely, pick from your saved set, or append a different one without retyping a thing.

Use plain text for simplicity, or HTML with images and links for a more professional finish. Because everything is accessible while you write, choosing the right signature doesn’t break your flow—and it helps keep branding and tone consistent across messages. One click and your signature goes from handshake to high‑five.

Below is a quick glance at how Signature Switch can help you: 

  • Switch signatures on/off or choose from your saved set, no retyping.
  • Match by recipient, account, or context; keep tone aligned.
  • Use plain text or polished HTML with images and links.
  • Access quickly from the composer toolbar or menu while you write.
Link to Thunderbird Add-on library.




With the sign‑off sorted, now let’s measure the results.

ThirdStats

Looking for a way to interpret email trends on more than just vibes alone? ThirdStats turns your mailbox into clear, local analytics that reveal how your email workload actually behaves: when volume spikes, which hours are busiest, how response times trend, and which folders see the most activity. Interactive charts make patterns easy to spot at a glance.

You can compare accounts side by side, adjust date ranges to see changes over time, and focus on a specific folder for deeper context. All processing happens on your device with read‑only access, so your data isn’t transmitted elsewhere. It’s a simple, private way to understand your workload and time your effort better. 

Below is a quick glance at how ThirdStats can help you: 

  • Visualize volume, peak hours, response times, and folder activity with interactive charts.
  • Compare accounts side by side; filter by date ranges; view by folder.
  • Keep it private: analysis runs locally with read‑only access, no external transmission.
Link to Thunderbird Add-on library.




Do you have a favorite extension? Share it with us in the comments below.

To learn more about add-ons check out Maximize Your Day: Extend Your Productivity with Add-ons.

Your workflow deserves a client that adapts to it. Add what accelerates you, trim the rest, and keep improving. When you’re ready to go further, the Thunderbird Add-ons Catalog is the fastest path to new features. Check what’s popular, discover up‑and‑coming tools, and install directly from the page with built‑in version compatibility checks. Thanks for reading.

The post Your Workflow, Supercharged appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: experimental mode, Trusted Types, strokeText(), and more!

September was another busy month for Servo, with a bunch of new features landing in our nightly builds:

servoshell nightly showing new support for the strokeText() method on CanvasRenderingContext2D

servoshell now has a new experimental mode button (☢). Turning on experimental mode has the same effect as running Servo with --enable-experimental-web-platform-features: it enables all engine features, even those that may not be stable or complete. This works much like Chromium’s option with the same name, and it can be useful when a page is not functioning correctly, since it may allow the page to make further progress.

servoshell nightly showing the new experimental mode button (☢), which enables experimental web platform features <figcaption>Top to bottom: experimental mode off, experimental mode on.</figcaption>

Viewport meta tags are now enabled on mobile devices only, fixing a bug where they were enabled on desktop (@shubhamg13, #39133). You can still enable them if needed with --pref viewport_meta_enabled (@shubhamg13, #39207).

Servo now supports Content-Encoding: zstd (@webbeef, #36530), and we’ve fixed a bug causing spurious credentials prompts when an HTTP 401 has no ‘WWW-Authenticate’ header (@simonwuelker, #39215). We’ve also made a bunch of progress on AbortController (@TimvdLippe, #39290, #39295, #39374, #39406) and <link rel=preload> (@TimvdLippe, @jdm, #39033, #39034, #39052, #39146, #39167).

‘Content-Security-Policy: sandbox’ now disables scripting unless ‘allow-scripts’ is given (@TimvdLippe, #39163), and crypto.subtle.exportKey() can now export HMAC keys in raw format (@arihant2math, #39059).

The scrollIntoView() method on Element now works with shadow DOM (@mrobinson, @Loirooriol, #39144), and recurses to parent iframes if they are same origin (@Loirooriol, @mrobinson, #39475, #39397, #39153).

Several types of DOM exceptions can now have error messages (@arihant2math, @rodio, @excitablesnowball, #39056, #39394, #39535), and we’ve also fixed a bug where links often need to be clicked twice (@yezhizhen, #39326), and fixed bugs affecting <img> attribute changes (@tharkum, #39483), the ‘:defined’ selector (@mukilan, #39325, #39390), invertSelf() on DOMMatrix (@lumiscosity, #39113), and the ‘href’ setter on Location (@arihant2math, @sagudev, #39051).

One complex part of Servo isn’t even written in Rust, it’s written in Python! codegen.py, which describes how to generate Rust code with bindings for every known DOM interface from the WebIDL, is now fully type annotated (@jerensl, @mukilan, #39070, #38998).

Embedding and automation

Servo now requires Rust 1.86 to build (@sagudev, #39185).

Keyboard scrolling is now automatically implemented by Servo (@delan, @mrobinson, #39371, #39469), so embedders no longer need to translate arrow keys, Home, End, Page Up, and Page Down to WebView API calls. This change also improves the behaviour of those keys, scrolling the element or <iframe> that was focused or most recently clicked (or a nearby ancestor).

DebugOptions::convert_mouse_to_touch (-Z convert-mouse-to-touch) has been removed (@mrobinson, #39352), with no replacement. Touch event simulation continues to be available in servoshell as --simulate-touch-events.

DebugOptions::webrender_stats (-Z wr-stats in servoshell) has been removed (@mrobinson, #39331); instead call toggle_webrender_debugging(Profiler) on a WebView (or press Ctrl+F12 in servoshell).

DebugOptions::trace_layout (-Z trace-layout) has been removed (@mrobinson, #39332), since it had no effect.

We’ve improved the docs for WebViewDelegate::notify_history_changed (@Narfinger, @mrobinson, @yezhizhen, #39134).

When automating servoshell with WebDriver, commands targeting elements now correctly scroll into view if needed (@PotatoCP, @yezhizhen, #38508, #39265), allowing Element Click, Element Send Keys, Element Clear, and Take Element Screenshot to work properly when the element is outside the viewport.

WebDriver mouse inputs now work correctly with HiDPI scaling on more platforms (@mrobinson, #39472), and we’ve improved the reliability of Take Screenshot, Take Element Screenshot (@yezhizhen, #39499, #39539, #39543), Switch To Frame (@yezhizhen, #39086), Switch To Window (@yezhizhen, #39241), and New Session (@yezhizhen, #39040).

These improvements have enabled us to run the WebDriver conformance tests in CI by default (@PotatoCP, #39087), and also mean we’re closer than ever to running WebDriver-based Web Platform Tests.

servoshell

Favicons now update correctly when you navigate back and forward (@webbeef, #39575), not just when you load a new page.

servoshell’s command line argument parsing has been reworked (@Narfinger, #37194, #39316), which should fix the confusing behaviour of some options.

On mobile devices, servoshell now resizes the webview correctly when the available space changes (@blueguy1, @yjx, @yezhizhen, #39507).

On macOS, telling servoshell to take a screenshot no longer hides the window (@mrobinson, #39500). This does not affect taking a screenshot in headless mode (--headless), where there continues to be no window at all.

Performance

Servo currently runs in single-process mode unless you opt in to --multiprocess mode, and we’ve landed a few perf improvements in that default mode. For one, in single-process mode, script can now communicate with the embedder directly for reduced latency (@jschwe, #39039). We also create one thread pool for the image cache now, rather than one pool per origin (@rodio, #38783).

Many components of Servo that would be separated by a process boundary in multiprocess mode, now use crossbeam channels in single-process mode, rather than using IPC channels in both modes (@jschwe, #39073, #39076, #39345, #39347, #39348, #39074). IPC channels are required when communicating with another process, but they’re more expensive, because they require serialising and deserialising each message, plus resources from the operating system.
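
The general shape of that change can be sketched like this (hypothetical names, with std’s mpsc standing in for crossbeam so the sketch is dependency-free; Servo’s real abstraction is more involved): pick the cheap in-process channel when both endpoints share a process, and only pay for serialisation when a process boundary is actually involved.

use std::sync::mpsc;

enum EventSender<T> {
    // Cheap in-process channel: no serialisation of messages.
    InProcess(mpsc::Sender<T>),
    // In a multiprocess build this variant would wrap an IPC sender
    // (e.g. from the ipc-channel crate), which serialises each message.
    // Ipc(ipc_channel::ipc::IpcSender<T>),
}

impl<T> EventSender<T> {
    fn send(&self, msg: T) {
        match self {
            EventSender::InProcess(tx) => {
                let _ = tx.send(msg);
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let sender = EventSender::InProcess(tx);
    sender.send("hello from the same process");
    println!("{}", rx.recv().unwrap());
}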

We’ve started working on an optimisation for string handling in Servo’s DOM layer (@Narfinger, #39480, #39481, #39504). Strings in our DOM have historically been represented as ordinary Rust strings, but they often come from SpiderMonkey, where they use a variety of representations, none of which are entirely compatible. SpiderMonkey strings would continue to need conversion to Servo strings, but the idea we’re working towards is to make the conversion lazy, in the hope that many strings will never end up being converted at all.

We now use a faster hash algorithm for internal hashmaps that are not security-critical (@Narfinger, #39106, #39166, #39202, #39233, #39244, #39168). These changes also switch that faster algorithm from FNV to an even simpler polynomial hash, following in the footsteps of Rust and Stylo.
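
For illustration, a polynomial hash in that spirit is only a few lines (a sketch, not the actual hasher used by Servo, rustc, or Stylo):

use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct PolyHasher {
    state: u64,
}

impl Hasher for PolyHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Fold each byte in with a multiply-and-add: cheap, and good
        // enough for hashmaps that are not security-critical.
        for &b in bytes {
            self.state = self
                .state
                .wrapping_mul(0x0000_0100_0000_01B3)
                .wrapping_add(b as u64);
        }
    }

    fn finish(&self) -> u64 {
        self.state
    }
}

type FastMap<K, V> = HashMap<K, V, BuildHasherDefault<PolyHasher>>;

fn main() {
    let mut map: FastMap<&str, u32> = FastMap::default();
    map.insert("nodes", 42);
    println!("{:?}", map.get("nodes"));
}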

We’ve also landed a few more self-contained perf improvements:

Donations

Thanks again for your generous support! We are now receiving 5654 USD/month (+1.8% over August) in recurring donations.

This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster pull request checks and ten-minute WPT builds.

Servo is also on thanks.dev, and already 28 GitHub users (±13 from August) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

Conference talks

MiniApps Design and Servo (starting at ~2:37:00; slides) — Gregory Terzian (@gterzian) spoke about how Servo can be used as a web engine for mini-app platforms at WebEvolve 2025

独⽴的,轻量级,模块化与并⾏处理架构的Web引擎开发 [zh] / Developing an independent, light-weight, modular and parallel web-engine [en] (starting at ~5:49:00; slides) — Jonathan Schwender (@jschwe) spoke about Servo’s goals and status at WebEvolve 2025

Servo: A new web engine written in Rust* (slides; transcript) — Manuel Rego (@rego) spoke about the Servo project at GOSIM Hangzhou 2025

Driving Innovation with Servo and OpenHarmony: Unified Rendering and WebDriver* (slides) — Jingshi Shangguan & Zhizhen Ye (@yezhizhen) spoke about a new OpenHarmony rendering backend and WebDriver support in Servo at GOSIM Hangzhou 2025

The Joy and Value of Embedded Servo Systems* (slides) — Gregory Terzian (@gterzian) spoke about embedding Servo at GOSIM Hangzhou 2025

A Dive Into the Servo Layout System* (slides) — Martin Robinson (@mrobinson) & Oriol Brufau (@obrufau) spoke about the architecture of Servo’s parallel and incremental layout system at GOSIM Hangzhou 2025

* video coming soon; go to our About page for updates

Mozilla Addons BlogAnnouncing data collection consent changes for new Firefox extensions

As of November 3rd 2025, all new Firefox extensions will be required to specify if they collect or transmit personal data in their manifest.json file using the browser_specific_settings.gecko.data_collection_permissions key. This will apply to new extensions only, and not to new versions of existing extensions. Extensions that do not collect or transmit any personal data are required to declare this by setting the “none” required data collection permission in this property.

This information will then be displayed to the user when they start to install the extension, alongside any permissions it requests.

Screenshot of example Firefox extension installation prompt showing data that the extension collects Screenshot of example Firefox extension installation prompt showing that the extension claims it collects no data

This information will also be displayed on the addons.mozilla.org page, if it is publicly listed, and in the Permissions and Data section of the Firefox about:addons page for that extension. If an extension supports versions of Firefox prior to 140 for Desktop, or 142 for Android, then the developer will need to continue to provide the user with a clear way to control the add-on’s data collection and transmission immediately after installation of the add-on.

Once any extension starts using these data_collection_permissions keys in a new version, it will need to continue using them for all subsequent versions. Extensions that are required to use this property but do not have it set correctly will be prevented from being submitted to addons.mozilla.org for signing, with a message explaining why.

In the first half of 2026, Mozilla will require all extensions to adopt this framework. But don’t worry, we’ll give plenty of notice via the add-ons blog. We’re also developing some new features to ease this transition for both extension developers and users, which we will announce here.

The post Announcing data collection consent changes for new Firefox extensions appeared first on Mozilla Add-ons Community Blog.

Niko MatsakisExplicit capture clauses

In my previous post about Ergonomic Ref Counting, I talked about how, whatever else we do, we need a way to have explicit handle creation that is ergonomic. The next few posts are going to explore a few options for how we might do that.

This post focuses on explicit capture clauses, which would permit closures to be annotated with an explicit set of captured places. My take is that explicit capture clauses are a no-brainer, for reasons that I’ll cover below, and we should definitely do them; but they may not be enough to be considered ergonomic, so I’ll explore more proposals afterwards.

Motivation

Rust closures today work quite well but I see a few problems:

  • Teaching and understanding closure desugaring is difficult because it lacks an explicit form. Users have to learn to desugar in their heads to understand what’s going on.
  • Capturing the “clone” of a value (or possibly other transformations) has no concise syntax.
  • For long closure bodies, it is hard to determine precisely which values are captured and how; you have to search the closure body for references to external variables, account for shadowing, etc.
  • It is hard to develop an intuition for when move is required. I find myself adding it when the compiler tells me to, but that’s annoying.

Let’s look at a strawperson proposal

Some time ago, I wrote a proposal for explicit capture clauses. I actually see a lot of flaws with this proposal, but I’m still going to explain it: right now it’s the only solid proposal I know of, and it’s good enough to explain how an explicit capture clause could be seen as a solution to the “explicit and ergonomic” goal. I’ll then cover some of the things I like about the proposal and what I don’t.

Begin with move

The proposal begins by extending the move keyword with a list of places to capture:

let closure = move(a.b.c, x.y) || {
    do_something(a.b.c.d, x.y)
};

The closure will then take ownership of those two places; references to those places in the closure body will be replaced by accesses to these captured fields. So that example would desugar to something like

let closure = {
    struct MyClosure {
        a_b_c: Foo,
        x_y: Bar,
    }

    impl FnOnce<()> for MyClosure {
        fn call_once(self) -> Baz {
            do_something(self.a_b_c.d, self.x_y)
            //           ----------    --------
            //   The place `a.b.c` is      |
            //   rewritten to the field    |
            //   `self.a_b_c`              |
            //                  Same here but for `x.y`
        }
    }

    MyClosure {
        a_b_c: self.a.b.c,
        x_y: self.x.y,
    }
};

When using a simple list like this, attempts to reference other places that were not captured result in an error:

let closure = move(a.b.c, x.y) || {
    do_something(a.b.c.d, x.z)
    //           -------  ---
    //           OK       Error: `x.z` not captured
};

Capturing with rewrites

It is also possible to capture a custom expression by using an = sign. So for example, you could rewrite the above closure as follows:

let closure = move(
    a.b.c = a.b.c.clone(),
    x.y,
) || {
    do_something(a.b.c.d, x.y)
};

and it would desugar to:

let closure = {
    struct MyClosure { /* as before */ }
    impl FnOnce<()> for MyClosure { /* as before */ }

    MyClosure {
        a_b_c: self.a.b.c.clone(),
        //     ------------------
        x_y: self.x.y,
    }
};

When using this form, the expression assigned to a.b.c must have the same type as a.b.c in the surrounding scope. So this would be an error:

let closure = move(
    a.b.c = 22, // Error: `i32` is not `Foo`
    x.y,
) || {
    /* ... */
};

Shorthands and capturing by reference

You can understand move(a.b) as sugar for move(a.b = a.b). We support other convenient shorthands too, such as

move(a.b.clone()) || {...}
// == anything that ends in a method call becomes ==>
move(a.b = a.b.clone()) || {...}

and two kinda special shorthands:

move(&a.b) || { ... }
move(&mut a.b) || { ... }

These are special because the captured value is indeed &a.b and &mut a.b – but that by itself wouldn’t work, because the type doesn’t match. So we rewrite each access to a.b to desugar to a dereference of the a_b field, like *self.a_b:

move(&a.b) || { foo(a.b) }

// desugars to

struct MyStruct<'l> {
    a_b: &'l Foo
}

impl FnOnce for MyStruct<'_> {
    fn call_once(self) {
        foo(*self.a_b)
        //  ---------
        //  we insert the `*` too
    }
}

MyStruct {
    a_b: &a.b,
}

That is, the body behaves as if you had written: move(&a.b) || { foo(*a.b) }

There’s a lot of precedence for this sort of transform: it’s precisely what we do for the Deref trait and for existing closure captures.

Fresh variables

We should also allow you to define fresh variables. These can have arbitrary types. The values are evaluated at closure creation time and stored in the closure metadata:

move(
    data = load_data(),
    y,
) || {
    take(&data, y)
}

Open-ended captures

All of our examples so far fully enumerated the captured variables. But Rust closures today infer the set of captures (and the style of capture) based on the paths that are used. We should permit that as well. I’d permit that with a .. sugar, so these two closures are equivalent:

let c2 = move || /* closure */;
//       ---- capture anything that is used,
//            taking ownership

let c1 = move(..) || /* closure */;
//           ---- capture anything else that is used,
//                taking ownership

Of course you can combine:

let c = move(x.y.clone(), ..) || {

};

And you could write ref to get the equivalent of || closures:

let c2 = || /* closure */;
//       -- capture anything that is used,
//          using references if possible
let c1 = move(ref) || /* closure */;
//            --- capture anything else that is used,
//                using references if possible

This lets you mix the different capture forms:

let c = move(
    a.b.clone(), 
    c,
    ref
) || {
    combine(&a.b, &c, &z)
    //       ---   -   -
    //        |    |   |
    //        |    | This will be captured by reference
    //        |    | since it is used by reference
    //        |    | and is not explicitly named.
    //        |    |
    //        |   This will be captured by value
    //        |   since it is explicitly named.
    //        |
    // We will capture a clone of this because
    // the user wrote `a.b.clone()`
}

Frequently asked questions

How does this help with our motivation?

Let’s look at the motivations I named:

Teaching and understanding closure desugaring is difficult

There’s a lot of syntax there, but it also gives you an explicit form that you can use to do explanations. To see what I mean, consider the difference between these two closures (playground).

The first closure uses ||:

fn main() {
    let mut i = 3;
    let mut c_attached = || {
        let j = i + 1;
        std::mem::replace(&mut i, j)
    };
    ...
}

While the second closure uses move:

fn main() {
    let mut i = 3;
    let mut c_detached = move || {
        let j = i + 1;
        std::mem::replace(&mut i, j)
    };

These are in fact pretty different, as you can see in this playground. But why? Well, the first closure desugars to capture a reference:

let mut i = 3;
let mut c_attached = move(&i) || {...};

and the second captures by value:

let mut i = 3;
let mut c_detached = move(i) || {...};

Before, to explain that, I had to resort to desugaring to structs.

Capturing a clone is painful

If you have a closure that wants to capture the clone of something today, you have to introduce a fresh variable. So something like this:

let closure = move || {
    begin_actor(data, self.tx.clone())
};

becomes

let closure = {
    let self_tx = self.tx.clone();
    move || {
        begin_actor(data, self_tx.clone())
    }
};

This is awkward. Under this proposal, it’s possible to point-wise replace specific items:

let closure = move(self.tx.clone(), ..) || {
    begin_actor(data, self.tx.clone())
};

For long closure bodies, it is hard to determine precisely which values are captured and how

Quick! What variables does this closure use from the environment?

.flat_map(move |(severity, lints)| {
    parse_tt_as_comma_sep_paths(lints, edition)
    .into_iter()
    .flat_map(move |lints| {
        // Rejoin the idents with `::`, so we have no spaces in between.
        lints.into_iter().map(move |lint| {
            (
                lint.segments().filter_map(
                    |segment| segment.name_ref()
                ).join("::").into(),
                severity,
            )
        })
    })
})

No idea? Me either. What about this one?

.flat_map(move(edition) |(severity, lints)| {
    /* same as above */
})

Ah, pretty clear! I find that once a closure moves beyond a couple of lines, it can make a function kind of hard to read, because it’s hard to tell what variables it may be accessing. I’ve had functions where it’s important for correctness, for one reason or another, that a particular closure only accesses a subset of the values around it, but I have no way to indicate that right now. Sometimes I make separate functions, but it’d be nicer if I could annotate the closure’s captures explicitly.

It is hard to develop an intuition for when move is required

Hmm, actually, I don’t think this notation helps with that at all! More about this below.

Let me cover some of the questions you may have about this design.

Why allow the “capture clause” to specify an entire place, like a.b.c?

Today you can write closures that capture places, like self.context below:

let closure = move || {
    send_data(self.context, self.other_field)
};

My goal was to be able to take such a closure and to add annotations that change how particular places are captured, without having to do deep rewrites in the body:

let closure = move(self.context.clone(), ..) || {
    //            --------------------------
    //            the only change
    send_data(self.context, self.other_field)
};

This definitely adds some complexity, because it means we have to be able to “remap” a place like a.b.c that has multiple parts. But it makes the explicit capture syntax far more powerful and convenient.

Why do you keep the type the same for places like a.b.c?

I want to ensure that the type of a.b.c is the same wherever it is type-checked; this simplifies the compiler somewhat and generally makes it easier to move code into and out of a closure.

Why the move keyword?

Because it’s there? To be honest, I don’t like the choice of move because it’s so operational. I think if I could go back, I would try to refashion our closures around two concepts:

  • Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They’d always have a lifetime even if they don’t capture anything.
  • Detached closures (what we now call move ||) would capture by-value, like move today.

I think this would help to build up the intuition of “use detach || if you are going to return the closure from the current stack frame and use || otherwise”.

What would a max-min explicit capture proposal look like?

A maximally minimal explicit capture clause proposal would probably just let you name specific variables and not “subplaces”:

move(
    a_b_c = a.b.c,
    x_y = &x.y
) || {
    *x_y + a_b_c
}

I think you can see though that this makes introducing an explicit form a lot less pleasant to use and hence isn’t really going to do anything to support ergonomic RC.

Conclusion: Explicit closure clauses make things better, but not great

I think doing explicit capture clauses is a good idea – I generally think we should have explicit syntax for everything in Rust, for teaching and explanatory purposes if nothing else; I didn’t always think this way, but it’s something I’ve come to appreciate over time.

I’m not sold on this specific proposal – but I think working through it is useful, because it (a) gives you an idea of what the benefits would be and (b) gives you an idea of how much hidden complexity there is.

I think the proposal shows that adding explicit capture clauses goes some way towards making things explicit and ergonomic. Writing move(a.b.c.clone()) is definitely better than having to create a new binding.

But for me, it’s not really nice enough. It’s still quite a mental distraction to have to find the start of the closure and insert the a.b.c.clone() call, and it makes the closure header very long and unwieldy. Particularly for short closures, the overhead is very high.

This is why I’d like to look into other options. Nonetheless, it’s useful to have discussed a proposal for an explicit form: if nothing else, it’ll be useful to explain the precise semantics of other proposals later on.

This Week In RustThis Week in Rust 622

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is extend_mut, a library to safely extend the lifetime of an exclusive reference under some constraints.

Thanks to Oleksandr Babak for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Cargo

  • Tracking Issue for cargo-script RFC 3424
  • Testing Steps

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

369 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Fairly busy week, with lots of mixed results. However, overall we ended with a slight improvement on average.

Triage done by @simulacrum. Revision range: 956f47c3..4068bafe

2 Regressions, 5 Improvements, 10 Mixed; 5 of them in rollups

39 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Compiler Team (MCPs only)
Leadership Council

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-10-22 - 2025-11-19 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

There used to be recurring questions about mod vs use in the user forum, until I've added a note to the error message [...] and I think it largely solved the problem

Kornel on rust-internals

Thanks to Noratrieb for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Niko MatsakisMove, Destruct, Forget, and Rust

This post presents a proposal to extend Rust to support a number of different kinds of destructors. This means we could support async drop, but also prevent “forgetting” (leaking) values, enabling async scoped tasks that run in parallel à la rayon/libstd. We’d also be able to have types whose “destructors” require arguments. This proposal – an evolution of “must move” that I’ll call “controlled destruction” – is, I think, needed for Rust to live up to its goal of giving safe versions of critical patterns in systems programming. As such, it is needed to complete the “async dream”, in which async Rust and sync Rust work roughly the same.

Nothing this good comes for free. The big catch of the proposal is that it introduces more “core splits” into Rust’s types. I believe these splits are well motivated and reasonable – they reflect inherent complexity, in other words, but they are something we’ll want to think carefully about nonetheless.

Summary

The TL;DR of the proposal is that we should:

  • Introduce a new “default trait bound” Forget and an associated trait hierarchy:
    • trait Forget: Destruct, representing values that can be forgotten
    • trait Destruct: Move, representing values with a destructor
    • trait Move: Pointee, representing values that can be moved
    • trait Pointee, the base trait that represents any value
  • Use the “opt-in to weaker defaults” scheme proposed for sizedness by RFC #3729 (Hierarchy of Sized Traits)
    • So fn foo<T>(t: T) defaults to “a T that can be forgotten/destructed/moved”
    • And fn foo<T: Destruct>(t: T) means “a T that can be destructed, but not necessarily forgotten”
    • And fn foo<T: Move>(t: T) means “a T that can be moved, but not necessarily destructed or forgotten”
    • …and so forth.
  • Integrate and enforce the new traits:
    • The bound on std::mem::forget will already require Forget, so that’s good.
    • Borrow check can enforce that any dropped value must implement Destruct; in fact, we already do this to enforce const Destruct bounds in const fn.
    • Borrow check can be extended to require a Move bound on any moved value.
  • Adjust the trait bound on closures (luckily this works out fairly nicely)

Motivation

In a talk I gave some years back at Rust LATAM in Uruguay1, I said this:

  • It’s easy to expose a high-performance API.
  • But it’s hard to help users control it – and this is what Rust’s type system does.
Person casting a firespell and burning themselves

Rust currently does a pretty good job of preventing parts of your program from interfering with one another, but we don’t do as good a job when it comes to guaranteeing that cleanup happens2. We have destructors, of course, but they have two critical limitations:

  • All destructors must meet the same signature, fn drop(&mut self), which isn’t always adequate.
  • There is no way to guarantee a destructor once you give up ownership of a value.

Making it concrete.

That motivation was fairly abstract, so let me give some concrete examples of things that tie back to this limitation:

  • The ability to have async or const drop, both of which require a distinct drop signature.
  • The ability to have a “drop” operation that takes arguments, such as e.g. a message that must be sent, or a result code that must be provided before the program terminates.
  • The ability to have async scopes that can access the stack, which requires a way to guarantee that a parallel thread will be joined even in an async context.
  • The ability to integrate at maximum efficiency with WebAssembly async tasks, which require guaranteed cleanup.3

The goal of this post is to outline an approach that could solve all of the above problems and which is backwards compatible with Rust today.

The “capabilities” of value disposal

The core problem is that Rust today assumes that every Sized value can be moved, dropped, and forgotten:

// Without knowing anything about `T` apart
// from the fact that it's `Sized`, we can...
fn demonstration<T>(a: T, b: T, c: T) {
    // ...drop `a`, running its destructor immediately.
    std::mem::drop(a);

    // ...forget `b`, skipping its destructor
    std::mem::forget(b);

    // ...move `c` into `x`
    let x = c;
} // ...and then have `x` get dropped automatically,
// as we exit the block.

Destructors are like “opt-out methods”

The way I see it, most methods are “opt-in” – they don’t execute unless you call them. But destructors are different. They are effectively a method that runs by default – unless you opt out, e.g., by calling forget. But the ability to opt out means that they don’t fundamentally add any power over regular methods; they just make for a more ergonomic API.
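
As a minimal illustration in today’s Rust, the destructor below runs “by default” when the value is dropped, but any caller can opt out with std::mem::forget:

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("cleanup ran");
    }
}

fn main() {
    let a = Noisy;
    drop(a); // opt-out not taken: prints "cleanup ran"

    let b = Noisy;
    std::mem::forget(b); // opt-out taken: the destructor never runs
}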

The implication is that the only way in Rust today to guarantee that a destructor will run is to retain ownership of the value. This can be important to unsafe code – APIs that permit scoped threads, for example, need to guarantee that those parallel threads will be joined before the function returns. The only way they have to do that is to use a closure which gives &-borrowed access to a scope:

scope(|s| ...)
//     -  --- ...which ensures that this
//     |      fn body cannot "forget" it.
//     |  
// This value has type `&Scope`... 

Because the API never gives up ownership of the scope, it can ensure that it is never “forgotten” and thus that its destructor runs.
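
For instance, std::thread::scope in today’s standard library follows exactly this shape: the user’s closure only ever sees a &Scope, so the scope itself can never be forgotten, and every spawned thread is joined before scope returns:

fn main() {
    let data = vec![1, 2, 3];

    std::thread::scope(|s| {
        // `s` is a `&Scope`; we never own the scope, so we cannot forget it.
        s.spawn(|| {
            // It is safe to borrow `data` from the enclosing stack frame...
            println!("sum = {}", data.iter().sum::<i32>());
        });
    }); // ...because every spawned thread is joined here, before `data` is freed.
}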

The scoped thread approach works for sync code, but it doesn’t work for async code. The problem is that async functions return a future, which is a value. Users can therefore decide to “forget” this value, just like any other value, and thus the destructor may never run.

Guaranteed cleanup is common in systems programming

When you start poking around, you find that guaranteed destructors turn up quite a bit in systems programming. Scoped APIs in futures are one example, but DMA (direct memory access) is another. Many embedded devices have a mode where you begin a DMA transfer that causes data to be written into memory asynchronously. But you need to ensure that this DMA is terminated before that memory is freed. If that memory is on your stack, that means you need a destructor that will either cancel the DMA or block until it finishes.4

So what can we do about it?

This situation is very analogous to the challenge of revisiting the default Sized bound, and I think the same basic approach that I outlined in my earlier post on the Sized hierarchy will work.

The core of the idea is simple: have a “special” set of traits arranged in a hierarchy:

trait Forget: Destruct {} // Can be "forgotten"
trait Destruct: Move {}   // Can be "destructed" (dropped)
trait Move: Pointee {}    // Can be "moved"
trait Pointee {}          // Can be referenced by pointer

By default, generic parameters get a Forget bound, so fn foo<T>() is equivalent to fn foo<T: Forget>(). But if the parameter opts in to a weaker bound, then the default is suppressed, so fn bar<T: Destruct>() means that T is assumed to be “destructible” but not forgettable. And fn baz<T: Move>() indicates that T can only be moved.

Impact of these bounds

Let me explain briefly how these bounds would work.

The default can forget, drop, move etc

Given a default type T, or one that writes Forget explicitly, the function can do anything that is possible today:

fn just_forget<T: Forget>(a: T, b: T, c: T) {
    //         --------- this bound is the default
    std::mem::drop(a);   // OK
    std::mem::forget(b); // OK
    let x = c;           // OK
}

The forget function requires T: Forget

The std::mem::forget function would require T: Forget as well:

pub fn forget<T: Forget>(value: T) { /* magic intrinsic */ }

This means that if you have only Destruct, the function can only drop or move, it can’t “forget”:

fn just_destruct<T: Destruct>(a: T, b: T, c: T) {
    //           -----------
    // This function only requests "Destruct" capability.

    std::mem::drop(a);   // OK
    std::mem::forget(b); // ERROR: `T: Forget` required
    let x = c;           // OK
}

The borrow checker would require “dropped” values implement Destruct

We would modify the drop function to require only T: Destruct:

fn drop<T: Destruct>(t: T) {}

We would also extend the borrow checker so that when it sees a value being dropped (i.e., because it went out of scope), it would require the Destruct bound.

That means that if you have a value whose type is only Move, you cannot “drop” it:

fn just_move<T: Move>(a: T, b: T, c: T) {
    //       -------
    // This function only requests "Move" capability.

    std::mem::drop(a);   // ERROR: `T: Destruct` required
    std::mem::forget(b); // ERROR: `T: Forget` required
    let x = c;           // OK
}                        // ERROR: `x` is dropped here, but `T: Destruct` is required

This means that if you have only a Move bound, you must move anything you own if you want to return from the function. For example:

fn return_ok<T: Move>(a: T) -> T {
    a // OK
}

If you have a function that does not move, you’ll get an error:

fn return_err<T: Move>(a: T) -> T {
} // ERROR: `a` does not implement `Destruct`

It’s worth pointing out that this will be annoying as all get out in the face of panics:

fn return_err<T: Move>(a: T) -> T {
    // ERROR: If a panic occurs, `a` would be dropped, but `T` does not implement `Destruct`
    forbid_env_var();

    a
} 

fn forbid_env_var() {
    if std::env::var("BAD").is_ok() {
        panic!("Uh oh: BAD cannot be set");
    }
}

I’m ok with this, but it is going to put pressure on better ways to rule out panics statically.

Const (and later async) variants of Destruct

In fact, we are already doing something much like this destruct check for const functions. Right now if you have a const fn and you try to drop a value, you get an error:

const fn test<T>(t: T) {
} // ERROR!

Compiling that gives you the error:

error[E0493]: destructor of `T` cannot be evaluated at compile-time
 --> src/lib.rs:1:18
  |
1 | const fn test<T>(t: T) { }
  |                  ^       - value is dropped here
  |                  |
  |                  the destructor for this type cannot be evaluated in constant functions

This check is not presently taking place in borrow check but it could be.

The borrow checker would require “moved” values implement Move

The final part of the check would be requiring that “moved” values implement Move:

fn return_err<T: Pointee>(a: T) -> T {
    a // ERROR: `a` does not implement `Move`
}

You might think that having types that are !Move would replace the need for pin, but this is not the case. A pinned value is one that can never move again, whereas a value that is not Move can never be moved in the first place – at least once it is stored into a place.

I’m not sure if this part of the proposal makes sense; we could start by just having all types be Move, Destruct, or (the default) Forget.

Opting out from forget etc

The other part of the proposal is that you should be able to explicitly “opt out” from being forgettable, e.g. by doing

struct MyType {}
impl Destruct for MyType {}

Doing this will limit the generics that can accept your type, of course.
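
For example (a sketch using the hypothetical opt-out above, so none of this compiles today), a guard type that opts down to Destruct could still be handed to generics that only ask for Destruct, but not to ones that rely on the default Forget bound:

struct Guard {}

// Hypothetical opt-out from this proposal: `Guard` can be dropped but not forgotten.
impl Destruct for Guard {}

fn takes_default<T>(t: T) {            // defaults to `T: Forget`
    std::mem::forget(t);
}

fn takes_destruct<T: Destruct>(t: T) { // only needs to drop `t`
    drop(t);
}

fn main() {
    takes_destruct(Guard {}); // OK: `Guard: Destruct`
    takes_default(Guard {});  // ERROR under the proposal: `Guard: Forget` is not satisfied
}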

Associated type bounds

The tough part with these “default bound” proposals is always associated type bounds. For backwards compatibility, we’d have to default to Forget, but a lot of associated types that exist in the wild today shouldn’t really require Forget. For example, a trait like Add should really just require Move for its return type:

trait Add<Rhs = Self> {
    type Output /* : Move */;
}

I am basically not too worried about this. It’s possible that we can weaken these bounds over time or through editions. Or, perhaps, add in some kind of edition-specific “alias” like

trait Add2025<Rhs = Self> {
    type Output: Move;
}

where Add2025 is implemented for everything that implements Add.

I am not sure exactly how to manage it, but we’ll figure it out – and in the meantime, most of the types that should not be forgettable are really just “guard” types that don’t have to flow through quite so many places.

Associated type bounds in closures

The one place where I think it is really important that we weaken the associated type bounds is with closures – and, fortunately, that’s a place where we can get away with it due to the way our “closure trait bound” syntax works. I feel like I wrote a post on this before, but I can’t find it now; the short version is that, today, when you write F: Fn(), that means that the closure must return (). If you write F: Fn() -> T, then this type T must have been declared somewhere else, and so T will (independently from the associated type of the Fn trait) get a default Forget bound. So since the Fn associated type is not independently nameable in stable Rust, we can change its bounds, and code like this would continue to work unchanged:

fn foo<T, F>()
where
    F: Fn() -> T,
    //         - `T: Forget` still holds by default
{}

Frequently asked questions

How does this relate to the recent thread on internals?

Recently I was pointed at this internals thread for a “substructural type system” which likely has very similar capabilities. To be totally honest, though, I haven’t had time to read and digest it yet! I had this blog post like 95% done though so I figured I’d post it first and then go try and compare.

What would it mean for a struct to opt out of Move (e.g., by being only Pointee)?

So, the system as I described would allow for ‘unmoveable’ types (i.e., a struct that opts out from everything and only permits Pointee), but such a struct would only really be something you could store in a static memory location. You couldn’t put it on the stack because the stack must eventually get popped. And you couldn’t move it from place to place because, well, it’s immobile.

This seems like something that could be useful – e.g., to model “video RAM” or something that lives in a specific location in memory and cannot live anywhere else – but it’s not a widespread need.

How would you handle destructors with arguments?

I imagine something like this:

struct Transaction {
    data: Vec<u8>
}

/// Opt out from destruct
impl Move for Transaction { }

impl Transaction {
    // This is effectively a "destructor"
    pub fn complete(
        self, 
        connection: Connection,
    ) {
        let Transaction { data } = self;
    }
}

With this setup, any function that owns a Transaction must eventually invoke transaction.complete(). This is because no values of this type can be dropped, so they must be moved.
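
A short sketch of what that looks like for a caller (reusing the hypothetical types above): the only way to dispose of the transaction is to move it into complete.

fn run(tx: Transaction, conn: Connection) {
    // Letting `tx` fall out of scope here would be an error under the proposal,
    // since `Transaction: Destruct` does not hold. Moving it into `complete`
    // is what satisfies the borrow checker:
    tx.complete(conn);
}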

How does this relate to async drop?

This setup attacks a key problem that has blocked async drop in my mind, which is that types that are “async drop” do not have to implement “sync drop”. This gives the type system the ability to prevent them from being dropped in sync code, and it would mean that they can only be dropped via async drop. But there’s still lots of design work to be done there.

Why is the trait Destruct and not Drop?

This comes from the const generics work. I don’t love it. But there is a logic to it. Right now, when you drop a struct or other value, that actually does a whole sequence of things, only one of which is running any Drop impl – it also (for example) drops all the fields in the struct recursively, etc. The idea is that “destruct” refers to this whole sequence.
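
Here’s a small example of that sequence in today’s Rust: dropping Outer runs its own Drop impl first and then recursively drops its fields, and “destruct” names the whole sequence, not just the first step.

struct Inner;
impl Drop for Inner {
    fn drop(&mut self) { println!("dropping Inner"); }
}

struct Outer { inner: Inner }
impl Drop for Outer {
    fn drop(&mut self) { println!("dropping Outer"); }
}

fn main() {
    let value = Outer { inner: Inner };
    drop(value);
    // Prints "dropping Outer", then "dropping Inner".
}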

How hard would this be to prototype?

I…don’t actually think it would be very hard. I’ve thought somewhat about it and all of the changes seem pretty straightforward. I would be keen to support a lang-team experiment on this.

Does this mean we should have had leak?

The whole topic of destructors and leaks and so forth dates back to approximately Rust 1.0, when we discovered that, in fact, our abstraction for threads was unsound when combined with cyclic ref-counted boxes. Before that we hadn’t fully internalized that destructors are “opt-out methods”. You can read this blog post I wrote at the time. At the time, the primary idea was to have some kind of ?Leak bounds and it was tied to the idea of references (so that all 'static data was assumed to be “leakable”, and hence something you could put into an Rc). I… mostly think we made the right call at the time. I think it’s good that most of the ecosystem is interoperable and that Rc doesn’t require static bounds, and certainly I think it’s good that we moved to 1.0 with minimal disruption. In any case, though, I rather prefer this design to the ones that were under discussion at the time, in part because it also addresses the need for different kinds of destructors and for destructors with many arguments and so forth, which wasn’t something we thought about then.

Isn’t it confusing to have these “magic” traits that “opt out” from default bounds?

I think that specifying the bounds you want is inherently better than today’s ? design, both because it’s easier to understand and because it allows us to backwards compatibly add traits in between in ways that are not possible with the ? design.

However, I do see that having T: Move mean that T: Destruct does not hold is subtle. I wonder if we should adopt some kind of sigil or convention on these traits, like T: @Move or something. I don’t know! Something to consider.


  1. That was a great conference. Also, interestingly, this is one of my favorite of all my talks, but for some reason, I rarely reuse this material. I should change that. ↩︎

  2. Academics distinguish “safety” from “liveness properties”, where safety means “bad things don’t happen” and “liveness” means “good things eventually happen”. Another way of saying this is that Rust’s type system helps with a lot of safety properties but struggles with liveness properties. ↩︎

  3. Uh, citation needed. I know this is true but I can’t find the relevant WebAssembly issue where it is discussed. Help, internet! ↩︎

  4. Really the DMA problem is the same as scoped threads. If you think about it, the embedded device writing to memory is basically the same as a parallel thread writing to memory. ↩︎

Mozilla Addons BlogDeveloper Spotlight: Fox Recap

The Fox Recap team (pictured left to right): Taimur Hasan, Mozilla community manager Matt Cool, Kate Sawtell, Diego Valdez (not pictured: Peter Mitchell).

“What if we did a Spotify Wrapped for your browser?” wondered a group of Cal State Monterey Bay computer science students. That was the initial spark of an idea that became Fox Recap — a Firefox extension that leverages machine learning to give Firefox users fascinating insights into their browsing habits, like peak usage hours, types of websites commonly visited (news, entertainment, shopping, etc.), navigation patterns, and more.

Taimur Hasan was one of four CSMB students who built Fox Recap as part of a Mozilla-supported Capstone project. We spoke with Taimur about his experience building an AI-centered extension from scratch.

What makes Fox Recap an “AI” project?

Taimur Hasan: Fox Recap uses Machine Learning behind the scenes to classify sites and generate higher level insights, like top/trending categories and transition patterns. I kept the “AI” messaging light on the listing page to avoid hype and focus on the experience. Ideally the AI features feel seamless and natural rather than front and center.

What was your most challenging development hurdle?  

TH: For me, the most challenging part of development was creating the inference pipeline, which means the part where you actually use the AI model to do something useful. It took careful optimization to run well on a typical laptop as load times were a priority.

What is your perception of young emergent developers like yourself and their regard for privacy on the web?

TH: With data collection on the rise, privacy and security matter more than ever. Among dedicated and enthusiastic young developers, privacy will always be in mind.

How do you see AI and browser extensions interrelating in the coming years? Do you have a sense of mutual direction?

TH: I expect wider use of small, task specific models that quietly improve the user experience in most browser extensions. For mutual direction in the browser and add-on space I can see the use of AI in manipulating the DOM being done pretty heavily in the future.

Any advice for other extension developers curious about AI integration?  

TH: Be clear about the use case and model choice before investing in training or fine tuning. Start simple, validate the value, then add complexity only if it clearly improves the experience.

To learn even more about Fox Recap’s development process, please see Fox Recap: A student-built tool that analyzes your browsing habits.

The post Developer Spotlight: Fox Recap appeared first on Mozilla Add-ons Community Blog.

Mozilla Privacy BlogBehind the Manifesto: Standing up for encryption to keep the internet safe

Welcome to the first blog of the series “Behind the Manifesto,” where we unpack core issues that are critical to Mozilla’s mission. The Mozilla Manifesto represents Mozilla’s commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology. 

At Mozilla, we’ve long said the internet is one of the world’s most important public resources, something that only thrives when guided by core principles. One of those principles is that individual security and privacy online are fundamental.

Encryption is the technology that makes secure and private online interactions possible. It protects our messages, our data, and our privacy, sitting in the center of security and trust on the internet. Given its critical role in online privacy, it can be a focal point for policymakers.

The truth is, encryption is integral to digital trust and safety. Strong encryption keeps us safe while weak encryption puts our personal, financial, and health data at risk. 

In recent years, we’ve seen governments around the world test ways to undermine encryption to access private conversations and data, often framing it as critical to combating crime. From proposals in the EU that could allow law enforcement to read messages before they are encrypted, to the UK Government’s pushback on Apple’s rollout of iCloud end-to-end encryption, or U.S. legislation that would require platforms to provide access to encrypted data, the pressure to weaken encryption is growing globally.

Governments and law enforcement agencies face complex and legitimate challenges in protecting the public from serious crime and emerging online threats. Their work is critical to ensuring safety in an increasingly digital world. But weakening encryption is not the solution. Strong encryption is what keeps everyone safe — it protects citizens, officials, and infrastructure alike. It is the foundation that safeguards people from intrusive surveillance and shields their most sensitive data from those who would exploit it for harm. We must work together to find solutions that both uphold public safety and prevent the erosion of the privacy and security that strong encryption provides.

With encryption increasingly under threat, this year’s Global Encryption Day (October 21) is the perfect moment to reflect on why strong encryption matters for every internet user.

At Mozilla, we believe encryption isn’t a luxury or privilege. It is a necessity for protecting data against unauthorized access. Our commitment to end-to-end encryption is strong because it is essential to protecting people and ensuring the internet remains open and secure.

That’s why Mozilla has taken action for years to protect and advance encryption. In 2023, we joined the Global Encryption Coalition Steering Committee, working with partners around the world to promote encryption and push back on proposals for backdoor access.

In the U.S., we’ve advanced encryption in our 2025 U.S. policy priorities, joined amicus briefs, and raised concerns with bills like the U.S. EARN IT Act. In the EU, we ran a multi-year campaign on the eIDAS Regulation, working alongside civil society, academics, and industry experts to address concerns that Article 45 threatened to undermine the encryption and authentication technologies used on the web. With such a massive risk to web security, Mozilla, with allies, took action, releasing detailed position papers and joint statements. All of our efforts have been to safeguard encryption, privacy, and digital rights. Why? Because the bottom line is simple: backdoor policies diminish the trust that allows the web to be a safe and reliable public resource.

Mozilla’s strong commitment to protecting privacy isn’t just a policy priority; it’s the foundation of our products and initiatives. Below, we’d like to share some of the ways in which Mozilla partnered with allies to make encryption a reality and a core function of the open internet ecosystem.

  • Mozilla is among the co-founders of Let’s Encrypt, a nonprofit Certificate Authority run by the Internet Security Research Group (ISRG), alongside partners like the EFF and the University of Michigan. This project made HTTPS certificates free and automatically renewable, transforming HTTPS from a costly, complex setup into a default expectation across the web. As a result, the share of encrypted traffic skyrocketed from less than 40% in 2016 to around 80% by 2019.
  • Mozilla closely collaborated with Cloudflare to roll-out Encrypted Client Hello (ECH) in Firefox in 2023, which encrypts the first “Hello” message of a user’s TLS connection so that even the website name is hidden from network observers.
  • Mozilla has most recently set a new standard for certificate revocation on the web, advancing encryption and security. In April 2025, Firefox became the first (and is still the only) browser that has deployed CRLite, the technology invented by a group of researchers that ensures revoked HTTPS certificates are identified quickly and privately without leaking unencrypted browsing activity to third parties.
  • In 2024, Firefox became the first browser to support DTLS 1.3 providing the most robust end-to-end encryption of real-time audio and video data, including all your web conferencing.

It’s easy to say we care about encryption, but it only works if the commitment is shared by the policymakers writing our laws and the engineers designing our systems.

As Neha Kochar, Director of Firefox Security and Privacy puts it: “Whether you’re visiting your bank’s website or sending a family photo, Firefox works behind the scenes to keep your browsing secure. With no shareholders to answer to, we serve only you — open-source and transparent by design, with verifiable guarantees that not even Mozilla knows which websites you visit or what you do online.”

That is why Global Encryption Day is such an important moment. If a system is weakened or broken, it opens vulnerabilities that anyone with the right tools can exploit. By standing up for encryption and the policies that protect it, we help ensure the internet remains safe, open, and fair for everyone.

To dig deeper on encryption, check out these partner resources: Global Encryption Coalition, Internet Society and Global Partners Digital.

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges and Sema Karaman on LinkedIn for further insights into Mozilla’s policy priorities.

The post Behind the Manifesto: Standing up for encryption to keep the internet safe appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdThunderbird Monthly Release 144 Recap

We’re back with our Monthly Release recap! Thunderbird 144.0 readies the way for Exchange Web Services support, makes reordering your folders easier, and adds a new UI for TLS certificate handling. Additionally, we’ve fixed a dark mode toggle bug for High Contrast Mode users.

A quick reminder – these updates are for users on our monthly Thunderbird Release channel. For our users still on the ESR (Extended Support Release) channel, these updates won’t land until July 2026. For more information on the differences between the channels and how to make the switch:

Now let’s dive into what’s new in 144.0!

New Features:

Support for Exchange Web Service (EWS) Email in Account Hub

As part of our preparation for EWS support officially landing next month in Thunderbird 145.0, you’ll notice EWS accounts as an option in the Account Hub. We will have more detailed blog posts and support articles available next month describing what is and isn’t supported. 

Benefits:

  • This gives us a chance to gradually ready the app and users for our newest protocol

Improve UX of folder reordering

Bug 1957486

We recently introduced drag-and-drop folder reordering, and in 144, we’re making it better. A new widget shows where your folder is going. We also fixed an issue that prevented the visual from showing, removed the jitters when positioning between folders, and improved consistency by using the same indicator used to reorder tabs and attachments.

Benefits:

  • More control over drag-and-drop performance, with fewer folders going to the wrong location
  • More visual consistency

New UI for TLS Certificate Handling

While power users might be comfortable handling TLS certificates, average Thunderbird users might not know what to do when Thunderbird doesn’t trust a server’s certificate. This UI makes these issues, when they occur, harder to ignore and easier to diagnose and fix, even for less tech-savvy users. 

The new UI will mark the server red, with a clickable icon that takes users to an updated server settings page. There, users can view certificates and add or remove certificate override exceptions.




Benefits:

  • Increased security for the average Thunderbird user
  • Easier access, via the server settings, to certificate actions

Bug Fixes:

Dark mode won’t go away for messages in High Contrast Mode

Bug 1976900

Our High Contrast Mode users noticed the new toggle for messages in dark mode wasn’t appearing. Starting in Thunderbird 144, dark message mode will not be activated when in High Contrast mode, respecting the colors and priority of that accessibility setting.

Benefits:

  • Consistent use of system colors when in High Contrast Mode
  • More respectful of user settings

You can find a complete list of updates and bug fixes that went into Thunderbird 144.0 in our Release Notes.

Thank you for using Thunderbird and for supporting our mission to bring a truly independent, open‑source email experience. Your feedback and enthusiasm drive every improvement we make — and we can’t wait to share more with you in the next release.

The post Thunderbird Monthly Release 144 Recap appeared first on The Thunderbird Blog.

The Servo BlogServo 0.0.1 Release

Today, the Servo team has released new versions of the servoshell binaries for all our supported platforms, tagged v0.0.1. These binaries are essentially the same nightly builds that were already available from the download page, with additional manual testing; we are now tagging them explicitly as releases for future reference.

We plan to publish such a tagged release every month. For now, we are adopting a simple release process where we will use a recent nightly build and perform additional manual testing to identify issues and regressions before tagging and publishing the binaries.

There are currently no plans to publish these releases on crates.io or platform-specific app stores. The goal is just to publish tagged releases on GitHub.

Mozilla ThunderbirdThunderbird Monthly Development Digest: September 2025

Hello again from the Thunderbird development team! This month’s sprints have been about focus and follow-through, as we’ve tightened up our new Account Hub experience and continued the deep work on Exchange Web Services (EWS) support. While those two areas have taken centre stage, we’ve also been busy adapting to a wave of upstream platform changes that demanded careful attention to keep everything stable and our continuous integration systems happy. Alongside this, our developers have been lending extra support to the Services team to ensure a smooth path for upcoming releases. It’s been a month of steady, detail-oriented progress – the kind that doesn’t always make headlines, but lays the groundwork for the next leaps forward.

Exchange Web Services support announcement for 145

While support for Microsoft Exchange via EWS landed in Thunderbird 144, the new “Account Hub” setup interface had a few outstanding priorities which required our attention. Considering that the announcement of EWS support will likely generate a large spike in secondary account additions, we felt it important enough to delay the announcement in order to polish the setup interface and make the experience better for the users taking advantage of the new features. The team working on the “back end” took the opportunity to deliver more features that had been in our backlog and address some bugs that were reported by users who are already using EWS on Beta and Daily:

  • Offline message policy
  • Soft delete / copy to Trash
  • Empty Trash
  • Notifications with message preview
  • Reply-to multiple recipients bug
  • Mark Folder as read
  • Experimental tenant-specific configuration options (behind a preference) now being tested with early adopters

Looking ahead, the team is already focused on our work week, where we’ll have a chance to put plans in place to tackle some architectural refactoring and the next major milestones in our EWS implementation for Calendar and Address Book.

We were also delighted to work with a community contributor who has been hard at work on adding support for the FindItem operation. We know some of our workflows are tricky so we very much appreciate the support and patience required!

Keep track of feature delivery here. 

Account Hub

We’ve now added the ability to manually edit any configuration from the first screen. This effectively bypasses the simpler detection methods which don’t work for every configuration. Upon detection failure, a user is now able to switch between protocols and choose EWS configuration.

Other notable items being rolled into 145 are:

  • Redirect warning and handling to prevent a hang for platforms using autodiscover on a 3rd party server
  • Authentication step added for Exchange discovery requiring credentials
  • Ability to cancel the account configuration detection process
  • Improvements to the experience for users with screen reading technology

The creation of address books through the Account Hub is now the default experience in 145, which is coming to Beta release users this week and monthly Release users before I write next.

Follow progress in the Meta Bug

Calendar UI Rebuild

With the front end team mainly focused on Account Hub, the Calendar UI project has moved slowly this past month. We’ve concentrated the continued work in the following areas:

  • Acceptance widget
  • Title and close button
  • Dialog repositioning on resize
  • Migrating calendar strings from legacy .dtd files into modern .ftl files and preserving translations to avoid repeat work for our translation community.

Maintenance, Upstream adaptations, Recent Features and Fixes

With our focused maintenance sprint over, the team kept the Fluent Migration and moz-src migration projects moving in the background. They also handled another surge of upstream changes requiring triage. In addition to these items, the development community has helped us deliver a variety of improvements over the past month:

If you would like to see new features as they land, and help us find some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: September 2025 appeared first on The Thunderbird Blog.

Firefox Add-on ReviewsReddit revolutionized — use a browser extension to enhance your favorite forum

Reddit is awash with great conversation (well, not all the time). There’s a Reddit forum for just about everybody — sports fans, gamers, poets inspired by food, people who like arms on birds — you get the idea. 

If you spend time on Reddit, there are ways to augment your experience with a browser extension… 

Reddit Enhancement Suite

Used by millions of Redditors across various browsers, Reddit Enhancement Suite is optimized to work with the beloved “old Reddit”. 

Key features: 

  • Subreddit manager. Customize the top nav bar with your own subreddit shortcuts. 
  • Account switcher. Easily manage multiple Reddit accounts with a couple quick clicks. 
  • Show “parent” comment on hover. When you mouse over a comment, its “parent” comment displays. 
  • Dashboard. Fully customizable dashboard showcases content from subreddits, your message inbox & more. 
  • Tag specific users and subreddits so their activity appears more prominently. 
  • Custom filters. Select words, subreddits, or even certain users you want filtered out of your scrolling experience. 
  • New comment count. See the number of new comments on a thread since your last visit. 
  • Neverending Reddit. Just keep scrolling. Never stop!

Old Reddit Redirect

Speaking of the former design, Old Reddit Redirect provides a straightforward function. It simply ensures that every Reddit page you visit will redirect to the old.reddit.com domain. 

Sure, if you have a Reddit account the site gives you the option of using the old design, but with the browser extension you’ll get the old site regardless of being logged in or not. It’s also great for when you click Reddit links shared from the new domain. 

Sink It for Reddit

Designed to “make Reddit’s web version actually usable,” Sink It for Reddit is built for people craving a minimalist discussion platform.

Color coded comments are much simpler to navigate, especially with Sink It’s brilliant Adaptive Dark Mode feature. Give this privacy respecting extension a try if you desire a laser focused Reddit experience.

Reddit Comment Collapser

No more getting lost in confusing comment threads for users of old.reddit.com. Reddit Comment Collapser cleans up your commentary view with a simple mouse click.

Compatible with Reddit Enhancement Suite and Old Reddit Redirect, this single-use extension is beloved by many seeking a minimalist view of the classic Reddit.

Reddit on YouTube

Bring Reddit with you to YouTube. Whenever you’re on a YouTube page, Reddit on YouTube searches for Reddit posts that link to the video and embeds those comments into the YouTube comment area. 

You can easily toggle between Reddit and YouTube comments and select either one to be your default preference. 

<figcaption class="wp-element-caption">If there are multiple Reddit threads about the video you’re watching, the extension will display them in tab form in the YouTube comment section. </figcaption>

Reddit Ad Remover

Sick of seeing so many “Promoted” posts and paid advertisements in the feed and sidebar? Reddit Ad Remover silences the noise. 

The extension even blocks auto-play video ads, which is great for people who don’t appreciate sudden bursts of commercial sound. Hey, somebody should create a subreddit about this.

Happy redditing, folks. Feel free to explore more news and media extensions on addons.mozilla.org.

Firefox Add-on ReviewsBoost your writing skills with a browser extension

Whatever kind of writing you do — technical documentation, corporate communications, Harry Potter-vampire crossover fan fiction — it probably happens online. Here are some fabulous browser extensions that will benefit anyone who writes on the web. Get grammar help, productivity tools, and other strong writing aids… 

LanguageTool

It’s like having your own copy editor with you wherever you write on the web. Language Tool – Grammar and Spell Checker will make you a better writer in 25+ languages. 

More than just a spell checker, LanguageTool also…

  • Recognizes common misuses of similar sounding words (e.g. there/their or your/you’re)
  • Works on social media sites and email
  • Offers alternate phrasing and style suggestions for brevity and clarity

Dictionary Anywhere

Need a quick word definition? With Dictionary Anywhere just double-click any word you find on the web and get an instant pop-up definition. 

You can even save and download words and their definitions for later offline reference. 

<figcaption class="wp-element-caption">Dictionary Anywhere — no more navigating away from a page just to get a word check.</figcaption>

Yomitan

Think of Yomitan as a dictionary extension that doubles as a language learning tool. Decipher and define text in 20+ languages.

As you navigate foreign language websites, Yomitan is right there with you to not only help define unfamiliar words and phrases, but also provide audio pronunciation guidance, flashcard creation for future study, offline support and more — all within a privacy protective framework.

Power Thesaurus

Every writer occasionally struggles to find the perfect word. Bring Power Thesaurus with you wherever you write on the web to gain instant access to alternative phrasing.

Simply highlight any word and pop up a handy thesaurus (also includes word definitions and antonyms). Power Thesaurus is a priceless tool for writers who labor over every word.

Dark Background and Light Text

Give your eyes a break. Dark Background and Light Text makes staring at blinking words all day a whole lot easier on your lookers. 

Really simple to use out of the box. Once installed, the extension’s default settings automatically flip the colors of every web page you visit. But if you’d like more granular control of color settings, just click the extension’s toolbar button to access a pop-up menu that lets you customize color schemes, set page exceptions for sites where you don’t want colors inverted, and more. 

<figcaption class="wp-element-caption">Dark Background and Light Text goes easy on the eyes.</figcaption>

Clippings

If your online writing requires the repeated use of certain phrases (for example, work email templates or customer support responses), Clippings can be a huge time saver. 

Key features…

  • Create a practically limitless library of saved phrases
  • Paste your clippings anywhere via context menu
  • Organize batches of clippings with folders and color coded labels
  • Shortcut keys for power users
  • Extension supported in English, Dutch, French, German, and Portuguese (Brazil)
<figcaption class="wp-element-caption">Clippings handles bulk cutting/pasting. </figcaption>

We hope these extensions take your prose to the next level. Some writers may also be interested in this collection of great productivity extensions to help organize your writing projects. Feel free to explore thousands of other useful extensions on addons.mozilla.org

Firefox Add-on ReviewsExtension starter pack

You’ve probably heard about “ad blockers,” “tab managers,” “anti-trackers” or any number of browser customization tools commonly known as extensions. And maybe you’re intrigued to try one, but you’ve never installed an extension before and the whole notion just seems a bit vague. 

Let’s demystify extensions. 

An extension is simply an app that runs on a browser like Firefox. From serious productivity and privacy enhancing features to fun stuff like changing the way the web looks and feels, extensions give you the power to completely personalize your browsing experience. 

Addons.mozilla.org (AMO) is a discovery site that hosts thousands of independently developed Firefox extensions. It’s a vast and eclectic ecosystem of features, so we’ve hand-picked a small collection of great extensions to get you started…

I’ve always wanted an ad blocker!

uBlock Origin

Works beautifully “right out of the box.” Just add it to Firefox and uBlock Origin will automatically start blocking all types of advertising — display ads, banners, video pre-rolls, pop-ups — you name it. 

Of course, if you prefer deeper content blocking customization, uBlock Origin allows for fine control as well, like the ability to import your own custom block filters or access a data display that shows how much of a web page was blocked by the extension. More than just an ad blocker, uBlock Origin also effectively thwarts some websites that may be infected with malware. 

For more insights about this excellent ad blocker, please see uBlock Origin — everything you need to know about the ad blocker, or to explore even more ad blocker options, check out What’s the best ad blocker for you?

I’m concerned about my privacy and tracking around the web

Privacy Badger

The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is programmed to look for tracking heuristics (i.e. specific actions that indicate someone is trying to track you).

Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage “supercookies,” canvas fingerprinting, and other sneaky tracking methods.

Consent-O-Matic

Not only will Consent-O-Matic automatically handle pop-up data consent forms (per GDPR regulations), but it’s brilliantly designed to interpret the often intentionally confusing language of consent pop-ups trying to trick you into agreeing to invasive tracking.

Developed by internet privacy researchers at Aarhus University in Denmark who grew sick of seeing so many deceptive consent pop-ups, Consent-O-Matic’s decision-making logic is built upon studying hundreds of pop-ups and identifying duplicitous patterns. So using this extension not only gives you a great ally in the fight against intrusive tracking, but you’re spared the annoyance of constantly clicking consent forms all over the internet.

I need an easier way to translate languages

Simple Translate

Do you do a lot of language translations on the web? If so, it’s a hassle always copying text and navigating away from the page you’re on just to translate a word or phrase. Simple Translate solves this problem by giving you the power to perform translations right there on the page. 

Just highlight the text you want translated and right-click to get instant translations in a handy pop-up display, so you never have to leave the page again. 

My grammar in speling is bad!

LanguageTool

Anywhere you write on the web, LanguageTool will be there to lend a guiding editorial hand. It helps fix typos, grammar problems, and even recognizes common word mix-ups like there/their/they’re. 

Available in 25 languages, LanguageTool automatically works on any web-based publishing platform like Gmail, web docs, social media sites, etc. The clever extension will even spot words you’re possibly overusing and suggest alternatives to spruce up your prose. 

YouTube your way

Improve YouTube!

Boasting 175+ customization features, Improve YouTube! is simple to grasp while providing a huge variety of ways to radically alter YouTube functionality. 

Key features include… 

  • Customize YouTube’s layout with different color schemes
  • Create shortcuts for common actions like skipping to next video, scrolling back/forward 10 seconds, volume control & more
  • Filter out unwanted elements like Related Videos, Shorts, Comments, etc.
  • Ad blocking (with ability to allow ads from channels you choose to support)
  • Simple screenshot and save features
  • Playlist shuffle
  • Frame by frame scrolling
  • High-def default video quality

I’m drowning in browser tabs! Send help! 

OneTab

You’ve got an overwhelming number of open tabs. You can’t close them. You need them. But you can’t organize them all right now either. You’re too busy. What to do?! 

If you have OneTab on Firefox you just click the toolbar button and suddenly all those open tabs become a clean list of text links listed on a single page. Ahhh serenity.

Not only will you create browser breathing room for yourself, but with all those previously open tabs now closed and converted to text links, you’ve also freed up a bunch of CPU and memory, which should improve browser speed and performance. 

If you’ve never installed a browser extension before, we hope you found something here that piques your interest to try. To continue exploring ways to personalize Firefox through the power of extensions, please see our collection of 100+ Recommended Extensions

The Mozilla BlogWindows 10 updates are ending. Here’s what it means for Firefox users.

Firefox logo with orange fox wrapped around purple globe.

This week Microsoft released the final free monthly update to Windows 10. While this marks the end of support from Microsoft, Firefox will continue to support Windows 10 for the foreseeable future.

If you remain on Windows 10, you will continue to get the same updates to Firefox you do today, with all of our latest feature improvements and bug fixes. This includes our commitment to resolve security vulnerabilities as rapidly as we can, sometimes in less than 24 hours, with special security updates. Windows 10 remains a primary platform for Firefox users. Unlike older versions of Windows like Windows 7 and 8, where Mozilla is only offering security updates to Firefox, Firefox on Windows 10 will get the latest and greatest features and bug fixes, just as it does on Windows 11. 

Should you upgrade to Windows 11?

While Mozilla will continue to deliver the latest updates to Firefox on Windows 10, security online also requires continued updates from Microsoft to Windows 10 itself, and to the many other software and devices that you use on your Windows 10 computer. That’s why we recommend upgrading to Windows 11 if your computer supports it. You can find out if your PC can run Windows 11 and upgrade to it for free from your Windows update settings. With this option, when you start up Windows 11 for the first time you’ll find that Firefox is still installed, and all of your data and settings are just like you left them. 

If your computer cannot run Windows 11, or you wish to remain on Windows 10 for other reasons, your next best option is to make sure you’re getting “extended security updates” from Microsoft. While these updates won’t deliver new Windows features or non-security bug fixes, they will fix security vulnerabilities that are found in Windows 10 in the future. You should see an option to “enroll” in these updates in your Windows update settings, and if you choose the “Windows Backup” option you’ll get the updates for free. Microsoft has more information on Windows 10 extended security updates if you have other questions. 

Preparing for a device upgrade or new PC

If you get a new Windows 11 PC you might be surprised to see that even if you used Windows Backup, non-Microsoft apps like Firefox haven’t migrated with you. You will typically get a link in your start menu or on your desktop to re-install Firefox, and after it’s installed you’ll find that everything is “fresh” — without your bookmarks, saved passwords, browsing history, or any of your other data and settings. 

This can be frustrating, but we do have a solution for you if you prepare in advance and back up your data using Firefox sync through a Mozilla account. To get started with sync, just choose “sign in” from the Firefox toolbar or menu, and we’ll walk you through the quick process of creating a Mozilla account and enabling sync. 

Firefox sync helps transfer your data securely

Sync isn’t just for people who have Firefox running on more than one computer. It’s also a safe way to back up your data and protect yourself against a lost laptop, a computer that breaks down or is damaged, or your own excited forgetfulness if you get rid of your old PC the moment you get a new one. And what many Firefox users may not realize is that Firefox sync is “end-to-end encrypted,” which is a fancy way of saying that not even Mozilla can read your data. Without your password, which we don’t know, your data is an indecipherable scramble even to us. But it’s safe on our servers nonetheless, which means that if you find yourself with a new PC and a “fresh” Firefox, all you need to do is log in and all your bookmarks, passwords, history and more will quickly load in. 

Meanwhile, you can also rest assured that if you continue to use Firefox on Windows 10 over the next few years, we’ll let you know through messages in Firefox if there is new information about staying secure and whether our stance regarding our support for Windows 10 needs to change. 

Thanks for using Firefox, and know that you can always reach us at Mozilla Connect. We’re eager for your feedback and questions.

Take control of your internet

Download Firefox

The post Windows 10 updates are ending. Here’s what it means for Firefox users. appeared first on The Mozilla Blog.

The Rust Programming Language Blogdocs.rs: changed default targets

Changes to default build targets on docs.rs

This post announces two changes to the list of default targets used to build documentation on docs.rs.

Crate authors can specify a custom list of targets using docs.rs metadata in Cargo.toml. If this metadata is not provided, docs.rs falls back to a default list. We are updating this list to better reflect the current state of the Rust ecosystem.

Apple silicon (ARM64) replaces x86_64

Reflecting Apple's transition from x86_64 to its own ARM64 silicon, the Rust project has updated its platform support tiers. The aarch64-apple-darwin target is now Tier 1, while x86_64-apple-darwin has moved to Tier 2. You can read more about this in RFC 3671 and RFC 3841.

To align with this, docs.rs will now use aarch64-apple-darwin as the default target for Apple platforms instead of x86_64-apple-darwin.

Linux ARM64 replaces 32-bit x86

Support for 32-bit i686 architectures is declining, and major Linux distributions have begun to phase it out.

Consequently, we are replacing the i686-unknown-linux-gnu target with aarch64-unknown-linux-gnu in our default set.

New default target list

The updated list of default targets is:

  • x86_64-unknown-linux-gnu
  • aarch64-apple-darwin (replaces x86_64-apple-darwin)
  • x86_64-pc-windows-msvc
  • aarch64-unknown-linux-gnu (replaces i686-unknown-linux-gnu)
  • i686-pc-windows-msvc

Opting out

If your crate requires the previous default target list, you can explicitly define it in your Cargo.toml:

[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc"
]

Note that docs.rs continues to support any target available in the Rust toolchain; only the default list has changed.

Firefox Add-on ReviewsTranslate the web easily with a browser extension

At Mozilla, of course we’re fans of Firefox’s built-in, privacy-focused translation feature, but the beauty of browser extensions is the vast array of niche tools and customization features they can provide. Sometimes finding the right extension for your personal needs can profoundly change the way you interact with the web. So if you do a lot of translating on the internet, you might consider using a specialized extension translator. Here are some great options…

I just want a simple, efficient way to translate. I don’t need fancy features.

Simple Translate

It doesn’t get much simpler than this. Highlight the text you want to translate and click the extension’s toolbar icon to activate a streamlined pop-up. Your highlighted text automatically appears in the pop-up’s translation field and a drop-down menu lets you easily select your target language. Simple Translate also features a handy one-click “Translate this page” button. 

Translate Web Pages

Maybe you just need to translate full web pages, like when reading news articles, how-to guides, or job related sites. If so, Translate Web Pages might be the ideal solution for you with its sharp focus on full-page utility. 

The extension includes a handy feature if you commonly translate a few languages — you can select up to three languages to easily access with a single-click popup menu. TWP also gives you the option to designate specific websites you always want translated without prompt.

S3.Translator

Supporting 100+ languages, S3.Translator serves up a full feature set of language tools, like the ability to translate full or select portions of a page, text-to-speech translation, YouTube subtitle translations, and more.

There’s even a nifty Learning Language mode, which allows you to turn any text into the language you’re studying. Toggle between languages so you can conveniently learn as you naturally browse the web.

To Google Translate

Very popular, very simple translation extension that exclusively uses Google’s translation services, including text-to-speech. 

Simply highlight any text on a web page and right-click to pull up a To Google Translate context menu that allows three actions: 1) translate the text into your preferred language; 2) listen to audio of the text; 3) translate the entire page.

Right-click any highlighted text to activate To Google Translate.

I do a ton of translating. I need power features to save me time and trouble.

ImTranslator

Striking a balance between out-of-the-box ease and deep customization potential, ImTranslator leverages three top translation engines (Google, Bing, Translator) to cover 100+ languages.

Other strong features include text-to-speech, dictionary and spell check in eight languages, hotkey customization, and a huge assortment of ways to customize the look of ImTranslator’s interface — from light and dark themes to font size and more. 

Immersive Translate

One of the most feature-packed translation extensions you’ll find, Immersive Translate goes beyond the web to capably handle PDFs, eBooks, and much more.

With more features than we have space to list, here are some of the most uniquely compelling capabilities of Immersive Translate.

  • Smartly identifies the main content portions of a web page to provide elegant side-by-side bilingual translations while avoiding page clutter
  • Mouse hover translations
  • Input translation box, so you can enter text to be translated (an ideal tool for real-time bilingual conversations)
  • Video subtitle translations
  • Strong desktop and mobile support

Mate Translate

Mate Translate is a slick, intuitive extension that performs all basic translation functions very well, but it’s the paid tier that unlocks some unique features, such as Sync (saved translations can appear across devices and browsers, including iPhones and Macs). 

There’s also a neat Phrasebook feature, which lets you build custom word and phrase lists so you can return to common translations you frequently need. It works offline, too, so it’s ideal for travellers who need quick reference to common foreign phrases. 

These are some of our favorites, but there are plenty more translation extensions to explore on addons.mozilla.org.

The Mozilla BlogFox Recap: A student-built tool that analyzes your browsing habits

What would your browser history say about you? Whether you were getting things done this week or just collecting tabs, a new Firefox extension helps you reflect on your digital habits. 

Designed as a personal productivity tool, Fox Recap is a capstone project from a group of college seniors at California State University, Monterey Bay. It categorizes your browsing history, shows how much time you’re spending on different sites, and turns that data into simple visual reports. Everything happens locally on your device, so your information stays private.

Related story: Developer Spotlight: Fox Recap

Gradient intro card inviting a dive into today’s browser activity overview
Browser activity stat card showing Technology as most-clicked with 37 visits

How Fox Recap works

Once you download and open the extension on Firefox for desktop, click on settings and grant permission to run the ML engine. From there, you can choose to view your browsing history for today, this week or this month. 

Fox Recap then lays out your activity in simple charts and categories like technology, shopping, education and entertainment.

“It’s really a tool for you to know how you use your browser,” said one of the student developers, Taimur Hasan. “Maybe you want to lessen the amount of time you spend on entertainment, and see that you use more education sites.”

Kate Sawtell wanted to create a tool that helps people see how they spend their time on the internet. “As a busy mom with a bunch of side projects, I love how it shows where my time online actually goes,” Kate said. “Am I researching, streaming shows or slipping into online shopping holes? It’s not super serious or judgmental, just a quick snapshot of my habits. Sometimes it makes me feel productive, other times it’s like, wow okay maybe I should chill on the shopping tab.”

Members of the Fox Recap team at California State University, Monterey Bay, presenting their capstone project. Pictured (left to right): Taimur Hasan, Mozilla community manager Matt Cool, Kate Sawtell, and Diego Valdez. Not pictured: Peter Mitchell.

‘Useful AI and strong privacy can coexist’

Firefox machine learning engineer Tarek Ziadé served as a mentor for the project. He was struck by how quickly Taimur, Kate, Diego and Peter internalized both the technical challenges of building AI features and their privacy implications. 

“I had assumed younger developers might treat privacy as an afterthought,” Tarek said. “I was wrong. They pushed for privacy by design from the start.”

Taimur, who trained the model himself rather than using an existing one, explained: “It’s not an off-the-shelf model that I pulled off the internet. I trained it myself using my gaming computer.”


Tarek believes that what the group built reflects the direction in which privacy-focused technology is headed.

“Intelligence should be local by default, data should be minimized, and anything that needs to leave the device should be explicit and consented,” Tarek said. “As AI capabilities become a commodity, the differentiator will be trust.”

That’s exactly where Mozilla should be leading, Tarek added: “making high-quality, on-device AI the default, and proving that useful AI and strong privacy can coexist.”

A glimpse of the next generation of web builders

For team member Diego Valdez, the project’s value is personal and practical: “I hope people who use Fox Recap can learn about their browsing activity in an engaging way, in hopes [of helping them] improve their productivity.”

Mozilla community manager Matt Cool sees it in a larger frame. “It’s a scary and exciting time to enter the tech industry,” Matt said. “The next generation of open web builders is already stepping up. Right here in Monterey, they’re building real-world projects, contributing to open-source, and tackling some of the toughest problems facing the future of the web.”

Fox Recap is one of several student projects showcased at this spring’s Capstone Festival by the School of Computing and Design at Cal State Monterey Bay. Professor Bude Su, who chairs the department, emphasized the value of mentorship as students prepare for what comes next.

“Mozilla’s involvement brought an added layer of motivation for our students,” Professor Su said. “The opportunity to work on a real-world project under industry mentorship has been invaluable for our students’ learning and professional growth.”

The collaboration shows what can happen when education, mentorship and Mozilla’s values of openness and trust come together. Fox Recap helps make sense of the tabs we collect, but it also points to something bigger: a new wave of developers building tools that respect the people who use them.

Take control of your internet

Download Firefox

The post Fox Recap: A student-built tool that analyzes your browsing habits  appeared first on The Mozilla Blog.

The Mozilla BlogThe social media director who helps make Merriam-Webster go viral

A bearded man in a denim shirt over a dark T-shirt, against a green background with a layered pixel effect.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with John Sabine, the social media director of Merriam-Webster and Encyclopedia Britannica. He talks about his favorite subreddit, silly deep dives and why his job makes him hopeful about the internet.

What is your favorite corner of the internet?

Honestly, it’s the “AskHistorians” subreddit. It’s one of my few internet habits that I have that has kept up. I can’t recommend it enough. I wish more things were curated with such level of scrutiny and scholarship. If people disagree, they disagree as Ph.D. people disagree. I don’t have a Ph.D., but I imagine it’s respectful. There’s profiles and avatars, but those feel very secondary to the content. You lead with the “what,” and then you can look up the “who” afterwards. I don’t post on Reddit at all; I’m a lurker in general on the internet. So I’m shocked by how many people weigh in on things.

What is an internet deep dive that you can’t wait to jump back into?

I have a bunch of articles that I have bookmarked… and my goal is to read one of the 400 articles I have saved. What I’m looking forward to specifically is just to read an article for joy, that’s not doomscrolling or part of my job. I do feel like when you have this job, you kind of get internet-ed out every day. And also: crosswords. I want to get better at crosswords, if that counts. We have one on merriam-webster.com, and I also do The New York Times, though I rarely finish it.

What’s the last great story that you read?

It was on ringer.com. A writer named Tyler Parker went through NBA names. He just ranked their names, had nothing to do with basketball. I started it before bed, and I was like, “Oh, I’ll skim.” I read every single word. He really thought about the names and how they make people feel. And it’s truly just how they sound like. That’s it. It was written beautifully. That’s a silly one, but I think silly deep dives are probably good for the soul right now.

What is the one tab you always regret closing?

Probably my calendar… And honestly, I always have Merriam-Webster and Britannica up. And I rarely do close them because I always need them for my work.

What can you not stop talking about on the internet right now?

So Merriam-Webster is releasing its first print dictionary in over 20 years. And they made it really pretty, and it feels like a really cool book that you would display. I’m very excited because I’m doing deep dives of old ads for an almost 200-year-old company. There’s a lot of stuff to go through. Some of it we have in the archives, some of it is just out there. So just going through the old print stuff, finding old paper dictionaries. So, like, selfishly, I’m excited for the new collegiate 12th edition.

What was the first online community you engaged with?

I’m a lurker, so engagement is a lot for me. The first time I probably posted was on a forum when I moved to Chicago to do improv comedy. There’s a Chicago improv forum and I think I was like, “What show should I see?”

What articles and/or videos are you waiting to read/watch right now?

I’m waiting for the next [recommendation] from my group chats. There are some people that will just send you anything, and you’re like, “OK, thank you for sending me this. I’ll watch 30% of the things you sent.” But there’s the ones that you’re like, “Oh, yeah, gotta watch that.” So I’ve got a couple friends like that, so I hope they send me stuff because Lord knows, the internet’s huge.

Is there anything about the way people engage with Merriam-Webster online that makes you feel hopeful about the internet?

Oh, 4,000%. Yes, doomscrolling is a reality of being online now. I know a lot of people who just step away and go outside and touch grass.

But there’s still good stuff happening. The comment sections on our Instagram and TikTok can actually be really fun. People have genuine, kind, often funny conversations. It’s rarely mean. Seeing that makes me hopeful, because people clearly want wholesome, thoughtful interactions.

People have a personal connection to language. Over time, I’ve seen our audience expand to include all kinds of people who care deeply about words, even if they wouldn’t call themselves “word nerds.” Language is personal, and I think our work celebrates that.

And honestly, I feel more hopeful doing this job on the internet than I think I would if I weren’t doing this work and was just online as a regular user.


John Sabine is the social media director for Merriam-Webster and Encyclopedia Britannica. He is originally from Dallas, Texas, and he’s never once spelled “definitely” correctly on the first try.

The post The social media director who helps make Merriam-Webster go viral appeared first on The Mozilla Blog.

Mozilla Performance BlogFirefox 144 ships interactionId for INP


TL;DR

Firefox 144 ships PerformanceEventTiming.interactionId, which lets browsers and tools group events that belong to the same user interaction. This property is used to calculate Interaction to Next Paint (INP), one of the Core Web Vitals.


Firefox 144 ships support for the PerformanceEventTiming.interactionId property. It helps browsers and tools identify which input events belong to a single user interaction, such as a pointerdown, pointerup, and click triggered by the same tap.

The Interaction to Next Paint (INP) metric, part of the Core Web Vitals, relies on this grouping to measure how responsive a page feels during real user interactions. INP represents how long it takes for the next frame to paint after a user input. Instead of looking at a single event, it captures the worst interaction latency during the page’s lifetime, giving a more complete view of responsiveness.

Why this matters

Before interactionId, each event had to be measured separately, which made it hard to connect related events as part of the same interaction.
With this property, performance tools and developers can now:

  • Group related input events into a single interaction
  • Measure interaction latency more accurately
  • Identify and debug slow interactions more easily

Using interactionId

If you use the PerformanceObserver API to collect PerformanceEventTiming entries, you’ll start seeing an interactionId field in Firefox 144. Events that share a non-zero interactionId belong to the same interaction group, which can be used to calculate latency or understand where delays occur.

// The key is the interaction ID.
let eventLatencies = {};

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.interactionId > 0) {
      const interactionId = entry.interactionId;
      if (!eventLatencies[interactionId]) {
        eventLatencies[interactionId] = [];
      }
      eventLatencies[interactionId].push(entry.duration);
    }
  });
});

observer.observe({ type: "event", buffered: true });

// Log events with maximum event duration for a user interaction
Object.entries(eventLatencies).forEach(([k, v]) => {
  console.log(Math.max(...v));
});

If you use external tools and libraries like web-vitals, they should already collect the INP value for you.

The Rust Programming Language BlogAnnouncing the New Rust Project Directors

We are happy to announce that we have completed the annual process to elect new Project Directors.

The new Project Directors are:

They will join Ryan Levick and Carol Nichols to make up the five members of the Rust Foundation Board of Directors who represent the Rust Project.

We would also like to thank the outgoing Project Directors for their contributions and service:

The board is made up of Project Directors, who come from and represent the Rust Project, and Member Directors, who represent the corporate members of the Rust Foundation. Both of these director groups have equal voting power.

We look forward to working with and being represented by this new group of project directors.

We were fortunate to have a number of excellent candidates and this was a difficult decision. We wish to express our gratitude to all of the candidates who were considered for this role! We also extend our thanks to the project as a whole who participated by nominating candidates and providing additional feedback once the nominees were published.

Finally, we want to share our appreciation for Tomas Sedovic for facilitating the election process. An overview of the election process can be found in a previous blog post here.

Firefox Developer ExperienceFirefox WebDriver Newsletter 144

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 144 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

Marionette

The Mozilla BlogChoose how you search and stay organized with Firefox

Illustration showing Firefox’s browser interface with a focus on search options. A bar labeled “Use visual search” overlaps an image of a floral painting, while another option labeled “with the Perplexity icon” appears below it. A small browser window shows a cropped view of the same artwork. The Firefox toolbar is visible at the bottom with a highlighted smiley face icon. The background is a gradient of purple and blue with grid lines and sparkles, conveying a playful, tech-inspired design.

At Mozilla, we build Firefox around one principle: putting you in control. With today’s release, we’re introducing new features that make browsing smarter and more personal while staying true to the values you care about most: privacy and choice.

A new option for search, still on your terms.

Earlier this year, we gave you more choice in how you search by testing Perplexity, an AI-powered answer engine, as a search option on Firefox. Now, after positive feedback, we’re making it a fixture, rolling it out to more users for desktop. Perplexity provides conversational answers with citations, so you can validate information without digging through pages of results.

This addition reflects our shared commitment to choice: You decide when to use an AI answer engine, or if you want to use it at all. Available globally, Perplexity can be found in the unified search button in the address bar. We’ll be bringing Perplexity to mobile in the coming months. And as always, privacy matters – Perplexity maintains strict prohibitions against selling or sharing personal data.

Organize your life with profiles

At the beginning of the year, we started testing profiles — a way to create and switch between different browsing setups. After months of gradual rollout and community feedback, profiles are now available to everyone.

Firefox Profiles feature shown with an illustration of three foxes and a setup screen for creating and customizing browser profiles.

Profiles let you keep work tabs distinct from personal browsing, or dedicate a setup to testing extensions or managing a specific project. Each profile runs independently, giving you flexibility and focus. Feedback from students, professionals and contributors helped us refine this feature into the version you see today.

Discover more with visual search

In September, we announced visual search on Mozilla Connect and began rolling it out for testing. Powered by Google Lens, it lets you search what you see with a simple right-click on any image.

Search what you see with a simple right-click on an image.

You can:

  • Find similar products, places or objects 
  • Copy, translate or search text from images
  • Get inspiration for learning, travel or research

This desktop-only feature makes searching more intuitive and curiosity-driven. For now, it requires Google as your default search engine. Tell us what you think. Your feedback will guide where visual search appears next, from the address bar to mobile.

Evolving to meet your needs

Today’s release brings more ways to browse on your terms — from smarter search with Perplexity, to profiles that let you separate work from play, to visual search.

Each of these features reflects what matters most to us: putting you in control of your online experience and building alongside the community that inspires Firefox. With your feedback, we’ll keep shaping a browser that not only keeps pace with the future of the web but also stays true to the open values you trust.

We’re excited to see how you use what’s new, and can’t wait to share what’s next.

Take control of your internet

Download Firefox

The post Choose how you search and stay organized with Firefox appeared first on The Mozilla Blog.

Niko MatsakisWe need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole “Ergonomic RC” work was originally proposed by Dioxus and their answer is simple: definitely not. For the kind of high-level GUI applications they are building, having to call cx.handle() to clone a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes – knowing where handles are created can impact performance, memory usage, and even correctness (don’t worry, I’ll give examples later in the post). So how do we reconcile this?

This blog argues that we should make it ergonomic to be explicit. This wasn’t always my position, but after an impactful conversation with Josh Triplett, I’ve come around. I think it aligns with what I once called the soul of Rust: we want to be ergonomic, yes, but we want to be ergonomic while giving control1.

I like Tyler Mandry’s Clarity of purpose construction, “Great code brings only the important characteristics of your application to your attention”. The key point is that there is great code in which cloning and handles are important characteristics, so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code.

This does not mean we cannot (later) support automatic clones and handles. It’s inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get “fully explicit” to be nice enough that we don’t really need the automatic version. There are benefits from having “one Rust”, where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don’t suck too bad2 when they’re overkill.

“Rust should not surprise you.” (hat tip: Josh Triplett)

I mentioned this blog post resulted from a long conversation with Josh Triplett3. The key phrase that stuck with me from that conversation was: Rust should not surprise you. The way I think of it is like this. Every programmer knows what it’s like to have a marathon debugging session – to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find that you wrote if foo and not if !foo. And occasionally you find out that your language was doing something that you didn’t expect. That some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun.

Overall, Rust is remarkably good at avoiding footguns4. And part of how we’ve achieved that is by making sure that things you might need to know are visible – like, explicit in the source. Every time you see a Rust match, you don’t have to ask yourself “what cases might be missing here” – the compiler guarantees you they are all there. And when you see a call to a Rust function, you don’t have to ask yourself if it is fallible – you’ll see a ? if it is.5

Creating a handle can definitely “surprise” you

So I guess the question is: would you ever have to know about a ref-count increment? The tricky part is that the answer here is application-dependent. For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between Rc and Arc and then proving that you don’t mess it up.
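
To make that concrete, here is a minimal sketch (the values are made up for illustration; only std is used) of what that choice looks like in practice: creating a handle is explicit, and the compiler proves that the cheaper, non-atomic Rc never crosses a thread boundary.

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc: atomic ref-count, safe to share across threads.
    let shared = Arc::new(String::from("hello"));
    let handle = Arc::clone(&shared); // explicit handle creation
    thread::spawn(move || println!("{handle}")).join().unwrap();

    // Rc: cheaper, non-atomic ref-count, but !Send.
    let local = Rc::new(String::from("hello"));
    let local_handle = Rc::clone(&local); // also explicit
    // The next line would not compile: the compiler rejects sending
    // an Rc handle to another thread, so you can't "mess it up".
    // thread::spawn(move || println!("{local_handle}"));
    println!("{local_handle}");
}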

But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC’d languages, has deterministic destruction. This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to-RAII entitled “Rust means never having to close a socket”. But although the points where handles are created and destroyed are deterministic, the nature of reference-counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down.

Just recently, I was debugging Symposium, which is written in Swift. Somehow I had two IPCManager instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to write .handle() explicitly to increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes.6

Josh gave me a similar example from the “bytes” crate. A Bytes type is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It’s not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can’t see explicitly in the code where those handles are created.
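
As a rough sketch of that situation (using the bytes crate’s public API, with a made-up buffer size), the difference between keeping a cheap slice handle and copying the bytes out looks something like this:

use bytes::Bytes;

fn main() {
    // Pretend this is a large buffer we received from the network.
    let big = Bytes::from(vec![0u8; 8 * 1024 * 1024]);

    // A new handle: no copy, but the full 8 MB backing buffer stays
    // alive for as long as `header` does.
    let header = big.slice(0..64);

    // Copying the slice out gives up the cheap handle in exchange for
    // letting the backing buffer be freed once `big` is dropped.
    let header_copy = Bytes::copy_from_slice(&big[0..64]);

    drop(big);    // the 8 MB buffer is still alive through `header`...
    drop(header); // ...and only now can it actually be freed.

    assert_eq!(header_copy.len(), 64);
}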

A similar case occurs with APIs like Arc::get_mut7. get_mut takes an &mut Arc<T> and, if the ref-count is 1, returns an &mut T. This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used – but when you need it, it’s so nice that it’s there.
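
Here is a small sketch of that API in action (plain std, nothing application-specific): uniqueness is only recoverable while no other handle exists.

use std::sync::Arc;

fn main() {
    let mut data = Arc::new(vec![1, 2, 3]);

    // Only one handle exists, so we can recover unique, mutable access.
    if let Some(v) = Arc::get_mut(&mut data) {
        v.push(4);
    }

    let other = Arc::clone(&data); // ref-count is now 2

    // With a second handle alive, get_mut refuses to hand out &mut.
    assert!(Arc::get_mut(&mut data).is_none());

    drop(other); // back to a single handle...
    assert!(Arc::get_mut(&mut data).is_some()); // ...uniqueness recovered.
}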

“What I love about Rust is its versatility: low to high in one language” (hat tip: Alex Crichton)

Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles and an allow-by-default lint that would let crates which don’t want that turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust For Linux likely fit this description, but any Rust application that uses get_mut or make_mut might also.

And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn’t in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the “Rustacean Principles”. Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts.

I feel like you can interpret Alex’s quote in two ways, depending on what you choose to emphasize. You could hear it as, “It’s important that Rust is good for high-level use cases”. That is true, and it is what leads us to ask whether we should even make handles visible at all.

But you can also read Alex’s quote as, “It’s important that there’s one language that works well enough for both” – and I think that’s true too. The “true Rust gestalt” is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers.

The “soul of Rust”: low-level enough for a kernel, usable enough for a GUI

Let’s be honest. High-level GUI programming is not Rust’s bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel.

The goal of Rust is to be a single language that can, by and large, be “good enough” for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI. It ain’t easy, but it’s the job.

This isn’t the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the “soul of Rust” and a followup going into greater detail. I think the catchphrase “low-level enough for a Kernel, usable enough for a GUI” kind of captures it.

Conclusion: Explicit handles should be the first step, but it doesn’t have to be the final step

There is a slight caveat I want to add. I think another part of Rust’s soul is preferring nuance to artificial simplicity (“as simple as possible, but no simpler”, as they say). And I think the reality is that there’s a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land8) and where explicitly creating new handles is noise, not signal. This is why e.g. Swift9 makes ref-count increments invisible – and they get a big lift out of that!10 I’d wager most Swift users don’t even realize that Swift is not garbage-collected11.

But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first.

OK, I think I’ve made this point 3 ways from Sunday now, so I’ll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness.


  1. I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee ↩︎

  2. It’s an industry term↩︎

  3. Actually, by the standards of the conversations Josh and I often have, it wasn’t really all that long – an hour at most. ↩︎

  4. Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that’s a topic for another blog post. ↩︎

  5. Modulo panics, of course – and no surprise that accounting for panics is a major pain point for some Rust users. ↩︎

  6. In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep’ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal they are occurring. ↩︎

  7. Or Arc::make_mut, which is one of my favorite APIs. It takes an Arc<_> and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍 ↩︎

  8. My experience is that, due to language limitations we really should fix, many async constructs force you into 'static bounds which in turn force you into Rc and Arc where you’d otherwise have been able to use &↩︎

  9. I’ve been writing more Swift and digging it. I have to say, I love how they are not afraid to “go big”. I admire the ambition I see in designs like SwiftUI and their approach to async. I don’t think they bat 100, but it’s cool they’re swinging for the stands. I want Rust to dare to ask for more↩︎

  10. Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion). ↩︎

  11. Though I’d also wager that many eventually find themselves scratching their heads about a ref-count cycle. I’ve not dug into how Swift handles those, but I see references to “weak handles” flying around, so I assume they’ve not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It’s harder to do since we discourage interior mutability, but not that hard. ↩︎

Mozilla ThunderbirdState of the Thunder 13: How We Make Our Roadmap

Welcome back to our thirteenth episode of State of the Thunder! Nothing unlucky about this latest installment, as Managing Director Ryan Sipes walks us through how Thunderbird creates its roadmap. Unlike other companies where roadmaps are driven solely by business needs, Thunderbird is working with our community governance and feedback from the wider user community to keep us honest even as we move forward.

Want to find out how to join future State of the Thunders? Be sure to join our Thunderbird planning mailing list for all the details.

Open Source, Open Roadmaps

In other companies, product managers tend to draft roadmaps based on business needs. Publishing that roadmap might be an afterthought, or might not happen at all. Thunderbird, however, is open source, so that’s not our process.

A quick history lesson provides some needed context. Eight years ago, Thunderbird was solely a community project driven by a community council. We didn’t have a roadmap like we do today. With the earlier loss of funding and support, the project was in triage mode. Since then, thanks to a wonderful user community who has donated their skill, time, and money, we’ve changed our roadmap process.

The Supernova release (Thunderbird 115) was where we first really focused on making a roadmap with a coherent product vision: a modernized app in performance and appearance. We developed this roadmap with input from the community, even if there was pushback to a UI change.

The 2026 Roadmap Process

At this point, the project has bylaws for the roadmap process, which unites the Thunderbird Council, MZLA staff, and user feedback. Over the past year we’ve added two new roadmaps: one for the Android app and another for ThunderbirdPro. (Note, iOS doesn’t have a roadmap yet. Our current goal is: let’s be able to receive email!) But even with these changes and additions, the Mozilla Manifesto is still at the heart of everything we do. We firmly believe that making roadmaps with community governance and feedback from the larger community keeps us honest and helps us make products that genuinely improve people’s lives.

Want to see how our 2025-2026 Roadmaps are taking shape? Check out the Desktop Roadmap, as well the mobile roadmaps for Android and iOS.

Questions

Integrating Community Contributions

In the past, community contributors have picked up “nice-to-have” issues and developed them alongside us, or pursued the problems and challenges that affect them the most. Sometimes one of these scenarios coincides with our roadmap, and we get features like the new drag-and-drop folders!

Needless to say, we love when the community helps us get the product where we hope it will go. Sometimes, we have to pause development because of shifted priorities, and we’re trying to get better at updating contributors when these shifts happen on places like the tb-planning and mobile-planning mailing lists.

And these community contributions aren’t just code! Testing is a crucial way to help make Thunderbird shine on desktop and mobile. Community suggestions on Mozilla Connect help us dream big, as we discussed in the last two episodes. Reporting bugs, either on Bugzilla for the desktop app or GitHub for the Android app, help us know when things aren’t working. We encourage our community to learn more about the Council, and don’t be afraid to get in touch with them at council@thunderbird.net.

Telemetry and the Roadmap

While we know there are passionate debates on telemetry in the open source community, we want to mention how respectful telemetry can make Thunderbird better. Our telemetry helps us see what features are important, and which ones just clutter up the UI. We don’t collect Personally Identifying Information (PII), and our code is open so you can check us on this. Unlike Outlook, which shares its data with 801 partners, we don’t. You can read all about what we use and how we use it here.

So if you have telemetry turned off, please, we ask you to turn it on, and if it’s already on, to keep it on! Especially if you’re a Linux user, enabling telemetry helps us have a better gauge of our Linux user base and how to best support you.

Roadmap Categories and Organizing

Should we try to ‘bucket’ similar items on our roadmap and spread development evenly between them, or should we concentrate on the bucket that needs it most? The answer to this question depends on who you ask! Sometimes we’re focused on a particular area, like UI work in Supernova and current UX work in Calendar. Sometimes we’re working to pay down tech debt across our code; that effort can pave the way for future work, like the current push to modernize our database so we can have a true Conversation View and other features. Sometimes roadmaps reveal obstacles you have to overcome, and Ryan thinks we’re getting faster at this.

Where to see the roadmaps

The current desktop roadmap is here, while the current Android roadmap is on our GitHub repo. In the future, we’re hoping to update where these roadmaps live, how they look, and how you can interact with them. (Ryan is particularly partial to Obsidian’s roadmap.) We ultimately want our roadmaps to be storytelling devices, and to keep them more updated to any recent changes.

Current Calls for Involvement

Join us for the last few days of testing EWS mail support! Also, we had a fantastic time with the Ask a Fox replython, and would love if you helped us answer support questions on SUMO.

Watch the Video (also on PeerTube)

Listen to the Podcast

The post State of the Thunder 13: How We Make Our Roadmap appeared first on The Thunderbird Blog.

The Mozilla BlogShake to Summarize recognized with special mention in TIME’s Best Inventions of 2025

Illustration featuring a TIME magazine cover titled “Best Inventions of 2025,” showing a humanoid robot folding clothes, alongside a smartphone displaying the Firefox logo and a screen reading “Summarizing…” with a dessert recipe below it. Cover credit: Photography by Spencer Lowell for TIME.

Shake to Summarize has been recognized with a Special Mention in TIME’s Best Inventions of 2025.

Each year TIME spotlights a range of new industry-defining innovations across consumer electronics, health tech, apps and beyond. This year, Firefox’s Shake to Summarize feature made the list for bringing a smart solution to a modern user problem: information overload. 

With a single shake or tap, users on iOS devices can get to the heart of an article in seconds. The cool part? Summaries adapt to what you’re reading: recipes pull out the steps for cooking, sports focus on game scores and stats, and news highlights the key takeaways from a story.

“We’re thrilled to see Firefox earn a TIME Best Inventions 2025 Special Mention! Our work on Shake to Summarize reflects how Firefox is evolving,” said Anthony Enzor-DeMeo, general manager of Firefox. “We’re reimagining our browser to fit seamlessly into modern life, helping people browse with less clutter and more focus. The feature is also part of our efforts to give mobile users a cleaner UI and smarter tools that make browsing on the go fast, seamless, and even fun.”

Launched in September 2025 and currently available to English-language users in the U.S., Shake to Summarize generates summaries using Apple Intelligence on iPhone 15 Pro or later running iOS 26 or above, and Mozilla-hosted AI for other devices running iOS 16 or above.

“This recognition is a testament to the incredible work of our UX, design, product, and engineering teams who brought this innovation to life, showcasing that Firefox continues to lead with purpose, creativity, and a deep commitment to user-centric design. Big thank you!” added Enzor-DeMeo.

The Firefox team is working on making the feature available to more users and for those on Android. In the meantime, iOS users can already make the most of Shake to Summarize available in the Apple app store now.

Take control of your internet

Download Firefox

The post Shake to Summarize recognized with special mention in TIME’s Best Inventions of 2025 appeared first on The Mozilla Blog.

Mozilla ThunderbirdState Of The Bird 2024/25

The past twelve months have been another remarkable chapter in Thunderbird’s journey. Together, we started expanding Thunderbird beyond its strong desktop roots, introducing it to smartphones and web browsers to make it more accessible to more people. Thunderbird for Android arrived in the fall and has been steadily improving thanks to our growing mobile team, as well as feedback and contributions from our growing global family. A few months later, in December 2024, we celebrated an extraordinary milestone: 20 years of Thunderbird! We also looked toward a sustainable future with the announcement of Thunderbird Pro, with one of its first services, Appointment, already finding an audience in closed beta. 

The past year also saw a shift in how Thunderbird evolves. Although we recently released our latest annual ESR update (codenamed Eclipse), the bigger news is that our team built the new Monthly Release channel, which is now the default for most of you. This change means you’ll see more frequent updates that make Thunderbird feel fresher, more responsive, and more in tune with your personalized needs.

Before diving into all the details, I want to pause and express our deepest gratitude to the incredible global community that makes all of this possible. To the hundreds of thousands of people who donated financially, the volunteers who contributed their time and expertise, and the beta testers who carefully helped us polish each update: thank you! Thunderbird thrives because of you. Every milestone we celebrate is a shared achievement, and a shining example of the power of community-driven, open source software development.

Team and Product Updates

Desktop and release updates

In December 2024, we celebrated Thunderbird’s 20th anniversary. Two decades of proving that email software can be both powerful and principled was not without its ups and downs, but that milestone reaffirmed something we hear so often from our community: Thunderbird continues to matter deeply to people all over the world. 

One of the biggest changes this year was the introduction of a new monthly release channel, simply called “Thunderbird Release.” Making this shift required an enormous amount of coordination and care across our desktop and release teams. Unlike the long-standing Extended Support Release (ESR), which provides a single major update every July, the new Thunderbird Release delivers monthly updates. This approach means we can bring you useful improvements and new features significantly faster, while keeping the stability and reliability you rely on.

Over the past year, our desktop team focused heavily on introducing changes that people have been asking for. Specifically, changes that make Thunderbird feel more efficient, intuitive, and modern. We improved visual consistency across system themes, gave you more ways to control the appearance of your message lists and how they’re organized, modernized notifications with native OS integration and quick actions, and moved closer to full Microsoft Exchange support. 

Many of you who switched from the ESR to the new Thunderbird Release channel started seeing these updates as early as April. For those who stuck with the ESR, the annual update, codenamed Eclipse, arrived in July. Thanks to the solid foundation established in those smaller monthly updates, Eclipse enjoyed the smoothest rollout of any annual release in Thunderbird’s history. 

In-depth details on Desktop development can be found in our monthly Developer Digest updates on our blog. 

Thunderbird Mobile

Android

It took longer than we originally anticipated, but Thunderbird has finally arrived as a true smartphone app. The launch of Thunderbird for Android in October 2024 was one of our most exciting steps forward in years. Releasing it took more than two years of active development, beta testing, and invaluable community feedback. 

​​This milestone was made possible by transforming the much-loved K-9 Mail app into something we could proudly call Thunderbird. That process included a full redesign of the interface, including bringing it up to modern design standards, and building an easy way for people to bring their existing Thunderbird desktop accounts directly into the Android app.

We’ve been encouraged by the enthusiastic response to Thunderbird on Android, but we’re also listening closely to your feedback. Our team, together with community contributors, has one very focused goal: to make Thunderbird the best Android email app available. 

iOS

We’ve also seen the overwhelming demand to build a version of Thunderbird for the iOS community. Unlike the Android app, the iOS app is being built from the ground up. 

Fortunately, Thunderbird for iOS took some major steps forward this year. We published the initial repository (a central location for open-source project files and code) for the Thunderbird mobile team and contributors to work together, and we’re laying the groundwork for public testing. 

Our goal for the first public alpha will be to support manual account setup and basic inbox viewing to meet Apple’s minimum review standards. These early pre-release versions will be distributed through TestFlight, allowing Thunderbird for iOS to benefit from your real-world feedback. 

When we started building Thunderbird for iOS, a core decision was made to use a modern foundation (JMAP) designed for mobile devices. This will allow for, among other advantages, faster mail synchronization and more efficient resource usage. The first pieces of that foundation are already in place, with the basic ability to view folders and messages. We’ve also set up internal tools that will make regular updates, language translations, and community testing possible. 

Thunderbird for iOS is still in the early stages of development, but momentum is strong, our team is growing, and we’re confidently moving toward the first community-accessible release. 

In depth details on mobile development can be found in our monthly Mobile Progress Report on our blog. 

Thundermail and Thunderbird Pro services

It’s no secret we’ve been building additional web services under the Thunderbird Pro name, and 2025 marked a pivotal moment in our vision for a complete, open-source Thunderbird ecosystem. 

This year we announced Thundermail, a dedicated email service by Thunderbird. During the past decade, we’ve seen a large move away from dedicated email clients to products like Gmail, partially because of the robust ecosystem around them. The plan for Thundermail is to eventually offer an alternative webmail solution that protects your privacy, and doesn’t use your messages to train AI or show you ads. 

Here’s what else we’ve been working on in addition to Thundermail: 

During its current beta, Thunderbird Appointment saw great improvements in managing your schedule, with many of the changes focused on reliability and visual polish.

Thunderbird Send, an app for securely sharing encrypted files, also saw forward momentum. Together, these services are steadily moving toward a wider beta launch this fall, and we’re excited to see how you’ll use them to improve your personal and professional lives. 

All of the work going into Thundermail and Thunderbird Pro services is guided by a clear goal: providing you with an ethical alternative to the closed-off “walled gardens” that dominate our digital communication. You shouldn’t have to sacrifice your values and give up your personal data to enjoy convenience and powerful features. 

In-depth details on Thunderbird Pro development can be found in our Thunderbird Pro updates on our blog.

2024 Financial Picture

The generosity of our donors continues to power everything we do, and the importance of these financial contributions cannot be overstated. In 2024, the Thunderbird project once again saw continued growth in donations, which paved the way for Thundermail and the Thunderbird Pro services you just read about. It also gave us the opportunity to grow our mobile development team, improve our user support outreach, and expand our connections to the community.

Here’s a detailed breakdown of our donation revenue in 2024, and why many of these statistics are so meaningful. 

Contribution Revenue

In 2024, financial contributions to Thunderbird reached $10.3 million, representing a 19% increase over the previous year. This support came courtesy of more than 539,000 transactions from more than 335,000 individual donors. A healthy 25% of these contributions were given as recurring monthly support.

What makes this so meaningful to us isn’t the total revenue, or the scale of the donations. It’s how those donations break down. The average contribution was $18.88, with a median of $16.66. Among our recurring donors, the average monthly gift was only $6.25. In fact, 53% of all donations were $20 or less, and 94% were $35 or less. Only 17 contributions were $1,000 or more. 

What does this represent when we go beyond the numbers? It means Thunderbird isn’t sustained by a handful of wealthy benefactors or corporate sponsors. Rather, it is sustained by a global community of people who believe in what we’ve built and what we’re still building, and they come together to keep it moving forward.

And that global reach continues to inspire us. We received contributions from more than 200 countries. The top ten contributing countries – Germany, the United States, France, the United Kingdom, Switzerland, the Netherlands, Japan, Italy, Austria, and Canada – accounted for 83% of our total revenue.

But products aren’t just numbers and code. Products are the people that work on them. To support the ambitions of our expanding roadmap, our team grew significantly in 2024. We added 14 new team members throughout the year, closing out 2024 with 43 full-time staff members. Much of this growth strengthened our mobile development, web services, and desktop + release teams. 80% of our staff focuses on technical work – things like product development and infrastructure – but we also added more roles to actively support users, improve community outreach, and smooth out internal operations. 

Expenses

When we talk about how we use financial contributions, we’re really talking about investments in our shared values. The majority of our spending goes to personnel; the talented individuals who write code, design interfaces, test features, and support our users. Infrastructure is the next largest expense, followed by administrative costs to keep operations running smoothly. 

Below is a breakdown of our 2024 expenses:

Community Snapshot

Contributor & Community Growth

For two decades, Thunderbird has survived and thrived because of its dedicated open-source community. In 2024, we continued using our Bitergia dashboard to give our community a clear view of the project’s overall activity across the board. (You can read more about how we collaborated on and use this beneficial tool here.)

This dashboard helps us track participation, identify and celebrate successes, and find areas to improve, which is especially important as we expand the Thunderbird ecosystem with new products and services. 

For this report, we’ve highlighted some of the most notable community metrics and growth milestones from 2024. 

For reference, GitHub and Bugzilla measure developer contributions. TopicBox measures activity across our many mailing lists. Pontoon measures the activity from volunteers who help us translate and localize Thunderbird. SUMO (the Mozilla support website) measures the impact of Thunderbird’s support volunteers who engage with our users and respond to their varied support questions.

We estimate that in 2024, the total number of people who contributed to Thunderbird – by writing code, answering support questions, providing translations, or other meaningful areas – is more than 20,000. 

It’s especially encouraging to see the number of translation locales increase from 58 to 70, as Thunderbird continues to find new users around the world. 

But there are areas of opportunity, too. For example, making it less complicated for people to start contributing to Thunderbird. We’ve started addressing this by recording two Community Office Hours videos: one on how to write Knowledge Base articles, and one on how to effectively answer questions on the Mozilla Support website.

Mozilla Connect is another portal that lets anyone interested in the betterment of Thunderbird suggest ideas, openly discuss them, and vote on them. In 2024, four of your desktop ideas and four of your ideas for our relatively new mobile apps were implemented, and we saw more than 500 new thoughtful ideas suggested across mobile and desktop. Our staff and community are watching for your ideas, so keep them coming!

Thank you

As we close out this year’s State of the Bird, we want to once again shine a light on the incredible global community of Thunderbird supporters. Whether you’ve contributed your valuable time, financial donations, or simply shared Thunderbird with colleagues, friends, and family, your support continues to brighten Thunderbird’s future. 

After all, products aren’t just numbers on a chart. Products are the people who create them, support them, improve them, and believe in crucial concepts like privacy, digital wellbeing, and open standards. 

We’re so very grateful to you.

The post State Of The Bird 2024/25 appeared first on The Thunderbird Blog.

Niko MatsakisSymmACP: extending Zed's ACP to support Composable Agents

This post describes SymmACP – a proposed extension to Zed’s Agent Client Protocol that lets you build AI tools like Unix pipes or browser extensions. Want a better TUI? Found some cool slash commands on GitHub? Prefer a different backend? With SymmACP, you can mix and match these pieces and have them all work together without knowing about each other.

This is pretty different from how AI tools work today, where everything is a monolith – if you want to change one piece, you’re stuck rebuilding the whole thing from scratch. SymmACP allows you to build out new features and modes of interactions in a layered, interoperable way. This post explains how SymmACP would work by walking through a series of examples.

Right now, SymmACP is just a thought experiment. I’ve sketched these ideas to the Zed folks, and they seemed interested, but we haven’t yet discussed the details described in this post. My plan is to start prototyping in Symposium – if you think the ideas I’m discussing here are exciting, please join the Symposium Zulip and let’s talk!

“Composable agents” let you build features independently and then combine them

I’m going to explain the idea of “composable agents” by walking through a series of features. We’ll start with a basic CLI agent1 tool – basically a chat loop with access to some MCP servers so that it can read/write files and execute bash commands. Then we’ll show how you could add several features on top:

  1. Addressing time-blindness by helping the agent know what time it is.
  2. Injecting context and “personality” to the agent.
  3. Spawning long-running, asynchronous tasks.
  4. A copy of Q CLI’s /tangent mode that lets you do a bit of “off the books” work that gets removed from your history later.
  5. Implementing Symposium’s interactive walkthroughs, which give the agent a richer vocabulary for communicating with you than just text.
  6. Smarter tool delegation.

The magic trick is that each of these features will be developed as separate repositories. What’s more, they could be applied to any base tool you want, so long as it speaks SymmACP. And you could also combine them with different front-ends, such as a TUI, a web front-end, builtin support from Zed or IntelliJ, etc. Pretty neat.

My hope is that if we can centralize on SymmACP, or something like it, then we could move from everybody developing their own bespoke tools to an interoperable ecosystem of ideas that can build off of one another.

let mut SymmACP = ACP

SymmACP begins with ACP, so let’s explain what ACP is. ACP is a wonderfully simple protocol that lets you abstract over CLI agents. Imagine if you were using an agentic CLI tool except that, instead of communicating over the terminal, the CLI tool communicates with a front-end via JSON-RPC messages, currently sent over stdin/stdout.

flowchart LR
    Editor <-.->|JSON-RPC via stdin/stdout| Agent[CLI Agent]
  

When you type something into the GUI, the editor sends a JSON-RPC message to the agent with what you typed. The agent responds with a stream of messages containing text and images. If the agent decides to invoke a tool, it can request permission by sending a JSON-RPC message back to the editor. And when the agent has completed, it responds to the editor with an “end turn” message that says “I’m ready for you to type something else now”.
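To make the transport a bit more concrete, here is roughly what such an exchange might look like as JSON-RPC. The method and field names below are loose approximations for illustration (check the ACP spec for the real schema), and the sketch assumes the serde_json crate.

use serde_json::json;

fn main() {
    // Editor -> agent: a prompt for the current session. Method and field
    // names are illustrative approximations, not the verbatim ACP schema.
    let prompt = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "session/prompt",
        "params": {
            "sessionId": "sess-123",
            "prompt": [{ "type": "text", "text": "Help me debug this code" }]
        }
    });

    // Agent -> editor: one streamed chunk of the reply, sent as a
    // notification (no id, no response expected).
    let chunk = json!({
        "jsonrpc": "2.0",
        "method": "session/update",
        "params": {
            "sessionId": "sess-123",
            "update": { "type": "text_chunk", "text": "I can see the issue..." }
        }
    });

    println!("{prompt}\n{chunk}");
}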

sequenceDiagram
    participant E as Editor
    participant A as Agent
    participant T as Tool (MCP)
    
    E->>A: prompt("Help me debug this code")
    A->>E: request_permission("Read file main.rs")
    E->>A: permission_granted
    A->>T: read_file("main.rs")
    T->>A: file_contents
    A->>E: text_chunk("I can see the issue...")
    A->>E: text_chunk("The problem is on line 42...")
    A->>E: end_turn
  

Telling the agent what time it is

OK, let’s tackle our first feature. If you’ve used a CLI agent, you may have noticed that they don’t know what time it is – or even what year it is. This may sound trivial, but it can lead to some real mistakes. For example, they may not realize that some information is outdated. Or when they do web searches for information, they can search for the wrong thing: I’ve seen CLI agents search the web for “API updates in 2024” for example, even though it is 2025.

To fix this, many CLI agents will inject some extra text along with your prompt, something like <current-date date="2025-10-08" time="HH:MM:SS"/>. This gives the LLM the context it needs.

So how could we use ACP to build that? The idea is to create a proxy. This proxy would wrap the original ACP server:

flowchart LR
    Editor[Editor/VSCode] <-->|ACP| Proxy[Datetime Proxy] <-->|ACP| Agent[CLI Agent]
  

This proxy will take every “prompt” message it receives and decorate it with the date and time:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("What day is it?")
    P->>A: prompt("<current-date .../> What day is it?")
    A->>P: text_chunk("It is 2025-10-08.")
    P->>E: text_chunk("It is 2025-10-08.")
    A->>P: end_turn
    P->>E: end_turn
  

Simple, right? And of course this can be used with any editor and any ACP-speaking tool.
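Here is a minimal sketch of what that proxy’s core loop could look like. It assumes the serde_json and chrono crates, and the field names (method, params, prompt) are guesses at the message shape rather than the exact ACP schema; a real proxy would also shuttle messages in the agent-to-editor direction.

use std::io::{self, BufRead, Write};

use serde_json::Value;

// Prepend a <current-date/> tag to every prompt flowing from editor to agent.
// The field names here are assumptions, not the exact ACP schema.
fn decorate_prompt(msg: &mut Value) {
    if msg["method"] == "session/prompt" {
        if let Some(blocks) = msg["params"]["prompt"].as_array_mut() {
            let now = chrono::Local::now();
            let tag = serde_json::json!({
                "type": "text",
                "text": format!(
                    "<current-date date=\"{}\" time=\"{}\"/>",
                    now.format("%Y-%m-%d"),
                    now.format("%H:%M:%S")
                )
            });
            blocks.insert(0, tag);
        }
    }
}

fn main() -> io::Result<()> {
    // Read newline-delimited JSON-RPC from the editor on stdin and forward it,
    // decorated, to the agent on stdout.
    let stdin = io::stdin();
    let mut stdout = io::stdout().lock();
    for line in stdin.lock().lines() {
        let mut msg: Value = serde_json::from_str(&line?)?;
        decorate_prompt(&mut msg);
        writeln!(stdout, "{msg}")?;
    }
    Ok(())
}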

Next feature: Injecting “personality” to the agent

Let’s look at another feature that basically “falls out” from ACP: injecting personality. Most agents give you the ability to configure “context” in various ways – or what Claude Code calls memory. This is useful, but I and others have noticed that if what you want is to change how Claude “behaves” – i.e., to make it more collaborative – it’s not really enough. You really need to kick off the conversation by reinforcing that pattern.

In Symposium, the “yiasou” prompt (also available as “hi”, for those of you who don’t speak Greek 😛) is meant to be run as the first thing in the conversation. But there’s nothing an MCP server can do to ensure that the user kicks off the conversation with /symposium:hi or something similar. Of course, if Symposium were implemented as an ACP Server, we absolutely could do that:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("I'd like to work on my document")
    P->>A: prompt("/symposium:hi")
    A->>P: end_turn
    P->>A: prompt("I'd like to work on my document")
    A->>P: text_chunk("Sure! What document is that?") 
    P->>E: text_chunk("Sure! What document is that?") 
    A->>P: end_turn
    P->>E: end_turn
  

Proxies are a better version of hooks

Some of you may be saying, “hmm, isn’t that what hooks are for?” And yes, you could do this with hooks, but there are two problems with that. First, hooks are non-standard, so you have to do it differently for every agent.

The second problem with hooks is that they’re fundamentally limited to what the hook designer envisioned you might want. You only get hooks at the places in the workflow that the tool gives you, and you can only control what the tool lets you control. The next feature starts to show what I mean: as far as I know, it cannot readily be implemented with hooks the way I would want it to work.

Next feature: long-running, asynchronous tasks

Let’s move on to our next feature, long-running asynchronous tasks. This feature is going to have to go beyond the current capabilities of ACP into the expanded “SymmACP” feature set.

Right now, when the agent invokes an MCP tool, it executes in a blocking way. But sometimes the task it is performing might be long and complicated. What you would really like is a way to “start” the task and then go back to working. When the task is complete, you (and the agent) could be notified.

This comes up for me a lot with “deep research”. A big part of my workflow is that, when I get stuck on something I don’t understand, I deploy a research agent to scour the web for information. Usually what I will do is ask the agent I’m collaborating with to prepare a research prompt summarizing the things we tried, what obstacles we hit, and other details that seem relevant. Then I’ll pop over to claude.ai or Gemini Deep Research and paste in the prompt. This will run for 5-10 minutes and generate a markdown report in response. I’ll download that and give it to my agent. Very often this lets us solve the problem.2

This research flow works well but it is tedious and requires me to copy-and-paste. What I would ideally want is an MCP tool that does the search for me and, when the results are done, hands them off to the agent so it can start processing immediately. But in the meantime, I’d like to be able to continue working with the agent while we wait. Unfortunately, the protocol for tools provides no mechanism for asynchronous notifications like this, from what I can tell.

SymmACP += tool invocations + unprompted sends

So how would I do it with SymmACP? Well, I would want to extend the ACP protocol as it is today in two ways:

  1. I’d like the ACP proxy to be able to provide tools that the proxy will execute. Today, the agent is responsible for executing all tools; the ACP protocol only comes into play when requesting permission. But it’d be trivial to have MCP tools where, to execute the tool, the agent sends back a message over ACP instead.
  2. I’d like to have a way for the agent to initiate responses to the editor. Right now, the editor always initiates each communication session with a prompt; but, in this case, the agent might want to send messages back unprompted.

In that case, we could implement our Research Proxy like so:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("Why is Rust so great?")
    P->>A: prompt("Why is Rust so great?")
    A->>P: invoke tool("begin_research")
    activate P
    P->>A: ok
    A->>P: "I'm looking into it!"
    P->>E: "I'm looking into it!"
    A->>P: end_turn
    P->>E: end_turn

    Note over E,A: Time passes (5-10 minutes) and the user keeps working...
    Note over P: Research completes in background
    
    P->>A: <research-complete/>
    deactivate P
    A->>P: "Research says Rust is fast"
    P->>E: "Research says Rust is fast"
    A->>P: end_turn
    P->>E: end_turn
  

What’s cool about this is that the proxy encapsulates the entire flow: it knows how to do the research, and it manages notifying the various participants when the research completes. (Also, this leans on one detail I left out, which is that )

Next feature: tangent mode

Let’s explore our next feature, Q CLI’s /tangent mode. This feature is interesting because it’s a simple (but useful!) example of history editing. The way /tangent works is that, when you first type /tangent, Q CLI saves your current state. You can then continue as normal but when you next type /tangent, your state is restored to where you were. This, as the name suggests, lets you explore a side conversation without polluting your main context.

The basic idea for supporting tangent in SymmACP is that the proxy is going to (a) intercept the tangent prompt and remember where it began; (b) allow the conversation to continue as normal; and then (c) when it’s time to end the tangent, create a new session and replay the history up until the point of the tangent3.

SymmACP += replay

You can almost implement “tangent” in ACP as it is, but not quite. In ACP, the agent always owns the session history. The editor can create a new session or load an older one; when loading an older one, the agent “replays” the events so that the editor can reconstruct the GUI. But there is no way for the editor to “replay” or construct a session to the agent. Instead, the editor can only send prompts, which will cause the agent to reply. In this case, what we want is to be able to say “create a new chat in which I said this and you responded that” so that we can set up the initial state. This way we could easily create a new session that contains the messages from the old one.
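As a thought experiment, the missing piece might be as small as letting session creation carry pre-seeded history. Everything in the sketch below (the method name, the replay field, the role/content layout) is invented for illustration; nothing like it exists in ACP today. It assumes the serde_json crate.

use serde_json::json;

fn main() {
    // Hypothetical request a tangent proxy could send when the tangent ends:
    // "create a new session in which I said this and you responded that".
    // None of these names are part of ACP; this is one possible shape.
    let request = json!({
        "jsonrpc": "2.0",
        "id": 7,
        "method": "session/new",
        "params": {
            "replay": [
                { "role": "user",      "content": "Hi there!" },
                { "role": "assistant", "content": "Hello! What are we working on?" }
            ]
        }
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}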

So here’s how this would work:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("Hi there!")
    P->>A: prompt("Hi there!")

    Note over E,A: Conversation proceeds
    
    E->>P: prompt("/tangent")
    Note over P: Proxy notes conversation state
    P->>E: end_turn
    E->>P: prompt("btw, ...")
    P->>A: prompt("btw, ...")

    Note over E,A: Conversation proceeds
    
    E->>P: prompt("/tangent")
    
    P->>A: new_session
    P->>A: prompt("Hi there!")    
    Note over P,A: ...Proxy replays conversation...
  

Next feature: interactive walkthroughs

One of the nicer features of Symposium is the ability to do interactive walkthroughs. These consist of an HTML sidebar as well as inline comments in the code:

Walkthrough screenshot

Right now, this is implemented by a kind of hacky dance:

  • The agent invokes an MCP tool and sends it the walkthrough in markdown. This markdown includes comments meant to be placed on particular lines, identified not by line number (agents are bad at line numbers) but by symbol names or search strings.
  • The MCP tool parses the markdown, determines the line numbers for comments, and creates HTML. It sends that HTML over IPC to the VSCode extension.
  • The VSCode extension receives the IPC message, displays the HTML in the sidebar, and creates the comments in the code.

It works, but it’s a giant Rube Goldberg machine.

SymmACP += Enriched conversation history

With SymmACP, we would structure the walkthrough mechanism as a proxy. Just as today, it would provide an MCP tool to the agent to receive the walkthrough markdown. It would then convert that into the HTML to display on the side along with the various comments to embed in the code. But this is where things are different.

Instead of sending that content over IPC, what I would want to do is to make it possible for proxies to deliver extra information along with the chat. This is relatively easy to do in ACP as is, since it provides for various capabilities, but I think I’d want to go one step further.

I would have a proxy layer that manages walkthroughs. As we saw before, it would provide a tool. But there’d be one additional thing, which is that, beyond just a chat history, it would be able to convey additional state. I think the basic conversation structure is like:

  • Conversation
    • Turn
      • User prompt(s) – could be zero or more
      • Response(s) – could be zero or more
      • Tool use(s) – could be zero or more

but I think it’d be useful (a) to be able to attach metadata to any of those things, e.g., to add extra context about the conversation or about a specific turn (or even a specific prompt), and (b) to support additional kinds of events. For example, tool approvals are an event. And presenting a walkthrough and adding annotations are events too.
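To make that concrete, here is one possible Rust shape for such an enriched history. The type and field names are mine, not from any spec; the point is just that turns carry open-ended metadata and typed events alongside the plain prompt/response text.

use std::collections::BTreeMap;

// Open-ended metadata bag that any proxy layer can attach data to.
type Metadata = BTreeMap<String, serde_json::Value>;

// Purely illustrative structure; none of these names come from ACP.
struct Conversation {
    turns: Vec<Turn>,
    metadata: Metadata,
}

struct Turn {
    prompts: Vec<String>,   // zero or more user prompts
    responses: Vec<String>, // zero or more agent responses
    events: Vec<Event>,     // tool uses, approvals, walkthroughs, ...
    metadata: Metadata,
}

// Additional kinds of events beyond plain text. A front-end that knows about
// a variant renders it specially; one that doesn't can degrade gracefully.
enum Event {
    ToolUse { name: String, input: serde_json::Value },
    ToolApproval { name: String, approved: bool },
    Walkthrough { html: String, comments: Vec<CodeComment> },
}

struct CodeComment {
    file: String,
    line: u32,
    body: String,
}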

The way I imagine it, one of the core things in SymmACP would be the ability to serialize your state to JSON. You’d be able to ask a SymmACP participant to summarize a session. They would in turn ask any delegates to summarize and then add their own metadata along the way. You could also send the request in the other direction – e.g., the agent might present its state to the editor and ask it to augment it.

Enriched history would let walkthroughs be extra metadata

This would mean a walkthrough proxy could add extra metadata into the chat transcript like “the current walkthrough” and “the current comments that are in place”. Then the editor would either know about that metadata or not. If it doesn’t, you wouldn’t see it in your chat. Oh well – or perhaps we do something HTML-like, where there’s a way to “degrade gracefully” (e.g., the walkthrough could be presented as a regular “response” but with some metadata that, if you know to look, tells you to interpret it differently). But if the editor DOES know about the metadata, it interprets it specially, throwing the walkthrough up in a panel and adding the comments into the code.

With enriched histories, I think we can even say that in SymmACP, the ability to load, save, and persist sessions itself becomes an extension, something that can be implemented by a proxy; the base protocol only needs the ability to conduct and serialize a conversation.

Final feature: Smarter tool delegation.

Let me sketch out another feature that I’ve been noodling on that I think would be pretty cool. It’s well known that there’s a problem that LLMs get confused when there are too many MCP tools available. They get distracted. And that’s sensible, so would I, if I were given a phonebook-size list of possible things I could do and asked to figure something out. I’d probably just ignore it.

But how do humans deal with this? Well, we don’t take the whole phonebook – we get a shorter list of categories of options and then we drill down. So I go to the File Menu and then I get a list of options, not a flat list of commands.

I wanted to try building an MCP tool for IDE capabilities that was similar. There are a bajillion things that a modern IDE can “do”. It can find references. It can find definitions. It can get type hints. It can do renames. It can extract methods. In fact, the list is even open-ended, since extensions can provide their own commands. I don’t know what all those things are but I have a sense for the kinds of things an IDE can do – and I suspect models do too.

What if you gave them a single tool, “IDE operation”, and they could use plain English to describe what they want? e.g., ide_operation("find definition for the ProxyHandler that refers to HTTP proxies"). Hmm, this is sounding a lot like a delegate, or a sub-agent. Because now you need to use a second LLM to interpret that request – you probably want to do something like, give it a list of suggested IDE capabilities and the ability to find out full details, and ask it to come up with a plan (or maybe directly execute the tools) to find the answer.
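Here is a purely hypothetical sketch of what that single coarse-grained tool might look like; none of these names exist anywhere, and the planning prompt is just the idea from the previous paragraph written down.

// Input to the one-and-only IDE tool: a plain-English request.
struct IdeRequest {
    // e.g. "find the definition of ProxyHandler that refers to HTTP proxies"
    description: String,
}

// A short list of capability *categories* rather than a phonebook of commands.
const CAPABILITY_CATEGORIES: &[&str] = &[
    "navigation (find references, go to definition)",
    "refactoring (rename, extract method)",
    "type information (hover, inlay hints)",
];

// Build the prompt that a fresh delegate session would receive. In the SymmACP
// world, that delegate would be a second session on the same agent, seeded with
// only the context the tool chooses to carry over.
fn plan_ide_operation(req: &IdeRequest) -> String {
    format!(
        "Using these primitive operations: {:?}, suggest a plan to accomplish: {}",
        CAPABILITY_CATEGORIES, req.description
    )
}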

As it happens, MCP has a capability to enable tools to do this – it’s called (somewhat oddly, in my opinion) “sampling”. It allows for “callbacks” from the MCP tool to the LLM. But literally nobody implements it, from what I can tell.4 But sampling is kind of limited anyway. With SymmACP, I think you could do much more interesting things.

SymmACP.contains(simultaneous_sessions)

The key is that ACP already permits a single agent to “serve up” many simultaneous sessions. So that means that if I have a proxy, perhaps one supplying an MCP tool definition, I could use it to start fresh sessions – combine that with the “history replay” capability I mentioned above, and the tool can control exactly what context to bring over into that session to start from, as well, which is very cool (that’s a challenge for MCP servers today, they don’t get access to the conversation history).

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    A->>P: ide_operation("...")
    activate P
    P->>A: new_session
    activate P
    activate A
    P->>A: prompt("Using these primitive operations, suggest a way to do '...'")
    A->>P: ...
    A->>P: end_turn
    deactivate P
    deactivate A
    Note over P: performs the plan
    P->>A: result from tool
    deactivate P
  

Conclusion

Ok, this post sketched a variant on ACP that I call SymmACP. SymmACP extends ACP with

  • the ability for either side to provide the initial state of a conversation, not just the server
  • the ability for an “editor” to provide an MCP tool to the “agent”
  • the ability for agents to respond without an initial prompt
  • the ability to serialize conversations and attach extra state (already kind of present)

Most of these are modest extensions to ACP, in my opinion, and easily doable in a backwards-compatible fashion just by adding new capabilities. But together they unlock the ability for anyone to craft extensions to agents and deploy them in a composable way. I am super excited about this. This is exactly what I wanted Symposium to be all about.

It’s worth noting the old adage: “with great power, comes great responsibility”. These proxies and ACP layers I’ve been talking about are really like IDE extensions. They can effectively do anything you could do. There are obvious security concerns. Though I think that approaches like Microsoft’s Wassette are key here – it’d be awesome to have a “capability-based” notion of what a “proxy layer” is, where everything compiles to WASM, and where users can tune what a given proxy can actually do.

I plan to start sketching a plan to drive this work in Symposium and elsewhere. My goal is to have a completely open and interoperable client, one that can be based on any agent (including local ones) and where you can pick and choose which parts you want to use. I expect to build out lots of custom functionality to support Rust development (e.g., explaining and diagnosing trait errors using the new trait solver is high on my list…and macro errors…) but also to have other features like walkthroughs, collaborative interaction style, etc that are all language independent – and I’d love to see language-focused features for other languages, especially Python and TypeScript (because “the new trifecta”) and Swift and Kotlin (because mobile). If that vision excites you, come join the Symposium Zulip and let’s chat!

Appendix: A guide to the agent protocols I’m aware of

One question I’ve gotten when discussing this is how it compares to the other host of protocols out there. Let me give a brief overview of the related work and how I understand its pros and cons:

  • Model context protocol (MCP): The queen of them all. A protocol that provides a set of tools, prompts, and resources up to the agent. Agents can invoke tools by supplying appropriate parameters, which are JSON. Prompts are shorthands that users can invoke using special commands like / or @, they are essentially macros that expand “as if the user typed it” (but they can also have parameters and be dynamically constructed). Resources are just data that can be requested. MCP servers can either be local or hosted remotely. Remote MCP has only recently become an option and auth in particular is limited.
    • Comparison to SymmACP: MCP provides tools that the agent can invoke. SymmACP builds on it by allowing those tools to be provided by outer layers in the proxy chain. SymmACP is oriented at controlling the whole chat “experience”.
  • Zed’s Agent Client Protocol (ACP): The basis for SymmACP. Allows editors to create and manage sessions. Focused only on local sessions, since your editor runs locally.
    • Comparison to SymmACP: That’s what this post is all about! SymmACP extends ACP with new capabilities that let intermediate layers manipulate history, provide tools, and provide extended data upstream to support richer interaction patterns than just chat. PS I expect we may want to support more remote capabilities, but it’s kinda orthogonal in my opinion (e.g., I’d like to be able to work with an agent running over in a cloud-hosted workstation, but I’d probably piggyback on ssh for that).
  • Google’s Agent-to-Agent Protocol (A2A) and IBM’s Agent Communication Protocol (ACP)5: From what I can tell, Google’s “agent-to-agent” protocol is kinda like a mix of MCP and OpenAPI. You can ping agents that are running remotely and get them to send you “agent cards”, which describe what operations they can perform, how you authenticate, and other stuff like that. It looks to me quite similar to MCP except that it has richer support for remote execution and in particular supports things like long-running communication, where an agent may need to go off and work for a while and then ping you back on a webhook.
    • Comparison to MCP: To me, A2A looks like a variant of MCP that is more geared to remote execution. MCP has a method for tool discovery where you ping the server to get a list of tools; A2A has a similar mechanism with Agent Cards. MCP can run locally, which A2A cannot afaik, but A2A has more options about auth. MCP can only be invoked synchronously, whereas A2A supports long-running operations, progress updates, and callbacks. It seems like the two could be merged to make a single whole.
    • Comparison to SymmACP: I think A2A is orthogonal from SymmACP. A2A is geared to agents that provide services to one another. SymmACP is geared towards building new development tools for interacting with agents. It’s possible you could build something like SymmACP on A2A but I don’t know what you would really gain by it (and I think it’d be easy to do later).

  1. Everybody uses agents in various ways. I like Simon Willison’s “agents are models using tools in a loop” definition; I feel that an “agentic CLI tool” fits that definition, it’s just that part of the loop is reading input from the user. I think “fully autonomous” agents are a subset of all agents – many agent processes interact with the outside world via tools etc. From a certain POV, you can view the agent “ending the turn” as invoking a tool for “gimme the next prompt”. ↩︎

  2. Research reports are a major part of how I avoid hallucination. You can see an example of one such report I commissioned on the details of the Language Server Protocol here; if we were about to embark on something that required detailed knowledge of LSP, I would ask the agent to read that report first. ↩︎

  3. Alternatively: clear the session history and rebuild it, but I kind of prefer the functional view of the world, where a given session never changes. ↩︎

  4. I started an implementation for Q CLI but got distracted – and, for reasons that should be obvious, I’ve started to lose interest. ↩︎

  5. Yes, you read that right. There is another ACP. Just a mite confusing when you google search. =) ↩︎

The Mozilla BlogFirefox profiles: Private, focused spaces for all the ways you browse

Every part of your life has its own rhythm: work, school, family, personal projects. Beginning Oct. 14, we’re rolling out a new profile management feature in Firefox so you can keep them separate and create distinct spaces — each with its own bookmarks, logins, history, extensions and themes. It’s an easy way to stay organized, focused and private.

Firefox Profiles feature shown with an illustration of three foxes and a setup screen for creating and customizing browser profiles.

Spaces that lighten your load

Profiles don’t just keep you organized; they also reduce data mixing and ease cognitive load. By keeping your different roles online neatly separate, you spend less mental energy juggling contexts and avoid awkward surprises (like your weekend plans popping up in a work presentation). And, like everything in Firefox, profiles are built on our strong privacy foundation.

We also worked with disabled people to make profiles not only compliant, but genuinely delightful to use for everyone. That collaboration shaped everything from the visual design (avatars, colors, naming) to the way profiles keep sensitive data (like medical information) private. It’s an example of how designing for accessibility benefits all of us.

What makes profiles in Firefox different

Other browsers offer profiles mainly for convenience. Firefox goes further by making them part of our mission to put you in control of your online life.

  • Privacy first: Firefox is built with privacy as a default. We don’t know your age, gender, precise location, name of your profile, or other information Big Tech collects and profits from. Each profile keeps its own browsing data separate. No mixing, no surprise leaks.
  • Custom spaces: Pick colors and themes to make each profile easy to spot at a glance. You can even upload your own avatar. Your work profile can feel buttoned-up, while your personal profile reflects your style.
Firefox Profile Manager showing Work and Personal profiles, with an option to create a new one, on a desktop with a forest background.

Profiles in Firefox aren’t just a way to clean up your tabs. They’re a way to set boundaries, protect your information and make the internet a little calmer. Because when your browser respects your focus and your privacy, it frees you up to do what actually matters — work, connect, create, explore — on your own terms.

Take control of your internet

Download Firefox

The post Firefox profiles: Private, focused spaces for all the ways you browse appeared first on The Mozilla Blog.

Niko MatsakisThe Handle trait

There’s been a lot of discussion lately around ergonomic ref-counting. We had a lang-team design meeting and then a quite impactful discussion at the RustConf Unconf. I’ve been working for weeks on a follow-up post but today I realized what should’ve been obvious from the start – that if I’m taking that long to write a post, it means the post is too damned long. So I’m going to work through a series of smaller posts focused on individual takeaways and thoughts. And for the first one, I want to (a) bring back some of the context and (b) talk about an interesting question, what should we call the trait. My proposal, as the title suggests, is Handle – but I get ahead of myself.

The story thus far

For those of you who haven’t been following, there’s been an ongoing discussion about how best to have ergonomic ref counting.

This blog post is about “the trait”

The focus of this blog post is on one particular question: what should we call “The Trait”. In virtually every design, there has been some kind of trait that is meant to identify something. But it’s been hard to get a handle1 on what precisely that something is. What is this trait for and what types should implement it? Some things are clear: whatever The Trait is, Rc<T> and Arc<T> should implement it, for example, but that’s about it.

My original proposal was for a trait named Claim that was meant to convey a “lightweight clone” – but really the trait was meant to replace Copy as the definition of which clones ought to be explicit2. Jonathan Kelley had a similar proposal but called it Capture. In RFC #3680 the proposal was to call the trait Use.

The details and intent varied, but all of these attempts had one thing in common: they were very operational. That is, the trait was always being defined in terms of what it does (or doesn’t do) but not why it does it. And that I think will always be a weak grounding for a trait like this, prone to confusion and different interpretations. For example, what is a “lightweight” clone? Is it O(1)? But what about things that are O(1) with very high probability? And of course, O(1) doesn’t mean cheap – it might copy 22GB of data every call. That’s O(1).

What you want is a trait where it’s fairly clear when it should and should not be implemented, and not based on taste or subjective criteria. And Claim and friends did not meet the bar: in the Unconf, several new Rust users spoke up and said they found it very hard, based on my explanations, to judge whether their types ought to implement The Trait (whatever we call it). That has also been a persistent theme from the RFC and elsewhere.

“Shouldn’t we call it share?” (hat tip: Jack Huey)

But really there is a semantic underpinning here, and it was Jack Huey who first suggested it. Consider this question. What are the differences between cloning a Mutex<Vec<u32>> and an Arc<Mutex<Vec<u32>>>?

One difference, of course, is cost. Cloning the Mutex<Vec<u32>> will deep-clone the vector; cloning the Arc will just increment a reference count.

But the more important difference is what I call “entanglement”. When you clone the Arc, you don’t get a new value – you get back a second handle to the same value.3

Entanglement changes the meaning of the program

Knowing which values are “entangled” is key to understanding what your program does. A big part of how the borrow checker4 achieves reliability is by reducing “entanglement”, since it becomes a relative pain to work with in Rust.

Consider the following code. What will be the value of l_before and l_after?

let l_before = v1.len();
let mut v2 = v1.clone();
v2.push(new_value);
let l_after = v1.len();

The answer, of course, is “depends on the type of v1”. If v1 is a Vec, then l_after == l_before. But if v1 is, say, a struct like this one:

use std::sync::{Arc, Mutex};

struct SharedVec<T> {
    data: Arc<Mutex<Vec<T>>>
}

impl<T> SharedVec<T> {
    pub fn push(&self, value: T) {
        self.data.lock().unwrap().push(value);
    }

    pub fn len(&self) -> usize {
        self.data.lock().unwrap().len()
    }
}

then l_after == l_before + 1.

There are many types that act like a SharedVec: it’s true for Rc and Arc, of course, but also for things like Bytes and channel endpoints like Sender. All of these are examples of “handles” to underlying values and, when you clone them, you get back a second handle that is indistinguishable from the first one.

We have a name for this concept already: handles

Jack’s insight was that we should focus on the semantic concept (sharing) and not on the operational details (how it’s implemented). This makes it clear when the trait ought to be implemented. I liked this idea a lot, although I eventually decided I didn’t like the name Share. The word isn’t specific enough, I felt, and users might not realize it referred to a specific concept: “shareable types” doesn’t really sound right. But in fact there is a name already in common use for this concept: handles (see e.g. tokio::runtime::Handle).

This is how I arrived at my proposed name and definition for The Trait, which is Handle:5

/// Indicates that this type is a *handle* to some
/// underlying resource. The `handle` method is
/// used to get a fresh handle.
trait Handle: Clone {
    final fn handle(&self) -> Self {
        Clone::clone(self)
    }
}

We would lint and advise people to call handle

The Handle trait includes a method handle which is always equivalent to clone. The purpose of this method is to signal to the reader that the result is a second handle to the same underlying value.

Once the Handle trait exists, we should lint on calls to clone when the receiver is known to implement Handle and encourage folks to call handle instead:

impl DataStore {
    fn store_map(&mut self, map: &Arc<HashMap<...>>) {
        self.stored_map = map.clone();
        //                    -----
        //
        // Lint: convert `clone` to `handle` for
        // greater clarity.
    }
}

Compare the above to the version that the lint suggests, using handle, and I think you will get an idea for how handle increases clarity of what is happening:

impl DataStore {
    fn store_map(&mut self, map: &Arc<HashMap<...>>) {
        self.stored_map = map.handle();
    }
}

What it means to be a handle

The defining characteristic of a handle is that it, when cloned, results in a second value that accesses the same underlying value. This means that the two handles are “entangled”, with interior mutation that affects one handle showing up in the other. Reflecting this, most handles have APIs that consist exclusively or almost exclusively of &self methods, since having unique access to the handle does not necessarily give you unique access to the value.

Handles are generally only significant, semantically, when interior mutability is involved. There’s nothing wrong with having two handles to an immutable value, but it’s not generally distinguishable from two copies of the same value. This makes persistent collections an interesting grey area: I would probably implement Handle for something like im::Vector<T>, particularly since something like an im::Vector<Cell<u32>> would make entanglement visible, but I think there’s an argument against it.

Handles in the stdlib

In the stdlib, Handle would be implemented for exactly one Copy type (the others are values):

// Shared references, when cloned (or copied),
// create a second reference:
impl<T: ?Sized> Handle for &T {}

It would be implemented for ref-counted pointers (but not Box):

// Ref-counted pointers, when cloned,
// create a second reference:
impl<T: ?Sized> Handle for Rc<T> {}
impl<T: ?Sized> Handle for Arc<T> {}

And it would be implemented for types like channel endpoints, that are implemented with a ref-counted value under the hood:

// mpsc "senders", when cloned, create a
// second sender to the same underlying channel:
impl<T> Handle for mpsc::Sender<T> {}

Conclusion: a design axiom emerges

OK, I’m going to stop there with this “byte-sized” blog post. More to come! But before I go, let me layout what I believe to be a useful “design axiom” that we should adopt for this design:

Expose entanglement. Understanding the difference between a handle to an underlying value and the value itself is necessary to understand how Rust works.

The phrasing feels a bit awkward, but I think it is the key bit anyway.


  1. That, my friends, is foreshadowing. Damn I’m good. ↩︎

  2. I described Claim as a kind of “lightweight clone” but in the Unconf someone pointed out that “heavyweight copy” was probably a better description of what I was going for. ↩︎

  3. And, not coincidentally, the types where cloning leads to entanglement tend to also be the types where cloning is cheap. ↩︎

  4. and functional programming… ↩︎

  5. The “final” keyword was proposed by Josh Triplett in RFC 3678. It means that impls cannot change the definition of Handle::handle. There’s been some back-and-forth on whether it ought to be renamed or made more general or what have you; all I know is, I find it an incredibly useful concept for cases like this, where you want users to be able to opt-in to a method being available but not be able to change what it does. You can do this in other ways, they’re just weirder. ↩︎

Firefox NightlySmarter Search, Smoother Tools – These Weeks in Firefox: Issue 190

Highlights

  • Google Lens support has been turned on by default in the latest nightly builds.
    • When Google is your default search engine and you right-click an image, you’ll see a new context menu entry:

Context menu entry: Search Image with Google Lens

  • Semantic history search has now also been enabled in the latest nightly and beta builds.
    • This uses a local machine learning model to suggest entries from history that are related to your searches based on natural language understanding in the address bar.
  • Alexandre Poirot [:ochameau] improved the editor by displaying an editor widget where you can navigate between the different calls to a given function (#1908889)

DevTools is displaying an editor widget

  • The WebExtension cookies.set API method rejection on invalid cookies is riding the Firefox 145 release train (after having been kept as a Nightly-only behavior for three Nightly cycles) – Bug 1976509

 

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Isaac Briandt

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of finalizing the Add-ons telemetry migration from legacy telemetry to Glean, the EnvironmentAddonBuilder (responsible for collecting the activeAddons/Theme/GMPlugins metrics in Glean and mirror it in the legacy telemetry environment) has been refactored out of the TelemetryEnvironment ES module – Bug 1981496

DevTools

WebDriver BiDi

Lint, Docs and Workflow

Information Management/Sidebar

Profile Management

  • Jared fixed bug 1941854, [Windows] Additional window (skeleton UI) opens with Profile Selector on Firefox startup
  • Maile fixed bug 1955173, The favicons of the Profiles about: pages are not displayed properly in the List all tabs menu
  • Niklas fixed bug 1965598, The Usage Profile Group ID should be shared by all profiles in a group
  • Jaws fixed bug 1987317, Firefox won’t launch a profile if the library of that profile is open
  • Jaws fixed bug 1988882, SelectableProfileService uses the wrong value for the rgb color property
  • Jaws fixed bug 1990020, Small fixes in SelectableProfileService

Search and Navigation

  • Work continues on modularising and re-using the address bar code to replace the existing search bar.
    • This will allow us to simplify the existing code, remove dependence on the toolkit autocomplete widget, and bring more features to the separate search bar.
  • Search Engines identifiers and telemetry.
    • We’ve now removed nsISearchEngine.identifier, and deprecated nsISearchEngine.telemetryId. nsISearchEngine.id still exists.
      • These are fields that would contain a mixture of information about a search engine (an identifier, partner code and sometimes more). This would make analysis via telemetry more difficult.
    • If you’re reporting search engine information either via telemetry or to other systems, please use the separate id / partnerCode fields on nsISearchEngine or check with the search team for your case.

Storybook/Reusable Components/Acorn Design System

  • moz-button supports type="split" (Bug 1858811). Setting menuId on the split button links its “More options” button to a panel-list with the same id. (Storybook)

Split button component

  • Support for the support-page attribute was added to the moz-box-item (Bug 1990839)
  • New --font-size-xxlarge (2.2rem – 33px) token was added. (Bug 1961988)
  • Usage of border-radius was updated to use design tokens values (Bug 1983938)

Mozilla ThunderbirdVIDEO: Conversation View

Welcome back to another edition of the Community Office Hours! This month, we’re showing you our first steps towards a long awaited feature: a genuine Conversation View! Our guests are Alessandro Castellani, Director of Desktop and Mobile Apps and Geoff Lankow, Sr. Staff Software Engineer on the Desktop team. They recently attended a work week in Vancouver that brought together developers and designers to create our initial vision and plan to bring Conversation View from dream to reality. Before Geoff flew home, he joined Alessandro and us to discuss his backend database work that will make Conversation View possible. We also had a peek at the workweek itself, other features possible with our new database, and our tentative delivery timeline.

We’ll be back next month with an Office Hours all about Exchange Support for email, which is landing soon in our monthly Release channel.

September Office Hours: Conversation View

Some of you might be asking, “what IS Conversation View?” Basically, it’s a Gmail-like visualization of a message thread when reading emails. So, in contrast to the current threaded view, you have all the messages in a thread. This includes both your replies and any other messages that may have been moved to a different folder.

So, why hasn’t Thunderbird been able to do this already? The short answer is that our code is old. Netscape Navigator old. Our current ‘database,’ Mork, makes a mail folder summary (an .msf file) per folder. These files are text-based unicode and are NOT human readable. In Thunderbird 3, we introduced Gloda, our Global Search and Indexer, to try and work around Mork’s limitations. It indexes what’s in the .msf file and stores the data in a SQLite file. But as you might already know, Gloda itself is clunky and slow.

Modern Solutions for Modern Problems

If we want Conversation View (and other features users now expect), we need to bring Thunderbird further into the 21st century. Hence, our work on a new database, which we’re calling Panorama. It’s a single SQLite database with all your messages. Panorama indexes emails as soon as they’re received, and since it’s SQLite, it’s not only fast, but it can be read by so many tools.

Since all of your messages will be in a single SQLite database, we can do more than enable a true Conversation view. Panorama will improve global search, enable improved filters, and more. Needless to say, we’re excited about all the possibilities!
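To give a flavor of why this matters, here is a purely hypothetical sketch (the real Panorama schema is still being designed, and the table and column names below are invented, as is the use of the rusqlite crate): pulling together a whole conversation, regardless of which folder each message lives in, becomes a single query.

use rusqlite::{params, Connection, Result};

// Fetch every message in a conversation, wherever it's filed.
// The `messages` table and its columns are invented for illustration only.
fn conversation(db: &Connection, thread_id: i64) -> Result<Vec<(String, String)>> {
    let mut stmt = db.prepare(
        "SELECT subject, folder FROM messages
         WHERE thread_id = ?1
         ORDER BY date_received",
    )?;
    let rows = stmt.query_map(params![thread_id], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
    })?;
    rows.collect()
}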

Conversation View Workweek

To get these possibilities started, we decided to bring developers and designers together for a Conversation View Workweek in Vancouver in early September. This brought people out of Zoom calls, emails, and Matrix messages, and across the Pacific Ocean in Geoff’s case, into one place to discuss technical and design challenges.

We’ve spoken previously about our design system and how we’ve collaborated between design and development on features like Account Hub. In-person collaboration, especially for something as complicated as a new database and message view, was invaluable. By the end of the week, developers and designers alike had plenty to show for their efforts.

Next Steps

Before you get too excited, the new database and Conversation view won’t land until after next year’s ESR release. There’s a lot of work to do, including testing Panorama in a standalone space until we’re ready to run Mork and Panorama alongside each other, along with the old and new code referencing each database. We need the migration to be seamless and easily reversible, and so we want to take the time to get this absolutely right.

Want to stay up to date on our progress? We recommend subscribing to our Planning and UX mailing lists, State of the Thunder videos and blog posts, and the meta bug on Bugzilla.

VIDEO (Also on Peertube):

Slides:

Resources:

The post VIDEO: Conversation View appeared first on The Thunderbird Blog.

The Mozilla BlogAnonym and Snap partner to unlock increased performance for advertisers

The Anonym wordmark and the Snap, Inc. logo are shown side by side.

An ads milestone in marketing reach without data risk.

The ad industry is shifting, and with it comes a clear need for advertisers to use data responsibly while still proving impact. Advertisers face a false choice between protecting privacy and proving performance. Anonym exists to prove they can have both — and this week marks a major milestone in that mission.

Today we announced a new partnership with Snap Inc., giving advertisers a way to use more of their first-party data safely and effectively. This collaboration shows what’s possible when privacy and performance go hand in hand: Marketers can unlock real insights into how campaigns drive results, without giving up data control.

Unleashing first-party data that’s often untapped

Unlocking value while maintaining privacy of advertisers’ sensitive first-party (1P) data has long been a challenge for advertisers concerned with exposure or technical friction. We set out to change this equation, enabling brands to safely activate data sets to measure conversion lift and attribution.

With Snapchat campaigns, advertisers can now bring first-party data that’s typically been inaccessible into play and understand how ads on the platform drive real-world actions — from product discovery to purchase. Instead of relying only on proxy signals or limited datasets, brands can generate more complete, incrementality-based insights on their Snapchat performance, gaining a clearer picture of the channel’s true contribution to business outcomes.

“Marketers possess deep reserves of first-party data that too often sits idle because it’s seen as difficult or risky to use,” said Graham Mudd, Senior Vice President, Product, Mozilla and Anonym co-founder. “Our partnership with Snap gives advertisers the power to prove outcomes with confidence, and do it in a way that is both tightly controlled and insight-rich.”

Snapchat audience scale: Reach meets relevance

With a reach of more than 930 million monthly active users (MAUs) globally, including 469 million daily active users, Snap’s rapidly growing audience makes it a uniquely powerful marketing channel. This breadth of reach is especially appealing to advertisers who previously avoided activating sensitive data, knowing they can now connect securely with high-value Snapchatters at scale.

Our solution is designed for ease of use, requiring minimal technical resources and enabling advertisers to go from kickoff to measurement reporting within weeks. Our collaboration with Snap furthers the mission of lowering barriers to entry in advertising, and enables brands of all sizes to confidently activate their competitive insights on Snapchat.

“Snapchat is where people make real choices, and advertisers need simple, clear insights into how their campaigns perform,” said Elena Bond, Head of Marketing Science, Snap Inc. “By working with Anonym, we’re making advanced measurement accessible to more brands — helping them broaden their reach, uncover deeper insights, and prove results, all while maintaining strict control of their data.”

How Anonym works: Simple, secure, scalable

Using end-to-end encryption, trusted execution environments (TEE), and differential privacy to guarantee protection and streamline compliance, Anonym helps advertisers connect with new, high-value customers and analyze campaign effectiveness without giving up data control. Strategic reach and actionable measurement are achieved with:

  • Advertiser-controlled: First-party data is never transferred to the ad platform.
  • Minimal technical lift: From campaign start to measurement, reporting can be completed in weeks—no heavy engineering or data science overhead.
  • Performance-focused: The outcome is clear insights into campaign lift and attribution, powering better investment decisions.
  • Regulation-ready: Provides advertisers with tools to help meet evolving privacy requirements, supporting responsible data use as rules change.

Anonym and Snap’s collaboration coincides with Advertising Week New York 2025, where measurement and data innovation will be in sharp focus. 

A teal lock icon next to the bold text "Anonym" on a black background.

Performance, powered by privacy

Learn more about Anonym

The post Anonym and Snap partner to unlock increased performance for advertisers appeared first on The Mozilla Blog.

Support.Mozilla.OrgAsk a Fox: A full week celebration of community power

From September 22–28, the Mozilla Support team ran our first-ever Mozilla – Ask a Fox virtual hackathon. In collaboration with the Thunderbird team, we invited contributors, community members, and staff to jump into the Mozilla Community Forums, lend a hand to Firefox and Thunderbird users, and experience the power of Mozillians coming together.

Rallying the Community

The idea was simple: we wanted to bring not only our long-time community members, but also newcomers and Mozilla staff together for one week of focused engagement. The result was extraordinary.

  • The event generated strong momentum for both new and returning community members. This was reflected in the significant growth in total contributors, which rose by 41.6%.
  • For the past year, our Community Forum had been struggling to maintain a strong reply rate as inbound questions grew. During the event, however, we achieved our highest weekly reply rate of the year, which was more than 50% above our daily average from the first half of 2025.
  • Time to first response (TTFR) also improved by 44.6%, signaling a significant improvement in community responsiveness. The event also highlighted the importance of TTFR not just for users, but for the community as a whole. We saw a clear correlation: the faster users received their first reply, the more likely they were to return and continue the conversation.

Together, we showed just how responsive and effective our community can be when we rally around a common goal.

More Than Answering Forum Questions

Ask a Fox wasn’t only about answering questions—it was about connection. Throughout the week, we hosted special AMAs with the WebCompat, Web Performance, and Thunderbird teams, giving contributors the chance to engage directly with product experts. We also ran two Community Get Together calls to gather, share stories, and celebrate the spirit of collaboration.

For some added fun, we also launched an emoji hunt (including ⚡) across our Knowledge Base articles.

Recognizing contributors

We’re grateful for the incredible participation during the event and want to recognize the contributors who went above and beyond. Those who participated in our challenges should receive exclusive SUMO badges in their profile by now. And the following top five contributors for each product will soon receive a $25 swag voucher from us to shop our limited-edition Ask a Fox swag collection, available in the NA/EU swag store.

Firefox desktop (including Enterprise)

Congrats to Paul, Denyshon, Jonz4SUSE, @next, and jscher2000.

Firefox for Android

Congrats to Paul, TyDraniu, GerardoPcp04, Mad_Maks, and sjohnn.

Firefox for iOS 

Congratulations to Paul, Simon.c.lord, TyDraniu, Mad_Maks, and Mozilla-assistent.

Thunderbird (including Thunderbird for Android)

Congratulations to Davidsk, Sfhowes, Mozilla98, MattAuSupport, and Christ1.

 

We also want to extend a warm welcome to newcomers who made an impressive impact during the event: mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7.

And finally, congratulations to Vincent, winner of the staff award for the highest number of replies during the week.


Ask a Fox was more than a campaign—it was a celebration of what makes Mozilla unique: a global community of people who care deeply about helping others and shaping a better web. Whether you answered one question or one hundred, your contribution mattered.

This event reminded us that when Mozillians come together, we can amplify our impact in powerful ways. And this is just the beginning—we’re excited to carry this momentum forward, continue improving the Community Forums, and build an even stronger, more responsive Mozilla community for everyone.

The Mozilla BlogCelebrate the power of browser choice with Firefox. Join us live.

Firefox is celebrating its 21st birthday this fall by hosting four global events that celebrate the power of browser choice.

We are inviting people to join us in Berlin, Chicago, Los Angeles and Munich as part of Open What You Want, Firefox’s campaign to celebrate choice and the freedom to show up exactly as you are — whether that’s in your coffee order, the music you dance to, or the browser you use. These events are an opportunity to highlight why browser choice matters and why Firefox stands apart as the last major independent option.

Firefox is built differently, with a history of defiance: it is designed to push back against the defaults of Big Tech. Firefox is the only major browser not backed by a billionaire or built on Chromium's browser engine. Instead, Firefox is backed by a non-profit and maintains and runs on Gecko, a flexible, independent, open-source browser engine.

So, it makes sense that we are celebrating differently too. We are inviting people to join us at four community-driven “House Blend” coffee rave events. What is a coffee rave? A caffeine-fueled day rave celebrating choice, freedom, and doing things your own way – online and off. These events are open to everyone and in partnership with local coffee shops.

Each event will have free coffee, exclusive merch, sets by two great, local DJs, a lot of dancing, and an emphasis on how individuals should get to shape their online experience and feel control online — and you can’t feel in control without choice.

We are kicking off the celebrations this Saturday, Oct. 4, in both Chicago and Berlin, move to Munich the following Saturday, Oct. 11, and end in Los Angeles on Saturday, Nov. 8, Firefox's actual birthday weekend.

Berlin (RSVP here)
When: Saturday, Oct. 4, 2025 | 13:00 – 16:00 CEST
Where: Café Bravo, Auguststraße 69, 10117 Berlin-Mitte

Chicago (RSVP here)
When: Saturday, Oct. 4, 2025 | 10:00AM – 2:00PM CT
Where: Drip Collective, 172 N Racine Ave, Chicago, Illinois

Munich (RSVP here)
When: Saturday, Oct. 11, 2025 | 13:00 – 16:00 CEST
Where: ORNO Café, Fraunhoferstraße 11, 80469 München

Los Angeles 
When: Saturday, Nov. 8, 2025 
More information to come

We hope you will join our celebration this year, in person at a coffee rave, or at one of our digital-first activations celebrating internet independence. As Firefox reflects on another year, it's a good reminder that the most important choice you can make online is your browser. And browser choice is something that we should all celebrate and not take for granted.

The post Celebrate the power of browser choice with Firefox. Join us live. appeared first on The Mozilla Blog.

The Mozilla BlogBlast off! Firefox turns data power plays into a game

We’re celebrating Firefox’s 21st anniversary this November, marking more than two decades of building a web that reflects creativity, independence and trust. While other major browsers are backed by billionaires, Firefox exists to ensure that the internet works for you — not for those cashing in on your data.

That’s the idea behind Billionaire Blast Off (BBO), an interactive experience where you design a fictional, over-the-top billionaire and launch them on a one-way trip to space. It’s a playful way to flip Big Tech’s power dynamics and remind people that choice belongs in our hands.

BBO lives online at billionaireblastoff.firefox.com, where you can build avatars, share memes and join in the joke. Offline, we’re bringing the fun to TwitchCon, with life-size games and our card game Data War, where data is currency and space is the prize.

Cartoon man riding rocket through space holding Earth with colorful galaxy background.

Create your own billionaire avatar

Play Billionaire Blast Off

The billionaire playbook for your data, served with satire 

The goal of Billionaire Blast Off isn’t finger-wagging — it’s satire you can play. It makes the hidden business of your data tangible, and instead of just reading about the problem, you get to laugh at it, remix it and send it into space.

The game is a safe, silly and shareable way to talk about something serious: who really holds the power over your data.

Two ways to join the fun online:

  • Build a billionaire: Create your own billionaire to send off-planet for good. Customize your avatar with an origin story, core drive and legacy plan.
  • Blast off: We’re not just making little billionaires. We’re launching them into space on a real rocket. Share your creation on social media for a chance to secure a seat for your avatar on the official launch.
Customize your billionaire avatar at billionaireblastoff.firefox.com.

Next stop: TwitchCon

At TwitchCon, you’ll find us sending billionaires into space (for real), playing Data War and putting the spotlight on the power of choice. 

Visit the Firefox booth #2805 (near Exhibit Hall F) to play Data War, a fast-paced card game where players compete to send egomaniacal, tantrum-prone little billionaires on a one-way ticket to space. 

Step into an AR holobox to channel your billionaire villain era, create a life-size avatar and make it perform for your amusement in 3D.

Try out your billionaire in our AR holobox at TwitchCon booth #2805.

On Saturday, Oct. 18, swing by the Firefox Lounge at the block party to snag some swag. Then stick around at 8:30 p.m. PT to cheer as we send billionaire avatars into space on a rocket built by Sent Into Space.

Online, the fun continues anytime at billionaireblastoff.firefox.com. Because when the billionaires leave, the web opens up for you.

The post Blast off! Firefox turns data power plays into a game appeared first on The Mozilla Blog.

Mozilla Localization (L10N)Localizer Spotlight: Selim

About You

My name is Selim and I’m the Turkish localization manager. I’m from İstanbul, Türkiye. I’ve been contributing to Mozilla since 2010.

Your Contributions

Selim (first left) with fellow Turkish Mozillians Onur, Didem and Serkan (Mozilla Summit Brussels)

Q: Over the years, do you remember how many projects you’ve been involved in (including ones that may no longer exist)?

A: It’s been so many! I began with Firefox 15 years ago, but I think I’ve been involved in around 30 projects over the years. We currently have 23 projects active in Pontoon, and I’ve been involved in every single one of them to some degree.

Q: Roughly how many Mozilla events have you joined — whether localization meetups, company-wide gatherings, MozFest, or others?

A: I’ve attended six of them. My first one was the Mozilla Balkans Meetup 2011 in Sofia. Then I had the chance to meet fellow Mozillians in Zagreb, Brussels, Berlin, Paris, and my hometown İstanbul. They were all great experiences, both enlightening and rewarding.

Q: Looking back, are there any contributions or milestones you feel especially proud of?

A: When I first began contributing, my intention was to complete a few missing translations I had noticed in Firefox. However, I quickly realized that the project was huge and there was much more to it than met the eye. Its Turkish localization was around 85% complete at that time, but the community lacked the resources to push it forward. I took it as my duty to reach 100% first, and then spellcheck and fix all existing translations. It took me a few months to get there, but Firefox has clearly had the best Turkish localization among all browsers ever since.

Your Background

Q: Does your professional background support or connect with your work in localization?

A: I currently work as a freelance editor and translator, translating and editing print magazines (mostly tech, popular science, and general knowledge titles), and localizing software and websites.

And the event that kickstarted my career in publishing and professional translation was volunteering for localization. (No, not Firefox. It didn't even exist yet!) Back in high school, I began localizing an open-source CMS called PHP-Nuke to be used on my school's website. PHP-Nuke became very popular in a short amount of time, and a computer magazine editor approached me to build the magazine's website using open-source tools, including PHP-Nuke. I had been an avid reader of those magazines since childhood, but I never imagined that one day I'd be working for Türkiye's best-selling computer magazine!

In time, I began translating and writing articles for the magazine as a freelancer and joined the editorial staff after graduating from university.

I've written hundreds of software and website reviews, and I kept noticing high-quality products that needed better localization. With a better understanding of how things work and some technical background, I began contributing to more and more open-source projects in my free time, and Firefox was one of them.

I was lucky that the previous Turkish contributors did a great job “localizing” Firefox, not just translating it. I learned a great deal from them, and it had a huge impact on my later professional work.

I was also approached and/or approved by several clients who had seen my volunteer localization work.

So, in a way, my professional background does support my work in localization — and vice versa.

Q: In what ways has being part of Mozilla’s localization community influenced you — whether in problem-solving, leadership, or collaborating across cultures?

A: Once I started contributing, I quickly realized that Mozilla had something none of the other projects I had contributed to previously had: a community that I felt part of. These people loved the internet, and they were having fun localizing stuff, just like me.

The localization community helped me improve myself both professionally and personally in a lot of ways: I learned how to collaborate better with a team of volunteers from different backgrounds, how to use different translation tools, how to properly report bugs, how to deal with different time zones, and how to get out of my comfort zone and talk to people from abroad both in virtual and face-to-face events.

Your Community

Q: As a long-time contributor, what motivates you to continue after all these years?

A: First and foremost, I believe in Mozilla’s mission wholeheartedly. But there’s a practical motivation too: Turkish is spoken by tens of millions of people, so the potential impact of localization is huge. Ensuring my fellow nationals have access to high-quality, localized open-source software is a driving force. And I’m still having fun doing it!

Q: Many communities struggle with onboarding or retaining contributors, especially after COVID limited in-person events. What are the challenges you face as a manager and how do you address them? And how do you engage with active contributors today? Do you have a process or approach for welcoming newcomers?

A: The Turkish community had its fair share of struggles with onboarding and retaining contributors, but it never became a huge challenge because of an advantage we had: The first iteration of the community started very early. Firefox 1.0 was already available in Turkish, and they maintained a good localization percentage for most Mozilla products, even if not 100%. So when I joined, there were things to do but not a single project that needed to be started from scratch. They were maintainable by one or two enthusiastic localizers. And when I took on the manager role, I always tried to keep it that way. I did approve a number of new projects, but not before ensuring that we had the resources to always keep them at least 90% complete.

But that creates a dilemma: New Turkish contributors usually face strings that are harder to grasp without context or are more difficult to translate, because the easier and more visible strings have already been translated. I guess that makes newcomers frustrated and they leave after translating a few strings. In fact, over the past 10 years, we’ve had only one contributor (Grk) who has translated more than 10,000 strings (apart from myself), and two contributors (Ali and Osman) with more than 1,000 strings. I’d like to thank them once again for their awesome contributions.

The Turkish community has always been very small: just a few people contributing at a time, and that has worked for us. So I’m not anxiously trying to onboard or retain contributors, but if I see an enthusiastic newcomer, I try to guide them by commenting on their translations or sending a welcome email to let them know how things work.

Something Fun
Q: Could you share a few fun or unexpected facts about yourself that people might not know?

A: Certainly:

  • I’m a metalhead, and the first thing I ever translated as a hobby was the lyrics of a Sentenced song. I’ve been translating song lyrics ever since, and I have a blog where I publish them.
  • My favorite documentary is Helvetica.
  • I built my first website when I was 13, by manually typing HTML in Windows Notepad. That’s when I discovered the internet’s endless possibilities and fell in love with it.

Matthew GaudetSummer of Sharpening

As we head into fall, I wanted to write up a bit of an experience report on a project I ran this summer with a few other people on the SpiderMonkey team.

A few of us on the team chose to block off some time during the summer to do intentional professional development, exploring topics that we hadn't looked into, often due to a feeling of time starvation.

Myself, I blocked off 2 hours every Friday through the summer.

In order to turn this into a team exercise, rather than just a personal development period, I created a shared document where I encouraged people to write up their experiments, so that we could read about each other's exploits.

How did it go?

Well, I don't think anyone did 2 hours every week, but I think most people did a little bit of exploration.

I’ve blogged already a bit about some of the topics I worked on for sharpening time: Both my blog posts about eBPF were a result of this practice. Other things I looked into that I didn’t get a chance to blog about include:

  • Learning about Instruments, and in particular Processor Trace (so painfully slow)
  • Exploring Coz, the causal profiler (really focused on multithreaded workloads in a way that didn't produce value for me)
  • Playing with Zed (clangd so slow for some reason)
  • ‘vibe coding’ (AI can do some things, but man, local minima are a pain).
  • Exploring different options for Android emulation
  • Watching WWDC videos on performance optimization (nice overview, mostly stuff I knew).

I was very happy overall with the results, and I have already created another document for next year to capture some ideas we could look into then.

The Servo BlogThis month in Servo: variable fonts, network tools, SVG, and more!

Another month, another record number of pull requests merged! August flew by, and with it came 447 pull requests from Servo contributors. It was also the final month of our Outreachy cohort; you can read Jerens’ and Uthman’s blogs to learn about how it went!

Highlights

Our big new feature this month is rendering inline SVG elements (@mukilan, @Loirooriol, #38188, #38603). This improves the appearance of many popular websites.

Screenshot of servoshell with the Google homepage loaded. Did you know that the Google logo is an SVG element?

We have implemented named grid lines and areas (@nicoburns, @loirooriol, #38306, #38574, #38493), still gated behind the layout_grid_enabled preference (#38306, #38574).

Screenshot of servoshell loading a page demoing a complex grid layout. CSS grids are all around us.

Servo now supports CSS ‘font-variation-settings’ on all main desktop platforms (@simonwuelker, @mrobinson, #38642, #38760, #38831). This feature is currently gated behind the layout_variable_fonts_enabled preference. We also respect format(*-variations) inside @font-face rules (@mrobinson, #38832). Additionally, Servo now reads data from OpenType Collection (.ttc) system font files on macOS (@nicoburns, #38753), and uses Helvetica for the ‘system-ui’ font (@dpogue, #39001).

servoshell nightly showcasing variable fonts, with variable weight (`wght`) values smoothly increasing and decreasing. This font can be customized!

Our developer tools continue to make progress! We now have a functional network monitor panel (@uthmaniv, @jdm, #38216, #38601, #38625), and our JS debugger can show potential breakpoints (@delan, @atbrakhi, #38331, #38363, #38333, #38551, #38550, #38334, #38624, #38826, #38797). Additionally, the layout inspector now dims nodes that are not displayed (@simonwuelker, #38575).

servoshell showing the Servo Mastodon account homepage, and the Firefox network monitor showing the list of network connections for that page. That's a lot of network requests.

We’ve fixed a significant source of crashes in the engine: hit testing using outdated display lists (issue #37932). Hit testing in a web rendering engine is the process that determines which element(s) the user’s mouse is hovering over.

Previously, this process ran inside of WebRender, which receives a display list representing what should be rendered for a particular page. WebRender runs on a separate thread or process from the actual page content, so display lists are updated asynchronously. By the time we do a hit test, the elements reported may not exist anymore, so we could trigger crashes by (for example) moving the mouse quickly over parts of the page that were rapidly changing.

This was fixed by making the hit test operation synchronous and moving it into the same thread as the actual content being tested against, eliminating the possibility of outdated results (@mrobinson, @Loirooriol, @kongbai1996, @yezhizhen, #38480, #38464, #38463, #38884, #38518).

Web platform support

DOM & JS

We’ve upgraded to SpiderMonkey v140 (changelog) (@jdm, #37077, #38563).

Numerous pieces of the Trusted Types API are now present in Servo (@TimvdLippe, @jdm, #38595, #37834, #38700, #38736, #38718, #38784, #38871, #8623, #38874, #38872, #38886), all gated behind the dom_trusted_types_enabled preference.

The IndexedDB implementation (gated behind dom_indexeddb_enabled) is progressing quickly (@arihant2math, @jdm, @rodion, @kkoyung, #28744, #38737, #38836, #38813, #38819, #38115, #38944, #38740, #38891, #38723, #38850, #38735), now reporting errors via IDBRequest interface and supporting autoincrement keys.

A prototype implementation of the CookieStore API has landed, gated behind the dom_cookiestore_enabled preference (@sebsebmc, #37968, #38876).

Servo now passes over 99.6% of the CSS geometry test suite, thanks to an implementation of matrixTransform() on DOMPointReadOnly, making all geometry interfaces serializable, and adding the SVGMatrix and SVGPoint aliases (@lumiscosity, #38801, #38828, #38810).

You can now use the TextEncoderStream API (@minghuaw, #38466). Streams that are piped now correctly pass through undefined values, too (@gterzian, #38470). We also fixed a crash in the result of pipeTo() on ReadableStream (@gterzian, #38385).

We’ve implemented getModifierState() on MouseEvent (@PotatoCP, #38535), and made a number of changes involving DOM events: ‘mouseleave’ events are fired when the pointer leaves an <iframe> (@mrobinson, @Loirooriol, #38539), pasting from the clipboard into a text input triggers an ‘input’ event (@mrobinson, #37100), focus now occurs after ‘mousedown’ instead of ‘click’ (@yezhizhen, #38589), we ignore ‘mousedown’ and ‘mouseup’ events for elements that are disabled (@yezhizhen, #38671), and removing an event handler attribute like ‘onclick’ clears all relevant event listeners (@TimvdLippe, @kotx, #38734, #39011).

Servo now supports scrollIntoView() (@abdelrahman1234567, #38230), and fires a ‘scroll’ event whenever a page is scrolled (@stevennovaryo, #38321). You can now focus an element without scrolling, by passing the {preventScroll: true} option to focus() (@abdelrahman1234567, #38495).

navigator.sendBeacon() is now implemented, gated behind the dom_navigator_sendbeacon_enabled preference (@TimvdLippe, #38301). Similarly, the AbortSignal.abort() static method is hidden behind dom_abort_controller_enabled (@Taym95, #38746).

The HTMLDocument interface now exists as a property on the Window object (@leo030303, #38433). Meanwhile, the CSS window property is now a WebIDL namespace (@simonwuelker, #38579). We also implemented the new QuotaExceededError interface (@rmeno12, #38507, #38720), which replaces previous usages of DOMException with the QUOTA_EXCEEDED_ERR name.

Our 2D canvas implementation now supports addPath() on Path2D (@arthmis, #37838) and the restore() methods on CanvasRenderingContext2D and OffscreenCanvas now pop all applied clipping paths (@sagudev, #38496). Additionally, we now support using web fonts in the 2D canvas (@mrobinson, #38979). Meanwhile, the performance continues to improve in the new Vello-based backends (@sagudev, #38406, #38356, #38440, #38437), with asynchronous uploading also showing improvements (@sagudev, @mrobinson, #37776).

Muting media elements with the ‘mute’ HTML attribute now works during the initial resource load (@rayguo17, @jschwe, #38462).

Modifying stylesheets now integrates better with incremental layout, in both light trees and shadow trees (@coding-joedow, #38530, #38529). Note that calling setProperty() on a readonly CSSStyleDeclaration correctly throws an exception (@simonwuelker, #38677).

CSS

We’ve upgraded to the upstream Stylo revision as of August 1, 2025.

We now support custom CSS properties with the CSS.registerProperty() method (@simonwuelker, #38682), as well as custom element states with the ‘states’ property on ElementInternals (@simonwuelker, #38564).

Flexbox cross sizes can no longer end up negative through stretching (@Loirooriol, #38521), while ‘stretch’ on flex items now stretches to the line if possible (@Loirooriol, #38526).

Overflow calculations are more accurate, now that we ignore ‘position: fixed’ children of the root element (@stevennovaryo, #38618), compute overflow for <body> separate from the viewport (@shubhamg13, #38825), check for ‘overflow: visible’ in parents and children (@shubhamg13, #38443), and propagate ‘overflow’ to the viewport correctly (@shubhamg13, @Loirooriol, #38598).

‘color’ and ‘text-decoration’ properties no longer inherit into the contents of <select> elements (@simonwuelker, #38570).

Negative outline offsets work correctly (@lumiscosity, @mrobinson, #38418).

Video elements no longer fall back to a preferred aspect ratio of 2 (@Loirooriol, #38705).

‘position: sticky’ elements are handled correctly inside CSS transforms (@mrobinson, @Loirooriol, #38391).

Performance & Stability

We fixed several panics this month, involving IntersectionObserver and missing stacking contexts (@mrobinson, #38473), unpaintable canvases and text (@gterzian, #38664), serializing ‘location’ properties on Window objects (@jdm, #38709), and navigations canceled before HTTP headers are received (@gterzian, #38739).

We also fixed a number of performance pitfalls. The document rendering loop is now throttled to 60 FPS (@mrobinson, @Loirooriol, #38431), while animated images do less work when advancing the current frame (@mrobinson, #38857). In addition, elements with CSS images will not trigger page reflow until their image data is fully available (@coding-joedow, #38916).

Finally, we made improvements to memory usage and binary size. Inline stylesheets are now deduplicated, which can have a significant impact on pages with lots of form inputs or custom elements with common styles (@coding-joedow, #38540). We also removed many unused pieces of the ICU library, saving 16MB from the final binary.

Embedding

Servo has declared a Minimum Supported Rust Version (1.85.0), and this is verified with every new pull request (@jschwe, #37152).

Evaluating JS from the embedding layer now reports an error if the evaluation failed for any reason (@rodio, #38602).

Our WebDriver implementation now passes 80% of the implementation conformance tests. This is the result of lots of work on handling user prompts (@PotatoCP, #38591), computing obscured/disabled elements while clicking (@yezhizhen, #38497, #38841, #38436, #38490, #38383), and improving window focus behaviours (@yezhizhen, #38889, #38909). We also implemented the Get Window Handles command (@longvatrong111, @yezhizhen, #38622, #38745), added support for getting element boolean attributes (@kkoyung, #38401), and added more accurate errors for a number of commands (@yezhizhen, @longvatrong111, #38620, #38357). The Element Clear command now clears <input type="file"> elements correctly (@PotatoCP, #38536), and Element Send Keys now appends to file inputs with the ‘multiple’ attribute.

servoshell

We now display favicons of each top-level page in the tab bar (@simonwuelker, #36680).

servoshell showing a diffie favicon in the tab bar

Resizing the browser window to a very small dimension no longer crashes the browser (@leo030303, #38461). Element hit testing in full screen mode now works as expected (@yezhizhen, #38328).

Various popup dialogs, such as the <select> option chooser dialog, can now be closed without choosing a value (@TimvdLippe, #38373, #38949). Additionally, the browser now responds to a popup closing without any other inputs (@lumiscosity, #39038).

Donations

Thanks again for your generous support! We are now receiving 5552 USD/month (+18.3% over July) in recurring donations.

Historically this has helped cover the cost of our speedy CI servers and Outreachy interns. Thanks to your support, we’re now setting up two new CI servers for benchmarking, and funding the work of our long-time maintainer Josh Matthews (@jdm), with a particular focus on helping more people contribute to Servo.

Keep an eye out for further CI improvements in the coming months, including ten-minute WPT builds, macOS arm64 builds, and faster pull request checks.

Servo is also on thanks.dev, and already 15 GitHub users (−7 from July) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Niko MatsakisSymposium: exploring new AI workflows

Screenshot of the Symposium app

This blog post gives you a tour of Symposium, a wild-and-crazy project that I’ve been obsessed with over the last month or so. Symposium combines an MCP server, a VSCode extension, an OS X Desktop App, and some mindful prompts to forge new ways of working with agentic CLI tools.

Symposium is currently focused on my setup, which means it works best with VSCode, Claude, Mac OS X, and Rust. But it’s meant to be unopinionated, which means it should be easy to extend to other environments (and in particular it already works great with other programming languages). The goal is not to compete with or replace those tools but to combine them together into something new and better.

In addition to giving you a tour of Symposium, this blog post is an invitation: Symposium is an open-source project, and I’m looking for people to explore with me! If you are excited about the idea of inventing new styles of AI collaboration, join the symposium-dev Zulip. Let’s talk!

Demo video

I’m not normally one to watch videos online. But in this particular case, I do think a movie is going to be worth 1,000,000 words. Therefore, I’m embedding a short video (6min) demonstrating how Symposium works below. Check it out! But don’t worry, if videos aren’t your thing, you can just read the rest of the post instead.

Alternatively, if you really love videos, you can watch the first version I made, which went into more depth. That version came in at 20 minutes, which I decided was…a bit much. 😁

Taskspaces let you juggle concurrent agents

The Symposium story begins with Symposium.app, an OS X desktop application for managing taskspaces. A taskspace is a clone of your project1 paired with an agentic CLI tool that is assigned to complete some task.

My observation has been that most people doing AI development spend a lot of time waiting while the agent does its thing. Taskspaces let you switch quickly back and forth.

Before I was using taskspaces, I was doing this by jumping between different projects. I found the context switching was really hurting my brain. But jumping between tasks in a project is much easier. I find it works best to pair a complex topic with some simple refactorings.

Here is what it looks like to use Symposium:

Screenshot of the Symposium app

Each of those boxes is a taskspace. It has both its own isolated directory on the disk and an associated VSCode window. When you click on the taskspace, the app brings that window to the front. It can also hide other windows by positioning them exactly behind the first one in a stack2. So it’s kind of like a mini window manager.

Within each VSCode window, there is a terminal running an agentic CLI tool that has the Symposium MCP server. If you're not familiar with MCP, it's a way for an LLM to invoke custom tools; it basically just gives the agent a list of available tools and a JSON schema for what arguments they expect.
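
To make that concrete, here is a minimal sketch of the kind of tool descriptor an MCP server advertises, written with Rust's serde_json; the tool name and fields here are hypothetical and simplified, not Symposium's actual definitions.

```rust
// Sketch of an MCP-style tool descriptor: a name, a description, and a JSON
// Schema for the arguments the agent must supply. Hypothetical example, not
// the actual Symposium tool. Assumes `serde_json = "1"` in Cargo.toml.
use serde_json::json;

fn main() {
    let tool = json!({
        "name": "spawn_taskspace",
        "description": "Create a new taskspace with an initial task description",
        "inputSchema": {
            "type": "object",
            "properties": {
                "name":        { "type": "string", "description": "Short taskspace name" },
                "description": { "type": "string", "description": "What the agent should work on" }
            },
            "required": ["name", "description"]
        }
    });
    // The server hands the agent a list of such descriptors; the agent then
    // calls a tool by name with arguments matching its schema.
    println!("{}", serde_json::to_string_pretty(&tool).unwrap());
}
```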

The Symposium MCP server does a bunch of things–we’ll talk about more of them later–but one of them is that it lets the agent interact with taskspaces. The agent can use the MCP server to post logs and signal progress (you can see the logs in that screenshot); it can also spawn new taskspaces. I find that last part very handy.

It often happens to me that while working on one idea, I find opportunities for cleanups or refactorings. Nowadays I just spawn out a taskspace with a quick description of the work to be done. Next time I’m bored, I can switch over and pick that up.

An aside: the Symposium app is written in Swift, a language I did not know 3 weeks ago

It’s probably worth mentioning that the Symposium app is written in Swift. I did not know Swift three weeks ago. But I’ve now written about 6K lines and counting. I feel like I’ve got a pretty good handle on how it works.3

Well, it’d be more accurate to say that I have reviewed about 6K lines, since most of the time Claude generates the code. I mostly read it and offer suggestions for improvement4. When I do dive in and edit the code myself, it’s interesting because I find I don’t have the muscle memory for the syntax. I think this is pretty good evidence for the fact that agentic tools help you get started in a new programming language.

Walkthroughs let AIs explain code to you

So, while taskspaces let you jump between tasks, the rest of Symposium is dedicated to helping you complete an individual task. A big part of that is trying to go beyond the limits of the CLI interface by connecting the agent up to the IDE. For example, the Symposium MCP server has a tool called present_walkthrough which lets the agent present you with a markdown document that explains how some code works. These walkthroughs show up in a side panel in VSCode:

Walkthrough screenshot

As you can see, the walkthroughs can embed mermaid, which is pretty cool. It’s sometimes so clarifying to see a flowchart or a sequence diagram.

Walkthroughs can also embed comments, which are anchored to particular parts of the code. You can see one of those in the screenshot too, on the right.

Each comment has a Reply button that lets you respond to the comment with further questions or suggest changes; you can also select random bits of text and use the “code action” called “Discuss in Symposium”. Both of these take you back to the terminal where your agent is running. They embed a little bit of XML (<symposium-ref id="..."/>) and then you can just type as normal. The agent can then use another MCP tool to expand that reference to figure out what you are referring to or what you are replying to.

To some extent, this “reference the thing I’ve selected” functionality is “table stakes”, since Claude Code already does it. But Symposium’s version works anywhere (Q CLI doesn’t have that functionality, for example) and, more importantly, it lets you embed multiple references at once. I’ve found that to be really useful. Sometimes I’ll wind up with a message that is replying to one comment while referencing two or three other things, and the <symposium-ref/> system lets me do that no problem.

Integrating with IDE knowledge

Symposium also includes an ide-operations tool that lets the agent connect to the IDE to do things like “find definitions” or “find references”. To be honest I haven’t noticed this being that important (Claude is surprisingly handy with awk/sed) but I also haven’t done much tinkering with it. I know there are other MCP servers out there too, like Serena, so maybe the right answer is just to import one of those, but I think there’s a lot of interesting stuff we could do here by integrating deeper knowledge of the code, so I have been trying to keep it “in house” for now.

Leveraging Rust conventions

Continuing our journey down the stack, let’s look at one more bit of functionality, which are MCP tools aimed at making agents better at working with Rust code. By far the most effective of these so far is one I call get_rust_crate_source. It is very simple: given the name of a crate, it just checks out the code into a temporary directory for the agent to use. Well, actually, it does a bit more than that. If the agent supplies a search string, it also searches for that string so as to give the agent a “head start” in finding the relevant code, and it makes a point to highlight code in the examples directory in particular.
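
The shape of such a tool is simple enough to sketch. What follows is not Symposium's actual implementation, just an illustration of the idea under a few assumptions: it pulls a crate's source from the public crates.io download endpoint (a .crate file is a gzipped tarball), unpacks it into a temporary directory by shelling out to curl and tar, and then lists files containing an optional search string.

```rust
// Illustrative sketch, not the real get_rust_crate_source: fetch a crate's
// source into a temporary directory and list files containing a search string.
use std::fs;
use std::path::{Path, PathBuf};
use std::process::Command;

fn fetch_crate_source(name: &str, version: &str) -> std::io::Result<PathBuf> {
    let dir = std::env::temp_dir().join(format!("{name}-{version}-src"));
    fs::create_dir_all(&dir)?;
    // crates.io serves .crate files (gzipped tarballs) from this endpoint.
    let url = format!("https://crates.io/api/v1/crates/{name}/{version}/download");
    let archive = dir.join("crate.tar.gz");
    Command::new("curl").args(["-sL", url.as_str(), "-o"]).arg(&archive).status()?;
    Command::new("tar").arg("xzf").arg(&archive).arg("-C").arg(&dir).status()?;
    // The tarball unpacks into a `{name}-{version}` directory.
    Ok(dir.join(format!("{name}-{version}")))
}

fn files_containing(root: &Path, needle: &str, hits: &mut Vec<PathBuf>) -> std::io::Result<()> {
    for entry in fs::read_dir(root)? {
        let path = entry?.path();
        if path.is_dir() {
            files_containing(&path, needle, hits)?;
        } else if fs::read_to_string(&path).map_or(false, |s| s.contains(needle)) {
            hits.push(path);
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Hypothetical usage: give the agent a head start on a real crate's API.
    let src = fetch_crate_source("serde", "1.0.210")?;
    let mut hits = Vec::new();
    files_containing(&src, "deserialize_with", &mut hits)?;
    for path in &hits {
        println!("{}", path.display());
    }
    Ok(())
}
```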

We could do a lot more with Rust…

My experience has been that this tool makes all the difference. Without it, Claude just generates plausible-looking APIs that don’t really exist. With it, Claude generally figures out exactly what to do. But really it’s just scratching the surface of what we can do. I am excited to go deeper here now that the basic structure of Symposium is in place – for example, I’d love to develop Rust-specific code reviewers that can critique the agent’s code or offer it architectural advice5, or a tool like CWhy to help people resolve Rust trait errors or macro problems.

…and can we decentralize it?

But honestly what I’m most excited about is the idea of decentralizing. I want Rust library authors to have a standard way to attach custom guidance and instructions that will help agents use their library. I want an AI-enhanced variant of cargo upgrade that automatically bridges over major versions, making use of crate-supplied metadata about what changed and what rewrites are needed. Heck, I want libraries to be able to ship with MCP servers implemented in WASM (Wassette, anyone?) so that Rust developers using that library can get custom commands and tools for working with it. I don’t 100% know what this looks like but I’m keen to explore it. If there’s one thing I’ve learned from Rust, it’s always bet on the ecosystem.

Looking further afield, can we use agents to help humans collaborate better?

One of the things I am very curious to explore is how we can use agents to help humans collaborate better. It’s oft observed that coding with agents can be a bit lonely6. But I’ve also noticed that structuring a project for AI consumption requires relatively decent documentation. For example, one of the things I did recently for Symposium was to create a Request for Dialogue (RFD) process – a simplified version of Rust’s RFC process. My motivation was partly in anticipation of trying to grow a community of contributors, but it was also because most every major refactoring or feature work I do begins with iterating on docs. The doc becomes a central tracking record so that I can clear the context and rest assured that I can pick up where I left off. But a nice side-effect is that the project has more docs than you might expect, considering, and I hope that will make it easier to dive in and get acquainted.

And what about other things? Like, I think that taskspaces should really be associated with github issues. If we did that, could we do a better job at helping new contributors pick up an issue? Or at providing mentoring instructions to get started?

What about memory? I really want to add in some kind of automated memory system that accumulates knowledge about the system more automatically. But could we then share that knowledge (or a subset of it) across users, so that when I go to hack on a project, I am able to “bootstrap” with the accumulated observations of other people who’ve been working on it?

Can agents help in guiding and shepherding design conversations? At work, when I’m circulating a document, I will typically download a copy of that document with people’s comments embedded in it. Then I’ll use pandoc to convert that into Markdown with HTML comments and then ask Claude to read it over and help me work through the comments systematically. Could we do similar things to manage unwieldy RFC threads?

This is part of what gets me excited about AI. I mean, don’t get me wrong. I’m scared too. There’s no question that the spread of AI will change a lot of things in our society, and definitely not always for the better. But it’s also a huge opportunity. AI is empowering! Suddenly, learning new things is just vastly easier. And when you think about the potential for integrating AI into community processes, I think that it could easily be used to bring us closer together and maybe even to make progress on previously intractable problems in open-source7.

Conclusion: Want to build something cool?

As I said in the beginning, this post is two things. Firstly, it’s an advertisement for Symposium. If you think the stuff I described sounds cool, give Symposium a try! You can find installation instructions here. I gotta warn you, as of this writing, I think I’m the only user, so I would not at all be surprised to find out that there’s bugs in setup scripts etc. But hey, try it out, find bugs and tell me about them! Or better yet, fix them!

But secondly, and more importantly, this blog post is an invitation to come out and play8. I’m keen to have more people come and hack on Symposium. There’s so much we could do! I’ve identified a number of “good first issue” bugs. Or, if you’re keen to take on a larger project, I’ve got a set of invited “Request for Dialogue” projects you could pick up and make your own. And if none of that suits your fancy, feel free to pitch your own project – just join the Zulip and open a topic!


  1. Technically, a git worktree. ↩︎

  2. That’s what the “Stacked” box does; if you uncheck it, the windows can be positioned however you like. I’m also working on a tiled layout mode. ↩︎

  3. Well, mostly. I still have some warnings about something or other not being threadsafe that I’ve been ignoring. Claude assures me they are not a big deal (Claude can be so lazy omg). ↩︎

  4. Mostly: “Claude will you please for the love of God stop copying every function ten times.” ↩︎

  5. E.g., don’t use a tokio mutex you fool, use an actor. That is one particular bit of advice I’ve given more than once. ↩︎

  6. I’m kind of embarrassed to admit that Claude’s dad jokes have managed to get a laugh out of me on occasion, though. ↩︎

  7. Narrator voice: burnout. he means maintainer burnout. ↩︎

  8. Tell me you went to high school in the 90s without telling me you went to high school in the 90s. ↩︎

Mozilla ThunderbirdThunderbird Monthly Development Digest: August 2025

Hello again from the Thunderbird development team! As autumn settles in, we’re balancing the steady pace of ongoing projects with some forward-looking planning for 2026. Alongside coding and testing, some of our recent attention has gone into budgets, roadmaps, and setting priorities for the year ahead. It’s not the most glamorous work, but it’s essential for keeping our momentum strong and ensuring that the big features we’re building today continue to deliver value well into the future. In the meantime, plenty of exciting progress has landed across the application, and here are some of the highlights.

Exchange support for email is here

Exchange support has officially landed in Thunderbird 144, which will roll out as our October monthly release. A big final push from the team saw a number of important features make it in before the merge:

  • Undo/Redo operations for move/copy/delete
  • Notifications
  • Basic Search
  • Folder Repair
  • Remote message content display & blocking
  • Status Bar feedback messaging
  • Account Settings screen changes
  • Autosync manager for message downloads
  • Attachment delete & detach
  • First set of advanced server settings
  • Experimental tenant-specific configuration options (behind a preference) now being tested with early adopters

The QA team is continuing to work through their test plans with support from a small beta test group, and their findings will guide the documentation and support we share more broadly with users on monthly release 144, as well as the priorities to tackle before we head into the next chapter.

Looking ahead, the team is already focused on:

  • Expanding advanced server settings for more complex environments
  • Improving search functionality
  • Folder Quotas & Subscriptions
  • Refining the user experience as more real-world feedback comes in
  • A planning session to scope work to support calendar and address book via EWS

Keep track of feature delivery here.

Conversation View Work Week

One of the biggest milestones this month was our dedicated Conversation View Work Week, which recently wrapped up: designers and engineers gathered in person to tackle one of Thunderbird's most anticipated UX features.

The team aligned early on goals and scope, rapidly iterated on wireframes and high-fidelity mockups, and built out initial front-end components powered by the new Panorama database. 

By the end of the week, we had working prototypes that collapsed threads into a Gmail-style conversation view, demonstrated the new LiveView architecture, and produced detailed design documentation. It was an intense but rewarding sprint that laid the foundation for a more modern and intuitive Thunderbird experience.

Account Hub

We've now added the ability to manually edit an EWS configuration, as well as allowing users to create an advanced EWS configuration through the manual configuration step.

The ability to cancel any loading operation in account hub for email has been completed and will be added to daily shortly.

  • This also had the side effect that users who click “Stop” in the old account setup while an OAuth window is open will now see the OAuth window close automatically.
  • We will be uplifting this change to beta and then ESR

Progress is being made on adding a step for third-party hosting credentials confirmation, with the UI complete and the logic being worked on.

  • This progress will have to take into account changes from the cancel-loading patch, as there are conflicting changes.
  • Once this feature is complete, it will be uplifted to beta, and then ESR

Work will soon be starting to enable the creation of address books through account hub by default.

Follow progress in the Meta Bug

Calendar UI Rebuild

After a long pause, work on the Calendar re-write has resumed! We've picked things back up by continuing to focus on the event read dialog. A number of improvements have already landed, including proper handling of description data and several small bug fixes.

We have seven patches under review that cover key areas such as:

  • Accessibility improvements, including proper announcements of event and calendar titles.
  • Adding the footer for acceptance.
  • Updating displays and transitioning current work to use the moz-src protocol.
  • Handling resizing

Development is also underway to add attendee support, after which we’ll move on to polishing the remaining pieces of the read dialog UI.

Maintenance, Recent Features and Fixes

August was set aside as a focus for maintenance, with a good number of our team dedicated to handling upstream liabilities such as our continued l10n migration to Fluent and module loading changes. In addition to these items, we’ve had help from the development community to deliver a variety of improvements over the past month:

  • Tree restyling following upstream changes – solved
  • An 18-year-old bug to enable event duplication via drag & drop – solved
  • A 15-year-old bug to sort by unread in threads correctly – solved
  • Implementation of standard colours throughout the application. [meta bug]
  • Modernization of module inclusion. [meta bug]
  • and many more which are listed in release notes for beta.

If you would like to see new features as they land, and help us squash some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: August 2025 appeared first on The Thunderbird Blog.

The Rust Programming Language Blogcrates.io: Malicious crates faster_log and async_println

Updated September 24th, 2025 17:34:38 UTC - Socket has also published their own accompanying blog post about the attack.

Summary

On September 24th, the crates.io team was notified by Kirill Boychenko from the Socket Threat Research Team of two malicious crates which were actively searching file contents for Ethereum private keys, Solana private keys, and arbitrary byte arrays for exfiltration.

These crates were:

  • faster_log - Published on May 25th, 2025, downloaded 7181 times
  • async_println - Published on May 25th, 2025, downloaded 1243 times

The malicious code was executed at runtime, when running or testing a project depending on them. Notably, they did not execute any malicious code at build time. Except for their malicious payload, these crates copied the source code, features, and documentation of legitimate crates, using a similar name to them (a case of typosquatting1).

Actions taken

The user accounts in question were immediately disabled, and the crates were deleted2 from crates.io shortly after. We have retained copies of all logs associated with the users and the malicious crate files for further analysis.

The deletion was performed at 15:34 UTC on September 24, 2025.

Analysis

Both crates were copies of a crate which provided logging functionality, and the logging implementation remained functional in the malicious crates. The original crate had a feature which performed log file packing, iterating over an associated directory's files.

The attacker inserted code to perform the malicious action during a log packing operation, which searched the log files being processed from that directory for:

  • Quoted Ethereum private keys (0x + 64 hex)
  • Solana-style Base58 secrets
  • Bracketed byte arrays

The crates then proceeded to exfiltrate the results of this search to https://mainnet[.]solana-rpc-pool[.]workers[.]dev/.
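
For maintainers who want to audit their own logs or vendored dependencies for the same classes of exposed secrets, patterns like the ones described above are easy to scan for. Here is a detection-only sketch using the regex crate; the patterns are rough approximations for auditing purposes, not the attacker's exact expressions, and nothing is transmitted anywhere.

```rust
// Detection-only sketch for auditing your own files for exposed secrets of
// the kinds described above. Assumes `regex = "1"` in Cargo.toml; the
// patterns are approximations, not the attacker's exact ones.
use regex::Regex;

fn main() {
    let patterns = [
        // 0x-prefixed 64-hex-digit strings (Ethereum-style private keys).
        ("ethereum-hex", r"0x[0-9a-fA-F]{64}"),
        // Long Base58 runs (Solana-style secrets are Base58-encoded).
        ("base58-run", r"[1-9A-HJ-NP-Za-km-z]{64,88}"),
        // Bracketed byte arrays, e.g. [12, 255, 0, ...] with 32+ entries.
        ("byte-array", r"\[\s*\d{1,3}(\s*,\s*\d{1,3}){31,}\s*\]"),
    ];

    // Hypothetical file to audit; only offsets are printed, never the match.
    let text = std::fs::read_to_string("app.log").unwrap_or_default();
    for (label, pattern) in patterns {
        let re = Regex::new(pattern).expect("valid regex");
        for m in re.find_iter(&text) {
            println!("possible {label} at byte offset {}", m.start());
        }
    }
}
```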

These crates had no dependent downstream crates on crates.io.

The malicious users associated with these crates had no other crates or publishes, and the team is actively investigating associated actions in our retained3 logs.

Thanks

Our thanks to Kirill Boychenko from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team, Pietro Albini from the Rust Security Response WG and Walter Pearce from the Rust Foundation for aiding in the response.

  1. typosquatting is a technique used by bad actors to initiate dependency confusion attacks where a legitimate user might be tricked into using a malicious dependency instead of their intended dependency — for example, a bad actor might try to publish a crate at proc-macro3 to catch users of the legitimate proc-macro2 crate.

  2. The crates were preserved for future analysis should there be other attacks, and to inform scanning efforts in the future.

  3. One year of logs are retained on crates.io, but only 30 days are immediately available on our log platform. We chose not to go further back in our analysis, since IP address based analysis is limited by the use of dynamic IP addresses in the wild, and the relevant IP address being part of an allocation to a residential ISP.

Mozilla ThunderbirdState of the Thunder 12: Community, Android, and Mozilla Connect

We’re back with our twelfth episode of the State of the Thunder! In this episode, we’re talking about community initiatives, filling you in on Android development, and finishing our updates on popular Mozilla Connect requests.

Want to find out how to join future State of the Thunders? Be sure to join our Thunderbird planning mailing list for all the details.

Austin RiverHacks and Ask a Fox

Thunderbird is a Silver sponsor of the Austin RiverHacks NASA Space Apps Challenge 2025! If you're in or around Austin, Texas, from October 4th to 5th, and want to join an in-person event where curious minds delve into NASA data to tackle real-life problems, we'd love to see you.

This week (as in right now! Check it out and get involved!), we're joining forces with Firefox for the Ask a Fox event on Mozilla Support! Earn swag, join an incredible community, and help fellow Thunderbird users on desktop and Android! Want a great overview of how to contribute to SUMO? Watch our Community Office Hours with advice on getting started.

Android Plans for Q4 2025

It’s hard to believe we’re almost into the last three months of the year! We’ve just released our joint July/August Mobile Progress report. We also want to give you all an update on our overall progress on the roadmap we created at the beginning of the year.

The new Account Drawer, currently in Beta, isn’t finished yet. We’re still working on real, proper unified folders! We’ll have mockups of the account drawer progress before the end of the month and more info in the next beta’s release notes. We’ll also have updates soon on message list status notifications (similar to the desktop). In the single message view, we have improvements coming! This includes making attachments quicker to see and open.

The battle for proper IMAP fetch continues. Different server setups complicate this struggle, but we want to get this right, nonetheless. This will bring the Android app more on par with other email apps.

Unfortunately, work on things like message sync, notifications, and Android 15 might delay features like HTML signatures.

Mozilla Connect Updates, Continued

We’re tackling more of the most frequently requested changes and features on Mozilla Connect, and we’re answering questions about native operating system integration, conversation view, and Thunderbird Pro related features!

Native Operating System Integration

When your operating system is capable of something Thunderbird isn’t, we share your frustration. We want things like OS-native progress bars that show you how downloads are going. We’ve started work on OS-native notification actions, like deleting messages. We love how helpful and time-saving this is, and want to expand it to things like calendar reminders.

There are both possibilities and limitations here, thanks to both Firefox and the OS itself. Firefox enables us more than it restricts us. For example, our work on the progress bar comes straight from Firefox code. There are some limits, though, and Thunderbird's different needs as a mail client sometimes mean we need to improve an aspect of Firefox to enable further development. But the beauty of open source means we can contribute our improvements upstream! The OS often constrains us more. For example, we'd love snoozeable native OS calendar notifications, but they just aren't possible yet.

Conversation View

We just finished an entire in-person work week focused on this in Vancouver! Conversation view, if you’re not familiar with it, includes ALL messages in a conversation, including your replies and messages moved to different folders. This feature, along with others, depends on having a single database for all messages in Thunderbird. Our current database doesn’t do this; instead, each folder is its own database.

The new SQLite database, which we’re calling Panorama, will enable a true Conversation View. During the work week, we thought about (and visualized) what the UI will look like. Having developers and designers in the same room was incredibly helpful for a complicated change. (Having a gassy Boston Terrier in said room, less so.) The existing code expects the current database, so we’ll have to rebuild a lot and carefully consider our decisions. The switch to the new database will probably occur next year after the Extended Support Release, behind a preference.

This change will help Thunderbird behave like a modern email client! Moving to Panorama will not only move us into the future, but into the present.

Thunderbird Pro Related-Requests

Three Mozilla Connect requests (Expanding Firefox Relay, a paid Mozilla email domain, and a Thunderbird webmail) were all once out of our control. But now, with the upcoming Thunderbird Pro offerings, they will all be possible! We're even experimenting with a webmail experience for Thundermail, in addition to using Thunderbird (or even another email client if you want). We'll have an upcoming State of the Thunder dedicated to Thunderbird Pro with more info and updates!

Watch the Video (also on PeerTube)

Listen to the Podcast

The post State of the Thunder 12: Community, Android, and Mozilla Connect appeared first on The Thunderbird Blog.