The Mozilla Blog: Mozilla Corporation Org Changes to Accelerate our Path to the Future

Over the past few months, we’ve been accelerating our ability to execute outstandingly, make faster decisions, and realize our multi-product ambitions. To help facilitate this, I’m excited to announce an organizational change within the product team. This change will enable us to better develop and scale products at different stages of development and maturity.

Today, we have multiple groups across various teams working on new ideas and emerging products: Fakespot, PXI, Mozilla Social, and the Innovation Ecosystems team, plus some newer emerging pods around new product design sprints and ideation. To simplify and accelerate this work, we are consolidating our emerging and seed product portfolios under a single umbrella, led by Adam Fishman, as our SVP of New Products, reporting directly to me.  

By setting up Firefox as a standalone product organization, we will also be able to bring more focus to our continual efforts to improve the Firefox experience for everyone who uses it. Firefox is already a leader in foundational qualities like speed and privacy, and now we will be able to move faster in developing solutions that bring more useful tools and a more joyful experience to our users. Our recent announcement of new Firefox features is just the start, as we close in on Firefox’s 20th birthday in November.

I am really excited about these changes as they help us accelerate our path to a strong, multi-product future as we simultaneously expand on our investment in our flagship core product, Firefox.

Laura Chambers

CEO, Mozilla Corporation

The post Mozilla Corporation Org Changes to Accelerate our Path to the Future appeared first on The Mozilla Blog.

Mozilla Thunderbird: May 2024 Community Office Hours: The Thunderbird Release Process

Have you ever wondered what the release process of Thunderbird is like? Wanted to know if a particular bug would be fixed in the next release? Or how long release support lasts? Or just how many point releases are there?

In the May Office Hours, we’ll demystify the current Thunderbird release process as we get closer to the next Extended Security Release on July 10, 2024. 

May Office Hours: The Thunderbird Release Process

One of our guests you may know already: Wayne Mery, our release and community manager. Daniel Darnell, a key release engineer, will also join us. They’ll answer questions about what roles they play, how we stage releases, and when they know if releases are ready. Additionally, they’ll tell us about the future of Thunderbird releases, including working with add-on developers and exploring a monthly release cadence.

Join us as our guests answer these questions and more in the next edition of our Community Office Hours! You can also submit your own questions about this topic beforehand and we’ll be sure to answer them:

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours where we chatted with three key developers bringing Rust and native Microsoft Exchange support into Thunderbird. You can find the video on our TILvids page.

Join The Video Chat

We’ll be back in our Big Blue Button room, provided by KDE and the Linux Application Summit. We’re grateful for their support and to have an open source web conferencing solution for our community office hours.

Date and Time: Friday, May 31 at 17:30 UTC

Direct URL to Join:

Access Code: 964573

The post May 2024 Community Office Hours: The Thunderbird Release Process appeared first on The Thunderbird Blog.

This Week In Rust: This Week in Rust 548

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub, where archives can also be viewed. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is ralertsinua, a text user interface for getting information about Russian air raids in Ukraine.

Thanks to Vladyslav Batyrenko for the suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

  • hyperswitch - [FEATURE]: add pagination support for customers list
  • hyperswitch - [FEATURE]: [GlobalPayments] Currency Unit Conversion
  • hyperswitch - [FEATURE]: Add support for sending additional metadata in the MessagingInterface

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

364 pull requests were merged in the last week

Rust Compiler Performance Triage

Fairly quiet week with the exception of a very large improvement coming from the switch to rust-lld on nightly Linux. This can have very large impacts on benchmarks where linking dominates the build time (e.g., ripgrep, exa, small binaries like hello-world). Aside from that change, there were a few small regressions that were either deemed worth it or are still being investigated.
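If you want to try lld for your own linking-heavy builds, a minimal sketch of a `.cargo/config.toml` that opts into it on stable (assuming a Linux x86_64 target with clang and lld installed; the target triple is illustrative):

```toml
# .cargo/config.toml
# Use clang as the linker driver and ask it to link with lld.
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "linker=clang", "-C", "link-arg=-fuse-ld=lld"]
```

On the nightlies described above this opt-in is no longer needed for x86_64 Linux, where rust-lld has become the default.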

Triage done by @rylev. Revision range: 9105c57b..1d0e4afd


(instructions:u)             mean     range              count
Regressions ❌ (primary)     0.7%     [0.1%, 2.5%]       30
Regressions ❌ (secondary)   0.5%     [0.2%, 0.8%]       5
Improvements ✅ (primary)    -30.4%   [-71.7%, -0.4%]    35
Improvements ✅ (secondary)  -25.6%   [-70.9%, -0.5%]    75
All ❌✅ (primary)           -16.1%   [-71.7%, 2.5%]     65

4 Regressions, 1 Improvement, 4 Mixed; 2 of them in rollups. 66 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-22 - 2024-06-19 🦀

North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

In other words, I do not want the compiler to just insert code to uphold the bare minimum guarantees, I want the compiler to check my work for me and assist me in developing an algorithm I can confidently assert is right.

without boats

Thanks to scottmcm for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Daniel Stenberg: A history of a logo with a colon and two slashes

In the 2015 time frame I had come to the conclusion that the curl logo could use modernization and I was toying with ideas of how it could be changed. The original had served us well, but it definitely had a 1990s era feel to it.

On June 11th 2015, I posted this image in the curl IRC channel as a proof of concept for a new curl logo idea I had: since curl works with URLs and all the URLs curl supports have the colon slash slash separator. Obviously I am not a designer so it was rough. This was back in the day when we still used this logo:

Frank Gevarts had a go at it. He took it further and tried to make something out of the idea. He showed us his tweaked take.

When we met up at the following FOSDEM at the end of January 2016, we sat down together and discussed the logo idea a bit to see if we could make it work somehow. Left from that exercise is this version below. As you can see, basically the same one. It was hard to make it work.

Later that spring, I was contacted by Soft Dreams, a designer company, who offered to help us design a new logo at no cost to us. I showed them some of these rough outlines of the colon slash slash idea and we did some back-and-forth to see if we could make something work with it, but we could not figure out a way to get the colon slash slash sequence actually into the word curl in a way that would look good. It just kept on looking different kinds of weird. Eventually we gave that up and we ended up putting it after the word, making it look like curl is a URL scheme. It ended up being much easier, and ultimately the better and right choice for us. The new curl logo was made public in May 2016. Made by Adrian Burcea.

Just months later in 2016, Mozilla announced that they were working on a revamp of their logo. They made several different versions and there was a voting process during which they would eventually pick a winner. One of the options used the colon slash slash embedded in the name, and during the process a number of people highlighted the fact that the curl project had just recently changed its logo to use the colon slash slash.

In the Mozilla all-hands meeting in Hawaii in December 2016, I was approached by the Mozilla logo design team who asked me if I (we?) would have any issues with them moving forward with the logo version using the colon slash slash.

I had no objections. I think that was the coolest of the new logo options they had and I also thought that it sort of validated our idea of using the symbols in our logo. I was perhaps a bit jealous how Mozilla is a better word to actually integrate the symbol into the name…. the way we tried so hard to do for curl, but had to give up.

In January 2017 Mozilla announced their new logo. With the colon slash slash.

And now you too know how this happened.

The Mozilla Blog: Releasing a new paper on openness and artificial intelligence

For the past six months, the Columbia Institute of Global Politics and Mozilla have been working with leading AI scholars and practitioners to create a framework on openness and AI. Today, we are publishing a paper that lays out this new framework.

Banner: The Columbia Convening on Openness and AI.

During earlier eras of the internet, open source technologies played a core role in promoting innovation and safety. Open source technology provided a core set of building blocks that software developers have used to do everything from create art to design vaccines to develop apps that are used by people all over the world; it is estimated that open source software is worth over $8 trillion in value. And, attempts to limit open innovation — such as export controls on encryption in early web browsers — ended up being counterproductive, further exemplifying the value of openness. 


Today, open source approaches for artificial intelligence — and especially for foundation models —  offer the promise of similar benefits to society. However, defining and empowering “open source” for foundation models has proven tricky, given its significant differences from traditional software development. This lack of clarity has made it harder to recommend specific approaches and standards for how developers should advance openness and unlock its benefits. Additionally, these conversations about openness in AI have often operated at a high level, making it harder to reason about the benefits and risks from openness in AI. Some policymakers and advocates have blamed open access to AI as the source of certain safety and security risks, often without concrete or rigorous evidence to justify those claims. On the other hand, people often tout the benefits of openness in AI, but without specificity about how to actually harness those opportunities. 

That’s why, in February, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI for the Columbia Convening. These individuals — spanning prominent open source AI startups and companies, nonprofit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era.

Today, we are publishing a paper that presents a framework for grappling with openness across the AI stack. The paper surveys existing approaches to defining openness in AI models and systems, and then proposes a descriptive framework to understand how each component of the foundation model stack contributes to openness. It enables — without prescribing — an analysis of how to unlock specific benefits from AI, based on desired model and system attributes. Furthermore, the paper also adds clarity to support further work on this topic, including work to develop stronger safety safeguards for open systems. 

Figure: Framework from the paper showing the general-purpose AI system stack and its dimensions of openness (Product/UX, Documentation, Model Components, Licensing, Infrastructure, and Safeguards, each with subcategories).

We believe this framework will support timely conversations around the technical and policy communities. For example, this week, as policymakers discuss AI policy at the AI Seoul Summit 2024, this framework can help clarify how openness in AI can support societal and political goals, including innovation, safety, competition, and human rights. And, as the technical community continues to build and deploy AI systems, this framework can support AI developers in ensuring their AI systems help achieve their intended goals, promote innovation and collaboration, and reduce harms. We look forward to working with the open source and AI community, as well as the policy and technical communities more broadly, to continue building on this framework going forward.

The post Releasing a new paper on openness and artificial intelligence appeared first on The Mozilla Blog.

Cameron Kaiser: Donnie Darko uses OS X

I think it's been previously commented upon, but we were watching Donnie Darko over the weekend (controversial opinion: we prefer the director's cut, we think it's an improvement) and noticed that Donnie's reality is powered by a familiar processor and operating system. These are direct grabs from the Blu-ray.
The entirety of the crash dump can't be seen and the scenes in which it/they appear are likely a composite of several unrelated traces, but the first two shots have a backtrace showing symbols from Unsanity Application Enhancer (APE), used for adding extra functionality to the OS like altering the mouse cursor and system menus. However, its infamous in-memory monkeypatching technique could sometimes make victim applications unstable and was unsurprisingly a source of some early crash reports in TenFourFox. (I never supported it for that reason, refused to even use it on principle, and still won't.) As a result, it wouldn't have been difficult for the art department to gin up a genuine crash backtrace as an insert. The second set of grabs appears when the Artifact returns to the Primary Universe and the Tangent Universe is purged (not a spoiler because it will make no sense to anyone who hasn't seen the movie).

All four are specific to the director's cut that premiered theatrically in May 2004. While APE was available at least as far back as Puma, i.e., OS X 10.1, Puma didn't come out until September 2001, months after the movie premiered in January of that year. In fact, the original movie is too early even for the release of Cheetah (10.0) in March. The first two images don't give an obvious version number but the second set shows a Darwin kernel version of 6.1, which corresponds to Jaguar 10.2.1 from September 2002. Although Panther 10.3 came out in October 2003, the recut movie would have moved to post-production (in its fashion) by then, and the shots may well have been done near the beginning of production when early versions of Jag remained current.

I'm waiting on the next Firefox ESR (128) in July, and there will be at least some maintenance updates then, so watch for that.

The Mozilla Blog: Sneha Revanur on empowering youth voices in AI, fighting for legislation and combating deepfakes and disinformation

At Mozilla, we know we can’t create a better future alone. That’s why, each year, we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Sneha Revanur, an activist behind Encode Justice, an organization that aims to elevate youth voices in support of human-centered AI. We talked with Sneha about her current work at Encode, working with legislators on AI regulation, fighting against disinformation and more.

So, the first question that I wanted to ask you about is the work that you’ve already done at such a really young age, including, obviously, founding Encode Justice at 15. How did you become so knowledgeable about the things that you wanted to do right now and so passionate early on in your life?

Sneha Revanur: I say this quite a bit, but honestly, had I not been born in San Jose in the heart of Silicon Valley, it’s totally possible, it would not exist. I think that growing up right there in the beating heart of innovation, being exposed to that culture from pretty much the day I was born, was really formative for me. I grew up in a household of software engineers — my parents both work in tech, my older sister works in tech. So I was kind of surrounded by that from a pretty young age. I think there was a point in my early childhood and middle school, probably when I myself thought that I would go and pursue a career in computer science. I think that it was only in and around early high school, when I began to think more critically about the social implications of these technologies and become more convinced that I wanted to be involved on the policy side of things, because I felt that was an area where we didn’t have as many voices who were actively involved in the conversation. So I definitely think just the fact that I was growing up in that orbit played a huge role, and I think that was really formative for me as a child. I think that at the same time, it was really helpful to have a background in working on campaigns. I got super involved politically around the 2020 election when I was in high school, I got involved working on Congressional campaigns. That was kind of around the time of my political awakening, and I think that learning the ropes at that point, building out that skill set and that toolkit of really understanding how to leverage the advocacy work, and better understanding how to apply that knowledge to the work that I was wanting to do in the AI world, I think, was really valuable for me. And so that’s kind of how it became a perfect storm of factors. 
And the summer of 2020 or so, I launched this initial campaign against a ballot measure in California that was seeking to replace the use of cash bail with an algorithm that had been shown in previous reports to be racially discriminatory. And so that was my initial entry point to a related advocacy, and that was kind of how the whole organization came to be following that campaign, and ever since then, the rest is history — we’ve just grown internationally, expanded to think about all sorts of risks, some challenges in the age of AI. Not just bias, but also disinformation, job loss, so much more. And yeah, very excited to see where the future leads us as well.

In what ways were you able to become more informative about AI and the things that you’re doing? 

So I think that when I was first getting started in the space, there were a couple of resources and pieces of media that were really helpful for me. I remember watching Coded Bias by Dr. Joy Buolamwini for the first time, and that was obviously an incredible entry point. I also remember watching The Social Dilemma on Netflix as well. I read a couple of books like Automating Inequality, Algorithms of Oppression, Weapons of Math Destruction. A lot of the classic text on like algorithmic bias, and those are really helpful for me as well. I actually think the initial entry point for me, the one that even got me thinking about was the ballot measure that I was working on in California and the risk of algorithmic bias in pre-trial tools, was an exposé published in ProPublica. I find that ProPublica and The Markup have great investigative journalism on AI issues, and so those are obviously fantastic resources, if you’re thinking about tech specific harms. So I think those are definitely some valuable resources for me, and ever since then, I’ve been expanding my repertoire of books. I also love The Alignment Problem, Human Compatible by Stuart Russell. I think so much literature out there on the full gamut of risks posed by AI. But yeah, that’s just a quick rundown of what I found to be most helpful.

A lot of younger people are growing up into this generation where AI is just a normal thing, right? How have you been able to see it become part of your daily life and in college as a young person?

I think over the past couple of years, the rate of AI adoption has just skyrocketed. I would say people probably use ChatGPT on a daily basis, if not like, many times per day — I myself use ChatGPT pretty actively. A lot of my peers do as well. I think there’s a whole range of uses. I think I find it really promising that my generation is becoming better equipped to understand how to responsibly interact with these tools, and I think that only through trial and error, only through experimentation can you figure out what kinds of use cases these tools are best equipped for and what kinds of use cases they’re not as prepared for yet. And I think that it’s really helpful that we’re learning pretty early on how to integrate them into our lives in a meaningful and beneficial way. So I definitely think that the rate of adoption has really increased recently, and that’s definitely been a promising development. I would also say, it’s not just ChatGPT. Obviously, all of us are active social media users, or many of us are, and we’re becoming intimately aware that our online experiences on social media are obviously mediated by algorithms, the content that we’re consuming online, the information we’re being exposed to, whether that’s TikTok — even that’s under fire right now — or Instagram or Twitter, or anything of the sort. Obviously, like I said before, our online experiences are being shaped, governed, mediated by these complex algorithmic processes, and I think that young people might not be able to, in most cases, articulate the technical complexities of how those algorithms work, but they’ll understand generally that it’s obviously looking at prior data about them, and are becoming increasingly conscious of what kinds of personal information is being collected when they navigate online platforms.
So I think that definitely in relation to social media, in relation to general generative AI use and the integration of generative AI in the classroom as well, I think that when it comes to general chatbots, for example, a lot of my peers were honestly quite disturbed by Snapchat’s My AI tool, which is like this chatbot that was just pinned to the top of your screen when you logged on, there was no opt out ability whatsoever. So I think that with the proliferation of those kinds of chatbots that are designed to be youth facing tools like ChatGPT, all sorts of things, I’ve just really seen it become a pivotal part of people’s daily lives.

I don’t think it is talked about enough how much the younger generation feels that they should be involved in the development of so much of the AI that’s coming along. What are some of the things that you are advocating for the most with legislators and officials when it comes to regulating AI?

There’s a whole host of things, I think. What’s become more challenging for us as we’ve grown as an organization is we’ve also realized there are so many issues out there and we want to have the capacity to take all of them on. I think in this year, especially, we’re thinking a lot about deepfakes and misinformation. Obviously, it’s the largest election year in human history. We’re going to have the governments of half the world’s population up for re-election. And what that means is that people are going to be marching to the polls under a fog of disinformation. We’re seeing how AI-generated disinformation has exploded online. We’re seeing how deepfakes, not only in a political context, but also in the context of revenge porn, have been targeting vulnerable young girls and women, ranging from celebrities like Taylor Swift to ordinary people — girls in middle schools who are being impacted by these just because of their classmates being able to disseminate and make these like pretty sophisticated deepfake images on the spot. We’ve never had that kind of technology be so accessible to ordinary people. And so I think that people always compare it to like Photoshop, and it’s just not at all analogous because this is so hyperrealistic. We’re talking, not just photos, but also videos. I think we really are seeing some pretty concerning use cases already again, not just in the realm of politics, but in people’s daily and social lives as well. So I think that our top priority right now, especially in 2024, is going to be deepfakes and disinformation. But there’s so much else we’re thinking about as well. For example, we just had a member of our team return from Vienna, where they were hosting a conference on the use of autonomous weapons systems.
We’re super concerned about the use of AI in warfare and some of the national security implications of that. We’re obviously thinking a lot about algorithmic bias and job loss, especially as AI potentially begins to displace human labor. And of course, there are these growing debates over the potential catastrophic risks that could result from artificial intelligence and whether or not it could empower bad actors by helping people design bioweapons or helping launch cyberattacks. And those are all things that we’re really concerned about as well. So I think, yeah, full range of different issues here. But I would say the top thing we’re prioritizing right now is the disinformation issue.

What do you think is the biggest challenge as a whole that we face in the world this year, and on and offline? And how do we combat it?

Well, this is a challenge that isn’t just specific to AI, it’s one that I’m seeing on a societal scale: It’s just this collapse of trust and human connection that I think is really, really concerning. And obviously AI is going to be the next frontier of that. I mean, whether it’s young people turning to chatbots in lieu of friends and family, meaning that we’re going to eventually erode the social bonds that sustain societies, or it’s people being exposed to more and more AI generated disinformation on social media, and people inherently not being able to trust what they see online. A couple of days ago, actually, I came across this deepfake recording of a principal in Baltimore, Maryland, where he was allegedly saying all these racist, antisemitic things, and it was completely doctored, obviously using AI. If you hear it, it sounds incredibly realistic. I wouldn’t have thought to second guess that or interrogate that if I heard it without knowing that it was generated by AI. And so I think that we’re really veering towards this state of almost reality collapse, as some have called it, where you don’t really know how to sift through fact and fiction and understand what’s real and what’s not, and I think that again, that’s a larger problem that’s not just related to AI, but AI is definitely going to be a driving force, making things worse.

Where do you draw inspiration from the work that you do today? 

I think that a lot of the names that I mentioned before are some of the leading thinkers that I’ve been following in the space, and also like their books, their movies, things that have been super formative. But I would say, first and foremost, what I found to be most inspiring is just seeing how this like random issue that I was thinking about pretty much in a silo when I was 15 is now something that a lot more people my age are thinking a lot about now. It’s been really gratifying to see this movement grow from pretty much me in my like bedroom when I was 15, to like a thousand people now all over the world, and everyone’s super passionate about it, and it’s just so amazing to see people hosting events in their countries and running workshops and reaching out to legislators. And there’s so much excitement and agency around this that I find really, really inspiring. So I would just say, what keeps me going and what I find really re-energizing is just the spirit of the young people that I work with and seeing how immensely this network has grown, but also how deeply invested every single person is in the work, and how they’re taking ownership over this in their own lives, I think that has been really, really powerful for me to see, so that’s been really inspiring. In terms of direction and the issues and who I’m taking inspiration from in that sense, like I mentioned, some big influences have been Joy Buolamwini, Stuart Russell, Yoshua Bengio, some of the top AI thinkers who, I think, are thinking about a broad range of risks, and I think that getting that balance of perspectives has been really crucial for me and shaping my own views on AI.

Looking back at the last few years since you started at 15, has anything surprised you that you maybe didn’t anticipate?

Well, I mean, I did not realize this whole like ChatGPT induced boom and public interest would take place. There was a time, I think maybe two years ago, where I was like, “am I just screaming into the void, what is going on here?” There was some interest in AI at that point, but definitely not at the level that it is right now, and I distinctly remember the feeling of going to lawmakers and feeling as though they would just be like, “Yeah, yeah, sounds good.” And then at the end of the day, they had like 20 other political priorities to get to. Obviously, there’s still a long way to go when it comes to getting Federal legislation on AI passed, but I think it was so inspiring coming out of a lot of the conversations around ChatGPT to have the same lawmakers who once ghosted us, reaching out to us, asking for briefings and wanting to get up to speed on the issues. And I think that just seeing that absolute reversal of fate was just absolutely stunning, and I think it was just really promising, of course, thinking about kind of being in a silo for a topic that was being discussed on campus in the dining halls with students and professors and people and seeing the conversation expand beyond the initial bubble. That has been really, really, really powerful for me.

What is one action that you think that everyone should take to make the world and our lives a little bit better?

There are so many things that I could say. The first thing that I’m thinking about right now, especially in this critical moment, is deepfakes and disinformation. So I would say, if you’re living in the U.S., call your member of Congress and urge them to pass deepfake legislation. I think it’s such an important priority this year, and unfortunately, it’s just not being prioritized, especially with so much else going on on the national political stage. So I would say, call on your leaders to demand stronger AI regulation. I think that there are lots of ways that people can take direct action, whether or not you live in the U.S., or whether or not you know a lot about AI or have been exposed to AI issues in the past.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I hope that we’re celebrating a safer social media ecosystem where all users have agency and ownership over their personal data and their online experiences. I hope that we are moving towards a more AI literate world where people are prepared to navigate the surge of, for example, disinformation they’re going to experience, and understand how to navigate a world where you might be applying for a job and there’s an algorithmic screening tool that’s reviewing your application. Or you’re standing trial, and there is a risk assessment tool that’s assessing your level of criminal risk. I think people need to be aware of those things, and I hope we’re moving towards a more AI literate world in that sense. I hope that we have stronger international coordination on AI. I think that it’s truly a borderless issue and that right now, we’re seeing a patchwork of different domestic regulations. We really need to harmonize the international approach: some sort of Paris climate agreement, but for AI. I would say those are a couple of things that I’m thinking about and hoping for in the next couple of years and decades.

What gives you hope about the future of our world?

I’ve said this before, but I think what gives me hope is seeing the next generation so fired up thinking a lot about this. And I think it’s also really exciting to think about the fact that the next generation of people who are actually building the technologies are going to be approaching it with a much different mindset, and with a much different frame of thinking than the people who have been building these technologies in the past. And so I think that seeing that seismic shift has been really rewarding, definitely. And I mean, I’m excited to see how the next couple of years shake out. So I think it’s a mixture of optimism mixed with obviously anxiety for the future. But I think that first and foremost, the people that I work with, and my peers, have really inspired me.


The post Sneha Revanur on empowering youth voices in AI, fighting for legislation and combating deepfakes and disinformation appeared first on The Mozilla Blog.

Wil Clouser: Retiring BrowserID on Mozilla Accounts

The tl;dr here is that Mozilla Accounts is turning off SyncStorage BrowserID support and it probably doesn’t affect you at all.

A little history

In 2011, when Mozilla Accounts (called “Firefox Accounts” back then) was first built, it used BrowserID identity certificates in its authentication model. The BrowserID protocol never took off and Mozilla’s work on it ended in 2016. However, the sync service in Firefox continued to use BrowserID even as OAuth support was added to Mozilla Accounts as an alternative for all other relying parties.

Over time, we recognized BrowserID was becoming a maintenance liability. As a non-standard protocol it created significant complexity in our codebase. Therefore, we decided to migrate the Firefox clients off of it in favor of OAuth.

This was an enormous effort, and while much more could be written about this transition, the main takeaway is that Firefox Sync’s BrowserID support ended with Firefox 78, which shipped in June 2020 and reached its end of life in November 2021.

Present day

We’ve been waiting a long time for the usage of Firefox 78 to drop.

Aside from being an ESR version, there are a couple of other reasons for its extra longevity:

  • It was the last version of the browser to support Flash
  • It was the last version of the browser to support OS X versions < 10.12

With Flash now largely obsolete on the web and traffic from older operating systems becoming rarer, we’ve decided that now is the appropriate time to turn off support for this legacy protocol.

To avoid surprises and not leave anyone behind, earlier this year we attempted to email everyone still using that endpoint. We didn’t receive any feedback, so we continued with the plan.

Our method

Our plan is simple: BrowserID requests are the only traffic hitting our /v1/certificate/sign endpoint. We’ll begin returning HTTP 404 replies to a small percentage of traffic from that endpoint and monitor for any issues. Our testing showed no concerns but it’s challenging to be comprehensive with so many combinations of browser versions and operating systems. Over the next few weeks we’ll continue to ramp up the percentage of 404s until we can remove the endpoint completely and let the traffic bounce off the front-end like any other 404.
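The gradual ramp-up described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual Mozilla Accounts code; the function names and the percentage value are hypothetical.

```javascript
// Hypothetical sketch of a percentage-based 404 gate for a legacy
// endpoint; not the actual Mozilla Accounts implementation.
const ROLLOUT_PERCENT = 10; // ramped up over time, e.g. 10 -> 66 -> 100

function shouldReturn404(rolloutPercent) {
  // Math.random() is in [0, 1), so this is true for roughly
  // rolloutPercent% of requests.
  return Math.random() * 100 < rolloutPercent;
}

// Express-style handler shape (framework wiring omitted):
function certificateSignHandler(req, res) {
  if (shouldReturn404(ROLLOUT_PERCENT)) {
    res.status(404).end();
    return;
  }
  // ...fall through to the legacy BrowserID certificate signing...
}
```

Ramping the constant up while watching error dashboards gives an easy rollback path: setting it back to 0 restores the old behavior without a deploy of new logic.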

Current status

Surprise! I’m a few weeks late with this post. We started returning 404s on May 1 and are currently up to ~66% of traffic on that endpoint. So far there haven’t been any unexpected complications. We’ll continue to increase over the next few weeks and aim to have all the code removed this summer.

Don Marti: remove AI from Google Search on Firefox

This seems to work to remove “AI” stuff from the top of Google search results on Firefox. (Tested on desktop Firefox for Linux.)

  1. Go to the hamburger menu → Settings → Search and remove “Google Search.”

  2. Do a regular Google search for a word.

  3. Bookmark the search result page.

  4. Go to the hamburger menu → Bookmarks → Manage Bookmarks.

  5. (optional) Make a new folder for search and put the new bookmark in it.

  6. Edit the bookmark to include udm=14 as a URL parameter.

  7. Add a keyword or keywords (I use @gg).
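For those who prefer to see what step 6 does to the URL, here is a small sketch using the standard URL API; the query "example" is a placeholder (a real keyword bookmark would keep the %s placeholder instead, so just append &udm=14 to the bookmark URL by hand).

```javascript
// Illustrative sketch: adding udm=14 (Google's "Web" results view)
// to a search URL with the standard WHATWG URL API.
const url = new URL('https://www.google.com/search?q=example');
url.searchParams.set('udm', '14');
console.log(url.toString());
// → https://www.google.com/search?q=example&udm=14
```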

In addition to these you will probably also want a couple of extensions:


Revolutionary New Google Feature Hidden Under ‘More’ Tab Shows Links to Web Pages

Bye Bye, AI: How to turn off Google’s annoying AI overviews and just get search results | Tom’s Hardware Article that covers how to remove “AI” material on Google Chrome and mobile Firefox.

How I Made Google’s “Web” View My Default Search

Dark Visitors - A List of Known AI Agents on the Internet is a good site for keeping track of “AI” crawlers if you want to block them in robots.txt. (This doesn’t work for blocking underground “AI” but will put the big companies on notice.)

Google Chrome ad features checklist (For Google Chrome users, prevent Google AI from classifying you in ways that are hard to figure out)

Bonus links

The Ukraine war is driving rapid innovation in drone technology Of course, there are new legal and moral questions that arise from giving drones the power to kill. But the CEO of this company points out there is a cost to not developing the technology. And in any case, this push to innovate—and defeat the invading enemy—has pushed off those questions for now. (imho this is going to be the number one immediate issue for AI in Europe. The only credible alternative to returning to large-scale conscription in European countries that have phased it out is for some European alliance to reach global leadership in autonomous military AI. Which explains why they’re putting civilian AI and surveillance businesses on a tight leash—to free up qualified developers for defense jobs.)

How Google harms search advertisers in 20 slides They’re not raising prices, they’re coming up with better prices or more fair prices, where those new prices are higher than the previous ones. lol

React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity Why do some software frameworks and libraries grow in adoption while others don’t?…It’s not about output or productivity.

Meta’s ‘set it and forget it’ AI ad tools are misfiring and blowing through cash Small businesses have seen their ad dollars get wiped out and wasted as a result, and some have said the bouts of overspending are driving them from Meta’s platforms. (considering where Meta ad money goes— child safety and mental health concerns are just the latest—this seems like a good thing. Also Meta could face further squeeze on surveillance ads model in EU)

Microsoft Deleted Its LLM Because It Didn’t Get a Safety Test, But Now It’s Everywhere 404 Media has not tested the model and we don’t know if it is easily producing harmful or “toxic” answers, or if Microsoft only took it down because it didn’t check either way. Since the model is open source, it is also possible other people could have downloaded it and created uncensored versions of the model that would produce controversial answers anyway, as we’ve reported people have done previously. (underground AI is less capable but more predictable than big company AI APIs. From the point of view of an API caller, the AI you were using gets randomly nerfed because the provider is acting on a moderation issue you weren’t aware of.)

Firefox Nightly: Today’s Forecast: Browser Improvements – These Weeks in Firefox: Issue 161


  • Volunteer contributor tamas.beno12 has fixed a 5-digit (25-year-old) bug! The patch for the bug makes it easier to create transparent windows
  • The newtab team is experimenting with a weather widget! It’s still early days, but you can turn it on in Nightly with a set of 2 prefs found in about:config:
    • Set the following to true:
      • browser.newtabpage.activity-stream.showWeather
      • browser.newtabpage.activity-stream.system.showWeather
    • If you notice any bugs with it, you can file them under Firefox :: New Tab Page
  • Some nice updates to Picture-in-Picture:
  • Bounce Tracking Protection has been enabled in Nightly (Bug 1846492)
    • What is bounce tracking / redirect tracking?
    • The feature detects bounce trackers based on redirect behaviour and periodically purges their cookies & site data to prevent tracking.
    • If you notice that you lose site data or get logged out of sites more than usual please file a bug under Core :: Privacy: Anti-Tracking so we can investigate
    • The feature is still in development so detected trackers are not yet counted as part of our regular ETP stats or on about:protections.
    • Advanced: If you want to see which bounce trackers get detected and purged you can enable the logging by going to about:logging and adding the following logger: BounceTrackingProtection:3
  • Niklas made the screenshots initial state (crosshairs) keyboard accessible
    • The arrow keys can be used to move the cursor around the content area. Enter will select the current hovered region and space will start the dragging state to draw a region.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Itiel

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As already anticipated in this meeting, starting from Firefox 127, installing new single-signed add-ons is disallowed. The QA verification has been completed and this restriction is now enabled on all channels and riding the Firefox 127 release train (Bug 1886160).
WebExtension APIs
  • Starting from Firefox 127, the installType property returned by the management API (e.g. management.getSelf) will be set to ”admin” for extensions that are installed through Enterprise Policy Settings (Bug 1895341)
    • Thanks to mkaply for working on this API improvement for Enterprise Firefox add-ons!
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions, starting from Firefox 127:
    • Host permissions requested by Manifest V3 extensions will be listed in the install dialog and granted as part of the add-on installation flow (Bug 1889402).
    • Extensions using the ”incognito”: “split” mode will be allowed to install successfully in Firefox (Bug 1876924)
      • Incognito split mode is still not supported in Firefox, and so the extensions using this mode will not be allowed access to private browsing tabs.
    • The new runtime.getContexts API method is now supported (Bug 1875480).
      • This new API method allows extensions to discover their existing Extensions contexts (but unlike runtime.getViews it returns a json representation of the metadata for the related extension contexts).

Developer Tools

  • Pier Angelo Vendrame prevented new request data from being persisted in Private Browsing (#1892052)
  • Arai fixed exceptions that could happen when evaluating Services.prompt and Services.droppedLinkHandler in the Browser Console (#1893611)
  • Nicolas fixed an issue that was preventing users from seeing stack traces for WASM-issued error messages (#1888645)
  • Alexandre managed to tackle an issue that would prevent DevTools from initializing when a page was using Atomics.wait (#1821250)
  • Nicolas added the new textInput event to the Event Listener Breakpoints in the Debugger (#1892459)
  • Hubert is making good progress migrating the Debugger to CodeMirror 6 (#1887649, #1889277, #1889283, #1894379, #1894659, #1889276)
  • Nicolas made sure that ::backdrop pseudo-element rules are visible in the Rules view for popover elements (#1893644), as well as @keyframes rules nested in other at-rules (#1894603)
  • Nicolas fixed a performance issue in the Inspector when displaying deeply nested rules (#1844446)
    • for example, a 15-level-deep rule was taking almost 9 seconds to be displayed; now it takes only a few milliseconds
  • Julian removed code that was forcing the Performance tab to be always enabled in the Browser Toolbox, even if the user disabled it in a previous session (#1895434)
WebDriver BiDi
  • Thanks to Victoria Ajala for replacing the usage of the “isElementEnabled” selenium atom with a custom implementation which is more lightweight and maintainable (#1798464)
  • Sasha implemented the permissions.setPermission command which allows clients to set permissions such as geolocation, notifications, … (#1875065)
  • Sasha fixed a bug where wheel scroll actions would not use the provided modifiers (e.g. shift, ctrl, …) (#1885542)
  • Sasha improved the implementation of the browsingContext.locateNodes command to also accept Document objects as the root to locate nodes. Previously this was restricted to Elements only, but Puppeteer relies heavily on using Document for this command. (#1893922)
  • Henrik fixed a bug where the WebDriver classic GetElementText command would fail to capitalise text containing an underscore (#1888004)

Migration Improvements

New Tab Page

  • Newtab wallpaper experiment going out either this release (next week) or next release, depending on some telemetry bug fix uplifts.
    • To enable wallpapers on HNT, set the following to TRUE:
      • browser.newtabpage.activity-stream.newtabWallpapers.enabled
  • Newtab wallpapers are getting some updates soon. A bunch more wallpapers as options, and some tweaks to the customize menu, a nested menu, to better organize the wallpapers so it’s easier to explore as we add more options.


  • Thanks to Joseph Webster for adding PiP captions support for more sites with our JWPlayer wrapper (bug)



Search and Navigation

  • Clipboard suggestions have been temporarily disabled in nightly as it was possible to freeze Firefox on Windows – we’re moving the feature to asynchronous clipboard API – 1894614
  • Features for an update to the urlbar UX codenamed scotchBonnet have started landing, secondary Actions have landed and dedicated search button + others are in progress. These will be enabled in nightly at some point so keep an eye out. Meta bug tracking @ 1891857
  • Mandy fixed an issue with stripping a leading question mark when the urlbar is already in search mode @ 1837624
  • Marco fixed protocols being trimmed when copying URLs @ 1893871
    • In Nightly, when https stripping is enabled, the loaded URL will gain back the trimmed protocol when the user interacts with the urlbar input field text
  • Marco changed domain inline completion so that, when permanent private browsing is active, the domain will be picked based on the number of bookmarks to that domain. Bug 1893840
  • The new search configuration (aka search consolidation) is now rolling out in FF 126 release.
    • Ebay support in Poland has been added to application provided engines in the new search configuration and so will become available during FF 126 @ 1885391
  • For Places, Daisuke removed the ReplaceFaviconData() and ReplaceFaviconDataFromDataURL() APIs, replacing their use with a new SetFaviconForPage() API accepting a data URL for the favicon. Long term this is the API we want to use, Places should never fetch from the Network, only store data.

Storybook/Reusable Components

  • Work has started on form components with an eye for the Sidebar Settings feature (and potentially the Experiments section of preferences). Initial components: moz-checkbox, moz-radio-group and moz-fieldset

  • The message-bar component has been fully removed from the codebase, replaced by moz-message-bar (Bug 1845151 – Remove all code associated with the message-bar component). Thanks Anna!

Firefox Developer Experience: Firefox DevTools Newsletter — 126

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 126 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Artem Manushenkov who added a setting that can be used to disable the split console (#1731635).

Firefox DevTools settings panel. Alongside many other items, there is a new "Enable Split Console" checkbox in the Web Console section

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


As announced in previous newsletters, we’re focusing on performance for a few months to make our tools as fast as they can be.

A few years back, we got a report from a user telling us that modifying a property in a rule from the Inspector was very slow (#1644138). The stylesheet they were using was massive, with 185K lines of code and a total size of approximately 4 MB, and our machinery to replace the rule content was not handling this well. After rewriting some old JavaScript code in Rust (#1882964), the function call that was taking more than 500ms on my machine now only takes about 10ms. Yes, that’s 50 times faster! This also shows in less extreme cases: our performance tests are reporting an almost 10% improvement to display Rules in the Inspector 🎉

Chart: performance test duration over time, going from ~750ms to ~700ms around April 8th.

We also came across an issue that showed a pretty bad mistake when handling rules using pseudo-elements (#1886947), and fixing it, alongside some minor tweak (#1886818), got us another ~10% improvement when displaying Rules in the Inspector.

Finally, we realized we could have some unnecessary computation when editing a rule (#1888079, #1888081), so we fixed that for an even smoother experience.

Custom State

Firefox 126 adds support for CustomStateSet (mostly done by an external contributor, Keith Cirkel):

The CustomStateSet interface of the Document Object Model stores a list of states for an autonomous custom element, and allows states to be added and removed from the set.

The interface can be used to expose the internal states of a custom element, allowing them to be used in CSS selectors by code that uses the element.


The MDN page has some nice examples of how this can be used to style custom elements based on a specific state. Rules using the :state() pseudo-class are displayed in the Inspector, and their properties can be modified like those of any other rule.
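A minimal sketch of such an element (the element name, state name, and styling are invented for illustration, loosely following the MDN pattern):

```html
<!-- Sketch of a custom element exposing a "checked" custom state. -->
<labeled-checkbox>Match whole word</labeled-checkbox>

<style>
  labeled-checkbox { border: 1px dashed grey; }
  /* Applies only while the element's "checked" state is set */
  labeled-checkbox:state(checked) { border: 1px solid green; }
</style>

<script>
  class LabeledCheckbox extends HTMLElement {
    constructor() {
      super();
      this._internals = this.attachInternals();
      this.addEventListener("click", () => {
        // Toggle the custom state in the element's CustomStateSet.
        if (this._internals.states.has("checked")) {
          this._internals.states.delete("checked");
        } else {
          this._internals.states.add("checked");
        }
      });
    }
  }
  customElements.define("labeled-checkbox", LabeledCheckbox);
</script>
```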

Firefox DevTools Inspector panel. The markup view has a `<label-checkbox>` custom element selected. In the rules view, we can see a few rules using `:state(checked)`, which are using to style the element

You can quickly see which states are in the set when logging a CustomStateSet instance in the console (#1862896).

Firefox DevTools Console with the following code being executed: `document.querySelector("labeled-checkbox")._internals.states`  The results shows an object whose header is`CustomStateSet [ "checked" ]`. The object is expanded, and we can see that it has a `<entries>` node, which contains one item, which is `"checked"`

And more…

  • We’re currently working on migrating our CodeMirror usage to CodeMirror 6 (#1773246), which we hope will allow for performance improvement in the Debugger. This is a pretty big task and we’ll report progress in the next newsletters!
  • We added support for Wasm exception handling proposal in the Debugger (#1885589)
  • We’re now showing the color swatch when a CSS custom property is used in color definition (#1718894)
  • In order to enable debugging Firefox on Android devices, we maintain an ADB extension used by about:debugging. We finally released a new version of this DevTools ADB extension; it now ships with notarized binaries and can be used on recent macOS versions (#1890843)

That’s all folks, see you in June for the 127 newsletter!

Mozilla Thunderbird: The New Thunderbird Website Has Hatched

 has a new look, but the improvements go beyond that. We wanted a website where you could quickly find the information you need, from support to contribution, in clear and easy-to-understand text. While staying grateful to the many amazing contributors who have helped build and maintain our website over the past 20 years, we wanted to refresh our information along with our look. Finally, we wanted to partner with Freehive’s Ryan Gorley for their sleek, cohesive design vision and commitment to open source.

We wanted a website that’s ready for the next 20 years of Thunderbird, including the upcoming arrival of Thunderbird on mobile devices. But you don’t have to wait for that future to experience the new website now.

The New

The new, more organized framework starts with the refreshed Home page. All the great content you’ve relied on is still here, just easier to find! The expanded navigation menu makes it almost effortless to find the information and resources you need.

Resources provide a quick link to all the news and updates in the Thunderbird Blog and the unmatched community assistance in Mozilla Support, aka SUMO. Release notes are linked from the download and other options page. That page has also been simplified while still maintaining all the usual options. It’s now the main way to get links to download Beta and Daily, and in the future any other apps or versions we produce.

The About section introduces the values and the people behind the Thunderbird project, which includes our growing MZLA team. Our contact page connects you with the right community resources or team member, no matter your question or concern. And if you’d like to join us, or just see what positions are open, you’ll find a link to our career page here.

Whether it’s giving your time and skill or making a financial donation, it’s easy to discover all the ways to contribute to the project. Our new and improved Participate page shows how to get involved, from coding and testing to everyday advocacy. No matter your talents and experience, everyone can contribute!

If you want to download the latest stable release, or to donate and help bring Thunderbird everywhere, those options are still an easy click from the navigation menu.

Your Feedback

We’d love to have your thoughts and feedback on the new website. Is there a new and improved section you love? Is there something we missed? Let us know in the comments below. Want to see all the changes we made? Check the repository for the detailed commit log.

The post The New Thunderbird Website Has Hatched appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Faster linking times on nightly on Linux using `rust-lld`

TL;DR: rustc will use rust-lld by default on x86_64-unknown-linux-gnu on nightly to significantly reduce linking times.

Some context

Linking time is often a big part of compilation time. When rustc needs to build a binary or a shared library, it will usually call the default linker installed on the system to do that (this can be changed on the command-line or by the target for which the code is compiled).

The linkers do an important job, with concerns about stability, backwards-compatibility and so on. For these and other reasons, on the most popular operating systems they usually are older programs, designed when computers only had a single core. So, they usually tend to be slow on a modern machine. For example, when building ripgrep 13 in debug mode on Linux, roughly half of the time is actually spent in the linker.

There are different linkers, however, and the usual advice to improve linking times is to use one of these newer and faster linkers, like LLVM's lld or Rui Ueyama's mold.

Some of Rust's wasm and aarch64 targets already use lld by default. When using rustup, rustc ships with a version of lld for this purpose. When CI builds LLVM to use in the compiler, it also builds the linker and packages it. It's referred to as rust-lld to avoid colliding with any lld already installed on the user's machine.

Since improvements to linking times are substantial, it would be a good default to use in the most popular targets. This has been discussed for a long time, for example in issues #39915 and #71515, and rustc already offers nightly flags to use rust-lld.

By now, we believe we've done all the internal testing that we could, on CI, crater, and our benchmarking infrastructure. We would now like to expand testing and gather real-world feedback and use-cases. Therefore, we will enable rust-lld to be the linker used by default on x86_64-unknown-linux-gnu for nightly builds.


While this also enables the compiler to use more linker features in the future, the most immediate benefit is much improved linking times.

Here are more details from the ripgrep example mentioned above: linking time is reduced 7x, resulting in a 40% reduction in end-to-end compilation times.

Before/after comparison of a ripgrep debug build

Most binaries should see some improvements here, but it's especially significant with e.g. bigger binaries, or when involving debuginfo. These usually see bottlenecks in the linker.

Here's a link to the complete results from our benchmarks.

If testing goes well, we can then stabilize using this faster linker by default for x86_64-unknown-linux-gnu users, before maybe looking at other targets.

Possible drawbacks

From our prior testing, we don't really expect issues to happen in practice. It is a drop-in replacement for the vast majority of cases, but lld is not bug-for-bug compatible with GNU ld.

In any case, using rust-lld can be disabled if any problem occurs: use the -Z linker-features=-lld flag to revert to using the system's default linker.

Some crates somehow relying on these differences could need additional link args. For example, we saw <20 crates in the crater run failing to link because of a different default about encapsulation symbols: these could require -Clink-arg=-Wl,-z,nostart-stop-gc to match the legacy GNU ld behavior.

Some of the big gains in performance come from parallelism, which could be undesirable in resource-constrained environments.


rustc will use rust-lld on x86_64-unknown-linux-gnu nightlies, for much improved linking times, starting in tomorrow's rustup nightly (nightly-2024-05-18). Let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can revert to the default linker with the -Z linker-features=-lld flag. Either by adding it to the usual RUSTFLAGS environment variable, or to a project's .cargo/config.toml configuration file, like so:

[target.x86_64-unknown-linux-gnu]
rustflags = ["-Zlinker-features=-lld"]

Support.Mozilla.Org: Kitsune Release Notes – May 15, 2024

See full platform release notes on GitHub


New features

  • Group messaging: Staff group members can send messages to groups as well as individual users.
  • Staff group permissions: We are now using a user’s membership in the Staff group rather than the user’s is_staff attribute to determine elevated privileges like being able to send messages to groups or seeing restricted KB articles
  • In-product link on article page: You’ll now see an indicator on the KB article page for articles that are the target of in-product links. This is visible to users in the Staff group.

Screenshot of the in-product indicator in a KB article


Enhancements

  • Conversion from GA3 to GA4 data API for gathering Google Analytics data: We recently migrated SUMO’s Google Analytics (GA) from GA3 to GA4. This has temporarily impacted our access to historical data on the SUMO KB Dashboard. Data will now be pulled from GA4, which only has data since April 10, 2024. The number of “Visits” for the “Last 90 days” and “Last year” will only reflect the data gathered since this date. Stay tuned for additional dashboard updates, including the inclusion of GA3 data.

Screenshot of the Knowledge Base Dashboard in SUMO

Screenshot of the new SUMO inbox

  • Removed New Contributors link from the Contributor Tools: Discussions section of the top main menu (#1746)


Bug fixes


SpiderMonkey Development Blog: SpiderMonkey Newsletter (Firefox 126-127)

Hello and welcome to our newest newsletter. As the northern hemisphere warms and the southern hemisphere cools, we write to talk about what’s happened in the world of SpiderMonkey in the Firefox 126-127 timeline.

🚀 Performance

Though Speedometer 3 has shipped, we cannot allow ourselves to get lax about performance. It’s important that SpiderMonkey be fast so Firefox can be fast!

🔦 Contributor Spotlight

This newsletter, we’d like to spotlight Jonatan Klemets. In his own words:

A full-stack web developer by day and a low-level enthusiast by night who likes tinkering with compilers, emulators, and other low-level projects.

Jonatan has been helping us for a few years now and has lately been the main force driving our work on the Import Attributes proposal. Pushing the proposal forward has required jumping into many different parts of Firefox; Jonatan has handled it all really well, and we are very thankful for the effort he has put into the project.

⚡ Wasm

🕸️ Web Features Work

👷🏽‍♀️ Other Work

The Mozilla BlogWhy I’m joining Mozilla as executive director

Delight — absolute delight — is what I felt when my parents brought home a Compaq Deskpro 386 for us to play with. It was love at first sight, thanks to games like Reader Rabbit, but I fell especially hard once we had a machine connected to the Internet. The unparalleled joy that comes from making things with and for other people was intoxicating. I can’t tell you how many hours were spent building Geocities websites for friends, poring over message boards, writing X-Files fan fiction, exchanging inside jokes and song lyrics on AIM and ICQ chats with friends and far-flung cousins across the world. 

Actually, I could tell you. In detail. But it would be embarrassing. 

Years later I would learn that the ability to share, connect, and create is rooted in how the Internet works differently than the media preceding it. The Internet speaks standards and protocols. It links instead of copying. Its nature is open. You don’t need permission to make something on the Internet. That freedom holds enormous potential: At its best, it helps us explore history we didn’t know, build movements to better the future, or make a meme to brighten someone’s day. At its best, the Internet lets us see each other. 

That magic — this power — is revolutionary. Protecting it, celebrating it, and expanding it is why I’m so excited to join the Mozilla Foundation as its executive director.

I started my career as a media lawyer to protect those who made things that helped us see one another, and the truth about our shared world. Almost fifteen years ago, I co-founded and built a media law clinic to train others to do the same. After a stint at a law firm, I joined BuzzFeed as its first newsroom lawyer, which felt sort of like being a lawyer for the silliest and most serious parts of the internet all at the same time. In other words, I was a lawyer for the Internet at its best.

I am not naive about the Internet at its worst. From the Edward Snowden disclosures to a quick trip to Guantanamo Bay, Cuba, much of my career has confronted issues of surveillance — including of my own religious community. I watched as consumers became more concerned about surveillance and other harms online, and so we built an accountability journalism outlet, The Markup, to serve those needs. The Markup’s mission is to help people challenge technology to serve the public good, which intentionally centers human agency. So we didn’t just write articles: Our team imagined and made things people used to make informed choices. Blacklight, for example, empowers people to use the Web how they want, by helping them see the otherwise invisible set of tracking tools watching them as they browse. 

The through-line of my career has been grappling with how technology can uplift or stifle human agency. I choose the former. I bet you do too. 

This, of course, brings me back to the Mozilla Foundation. In our particular moment – as we’re deploying large-scale AI systems for the first time, as we’re waking up home pages from their long rests, and trying to “rewild” the Internet beyond walled gardens – I can think of no other place that has the ability to help people shape technology to achieve their goals on their own terms. And there is no more important time. 

After all, the world we live in now was once someone’s imagination. Someone dreamt, and then many someones built, the Internet, and democracy, and other wild-eyed ideas too. We can imagine a future that centers human agency, and then we can build it, bit-by-byte. In this wildly unpredictable moment in 2024, it certainly feels like it’s up for grabs as to whether technology will be used to liberate us or shackle us. But that also means it’s up to us – if we act now. 

With your help, together we can imagine and create the Internet we want. Not what Zuckerberg, Pichai, Musk, or any other tech titan wants – we can imagine and make what you want, on your own terms. Making things on your own terms is a team sport, and that’s why I’m especially thrilled to be joining Laura Chambers (CEO, Mozilla Corporation), Moez Draief (Managing Director, Mozilla.ai), Mohamed Nanabhay (Managing Partner, Mozilla Ventures), Mitchell Baker (Executive Chair of the Board), and Mark Surman (President, Mozilla) as part of Mozilla’s senior leadership team.

Technology’s come a long way since that Compaq, and it’s moving faster than ever before. My young boys won’t experience the Internet through Geocities or X-Files fan fiction or dial-up modems (probably?).* But it’s my mission to make sure they – and all of us – do have the sense of delight I felt at the dawn of our connected age: The unparalleled joy that comes from making things with and for other people.

Always yours,


*They will, however, have Pikachu. There’s always Pikachu. 

**There’s an important corollary to all this. I (and we at Mozilla) don’t have all the good ideas. We never will. So, consider my inbox to be yours. Got an idea? Let’s talk:

The post Why I’m joining Mozilla as executive director appeared first on The Mozilla Blog.

The Mozilla BlogMozilla Foundation welcomes Nabiha Syed as executive director

Public interest tech advocate will harness collective power to deepen Mozilla’s focus on trustworthy AI

Today, Mozilla Foundation is proud to announce Nabiha Syed — media executive, lawyer, and champion of public interest technology — as its Executive Director. Syed joins Mozilla from The Markup, where she was chief executive officer. 

As technology companies, civil society, and governments race to keep up with the rapid pace of AI innovation, Syed will lead Mozilla’s advocacy and philanthropy programs to serve the public interest. Mozilla, with Syed’s leadership, will carry forward the Foundation’s nuanced, practical perspective to help steer society away from the real risks and toward the benefits of AI. 

“Nabiha has an exceptional understanding of how technology, humanity and broader society intersect — and how to engage with the complicated challenges and opportunities at that intersection,” said Mark Surman, Mozilla Foundation President. “Nabiha will make Mozilla a stronger, bigger, and more impactful organization, at a time when the internet needs it most.”

Syed is known for her mission-driven leadership, focused on increasing transparency into the most powerful institutions in society. She comes to Mozilla after leading The Markup, an award-winning publication that challenges technology to serve the public good, from its launch through its successful acquisition in 2024. The Markup drove Congressional debates, inspired watershed litigation, and won multiple prestigious awards including Fast Company’s “Most Innovative,” along with the Edward R. Murrow, National Press Club, and Scripps Howard prizes. 

“The through-line of my career has been grappling with how technology can uplift or stifle human agency,” said Nabiha Syed, incoming Mozilla Foundation Executive Director. “After all, the technology we have now was once just someone’s imagination. We can dream, build, and demand technology that serves all of us, not just the powerful few. Mozilla is the perfect place to make that happen.” 

As Executive Director, Syed will oversee a staff of more than 100 full-time employees and an annual budget of $30 million. She joins Mozilla at a time of growth and ambitious leadership: Mozilla is rapidly expanding its investment in building a movement for trustworthy AI through grantmaking, campaigning, and research. The Mozilla portfolio has also grown to include a venture capital arm and a commercial AI R&D lab.

Prior to The Markup, Syed was a highly acclaimed media lawyer. Syed’s legal career spanned private practice, the New York Times First Amendment Fellowship, and leading BuzzFeed’s libel and newsgathering matters, including the successful defense of the Steele Dossier litigations. She sits on the boards of the Scott Trust, the $1B+ British company that owns The Guardian newspaper, the New York Civil Liberties Union, the Reporters Committee for Freedom of the Press, Upturn, and The New Press, and serves as an advisor to ex/ante, the first venture fund dedicated to agentic tech.

Syed is widely sought after for her views on technology and media law, and has briefed two sitting presidents on free speech matters as well as diverse audiences including the World Economic Forum, annual investor meetings, Stanford, Wharton, and Columbia, where she is a lecturer.

She has been recognized with numerous awards, including as a 40 Under 40 Rising Star by the New York Law Journal, a Crain’s New York Business 40 Under 40 award, and a Rising Star award from the Reporters Committee for Freedom of the Press. Syed was selected to be on the National Commission for US-China Relations, and was recognized by Forbes as one of the best emerging free speech lawyers. 

Syed holds a J.D. from Yale Law School, an M.St from the University of Oxford where she was a Marshall Scholar, and a B.A from Johns Hopkins University. She lives in Brooklyn with her husband and her two young boys.

Also read:

Why I’m Joining Mozilla as Executive Director, by Nabiha Syed 

Growing Our Movement — and Growing Mozilla — to Shape the AI Era, by Mark Surman

The post Mozilla Foundation welcomes Nabiha Syed as executive director appeared first on The Mozilla Blog.

The Mozilla BlogGrowing our movement — and growing Mozilla — to shape the AI era

Last August, we announced that Mozilla was seeking a new executive director to lead its movement building arm. I’m excited to announce that Nabiha Syed — media executive, lawyer, and champion of public interest technology — is joining us to take on this role. 

I’ve gotten to know — and admire — Nabiha over the last few years in her role as the chief executive officer of The Markup. I’ve been impressed by her thinking on how technology, humanity and society intersect — and the way she has used journalism and research to uncover the challenges and opportunities we face in the AI era. 

As we talked about the executive director role, I also found a thought partner who sees the potential to combine the ‘market’ and ‘movement’ sides of Mozilla’s personality to shape how the tech universe works. I am convinced that Nabiha will make us a stronger, bigger and more impactful organization, at a time when the internet needs it most.

Nabiha will take over leadership of Mozilla Foundation’s $30M/year portfolio of movement building programs starting on July 1. Her first task will be to supercharge the Foundation’s trustworthy AI efforts, with an initial focus on:

  • Partnering with other public interest organizations to shift the narrative on AI.
  • Creating — and funding — open source and community-driven data sets, tools, and research.
  • Growing a global community of talent committed to building responsible and trustworthy tech. 

She will take on the responsibility for all of Mozilla’s philanthropic and advocacy programs, and will lead fundraising for our charitable initiatives.

It’s important to note: Nabiha’s appointment is part of a broader effort to build new leadership that can take Mozilla into its next chapter. She joins Laura Chambers (CEO, Mozilla Corporation), Moez Draief (Managing Director, Mozilla.ai), Mohamed Nanabhay (Managing Partner, Mozilla Ventures) as well as Mitchell Baker (Executive Chair of Mozilla Corporation) and me, as part of the senior leadership team charged with advancing the Mozilla Manifesto in the AI era. 

As Nabiha joins, I will be moving full-time to the role of Mozilla Foundation President, focusing even more deeply on the growth, cohesion and sustainability of the overall Mozilla portfolio of organizations. This includes further work with Mitchell and our Boards to develop a clear roadmap for Mozilla’s next chapter — with a particular focus on the role Mozilla can play in AI. It also includes support for senior leaders at Mozilla.ai and Mozilla Ventures — our two newest entities — as well as Mozilla’s new Global Head of Public Policy, Linda Griffin.

This is an exciting and pivotal moment — for Mozilla, the internet and the world. More and more people are realizing the need for tech products that are designed to be trustworthy, empowering and delightful — and for a movement that mobilizes people to reclaim the internet and ownership over their digital lives. We have a chance to build these things right now, and to reshape the relationship between technology and humanity for the better. I’m so glad Nabiha has joined us to make this happen. Welcome!

The post Growing our movement — and growing Mozilla — to shape the AI era appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 547

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is stated-scope-guard, a library supporting a more flexible RAII pattern for stated resource management.

Thanks to Evian Zhang for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

"*" = Issues open for student applications via OSPP. Selected students will be assigned a mentor(s), and may receive bonuses. Please register through the OSPP link.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

329 pull requests were merged in the last week

Rust Compiler Performance Triage

A pretty quiet week with only a few PRs being flagged for analysis. More improvements than regressions this week, and also several nice binary size reductions caused by generating less LLVM IR.

Triage done by @kobzol. Revision range: 69f53f5e..9105c57b


(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.4%   [0.2%, 0.9%]     8
Regressions ❌ (secondary)   0.9%   [0.2%, 2.4%]     18
Improvements ✅ (primary)   -1.1%   [-2.3%, -0.2%]   51
Improvements ✅ (secondary) -0.6%   [-1.4%, -0.3%]   19
All ❌✅ (primary)           -0.9%   [-2.3%, 0.9%]    59

1 Regression, 0 Improvements, 3 Mixed; 0 of them in rollups. 75 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Unsafe Code Guidelines
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-15 - 2024-06-12 🦀

North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Unfortunately, most people seem to have taken the wrong lesson from Rust. They see all of this business with lifetimes and ownership as a dirty mess that Rust has had to adopt because it wanted to avoid garbage collection. But this is completely backwards! Rust adopted rules around shared mutable state and this enabled it to avoid garbage collection. These rules are a good idea regardless.

without boats

Thanks to Jules Bertholet for the last-minute suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Addons BlogManifest V3 Updates

Greetings add-on developers! We wanted to provide an update on some exciting engineering work planned for the next few Firefox releases in support of Manifest V3. The team continues to implement API changes that were previously defined in agreement with other browser vendors that participate in the WECG, ahead of Chrome’s MV2 deprecation. Another top area of focus has been around addressing some developer and end user friction related to MV3 host permissions.

The table below details some MV3 changes that are going to be available in the Firefox release channel soon.

Version 126: Chrome extension porting API enhancements (Nightly 3/18, Beta 4/15, Release 5/14)
Version 127: Updating MV3 host permissions on both desktop and mobile (Nightly 4/15, Beta 5/13, Release 6/11)
Version 128: Implementing the UI necessary to control optional permissions and supporting host permissions on Android that landed in 127 (Nightly 5/13, Beta 6/10, Release 7/9)

The Chrome extension porting API work that will land beginning in 126 will help ensure a higher level of compatibility and reduce friction for add-on developers supporting multiple browsers.

Beginning with Firefox 127, users will be prompted to grant MV3 host permissions as part of the install flow (similar to MV2 extensions). We’re excited to deliver this work: based on feedback from Firefox users and extension developers, host permission handling has been a major hurdle for MV3 extensions in Firefox.

However, unlike host permissions granted at install time for MV2 extensions, MV3 host permissions can still be revoked by the user at any time from the about:addons page on Firefox Desktop. Given that, MV3 extensions should still leverage the permissions API to ensure that the permissions they require have already been granted.
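As a rough sketch of that runtime check (the promise-based `browser.permissions` API is part of the WebExtensions standard, but the helper name is ours, and the permissions object is passed in as a parameter so the logic can also be exercised outside an extension):

```javascript
// Sketch: verify a host permission at runtime before doing privileged work.
// In a real MV3 extension, pass browser.permissions as `permissions`;
// it is a parameter here only so the helper is testable outside Firefox.
async function ensureHostAccess(permissions, origin) {
  const needed = { origins: [origin] };
  // contains() resolves to true only if the user has granted the
  // permission and has not since revoked it from about:addons.
  if (await permissions.contains(needed)) {
    return true;
  }
  // request() must be triggered by a user gesture (e.g. a button click).
  return permissions.request(needed);
}
```

In an extension you would call something like `ensureHostAccess(browser.permissions, "https://example.com/*")` before, say, injecting a content script.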

Lastly, in Firefox for Android 128, the Add-ons Manager will include a new permissions UI as shown below — this new UI will allow users to do the same as above on Firefox for Android with regards to host permissions, while also granting or revoking other optional permissions on MV2 and MV3 extensions.


We also wanted to take this opportunity to address a couple common questions we’ve been seeing in the community, specifically around the webRequest API and MV2:

  1. The webRequest API is not on a deprecation path in Firefox
  2. Mozilla has no current plans to deprecate MV2 as mentioned in our previous MV3 update

For more information on adopting MV3, please see our migration guide. Another great resource is the FOSDEM presentation a couple Mozilla engineers delivered recently, Firefox, Android, and Cross-browser WebExtensions in 2024.

If you have questions or feedback on our Manifest V3 plans we would love to hear from you in the comments section below or if you prefer, drop us an email.

The post Manifest V3 Updates appeared first on Mozilla Add-ons Community Blog.

The Mozilla BlogFirefox at the Webbys: Winners talk internet red flags and what they’d rather keep private online

A big screen reads: 28th Annual Webby Awards<figcaption class="wp-element-caption">Credit: Getty Images for the Webby Awards</figcaption>

The Firefox team hit the red carpet Monday at this year’s 28th annual Webby Awards with some of the internet’s most influential figures and their groundbreaking projects. But we weren’t just there to watch the honorees accept their trophies. We wanted the inside scoop on how they win the web game every day. 

So, we asked them about internet red flags and even threw down a challenge called “Unload or Private Mode,” where they had a choice: spill the beans or take a “Firefox shot” to keep it private. Check out the video below to see how Webby winners like Madison Tevlin, Abi Marquez, James and Oliver Phelps, Michelle Buteau and more responded:

The Webbys are hosted each year by the International Academy of Digital Arts and Sciences — a group of over 3,000 tech experts, industry leaders, and creative minds. Each category honors two achievements: The Webby Award, chosen by the Academy, and The Webby People’s Voice Award, which is voted on by the global internet community. It’s possible for nominees to win one or both. 

Monday’s ceremony featured notable guests like Keke Palmer, Coco Rocha, Ina Garten, Julia Louis-Dreyfus and Laverne Cox, as well as tech journalist Kara Swisher, who was honored with the Webby Lifetime Achievement Award. 

Kara Swisher accepts an award on stage.<figcaption class="wp-element-caption">Kara Swisher accepts her Webby Lifetime Achievement Award. Credit: Getty Images for the Webby Awards</figcaption>

The Webbys have evolved with the internet since the award’s inception in 1996, adding to its roster of acknowledgments like Podcasts; Games and AI, Metaverse & Virtual; and more. And just as the web is a critical tool for every area of life today, the Webby Awards remains an important and relevant award honoring achievement in interactive media.

A hallmark feature is the ceremony’s five-word acceptance speech limit, which has produced some memorable moments from the likes of David Bowie and Prince over the years. Monday night’s speeches didn’t disappoint. Here are some of our favorite speeches: 

  • “Cooking Show Pretend, Gratitude Real.” – Jennifer Garner
  • “Don’t put twinkies on pizza.” – Josh Scherer
  • “Actually, we are all one degree.” – Kevin Bacon
  • “I ain’t done, tech bros.” – Kara Swisher
  • “I’m blessed to do this.” – Keke Palmer
  • “Risk everything every time.” – Jerrod Carmichael
  •  “It’s fun proving people wrong.” – Madison Tevlin
  • “Healing, collective trauma, necessary, possible.” – Laverne Cox

Check out some other highlights:

Keke Palmer accepts an award on stage.<figcaption class="wp-element-caption">Keke Palmer accepts the Webby Award for Special Achievement. Credit: Getty Images for the Webby Awards</figcaption>
Julia Louis-Dreyfus accepts an award on stage.<figcaption class="wp-element-caption">Julia Louis-Dreyfus accepts the Webby Podcast of the Year Award. Credit: Getty Images for the Webby Awards</figcaption>
Shannon Sharpe accepts an award on stage.<figcaption class="wp-element-caption">Shannon Sharpe accepts his Webby Advocate of the Year Award. Credit: Getty Images for the Webby Awards</figcaption>
<figcaption class="wp-element-caption">Creator Abi Marquez accepts her Webby Award. Credit: Getty Images for the Webby Awards</figcaption>

See all the best moments from last night’s show on social media by searching #Webbys. For the full list of Webby Award winners, visit webbyawards.com.

That’s a wrap on our Webby Awards coverage! Keep hanging with us and we’ll help you navigate the web safely and freely, having a little fun along the way. 

Get Firefox

Get the browser that protects what’s important

The post Firefox at the Webbys: Winners talk internet red flags and what they’d rather keep private online appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter — 126

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 126 release cycle.


With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: Support for the “contexts” argument for the “network.addIntercept” command

Since the introduction of the network.addIntercept command in Firefox 124, users could only apply network interceptions globally, affecting all open web pages across various tabs and windows. This necessitated the setup of specific filters to limit the impact to tabs requiring interception. However, this approach adversely affected performance, particularly when client code didn’t run locally, leading to increased data transmission over the network.

To address these issues and simplify the use of network interception for specific tabs, we’ve added the contexts argument in the network.addIntercept command. This enhancement facilitates the targeting of specific top-level browsing contexts, enabling the restriction of network request interception to individual tabs even with the same web page open in multiple tabs.
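For illustration only, a command restricted to a single tab might look like the payload below. The context id is a placeholder you would first obtain (for example via `browsingContext.getTree`), and the field names follow the WebDriver BiDi specification:

```javascript
// Illustrative WebDriver BiDi command restricting an intercept to one
// top-level browsing context. "cid-1234" is a placeholder context id.
const addIntercept = {
  id: 1,
  method: "network.addIntercept",
  params: {
    phases: ["beforeRequestSent"], // intercept before requests go out
    contexts: ["cid-1234"],        // only this tab is affected
  },
};
```

Omitting `contexts` keeps the pre-127 behavior, where the intercept applies to all browsing contexts.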

Bug fixes

The Mozilla BlogSee what’s changing in Firefox: Better insights, same privacy

An illustration shows the Firefox logo, a fox curled up in a circle.

Innovation and privacy go hand in hand here at Mozilla. To continue developing features and products that resonate with our users, we’re adopting a new approach to better understand how you engage with Firefox. Rest assured, the way we gather these insights will always put user privacy first.

What’s new in Firefox’s approach to search data 

To improve Firefox based on your needs, understanding how users interact with essential functions like search is key. We’re ramping up our efforts to enhance the search experience by developing new features like Firefox Suggest, which provides recommended online content that corresponds to queries. To make sure that features like this work well, we need better insights into overall search activity – all without trading off on our commitment to user privacy. Our goal is to understand what types of searches are happening so that we can prioritize the correct features by use case.

With the latest version of Firefox for U.S. desktop users, we’re introducing a new way to measure search activity broken down into high level categories. This measure is not linked with specific individuals and is further anonymized using a technology called OHTTP to ensure it can’t be connected with user IP addresses.    

Let’s say you’re using Firefox to plan a trip to Spain and search for “Barcelona hotels.” Firefox infers that the search results fall under the category of “travel,” and it increments a counter to calculate the total number of searches happening at the country level.
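A minimal sketch of that counting scheme follows; the `categorize` mapping here is a toy stand-in for Firefox’s actual classifier:

```javascript
// Toy sketch of aggregate category counting: only per-category totals
// are kept; the query text itself is never stored or reported.
const counts = new Map();

// Stand-in classifier; the real categorization in Firefox is richer.
function categorize(query) {
  if (/hotel|flight|barcelona/i.test(query)) return "travel";
  if (/clinic|doctor|symptom/i.test(query)) return "health";
  return "inconclusive";
}

function recordSearch(query) {
  const category = categorize(query);
  counts.set(category, (counts.get(category) ?? 0) + 1);
  // the raw query is discarded here; only aggregate counters remain
}
```

The key design point is that the counter is the only thing that survives the call: nothing tied to the individual query or user is retained.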

Here’s the current list of categories we’re using: animals, arts, autos, business, career, education, fashion, finance, food, government, health, hobbies, home, inconclusive, news, real estate, society, sports, tech and travel.

Understanding which types of searches happen most frequently will tell us what’s important to our users, without giving us additional insight into individual browsing preferences. This helps us take a step forward in providing a browsing experience that is more tailored to your needs, without stepping away from the principles that make us who we are. 

What Firefox’s search data collection means for you

We understand that any new data collection might spark some questions. Simply put, this new method only categorizes the websites that show up in your searches — not the specifics of what you’re personally looking up. 

Sensitive topics, like searching for particular health care services, are categorized only under broad terms like health or society. Your search activities are handled with the same level of confidentiality as all other data regardless of any local laws surrounding certain health services. 

Remember, you can always opt out of sending any technical or usage data to Firefox. Here’s a step-by-step guide on how to adjust your settings. We also don’t collect category data when you use Private Browsing mode on Firefox.  

As far as user experience goes, you won’t see any visible changes in your browsing. Our new approach to data will just enable us to better refine our product features and offerings in ways that matter to you. 

We’re here to make the internet safer, faster and more in tune with what you need – just as we have since open-sourcing our browser code more than 25 years ago. Thanks for being part of our journey!

Get Firefox

Get the browser that protects what’s important

The post See what’s changing in Firefox: Better insights, same privacy appeared first on The Mozilla Blog.

The Mozilla BlogRaphael Mimoun on creating tech for human rights and justice, combatting misinformation and building a privacy-centric culture

At Mozilla, we know we can’t create a better future alone, which is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Raphael Mimoun, a builder dedicated to making tools that empower journalists and human rights defenders. We talk with Raphael about the launch of his app, Tella, combatting misinformation online, the future of social media platforms and more.

How did the work you did early on in human rights after you completed university help you understand the power of technology and ultimately inspire you to do a lot of the work that you do right now?

Raphael Mimoun: So I never worked in tech per se and only developed a passion for technology as I was working in human rights. It was really a time when, basically, the power of technology to support movements and to head movements around the world was kind of getting fully understood. You had the Arab Spring, you had Occupy Wall Street, you had all of these movements for social justice, for democracy, for human rights, that were very much kind of spread through technology, right? Technology played a very, very important role. But just after that, it was kind of like a hangover where we all realized, “OK, it’s not just all good and fine.” You also have the flip side, which is government spying on the citizens, identifying citizens through social media, through hacking, and so on and so forth — harassing them, repressing them online, but translating into offline violence, repression, and so on. And so I think that was the moment where I was like, “OK, there is something that needs to be done around technology,” specifically for those people who are on the front lines because if we just treat it as a tool — one of those neutral tools — we end up getting very vulnerable to violence, and it can be from the state, it can also be from online mobs, armed groups, all sort of things. So that was really the point when I was like, “OK, let’s try and tackle technology as its own thing.” Not just thinking of it as a neutral tool that can help or not.

There’s so much misinformation out there now that it’s so much harder to tell the difference between what’s real and fake news. Twitter was such a reliable tool of information before, but that’s changed. Do you think that any of these other platforms can be able to help make up for so much of the misinformation that is out there?

I think we all feel the weight of that loss of losing Twitter. Twitter was always a large corporation, partially owned by a billionaire. It was never kind of a community tool, but there was still an ethos, right? Like a philosophy, or the values of the platform were still very much like community-oriented, right? It was that place for activists and human rights defenders and journalists and communities in general to voice their opinions. So I think that loss was very hard on all of us.

I see a lot of misinformation on Instagram as well. There is very little moderation there. It’s also all visual, so if you want traction, you’re going to try to put something that is very spectacular that is very eye catchy, and so I think that leads to even more misinformation.

I am pretty optimistic about some of the alternatives that have popped up since Twitter’s downfall. Mastodon actually blew up after Twitter, but it’s much older — I think it’s 10 years old by now. And there’s Bluesky. So I think those two are building up, and they offer spaces that are much more decentralized with much more autonomy and agency to users. You are more likely to be able to customize your feeds. You are more likely to have tools for your own safety online, right? All of those different things that I feel like you could never get on Threads, on Instagram or on Twitter, or anything like that. I’m hoping it’s actually going to be able to recreate the community that is very much what Twitter was. It’s never going to be exactly the same thing, but I’m hoping we will get there. And I think the fact that it is decentralized, open source and with very much a philosophy of agency and autonomy is going to lead us to a place where these social networks can’t actually be taken over by a power hungry billionaire.

What do you think is the biggest challenge that we face in the world this year on and offline, and then how do you think we can combat it?

I don’t know if that’s the biggest challenge, but one of the really big challenges that we’re seeing is how the digital is meeting real life and how people who are active online or on the phone or on the computer are getting repressed for that work in real life. So we developed an app called Tella, which encrypts and hides files on your phone, right? So you take a photo or a video of a demonstration or police violence, or whatever it is, and then if the police tries to catch you and grab your phone to delete it, they won’t be able to find it, or at least it will be much more difficult to find it. Or it would be uploaded already. And things like that, I think, is one of the big things that we’re seeing. Again, I don’t know if that’s the biggest challenge online at the moment, but one of the big things we’re seeing is just that it’s becoming completely normalized to grab someone’s phone or check someone’s computer at the airport, or at the border, or in the street and go through it without any form of accountability. People have no idea what the regulations are, what the rules are, what’s allowed, what’s not allowed. And when they abuse those powers, is there any recourse? Most places in the world, at least where we are working, there is definitely no recourse. And so I think that connection between thinking you’re just taking a photo for social media but actually the repercussion is so real because you’re going to have someone take your phone, and maybe they’re going to delete the photo, or maybe they’re going to detain you. Or maybe they’re going to beat you up — like all of those different things. I think this is one of the big challenges that we’re seeing at the moment, and something that isn’t traditionally thought of as an internet issue or an online digital rights issue because it’s someone taking a physical device and looking through it. It often gets overlooked, and then we don’t have much kind of advocacy around it, or anything like that.

<figcaption class="wp-element-caption">Raphael Mimoun at Mozilla’s Rise25 award ceremony in October 2023.</figcaption>

How is this issue overseas compared to America?

It really depends on where in each country, but many places where we work, we work with human rights defenders who are on the front lines, and journalists who are on the front lines in places that are very repressive. So there is no form of accountability whatsoever. They can take your phone again. It depends on where, but they can take your phone, put it into the trash, and you’ll never see it again. And you have no recourse whatsoever. It’s not like you can go to the police because they’ll laugh at you and say, “What the hell are you doing here?”

What do you think is one action everybody can take to make the world and our lives online a little bit better?

I think social media has a lot of negative consequences for everyone’s mental health and many other things, but for people who are active and who want to be active, consider social networks that are open source, privacy-friendly and decentralized. Bluesky, the Fediverse —including Mastodon — are examples because I think it’s our responsibility to kind of build up a community there, so we can move away from those social media platforms that are owned by either billionaires or massive corporations, who only want to extract value from us and who spy on us and who censor us. And I feel like if everyone committed to being active on those social media platforms — one way of doing that is just having an account, and whatever you post on one, you just post on the other — I feel like that’s one thing that can make a big difference in the long run.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

I was talking a little bit earlier about how we are building a culture that is more privacy-centric, like people are becoming aware, becoming wary about all these things happening to the data, the identity, and so on. And I do think we are at a turning point in terms of the technology that’s available to us, the practices and what we need as users to maintain our privacy and our security. I feel like in honestly not even 25, I think in 10 years, if things go well — which it’s hard to know in this field — and if we keep on building what we already are building, I can see how we will have an internet that is a lot more privacy-centric where communications are private by default. Where end-to-end encryption is ubiquitous in our communication, in our emailing. Where social media isn’t extractive and people have actual ownership and agency in the social networks they use. Where data mining is no longer a thing. I feel like overall, I can see how the infrastructure is now getting built, and that in 10, 15 or 25 years, we will be in a place where we can use the internet without having to constantly watch over our shoulder to see if someone is spying on us or seeing who has access and all of those things.

Lastly, what gives you hope about the future of our world?

That people are not getting complacent and that it is always people who are standing up to fight back. We saw it at Google with people standing up as part of the No Tech for Apartheid coalition and people losing their jobs. We’re seeing it on university campuses around the country. We’re seeing it on the streets. People fight back. That’s where any change has ever come from: the bottom up. I think now, more than ever, people are willing to put something on the line to make sure that they defend their rights. So I think that really gives me hope.


The post Raphael Mimoun on creating tech for human rights and justice, combatting misinformation and building a privacy-centric culture appeared first on The Mozilla Blog.

The Mozilla BlogKeoni Mahelona on promoting Indigenous communities, the evolution of the Fediverse and data protection

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Keoni Mahelona, a builder behind technologies that aim to protect and promote Indigenous languages and knowledge. We talked with Keoni about his current work at Te Hiku Media, the challenges of preserving Indigenous cultures, big tech and more.

So first off, what inspired you to do the work you’re doing now with Te Hiku Media?

Mahelona: I sort of started at the organization cause my partner, who’s the CEO, needed help with doing a website. But then the website turned into an entire digital platform, which then turned into building AI to help us do the work that we have to do, but I guess the most important thing is the alignment of values with like me as a person and as a native Hawaiian with the values of the community up here — Māori community and the organization. Having the strong desire for sovereignty for our land, which has been the struggle we’ve been having now for hundreds of years. We’re still trying to get that back, both in Aotearoa and in Hawaii, but also sovereignty for our languages and our data, and pretty much everything that encompasses us in our communities. And it was really clear that the work that we do at Te Hiku is very important for the community, but also that we needed to maintain sovereignty over that work. And if we made the wrong choices with how we store our data, where we put our data, what platforms we use, then we would cede some of that sovereignty over and take us further back rather than forward.

What were (and are) some of those challenges that you guys had to overcome to be able to create those tools? I feel like a lot of people might not know those challenges and how you have to persevere through those things to create, to preserve.

Sure, the lack of data is a challenge that big tech seem to overcome quite easily with their billions of dollars, whether they’re stealing it at scale or paying people for it at scale. They have the resources to do that and litigate if they need to, because of theft, and they’re just doing what America did right? Stole our land at scale. So for us, actually, we knew that the data would be the hardest part, but not so much like getting the data, or whether the data existed — there’s a vibrant community of language speakers here — the hard part was going to be, how do we protect the data that we collect? And even now, I worry because there’s just so many bots online scraping stuff, and we see bots trying to sort of log into our online forms. And I’m thinking hopefully these are just bots trying to log into a form because it sees the form, versus someone who knows that we’ve got some valuable data here, and if they can get in, they could use that data to add Māori to their models and profit off of that. When you have organizations like Microsoft and Google making hundreds of millions off of selling services to education and government in this country, you know that would be a valuable corpus for them — I’m not saying that they would sort of steal, I don’t know, I’d hope not, but I feel like OpenAI would probably do something like that.

And how do we overcome that? We just tried. We did the best we could do, given the resources we had, to ensure that things are safe, and we think they’re relatively safe, although I still get anxiety about it. Some of the other challenges we face are being a bunch of brown people from a community, so there’s the stereotype associated with the area, with anyone who might associate with this place. So there were people like, “Ha, you guys can’t do this.” And we proved them wrong. There were even funders who were Māori, who actually thought, “These guys are crazy, but you know what, this is exactly what we need to fund. We need to fund people who are crazy and who might actually pull this off because it would be quite beneficial.”

We’ve had other people inquire as to why our organization got science funding to do science research. I actually have a master’s in science — I actually have two masters in science, although one’s a business science degree, whatever that means — but there was this quite racist media organization on the south island of this country who did an official Information Act request on our organization, saying, “Why is this Māori media company getting science-based funding? They don’t know anything about science.” We actually had a scientist at our organization, and they didn’t, so this is some of the more interesting challenges that we’ve come across in this journey of going from radio broadcasting and television broadcasting to actually being a tech company and doing science and research. So it’s the racism and the discrimination that we’ve had to overcome as well. In some cases, we think we’ve been denied funding because our organization is Māori, and we’ve had to often do the hard work first off the smell of an oily rag, as they say here, to prove that we are capable of doing the work for people to recognize that, yeah, they can actually fund us. And that we can deliver results based on the stipulations of the fund or whatever when you’re getting science-based funding grants and stuff like that. I think we’ve shown the government that you don’t need to be a large university to actually do good research and have an impact in the science community. But it certainly hasn’t been easy.

<figcaption class="wp-element-caption">Keoni Mahelona at Mozilla’s Rise25 award ceremony in October 2023.</figcaption>

I imagine even with how long you’ve been there and how long you guys have been doing this, that there’s still an ongoing feeling of anxiety that’s extremely frustrating.

We’re a nonprofit, so a lot of our money comes from government funding, and we’re also a broadcaster, so we have public broadcasting funding that funds some of the work we do, and then there’s science-based funding.

The New Zealand political environment right now is absolutely terrible. There have been hundreds, probably thousands, of job cuts in the government. The current coalition government needs to raise something like three billion dollars for tax cuts for landlords, and in order to do that, they’re just slashing a lot of funding and projects and people’s jobs in government. There’s this rhetoric that’s been peddled that the government is quite inefficient, and we’re just hemorrhaging money and all these stupid positions and things like that. So that also gives us an anxiety, because a changing government might affect funding that is available to our organization. So we also have to deal with that as being a charity and not sort of being a capitalist organization.

The other thing that gives us anxiety is the inevitable, right? I actually think it’s inevitable, unfortunately, that these big tech companies will eventually be able to sort of replicate our languages. They won’t be good. They’ll never be good and good to the point where it will truly benefit and move our people forward. But they will be good enough that they will be able to profit from it. It profits by giving it that reputation of providing that service, ensuring you continue to go to Google, where you’re then served ads, and so they’re not selling the translation, but they are selling ads alongside it for profit, right? We see this essentially happening with a lot of Indigenous languages, where there is enough data being put online that these mostly American big tech corporations will profit from. And the sad thing is that it was the Americans in the first place and these other colonial nations that fought to make our languages extinct. And now their corporations stand to profit from the languages that they tried to make extinct. So it’s really terrible.

How do you think some of these bigger corporations can be more respectful, inclusive, and supportive of Indigenous communities?

That’s an interesting question. I guess the first question is, should they be inclusive? Because sometimes the best thing to do is just stay away and let us get on with it. We don’t need your help. The unfortunate reality is that so many of our people are on Facebook and are on Google, or whatever — the platforms are so dominating or imperialist that we have to use them in some cases, and because English is the dominant language on these platforms, especially for many Indigenous communities where they are colonized by English-speaking nations, it means that you’re just going to continue to be bombarded with English and not have a space if you don’t go out of your way to make a space and to sort of speak your language. It’s a bit of a catch-22, but I think it’s up to the communities to figure that one out because we could collectively come together as community and be like, “We’re not. We never expect Facebook or whatever to support our language and all these other tech companies and platforms.” And that’s fine, let’s go out into our own environment in our own communities and speak in languages rather than trying to rely on these tech companies to sort of do it for us, right?

There are ways that they can actually just kind of help, but like, stay out of our business.

And that’s the better way to do it, because this sort of outsider coming in trying to save us, it just doesn’t work. I’ve been advocating that you have to support these communities to lead the solutions and what they see is best for their people, because Google doesn’t know what’s best for these communities. So they need to support the communities, and I don’t mean by like building the language technologies themselves and selling it back to them, that is not the support I’m talking about. The support is staying away or giving them discounts on resources or giving them resources so that they can build, and they can lead, because then you’re also upskilling them. 

What do you think is the biggest challenge that we face in the world this year on and offline? And how do we combat it?

I see stuff happening to the Fediverse, which is interesting. Something that happened recently was some guy who very much knows, and in his blog post identified as, a tech bro from Silicon Valley, made the universal decision that the best thing to do for everybody is to hook up Threads and the Fediverse, so that people in Threads can access stuff in Mastodon etc., and then likewise the other way around. And this is like a single dude who apparently had talked to people and decided it was his duty or mission to connect Threads to the Fediverse, and it was just like, are you joking? And then there’s this other thing going on now, where there are these similar types of dudes getting angry at some instances for blocking other instances because they have people who are like racist or misogynist, and they’re getting angry at these moderators who are doing what the point of the Fediverse is, right? Where you can create a safe space and decide who gets to come in and who doesn’t. What I’m getting at is, I think that as the Fediverse kind of grows, it’s going to be interesting to see what sort of problems come and how the things that we wanted to escape by leaving Twitter and jumping on Mastodon are kind of coming in. And I think that’s going to be interesting to see how we deal with that.

This is again where the incompatibility of capitalism and general communities sort of comes to play because if we have for-profit companies trying to do Fediverse stuff, then essentially, we’re going to get what we already have, because ultimately, at the end of the day you’re trying to maximize for profit. So long as the internet is a place where we have dominating companies trying to maximize for profit, we’re just always going to have more problems, and it’s absolutely terrible and frightening.

But yeah, politics and I think the evolution of the Fediverse are probably the thing that I would be most concerned about. Then there’s also the normal stuff, which is just the theft of data and privacy. 

What is one action that you think everybody should take to make the world and our online lives a little bit better?

I think they should just be more cognizant of the data they decide to put online and don’t just think about how that data affects you as an individual, but how does it affect those who are close to you? How does it affect the communities to which you belong? And how does it affect other people who might be similar to you in that way? 

People need to be respectful of the data and others’ data and think about their actions online with respect to being good stewards of all data — their own data, data from their communities, and the data of others. And whether you should download this thing or steal that thing or whatever. And that’s essentially what I think is my message, for everyone, is to be respectful, but think about data as you would think about your environment: taking care of it and respecting it.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope that people are celebrating in the next 25 years?

The fall of capitalism, I guess. The restoration of the Hawaiian nation — I can continue. Ultimately, I think a lot of problems come back to some very fundamental ways in which society has structured itself.

What gives you hope about the future of our world?

I think actually this younger generation. I had this impression coming out of high school, going to university, and then kind of seeing the new generation coming through and having my perceptions of them challenged across generations. When we live stream high school speeches … just the stuff that these kids talk about is amazing. And even sometimes you’re like having a bit of a cry, because it’s so good in terms of the topics they talk about. But to me, that gives me hope that there are actually some really amazing people and young people who will someday fill our shoes and be politicians. That gives me hope that these people still exist despite all the negative stuff that we see today. That’s what I’m hopeful for.


The post Keoni Mahelona on promoting Indigenous communities, the evolution of the Fediverse and data protection appeared first on The Mozilla Blog.

Firefox NightlyScreenshots++ – These Weeks in Firefox: Issue 160


  • The screenshots component pref just got enabled and is riding the trains in 127! This is a new implementation of the screenshots feature with a number of usability, accessibility and performance improvements over the original.
  • Thanks to Joseph Webster for creating a brand new JWPlayer video wrapper (bug) and for adding more sites under this wrapper to expand Picture-in-Picture captions support (bug).
    • New supported sites include AOL, C-SPAN, CPAC, CNBC, Reuters, The Independent, Yahoo and more!
  • Irene landed the first part of refreshed text formatting controls for Reader Mode. Check them out by toggling reader.improved_text_menu.enabled (bug 1880658)
    • A panel in Firefox's Reader Mode is shown for controlling layout and text on the page. The panel lets users control the content width, line spacing, character spacing, word spacing, and text alignment of the text in reader mode.
  • New tab wallpapers have landed in Nightly and will be released as an experiment in en-US. If you’d like to enable wallpapers, set browser.newtabpage.activity-stream.newtabWallpapers.enabled to true.
    • Firefox's New Tab page with a beautiful image of the aurora borealis set as the background wallpaper

      Set a new look for new tabs!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Camille
  • gravyant
  • Itiel
  • Joseph Webster
  • Magnus Melin [:mkmelin]
  • Meera Murthy
  • Steve P

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting from Firefox 127, installing new single-signed add-ons is disallowed (while already installed single-signed add-ons are still allowed to run). This behavior is currently only enabled in Nightly (Bug 1886157) but it is expected to be extended to all channels later in the 127 cycle (Bug 1886160)
  • Fixed a styling issue hit by extensions options pages embedded in about:addons when the Dark mode is enabled (Bug 1888866)
WebExtensions APIs
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions:
    • Customized keyboard shortcuts associated with the _execute_browser_action command for Manifest Version 2 extensions will be automatically associated with the _execute_action command when the same extension migrates to Manifest Version 3 (Bug 1797811). This way, the custom keyboard shortcut will keep working as expected from a user perspective.
    • DNR rule limits have been raised to match the limits enforced by other browsers (Bug 1803370)
    • The DNR getDynamicRules and getSessionRules API methods now accept an additional ruleIds filter parameter, improving compatibility with the DNR API in more recent Chrome versions (Bug 1820870)
  • Improved errors logged when a content script file does not exist (Bug 1891502)
    • the error is now expected to look like Unable to load script: moz-extension://UUID/path/to/script.js
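As an illustration of the shortcut migration mentioned above, a minimal hypothetical Manifest Version 3 manifest fragment declaring a shortcut under the new _execute_action command might look like this (in Manifest Version 2, the same shortcut would have been declared under _execute_browser_action, alongside a browser_action key):

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "action": {
    "default_title": "Example"
  },
  "commands": {
    "_execute_action": {
      "suggested_key": {
        "default": "Ctrl+Shift+U"
      }
    }
  }
}
```

With the change in Bug 1797811, a user's customized shortcut for the old command name carries over automatically when the extension migrates.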

Developer Tools

  • Julian reverted a change from a few months ago, so DevTools screenshots are once again saved in the same location as Firefox screenshots (#1845037)
  • Alex fixed a Debugger crash (#1891699)
  • Nicolas fixed a visual glitch in the Debugger (#1891681)
  • Alex fixed an issue where network requests from iframes sent just before document destruction were not displayed in the Netmonitor (#1887852)
  • Nicolas replaced DevTools’ JS-based CSS lexer with a Rust-based version, using the same cssparser crate as Stylo (#1887638, #1892895)
    • This brought a ~10% performance improvement when displaying rules in the inspector (#1888607 + #1890552)
  • Thanks to :willdurand, we finally released a new version of the DevTools ADB extension used by about:debugging. The extension is now shipping with notarized binaries and can be used on recent macOS versions. (#1821449)
WebDriver BiDi
  • Thanks to gravyant who implemented a new helper Assert.isInstance to check whether objects are instances of specific classes (#1870880)
  • Henrik updated mozrunner/mozprocess to use “psutil” and support the new application restart mechanism on macOS (#1884401)
  • Sasha added support for the a11y attributes locator for the browsingContext.locateNodes command (#1885577)
  • Sasha added support for the devicePixelRatio parameter for the browsingContext.setViewport command (#1857961)
  • Henrik improved the way we check if an element is disabled when using the WebDriver ElementClear command (#1863266)
  • Julian updated the vendored puppeteer version to v22.6.5, which enables new network interception features in Puppeteer using WebDriver BiDi (#1891762)

Migration Improvements

New Tab Page

  • Work continues on a weather widget for new tab (borrowing logic from URL bar). Stay tuned!

Privacy & Security

  • We’re working on a new anti-tracking feature: Bounce Tracking Protection. It works similarly to the existing Cookie Purging feature in Firefox, but instead of a tracker list it relies on heuristics to detect bounce trackers.
    • It’s based on the navigational-tracking-protections spec draft in the PrivacyCG
    • Bug 1877432 first enabled the feature in Nightly in “dry run mode” where we don’t purge tracker storage but only collect telemetry. We’re looking to fully enable it in Nightly soon once we think it’s stable enough.

Profile Management (new this week!)

  • We’re getting underway with improvements to multiple profiles support in Firefox!
  • Eng discussion on Matrix: #fx-profile-eng
  • Backend work in toolkit/profile behind a build flag (MOZ_SELECTABLE_PROFILES)
  • Frontend work in browser/components/profiles behind a pref (browser.profiles.enabled)
  • Metabug is here: 1882882
  • Bugs landed so far:
    • Mossop added telemetry to record the version of the profiles database on startup and the number of profiles in it (bug 1878339)
    • Niklas added the profiles browser component (bug 1883143)
    • Niklas added profiles menu items to the app menu (bug 1883155)
  • Coming soon: Docs, final UX, and good-first-bugs


Search and Navigation

Storybook/Reusable Components

Anne van KesterenUndue base URL influence

The URL parser has many quirks due to its origins in a time where conformance test suites were atypical and implementation requirements were hidden in the examples section. Some consider these quirks deeply problematic, but personally I don’t really mind that one can write a hundred slashes after a scheme instead of two and get identical results. Sure, it would be better if that were not the case, but in the end it is something that is normalized away and therefore does not impact the fundamental aspects of the URL ecosystem.

I was reminded the other day that there is one quirk however that does yield rather undesirable results. In particular for certain (non-conforming) inputs, the result will not be failure, but the exact URL returned will depend on the presence and type of base URL. This might be best explained with examples:

Input | Base URL (serialized) | Output (serialized)
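The sort of examples the table captures can be reproduced with any WHATWG-compliant `URL` parser; the specific inputs below are my own illustration:

```javascript
// The same non-conforming input "https:example.com", parsed three ways.
// Inputs are illustrative; behavior follows the WHATWG URL Standard.

// 1. No base URL: parsed as an absolute URL despite the missing slashes.
console.log(new URL("https:example.com").href);
// → "https://example.com/"

// 2. Base URL with the *same* special scheme: the input is instead
//    treated as relative to the base.
console.log(new URL("https:example.com", "https://example.org/dir/page").href);
// → "https://example.org/dir/example.com"

// 3. Base URL with a *different* scheme: the base has no influence.
console.log(new URL("https:example.com", "http://example.org/dir/page").href);
// → "https://example.com/"
```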

This quirk only impacts so-called special schemes, which include http and https. And only when they match between the input and base URL. As a user of URLs you could work around this quirk by first parsing without a base URL and only if that returns failure, parse a second time with a base URL. That does have the unfortunate side effect of being inconsistent with the web platform (for non-conforming input), but depending on your use case that might be okay.

I remember looking into whether this could be removed completely many years ago, but websites relied on it and end users trump theory.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: April 2024 Progress Report

Welcome to our monthly report on turning K-9 Mail into Thunderbird for Android! Last month you could read about how we found and fixed bugs after publishing a new stable release. This month we start with… telling you that we fixed even more bugs.

Fixing bugs

After the release of K-9 Mail 6.800 we dedicated some time to fixing bugs. We published the first bugfix release in March and continued that work in April.

K-9 Mail 6.802

The second bugfix release contained these changes:

  • Push: Notify user if permission to schedule exact alarms is missing
  • Renamed “Send client ID” setting to “Send client information”
  • IMAP: Added support for the \NonExistent LIST response attribute
  • IMAP: Issue EXPUNGE command after moving without MOVE extension
  • Updated translations; added Hebrew translation
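For context on the last IMAP item: when a server lacks the MOVE extension, a move is emulated as copy, flag as deleted, then expunge. A hypothetical protocol exchange (illustrative only, not K-9 Mail's actual wire log) might look like:

```
C: a1 UID COPY 42 "Archive"
S: a1 OK COPY completed
C: a2 UID STORE 42 +FLAGS.SILENT (\Deleted)
S: a2 OK STORE completed
C: a3 EXPUNGE
S: * 5 EXPUNGE
S: a3 OK EXPUNGE completed
```

The fix means K-9 Mail now issues that final EXPUNGE step, so the source message no longer lingers in a deleted-but-present state.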

I’m especially happy that we were able to add back the Hebrew translation. We removed it prior to the K-9 Mail 6.800 release due to the translation being less than 70% complete (it was at 49%). Since then volunteers translated the missing bits of the app and in April the translation was almost complete.

Unfortunately, the same isn’t true for the Korean translation that was also removed. It was 69% complete, right below the threshold. Since then there has been no significant change. If you are a K-9 Mail user and a native Korean speaker, please consider helping out.

F-Droid metadata (again?)

In the previous progress report we described what change had led to the app description disappearing on F-Droid and how we intended to fix it. Unfortunately we found out that our approach to fixing the issue didn’t work due to the way F-Droid builds their app index. So we changed our approach once again and hope that the app description will be restored with the next app release.

Push & the permission to schedule alarms

K-9 Mail 6.802 notifies the user when Push is enabled in settings, but the permission to schedule exact alarms is missing. However, what we really want to do is ask the user for this permission before we allow them to enable Push.

This change was completed in April and will be included in the next bugfix release, K-9 Mail 6.803.

Material 3

As briefly mentioned in March’s progress report, we’ve started work on switching the app to Google’s latest version of Material Design – Material 3. In April we completed the technical conversion. The app is now using Material 3 components instead of the Material Design 2 ones.

The next step is to clean up the different screens in the app. This means adjusting spacings, text sizes, colors, and sometimes more extensive changes. 

We didn’t release any beta versions while the development version was still a mix of Material Design 2 and Material 3. Now that the first step is complete, we’ll resume publishing beta versions.

If you are a beta tester, please be aware that the app still looks quite rough in a couple of places. While the app should be fully functional, you might want to leave the beta program for a while if the look of the app is important to you.

Targeting Android 14

Part of the necessary app maintenance is to update the app to target the latest Android version. This is required for the app to use the latest security features and to cope with added restrictions the system puts in place. It’s also required by Google in order to be able to publish updates on Google Play.

The work to target Android 14 is now mostly complete. This involved some behind the scenes changes that users hopefully won’t notice at all. We’ll be testing these changes in a future beta version before including them in a K-9 Mail 6.8xx release.

Building two apps

If you’re reading this, it’s probably because you’re excited for Thunderbird for Android to finally be released. However, we’ve also heard numerous times that people love K-9 Mail and wish the app would stay around. That’s why we announced in December that we would do just that.

We’ve started work on this and are now able to build two apps from the same source code. Thunderbird for Android already includes the fancy new Thunderbird logo and a first version of a blue theme.

But as you can see in the screenshots above, we’re not quite done yet. We still have to change parts of the app where the app name is displayed to use a placeholder instead of a hard-coded string. Then there’s the About screen and a couple of other places that require app-specific behavior.

We’ll keep you posted.


In April 2024 we published the following stable release:

  • K-9 Mail 6.802

The post Thunderbird for Android / K-9 Mail: April 2024 Progress Report appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 546

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is derive_more, a crate for deriving a whole lot of traits.

Thanks to teor for the suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:


If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

426 pull requests were merged in the last week

Rust Compiler Performance Triage

Largely uneventful week; the most notable shifts were considered false-alarms that arose from changes related to cfg-checking (either cargo enabling it, or adding cfg's like rustfmt to the "well-known cfgs list").

Triage done by @pnkfelix. Revision range: c65b2dc9..69f53f5e

3 Regressions, 2 Improvements, 3 Mixed; 5 of them in rollups. 54 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-08 - 2024-06-05 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust and its borrow checker are like proper form when lifting boxes. While you might have been lifting boxes "the natural way" for decades without a problem, and its an initial embuggerance to think and perform proper lifting form, it is learnable, efficient, and prevents some important problems.

Or more succinctly:
C/C++: It'll screw your back(end).

And the reply:

  1. there’s a largish group of men who would feel their masculinity attacked if you implied they should learn it
  2. while it's learnable finding usefully targeted educational resources are hard to come by
  3. proper form while lifting boxes are a really terrible way to model graphs

Brett Witty and Leon on Mastodon

Thanks to Brett Witty for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Addons BlogDeveloper Spotlight: Port Authority

Port Authority gives you intuitive control over global block settings, notifications, and allow-list customization.

A few years ago a developer known as ACK-J stumbled onto a tech article that revealed eBay was secretly port scanning their customers (i.e. scanning their users’ internet-facing devices to learn what apps and services are listening on the network). The article further claimed there was nothing anyone could do to prevent this privacy compromise. ACK-J took that as a challenge. “After going down many rabbit holes,” he says, “I found that this script, which was port scanning everyone, is, in my opinion, malware.”

We spoke with ACK-J to better understand the obscure privacy risks of port scanning and how his extension Port Authority offers unique protections.

Why does port scanning present a privacy risk?

ACK-J: There is a common misconception/ignorance around how far websites are able to peer into your private home network. While modern browsers limit this to an extent, it is still overly permissive in my opinion. The privacy implications arise when websites, such as eBay, have the ability to secretly interact with your router’s administrative interface and local services running on your computer, and discover devices on your home network. This behavior should be blocked by the same-origin policy (SOP), a fundamental security mechanism built into every web browser since the mid-1990s; however, due to convenience it appears to be disabled for these requests. This caught a lot of people by surprise, including myself, and is why I wanted to make this type of traffic “opt-in” on my devices.

Do you consider port scanning “malware”? 

ACK-J: I don’t necessarily consider port scanning malware; port scanning is commonplace and should be expected for any computer connected to the internet with a public IP address. On the other hand, devices on our home networks do not have public IP addresses and instead are protected from this scanning by a technology called network address translation (NAT). Due to the nature of how browsers and websites work, the website code needs to be rendered on the user’s device (behind the protections put in place by NAT). This means websites are in a privileged position to communicate with devices on your home network (e.g. IoT devices, routers, TVs, etc.). There are certainly legitimate use cases for port scanning even on internal networks, the most common being communicating with a program running on your PC such as Discord. I prefer to be able to explicitly allow this type of behavior instead of leaving it wide open by default.

Is there a way to summarize how your extension addresses the privacy leak of port scanning?

ACK-J: Port Authority acts in a similar manner to a bouncer at a bar: whenever your computer tries to make a request, Port Authority verifies that the request is not trying to port scan your private network. If the request passes the check, it is allowed through and everything functions as normal. If it fails, the request is dropped. This all happens in a matter of milliseconds, but if a request is blocked you will get a notification.

Should Port Authority users expect occasional disruptions using websites that port scan, like eBay?

ACK-J: Nope, I’ve been using it for years along with many friends, family, and 1,000 other daily users. I’ve never received a single report that a website would not allow you to login, check-out, or other expected functionality due to the extension blocking port scans. There are instances where you’d like your browser to communicate with an app on your PC such as Discord, in this case you’ll receive an alert and could add Discord to an allow-list or simply click the “Blocking” toggle to disable blocking temporarily.

Do you see Port Authority growing in terms of a feature set, or do you feel it’s relatively feature complete and your focus is on maintenance/refinement?

ACK-J: I like extensions that serve a specific purpose so I don’t see it growing in features but I’d never say never. I’ve added an allow-list to explicitly permit certain domains to interact with services on your private network. I haven’t enabled this feature on the public extension yet but will soon.

Apart from Port Authority, do you have any plans to develop other extensions?

ACK-J: I actually do! I just finished writing an extension called MailFail that checks the website you are on for misconfigurations in its email server that would allow someone to spoof emails using its domain. This will be posted soon!

Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Port Authority appeared first on Mozilla Add-ons Community Blog.

Firefox Developer ExperienceFirefox DevTools Newsletter — 125

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 125 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Artem Manushenkov, who updated the Debugger Watch Expressions panel input field placeholder (#1619201).

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Pop it up!

Firefox 125 adds support for the Popover API, which is now supported across all major browsers 🎉. As said on the related MDN page:

The Popover API provides developers with a standard, consistent, flexible mechanism for displaying popover content on top of other page content. Popover content can be controlled either declaratively using HTML attributes, or via JavaScript.

In HTML, popover elements can be declared with a popover attribute. The popover can then be toggled from a button element which specifies a popovertarget attribute referencing the id of the popover element.
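A minimal declarative example (the id and text are illustrative):

```html
<!-- The button toggles the popover via its popovertarget attribute. -->
<button popovertarget="greeting">Toggle popover</button>

<!-- The popover attribute keeps the element hidden until it is shown
     in the top layer. -->
<div id="greeting" popover>Greetings, one and all!</div>
```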

[Screenshot: the Inspector markup view showing a button element with a popovertarget attribute, a “select element” icon next to it, and a div element with a popover attribute]

In the Inspector markup view, an icon is displayed next to the popovertarget attribute so you can quickly jump to the popover element.
Popover elements can be toggled in JavaScript with HTMLElement.showPopover, HTMLElement.hidePopover and HTMLElement.togglePopover. beforetoggle and toggle events are fired when a popover element is toggled, and the Debugger provides those events in the Event Listeners Breakpoints panel.

Note that we don’t display ::backdrop pseudo-element rules yet, but we will soon (target is Firefox 127, see #1893644).


As announced in the last newsletter, we’re focusing on performance for a few months to provide a fast and snappy experience to our beloved users. We’re happy to report that the Style Editor panel is now up to 20% faster to open (#1884072).

[Chart: performance test duration going from ~750ms to ~600ms around March 14th]

We also improved the Debugger opening when a page contains a lot of JavaScript sources (#1880809). In a specific case, we could spend around 9 whole seconds to process the different sources and populate the sources tree (see the 124 Firefox profile). In 125, it now takes only a bit more than 600 milliseconds, meaning it’s now 14 times faster (see the 125 Firefox profile).

Firefox Profiler Flame chart screenshot for the same function, on Firefox 124 and 125.

This also shows up on less extreme cases: our performance tests reported an average of 3% improvement on Debugger opening.


There is now a button indicating if the opened file is an original file or a bundle, or if there was an issue when trying to retrieve the Source Map file (#1853899).

[Screenshot: the Debugger with a tsx file opened; at the bottom, an “original file” button with its Source Map menu open]

Clicking on the button opens a menu dedicated to Source Map, where you can:

  • enable or disable Source Map
  • indicate if the Debugger should open original files by default
  • select the related original/bundle source
  • open the .map file in a new Firefox tab

We also fixed a glitch around text selection and line highlighting (#1878698), as well as an issue which was preventing the Outline panel from working properly (#1879322). Finally, we added back the preference that allows disabling the paused debugger overlay (#1865439). If you want to do so, go to about:config, search for devtools.debugger.features.overlay and toggle it to false.


  • CSP error messages in the Console now provide the effective directive (#1848315)
  • Infinity wasn’t visible in the Console auto-completion menu (#1698260)
  • Clicking on a relative URL of an image in the Inspector now honors the document’s base URL (#1871391)
  • An issue that could provoke crashes of the Network Monitor is now fixed (#1884571)

Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

The Rust Programming Language BlogRust participates in OSPP 2024

Similar to our previous announcements of the Rust Project's participation in Google Summer of Code (GSoC), we are now announcing our participation in Open Source Promotion Plan (OSPP) 2024.

OSPP is a program organized in large part by the Institute of Software, Chinese Academy of Sciences. Its goal is to encourage college students to participate in developing and maintaining open source software. The Rust Project is already registered and has a number of projects available for mentorship:

Eligibility is limited to students and there is a guide for potential participants. Student registration ends on the 3rd of June with the project application deadline a day later.

Unlike GSoC which allows students to propose their own projects, OSPP requires that students only apply for one of the registered projects. We do have an #ospp Zulip stream and potential contributors are encouraged to join and discuss details about the projects and connect with mentors.

After the project application window closes on June 4th, we will review and select participants, which will be announced on June 26th. From there, students will participate through to the end of September.

As with GSoC, this is our first year participating in this program. We are incredibly excited for this opportunity to further expand into new open source communities and we're hopeful for a productive and educational summer.

Support.Mozilla.OrgMake your support articles pop: Use the new Firefox Desktop Icon Gallery

Hello, SUMO community!

We’re thrilled to roll out a new tool designed specifically for our contributors: the Firefox Desktop Icon Gallery. This gallery is crafted for quick access and is a key part of our strategy to reduce cognitive load in our Knowledge Base content. By providing a range of inline icons that accurately depict interface elements of Firefox Desktop, this resource makes it easier for readers to follow along without overwhelming visual information.

We want your feedback! Join the conversation in our SUMO forum thread to ask questions or suggest new icons. Your feedback is crucial for improving this tool.

Thanks for helping us support the Firefox community. We can’t wait to see how you use these new icons to enrich our Knowledge Base!

Stay engaged and keep rocking the helpful web!


The Rust Programming Language BlogAnnouncing Rustup 1.27.1

The Rustup team is happy to announce the release of Rustup version 1.27.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rustup installed, getting Rustup 1.27.1 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get Rustup from the appropriate page on our website.

What's new in Rustup 1.27.1

This new Rustup release involves some minor bug fixes.

The headlines for this release are:

  1. Prebuilt Rustup binaries should be working on older macOS versions again.
  2. rustup-init will no longer fail when fish is installed but ~/.config/fish/conf.d hasn't been created.
  3. Regressions regarding symlinked RUSTUP_HOME/(toolchains|downloads|tmp) have been addressed.

Full details are available in the changelog!

Rustup's documentation is also available in the Rustup Book.


Thanks again to all the contributors who made Rustup 1.27.1 possible!

  • Anas (0x61nas)
  • cuiyourong (cuiyourong)
  • Dirkjan Ochtman (djc)
  • Eric Huss (ehuss)
  • eth3lbert (eth3lbert)
  • hev (heiher)
  • klensy (klensy)
  • Chih Wang (ongchi)
  • Adam (pie-flavor)
  • rami3l (rami3l)
  • Robert (rben01)
  • Robert Collins (rbtcollins)
  • Sun Bin (shandongbinzhou)
  • Samuel Moelius (smoelius)
  • vpochapuis (vpochapuis)
  • Renovate Bot (renovate)

The Rust Programming Language BlogAutomatic checking of cfgs at compile-time

The Cargo and Compiler teams are delighted to announce that starting with Rust 1.80 (or nightly-2024-05-05), every reachable #[cfg] will be automatically checked to ensure that it matches the expected config names and values.

This can help with verifying that the crate is correctly handling conditional compilation for different target platforms or features. It ensures that the cfg settings are consistent between what is intended and what is used, helping to catch potential bugs or errors early in the development process.

This addresses a common pitfall for new and advanced users.

This is another step in our commitment to providing user-focused tooling, and we are eager and excited to finally see it fixed, after more than two years since the original RFC 3013.

A look at the feature

Every time a Cargo feature is declared, that feature is transformed into a config that is passed to rustc (the Rust compiler), so that it can check, along with the well-known cfgs, whether any of the #[cfg], #![cfg_attr] and cfg! conditions use unexpected configs, and report a warning with the unexpected_cfgs lint.


[package]
name = "foo"

[features]
lasers = []
zapping = []


#[cfg(feature = "lasers")]  // This condition is expected
                            // as "lasers" is an expected value
                            // of the `feature` cfg
fn shoot_lasers() {}

#[cfg(feature = "monkeys")] // This condition is UNEXPECTED
                            // as "monkeys" is NOT an expected
                            // value of the `feature` cfg
fn write_shakespeare() {}

#[cfg(windosw)]             // This condition is UNEXPECTED
                            // it's supposed to be `windows`
fn win() {}

Running cargo check on this example reports the two unexpected conditions via the unexpected_cfgs lint.


Expecting custom cfgs

UPDATE: This section was added with the release of nightly-2024-05-19.

From Cargo's point of view, a custom cfg is one that is neither defined by rustc nor by a Cargo feature. Think of tokio_unstable, has_foo, ... but not feature = "lasers", unix or debug_assertions.

Some crates might use custom cfgs, like loom, fuzzing or tokio_unstable, that they expect from the environment (RUSTFLAGS or other means) and which are always statically known at compile time. For those cases, Cargo provides, via the [lints] table, a way to statically declare those cfgs as expected.

Defining those custom cfgs as expected is done through the special check-cfg config under [lints.rust.unexpected_cfgs]:


[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(loom)', 'cfg(fuzzing)'] }

Custom cfgs in build scripts

On the other hand, some crates use custom cfgs that are enabled by some logic in the crate's build.rs. For those crates, Cargo provides a new instruction: cargo::rustc-check-cfg (or cargo:rustc-check-cfg for older Cargo versions).

The syntax to use is described in the rustc book section checking configuration, but in a nutshell the basic syntax of --check-cfg is:

cfg(name, values("value1", "value2", ..., "valueN"))

Note that every custom cfg must always be expected, regardless of whether the cfg is active or not! As an example, in a build.rs:

fn main() {
    println!("cargo::rustc-check-cfg=cfg(has_foo)");
    //        ^^^^^^^^^^^^^^^^^^^^^^ new with Cargo 1.80
    if has_foo() {
        println!("cargo::rustc-cfg=has_foo");
    }
}

Each cargo::rustc-cfg should have an accompanying unconditional cargo::rustc-check-cfg directive to avoid warnings like this: unexpected cfg condition name: has_foo.

Equivalence table

cargo::rustc-cfg         cargo::rustc-check-cfg
foo                      cfg(foo) or cfg(foo, values(none()))
foo=""                   cfg(foo, values(""))
foo="bar"                cfg(foo, values("bar"))
foo="1" and foo="2"      cfg(foo, values("1", "2"))
foo="1" and bar="2"      cfg(foo, values("1")) and cfg(bar, values("2"))
foo and foo="bar"        cfg(foo, values(none(), "bar"))

More details can be found in the rustc book.

Frequently asked questions

Can it be disabled?

For Cargo users, the feature is always on and cannot be disabled, but like any other lint it can be controlled: #![warn(unexpected_cfgs)].

Does the lint affect dependencies?

No, like most lints, unexpected_cfgs will only be reported for local packages thanks to cap-lints.

How does it interact with the RUSTFLAGS env?

You should be able to use the RUSTFLAGS environment variable like before. Currently --cfg arguments are not checked, only their usage in code is.

This means that doing RUSTFLAGS="--cfg tokio_unstable" cargo check will not report any warnings, unless tokio_unstable is used within your local crates, in which case the crate author will need to make sure that that custom cfg is expected with cargo::rustc-check-cfg in the build.rs of that crate.

How to expect custom cfgs without a build.rs?

UPDATE: Cargo with nightly-2024-05-19 now provides the [lints.rust.unexpected_cfgs.check-cfg] config to address the statically known custom cfgs.

There is currently no way to expect a custom cfg other than with cargo::rustc-check-cfg in a build.rs.

Crate authors that don't want to use a build.rs and cannot use [lints.rust.unexpected_cfgs.check-cfg] are encouraged to use Cargo features instead.
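As a sketch, code gated on a hypothetical custom cfg(has_lasers) could instead be gated on a Cargo feature, which Cargo declares as expected automatically:

```toml
[features]
# Gate the code with #[cfg(feature = "lasers")] instead of #[cfg(has_lasers)];
# every declared feature becomes an expected value of the `feature` cfg.
lasers = []
```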

How does it interact with other build systems?

Non-Cargo based build systems are not affected by the lint by default. Build system authors that wish to have the same functionality should look at the rustc documentation for the --check-cfg flag for a detailed explanation of how to achieve the same functionality.

  1. The stabilized implementation and RFC 3013 diverge significantly; in particular, there is only one form for --check-cfg: cfg() (instead of values() and names(), which were incomplete and subtly incompatible with each other).

  2. cargo::rustc-check-cfg will start working in Rust 1.80 (or nightly-2024-05-05). From Rust 1.77 to Rust 1.79 (inclusive) it is silently ignored. In Rust 1.76 and below a warning is emitted when used without the unstable Cargo flag -Zcheck-cfg.

Don MartiAn easy experiment to support behavioral advertising

This is a follow-up to a previous post on how a majority of US residents surveyed are now using an ad blocker, and how the survey found that privacy concerns are now the number one reason to block ads.

Almost as long as Internet privacy tools have been a thing, so have articles from personalized ad proponents telling us not to use them, because personalized ads are good, actually. The policy debate over personalized (or surveillance, or cross-context behavioral, or tracking-based, or whatever you want to call it) advertising seems to keep repeating an endless argument: on the one hand, personalized advertising causes some risk or cost (I’m not going to summarize the risks or costs here; go read Bob Hoffman’s books or Microtargeting as Information Warfare for more info), but on the other hand we have to somehow balance that against the benefits of personalized advertising.

Benefits? Let’s see them. The claim that cross-context behavioral advertising is good for consumers should be straightforward to test. If ad personalization really helps match buyers and sellers in a market, then users of privacy tools and privacy settings must be buying worse products and services. Research should show that the more privacy options you pick, the less happy you are with your stuff. And the more personalized your ad experience is, the more satisfied of a customer you are. This is different from asking whether or not people prefer to have ad personalization turned on. That has been pretty extensively covered, and the answer is that some people do, and some people don’t. This question isn’t about whether people like personalized ads or not, it’s about whether people who get more personalized ads are happier with how they spend their money.

This should be a fairly low-cost project because in general, the companies that do the most personalized advertising are in the best position to do the research to support it. Are users of privacy tools and settings more or less satisfied with the products and services they buy than people who leave the personalized ad options on?

  • Do privacy-protected users give lower ratings to the products they buy?

  • Do privacy-protected users return or stop using more of their purchases?

  • Are privacy-protected users more likely to buy a replacement, competing product after an unsuccessful first purchase in a category?

  • Are privacy-protected users more likely to agree with general statements about a decline in quality and trustworthiness in business in general?

The correlation between more privacy and less satisfied consumer would be detectable from a variety of angles. Vendors of browsers with preferences that affect ad targeting should be able to show that people who turn on the privacy settings are somehow worse off than people who don’t. Anti-adblock companies do research on ad blocker users—so how are shopping experiences different for those users? Any product that connects to a server for updates or telemetry is providing data on how long the buyer chooses to keep using it. And—the biggest opportunity here—any company that has an Apple iOS app (and that’s a lot of companies) should be able to compare satisfaction metrics between customers with App Tracking Transparency (ATT) on or off.

Ad platforms, search engines, social network companies, and online retailers all have access to the needed info on ads, privacy settings, locations, and purchases. Best of all, they’re constantly running customer surveys and experiments of all kinds. It would be straightforward for any of these companies to run yet another user satisfaction survey, to prove what should be an obvious, measurable effect. I’m really looking for any kind of research here, whether it’s a credit card company running a SQL query on existing data to point out that customers with iOS app tracking turned off have more chargebacks, or a longer-term customer satisfaction study, anything.

looking at the data we do have

Update 16 May 2024: Balancing User Privacy and Personalization by Malika Korganbekova and Cole Zuber. This study simulated the effects of a privacy feature by truncating browsing history for some Wayfair shoppers, and found that people who were assigned to the personalized group and chose a product personalized to them were 10% less likely to return it than people in the non-personalized group.

The Welfare Effects of Ad Blocking by Lin et al. was different—members of the treatment group got an ad blocker affecting all sites, not just one retail site.

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

The ad blockers used in that study, however, were multi-purpose ones such as uBlock Origin that block ads in general, not just personalization.

The effect of privacy settings on scams goes two ways: you can avoid being specifically targeted for a scam, but more likely you can also just get more scam ads by default if you feed in too little info to be targeted for the good ads.

The Internet as a whole varies much more in seller honesty than the Wayfair platform does, which might help explain the difference in customer satisfaction between the Korganbekova and Zuber paper and the Lin et al. paper. Lin et al. showed that people were more satisfied as customers when receiving fewer ads in total, but they might have been even less satisfied if they had received more of the lower-quality ads that you’re more likely to get when adtech firms don’t have enough data to target you for a bigger-budget campaign.

Another related paper is Behavioral advertising and consumer welfare: An empirical investigation.

The presence of low quality vendors, along with the recent increase in the use of ad blockers, makes it increasingly difficult for new, high quality vendors, to reach new clients. Consumers benefit from having access to new sellers that are able to meet their needs through behavioral ads, as long as they are good sellers.


targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

If you look back on the history of advertising, there has never been an ad medium that required so much legal and technical complexity to try to get people to accept it. Why is Meta going to so much trouble to try to come up with a legal way to require people in the EU to accept personalized ads? If ad personalization is so good for consumers, won’t they pick it on their own? Anyway, I’m looking for research on how personalization and privacy choices affect customer satisfaction.


free riding on future web ads?

Reputation, signaling, and targeted ads

B L O C K in the U S A

banning surveillance advertising

privacy economics sources

When can deceptive sellers outbid honest sellers for ad impressions?

Adrian GaudebertThe challenges of teaching a complex game

When I was 13, my mom bought me Civilization III from a retail shop, then went on to do some more shopping. I stayed in the car, with this elegant box in my hands, craving to play the game it contained. I opened the box, and there discovered something magical: the Civilization III Manual. Having nothing better to do, I started reading it…

The Civilization III manual <figcaption>That game manual was THICK.</figcaption>

More than 20 years later, I still remember how great reading that book felt. I was propelled into the game, learning about its systems and strategies, discovering screens of foggy maps and world wonders. It made me love the game before I had even played it! Since then I've played all Civilization games that came out — including Humankind, the unofficial 7th episode — and loved all of them. Would I have had the same connection to these games had I not read the manual? Impossible to tell. Would I have read that book had I not been trapped in a car with the game box on my lap? Definitely not! Even the developers of the game knew that nobody was reading those texts:

A quote from the Civilization III manual <figcaption>“The authors and developers of computer games know too well that most players never read the manual.”</figcaption>

Here's me now, 20-something years later, having made a game of my own and needing to teach it to potential players… Should I write a full-blown game manual, hoping that a little 13-year-old will read it in a parking lot?

Heck no! Ain't nobody got time for that!

Let's make a tutorial instead

Dawnmaker has been built almost like a board game, in the sense that it has complex rules that you have to learn before you can play. Physical board game players are used to that: someone has to go through the rules before they can explain them to the rest of their player group. But video games are a different beast, and we've long moved away from reading… well, almost anything at all, really, and certainly not rules. You can't put each player into a car in a parking lot with nothing else to do other than read the rules of your game. If you were to present the video game player with a rules book, in today's world of abundance, they would just move on to the next game in their unending backlog.

Teaching a game is thus incredibly difficult: it has to have as little text as possible, it has to be fun and rewarding, and it has to hook the player so that, by the end of the teaching phase, they still want to play the actual game.

It's with all those things in mind that I started building Dawnmaker's tutorial. I set two main rules in place: first, use as few words as possible, and second, make the player learn while doing. The first iteration of the tutorial was very terse: you only had a small goal written at the top of the screen, and almost no explanation whatsoever about what you were to do, or why. It turns out that didn't work too well. Players were lost, especially when it came to the most complex actions or features of the game. Past a certain point in the tutorial, almost all of the players stopped reading the objectives at the top of the screen. And finally, they were also lacking a sense of purpose.
So for all my good intentions, I had to revise my approach and write more words. The second iteration, which is now live in the game and demo, has a lot of small tooltips that pop up around the screen as the interface reveals itself. I've tried to dole out information as slowly as possible, giving the player only what they need at a given moment. I think I approximately quadrupled the number of words in the tutorial, but such is the reality of teaching a complex game.

The other big change I made was to give the player a better sense of progression in the tutorial. The objectives now stay visible in a box on the left-hand side of the screen. They have little animations and sounds that reward the player when they complete a task. Seeing that list grow shows how the player has progressed and is also rewarding by itself.

Teaching the game doesn't only happen in the tutorial though, but also through the various signs and feedback we put around the game. Here's an example: during the tutorial, new players did not understand what was happening with the new building choice that was presented. The solution to this was not to explain with words what those buildings were, but to show feedback. Now, whenever you gain a new building, you see that same building pop up in the center of the board, then move toward the buildings roster. It's a double win: players understand that the building goes somewhere, they see where, and they are inclined to check that place and see what it is. I guess one piece of feedback is worth a thousand words?

This version of the tutorial is still far from perfect. But it is the first thing players interact with, and thus it is a piece of the game that really has to shine. We'll keep collecting feedback from new players, and use that to polish the tutorial until, like Eclairium, it shines bright.

BTW: unlike Eclairium, diamonds do not shine, they simply reflect light. Rihanna has been lying to us all.

Next event: Geektouch in Lyon

If you're in Lyon or close to it, come and meet us at the Geektouch / Japan Touch festival in Eurexpo on May 4th and 5th! We'll have a stand on the Indie Game Lab space (lot A87). You will of course get to play with the latest version of Dawnmaker. We hope to see you there!

This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill the form. You'll receive regular stories about how we're making this game, the latest news of its development, as well as an exclusive access to Dawnmaker's alpha version!

Join our community!

Wil ClouserI made a new hack poster

I was feeling nostalgic a couple months ago and built a hack poster out of plywood. It’s mostly modeled after the original but I added the radio tower and changed the words. “This technology could fall into the right hands” still makes me smile when I see it out in the world.

Poster hanging on the wall Close-up of radio tower Close-up of lettering

Mozilla Addons Blog1000+ Firefox for Android extensions now available

The new open ecosystem of extensions on Firefox for Android launched in December with just over 400 extensions. Less than five months later we’ve surpassed 1,000 Firefox for Android extensions. That’s an impressive achievement by this developer community! It’s exciting to see so many developers embrace the opportunity to explore new creative possibilities for mobile browser customization.

If you’re a developer intrigued to learn more about building extensions on Firefox for Android, here’s a great place to get started. Or maybe you already have some feedback about missing APIs on Firefox for Android?

What are some of your favorite new Firefox for Android extensions? Drop some props in the comments below.

The post 1000+ Firefox for Android extensions now available appeared first on Mozilla Add-ons Community Blog.

Mozilla Localization (L10N)L10n report: May 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New content and projects

What’s new or coming up in Firefox desktop

To start, a “logistical” announcement: on April 29 we changed the configuration of the Firefox project in Pontoon to use a different repository for source (English) strings. This is part of a larger change that will move Firefox development from Mercurial to Git.

While the change was mostly transparent for localizers, there is an added benefit: as part of the Firefox project, you will now be able to localize about 40 strings that are used by GeckoView, the core of our Android browsers (Firefox, Focus). For your convenience, these are grouped in a specific tag called GeckoView. Since these are mostly old strings dating back to Fennec (Firefox for Android up to version 68), you will also find that existing translations have been imported — in fact, we imported over 4 thousand translations.

Going back to Firefox desktop, version 127 is currently in Nightly, and will move to Beta on May 13. Over the past few weeks there have been a few new features and updates that are worth testing to ensure the best experience for users.

You are probably aware of the Firefox Translations feature available for a growing number of languages. While this feature was originally available for full-page translation, now it’s also possible to select text in the page and translate it through the context menu.

Screenshot of the translation selection feature in Firefox.

Reader Mode is also in the process of getting a redesign, with more controls to customize the user experience.

Screenshot of the Reader Mode settings in Firefox Nightly.

The New Tab page has a new wallpaper function: in order to test it, go to about:config (see this page if you’re unfamiliar), search for browser.newtabpage.activity-stream.newtabWallpapers.enabled and flip its value to true (double-click will work). At this point, open a new tab and click the gear icon in the top-right corner. Note that the available wallpapers change depending on the current theme (dark vs light).

Screenshot of New Tab wallpaper selection in Nightly.

Last but not least, make sure to test the new features available in the integrated PDF Reader, in particular the dialog to add images and highlight elements in the page.

Screenshot of the PDF Viewer in Firefox, with the "Add image" UI.

What’s new or coming up in mobile

The mobile team is currently redesigning the app menus in Firefox Android and iOS. There will be many new menu strings landing in the upcoming versions (you may have already noticed some prelanding), including some dynamic menu text that may get truncated for some locales – especially on smaller screens.

Testing for this type of localization issue will be a focus: we’ll set expectations for it soon and send testing instructions (v130 or v131 releases are currently the target). Strings will make their way incrementally into the new menus available through Firefox Nightly, allowing enough time for localizers to translate and test continuously.

What’s new or coming up in web projects

The team is creating a regular cleanup routine by labeling soon-to-be-replaced strings with an expiration date, usually two months after the string has become obsolete. This approach will minimize the time communities spend localizing strings that are no longer used. In other words, if you see a string labeled with a date, please skip it. Below is an example; in this case, you want to localize the v2 string:

example-v2 = Security, reliability and speed — on every device, anywhere you go.

# Obsolete string (expires: 2024-03-18)
example = Security, reliability and speed — from a name you can trust.

Relay Website

This product is in maintenance mode and it will not be open for new locales until we remove obsolete strings and revert the content migration to (see also l10n report from November 2023).

What’s new or coming up in SUMO

  • Konstantina is joining the SUMO force! She moved from the Marketing team to the Customer Experience team in late Q1. If you haven’t gotten to know her yet, please don’t hesitate to say hi!
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.
  • Wanna know more about what we’ve done in Q1 2024? Read the recap here.

What’s new or coming up in Pontoon

Large Language Model (LLM) Integration

We’re thrilled to announce the integration of LLM-assisted translations into Pontoon! For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab. This feature enhances Google Translate outputs by leveraging a Large Language Model (LLM). Users can now tailor translations to be more formal or informal and rephrase text for clarity and tone.

Since January, our team has conducted extensive research to explore how other localization services are utilizing AI. We specifically focused on comparing the capabilities of Large Language Models (LLMs) against traditional machine translation methods and identifying industry best practices.

Our findings revealed that while tools like Google Translate provide a solid foundation, they sometimes fall short, often translating text too literally. Recognizing the potential for improvement, we introduced functionality within Pontoon to adjust the tone and refine phrases directly.

For example, consider the phrase “Firefox has your back” translated in the Italian locale. The suggestion provided by Google’s machine translation is literal and incorrect (“Firefox covers your shoulders”). The images below demonstrate the use of the “Rephrase” option:

Screenshot of the LLM feature in Pontoon (before selecting a command).

Dropdown to use the LLM feature

Screenshot of the LLM feature in Pontoon (after selecting the rephrase command).

Enhanced translation output from the LLM rephrasing the initial Google Translate result.

Furthering our community engagement, on April 29th, we hosted a Localization Fireside Chat. During this session, we discussed the new feature in depth and provided a live demonstration. Catch the highlights of our discussion at the following recordings (the LLM feature is discussed at the 7:22 mark):

Performance improvements

At the end of last year we asked Mozilla localizers which areas of Pontoon they would like to see improved. Performance optimizations were one of the top-voted requests, and we’re happy to report we’ve landed several speedups since the beginning of the year.

The most notable improvements were made to the dashboards, with the Contributors, Insights and Tags pages now loading in a fraction of the time they took earlier in the year. We’ve also improved the loading times of the Permissions tab, the Notifications page and some filters.

As shown in the chart below, almost all the pages and actions will now take less time to load.

Chart showing the improved apdex score of several views in Pontoon.


Watch our latest localization virtual events here.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Rust Programming Language BlogAnnouncing Rust 1.78.0

The Rust team is happy to announce a new version of Rust, 1.78.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.78.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.78.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.78.0 stable

Diagnostic attributes

Rust now supports a #[diagnostic] attribute namespace to influence compiler error messages. These are treated as hints which the compiler is not required to use, and it is also not an error to provide a diagnostic that the compiler doesn't recognize. This flexibility allows source code to provide diagnostics even when they're not supported by all compilers, whether those are different versions or entirely different implementations.

With this namespace comes the first supported attribute, #[diagnostic::on_unimplemented], which can be placed on a trait to customize the message when that trait is required but hasn't been implemented on a type. Consider the example given in the stabilization pull request:

#[diagnostic::on_unimplemented(
    message = "My Message for `ImportantTrait<{A}>` is not implemented for `{Self}`",
    label = "My Label",
    note = "Note 1",
    note = "Note 2"
)]
trait ImportantTrait<A> {}

fn use_my_trait(_: impl ImportantTrait<i32>) {}

fn main() {
    use_my_trait(String::new());
}
Previously, the compiler would give a builtin error like this:

error[E0277]: the trait bound `String: ImportantTrait<i32>` is not satisfied
  --> src/
12 |     use_my_trait(String::new());
   |     ------------ ^^^^^^^^^^^^^ the trait `ImportantTrait<i32>` is not implemented for `String`
   |     |
   |     required by a bound introduced by this call

With #[diagnostic::on_unimplemented], its custom message fills the primary error line, and its custom label is placed on the source output. The original label is still written as help output, and any custom notes are written as well. (These exact details are subject to change.)

error[E0277]: My Message for `ImportantTrait<i32>` is not implemented for `String`
  --> src/
12 |     use_my_trait(String::new());
   |     ------------ ^^^^^^^^^^^^^ My Label
   |     |
   |     required by a bound introduced by this call
   = help: the trait `ImportantTrait<i32>` is not implemented for `String`
   = note: Note 1
   = note: Note 2

For trait authors, this kind of diagnostic is more useful if you can provide a better hint than just talking about the missing implementation itself. For example, this is an abridged sample from the standard library:

#[diagnostic::on_unimplemented(
    message = "the size for values of type `{Self}` cannot be known at compilation time",
    label = "doesn't have a size known at compile-time"
)]
pub trait Sized {}

For more information, see the reference section on the diagnostic tool attribute namespace.

Asserting unsafe preconditions

The Rust standard library has a number of assertions for the preconditions of unsafe functions, but historically they have only been enabled in #[cfg(debug_assertions)] builds of the standard library to avoid affecting release performance. However, since the standard library is usually compiled and distributed in release mode, most Rust developers weren't ever executing these checks at all.

Now, the condition for these assertions is delayed until code generation, so they will be checked depending on the user's own setting for debug assertions -- enabled by default in debug and test builds. This change helps users catch undefined behavior in their code, though the details of how much is checked are generally not stable.

For example, slice::from_raw_parts requires an aligned non-null pointer. The following use of a purposely-misaligned pointer has undefined behavior, and while if you were unlucky it may have appeared to "work" in the past, the debug assertion can now catch it:

fn main() {
    let slice: &[u8] = &[1, 2, 3, 4, 5];
    let ptr = slice.as_ptr();

    // Create an offset from `ptr` that will always be one off from `u16`'s correct alignment
    let i = usize::from(ptr as usize & 1 == 0);

    let slice16: &[u16] = unsafe { std::slice::from_raw_parts(ptr.add(i).cast::<u16>(), 2) };
    dbg!(slice16);
}

thread 'main' panicked at library/core/src/
unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX`
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread caused non-unwinding panic. aborting.

Deterministic realignment

The standard library has a few functions that change the alignment of pointers and slices, but they previously had caveats that made them difficult to rely on in practice, if you followed their documentation precisely. Those caveats primarily existed as a hedge against const evaluation, but they're only stable for non-const use anyway. They are now promised to have consistent runtime behavior according to their actual inputs.

  • pointer::align_offset computes the offset needed to change a pointer to the given alignment. It returns usize::MAX if that is not possible, but it was previously permitted to always return usize::MAX, and now that behavior is removed.

  • slice::align_to and slice::align_to_mut both transmute slices to an aligned middle slice and the remaining unaligned head and tail slices. These methods now promise to return the largest possible middle part, rather than allowing the implementation to return something less optimal like returning everything as the head slice.
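As an illustrative sketch (my own example, not from the release notes), the strengthened guarantee means code can now rely on `align_to` returning the largest possible aligned middle slice:

```rust
fn main() {
    // Seven bytes: at most one unaligned byte at the head, the rest
    // split into aligned u16s plus a possible one-byte tail.
    let bytes = [1u8, 2, 3, 4, 5, 6, 7];

    // Safety: any byte pattern is a valid u16, so this transmute is sound.
    let (head, mid, tail) = unsafe { bytes.align_to::<u16>() };

    // The three parts always cover the whole slice...
    assert_eq!(head.len() + 2 * mid.len() + tail.len(), bytes.len());
    // ...and the middle part is now guaranteed to be as large as
    // possible: at most one leftover byte on each side.
    assert!(head.len() < 2 && tail.len() < 2);
    println!("head={} mid={} tail={}", head.len(), mid.len(), tail.len());
}
```

Before 1.78, an implementation was technically allowed to return the whole input as the head slice, which made portable code hard to write; now the split depends only on the actual address and alignment of the input.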

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

  • As previously announced, Rust 1.78 has increased its minimum requirement to Windows 10 for the following targets:
    • x86_64-pc-windows-msvc
    • i686-pc-windows-msvc
    • x86_64-pc-windows-gnu
    • i686-pc-windows-gnu
    • x86_64-pc-windows-gnullvm
    • i686-pc-windows-gnullvm
  • Rust 1.78 has upgraded its bundled LLVM to version 18, completing the announced u128/i128 ABI change for x86-32 and x86-64 targets. Distributors that use their own LLVM older than 18 may still face the calling convention bugs mentioned in that post.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.78.0

Many people came together to create Rust 1.78.0. We couldn't have done it without all of you. Thanks!

Dave TownsendTests don't replace Code Review

Featured image of post Tests don't replace Code Review

I frequently see a bold claim come up in tech circles. That as a team you’re wasting time by doing code reviews. You should instead rely on automated tests to catch bugs. This surprises me because I can’t imagine anyone thinking that such a blanket statement is true. But then most of the time this is brought up in places like Twitter where nuance is impossible and engagement farming is rife. Still it got me thinking about why I think code review is important even if you have amazing tests.

Before I elaborate I’ll point out what should be obvious. Different projects have different needs. You shouldn’t listen to me tell you that you must do code review any more than you should listen to anyone else tell you that you must not do code review. Be pragmatic in all things. Beware one-size-fits-all statements (in almost any context).

We’ve been religiously performing code review on every (well almost every) patch at Mozilla since well before I joined the project which was quite some time ago. And in that time I’ve seen Firefox go from having practically no automated tests to a set of automated test suites that if run end to end on a single machine (which is impossible but let’s ignore that) would take nearly two months (😱) to complete. And in that time I don’t think I’ve ever heard anyone suggest we should stop doing code review for anything that actually ships to users (we do allow documentation changes with no review). Why?

# A good set of automated tests doesn’t just magically appear

Let’s start with the obvious.

Someone has to have written all of those tests, and someone else has to have verified them. And even if your test suite is already perfect, how do you know that the developer building a new feature has also included the tests necessary to verify that feature going forward?

There are some helpful tools that exist, like code coverage. But these are more informative than indicative. Useful to track but should rarely be used by themselves.

# Garbage unmaintainable code passes tests

There are usually many ways to fix a bug or implement a feature. Some of those will be clear readable code with appropriate comments that a random developer in three years time can look at and understand quickly. Others will be spaghetti code that is to all intents and purposes obfuscated. Got a bug in there? It may take ten times longer to fix it. Lint rules can help with this to some extent, but a human code reviewer is going to spot unreadable code a mile away.

# You cannot test everything

It’s often not feasible to test for every possible case. Anywhere your code interacts with anything outside of itself, like a filesystem or a network, is going to have cases that are really hard to simulate. What if memory runs out at a critical moment? What if the OS suddenly decides that the disk is not writable? These are cases we have to handle all the time in Firefox. You could say we should build abstractions around everything so that tests can simulate all those cases. But abstractions are not cheap and performance is pretty critical for us.
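To make the trade-off concrete, here is a minimal Rust sketch (hypothetical trait and names, not Firefox code) of the kind of abstraction that would let a test simulate an unwritable disk — the seam is cheap in this toy, but every such layer adds indirection to a hot path:

```rust
use std::io;

// A hypothetical seam: production code writes through this trait,
// so a test double can simulate a full or read-only disk.
trait Storage {
    fn write(&mut self, path: &str, data: &[u8]) -> io::Result<()>;
}

struct FailingDisk;

impl Storage for FailingDisk {
    fn write(&mut self, _path: &str, _data: &[u8]) -> io::Result<()> {
        // Simulate the OS deciding the disk is not writable.
        Err(io::Error::new(io::ErrorKind::PermissionDenied, "read-only"))
    }
}

// The code under test must handle the error instead of panicking.
fn save_profile(storage: &mut dyn Storage) -> bool {
    storage.write("profile.dat", b"data").is_ok()
}

fn main() {
    let mut disk = FailingDisk;
    assert!(!save_profile(&mut disk));
    println!("write failure handled gracefully");
}
```

A real codebase would need this seam around every filesystem, network, and allocation site to cover all such cases, which is exactly the cost the paragraph above is pointing at.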

# But I’m a 100x developer, none of this applies to me

I don’t care how senior a developer you are, you’ll make mistakes. I sure do. Now it’s true that there is something to be said for adjusting your review approach based on the developer who wrote the code. If I’m reviewing a patch by a junior developer I’m going to go over the patch with a fine-tooth comb and then when they re-submit I’m going to take care to make sure they addressed all my changes. Less so with a senior developer who I know knows the code at hand.

# So do tests help with code review at all?


Tests are there to automatically spot problems, ideally before a change even reaches the review stage. Code review is there to fill in the gaps. You can mostly skip over worrying about whether this breaks well tested functionality (just don’t assume all functionality is well tested!). Instead you can focus on what the change is doing that cannot be tested:

  • Is it actually fixing the problem at hand?
  • Does it include appropriate changes to the automated tests?
  • Is the code maintainable?
  • Is the approach going to cause problems for other changes down the road?
  • Could there be performance issues?

Code review and automated tests are complementary. I believe you’ll get the best results when you employ both sensibly. Assuming you have the resources to do so, of course. I don’t think large projects can do without both.

Mozilla ThunderbirdThunderbird Monthly Development Digest: April 2024

Graphic with text "Thunderbird Development Digest April 2024," featuring abstract ASCII art on a dark Thunderbird logo background.

Hello Thunderbird Community, and welcome back to the monthly Thunderbird development digest. April just ended and we’re running at full speed into May. We’re only a couple of months away from the next ESR, so things are landing faster and we’re seeing the finalization of a lot of parallel efforts.

20-Year-Old bugs

Something that has been requested for almost 20 years finally landed on Daily. The ability to control the display of recipients in the message list and better distinguish unknown addresses from those saved in the Address Book was finally implemented in Bug 243258 – Show email address in message list.

This is one of many examples of features that in the past were very complicated and tricky to implement, but that we were finally able to address thanks to improvements in our architecture and our ability to work with more flexible and modular code.

We’re aiming to go through those very old requests and slowly address them when possible.

Exchange alpha

More Exchange support improvements and features are landing on Daily almost…daily (pun intended). If you want to test things with a local build, you can follow this overview from Ikey.

We will soon look at the possibility of enabling Rust builds by default, making sure that all users will be able to consume our Rust code from the next beta, only needing to switch a pref in order to test Exchange.

Folder compaction

If you’ve been tracking our most recent struggles, you’re probably aware of one lingering annoying issue: user profiles ballooning in size due to local folder corruption.

Ben dive-bombed into the code and found a spaghetti mess that was hard to untangle. You can read more about his exploration and discoveries in his recent post on TB-Planning.

We’re aiming to land this code hopefully before the end of the week and start calling for some testing and feedback from the community to ensure that all the various issues have been addressed correctly.

You can follow the progress in Bug 1890448 – Rewrite folder compaction.

Cards View

If you’re running Beta or Daily, you might have noticed some very fancy new UI for the Cards View. This is the culmination of many weeks of UX analysis to ensure flexible and consistent hover, selection, and focus states.

Micah and Sol identified a total of 27 different interaction states on that list, and implementing visual consistency while guaranteeing optimal accessibility levels for all operating systems and potential custom themes was not easy.

We’re very curious to hear your feedback.

Context menu

A more refined and updated context menu for the message list also landed on Daily.

A very detailed UX exploration and overview of the implementation was shared on the UX Mailing list a while ago.

This update is only the first step of many more to come, so we apologize in advance if some things are not super polished or things seem temporarily off.

ESR Preview

If you’re curious about what the next ESR will look like, or want to check out new features, please consider downloading and installing Beta (preferably in another directory, so you don’t override your current profile). Help us test this new upcoming release and find bugs early.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

See ya next month.

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: April 2024 appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 545

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub, and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is efs, a no-std ext2 filesystem implementation with plans to add other file systems in the future.

Another week completely devoid of suggestions, but llogiq stays hopeful he won't have to dig for next week's crate all by himself.

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

  • No Calls for papers or presentations were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

CFP - Speakers

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the submission website through a PR to TWiR.

Updates from the Rust Project

409 pull requests were merged in the last week

Rust Compiler Performance Triage

Several non-noise changes this week, with both improvements and regressions among them. Overall compiler performance is roughly neutral across the week.

Triage done by @simulacrum. Revision range: a77f76e2..c65b2dc9

2 Regressions, 2 Improvements, 3 Mixed; 1 of them in rollups. 51 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-01 and 2024-05-29 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

"I'll never!" "No, never is in the 2024 Edition." "But never can't be this year, it's never!" "Well we're trying to make it happen now!" "But never isn't now?" "I mean technically, now never is the unit." "But how do you have an entire unit if it never happens?"

Jubilee on Zulip

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

William DurandMoziversary #6

Today is my sixth Moziversary 🎂 I joined Mozilla as a full-time employee on May 1st, 2018. I previously blogged in 2019, 2020, 2021, 2022, and 2023.

Last year, I mainly contributed to Firefox for Android as the lead engineer on a project called “Add-ons General Availability (GA)”. The goal was to allow for more add-ons on this platform. Success! More than a thousand extensions are now available on Android 🎉

In addition, I worked on a Firefox feature called Quarantined Domains and implemented a new abuse report form on addons.mozilla.org (AMO) to comply with the Digital Services Act (DSA). I was also involved in two other cross-team efforts related to the Firefox installation funnel. I investigated various issues (e.g. this openSUSE bug), and I coordinated the deprecation of weak add-on signatures and some more changes around certificates lately, which is why I wrote xpidump.

Phew! There is no shortage of work.

When I moved to the WebExtensions team in 2022, I wrote about this incredible challenge. I echoed this sentiment several months later in my 2022 Moziversary update. I couldn’t imagine how much I would achieve in two years…

Back then, I didn’t know what the next step in my career would be. I have been aiming to bridge the gap between the AMO and WebExtensions engineering teams since at least 2021 and that is my “next step”.

I recently took a new role as Add-ons Tech Lead. This is the continuation of what I’ve been doing for some time but that comes with new challenges and opportunities as well. We’ll see how it goes but I am excited!

I’ll be forever grateful to my manager and coworkers. Thank you ❤️

Don Martiblog fix: remove stray files

Another update from the blog. Quick recap: I’m re-doing this blog with mostly Pandoc and make, with a few helper scripts.

This is a personal web site and can be broken sometimes, and one of the breakage problems was: oops, I removed a draft post from the directory of source files (in CommonMark) but the HTML version got built and put in public and copied to the server, possibly also affecting the index.html and the RSS feed.

If you’re reading the RSS and got some half-baked drafts, that’s why.

So, to fix it, I need to ask make if there’s anything in the public directory that doesn’t have a corresponding source file or files and remove it. Quick helper script:
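A sketch of one way this could look, as a plain shell helper invoked from the Makefile rather than pure make logic. The `src/` directory name and `.md` extension are assumptions (the post only says the sources are CommonMark and the output lands in `public/`), and `index.html` is skipped since it is generated from the whole site rather than a single source:

```shell
#!/bin/sh
# remove_strays: delete built HTML in public/ whose CommonMark source
# in src/ no longer exists. Assumed layout: src/NAME.md -> public/NAME.html
remove_strays() {
    for html in public/*.html; do
        [ -e "$html" ] || continue        # glob matched nothing; skip
        base=$(basename "$html" .html)
        [ "$base" = index ] && continue   # index.html has no single source
        if [ ! -e "src/$base.md" ]; then
            echo "removing stray $html"
            rm -- "$html"
        fi
    done
}

remove_strays
```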

That should mean a better RSS reading experience since you shouldn’t get it cluttered up with drafts if I make a mistake.

But I’m sure I have plenty of other mistakes I can make.


planning for SCALE 2025

Automatically run make when a file changes

Hey kids, favicon!

responsive ascii art

Bonus links

We can have a different web Nothing about the web has changed that prevents us from going back. If anything, it’s become a lot easier.

A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed (Section 230 protection for a research extension? Makes sense to me.)

Effects of Banning Targeted Advertising (The top 10 percent of Android apps for kids did better after an ad personalization policy change, while the bottom 90 percent lost revenue. If Sturgeon’s Law applies to Android apps, the average under-13 user might be better off?)

In Response To Google (Does anyone else notice more and more people working on ways to fix their personal information environment because of the search quality crisis? This blog series from Ed Zitron has some good background.)

The Rust Programming Language BlogAnnouncing Google Summer of Code 2024 selected projects

The Rust Project is participating in Google Summer of Code (GSoC) 2024, a global program organized by Google which is designed to bring new contributors to the world of open-source.

In February, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We were pleasantly surprised by the number of people who wanted to participate in these projects, and that led to many fruitful discussions with members of various Rust teams. Some of them even immediately began contributing to various repositories of the Rust Project, even before GSoC officially started!

After the initial discussions, GSoC applicants prepared and submitted their project proposals. We received 65 (!) proposals in total. We are happy to see that there was so much interest, given that this is the first time the Rust Project is participating in GSoC.

A team of mentors primarily composed of Rust Project contributors then thoroughly examined the submitted proposals. GSoC required us to produce a ranked list of the best proposals, which was a challenging task in itself since Rust is a big project with many priorities! We went through many rounds of discussions and had to consider many factors, such as prior conversations with the given applicant, the quality and scope of their proposal, the importance of the proposed project for the Rust Project and its wider community, but also the availability of mentors, who are often volunteers and thus have limited time available for mentoring.

In many cases, we had multiple proposals that aimed to accomplish the same goal. Therefore, we had to pick only one per project topic despite receiving several high-quality proposals from people we'd love to work with. We also often had to choose between great proposals targeting different work within the same Rust component to avoid overloading a single mentor with multiple projects.

In the end, we narrowed the list down to the twelve best proposals, which we felt was the maximum number that we could realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of these twelve proposals would be accepted into GSoC.

Selected projects

On the 1st of May, Google announced the accepted projects. We are happy to announce that nine of the twelve proposals we submitted were accepted by Google, and will thus participate in Google Summer of Code 2024! Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

Congratulations to all applicants whose project was selected! The mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

We would also like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited review capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current, and can serve as a general entry point for contributors who would like to work on projects that would help the Rust Project maintainers and the Rust ecosystem.

Assuming our involvement in GSoC 2024 is successful, there's a good chance we'll participate next year as well (though we can't promise anything yet) and we hope to receive your proposals again in the future! We are also planning to participate in similar programs in the very near future. Those announcements will come in separate blog posts, so make sure to subscribe to this blog so that you don't miss anything.

The accepted GSoC projects will run for several months. After GSoC 2024 finishes (in autumn of 2024), we plan to publish a blog post in which we will summarize the outcome of the accepted projects.

Support.Mozilla.OrgWhat’s up with SUMO — Q1 2024

Hi everybody,

It’s always exciting to start a new year with renewed spirit. Even more exciting because the CX team welcomed a few additional members this quarter, including Konstantina, who will be with us crafting better community experiences in SUMO. This is huge, since the SUMO community team has been under-resourced for the past few years. I’m personally super excited about this. There are a few things that we’re working on internally, and I can’t wait to share them with you all. But first things first, let’s recap what happened and what we did in Q1 2024!

Welcome note and shout-outs

  • Thanks for joining the Social and Mobile Store Support program!
  • Welcome back to Erik L and Noah. It’s good to see you more often these days.
  • Shout-outs to Noah and Sören for their observations during the 125 release so we can take prompt actions on bug1892521 and bug1892612. Also, special thanks to Paul W for his direct involvement in the war room for the NordVPN incident.
  • Thanks to Philipp for his consistency in creating a desktop thread in the contributor forum for every release. Your help is greatly appreciated!
  • Also huge thanks to everybody who is involved in the Night Mode removal issue on Firefox for iOS 124. In the end, we decided to end the experiment early, since many people raised concern about accessibility issues. This really shows the power of community and users’ feedback.

If you know someone who you’d like to feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Community news

  • As I mentioned, we started the year by onboarding Mandy, Donna, and Britney. If that’s not enough, we also welcomed Konstantina, who moved from Marketing to the CX team in March. If you haven’t got to know them, please don’t hesitate to say hi when you can.
  • AI spam has been a big issue in our forum lately, so we decided to spin up a new contributor policy around the use of AI-generated tools. Please check this thread if you haven’t!
  • We participated in FOSDEM 2024 in Brussels and it was a blast! It’s great to be able to meet face to face with many community members after a long hiatus since the pandemic. Kiki and the platform team also presented a talk in the Mozilla devroom. We also shared free cookies (not a tracking one) and talked with many Firefox fans from around the globe. All in all, it was a productive weekend, indeed.
  • We added a new capability in our KB to set restricted visibility on specific articles. This is a staff-only feature, but we believe it’s important for everybody to be aware of this. If you haven’t, please check out this thread to get to know more!
  • Please be aware of the Hubs sunset plan described in this thread.
  • We opened an AAQ for NL in our support forum. Thanks to Tim Maks and the rest of the NL community, who’ve been very supportive of this work.
  • We’ve done our usual annual contributor survey in March. Thank you to every one of you who filled out the survey and shared great feedback!
  • We changed how we communicate product release updates through bi-weekly scrum meetings. Please be aware of it by checking out this contributor thread.
  • Are you contributing to our Knowledge Base? You may want to read the recent blog posts from the content team to get to know more about what they’re up to. In short, they’re doing a lot around freshening up our knowledge base articles.

Stay updated

  • Join our discussions in the contributor forum to see what’s happening in the latest release on Desktop and mobile.
  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, and March (we canceled February)! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of our Firefox Pod Meeting from AirMozilla to catch up with the latest train release. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.
  • Consider subscribing to Firefox Daily Digest to get daily updates (Mon-Fri) about Firefox from across the internet.
  • Check out the SUMO Engineering Board to see what the platform team is cooking in the engine room. Also, check out this page to see our latest release notes.

Community stats


KB pageviews

Month Page views Vs previous month
Jan 2024 6,743,722 3.20%
Feb 2024 7,052,665 4.58%
Mar 2024 6,532,175 -7.38%
KB pageviews number is a total of English (en-US) KB pageviews

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale   Jan 2024    Feb 2024    Mar 2024    Localization progress (per Apr 23)
de       2,425,154   2,601,865   2,315,952   92%
fr       1,559,222   1,704,271   1,529,981   81%
zh-CN    1,351,729   1,224,284   1,306,699   100%
es       1,171,981   1,353,200   1,212,666   25%
ja       1,019,806   1,068,034   1,051,625   34%
ru       801,370     886,163     812,882     100%
pt-BR    661,612     748,185     714,554     42%
zh-TW    598,085     623,218     366,320     3%
it       533,071     575,245     529,887     96%
pl       489,532     532,506     454,347     84%
Locale pageviews is an overall pageview from the given locale (KB and other pages)

Localization progress is the percentage of localized articles from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jan 2024 2999 72.6% 10.8% 61.3%
Feb 2024 2766 72.4% 9.5% 65.6%
Mar 2024 2516 71.5% 10.4% 71.6%

Top 5 forum contributors in the last 90 days:

Social Support

Month Total replies Total interactions Respond conversion rate
Jan 2024 33 46 71.74%
Feb 2024 25 65 38.46%
Mar 2024 14 87 16.09%

Top 5 Social Support contributors in the past 3 months: 


Play Store Support

Month Total replies Total interactions Respond conversion rate
Jan 2024 76 276 27.54%
Feb 2024 49 86 56.98%
Mar 2024 47 80 58.75%

Top 5 Play Store contributors in the past 3 months:

Stay connected

Mozilla Privacy BlogThe UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it

In today’s digital age, an open and competitive ecosystem with a diverse range of players is essential for building a resilient economy. New products and ideas must have the opportunity to grow to give people meaningful choices. Yet this reality often falls short due to the dominance of a handful of large companies that create walled gardens by self-preferencing their services over independent competitors, limiting choice and hampering innovation.

The UK’s Digital Markets, Competition, and Consumers Bill (DMCCB) offers a unique opportunity to break down these barriers, paving the way for a more competitive and consumer-centric digital market. On the competition side, the DMCCB offers flexibility in allowing for targeted codes of conduct to regulate the behaviour of dominant players. This agile and future-proof approach makes it unique in the ex-ante interventions being considered around the world to rein in abuse in digital markets. An example of what such a code of conduct might look like in practice is the voluntary commitments given by Google to the CMA in the Privacy Sandbox case.

Mozilla, in line with our long history of supporting pro-competition regulatory interventions, supports the DMCCB and its underlying goal of fostering competition by empowering consumers. However, to truly deliver on this promise, the law must be robust, effective, and free from loopholes that could undermine its intent.

Last month, the House of Lords made some much needed improvements to the DMCCB, which are now slated to be debated in the House of Commons in late April/early May 2024. Here is a high-level overview of the key positive changes and why they should remain part of the law:

  • Time Limits: To ensure the CMA can act swiftly and decisively, its work should be free from undue political influence. This reduces opportunities for undue lobbying and provides clarity for both consumers and companies. While it would be ideal for the CMA to be able to enforce its code of conduct, Mozilla supports the House of Lords’ amendment to introduce a 40-day time limit for the Secretary of State’s approval of CMA guidance. This is a crucial step in avoiding delays and ensuring effective enforcement. The government’s acceptance of this approach and the alternative proposal of 30 working days for debate in the House of Commons is a positive sign, which we hope is reflected in the final law.
  • Proportionality: The Bill’s approach to proportionality is vital. Introducing prohibitive proportionality requirements on remedies could weaken the CMA’s ability to make meaningful interventions, undermining the Bill’s effectiveness. Mozilla endorses the current draft of the Bill from the House of Lords, which strikes a balance by allowing for effective remedies without excessive constraints.
  • Countervailing Benefits: Similarly, the countervailing benefits exemption to CMA’s remedies, while powerful, should not be used as a loophole to justify anti-competitive practices. Mozilla urges that this exemption be reserved for cases of genuine consumer benefit by restoring the government’s original requirement that such exemptions are “indispensable”, ensuring that it does not become a ‘get out of jail free’ card for dominant players.

Mozilla remains committed to supporting the DMCCB’s swift passage through Parliament and ensuring that it delivers on its promise to empower consumers and promote innovation. We launched a petition earlier today to help push the law over the finish line. By addressing the key concerns we’ve highlighted above and maintaining a robust framework, the UK can set a global standard for digital markets and create an environment where consumers are truly in charge.

The post The UK’s Digital Markets, Competition, and Consumers Bill will spark the UK’s digital economy, not stifle it appeared first on Open Policy & Advocacy.

Don Martirealistically get rid of third-party cookies

How would a browser realistically get rid of third-party cookies, if the plan was to just replace third-party cookies, and the project requirements did not include a bunch of anticompetitive tricks too?

  1. Start offering a very scary dialog to a fraction of new users. Something like Do you want to test a new experimental feature? It might—maybe—have some privacy benefits but many sites will break. Don’t expect a lot of people to agree at first.

  2. Turn off third-party cookies for the users who did say yes in step 1, and watch the telemetry. There will be positive and negative effects, but they won’t be overwhelmingly bad because most sites have to work with other browsers.

  3. When the breakage detected in step 2 gets to be insignificant as a cause of new browser users quitting or reinstalling, start making the dialog less scary and show it to more people.

  4. Keep repeating until most new installs are third-party cookie-free, then start offering the dialog on browser upgrades.

  5. Continue, for more and more users, until you get to 95-99%. Leave the third-party cookies on for 1-5% of users for a couple of releases just to spot any lingering problems, then make third-party cookies default off, with no dialog (users would have to find the preference to re-enable them, or their sysadmin would have to push out a centralized change if some legacy corporate site still needs them).

But what about the personalized ads? Some people actually want those! Not a problem. The good news is that ad personalization can be done in an extension. Ask extension developers who have extensions that support ad personalization to sign up for a registry of ad personalization extensions, then keep track of how many users are installing each one. Adtech firms don’t (usually?) have personalization extensions today, but every company can develop one on its own schedule, with less uncertainty and fewer dependencies and delays than the current end of cookies mess. The extension development tools are really good now.

As soon as an ad personalization extension can pass an independent security audit (done by a company agreed on by the extension developer and the browser vendor) and get, say, 10,000 users, then the browser can put it on a choice screen that gets shown for new installs and, if added since last upgrade, upgrades. (The browser could give the dogmatic anti-personalization users a preference to opt out of these choice screens if they really wanted to dig in and find it.) This makes the work of competition regulators much easier—they just have to check that the browser vendor’s own ad personalization extension gets fair treatment with competing ones.

And we’re done. The privacy people and the personalized ad people get what they want with much less drama and delay, the whole web ad business isn’t stuck queued up waiting for one development team, and all that’s missing is the anticompetitive stuff that has been making end of cookies work such a pain since 2019.


the 30-40-30 rule An updated list of citations to user research on how many people want personalized ads

Can database marketing sell itself to the people in the database? Some issues that an ad personalization extension might have to address in order to get installs

User tracking as Chesterton’s Fence What tracking-based advertising still offers (that alternatives don’t)

Catching up to Safari? Some features that Apple has done right, with opportunities for other browsers to think different(ly)

Bonus links

An open letter to the advertising punditry I personally got involved in the Inventory Quality reviews to make sure that the data scientists weren’t pressured by the business and could find the patterns–like ww3 [dot] forbes [dot] com–and go after them.

The Rise of Large-Language-Model Optimization The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

The Man Who Killed Google Search [M]any found that the update mostly rolled back changes, and traffic was increasing to sites that had previously been suppressed by Google Search’s “Penguin” update from 2012 that specifically targeted spammy search results, as well as those hit by an update from an August 1, 2018…

Lawsuit in London to allege Grindr shared users’ HIV status with ad firms (This is why you can safely mute anybody who uses the expression k-anonymity, the info about yourself that you most want to keep private is true for more than k other people.)

UK children bombarded by gambling ads and images online, charity warns (attention parents: copy the device rules that Big Tech big shots maintain for their own children, not what they want for yours)

Mozilla Privacy BlogWork Gets Underway on a New Federal Privacy Proposal

At Mozilla, safeguarding privacy has been core to our mission for decades — we believe that individuals’ security and privacy on the Internet are fundamental and must not be treated as optional. We have long advocated for a federal privacy law to ensure consumers have control over their data and that companies are accountable for their privacy practices.

Earlier this month, House Committee on Energy and Commerce Chair Cathy McMorris Rodgers (R-WA) and Senate Committee on Commerce, Science and Transportation Chair Maria Cantwell (D-WA) unveiled a discussion draft of the American Privacy Rights Act of 2024 (APRA). The Act is a welcome bipartisan effort to create a unified privacy standard across the United States, with the promise of finally protecting the privacy of all Americans.

At Mozilla, we are committed to the principle of data minimization – a concept that’s fundamental in effective privacy legislation – and we are pleased to see it at the core of APRA. Data minimization means we conscientiously collect only the necessary data, ensure its protection, and provide clear and concise explanations about what data we collect and why. We are also happy to see additional strong language from the American Data Privacy and Protect Act (ADPPA) reflected in this new draft, including non-discrimination provisions and a universal opt-out mechanism (though we support clarification that ensures allowance of multiple mechanisms).

However, the APRA discussion draft has open questions that must be refined. These include how APRA handles protections for children, options for strengthening data brokers provisions even further (such as a centralized mechanism for opt-out rights), and key definitions that require clarity around advertising. We look forward to engaging with policymakers as the process advances.

Achieving meaningful reform in the U.S. is long overdue. In an era where digital privacy concerns are on the rise, it’s essential to establish clear and enforceable privacy rights for all Americans. Mozilla stands ready to contribute to the dialogue on APRA and collaborate toward achieving comprehensive privacy reform. Together, we can prioritize the interests of individuals and cultivate trust in the digital ecosystem.


The post Work Gets Underway on a New Federal Privacy Proposal appeared first on Open Policy & Advocacy.

Mozilla Privacy BlogNet Neutrality is Back!

Yesterday, the Federal Communications Commission (FCC) voted 3-2 to reinstate net neutrality rules and protect consumers online. We applaud this decision to keep the internet open and accessible to all, and to reverse the 2018 roll-back of net neutrality protections. Alongside our many partners and allies, Mozilla has been a longtime proponent of net neutrality across the world and in U.S. states, and has mobilized hundreds of thousands of people over the years.

The new FCC order reclassifies broadband internet as a “telecommunications service” and prevents ISPs from blocking, throttling, or paid prioritization of traffic. This action restores meaningful and enforceable FCC oversight and protection on the internet, and unlocks innovation, competition, and free expression online.

You can read Mozilla’s submission to the FCC on the proposed Safeguarding and Securing the Open Internet rules in December 2023 here and additional reply comments in January 2024 here.

Net neutrality and openness are essential parts of how we experience the internet, and, as illustrated during the COVID pandemic, can offer important protections, so it shouldn’t come as a surprise that a large majority of Americans support it. Yesterday’s decision reaffirms that the internet is and should remain a public resource, where companies cannot abuse their market power to the detriment of consumers, and where actors large and small operate on a level playing field.

Earlier this month, Mozilla participated in a roundtable discussion with experts and allies hosted by Chairwoman Rosenworcel at the Santa Clara County Fire Department. The event location highlighted the importance of net neutrality, as the site where Verizon throttled firefighters’ internet speeds in the midst of fighting a raging wildfire. You can watch the full press conference below, and read coverage of the event here.

We thank the FCC for protecting these vital net neutrality safeguards, and we look forward to seeing the details of the final order when released.

The post Net Neutrality is Back! appeared first on Open Policy & Advocacy.

The Servo Blog: This month in Servo: Acid2 redux, Servo book, Qt demo, and more!

Servo now renders Acid2 perfectly, but like all browsers, only at 1x dpi.

Back in November, Servo’s new layout engine passed Acid1, and this month, thanks to a bug-squashing sprint by @mrobinson and @Loirooriol, we now pass Acid2!

We would also like to thank you all for your generous support! Since we moved to Open Collective and GitHub Sponsors in March, we have received 1578 USD (after fees), including 1348 USD/month (before fees) in recurring donations. This smashed our first two goals, and is a respectable part of the way towards our next goal of 10000 USD/month. For more details, see our Sponsorship page and announcement post.


We are still receiving donations from 19 people on LFX, and we’re working on transferring the balance to our new fund, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective. As always, use of these funds will be decided transparently in the Technical Steering Committee, starting with the TSC meeting on 29 April.

The Servo book, a book much like the Rust book

Servo’s docs are moving to the Servo book, and a very early version of this is now online (@delan, servo/book)! The goal is to unify our many sources of documentation, including the hacking quickstart guide, building Servo page, Servo design page, and other in-tree docs and wiki pages, into a book that’s richer and easier to search and navigate.

Servo now supports several new features in its nightly builds:

As of 2024-04-05, we now support non-autoplay <video> (@eerii, media#419, #32001), as long as the page provides its own controls, as well as the ‘baseline-source’ property (@MunishMummadi, #31904, #31913). Both of these contributors started out as Outreachy participants, and we’re thrilled to see their continued work on improving Servo.

We’ve also landed several other rendering improvements:

Our font rendering has improved, with support for selecting the correct weight and style in indexed fonts (.ttc) on Linux (@mukilan, @mrobinson, #32127), as well as support for emoji font fallback on macOS (@mrobinson, #32122). Note that color emoji are not yet supported.

Other big changes are coming to Servo’s font loading and rendering, thanks to @mrobinson’s font system redesign RFC (#32033). Work has already started on this (@mrobinson, @mukilan, #32034, #32038, #32100, #32101, #32115), with the eventual goal of making font data zero-copy readable from multiple threads. This in turn will fix several major issues with font caching, including cached font data leaking over time and between pages, unnecessary loading from disk, and unnecessary copying to layout.

We’ve also started simplifying the script–layout interface (@mrobinson, #31937, #32081), since layout was merged into the script thread, and script can now call into layout without IPC.

Embedding and dev changes

The prototype shows that Servo can be integrated with a Qt app via CXX-Qt.

A prototype for integrating Servo with Qt was built by @ahayzen-kdab and @vimpostor and shown at Embedded World 2024. We’re looking forward to incorporating their feedback from this to improve Servo’s embedding API. For more details, check out their GitHub repo and Embedding the Servo Web Engine in Qt.

Servo now supports multiple concurrent webviews (@wusyong, @delan, @atbrakhi, #31417, #32067)! This is a big step towards making Servo a viable embedded webview, and we will soon use it to implement tabbed browsing in servoshell (@delan, #31545).

Three of the slowest crates in the Servo build process are mozjs_sys, mozangle, and script. The first two compile some very large C++ libraries in their build scripts — SpiderMonkey and ANGLE respectively — and the third blocks on the first two. They can account for over two minutes of build time, even on a very fast machine (AMD 7950X), and a breaking change in newer versions of GNU Make (mozjs#375) can make mozjs_sys take over eight minutes to build!

mozjs_sys now uses a prebuilt version of SpiderMonkey by default (@wusyong, @sagudev, mozjs#450, #31824), cutting clean build times by over seven minutes on a very fast machine (see above). On Linux with Nix (the package manager), where we run an unaffected version of GNU Make, it can still save over 100 seconds on a quad-core CPU with SMT. Further savings will be possible once we do the same for mozangle.

If you use NixOS, or any Linux distro with Nix, you can now get a shell with all of the tools and dependencies needed to build and run Servo by typing nix-shell (@delan, #32035), without also needing to type etc/shell.nix.

As for CI, our experimental Android build now supports aarch64 (@mukilan, #32137), in addition to Android on armv7, x86_64, and i686, and we’ve improved flakiness in the WebGPU tests (@sagudev, #31952) and macOS builds (@mrobinson, #32005).

Conferences and events

Earlier this month, Rakhi Sharma gave her talk A year of Servo reboot: where are we now? at Open Source Summit North America (slides; recording available soon) and at the Seattle Rust User Group (slides).

In the Netherlands, Gregory Terzian will be presenting Modular Servo: Three Paths Forward at the GOSIM Conference 2024, on 6 May at 15:10 local time (13:10 UTC). That’s the same venue as RustNL 2024, just one day earlier, and you can also find Gregory, Rakhi, and Nico at RustNL afterwards. See you there!

Will Kahn-Greene: crashstats-tools v2.0.0 released!

What is it?

crashstats-tools is a set of command-line tools for working with Crash Stats.

crashstats-tools comes with four commands:

  • supersearch: for performing Crash Stats Super Search queries

  • supersearchfacet: for performing aggregations, histograms, and cardinality Crash Stats Super Search queries

  • fetch-data: for fetching raw crash, dump, and processed crash data for specified crash ids

  • reprocess: for sending crash report reprocess requests

v2.0.0 released!

There have been a lot of improvements since the last blog post for the v1.0.1 release: new commands, new features, an improved CLI UI, and more.

v2.0.0 focused on two major things:

  1. improving supersearchfacet to support nested aggregation, histogram, and cardinality queries

  2. moving some of the code into a crashstats_tools.libcrashstats module, improving its use as a library

Improved supersearchfacet

The other day, Alex and team finished the Rust rewrite of the crash reporter. The rewrite landed and is available in the Firefox nightly channel in builds where build_id >= 20240321093532.

The crash reporter is one of the clients that submits crash reports to Socorro which is now maintained by the Observability Team. Firefox has multiple crash reporter clients and there are many ways that crash reports can get submitted to Socorro.

One of the changes we can see in the crash report data now is the change in User-Agent header. The new rewritten crash reporter sends a header of crash-reporter/1.0.0. That gets captured by the collector and put in the raw crash metadata.user_agent field. It doesn't get indexed, so we can't search on it directly.

We can get a sampling of the last 100 crash reports, download the raw crash data, and look at the user agents.

16 out of 100 crash reports were submitted by the new crash reporter. We were surprised there are so many Firefox user agents. We discussed this on Slack. I loosely repeat it here because it's a great way to show off some of the changes of supersearchfacet in v2.0.0.

First, the rewritten crash reporter only affects the parent (aka main) process. The other processes have different crash reporters that weren't rewritten.

How many process types are there for Firefox crash reports in the last week? We can see that in the ProcessType annotation (docs) which is processed and saved in the process_type field (docs).

Judging by that output, I would expect to see a higher percentage of crash-reporter/1.0.0 in our sampling of 100 crash reports.

Turns out that Firefox uses different code to submit crash reports not just by process type, but also by user action. That's in the SubmittedFrom annotation (docs) which is processed and saved in the submitted_from field (docs).

What is "Auto"? The user can opt-in to auto-send crash reports. When Firefox upgrades and this setting is set, then Firefox will auto-send any unsubmitted crash reports. The nightly channel has two updates a day, so there's lots of opportunity for this event to trigger.

What're the counts for submitted_from/process_type pairs?

We can spot check these different combinations to see what the user-agent looks like.

For brevity, we'll just look at parent / Client in this blog post.

Seems like the crash reporter rewrite only affects crash reports where ProcessType=parent and SubmittedFrom=Client. All the other process_type/submitted_from combinations get submitted a different way where the user agent is the browser itself.

How many crash reports has the new crash reporter submitted over time?

There are more examples in the crashstats-tools README.

crashstats_tools.libcrashstats library

Starting with v2.0.0, you can use crashstats_tools.libcrashstats as a library for Python scripts.

For example:

libcrashstats makes using the Crash Stats API a little more ergonomic.

See the crashstats_tools.libcrashstats library documentation.

Be thoughtful about using data

Make sure to use these tools in compliance with our data policy.

Where to go for more

See the project on GitHub, which includes a README covering everything about the project, including examples of usage, the issue tracker, and the source code.

Let me know whether this helps you!

Hacks.Mozilla.Org: Llamafile’s progress, four months in

When Mozilla’s Innovation group first launched the llamafile project late last year, we were thrilled by the immediate positive response from open source AI developers. It’s become one of Mozilla’s top three most-favorited repositories on GitHub, attracting a number of contributors, some excellent PRs, and a growing community on our Discord server.

Through it all, lead developer and project visionary Justine Tunney has remained hard at work on a wide variety of fundamental improvements to the project. Just last night, Justine shipped the v0.8 release of llamafile, which includes not only support for the very latest open models, but also a number of big performance improvements for CPU inference.

As a result of Justine’s work, today llamafile is both the easiest and fastest way to run a wide range of open large language models on your own hardware. See for yourself: with llamafile, you can run Meta’s just-released LLaMA 3 model–which rivals the very best models available in its size class–on an everyday MacBook.

How did we do it? To explain that, let’s take a step back and tell you about everything that’s changed since v0.1.

tinyBLAS: democratizing GPU support for NVIDIA and AMD

llamafile is built atop the now-legendary llama.cpp project. llama.cpp supports GPU-accelerated inference for NVIDIA processors via the cuBLAS linear algebra library, but that requires users to install NVIDIA’s CUDA SDK. We felt uncomfortable with that fact, because it conflicts with our project goal of building a fully open-source and transparent AI stack that anyone can run on commodity hardware. And besides, getting CUDA set up correctly can be a bear on some systems. There had to be a better way.

With the community’s help (here’s looking at you, @ahgamut and @mrdomino!), we created our own solution: it’s called tinyBLAS, and it’s llamafile’s brand-new and highly efficient linear algebra library. tinyBLAS makes NVIDIA acceleration simple and seamless for llamafile users. On Windows, you don’t even need to install CUDA at all; all you need is the display driver you’ve probably already installed.

But tinyBLAS is about more than just NVIDIA: it supports AMD GPUs, as well. This is no small feat. While AMD commands a respectable 20% of today’s GPU market, poor software and driver support have historically made them a secondary player in the machine learning space. That’s a shame, given that AMD’s GPUs offer high performance, are price competitive, and are widely available.

One of llamafile’s goals is to democratize access to open source AI technology, and that means getting AMD a seat at the table. That’s exactly what we’ve done: with llamafile’s tinyBLAS, you can now easily make full use of your AMD GPU to accelerate local inference. And, as with CUDA, if you’re a Windows user you don’t even have to install AMD’s ROCm SDK.

All of this means that, for many users, llamafile will automatically use your GPU right out of the box, with little to no effort on your part.

CPU performance gains for faster local AI

Here at Mozilla, we are keenly interested in the promise of “local AI,” in which AI models and applications run directly on end-user hardware instead of in the cloud. Local AI is exciting because it opens up the possibility of more user control over these systems and greater privacy and security for users.

But many consumer devices lack the high-end GPUs that are often required for inference tasks. llama.cpp has been a game-changer in this regard because it makes local inference both possible and usably performant on CPUs instead of just GPUs. 

Justine’s recent work on llamafile has now pushed the state of the art even further. As documented in her detailed blog post on the subject, by writing 84 new matrix multiplication kernels she was able to increase llamafile’s prompt evaluation performance by an astonishing 10x compared to our previous release. This is a substantial and impactful step forward in the quest to make local AI viable on consumer hardware.

This work is also a great example of our commitment to the open source AI community. After completing this work we immediately submitted a PR to upstream these performance improvements to llama.cpp. This was just the latest of a number of enhancements we’ve contributed back to llama.cpp, a practice we plan to continue.

Raspberry Pi performance gains

Speaking of consumer hardware, there are few examples that are both more interesting and more humble than the beloved Raspberry Pi. For a bargain basement price, you get a full-featured computer running Linux with plenty of computing power for typical desktop uses. It’s an impressive package, but historically it hasn’t been considered a viable platform for AI applications.

Not any more. llamafile has now been optimized for the latest model (the Raspberry Pi 5), and the result is that a number of small LLMs–such as Rocket-3B (download), TinyLLaMA-1.5B (download), and Phi-2 (download)–run at usable speeds on one of the least expensive computers available today. We’ve seen prompt evaluation speeds of up to 80 tokens/sec in some cases!

Keeping up with the latest models

The pace of progress in the open model space has been stunningly fast. Over the past few months, hundreds of models have been released or updated via fine-tuning. Along the way, there has been a clear trend of ever-increasing model performance and ever-smaller model sizes.

The llama.cpp project has been doing an excellent job of keeping up with all of these new models, frequently rolling out support for new architectures and model features within days of their release.

For our part we’ve been keeping llamafile closely synced with llama.cpp so that we can support all the same models. Given the complexity of both projects, this has been no small feat, so we’re lucky to have Justine on the case.

Today, you can use the very latest and most capable open models with llamafile thanks to her hard work. For example, we were able to roll out llamafiles for Meta’s newest LLaMA 3 models–8B-Instruct and 70B-Instruct–within a day of their release. With yesterday’s 0.8 release, llamafile can also run Grok, Mixtral 8x22B, and Command-R.

Creating your own llamafiles

Since the day llamafile shipped, people have wanted to create their own llamafiles. Previously, this required a number of steps, but today you can do it with a single command, e.g.:

llamafile-convert [model.gguf]

In just moments, this will produce a “model.llamafile” file that is ready for immediate use. Our thanks to community member @chan1012 for contributing this helpful improvement.

In a related development, Hugging Face recently added official support for llamafile within their model hub. This means you can now search and filter Hugging Face specifically for llamafiles created and distributed by other people in the open source community.

OpenAI-compatible API server

Since it’s built on top of llama.cpp, llamafile inherits that project’s server component, which provides OpenAI-compatible API endpoints. This enables developers who are building on top of OpenAI to switch to using open models instead. At Mozilla we very much want to support this kind of future: one where open-source AI is a viable alternative to centralized, closed, commercial offerings.

While open models do not yet fully rival the capabilities of closed models, they’re making rapid progress. We believe that making it easier to pivot existing code over to executing against open models will increase demand and further fuel this progress.

Over the past few months, we’ve invested effort in extending these endpoints, both to increase functionality and improve compatibility. Today, llamafile can serve as a drop-in replacement for OpenAI in a wide variety of use cases.

We want to further extend our API server’s capabilities, and we’re eager to hear what developers want and need. What’s holding you back from using open models? What features, capabilities, or tools do you need? Let us know!

Integrations with other open source AI projects

Finally, it’s been a delight to see llamafile adopted by independent developers and integrated into leading open source AI projects (like Open Interpreter). Kudos in particular to our own Kate Silverstein who landed PRs that add llamafile support to LangChain and LlamaIndex (with AutoGPT coming soon).

If you’re a maintainer or contributor to an open source AI project that you feel would benefit from llamafile integration, let us know how we can help.

Join us!

The llamafile project is just getting started, and it’s also only the first step in a major new initiative on Mozilla’s part to contribute to and participate in the open source AI community. We’ll have more to share about that soon, but for now: I invite you to join us on the llamafile project!

The best place to connect with both the llamafile team at Mozilla and the overall llamafile community is over at our Discord server, which has a dedicated channel just for llamafile. And of course, your enhancement requests, issues, and PRs are always welcome over at our GitHub repo.

I hope you’ll join us. The next few months are going to be even more interesting and unexpected than the last, both for llamafile and for open source AI itself.


The post Llamafile’s progress, four months in appeared first on Mozilla Hacks - the Web developer blog.

Niko Matsakis: Sized, DynSized, and Unsized

Extern types have been blocked for an unreasonably long time on a fairly narrow, specialized question: Rust today divides all types into two categories — sized, whose size can be statically computed, and unsized, whose size can only be computed at runtime. But for external types what we really want is a third category, types whose size can never be known, even at runtime (in C, you can model this by defining structs with an unknown set of fields). The problem is that Rust’s ?Sized notation does not naturally scale to this third case. I think it’s time we fixed this. At some point I read a proposal — I no longer remember where — that seems like the obvious way forward and which I think is a win on several levels. So I thought I would take a bit of time to float the idea again, explain the tradeoffs I see with it, and explain why I think the idea is a good change.

TL;DR: write T: Unsized in place of T: ?Sized (and sometimes T: DynSized)

The basic idea is to deprecate the ?Sized notation and instead have a family of Sized supertraits. As today, the default is that every type parameter T gets a T: Sized bound unless the user explicitly chooses one of the other supertraits:

/// Types whose size is known at compilation time (statically).
/// Implemented by (e.g.) `u32`. References to `Sized` types
/// are "thin pointers" -- just a pointer.
trait Sized: DynSized { }

/// Types whose size can be computed at runtime (dynamically).
/// Implemented by (e.g.) `[u32]` or `dyn Trait`.
/// References to these types are "wide pointers",
/// with the extra metadata making it possible to compute the size
/// at runtime.
trait DynSized: Unsized { }

/// Types that may not have a knowable size at all (either statically or dynamically).
/// All types implement this, but extern types **only** implement this.
trait Unsized { }

Under this proposal, T: ?Sized notation could be converted to T: DynSized or T: Unsized. T: DynSized matches the current semantics precisely, but T: Unsized is probably what most uses actually want. This is because most users of T: ?Sized never compute the size of T but rather just refer to existing values of T by pointer.
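Because `Sized` is a built-in lang item that user code cannot redefine, the proposed family can only be mocked with stand-in names. The sketch below (with hypothetical traits `AnySize`, `DynSize`, and `StaticSize` playing the roles of `Unsized`, `DynSized`, and `Sized`) compiles on stable Rust today and shows how the supertrait chain and blanket impls would relate:

```rust
// Hypothetical stand-ins for the proposed `Unsized`/`DynSized`/`Sized`
// family; the real `Sized` is a compiler built-in and cannot be
// redefined in user code.
trait AnySize {}
trait DynSize: AnySize {}
trait StaticSize: DynSize {}

// All of today's types are at least dynamically sized; under the
// proposal, only extern types would stop at the `AnySize` level.
impl<T: ?Sized> AnySize for T {}
impl<T: ?Sized> DynSize for T {}
impl<T> StaticSize for T {}

// A bound names the weakest level it needs (`?Sized` is still
// required today to opt out of the built-in default):
fn needs_runtime_size<T: DynSize + ?Sized>(_: &T) -> &'static str {
    "size computable at runtime"
}

fn main() {
    println!("{}", needs_runtime_size(&42u32));         // statically sized
    println!("{}", needs_runtime_size(&[1u32, 2][..])); // dynamically sized
}
```

Under the actual proposal, writing `T: DynSized` alone would suffice, since naming a weaker trait in the family would itself disable the `Sized` default.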

Credit where credit is due?

For the record, this design is not my idea, but I’m not sure where I saw it. I would appreciate a link so I can properly give credit.

Why do we have a default T: Sized bound in the first place?

It’s natural to wonder why we have this T: Sized default in the first place. The short version is that Rust would be very annoying to use without it. If the compiler doesn’t know the size of a value at compilation time, it cannot (at least, cannot easily) generate code to do a number of common things, such as store a value of type T on the stack or have structs with fields of type T. This means that a very large fraction of generic type parameters would wind up with T: Sized.

So why the ?Sized notation?

The ?Sized notation was the result of a lot of discussion. It satisfied a number of criteria.

? signals that the bound operates in reverse

The ? is meant to signal that a bound like ?Sized actually works in reverse from a normal bound. When you have T: Clone, you are saying “type T must implement Clone”. So you are narrowing the set of types that T could be: before, it could have been both types that implement Clone and those that do not. After, it can only be types that implement Clone. T: ?Sized does the reverse: before, it can only be types that implement Sized (like u32), but after, it can also be types that do not (like [u32] or dyn Debug). Hence the ?, which can be read as “maybe” — i.e., T is “maybe” Sized.
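This widening effect can be seen in stable Rust today; a small illustrative sketch (function names are mine):

```rust
use std::fmt::Debug;

// Default `T: Sized`: only statically sized types are accepted.
fn debug_sized<T: Debug>(t: &T) -> String {
    format!("{t:?}")
}

// Adding `?Sized` *widens* the set of allowed types: slices and
// trait objects now work too.
fn debug_maybe_sized<T: Debug + ?Sized>(t: &T) -> String {
    format!("{t:?}")
}

fn main() {
    let xs = [1u32, 2, 3];
    println!("{}", debug_sized(&xs));           // `[u32; 3]` is Sized
    println!("{}", debug_maybe_sized(&xs[..])); // `[u32]` is not
    // `debug_sized(&xs[..])` would not compile: `[u32]: Sized` fails.
}
```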

? can be extended to other default bounds

The ? notation also scales to other default traits. Although we’ve been reluctant to exercise this ability, we wanted to leave room to add a new default bound. This power will be needed if we ever adopt “must move” types¹ or add a bound like ?Leak to signal a value that cannot be leaked.

But ? doesn’t scale well to “differences in degree”

When we debated the ? notation, we thought a lot about extensibility to other orthogonal defaults (like ?Leak), but we didn’t consider extending a single dimension (like Sized) to multiple levels. There is no theoretical challenge. In principle we could say…

  • T means T: Sized + DynSized
  • T: ?Sized drops the Sized default, leaving T: DynSized
  • T: ?DynSized drops both, leaving any type T

…but I personally find that very confusing. To me, saying something “might be statically sized” does not signify that it is dynamically sized.

And ? looks “more magical” than it needs to

Despite knowing that T: ?Sized operates in reverse, I find that in practice it still feels very much like other bounds. Just like T: Debug gives the function the extra capability of generating debug info, T: ?Sized feels to me like it gives the function an extra capability: the ability to be used on unsized types. This logic is specious – these are different kinds of capabilities – but, as I said, it’s how I find myself thinking about it.

Moreover, even though I know that T: ?Sized “most properly” means “a type that may or may not be Sized”, I find I wind up thinking about it as “a type that is unsized”, just as I think about T: Debug as a “type that is Debug”. Why is that? Well, because ?Sized types may be unsized, I have to treat them as if they are unsized – i.e., refer to them only by pointer. So the fact that they might also be sized isn’t very relevant.

How would we use these new traits?

So if we adopted the “family of sized traits” proposal, how would we use it? Well, for starters, the size_of methods would no longer be defined as T and T: ?Sized

fn size_of<T>() -> usize {}
fn size_of_val<T: ?Sized>(t: &T) -> usize {}

… but instead as T and T: DynSized

fn size_of<T>() -> usize {}
fn size_of_val<T: DynSized>(t: &T) -> usize {}
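The runtime distinction the new bound would name is already observable through today’s std::mem API (which the proposal would merely re-bound); for example:

```rust
use std::mem::{size_of, size_of_val};

fn main() {
    // Statically sized: known at compile time.
    assert_eq!(size_of::<u32>(), 4);

    // Dynamically sized: `[u32]`'s size comes from the slice length
    // stored in the wide pointer's metadata.
    let xs: &[u32] = &[1, 2, 3];
    assert_eq!(size_of_val(xs), 12);

    // For `dyn Trait`, the size comes from the vtable.
    let d: &dyn std::fmt::Debug = &7u8;
    assert_eq!(size_of_val(d), 1);

    println!("ok");
}
```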

That said, most uses of ?Sized today do not need to compute the size of the value, and would be better translated to Unsized

impl<T: Unsized> Debug for &T {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) { .. }
}

Option: Defaults could also be disabled by supertraits?

As an interesting extension to today’s system, we could say that every type parameter T gets an implicit Sized bound unless either…

  1. There is an explicit weaker alternative (like T: DynSized or T: Unsized);
  2. Or some other bound T: Trait has an explicit supertrait DynSized or Unsized.

This would clarify that trait aliases can be used to disable the Sized default. For example, today, one might create a Value trait that is equivalent to Debug + Hash + Ord, roughly like this:

trait Value: Debug + Hash + Ord {
    // Note that `Self` is the *only* type parameter that does NOT get `Sized` by default
}

impl<T: ?Sized + Debug + Hash + Ord> Value for T {}

But what if, in your particular data structure, all values are boxed and hence can be unsized. Today, you have to repeat ?Sized everywhere:

struct Tree<V: ?Sized + Value> {
    value: Box<V>,
    children: Vec<Tree<V>>,
}

impl<V: ?Sized + Value> Tree<V> {  }

With this proposal, the explicit Unsized bound could be signaled on the trait:

trait Value: Debug + Hash + Ord + Unsized {
    // Note that `Self` is the *only* type parameter that does NOT get `Sized` by default
}

impl<T: Unsized + Debug + Hash + Ord> Value for T {}

which would mean that

struct Tree<V: Value> {  }

would imply V: Unsized.


Different names

The name of the Unsized trait in particular is a bit odd. It means “you can treat this type as unsized”, which is true of all types, but it sounds like the type is definitely unsized. I’m open to alternative names, but I haven’t come up with one I like yet. Here are some alternatives and the problems with them I see:

  • Unsizeable — doesn’t meet our typical name conventions, has overlap with the Unsize trait
  • NoSize, UnknownSize — same general problem as Unsize
  • ByPointer — in some ways, I kind of like this, because it says “you can work with this type by pointer”, which is clearly true of all types. But it doesn’t align well with the existing Sized trait — what would we call that, ByValue? And it seems too tied to today’s limitations: there are, after all, ways that we can make DynSized types work by value, at least in some places.
  • MaybeSized — just seems awkward, and should it be MaybeDynSized?

All told, I think Unsized is the best name. It’s a bit wrong, but I think you can understand it, and to me it fits the intuition I have, which is that I mark type parameters as Unsized and then I tend to just think of them as being unsized (since I have to).

Some sigil

Under this proposal, the DynSized and Unsized traits are “magic” in that explicitly declaring them as a bound has the impact of disabling a default T: Sized bound. We could signify that in their names by having their name be prefixed with some sort of sigil. I’m not really sure what that sigil would be — T: %Unsized? T: ?Unsized? It all seems unnecessary.

Drop the implicit bound altogether

The purist in me is tempted to question whether we need the default bound. Maybe in Rust 2027 we should try to drop it altogether. Then people could write

fn size_of<T: Sized>() -> usize {}
fn size_of_val<T: DynSized>(t: &T) -> usize {}


impl<T> Debug for &T {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) { .. }
}

Of course, it would also mean a lot of Sized bounds cropping up in surprising places. Beyond random functions, consider that every associated type today has a default Sized bound, so you would need

trait Iterator {
    type Item: Sized;
}

Overall, I doubt this idea is worth it. Not surprising: it was deemed too annoying before, and now it has the added problem of being hugely disruptive.
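As an aside, the implicit bound on associated types can already be relaxed today with the same ?Sized notation; a small sketch (the trait and names are illustrative):

```rust
trait Named {
    // `type Name;` alone would get an implicit `Sized` bound;
    // `?Sized` relaxes it so unsized types like `str` can be chosen.
    type Name: ?Sized;
    fn name(&self) -> &Self::Name;
}

struct Person;

impl Named for Person {
    type Name = str; // allowed only because of `?Sized` above
    fn name(&self) -> &str {
        "ferris"
    }
}

fn main() {
    println!("{}", Person.name());
}
```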


I’ve covered a design to move away from ?Sized bounds and towards a family of specialized traits. There are various “pros and cons” to this proposal, but one aspect in particular feels common to this question and many others: when do you make two “similar but different” concepts feel very different — e.g., via special syntax like T: ?Sized — and when do you make them feel very similar — e.g., via the idea of “special traits”, where a bound like T: Unsized has extra meaning (disabling defaults).

There is a definite trade-off here. Distinct syntax helps avoid potential confusion, but it forces people to recognize that something special is going on, even when that may not be relevant or important to them. This can deter folks early on, when they are most “deter-able”. I think it can also contribute to a general sense of “big-ness” that makes it feel like understanding the entire language is harder.

Over time, I’ve started to believe that it’s generally better to make things feel similar, letting people push off the time at which they have to learn a new concept. In this case, this lessens my fears around the idea that Unsized and DynSized traits would be confusing because they behave differently than other traits. In this particular case, I also feel that ?Sized doesn’t “scale well” to default bounds where you want to pick from one of many options, so it’s kind of the worst of both worlds – distinct syntax that shouts at you but which also fails to add clarity.

Ultimately, though, I’m not wedded to this idea, but I am interested in kicking off a discussion of how we can unblock extern types. I think by now we’ve no doubt covered the space pretty well and we should pick a direction and go for it (or else just give up on extern types).

  1. I still think “must move” types are a good idea — but that’s a topic for another post. ↩︎

Hacks.Mozilla.Org: Porting a cross-platform GUI application to Rust

Firefox’s crash reporter is hopefully not something that most users experience often. However, it is still a very important component of Firefox, as it is integral in providing insight into the most visible bugs: those which crash the main process. These bugs offer the worst user experience (since the entire application must close), so fixing them is a very high priority. Other types of crashes, such as content (tab) crashes, can be handled by the browser and reported gracefully, sometimes without the user being aware that an issue occurred at all. But when the main browser process comes to a halt, we need another separate application to gather information about the crash and interact with the user.

This post details the approach we have taken to rewrite the crash reporter in Rust. We discuss the reasoning behind this rewrite, what makes the crash reporter a unique application, the architecture we used, and some details of the implementation.

Why Rewrite?

Even though it is important to properly handle main process crashes, the crash reporter hasn’t received significant development in a while (aside from development to ensure that crash reports and telemetry continue to reliably be delivered)! It has long been stuck in a local maximum of “good enough” and “scary to maintain”: it features 3 individual GUI implementations (for Windows, GTK+ for Linux, and macOS), glue code abstracting a few things (mostly in C++, and Objective-C for macOS), a binary blob produced by obsoleted Apple development tools, and no test suite. Because of this, there is a backlog of features and improvements which haven’t been acted on.

We’ve recently had a number of successful pushes to decrease crash rates (including both big leaps and many small bug fixes), and the crash reporter has functioned well enough for our needs during this time. However, we’ve reached an inflection point where improving the crash reporter would provide valuable insight to enable us to decrease the crash rate even further. For the reasons previously mentioned, improving the current codebase is difficult and error-prone, so we deemed it appropriate to rewrite the application so we can more easily act on the feature backlog and improve crash reports.

Like many components of Firefox, we decided to use Rust for this rewrite to produce a more reliable and maintainable program. Besides the often-touted memory safety built into Rust, its type system and standard library make reasoning about code, handling errors, and developing cross-platform applications far more robust and comprehensive.

Crash Reporting is an Edge Case

There are a number of features of the crash reporter which make it quite unique, especially compared to other components which have been ported to Rust. For one thing, it is a standalone, individual program; basically no other components of Firefox are used in this way. Firefox itself launches many processes as a means of sandboxing and insulating against crashes, however these processes all talk to one another and have access to the same code base.

The crash reporter has a very unique requirement: it must use as little as possible of the Firefox code base, ideally none! We don’t want it to rely on code which may be buggy and cause the reporter itself to crash. Using a completely independent implementation ensures that when a main process crash does occur, the cause of that crash won’t affect the reporter’s functionality as well.

The crash reporter also necessarily has a GUI. This alone may not separate it from other Firefox components, but we can’t leverage any of the cross-platform rendering goodness that Firefox provides! So we need to implement a cross-platform GUI independent of Firefox as well. You might think we could reach for an existing cross-platform GUI crate, however we have a few reasons not to do so.

  • We want to minimize the use of external code: to improve crash reporter reliability (which is paramount), we want it to be as simple and auditable as possible.
  • Firefox vendors all dependencies in-tree, so we are hesitant to bring in large dependencies (GUI libraries are likely pretty sizable).
  • There are only a few third-party crates that provide a native OS look and feel (or actually use native GUI APIs): it’s desirable for the crash reporter to have a native feel to be familiar to users and take advantage of accessibility features.

So all of this is to say that third-party cross-platform GUI libraries aren’t a favorable option.

These requirements significantly narrow the approach that can be used.

Building a GUI View Abstraction

In order to make the crash reporter more maintainable (and make it easier to add new features in the future), we want platform-specific code to be as minimal and generic as possible. We can achieve this by using a simple UI model that can be converted into native GUI code for each platform. Each UI implementation will need to provide two methods (over arbitrary platform-specific &self data):

/// Run a UI loop, displaying all windows of the application until it terminates.
fn run_loop(&self, app: model::Application);

/// Invoke a function asynchronously on the UI loop thread.
fn invoke(&self, f: model::InvokeFn);

The run_loop function is pretty self-explanatory: the UI implementation takes an Application model (which we’ll discuss shortly) and runs the application, blocking until the application is complete. Conveniently, our target platforms generally have similar assumptions around threading: the UI runs in a single thread and typically runs an event loop which blocks on new events until an event signaling the end of the application is received.

There are some cases where we’ll need to run a function on the UI thread asynchronously (like displaying a window, updating a text field, etc). Since run_loop blocks, we need the invoke method to define how to do this. This threading model will make it easy to use the platform GUI frameworks: everything calling native functions will occur on a single thread (the main thread in fact) for the duration of the program.
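As a rough sketch of this interface, the two methods can be expressed as a trait, together with a trivial queue-draining implementation standing in for a native event loop. The `Application`, `InvokeFn`, and `TestUi` names here are simplified stand-ins, not the real `model` types from the post:

```rust
use std::cell::RefCell;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

pub struct Application; // stand-in for model::Application
pub type InvokeFn = Box<dyn FnOnce() + Send + 'static>;

pub trait UiImpl {
    /// Run the UI loop, blocking until the application terminates.
    fn run_loop(&self, app: Application);
    /// Enqueue `f` to run asynchronously on the UI loop thread.
    fn invoke(&self, f: InvokeFn);
}

/// A trivial implementation: queue invocations, then drain them in
/// run_loop, much as a real event loop would between native events.
#[derive(Default)]
pub struct TestUi {
    queue: RefCell<Vec<InvokeFn>>,
}

impl UiImpl for TestUi {
    fn run_loop(&self, _app: Application) {
        loop {
            // Take one queued function at a time so the borrow is released
            // before the function runs.
            let next = self.queue.borrow_mut().pop();
            match next {
                Some(f) => f(),
                None => break,
            }
        }
    }
    fn invoke(&self, f: InvokeFn) {
        self.queue.borrow_mut().push(f);
    }
}

pub fn demo() -> usize {
    let ui = TestUi::default();
    let ran = Arc::new(AtomicUsize::new(0));
    let r = ran.clone();
    ui.invoke(Box::new(move || {
        r.fetch_add(1, Ordering::SeqCst);
    }));
    ui.run_loop(Application);
    ran.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(demo(), 1);
}
```

A platform backend would implement the same two methods over its native event loop instead of a queue.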

This is a good time to be a bit more specific about exactly what each UI implementation will look like. We’ll discuss pain points for each later on. There are 4 UI implementations:

  • A Windows implementation using the Win32 API.
  • A macOS implementation using Cocoa (AppKit and Foundation frameworks).
  • A Linux implementation using GTK+ 3 (the “+” has since been dropped in GTK 4, so henceforth I’ll refer to it as “GTK”). Linux doesn’t provide its own GUI primitives, and we already ship GTK with Firefox on Linux to make a modern-feeling GUI, so we can use it for the crash reporter, too. Note that some platforms that aren’t directly supported by Mozilla (like BSDs) use the GTK implementation as well.
  • A testing implementation which will allow tests to hook into a virtual UI and poke things (to simulate interactions and read state).

One last detail before we dive in: the crash reporter (at least right now) has a pretty simple GUI. Because of this, an explicit non-goal of the development was to create a separate Rust GUI crate. We wanted to create just enough of an abstraction to cover the cases we needed in the crash reporter. If we need more controls in the future, we can add them to the abstraction, but we avoided spending extra cycles to fill out every GUI use case.

Likewise, we tried to avoid unnecessary development by allowing some tolerance for hacks and built-in edge cases. For example, our model defines a Button as an element which contains an arbitrary element, but actually supporting that with Win32 or AppKit would have required a lot of custom code, so we special case on a Button containing a Label (which is all we need right now, and an easy primitive available to us). I’m happy to say there aren’t really many special cases like that at all, but we are comfortable with the few that were needed.

The UI Model

Our model is a declarative structuring of concepts mostly present in GTK. Since GTK is a mature library with proven high-level UI concepts, this made it appropriate for our abstraction and made the GTK implementation pretty simple. For instance, the simplest way that GTK does layout (using container GUI elements and per-element margins/alignments) is good enough for our GUI, so we use similar definitions in the model. Notably, this “simple” layout definition is actually somewhat high-level and complicates the macOS and Windows implementations a bit (but this tradeoff is worth the ease of creating UI models).

The top-level type of our UI model is Application. This is pretty simple: we define an Application as a set of top-level Windows (though our application only has one) and whether the current locale is right-to-left. We inspect Firefox resources to use the same locale that Firefox would, so we don’t rely on the native GUI’s locale settings.

As you might expect, each Window contains a single root element. The rest of the model is made up of a handful of typical container and primitive GUI elements:

A class diagram showing the inheritance structure. An Application contains one or more Windows. A Window contains one Element. An Element is subclassed to Checkbox, Label, Progress, TextBox, Button, Scroll, HBox, and VBox types.

The crash reporter only needs 8 types of GUI elements! And really, Progress is used as a spinner rather than indicating any real progress as of right now, so it’s not strictly necessary (but nice to show).

Rust does not explicitly support the object-oriented concept of inheritance, so you might be wondering how each GUI element “extends” Element. The relationship represented in the picture is somewhat abstract; the implemented Element looks like:

pub struct Element {
    pub style: ElementStyle,
    pub element_type: ElementType,
}

where ElementStyle contains all the common properties of elements (alignment, size, margin, visibility, and enabled state), and ElementType is an enum containing each of the specific GUI elements as variants.
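To illustrate, the enum might be sketched as follows. The variant payloads here are assumptions for illustration only; the real variants carry more fields:

```rust
// Stand-in for the real style type (alignment, size, margin, visibility,
// enabled state).
pub struct ElementStyle;

pub struct Element {
    pub style: ElementStyle,
    pub element_type: ElementType,
}

// One variant per GUI element type; payloads are illustrative assumptions.
pub enum ElementType {
    Checkbox { checked: bool },
    Label { text: String },
    Progress,
    TextBox { content: String },
    Button { content: Box<Element> },
    Scroll { child: Box<Element> },
    HBox { children: Vec<Element> },
    VBox { children: Vec<Element> },
}

pub fn demo() -> String {
    // The special case mentioned earlier: a Button containing a Label.
    let label = Element {
        style: ElementStyle,
        element_type: ElementType::Label { text: "Ok".into() },
    };
    let button = Element {
        style: ElementStyle,
        element_type: ElementType::Button { content: Box::new(label) },
    };
    match button.element_type {
        ElementType::Button { content } => match content.element_type {
            ElementType::Label { text } => text,
            _ => String::new(),
        },
        _ => String::new(),
    }
}

fn main() {
    assert_eq!(demo(), "Ok");
}
```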

Building the Model

The model elements are all intended to be consumed by the UI implementations; as such, almost all of the fields have public visibility. However, as a means of having a separate interface for building elements, we define an ElementBuilder<T> type. This type has methods that maintain assertions and provide convenience setters. For instance, many methods accept parameters that are impl Into<MemberType>, some methods like margin() set multiple values (but you can be more specific with margin_top()), etc.

There is a general impl<T> ElementBuilder<T> which provides setters for the various ElementStyle properties, and then each specific element type can also provide their own impl ElementBuilder<SpecificElement> with additional properties unique to the element type.
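A toy sketch of this builder layout follows. The names mirror the post where possible; the by-value chaining and the specific fields are assumptions:

```rust
#[derive(Clone, Copy, Default, Debug, PartialEq)]
pub struct Margin {
    pub top: u32,
    pub right: u32,
    pub bottom: u32,
    pub left: u32,
}

#[derive(Default)]
pub struct ElementStyle {
    pub margin: Margin,
}

#[derive(Default)]
pub struct Label {
    pub text: String,
}

pub struct ElementBuilder<T> {
    pub style: ElementStyle,
    pub element: T,
}

// Setters for common ElementStyle properties, available for every element type.
impl<T: Default> ElementBuilder<T> {
    pub fn new() -> Self {
        ElementBuilder { style: ElementStyle::default(), element: T::default() }
    }
    /// Convenience setter: sets all four margins at once.
    pub fn margin(mut self, v: u32) -> Self {
        self.style.margin = Margin { top: v, right: v, bottom: v, left: v };
        self
    }
    /// A more specific setter for a single side.
    pub fn margin_top(mut self, v: u32) -> Self {
        self.style.margin.top = v;
        self
    }
}

// Element-specific setters live on the concrete instantiation.
impl ElementBuilder<Label> {
    pub fn text(mut self, t: impl Into<String>) -> Self {
        self.element.text = t.into();
        self
    }
}

pub fn demo() -> ElementBuilder<Label> {
    ElementBuilder::<Label>::new().margin(10).margin_top(20).text("Ok")
}

fn main() {
    let b = demo();
    assert_eq!(b.style.margin.top, 20);
    assert_eq!(b.style.margin.left, 10);
    assert_eq!(b.element.text, "Ok");
}
```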

We combine ElementBuilder<T> with the final piece of the puzzle: a ui! macro. This macro allows us to write our UI in a declarative manner. For example, it allows us to write:

let details_window = ui! {
    Window title("Crash Details") visible(show_details) modal(true) hsize(600) vsize(400)
        halign(Alignment::Fill) valign(Alignment::Fill)
    {
        VBox margin(10) spacing(10) halign(Alignment::Fill) valign(Alignment::Fill) {
            Scroll halign(Alignment::Fill) valign(Alignment::Fill) {
                TextBox content(details) halign(Alignment::Fill) valign(Alignment::Fill)
            }
            Button halign(Alignment::End) on_click(move || *show_details.borrow_mut() = false) {
                Label text("Ok")
            }
        }
    }
};

The implementation of ui! is fairly simple. The first identifier provides the element type and an ElementBuilder<T> is created. After that, the remaining method-call-like syntax forms are called on the builder (which is mutable).

Optionally, a final set of curly braces indicate that the element has children. In that case, the macro is recursively called to create them, and add_child is called on the builder with the result (so we just need to make sure a builder has an add_child method). Ultimately the syntax transformation is pretty simple, but I believe that this macro is a little bit more than just syntax sugar: it makes reading and editing the UI a fair bit clearer, since the hierarchy of elements is represented in the syntax. Unfortunately a downside is that there’s no way to support automatic formatting of such macro DSLs, so developers will need to maintain sane formatting.
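The transformation can be illustrated with a heavily simplified toy macro. This is not the real ui! macro: it builds a generic Node tree rather than ElementBuilders, supports only one level of nesting, and (unlike the real syntax) separates sibling children with commas. It does show the identifier-then-setters-then-children shape:

```rust
#[derive(Debug, Default, PartialEq)]
pub struct Node {
    pub name: &'static str,
    pub props: Vec<(&'static str, String)>,
    pub children: Vec<Node>,
}

impl Node {
    pub fn new(name: &'static str) -> Self {
        Node { name, ..Default::default() }
    }
    pub fn set(&mut self, key: &'static str, val: impl ToString) {
        self.props.push((key, val.to_string()));
    }
    pub fn add_child(&mut self, child: Node) {
        self.children.push(child);
    }
}

macro_rules! toy_ui {
    // Element with children: trailing braces recurse to build each child.
    ($name:ident $($prop:ident ($val:expr))*
     { $($child:ident $($cprop:ident ($cval:expr))*),* }) => {{
        let mut node = Node::new(stringify!($name));
        $( node.set(stringify!($prop), $val); )*
        $( node.add_child(toy_ui!($child $($cprop($cval))*)); )*
        node
    }};
    // Leaf element: just the "builder" and its setter calls.
    ($name:ident $($prop:ident ($val:expr))*) => {{
        let mut node = Node::new(stringify!($name));
        $( node.set(stringify!($prop), $val); )*
        node
    }};
}

pub fn demo() -> Node {
    toy_ui! {
        VBox margin(10) {
            Label text("Ok"),
            Button enabled(true)
        }
    }
}

fn main() {
    let tree = demo();
    assert_eq!(tree.name, "VBox");
    assert_eq!(tree.props, vec![("margin", "10".to_string())]);
    assert_eq!(tree.children.len(), 2);
    assert_eq!(tree.children[0].name, "Label");
    assert_eq!(tree.children[0].props[0], ("text", "Ok".to_string()));
}
```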

So now we have a model defined and a declarative way of building it. But we haven’t discussed any dynamic runtime behaviors here. In the above example, we see an on_click handler being set on a Button. We also see things like the Window’s visible property being set to a show_details value which is changed when on_click is pressed. We hook into this declarative UI to change or react to events at runtime using a set of simple data binding primitives with which UI implementations can interact.

Many GUI frameworks nowadays (both for Rust and other languages) have been built with the “diffing element trees” architecture (think React), where your code is (at least mostly) functional and side-effect-free and produces the GUI view as a function of the current state. This approach has its tradeoffs: for instance, it makes complicated, stateful alterations of the layout very simple to write, understand, and maintain, and encourages a clean separation of model and view! However since we aren’t writing a framework, and our application is and will remain fairly simple, the benefits of such an architecture were not worth the additional development burden. Our implementation is more similar to the MVVM architecture:

  • the model is, well, the model discussed here;
  • the views are the various UI implementations; and
  • the viewmodel is (loosely, if you squint) the collection of data bindings.

Data Binding

There are a few types which we use to declare dynamic (runtime-changeable) values. In our UI, we needed to support a few different behaviors:

  • triggering events, i.e., what happens when a button is clicked,
  • synchronized values which will mirror and notify of changes to all clones, and
  • on-demand values which can be queried for the current value.

On-demand values are used to get textbox contents rather than using a synchronized value, in an effort to avoid implementing debouncing in each UI. It may not be terribly difficult to do so, but it also wasn’t difficult to support the on-demand implementation.

As a means of convenience, we created a Property type which encompasses the value-oriented fields as well. A Property<T> can be set to either a static value (T), a synchronized value (Synchronized<T>), or an on-demand value (OnDemand<T>). It supports an impl From for each of these, so that builder methods can look like fn my_method(&mut self, value: impl Into<Property<T>>) allowing any supported value to be passed in a UI declaration.
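The pieces just described might fit together as sketched below. The Synchronized type is reduced to a stub, OnDemand is omitted, and the `visible` function is a hypothetical stand-in for a builder method:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Stub: clones share the same Rc'd cell.
#[derive(Clone)]
pub struct Synchronized<T>(pub Rc<RefCell<T>>);

pub enum Property<T> {
    Static(T),
    Synchronized(Synchronized<T>),
    // OnDemand(OnDemand<T>) omitted from this sketch
}

// An impl From for each supported kind of value. (This coexists with the
// reflexive core impl, just like std's `impl<T> From<T> for Option<T>`.)
impl<T> From<T> for Property<T> {
    fn from(v: T) -> Self {
        Property::Static(v)
    }
}
impl<T> From<Synchronized<T>> for Property<T> {
    fn from(v: Synchronized<T>) -> Self {
        Property::Synchronized(v)
    }
}

// A builder method can then accept any supported value.
pub fn visible(value: impl Into<Property<bool>>) -> Property<bool> {
    value.into()
}

pub fn demo() -> (bool, bool) {
    let fixed = visible(true);
    let synced = visible(Synchronized(Rc::new(RefCell::new(false))));
    (
        matches!(fixed, Property::Static(true)),
        matches!(synced, Property::Synchronized(_)),
    )
}

fn main() {
    assert_eq!(demo(), (true, true));
}
```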

We won’t discuss the implementation in depth (it’s what you’d expect), but it’s worth noting that these are all Clone to easily share the data bindings: they use Rc (we don’t need thread safety) and RefCell as necessary to access callbacks.

In the example from the last section, show_details is a Synchronized<bool> value. When it changes, the UI implementations change the associated window visibility. The Button on_click callback sets the synchronized value to false, hiding the window (note that the details window used in this example is never closed, it is just shown and hidden).

In a former iteration, data binding types had a lifetime parameter which specified the lifetime for which event callbacks were valid. While we were able to make this work, it greatly complicated the code, especially because there’s no way to communicate the correct covariance of the lifetime to the compiler, so there was additional unsafe code transmuting lifetimes (though it was contained as an implementation detail). These lifetimes were also infectious, requiring some of the complicated semantics regarding their safety to be propagated into the model types which stored Property fields.

Much of this was to avoid cloning values into the callbacks, but changing these types to all be Clone and store 'static-lifetime callbacks was worth it, making the code far more maintainable.

Threading and Thread Safety

The careful reader might remember that we discussed how our threading model involves interacting with the UI implementations only on the main thread. This includes updating the data bindings, since the UI implementations might have registered callbacks on them! While we could run everything in the main thread, it’s generally a much better experience to do as much off of the UI thread as possible, even if we don’t do much that’s blocking (though we will be blocking when we send crash reports). We want our business logic to default to being off of the main thread so that the UI doesn’t ever freeze. We can guarantee this with some careful design.

The simplest way to guarantee this behavior is to put all of the business logic in one (non-Clone, non-Sync) type (let’s call it Logic) and construct the UI and UI state (like Property values) in another type (let’s call it UI). We can then move the Logic value into a separate thread to guarantee that UI can’t interact with Logic directly, and vice versa. Of course we do need to communicate sometimes! But we want to ensure that this communication will always be delegated to the thread which owns the values (rather than the values directly interacting with each other).

We can accomplish this by creating an enqueuing function for each type and storing that in the opposite type. Such a function will be passed boxed functions to run on the owning thread that get a reference to the owned type (e.g., Box<dyn FnOnce(&T) + Send + 'static>). This is simple to create: for the UI thread, it is just the UI implementation’s invoke method which we briefly discussed previously. The Logic thread does nothing but run a loop which will get these functions and run them on the owned value (we just enqueue and pass them using an mpsc::channel). Now each type can asynchronously call methods on the other with the guarantee that they’ll be run on the correct thread.
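The scheme above can be sketched with assumed types: the Logic value is moved to its own thread, which does nothing but run boxed functions received over an mpsc channel, while the other side holds only the enqueuing half:

```rust
use std::sync::mpsc;
use std::thread;

struct Logic {
    reports_sent: u32,
}

// The post describes Box<dyn FnOnce(&T) + Send + 'static>; this toy uses
// &mut so it can mutate state without interior mutability.
type LogicFn = Box<dyn FnOnce(&mut Logic) + Send + 'static>;

pub fn demo() -> u32 {
    let (enqueue, rx) = mpsc::channel::<LogicFn>();

    // The Logic thread: loop over enqueued functions, running each on the
    // owned value.
    let logic_thread = thread::spawn(move || {
        let mut logic = Logic { reports_sent: 0 };
        for f in rx {
            f(&mut logic);
        }
        logic.reports_sent
    });

    // The UI side holds only the sender, so it can never touch Logic
    // directly; it asynchronously delegates work instead.
    enqueue.send(Box::new(|l: &mut Logic| l.reports_sent += 1)).unwrap();
    enqueue.send(Box::new(|l: &mut Logic| l.reports_sent += 1)).unwrap();
    drop(enqueue); // closing the channel ends the logic thread's loop

    logic_thread.join().unwrap()
}

fn main() {
    assert_eq!(demo(), 2);
}
```

Delegation in the other direction works the same way, except the enqueuing function is the UI implementation's invoke method rather than a channel sender.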

In a former iteration, a more complicated scheme was used with thread-local storage and a central type which was responsible for both creating threads and delegating the functions. But with such a basic use case as two threads delegating between each other, we were able to distill this to the essential aspects needed, greatly simplifying the code.

Localization

One nice benefit of this rewrite is that we could bring the localization of the crash reporter up to speed with our modern tooling. In almost every other part of Firefox, we use fluent to handle localization. Using fluent in the crash reporter makes the experience of localizers more uniform and predictable; they do not need to understand more than one localization system (the crash reporter was one of the last holdouts of the old system). It was very easy to use in the new code, with just a bit of extra code to extract the localization files from the Firefox installation when the crash reporter is run. In the worst case scenario where we can’t find or access these files, we have the en-US definitions directly bundled in the crash reporter binary.

The UI Implementations

We won’t go into much detail about the implementations, but it’s worth talking about each a bit.

Linux (GTK)

The GTK implementation is probably the most straightforward and succinct. We use bindgen to generate Rust bindings to the GTK functions we need (avoiding vendoring any external crates). Then we simply call all of the corresponding GTK functions to set up the GTK widgets as described in the model (remember, the model was made to mirror some of the GTK concepts).

Since GTK is somewhat modern and meant to be written by humans (not automated tools like some of the other platforms), there weren’t really any pain points or unusual behaviors that needed to be addressed.

We have a handful of nice features to improve memory safety and correctness. A set of traits makes it easy to attach owned data to GObjects (ensuring data remains valid and is properly dropped when the GObject is destroyed), and a few macros set up the glue code between GTK signals and our data binding types.

Windows (Win32)

The Windows implementation may have been the most difficult to write, since Win32 GUIs are very rarely written nowadays and the API shows its age. We use the windows-sys crate to access bindings to the API (which was already vendored in the codebase for many other Windows API uses). This crate is generated directly from Windows function metadata (by Microsoft), but otherwise its bindings aren’t terribly different from what bindgen might have produced (though they are likely a bit more accurate).

There were a number of hurdles to overcome. For one thing, the Win32 API doesn’t provide any layout primitives, so the high-level layout concepts we use (which allow graceful resize/repositioning) had to be implemented manually. There’s also quite a few extra API calls just to get to a GUI that looks somewhat decent (correct window colors, font smoothing, high DPI handling, etc). Even the default font ends up being a terrible looking bitmapped font rather than the more modern system default; we needed to manually retrieve the system default and set it as the font to use, which was a bit surprising!

We have a set of traits to facilitate creating custom window classes and managing associated window data of class instances. We also have wrapper types to properly manage the lifetimes of handles and perform type conversions (mainly String to null-terminated wide strings and back) as an extra layer of safety around the API.

macOS (Cocoa/AppKit)

The macOS implementation had its tricky parts, as overwhelmingly macOS GUIs are written with Xcode and there are a lot of automated and generated portions (such as nibs). We again use bindgen to generate Rust bindings, this time for the Objective-C APIs in macOS framework headers.

Unlike Windows and GTK, you don’t get keyboard shortcuts like Cmd-C, Cmd-Q, etc, for free if creating a GUI without e.g. Xcode (which generates them for you as part of a new project template). To have these typical shortcuts that users expect, we needed to manually implement the application main menu (which is what governs keyboard shortcuts). We also had to handle runtime setup like creating Objective-C autorelease pools, bringing the window and application (which are separate concepts) to the foreground, etc. Even implementing invoke to call a function on the main thread had its nuances, since modal windows use a nested event loop which would not call queued functions under the default NSRunLoop mode.

We wrote some simple helper types and a macro to make it easy to implement, register, and create Objective-C classes from Rust code. We used this for creating delegate classes as well as subclassing some controls for the implementation (like NSButton); it made it easy to safely manage the memory of Rust values underlying the classes and correctly register class method selectors.

The Test UI

We’ll discuss testing in the next section. Our testing UI is very simple. It doesn’t create a GUI, but allows us to interact directly with the model. The ui! macro supports an extra piece of syntax when tests are enabled to optionally set a string identifier for each element. We use these strings in unit tests to access and interact with the UI. The data binding types also support a few additional methods in tests to easily manipulate values. This UI allows us to simulate button presses, field entry, etc., ensuring that other UI state changes as expected and that the right system side effects are triggered.

Mocking and Testing

An important goal of our rewrite was to add tests to the crash reporter; our old code was sorely lacking them (in part because unit testing GUIs is notoriously difficult).

Mocking Everything

In the new code, we can mock the crash reporter regardless of whether we are running tests or not (though it is always mocked for tests). This is important because mocking allows us to (manually) run the GUI in various states to check that the GUI implementations are correct and render well. Our mocking not only mocks the inputs to the crash reporter (environment variables, command line parameters, etc), it also mocks all side-effectful std functions.

We accomplish this by having a std module in the crate, and using crate::std throughout the rest of the code. When mocking is disabled, crate::std is simply the same as ::std. But when it is enabled, a bunch of functions that we have written are used instead. These mock the filesystem, environment, launching external commands, and other side effects. Importantly, only the minimal amount to mock the existing functions is implemented, so that if e.g. some new functions from std::fs, std::net, etc. are used, the crate will fail to compile with mocking enabled (so that we don’t miss any side effects). This might sound like a lot of effort, but you might be surprised at how little of std really needed to be mocked, and most implementations were pretty straightforward.

Now that we have our code using different mocked functions, we need to have a way of injecting the desired mock data (both in tests and in our normal mocked operation). For example, we have the ability to return some data when a File is read, but we need to be able to set that data differently for tests. Without going into too much detail, we accomplish this using a thread-local store of mock data. This way, we don’t need to change any code to accommodate the mock data; we only need to make changes where we set and retrieve it. The programming language enthusiasts out there may recognize this as a form of dynamic scoping. The implementation allows our mock data to be set with code like

    .run(|| crash_reporter_main())

in tests, and

pub fn current_exe() -> std::io::Result<PathBuf> {
    Ok(MockCurrentExe.get(|r| r.clone()))
}

in our crate::std::env implementation.
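A toy sketch of that thread-local mock-data store (the dynamic scoping mentioned above) might look as follows. The real implementation is generic over keys; this hard-codes one value, and the `with` helper is an assumed stand-in for the builder's run method:

```rust
use std::cell::RefCell;
use std::path::PathBuf;

thread_local! {
    // The thread-local store for one piece of mock data.
    static CURRENT_EXE_DATA: RefCell<Option<PathBuf>> = RefCell::new(None);
}

pub struct MockCurrentExe;

impl MockCurrentExe {
    /// Set the mock value for the duration of `f`, then clear it
    /// (dynamic scoping).
    pub fn with<R>(&self, value: PathBuf, f: impl FnOnce() -> R) -> R {
        CURRENT_EXE_DATA.with(|c| *c.borrow_mut() = Some(value));
        let result = f();
        CURRENT_EXE_DATA.with(|c| *c.borrow_mut() = None);
        result
    }

    /// Read the current mock value (panicking if none is set).
    pub fn get<R>(&self, f: impl FnOnce(&PathBuf) -> R) -> R {
        CURRENT_EXE_DATA.with(|c| f(c.borrow().as_ref().expect("mock data not set")))
    }
}

// The crate::std::env::current_exe implementation from the post.
pub fn current_exe() -> std::io::Result<PathBuf> {
    Ok(MockCurrentExe.get(|r| r.clone()))
}

pub fn demo() -> PathBuf {
    // "/mock/crashreporter" is an arbitrary path for this demo.
    MockCurrentExe.with(PathBuf::from("/mock/crashreporter"), || {
        current_exe().unwrap()
    })
}

fn main() {
    assert_eq!(demo(), PathBuf::from("/mock/crashreporter"));
}
```

Because nothing in current_exe mentions the mock store's setter, test code can inject different data without any changes to the code under test.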

Testing

With our mocking setup and test UI, we are able to extensively test the behavior of the crash reporter. The “last mile” of this testing which we can’t automate easily is whether each UI implementation faithfully represents the UI model. We manually test this with a mocked GUI for each platform.

Besides that, we are able to automatically test how arbitrary UI interactions cause the crash reporter to affect its own UI state and the environment (checking which programs are invoked and network connections are made, what happens if they fail, succeed, or timeout, etc). We also set up a mock filesystem and add assertions in various scenarios over the precise resulting filesystem state once the crash reporter completes. This greatly increases our confidence in the current behaviors and ensures that future changes will not alter them, which is of the utmost importance for such an essential component of our crash reporting pipeline.

The End Product

Of course we can’t get away with writing all of this without a picture of the crash reporter! This is what it looks like on Linux using GTK. The other GUI implementations look the same but styled with a native look and feel.

The crash reporter dialog on Linux.

Note that, for now, we wanted to keep it looking exactly the same as it previously did. So if you are unfortunate enough to see it, it shouldn’t appear as if anything has changed!

With a new, cleaned up crash reporter, we can finally unblock a number of feature requests and bug reports.

We are excited to iterate and improve further on crash reporter functionality. But ultimately it’d be wonderful if you never see or use it, and we are constantly working toward that goal!

The post Porting a cross-platform GUI application to Rust appeared first on Mozilla Hacks - the Web developer blog.

Firefox NightlyWall to Wall Improvements – These Weeks in Firefox: Issue 159


  • The team is in the early stages of adding wallpaper support! This is still very preliminary, but you can test what they’ve currently landed on Nightly:
    • Set browser.newtabpage.activity-stream.newtabWallpapers.enabled to true in about:config
    • Click on the “gear” icon in the top-right of the new tab page
    • Choose a wallpaper! Note that you get different options depending on whether you’re using a light or dark theme.
Firefox's New Tab page with a beautiful image of the aurora borealis set as the background wallpaper

Set a new look for new tabs!

Friends of the Firefox team


  • Shoutout to Yi Xiong Wong for submitting 16 patches to refactor a bunch of browser.js code into a separate file (bug)

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Camille
  • Magnus Melin [:mkmelin]
  • Meera Murthy

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions:
    • The options_page manifest property is supported as an alias of options_ui.page – Bug 1816960
    • A new webRequestAuthProvider permission allows extensions to register webRequest.onAuthRequired blocking listeners (in addition to the webRequestBlocking permission, which is deprecated in Chrome but still supported by Firefox) – Bug 1820569
    • commands.onCommand listeners now receive details of the currently active tab – Bug 1843866
    • WebExtensions with a granted active tab permission can now call tabs.captureVisibleTab API method without any additional host permissions – Bug 1784920
    • MessageSender details received by runtime.onMessage/runtime.onConnect listeners include a new origin property – Bug 1787379

Developer Tools

  • Artem Manushenkov added a setting to disable the split console (#1731635)
  • Yury added support for Wasm exception handling proposal in the Debugger (#1885589)
  • Emilio fixed a rendering issue that could happen after exiting Responsive Design Mode (#1888242)
  • Alex migrated the last DevTools JSMs to ESMs (#1789981, #1827382, #1888171)
  • Nicolas improved performance of the Inspector when modifying a single rule, in a stylesheet with a lot of rules (#1888079, #1888081)
  • Nicolas improved the Rules view by showing the color picker button on color functions using CSS variables in their definition (#1718894)
  • Bomsy fixed a crash in the netmonitor (#1884571)
  • Julian reverted the location of DevTools screenshots on OSX to match where Firefox screenshots are saved (#1845037)
  • Nicolas added @property rules (enabled on Nightly by default) in the Style Editor sidebar (#1886392)
WebDriver BiDi
  • New contributor: :gravyant improved the error message when the command is used without a proper capabilities parameter (#1838152)
  • Julian Descottes added support for the contexts argument of the network.addIntercept command, which allows restricting a network intercept to a set of top-level browsing contexts (#1882260)
  • Sasha Borovova updated the storage.getCookies command to return third party cookies selectively, based on the value of the network.cookie.cookieBehavior and network.cookie.cookieBehavior.optInPartitioning preferences (#1879503)
  • Sasha Borovova removed the ownership and sandbox parameters for the browsingContext.locateNodes command to align with a recent specification update (#1884935)
  • Sasha Borovova updated the session.subscribe and session.unsubscribe commands to throw an error when the events or contexts parameters are empty arrays (#1887871)
  • The team completed Milestone 10 of the project (bug list), where we implemented 50% of the commands needed to completely support Puppeteer, with 75% of the Puppeteer unit-tests passing with WebDriver BiDi. For Milestone 11 (bug list), our focus remains on implementing the remaining commands and features required to fully support Puppeteer (doc).

Lint, Docs and Workflow

Migration Improvements


  • Thanks to :joe.scott.webster for submitting a patch that fixes PiP captions issues with several Yahoo sites (bug) and filing a follow-up ticket for AOL (bug).
    • Also thanks to Niklas Baumgardner (:niklas) for lending a hand!


Screenshots (enabled by default in Nightly)

Search and Navigation

  • Firefox Suggest experience
    • Daisuke renamed the “Learn More about Firefox Suggest” menuitem to a more direct “Manage Firefox Suggest”. Bug 1889820
    • Drew added new telemetry to measure in experiments the potential exposure of simulated results, depending on the typed search string. Bug 1881875
  • SERP categorization telemetry
    • James, Stephanie and Karandeep have landed many fixes to the storage, logging and measurements.
  • Search Config v2
    • Enabling new config in Nightly lowered the number of initialization errors for the Search Service
    • Mark fixed character-set handling. Bug 1890698
    • Mark added a new property covering the device type. Bug 1889910
    • Mandy sorted collections by engine identifier and property names, to make the config more easily navigable and diffs nicer and easier to maintain. Bug 1889247
    • Mandy updated the documentation. Bug 1889037
  • Other fixes
    • Marco fixed a bug causing engagement on certain results to be registered both as engagement and abandonment. Bug 1888627
    • Dao has fixed alignment of “switch to tab” and “visit” chiclets. Bug 1886761

Storybook/Reusable Components

Firefox Nightly: Firefox Nightly Now Available for Linux on ARM64

We’re excited to share an update with people running Linux on ARM64 (also known as AArch64) architectures.

ARM64 Binaries Are Here

After launching the Firefox Nightly .deb package, feedback highlighted a demand for ARM64 builds. In response, we’re excited to now offer Firefox Nightly for ARM64 as both .tar archives and .deb packages. Keep the suggestions coming – feedback is always welcome!

  • .tar Archives: Prefer our traditional .tar.bz2 binaries? You can get them from our downloads page by selecting Firefox Nightly for Linux ARM64/AArch64.
  • .deb Packages: For updates and installation via our APT repository, you can follow these instructions and install the firefox-nightly package.

On ARM64 Build Stability

We want to be upfront about the current state of our ARM64 builds. Although we are confident in the quality of Firefox on this architecture, we are still incorporating comprehensive ARM64 testing into Firefox’s continuous integration and release pipeline. Our goal is to integrate ARM64 builds into Firefox’s extensive automated test suite, which will enable us to offer this architecture across the beta, release, and ESR channels.

Your Feedback Is Crucial

We encourage you to download the new ARM64 Firefox Nightly binaries, test them, and share your findings with us. By using these builds and reporting any issues, you’re empowering our developers to better support and test on this architecture, ultimately leading to a stable and reliable Firefox for ARM64. Please share your findings through Bugzilla and stay tuned for more updates. Thank you for your ongoing participation in the Firefox Nightly community!

IRL (podcast): Mozilla’s IRL podcast is a Shorty Awards finalist - we need your help to win!

We’re excited to share that Mozilla's IRL podcast is a Shorty Awards finalist in the Science and Technology Podcast category! If you enjoy IRL you can show your support by voting for us.

The Shorty Awards recognize great content by brands, agencies and nonprofits. It’s really an honor to be able to feature the voices and stories of the folks who are putting people over profit in AI. A Shorty Award will help bring these stories to even more listeners.

How to vote

1. Go to

2. Click 'Vote in Science and Technology Podcast'

3. Create a username and password (it's easy, we promise!)

4. Come back and vote every day until April 30th

We believe putting people over profit is award-worthy. Don’t you?  Thanks for your support!

Mozilla Thunderbird: Adventures In Rust: Bringing Exchange Support To Thunderbird

Microsoft Exchange is a popular choice of email service for corporations and educational institutions, and so it’s no surprise that there’s demand among Thunderbird users to support Exchange. Until recently, this functionality was only available through an add-on. But, in the next ESR (Extended Support Release) of Thunderbird in July 2024, we expect to provide this support natively within Thunderbird. Because of the size of this undertaking, the initial roll-out of Exchange support will cover only email, with calendar and address book support coming at a later date.

This article will go into technical detail on how we are implementing support for the Microsoft Exchange Web Services mail protocol, and some idea of where we’re going next with the knowledge gained from this adventure.

Historical context

Thunderbird is a long-lived project, which means there’s lots of old code. The current architecture for supporting mail protocols predates Thunderbird itself, having been developed more than 20 years ago as part of Netscape Communicator. There was also no paid maintainership from about 2012 — when Mozilla divested and transferred ownership of Thunderbird to its community — until 2017, when Thunderbird rejoined the Mozilla Foundation. That means years of ad hoc changes without a larger architectural vision and a lot of decaying C++ code that was not using modern standards.

Furthermore, in the entire 20-year lifetime of the Thunderbird project, no one has added support for a new mail protocol before. As such, no one has updated the architecture as mail protocols change and adapt to modern usage patterns, and a great deal of institutional knowledge has been lost. Implementing this much-needed feature is the first organization-led effort to actually understand and address limitations of Thunderbird’s architecture in an incremental fashion.

Why we chose Rust

Thunderbird is a large project maintained by a small team, so choosing a language for new work cannot be taken lightly. We need powerful tools to develop complex features relatively quickly, but we absolutely must balance this with long-term maintainability. Selecting Rust as the language for our new protocol support brings some important benefits:

  1. Memory safety. Thunderbird takes input from anyone who sends an email, so we need to be diligent about keeping security bugs out.
  2. Performance. Rust runs as native code with all of the associated performance benefits.
  3. Modularity and Ecosystem. The built-in modularity of Rust gives us access to a large ecosystem where many people are already working on email-related problems we can benefit from.

The above are all on the standard list of benefits when discussing Rust. However, there are some additional considerations for Thunderbird:

  1. Firefox. Thunderbird is built on top of Firefox code and we use a shared CI infrastructure with Firefox which already enables Rust. Additionally, Firefox provides a language interop layer called XPCOM (Cross-Platform Component Object Model), which has Rust support and allows us to call between Rust, C++, and JavaScript.
  2. Powerful tools. Rust gives us a large toolbox for building APIs which are difficult to misuse by pushing logical errors into the domain of the compiler. We can easily avoid circular references or provide functions which simply cannot be called with values which don’t make sense, letting us have a high degree of confidence in features with a large scope. Rust also provides first-class tooling for documentation, which is critically important on a small team.
  3. Addressing architectural technical debt. Introducing a new language gives us a chance to reconsider some aging architectures while benefiting from a growing language community.
  4. Platform support and portability. Rust supports a broad set of host platforms. By building modular crates, we can reuse our work in other projects, such as Thunderbird for Android/K-9 Mail.

Some mishaps along the way

Of course, the endeavor to introduce our first Rust component in Thunderbird is not without its challenges, mostly related to the size of the Thunderbird codebase. For example, there is a lot of existing code with idiosyncratic asynchronous patterns that don’t integrate nicely with idiomatic Rust. There are also lots of features and capabilities in the Firefox and Thunderbird codebase that don’t have any existing Rust bindings.

The first roadblock: the build system

Our first hurdle came with getting any Rust code to run in Thunderbird at all. There are two things you need to know to understand why:

First, since the Firefox code is a dependency of Thunderbird, you might expect that we pull in their code as a subtree of our own, or some similar mechanism. However, for historical reasons, it’s the other way around: building Thunderbird requires fetching Firefox’s code, fetching Thunderbird’s code as a subtree of Firefox’s, and using a build configuration file to point into that subtree.

Second, because Firefox’s entrypoint is written in C++ and Rust calls happen via an interoperability layer, there is no single point of entry for Rust. In order to create a tree-wide dependency graph for Cargo and avoid duplicate builds or version/feature conflicts, Firefox introduced a hack to generate a single Cargo workspace which aggregates all the individual crates in the tree.

In isolation, neither of these is a problem in itself. However, in order to build Rust into Thunderbird, we needed to define our own Cargo workspace which lives in our tree, and Cargo does not allow nesting workspaces. To solve this issue, we had to define our own workspace and add configuration to the upstream build tool, mach, to build from this workspace instead of Firefox’s. We then use a newly-added mach subcommand to sync our dependencies and lockfile with upstream and to vendor the resulting superset.


While the availability of language interop through XPCOM is important for integrating our frontend and backend, the developer experience has presented some challenges. Because XPCOM was originally designed with C++ in mind, implementing or consuming an XPCOM interface requires a lot of boilerplate and prevents us from taking full advantage of tools like rust-analyzer. Over time, Firefox has significantly reduced its reliance on XPCOM, making a clunky Rust+XPCOM experience a relatively minor consideration. However, as part of the previously-discussed maintenance gap, Thunderbird never undertook a similar project, and supporting a new mail protocol requires implementing hundreds of functions defined in XPCOM.

Existing protocol implementations ease this burden by inheriting C++ classes which provide the basis for most of the shared behavior. Since we can’t do this directly, we are instead implementing our protocol-specific logic in Rust and communicating with a bridge class in C++ which combines our Rust implementations (an internal crate called ews_xpcom) with the existing code for shared behavior, with as small an interface between the two as we can manage.

Please visit our documentation to learn more about how to create Rust components in Thunderbird.

Implementing Exchange support with Rust

Despite the technical hiccups experienced along the way, we were able to clear the hurdles and both build and use Rust within Thunderbird. Now we can talk about how we’re using it and the tools we’re building. Remember all the way back to the beginning of this blog post, where we stated that our goal is to support Microsoft’s Exchange Web Services (EWS) API. EWS communicates over HTTP with request and response bodies in XML.

Sending HTTP requests

Firefox already includes a full-featured HTTP stack via its necko networking component. However, necko is written in C++ and exposed over XPCOM, which as previously stated does not make for nice, idiomatic Rust. Simply sending a GET request requires a great deal of boilerplate, including nasty-looking unsafe blocks where we call into XPCOM. (XPCOM manages the lifetime of pointers and their referents, ensuring memory safety, but the Rust compiler doesn’t know this.) Additionally, the interfaces we need are callback-based. To make sending HTTP requests simple for developers, we needed to do two things:

  1. Support native Rust async/await syntax. For this, we added a new Thunderbird-internal crate, xpcom_async. This is a low-level crate which translates asynchronous operations in XPCOM into Rust’s native async syntax by defining callbacks to buffer incoming data and expose it by implementing Rust’s Future trait so that it can be awaited by consumers. (If you’re not familiar with the Future concept in Rust, it is similar to a JS Promise or a Python coroutine.)
  2. Provide an idiomatic HTTP API. Now that we had native async/await support, we created another internal crate (moz_http) which provides an HTTP client inspired by reqwest. This crate handles creating all of the necessary XPCOM objects and providing Rustic error handling (much nicer than the standard XPCOM error handling).
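
As a rough illustration of that callback-to-async translation — in Python rather than Rust, with invented names, and not the actual xpcom_async code — the pattern of buffering callback data and resolving an awaitable looks like this:

```python
import asyncio

class CallbackHttpClient:
    """Stand-in for a callback-based API, analogous to necko's XPCOM listeners."""
    def get(self, url, on_data, on_stop):
        # A real client would deliver chunks asynchronously; we deliver one chunk.
        on_data(b"hello from " + url.encode())
        on_stop(200)

async def fetch(client, url):
    # Translate the callback interface into something awaitable.
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    chunks = []

    def on_data(chunk):
        chunks.append(chunk)  # buffer incoming data as it arrives

    def on_stop(status):
        # Resolve the awaitable once the transfer completes.
        future.set_result((status, b"".join(chunks)))

    client.get(url, on_data, on_stop)
    return await future

status, body = asyncio.run(fetch(CallbackHttpClient(), "https://example.com"))
```

The same shape — callbacks feeding a buffer, then completing a Future — is what lets consumers simply `await` an HTTP response instead of wiring up listeners by hand.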

Handling XML requests and responses

The hardest task in working with EWS is translating between our code’s own data structures and the XML expected/provided by EWS. Existing crates for serializing/deserializing XML didn’t meet our needs. serde’s data model doesn’t align well with XML, making distinguishing XML attributes and elements difficult. EWS is also sensitive to XML namespaces, which are completely foreign to serde. Various serde-inspired crates designed for XML exist, but these require explicit annotation of how to serialize every field. EWS defines hundreds of types which can have dozens of fields, making that amount of boilerplate untenable.

Ultimately, we found that existing serde-based implementations worked fine for deserializing XML into Rust, but we were unable to find a satisfactory tool for serialization. To that end, we introduced another new crate, xml_struct. This crate defines traits governing serialization behavior and uses Rust’s procedural derive macros to automatically generate implementations of these traits for Rust data structures. It is built on top of the existing quick_xml crate and designed to create a low-boilerplate, intuitive mapping between XML and Rust. While it is in the early stages of development, it does not make use of any Thunderbird/Firefox internals and is available on GitHub.
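
To illustrate the core idea — letting the shape of a data structure drive its XML serialization instead of per-field annotations — here is a Python analogy with hypothetical type names. (The real xml_struct crate does this with procedural derive macros in Rust; this is only a sketch of the concept.)

```python
from dataclasses import dataclass, fields, is_dataclass
from xml.etree import ElementTree as ET

def to_xml(value, tag):
    """Derive an XML element tree from a data structure's fields."""
    elem = ET.Element(tag)
    if is_dataclass(value):
        for f in fields(value):
            # Map snake_case field names to the PascalCase element names EWS uses.
            child_tag = f.name.title().replace("_", "")
            elem.append(to_xml(getattr(value, f.name), child_tag))
    else:
        elem.text = str(value)
    return elem

# Hypothetical request types; field definitions alone determine the XML shape.
@dataclass
class Mailbox:
    email_address: str

@dataclass
class GetFolder:
    folder_shape: str
    mailbox: Mailbox

doc = ET.tostring(
    to_xml(GetFolder("Default", Mailbox("user@example.com")), "GetFolder"),
    encoding="unicode",
)
```

With hundreds of EWS types in play, having serialization fall out of the type definitions — rather than hand-written boilerplate per field — is what makes the approach tenable.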

We have also introduced one more new crate, ews, which defines types for working with EWS and an API for XML serialization/deserialization, based on xml_struct and serde. Like xml_struct, it is in the early stages of development, but is available on GitHub.

Overall flow chart

Below, you can find a handy flow chart to help understand the logical flow for making an Exchange request and handling the response. 


Fig 1. A bird’s eye view of the flow

What’s next?

Testing all the things

Before landing our next major features, we are taking some time to build out our automated tests. In addition to unit tests, we just landed a mock EWS server for integration testing. The current focus on testing is already paying dividends, having exposed a couple of crashes and some double-sync issues which have since been rectified. Going forward, new features can now be easily tested and verified.

Improving error handling

While we are working on testing, we are also busy improving the story around error handling. EWS’s error behavior is often poorly documented, and errors can occur at multiple levels (e.g., a request may fail as a whole due to throttling or incorrect structure, or parts of a request may succeed while other parts fail due to incorrect IDs). Some errors we can handle at the protocol level, while others may require user intervention or may be intractable. In taking the time now to improve error handling, we can provide a more polished implementation and set ourselves up for easier long-term maintenance.

Expanding support

We are working on expanding protocol support for EWS (via ews and the internal ews_xpcom crate) and hooking it into the Thunderbird UI. Earlier this month, we landed a series of patches which allow adding an EWS account to Thunderbird, syncing the account’s folder hierarchy from the remote server, and displaying those folders in the UI. (At present, this alpha-state functionality is gated behind a build flag and a preference.) Next up, we’ll work on fetching message lists from the remote server as well as generalizing outgoing mail support in Thunderbird.


Of course, all of our work on maintainability is for naught if no one understands what the code does. To that end, we’re producing documentation on how all of the bits we have talked about here come together, as well as describing the existing architecture of mail protocols in Thunderbird and thoughts on future improvements, so that once the work of supporting EWS is done, we can continue building and improving on the Thunderbird you know and love.

EWS is deprecated and slated for removal in 2026. Are there plans to add support for Microsoft Graph into Thunderbird?

This is a common enough question that we probably should have addressed it in the post! EWS will no longer be available for Exchange Online in October 2026, but our research in the lead-up to this project showed that there’s a significant number of users who are still using on-premise installs of Exchange Server. That is, many companies and educational institutions are running Exchange Server on their own hardware.

These on-premise installs largely support EWS, but they cannot support the Azure-based Graph API. We expect that this will continue to be the case for some time to come, and EWS provides a means of supporting those users for the foreseeable future. Additionally, we found a few outstanding issues with the Graph API (which is built with web-based services in mind, not desktop applications), and adding EWS support allows us to take some extra time to find solutions to those problems before building Graph API support.

Diving into the past has enabled a sound engineering-led strategy for dealing with the future: thanks to the deep dive into the existing Thunderbird architecture, we can begin to leverage more efficient and productive patterns and technologies when implementing protocols.

In time, this will have far-reaching consequences for the Thunderbird code base, which will not only run faster and more reliably, but also carry a significantly reduced maintenance burden when landing bug fixes and new features.

Rust and EWS are elements of a larger effort in Thunderbird to reduce turnarounds and build resilience into the very core of the software.

The post Adventures In Rust: Bringing Exchange Support To Thunderbird appeared first on The Thunderbird Blog.

Firefox UX: On Purpose: Collectively Defining Our Team’s Mission Statement

How the Firefox User Research team crafted our mission statement


Firefox illustration by UX designer Gabrielle Lussier

Like many people who work at Mozilla, I’m inspired by the organization’s mission: to ensure the Internet is a global public resource, open and accessible to all. In thinking about the team I belong to, though, what’s our piece of this bigger puzzle?

The Firefox User Research team tackled this question early last year. We gathered in person for a week of team-focused activities; defining a team mission statement was on the agenda. As someone who enjoys workshop creation and strategic planning, I was on point to develop the workshop. The end goal? A team-backed statement that communicated our unique purpose and value.

Mission statement development was new territory for me. I read up on approaches for creating them and landed on a workshop design (adapted from MITRE’s Innovation Toolkit) that would enable the team to participate in a process of collectively reflecting on our work and defining our shared purpose.

To my delight, the workshop was fruitful and engaging. Not only did it lead us to a statement that resonates, it sparked meaningful discussion along the way.

Here, I outline the five workshop activities that guided us there.

1) Discuss the value of a good mission statement

We kicked off the workshop by discussing the value of a well-crafted statement. Why were we aiming to define one in the first place? Benefits include: fostering alignment between the team’s activities and objectives, communicating the team’s purpose, and helping the team to cohere around a shared direction. In contrast to a vision statement, which describes future conditions in aspirational language, a mission statement describes present conditions in concrete terms.

In our case, the team had recently grown in size to thirteen people. We had a fairly new leadership team, along with a few new members of the team. With a mix of longer tenure and newer members, and quantitative and mixed methods researchers (which at one point in the past had been on separate teams), we wanted to inspire team alignment around our shared goals and build bridges between team members.

2) Individually answer a set of questions about our team’s work

Large sheets of paper were set up around the room with the following questions:

A. What do we, as a user research team, do?

B. How do we do what we do?

C. What value do we bring?

D. Who benefits from our work?

E. Why does our team exist?

Markers in hand, team members dispersed around the room, spending a few minutes writing answers to each question until we had cycled through them all.


Team members during the workshop

3) Highlight keywords and work in groups to create draft statements

Small groups were formed and were tasked with highlighting keywords from the answers provided in the previous step. These keywords served as the foundation for drafting statements, with the following format provided as a helpful guide:

Our mission is to (A — what we do) by (B — how we do it).

We (C — the value we bring) so that (D — who benefits from our work ) can (E — why we exist).

One group’s draft statement from Step 3

4) Review and discuss resulting statements

Draft statements emerged remarkably fluidly from the activities in Steps 2 and 3. Common elements were easy to identify (we develop insights and shape product decisions), while the differences sparked worthwhile discussions. For example: How well does the term ‘human-centered’ capture the work of our quantitative researchers? Is creating empathy for our users a core part of our purpose? How does our value extend beyond impacting product decisions?

As a group, we reviewed and discussed the statements, crossing out any jargony terms and underlining favoured actions and words. After this step, we knew we were close to a final statement. We concluded the workshop, with a plan to revisit the statements when we were back to work the following week.

5) Refine and share for feedback

The following week, we refined our work and shared the outcome with the lead of our Content Design practice for review. Her sharp feedback included encouraging us to change the phrase ‘informing strategic decisions’ to ‘influencing strategic decisions’ to articulate our role as less passive — a change we were glad to make. After another round of editing, we arrived at our final mission statement:

Our mission is to influence strategic decisions through systematic, qualitative, and quantitative research. We develop insights that uncover opportunities for Mozilla to build an open and healthy internet for all.

Closing thoughts

If you’re considering involving your team in defining a team mission statement, it makes for a rewarding workshop activity. The five steps presented in this article give team members the opportunity to reflect on important foundational questions (what value do we bring?), while deepening mutual understanding.

Crafting a team mission statement was much less of an exercise in wordsmithing than I might have assumed. Instead, it was an exercise in aligning on the bigger questions of why we exist and who benefits from our work. I walked away with a better understanding of the value our team brings to Mozilla, a clearer way to articulate how our work ladders up to the organization’s mission, and a deeper appreciation for the individual perspectives of our team members.

Support.Mozilla.Org: Freshening up the Knowledge Base for spring 2024

Hello, SUMO community!

This spring we’re happy to announce that we’re refreshing the Mozilla Firefox Desktop and Mobile knowledge bases. This is a project that we’ve been working on for the past several months and now, we’re ready to finally share it with you all! We’ve put together a video to walk you through what these changes mean for SUMO and how they’ll impact you.

Introduction of Article Categories

When exploring our knowledge base, we realized there are so many articles, and it’s important to set expectations for users. We’ll be introducing four article types:

  • About – Article that aims to be educational and informs the reader about a certain feature.
  • How To – Article that aims to teach a user how to interact with a feature or complete a task.
  • Troubleshooting – Article that aims to provide solutions to an issue a user might encounter.
  • FAQ – Article that focuses on answering frequently asked questions that a user might have.

We will standardize titles and how articles are formatted per category, so users know what to expect when interacting with an article.

Downsizing and concentration of articles

There are hundreds upon hundreds of articles in our knowledge base. However, many of them are repetitive and contain similar information. We want to reduce the number of articles and improve the quality of our content. We will be archiving articles and revising active articles throughout this refresh.

Style guideline update focus on reducing cognitive load

As mentioned in a previous post, we will be updating the style guideline and aiming to reduce the cognitive load on users by introducing new style guidelines like in-line images. These aren’t huge changes, but we’ll go over them in more detail when we release the updated style guidelines.

With all this coming up, we hope you’ll join us for today’s community call to learn more about the knowledge base refresh. We hope to collaborate with our community to make this update successful.

Have questions or feedback? Drop us a message in this SUMO forum thread.

Mozilla Thunderbird: April 2024 Community Office Hours: Rust and Exchange Support


We admit it. Thunderbird is getting a bit Rusty, but in a good way! In our monthly Development Digests, we’ve been updating the community about enabling Rust in Thunderbird to implement native support for Exchange. Now, we’d like to invite you for a chat with Team Thunderbird and the developers making this change possible. As always, send your questions in advance to! This is a great way to get answers even if you can’t join live.

Be sure to note the change in day of the week and the UTC time. (At least the time changes are done for now!) We had to shift our calendar a bit to fit everyone’s schedules and time zones!

UPDATE: Watch the entire conversation here.

April Office Hours: Rust and Exchange

This month’s topic is a new and exciting change to the core functionality: using Rust to natively support Microsoft Exchange. Join us and talk with the three key Thunderbird developers responsible for this shiny (rusty) new addition: Sean Burke, Ikey Doherty, and Brendan Abolivier! You’ll find out why we chose Rust, the challenges we encountered, and how we used Rust to interface with XPCOM and Necko to provide Exchange support. We’ll also give you a peek into some future plans around Rust.

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours where we answered some of your frequently asked recent questions. You can watch clips of specific questions and answers on our TILvids channel. If you’d prefer a written summary, this blog post has you covered.

Join The Video Chat

We’ve also got a shiny new Big Blue Button room, thanks to KDE! We encourage everyone to check out their Get Involved page. We’re grateful for their support and to have an open source web conferencing solution for our community office hours.

Date and Time: Tuesday, April 23 at 16:00 UTC

Direct URL to Join:

Access Code: 964573

The post April 2024 Community Office Hours: Rust and Exchange Support appeared first on The Thunderbird Blog.

Firefox Developer Experience: Firefox WebDriver Newsletter — 125

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 125 release cycle.


With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.


New: Support for the “userAgent” capability

We added support for the User Agent capability which is returned with all the other capabilities by the new session commands. It is listed under the userAgent key and contains the default user-agent string of the browser. For instance when connecting to Firefox 125 (here on macos), the capabilities will contain a userAgent property such as:

"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:125.0) Gecko/20100101 Firefox/125.0"
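
As an illustration (our own sketch, not from any client library), a client could pull the new capability out of a New Session response body, whose layout follows the WebDriver classic response format:

```python
import json

# A New Session response body as a JSON string; the userAgent value matches
# the example above, the other fields are illustrative.
response_body = """
{
  "value": {
    "sessionId": "5e76e622-5a68-4d39-8669-ec1505e40040",
    "capabilities": {
      "browserName": "firefox",
      "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:125.0) Gecko/20100101 Firefox/125.0"
    }
  }
}
"""

caps = json.loads(response_body)["value"]["capabilities"]
user_agent = caps["userAgent"]
```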

WebDriver BiDi

New: Support for the “input.setFiles” command

The “input.setFiles” command is a new feature which allows clients to interact with <input> elements with type="file". As the name suggests, it can be used to set the list of files of such an input. The command expects three mandatory parameters. First the context parameter identifies the BrowsingContext (tab or window) where we expect to find an <input type="file">. Then element should be a sharedReference to this specific <input> element. Finally the files parameter should be a list (potentially empty) of strings which are the paths of the files to set for the <input>. This command has a null return value.

-> {
  "method": "input.setFiles",
  "params": {
    "context": "096fca46-5860-412b-8107-dae7a80ee412",
    "element": {
      "sharedId": "520c3e2b-6210-41da-8ae3-2c499ad66049"
    },
    "files": [
      "path/to/local/file"
    ]
  },
  "id": 7
}
<- { "type": "success", "id": 7, "result": {} }

Note that providing more than one path in the files parameter is only supported for <input> elements with the multiple attribute set. Trying to send several paths to a regular <input> element will result in an error.

It’s also worth highlighting that the command will override the files which were previously set on the input. For instance providing an empty list as the files parameter will reset the input to have no file selected.
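
A small sketch of building such a payload (the helper name is ours, not part of any client library):

```python
import json

def set_files_command(context_id, shared_id, paths, command_id):
    """Build an input.setFiles command payload for a WebDriver BiDi client."""
    return {
        "method": "input.setFiles",
        "params": {
            "context": context_id,
            "element": {"sharedId": shared_id},
            # An empty paths list resets the <input> to have no file selected.
            "files": list(paths),
        },
        "id": command_id,
    }

cmd = set_files_command(
    "096fca46-5860-412b-8107-dae7a80ee412",
    "520c3e2b-6210-41da-8ae3-2c499ad66049",
    [],  # clear any previously selected files
    7,
)
message = json.dumps(cmd)  # ready to send over the BiDi WebSocket connection
```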

New: Support for the “storage.deleteCookies” command

In Firefox 124, we added two methods to interact with cookies: “storage.getCookies” and “storage.setCookie”. In Firefox 125 we are adding “storage.deleteCookies” so that you can remove previously created cookies. The parameters for the “deleteCookies” command are identical to the ones for the “getCookies” command: the filter argument allows matching cookies based on specific criteria, and the partition argument allows matching cookies owned by a certain storage partition. All the cookies matching the provided parameters will be deleted. Similarly to “getCookies” and “setCookie”, “deleteCookies” will return the partitionKey which was built to retrieve the cookies.

# Assuming two cookies exist for the same domain: foo=value1 and bar=value2

-> {
  "method": "storage.deleteCookies",
  "params": {
    "filter": {
      "name": "foo",
      "domain": ""
    }
  },
  "id": 8
}

<- { "type": "success", "id": 8, "result": { "partitionKey": {} } }

-> {
  "method": "storage.getCookies",
  "params": {
    "filter": {
      "domain": ""
    }
  },
  "id": 9
}

<- {
  "type": "success",
  "id": 9,
  "result": {
    "cookies": [
      {
        "domain": "",
        "httpOnly": false,
        "name": "bar",
        "path": "/",
        "sameSite": "none",
        "secure": false,
        "size": 9,
        "value": {
          "type": "string",
          "value": "value2"
        }
      }
    ],
    "partitionKey": {}
  }
}

New: Support for the “userContext” property in the “partition” argument

All storage commands accept a partition parameter to specify which storage partition should be used, whether to retrieve, create, or delete cookie(s). Clients can now provide a userContext property in the partition parameter to build a partition key tied to a specific user context. As a reminder, user contexts are collections of browsing contexts sharing the same storage partition, and are implemented as Containers in Firefox.

-> { "method": "browser.createUserContext", "params": {}, "id": 8 }
<- { "type": "success", "id": 8, "result": { "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7" } }
-> {
  "method": "storage.setCookie",
  "params": {
    "cookie": {
      "name": "test",
      "value": {
        "type": "string",
        "value": "cookie in user context partition"
      },
      "domain": ""
    },
    "partition": {
      "type": "storageKey",
      "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7"
    }
  },
  "id": 9
}

<- { "type": "success", "id": 9, "result": { "partitionKey": { "userContext": "6ade5b81-ef5b-4669-83d6-8119c238a3f7" } } }

Bug fixes

Mozilla Thunderbird: Team Thunderbird Answers Your Most Frequently Asked Questions

We know the Thunderbird community has LOTS of questions! We get them on Mozilla Support, Mastodon, and X (formerly Twitter). They pop up everywhere, from the Thunderbird subreddit to the teeming halls of conferences like FOSDEM and SCaLE. During our March Community Office Hours, we took your most frequently asked questions to Team Thunderbird and got some answers. If you couldn’t watch the full session, or would rather have the answers in abbreviated text clips, this post is for you!

Thunderbird for Android / K-9 Mail

The upcoming release on Android is definitely on everyone’s mind! We received lots of questions about this at our conference booths, so let’s answer them!

Will there be Exchange support for Thunderbird for Android?

Yes! Implementing Exchange in Rust in the Thunderbird Desktop client will enable us to reuse those Rust crates as shared libraries with the Mobile client. Stay up to date on Exchange support progress via our monthly Developer Digests.

Will Thunderbird Add-ons be available on Android?

Right now, no, they will not be available. K-9 Mail uses a different code base than Thunderbird Desktop. Thunderbird add-ons are designed for a desktop experience, not a mobile one. We want to have add-ons in the future, but this will likely not happen within the next two years.

When Thunderbird for Android launches, will it be available on F-Droid?

It absolutely will.

When Thunderbird for Android is ready to be released, what will the upgrade path look like?

We know some in the K-9 Mail community love their adorable robot dog and don’t want to give him up yet. So we will support K-9 Mail (same code, different brand) in parallel for a year or two, until the product is more mature, and we see that more K-9 Mail users are organically switching.

Because of Android security, users will need to manually migrate from K-9 Mail to Thunderbird for Android, versus an automatic migration. We want to make that effortless and unobtrusive, and the Sync feature using Mozilla accounts will be a large part of that. We are exploring one-tap migration tools that will prompt you to switch easily and keep all your data and settings – and your peace of mind.

Will CalDAV and CardDAV be available on Thunderbird for Android?

Probably! We’re still determining this, but we know our users like having their contacts and calendars inside one app for convenience, as well as out of privacy concerns. While it would be a lot of engineering effort, we understand the reasoning behind these requests. As we consider how to go forward, we’ll release all these explorations and ideas in our monthly updates, where people can give us feedback.

Will the K-9 Mail API provide the ability to download the saved preferences that Sync stores locally, to plug into automation like Ansible?

Yes! Sync is open source, so users can self-host their own sync server instead of using Mozilla services. This question touches on the differences in data structure between desktop and mobile, and how each handles settings. So this will take a while, but once we have something stable in a beta release, we’ll have articles on how to hook up your own sync server and do your own automation.

Thunderbird for Desktop

When will we have native Exchange support for desktop Thunderbird?

We hope to land this in the next ESR (Extended Support Release), version 128, in limited capacity. Users will still need to use the OWL Add-on for all situations where the standard Exchange Web Services are not available. We don’t yet know if native calendar and address book support will be included in the ESR. We want to support every aspect of Exchange, but there is a lot of code complexity and a history of changes from Microsoft. So our primary goal is good, stable support for email by default, and calendar and address book if possible, for the next ESR.

When will conversations and a true threaded view be added to Thunderbird?

Viewing your own sent emails is an important component of a true conversation view. This is a top priority and we’re actively working towards it. Unfortunately, this requires overhauling the 20-year-old database that underlies Thunderbird. Our legacy database is not built to handle conversation views with received and sent messages listed in the same thread, and restructuring a two-decade-old database is not easy. Our goal is to have a new global message database in place by May 31. If nothing has exploded, it should be much easier to enable conversation view in the front end.

When will we get a full sender name column with the raw email address of the sender? This will help further avoid phishing and spam.

We plan to make this available in the next ESR — Thunderbird 128 — which is due July 2024.

Will there ever be a browser-based view of Thunderbird?

Despite our foundations in Firefox, this is a huge effort that would have to be built from scratch. This isn’t on our roadmap and not in our plans for now. If there was a high demand, we might examine how feasible this could be. Alex explains this in more detail during the short video below:

The post Team Thunderbird Answers Your Most Frequently Asked Questions appeared first on The Thunderbird Blog.

Mozilla Performance Blog: Performance Testing Newsletter, Q1 Edition

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.



  • Myeongjun Go [:myeongjun]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

P.S. If you’re interested in including updates from your teams in a quarterly newsletter like this, and you are not currently covered by another newsletter, please reach out to me (:sparky). I’m interested in making a more general newsletter for these.

Will Kahn-Greene: Observability Team Newsletter (2024q1)

Observability Team is a team dedicated to the problem domain and discipline of Observability at Mozilla.

We own, manage, and support monitoring infrastructure and tools supporting Mozilla products and services. Currently this includes Sentry and crash ingestion related services (Crash Stats (Socorro), Mozilla Symbols Server (Tecken), and Mozilla Symbolication Service (Eliot)).

In 2024, we'll be working with SRE to take over other monitoring services they are currently supporting like New Relic, InfluxDB/Grafana, and others.

This newsletter covers an overview of 2024q1. Please forward it to interested readers.


  • 🤹 Observability Services: Change in user support

  • 🏆 Sentry: Change in ownership

  • ‼️ Sentry: Please don't start new trials

  • ⏲️ Sentry: Cron monitoring trial ending April 30th

  • ⏱️ Sentry: Performance monitoring pilot

  • 🤖 Socorro: Improvements to Fenix support

  • 🐛 Socorro: Support guard page access information

See details below.

Blog posts

None this quarter.

Detailed project updates

Observability Services: Change in user support

We overhauled our pages in Confluence, started an #obs-help Slack channel, created a new Jira OBSHELP project, built out a support rotation, and leveled up our ability to do support for Observability-owned services.

See our User Support Confluence page for:

  • where to get user support

  • documentation for common tasks (get protected data access, create a Sentry team, etc)

  • self-serve instructions

Hop in #obs-help in Slack to ask for service support, help with monitoring problems, and advice.

Sentry: Change in ownership

The Observability team now owns Sentry service at Mozilla!

We successfully completed Phase 1 of the transition in Q1. If you're a member of the Mozilla Sentry organization, you should have received a separate email about this to the sentry-users Google group.

We've overhauled Sentry user support documentation to improve it in a few ways:

  • easier to find "how to" articles for common tasks

  • best practices to help you set up and configure Sentry for your project needs

Check out our Sentry user guide.

There's still a lot that we're figuring out, so we appreciate your patience and cooperation.

Sentry: Please don't start new trials

Sentry sends marketing and promotional emails to Sentry users which often include links to start a new trial. Please contact us before starting any new feature trials in Sentry.

Starting new trials may prevent us from trialing those features in the future when we’re in a better position to evaluate the feature. There's no way for admins to prevent users from starting a trial.

Sentry: Cron monitoring trial ending April 30th

The Cron Monitoring trial that was started a couple of months ago will end April 30th.

Based on feedback so far and other factors, we will not be enabling this feature once the trial ends.

This is a good reminder to build in redundancy in your monitoring systems. Don't rely solely on trial or pilot features for mission critical information!

Once the trial is over, we'll put together an evaluation summary.

Sentry: Performance monitoring pilot

Performance Monitoring is being piloted by a couple of teams; it is not currently available for general use.

In the meantime, if you are not one of these pilot teams, please do not use Performance Monitoring. There is a shared transaction event quota for the entire Mozilla Sentry organization. Once we hit that quota, events are dumped.

If you have questions about any of this, please reach out.


Socorro: Improvements to Fenix support

We worked on improvements to crash ingestion and the Crash Stats site for the Fenix project:

1812771: Fenix crash reporter's Socorro crash reports for Java exceptions have "Platform" = "Unknown" instead of "Android"

Previously, the platform would be "Unknown". Now the platform for Fenix crash reports is "Android". Further, the platform_pretty_version includes the Android ABI version.

Figure 1: Screenshot of Crash Stats Super Search results showing Android versions for crash reports.


1819628: reject crash reports for unsupported Fenix forks

Forks of Fenix outside of our control periodically send large swaths of crash reports to Socorro. When these sudden spikes happened, Mozillians would spend time looking into them only to discover they're not related to our code or our users. This is a waste of our time and resources.

We implemented support for the Android_PackageName crash annotation and added a throttle rule to the collector to drop crash reports from any non-Mozilla releases of Fenix.

From 2024-01-18 to 2024-03-31, Socorro accepted 2,072,785 Fenix crash reports for processing and rejected 37,483 unhelpful crash reports with this new rule. That's roughly 1.7%. That's not a huge amount, but because they sometimes come in bursts with the same signature, they show up in Top Crashers wasting investigation time.

1884041: fix create-a-bug links to work with java_exception

A long time ago, in an age partially forgotten, Fenix crash reports from a crash in Java code would send a crash report with a JavaStackTrace crash annotation. This crash annotation was a string representation of the Java exception. As such, it was difficult-to-impossible to parse reliably.

In 2020, Roger Yang and Will Kahn-Greene spec'd out a new JavaException crash annotation. The value is a JSON-encoded structure mirroring what Sentry uses for exception information. This structure provides more information than the JavaStackTrace crash annotation did and is much easier to work with because we don't have to parse it first.
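For illustration only (the field names below follow Sentry’s exception interface, which the annotation mirrors; the exception type and values are invented), a JavaException payload might look roughly like:

```json
{
  "exception": {
    "values": [
      {
        "type": "SQLiteCantOpenDatabaseException",
        "module": "android.database.sqlite",
        "value": "unable to open database file",
        "stacktrace": {
          "frames": [
            {
              "module": "android.database.sqlite.SQLiteConnection",
              "function": "nativePrepareStatement",
              "in_app": false
            }
          ]
        }
      }
    ]
  }
}
```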

Between 2020 and now, we have been transitioning from crash reports that only contained a JavaStackTrace to crash reports that contained both a JavaStackTrace and a JavaException. Once all Fenix crash reports from crashes in Java code contained a JavaException, we could transition Socorro code to use the JavaException value for Crash Stats views, signature generation, generate-create-bug-url, and other things.

Recently, Fenix dropped the JavaStackTrace crash annotation. However, we hadn't yet gotten to updating Socorro code to use--and prefer--the JavaException values. This broke the ability to generate a bug for a Fenix crash with the needed data added to the bug description. Work on bug 1884041 fixed that.

Comments for Fenix Java crash reports went from:

Crash report:

to:

Crash report:

Top 10 frames:

0  android.database.sqlite.SQLiteConnection  nativePrepareStatement
1  android.database.sqlite.SQLiteConnection  acquirePreparedStatement
2  android.database.sqlite.SQLiteConnection  executeForString
3  android.database.sqlite.SQLiteConnection  setJournalMode
4  android.database.sqlite.SQLiteConnection  setWalModeFromConfiguration
5  android.database.sqlite.SQLiteConnection  open
6  android.database.sqlite.SQLiteConnection  open
7  android.database.sqlite.SQLiteConnectionPool  openConnectionLocked
8  android.database.sqlite.SQLiteConnectionPool  open
9  android.database.sqlite.SQLiteConnectionPool  open

This both fixes the bug and also vastly improves the bug comments from what we were previously doing with JavaStackTrace.

Between 2024-03-31 and 2024-04-06, there were 158,729 Fenix crash reports processed. Of those, 15,556 have the circumstances affected by this bug: a JavaException but don't have a JavaStackTrace. That's roughly 10% of incoming Fenix crash reports.

While working on this, we refactored the code that generates these crash report bugs, so it's in a separate module that's easier to copy and use in external systems in case others want to generate bug comments from processed crash data.

Further, we changed the code so that instead of dropping arguments in function signatures, it now truncates them at 80 characters.

We're hoping to improve signature generation for Java crashes using JavaException values in 2024q2. That work is tracked in bug #1541120.

Socorro: Support guard page access information

1830954: Expose crashes which were likely accessing a guard page

We updated the stackwalker to pick up the changes for determining is_likely_guard_page. Then we exposed that in crash reports in the has_guard_page_access field. We added this field to the Details tab in crash reports and made it searchable. We also added this to the signature report.

This helps us know if a crash is possibly due to a bug with memory access that could be a possible security vulnerability vector--something we want to prioritize fixing.

Since this field is security sensitive, it requires protected data access to view and search with.

Socorro misc

Tecken/Eliot misc

  • Maintenance and documentation improvements.

  • 5 production deploys. Created 21 issues. Resolved 28 issues.

More information

Find us:

Thank you for reading!

Firefox Nightly: Exploring improvements to the Firefox sidebar

What are we working on? 

We have long been excited to improve the existing Firefox sidebar and strengthen productivity use cases in the browser. We are laying the groundwork for these improvements, and you may have seen early work-in-progress in our test builds and in Nightly behind preferences (Firefox Nightly with vertical tabs and Firefox is experimenting with a sidebar in Nightly).

What to expect next?

In the near future, we will be landing foundational sidebar features in Nightly to ensure parity with the existing sidebar and make the new experience more useful and easy to use. Many of the ideas we are exploring are based on your suggestions in Mozilla Connect. You’ve shared how you imagine productivity, switching between contexts, and juggling multiple tasks could improve in Firefox, and we’ve listened.

We are encouraged by your positive feedback on our early concepts, and we look forward to engaging with the community and hearing more about what you think once sidebar features are ready for testing. We will announce feature readiness for feedback in the follow-up blog posts and on Connect.

In the meantime, if you have questions or general feedback, please engage with us on Mozilla Connect.

Firefox Nightly: Customizing Reader Mode – These Weeks in Firefox: Issue 158


Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed a regression introduced in Firefox 116 for extensions including a sidebar command shortcut (fixed in Nightly 126 and Beta 125) – Bug 1881820
    • Thanks to Dao for investigating the regression and then following up with a fix
  • Fixed a long standing uninterruptible reflow triggered by browser-addons.js on handling “addon-install-confirmation” notifications – Bug 1360028
    • Thanks to Dao for fixing it too!

ESMification status

  • 100%
    • Thank you to everyone that has worked on and supported this effort.
    • Plan for out-of-tree changes & removing old loaders is coming soon.

Lint, Docs and Workflow

Migration Improvements

Screenshots (enabled by default in Nightly)

Search and Navigation

  • Firefox Suggest experience
    • Work continues integrating the new Rust backend and improving exposure metrics.
    • Daisuke has exposed icon MIME types along with blobs from the offline Suggest backend. Bug 1882967
    • Drew has fixed a problem with the recording of exposure metrics in experiments. Bug 1886175
  • Clipboard result
    • Karandeep has fixed a problem with empty searches returning no results in Tab, History and Bookmarks Search Mode, and a problem with the clipboard result persisting when switching through multiple empty tabs. Bug 1884094, Bug 1865336
  • SERP categorization metrics
    • Stephanie and James have fixed multiple issues in this area.
    • Categorization metric has been enabled in Nightly.
  • Search Config v2
    • Standard8 and Mandy have fixed multiple issues in this area.
    • Work continues as Config v2 has been enabled in Nightly.
  • Frecency ranking
    • Marco has changed frecency recalculation to accelerate when many changes have been made from the last recalculation. This should help with large imports. Bug 1873629
    • Marco has corrected a schema migration mistake, preventing recalculation of frecency for not recently accessed domains. That caused autofill of domains to not work as expected in the Address Bar for Firefox 125 Nightly (and first week of Beta). Bug 1886975
  • Other fixes
    • Drew has corrected visual alignment of weather results. Bug 1886694
    • Dale has corrected visual alignment of rich search suggestions. Bug 1871022

Storybook/Reusable Components

  • Design Tokens
    • We’ve recently landed some changes to how our design tokens are handled in mozilla-central; we now have a JSON source of truth for these tokens
    • To update the tokens files (tokens-shared, tokens-platform, tokens-brand), you’ll need to modify the design-tokens.json file and then run ./mach npm run build --prefix=toolkit/themes/shared/design-system
    • Our current docs can be found on Storybook: JSON design tokens, and the more general design tokens docs
      • Porting these docs over to Firefox Source Docs will happen in the next couple of days
    • This info will also be sent out to the firefox-dev mailing list with more details and links to Firefox Source Docs

The Servo Blog: Servo and SpiderMonkey

As a web engine, Servo embeds another engine for its script execution capabilities, including both JavaScript and Wasm: SpiderMonkey. One of the goals of Servo is modularity, and the question came up of how modular it really is with regard to those capabilities. For example, how easy would it be for Servo to use Chrome’s V8 engine, or the next big script engine? To answer that question, we’ve written a short report analysing the relationship between Servo and SpiderMonkey.

The problem

Running a webpage happens inside the script component of Servo; the loading process starts there, and the page continues to run its HTML event loop there. By its very nature, executing a script from within a webpage requires an integration between the script engine and the web engine that surrounds it. Anything shared between the two, including the DOM itself and any other construct calling from one into the other, needs to be integrated somehow, and much but not all of that is done via WebIDL. For example, an integration area that is left for web and script engines to implement as they see fit is that with a garbage collector (see example in Rust for SpiderMonkey).

The need to integrate can result in tight coupling, but the classic ways of increasing modularity — abstractions and interfaces — can be applied here as well, and that is where we found Servo lacking in some ways, but also on the right path. Servo already comes with abstractions and interfaces for a large surface area of its integration with SpiderMonkey, providing ease of use and clarity while preserving boundaries between the two. Other parts of that integration rely on direct, and unsafe, calls into the low-level SpiderMonkey APIs.

The solution

The low-hanging fruit consists of removing these direct calls into low-level SpiderMonkey APIs, replacing them with safe and higher-level constructs. Work on this has started, through a combination of efforts from maintainers and the enthusiasm of community members: eri, tannal, and Taym Haddadi. These efforts have already resulted in the closing of several issues:

Note that the safer higher-level constructs that replace low-level SpiderMonkey API calls are still internally tightly coupled to SpiderMonkey. By centralizing these calls, and hiding them from the rest of the codebase, it becomes possible to enumerate what exactly Servo is doing with SpiderMonkey, and to start thinking about a second layer of abstraction: one that would hide the underlying script engine. An existing, and encouraging, example of such a layer comes from React Native in the form of its JavaScript Interface (JSI).

Call to action

If you are interested in contributing to these efforts, the issues below are good places to start:

For more details, read the full report on our wiki.

Don Marti: planning for SCALE 2025

I missed Southern California Linux Expo this year. Normally I can think of a talk to do, but between work and [virus redacted] I didn’t have a lot of conference abstract writing time last fall. I need some new material anyway. The talks that tend to do well for me there are kind of a mix of tips for doing weird stuff.

I didn’t really have anything good to submit last fall, but this year I am building up a bunch of miscellaneous Linux stuff similar to what has worked for me at SCALE before. Because of the big Fediverse trend, the search quality crisis, the ends of third-party cookies and Twitter, and enshittification in general, it seems like there’s a lot more interest in redoing your blog—I know I have been doing it, so that’s what I’m going to see if I can come up with something on for next SCALE. But I’m not going to use a blog software package. I’m more comfortable with a mix of different stuff. This blog is now mainly done in Pandoc, auto-rebuilt by Make, and has a bunch of scripts in various languages, including shell, Perl, Python, and even a little bit of Lua now.
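A minimal sketch of that kind of Pandoc-plus-Make setup (the file layout, target names, and flags here are assumptions for illustration, not this blog’s actual Makefile):

```make
# Hypothetical Makefile: rebuild one HTML page per Markdown source with Pandoc.
# Assumes sources in src/ and output in public/; recipe lines must be tab-indented.
SOURCES := $(wildcard src/*.md)
PAGES   := $(patsubst src/%.md,public/%.html,$(SOURCES))

all: $(PAGES)

public/%.html: src/%.md
	mkdir -p public
	pandoc --standalone --css=style.css --output=$@ $<

.PHONY: all
```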

protip: use cowsay(1) to alert the user to errors in Makefile before restarting

I don’t really expect anybody to copy this blog, more to outdo it by getting the audience to realize how much you can now do with the available tools. I’m not going to win any design prizes, but with modern CSS I can make a reasonable responsive layout and dark/light modes. And yes, you can make a valid RSS feed in GNU Make.

The feature I just did today is the similar posts list in the left column. Remember that paper about how you can measure the similarity between two pieces of text by seeing how well they compress together? (“Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors, ACL Anthology.) This is Python code for rating the similarity of chunks of text. Check it out in the left column, where you can now follow the links to similar blog posts.

import gzip

def z(s):
    return len(gzip.compress(bytes(s, 'utf-8')))

def simscore(t1, t2):
    "lower is better"
    if len(t1) == 0 or len(t2) == 0:
        return 1
    base = z(t1) + z(t2)
    minsize = min(z(' '.join([t1, t2])),
                  z(' '.join([t2, t1])),
                  base)
    return int(10000 * minsize / base)
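A self-contained usage sketch (the two functions are repeated from the snippet above so this runs on its own; the sample texts are made up): two posts on the same topic should score lower, meaning more similar, than an unrelated pair.

```python
import gzip

def z(s):
    # compressed size of a string, in bytes
    return len(gzip.compress(bytes(s, 'utf-8')))

def simscore(t1, t2):
    "lower is better"
    if len(t1) == 0 or len(t2) == 0:
        return 1
    base = z(t1) + z(t2)
    # try both concatenation orders; shared text compresses away
    minsize = min(z(' '.join([t1, t2])), z(' '.join([t2, t1])), base)
    return int(10000 * minsize / base)

# Made-up sample texts: a and b overlap heavily, c is unrelated.
a = "notes on building a blog with pandoc and make " * 4
b = "notes on building a blog with pandoc and make, continued " * 4
c = "a recipe for slow-cooked black bean chili with lime " * 4

print(simscore(a, b))  # related pair: lower score
print(simscore(a, c))  # unrelated pair: higher score
```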

Next I will probably try stuff like Fediverse-powered comments, some kind of search feature, LLM training set poisoning, some privacy and p2p features, and maybe something else. A lot of what I’m doing here will be possible to translate into other environments, and should be portable to people’s favorite blog software.


Am I metal yet? The old blog software

Automatically run make when a file changes

Hey kids, favicon!

Bonus links

Notes on git’s error messages

Verified curl

Ideas for my dream blogging CMS

German state moving 30,000 PCs to LibreOffice

xz, Tidelift, and paying the maintainers

[$] Radicle: peer-to-peer collaboration with Git

An HTML Switch Control

A (tiny, incomplete, single user, write-only) ActivityPub server in PHP

Popular git config options

Don Marti: B L O C K in the U S A

According to a survey done by Censuswide for Ghostery, a majority of Americans now use ad blockers. Yes, it looks like a well-designed survey of 2,000 people. But it’s hard to go from what people say they’re using to figuring out how much protection they really have.

  • Are they answering accurately? People might be under- or over-reporting their use of ad blockers: under-reporting because they don’t want to admit to free-riding on ad-supported sites, or over-reporting because installing an ad blocker is now one of the typical Internet tips you’re supposed to follow, like not re-using passwords and installing software updates when they come out. People might be trying to look more responsible. When the FBI says you should be running an ad blocker to deal with fake search ads, that puts a certain amount of pressure on people. ICYMI: Ad Blockers and the Four Currencies by Lars Doucet.

  • Are they using an honest blocker with real protection? The ad blocking category has a lot of scams, including adware and paid allow-listing, so most of the people saying yes are not getting the blocking they think they are. (The company that owns the number one ad blocker makes a business out of selling exceptions from blocking. Senator Ron Wyden wrote a letter to the FTC asking them to investigate the ad-blocking industry back in 2020, but no action as far as I know. In the meantime you can check your ad blocker using a tool from the EFF.)

  • How much of their browsing is on a protected browser or device? It’s a lot easier to install an ad blocker on desktop than on mobile, and people have different habits.

  • Is protection being circumvented by server-to-server tracking? Ad blocking has been a thing for a long time, so the surveillance industry has gotten pretty good at working around it. Facebook has responded to Apple ATT and to blockage of their tracking pixels by rolling out server-to-server tracking, which avoids any protection on the client. Google and other companies also have server-to-server tracking.

The second most newsworthy part of the new Censuswide survey is why people say they’re using an ad blocker. “Protect online privacy” is now the number one reason, with “block ads” and “speed up page loads” coming in after that. I’ll leave the most newsworthy part to the end. I know, I know, the surveillance advertising people are going to reply with something like: yeah, right, these ad blocker users are just rationalizing free-riding on ad-supported sites, like Napster users making bogus fair use arguments instead of paying for CDs back when that was a thing. In order to understand this survey, we have to put it in context with other research. Compare to Turow et al. on attitudes to cross-context tracking, and to an IAB Europe study that found only 20% of users would be happy for their data to be shared with third parties for advertising purposes.

It looks like the privacy concerns are real for a significant subset of people, and part of the same trend as popular US State Privacy Legislation. Different people have different norms around ad personalization, and if people can’t get companies to comply with those norms they will get the government to do something about it. For companies, adjusting to privacy norms doesn’t just mean throwing privacy-enhancing technologies (PETs) at the problem. Jerath et al. found similar levels of perceived privacy violations for on-device ad personalization as for old-fashioned cookie-based tracking. PETs have different mathematical properties from cookies, but either don’t address other problems or make them worse.

Companies deploying PETs are asking users to trust that they will do complicated math honestly—but they’re not starting from a position of trust. When users have the opportunity to evaluate the companies’ honesty in a way they do understand, the companies don’t measure up. Most people can look at an online map of their neighborhood and spot places where a locksmith isn’t. And it’s easy to look up a person on a social site and see where there are enough profiles that not all of them can be real.

screenshot of several fake Facebook profiles, all using the same two photos of retired US Army General Mark Hertling

The biggest problem with PETs will be that the Big Tech companies do both easy-to-understand activities—like scams, fake profiles, and union busting—and hard-to-understand activities, like PET math. “I see you served me scam ads and a map with fake companies in my neighborhood, but I totally trust your math to protect my privacy,” said no one ever. If you don’t know if the PET math is honest, but you can see the same company acting dishonestly in other ways, then it’s hard to trust the PET math. (Personally I think the actual PETs are probably legit, but they’re being rolled out as part of a larger program to squeeze out legit publishers and other smaller companies.)

In AIC polls, confidence in Amazon, Meta, and Google has fallen since 2018.

(source: How Americans’ confidence in technology firms has dropped: evidence from the second wave of the American Institutional Confidence poll)

Maybe trust issues are behind Censuswide's most newsworthy data point: experienced advertisers (with 5 or more years of experience in advertising) are more likely to run an ad blocker than average (66% vs. 52%). Reminds me of how experienced email users were early adopters of spam filters—the more you know, the more you block. Between sketchy placements, bogus reports, and a "your call is important to us" approach to advertiser support, the advertisers are having a much worse surveillance advertising experience than the rest of us. The Censuswide survey (full report PDF) also shows that more experienced advertisers than ordinary users believe that the Big Tech companies are likely to abuse data. But realistically, who knows if they are or not?

The Tragedy of the Commons is bogus when it comes to actual traditional practices for managing common resources, but it is a thing within large companies. Individual product managers are incentivized to achieve short-term goals either at the expense of other product managers, by dishonest practices that spend down the (common across the whole company) reputation level, or both. For example, within the same large company one business unit can achieve its goals by licensing e-books, while another business unit can achieve its goals by running ads on infringing copies of the same titles. Big Tech fans often ask, if these companies are so distrusted, why do people keep using their products? But another question is, if these companies are so trusted, why do voters keep asking the government to take over managing their products? Privacy settings are hard for users to figure out and easy for companies to override, but a vote for privacy is easier and sticks better. (and possibly the one thing that a bitterly divided nation can agree on)

Doc Searls called ad blocking the biggest boycott in world history back in 2015. Ad blocking looks like a response to creepy practices (or perceived privacy violations, if that works better for you) and those practices are part of a more general scam culture crisis. Tressie McMillan Cottom writes (read the whole thing):

Scams weaken our trust in social institutions, but their going mainstream—divorced from empathy for the victims or stigma for the perpetrators—means that we have accepted scams as institutions themselves.

I can’t see any one big policy solution for surveillance advertising, tech oligopolies, or the broader scam culture problem. All of that stuff would have to change in order to move the ad blocking numbers. It’s going to take a variety of approaches, maybe including a surveillance advertising ban, maybe a Pigovian tax on databases containing PII, maybe breaking up Big Tech firms. So far the most promising approach seems to be state laws with private right of action, which is one of the reasons I’m so optimistic about Washington State’s My Health My Data Act. My experience on a jury (not an ad-related case) was the most productive meeting I have been in since I came to California. If surveillance advertising issues can grind their way through a few jury trials, where lawyers have an incentive to explain what’s going on in an accurate, comprehensible way, then both surveillance marketers and privacy nerds will be able to reset how we approach this stuff based on more common sense.


Reputation, signaling, and targeted ads

banning surveillance advertising

improving web advertising

Bonus links

Why Does Ad Tech Still Fail To Spot – And Stop – MFA-Fueled Schemes?

Flying Under The Radar Is Not A Realistic Compliance Strategy

7 Things You Should Know About California’s Privacy Watchdog

Class-Action Lawsuit against Google’s Incognito Mode

CONFIRMED: Elon Musk’s X lost a HUGE brand safety certification after our complaint

Nubai Ventures Sues Outbrain, Claiming Its Traffic Is Riddled With Bots

Critics of the TikTok Bill Are Missing the Point

EU signals doubts over legality of Meta’s privacy fee

They Praised AI at SXSW—and the Audience Started Booing

AI Is Threatening My Tech and Lifestyle Content Mill

There is no EU cookie banner law

A Close Up Look at the Consumer Data Broker Radaris

The FTC’s PrivacyCon Was Chock-Full Of Warning Signs For Online Advertising

How Google Blew Up Its Open Culture and Compromised Its Product

The Tech Industry Doesn’t Understand Consent

Tracking ads industry faces another body blow in the EU

Privacy Sandbox’s Latency Issues Will Cost Publishers

Reminder – Google is enforcing stricter rules for consumer finance ad targeting

Hacks.Mozilla.OrgPrototype even faster with the Gradio UI for Figma component library

As an industry, generative AI is moving quickly, and so requires teams exploring new ideas and technologies to move quickly as well. To do so, we have been using Gradio, a low-code prototyping toolkit from Hugging Face, to spin up experiments and experiences. Gradio has allowed us to validate concepts through prototyping without large investments of time, effort, or infrastructure.

Although Gradio has made the development phase of prototyping easier, the design phase has been largely the same. Even with Gradio, designers have had to create components in Figma, outline expected user flows and behaviors, and hand off designs for developers in the same way they have always done. While working on a recent exploration, we realized something was needed: a set of Figma components based on Gradio that enabled designers to create wireframes quickly.

Today, we are releasing our library of design components for Gradio for others to use. The components are based on version 4.23.0 of Gradio and will be available through our Figma profile: Mozilla Innovation Projects. We hope these components help teams accelerate their discovery and experimentation with ML and generative AI.

You can find out more about Gradio, and more about innovation at Mozilla, on their respective websites.

Thanks to Amy Chiu and Anais Ron who created the components and to the Gradio team for their work. Happy designing!

What’s Inside Gradio UI for Figma?

Because Gradio is an ever-changing prototyping kit, current components are based on version 4.23.0 of Gradio. We selected components based on their wide array of potential uses. Here is a list of the components inside the kit:

  • Typography (e.g. headers, body fonts)
  • Iconography (e.g. chevrons, arrows, corner expanders) 

Small Components:

  • Buttons
  • Checkbox
  • Radio
  • Sliders
  • Tabs
  • Accordion
  • Delete Button
  • Error Message
  • Media Type Labels
  • Media Player Controller

Big Components:

  • Label + Textbox
  • Accordion with Label + Input
  • Video Player
  • Label + Counter
  • Label + Slider
  • Accordion + Label
  • Checkbox with Label
  • Radio with Label
  • Accordion with Content
  • Accordion with Label + Input
  • Top navigation

How to Access and Use Gradio UI for Figma

To start using the library, follow these simple steps:

  1. Access the Library: Access the component library directly by visiting our public Figma profile or by searching for “Gradio UI for Figma” within the Figma Community section of your web or desktop Figma application.
  2. Explore the Documentation: Familiarize yourself with the components and guidelines to make the most out of your design process.
  3. Connect with Us: Connect with us by following our Figma profile or emailing us.

The post Prototype even faster with the Gradio UI for Figma component library appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: March 2024 Progress Report

Featured graphic for "Thunderbird for Android March 2024 Progress Report" with stylized Thunderbird logo and K-9 Mail Android icon, resembling an envelope with dog ears.

If you’ve been wondering how the work to turn K-9 Mail into Thunderbird for Android is coming along, you’ve found the right place. This blog post contains a report of our development activities in March 2024. 

We’ve published monthly progress reports for a while now. If you’re interested in what happened previously, check out February’s progress report. The report for the preceding month is usually linked in the first section of a post. But you can also browse the Android section of our blog to find progress reports and release announcements.

Fixing bugs

For K-9 Mail, new stable releases typically include a lot of changes. K-9 Mail 6.800 was no exception. That means a lot of opportunities to accidentally introduce new bugs. And while we test the app in several ways – manual tests, automated tests, and via beta releases – there are always some bugs that aren’t caught and make it into a stable version. So we typically spend a couple of weeks after a new major release fixing the bugs reported by our users.

K-9 Mail 6.801

Stop capitalizing email addresses

One of the known bugs was that some software keyboards automatically capitalized words when entering the email address in the first account setup screen. A user opened a bug and provided enough information (❤) for us to reproduce the issue and come up with a fix.
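On Android, this class of bug is usually addressed declaratively: giving the field an email input type tells the soft keyboard to disable auto-capitalization. A hypothetical layout snippet illustrating the idea (not K-9 Mail's actual layout):

```xml
<!-- textEmailAddress disables auto-capitalization and shows an
     email-friendly soft keyboard for this field -->
<EditText
    android:id="@+id/account_email"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="textEmailAddress" />
```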

Line breaks in single line text inputs

At the end of the beta phase a user noticed that K-9 Mail wasn’t able to connect to their email account even though they copy-pasted the correct password to the app. It turned out that the text in the clipboard ended with a line break. The single line text input we use for the password field didn’t automatically strip that line break and didn’t give any visual indication that there was one.

While we knew about this issue, we decided it wasn’t important enough to delay the release of K-9 Mail 6.800. After the release we took some time to fix the problem.
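The fix amounts to normalizing pasted text before it reaches the field. A minimal sketch in Python (K-9 Mail itself is an Android app written in Kotlin/Java, so this illustrates the idea rather than the actual code):

```python
def sanitize_pasted_password(text: str) -> str:
    """Strip line breaks from text pasted into a single-line input.

    Clipboard text copied from a password manager or a web page often
    carries a trailing newline. Remove CR/LF anywhere, but leave spaces
    alone, since a space can be a legitimate part of a password.
    """
    return text.replace("\r", "").replace("\n", "")


# A password copied with a trailing newline now matches what the user typed.
assert sanitize_pasted_password("hunter2\n") == "hunter2"
assert sanitize_pasted_password("pass word") == "pass word"
```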

DNSSEC? Is anyone using that?

When setting up an account, the app attempts to automatically find the server settings for the given email address. One part of this mechanism is looking up the email domain’s MX record. We intended for this lookup to support DNSSEC and specifically looked for a library supporting this.

Thanks to a beta tester we learned that DNSSEC signatures were never checked. The solution turned out to be embarrassingly simple: use the library in a way that it actually validates signatures.

Strange error message on OAuth 2.0 failure

A user in our support forum reported a strange error message (“Cannot serialize abstract class com.fsck.k9.mail.oauth.XOAuth2Response”) when using OAuth 2.0 while adding their email account. Our intention was to display the error message returned by the OAuth server. Instead an internal error occurred. 

We tracked this down to the tool optimizing the app by stripping unused code and resources when building the final APK. The optimizer was removing a bit too much. But once the issue was identified, the fix was simple enough.
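A common remedy for this class of problem is an explicit keep rule so the optimizer leaves reflectively-used classes alone. A hypothetical R8/ProGuard rule for the class named in the error message (the actual fix in K-9 Mail may differ):

```
# Hypothetical rule: keep the OAuth response classes that are
# deserialized reflectively, so R8/ProGuard cannot strip or rename them.
-keep class com.fsck.k9.mail.oauth.** { *; }
```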

Crash when downloading an attachment

Shortly after K-9 Mail 6.800 was made available on Google Play, I checked the list of reported app crashes in the developer console. Not a lot of users had gotten the update yet. So there were only very few reports. One was about a crash that occurred when the progress dialog was displayed while downloading an attachment. 

The crash had been reported before. But the number of crashes never crossed the threshold where we consider a crash important enough to actually look at. 

It turned out that the code contained the bug since it was first added in 2017. It was a race condition that was very timing sensitive. And so it worked fine much more often than it did not. 

The fix was simple enough. So now this bug is history.

Don’t write novels in the subject line

The app was crashing when trying to send a message with a very long subject line (around 1000 characters). This, too, wasn’t a new bug. But the crash occurred rarely enough that we didn’t notice it before.

The bug is fixed now. But it’s still best practice to keep the subject short!
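Very long subjects run into the RFC 5322 rule that each physical header line should stay under 998 characters, so long headers have to be folded across continuation lines. Python's standard library demonstrates the expected handling; this is a sketch of the behavior involved, not K-9 Mail's implementation:

```python
from email.message import EmailMessage

# A very long subject (~1200 characters), like the one that crashed the app.
long_subject = ("status update " * 90).strip()

msg = EmailMessage()
msg["Subject"] = long_subject
msg.set_content("body")

# The default policy folds the Subject header across continuation lines,
# keeping every physical line well within the RFC 5322 length limit.
folded = msg.as_string()
assert all(len(line) <= 998 for line in folded.splitlines())
```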

Work on K-9 Mail 6.802

Even though we fixed quite a few bugs in K-9 Mail 6.801, there’s still more work to do. Besides fixing a couple of minor issues, K-9 Mail 6.802 will include the following changes.

F-Droid metadata

In preparation for building two apps (Thunderbird for Android and K-9 Mail), we moved the app description and screenshots that are used for F-Droid’s app listing to a new location inside our source code repository. We later found out that this new location is not supported by F-Droid, leading to an empty app description on the F-Droid website and inside their app.

We switched to a different approach and hope this will fix the app description once K-9 Mail 6.802 is released.

Push not working due to missing permission

Fresh installs of the app on Android 14 no longer automatically get the permission to schedule exact alarms. But this permission is necessary for Push to work. This was a known issue. But since it only affects new installs and users can manually grant this permission via Android settings, we decided not to delay the stable release until we added a user interface to guide the user through the permission flow.

K-9 Mail 6.802 will include a first step to improve the user experience. If Push is enabled but the permission to schedule exact alarms hasn’t been granted, the app will change the ongoing Push notification to ask the user to grant this permission.

In a future update we’ll expand on that and ask the user to grant the permission before allowing them to enable Push.

What about new features?

Of course we haven’t forgotten about our roadmap. As mentioned in February’s progress report we’ve started work on switching the user interface to use Material 3 and adding/improving Android 14 compatibility.

There’s not much to show yet. Some Material 3 changes have been merged already. But the user interface in our development version is currently very much in a transitional phase.

The Android 14 compatibility changes will be tested in beta versions first, and then back-ported to K-9 Mail 6.8xx.


In March 2024 we published the following stable release:

There hasn’t been a release of a new beta version in March.

The post Thunderbird for Android / K-9 Mail: March 2024 Progress Report appeared first on The Thunderbird Blog.