Mozilla Thunderbird: VIDEO: Talking MZLA with Ryan Sipes

In this month’s Community Office Hours, we’re chatting with our director Ryan Sipes. This talk opens with a brief history of Thunderbird and ends with our plans for its future. In between, we explain more about MZLA and its structure, and how this compares to the Mozilla Foundation and Corporation. We also cover the new Thunderbird Pro and Thundermail announcement. And we talk about how Thunderbird put the fun in fundraising!

And if you’d like to know even more about Pro, next month we’ll be chatting with Services Software Engineer Chris Aquino about our upcoming products. Chris, who most recently has been working on Assist, is both incredibly knowledgeable and a great person to chat with. We think you’ll enjoy the upcoming Community Office Hours as much as we do.

April Office Hours: Thunderbird and MZLA

The beginning is always a very good place to start. We always love hearing Ryan recount Thunderbird’s history, and we hope you do as well. As one of the key figures in bringing Thunderbird back from the ashes, Ryan is the ideal person to discuss how Thunderbird landed at MZLA, its new home since 2020. We also appreciate his perspective on our relationship to (and how we differ from) the Mozilla Foundation and Corporation. And as Thunderbird’s community governance model is both one of its biggest strengths and a significant part of its comeback, Ryan has some valuable insights on our working relationship.

Thunderbird’s future, however, is just as exciting a story as how we got here. Ryan gives us a unique look into some of our recent moves, from the decision to develop mobile apps to the recent move into our own email service, Thundermail, and the Thunderbird Pro suite of productivity apps. From barely surviving, we’re glad to see all the ways in which Thunderbird and its community are thriving.

Watch, Read, and Get Involved

The entire interview with Ryan is below, on YouTube and Peertube. There are a lot of references in the interview, which we’ve handily provided below. We hope you’re enjoying these looks into what we’re doing at Thunderbird as much as we’re enjoying making them, and we’ll see you next month!

VIDEO (Also on Peertube):

Resources

  • The untold history of Thunderbird: https://blog.thunderbird.net/2023/11/the-untold-history-of-thunderbird/
  • The Mozilla Foundation: https://wiki.mozilla.org/Foundation
  • Thunderbird’s New Home at MZLA: https://blog.thunderbird.net/2020/01/thunderbirds-new-home/
  • Community Office Hours with the Thunderbird Council: https://blog.thunderbird.net/2024/09/video-learn-about-the-thunderbird-council/
  • The Mozilla Manifesto: https://www.mozilla.org/about/manifesto/
  • Thundermail and Thunderbird Pro Announcement: https://blog.thunderbird.net/2025/04/thundermail-and-thunderbird-pro-services/
  • Get Involved: https://www.thunderbird.net/participate/

The post VIDEO: Talking MZLA with Ryan Sipes appeared first on The Thunderbird Blog.

Support.Mozilla.Org: Introducing Flavius Floare

Hi folks,

I’m so excited to share that Flavius Floare joined our team recently as a Technical Writer. He’s working alongside Dayani to handle the Knowledge Base articles. Here’s a bit more from Flavius himself:

Hi, everyone. My name is Flavius, and I’m joining the SUMO team as the new Technical Writer. I’m really excited to be here and look forward to collaborating with you. My goal is to be as helpful as possible, so feel free to reach out to me with suggestions or feedback.

Please join me in welcoming Flavius to the team. He will also join our community call this week, so please make sure to join us tomorrow to say hi to him!

Don Marti: reinventing Gosplan

Time for some horseshoe theory. Right-wing surveillance oligarchy has looped all the way back around to left-wing central economic planning.

Cory Doctorow sums up some recent news from Meta, in Pluralistic: Mark Zuckerberg announces mind-control ray (again). Zuck has finally described how he’s going to turn AI’s terrible economics around: he’s going to ask AI to design his advertisers’ campaigns, and these will be so devastatingly effective that advertisers will pay a huge premium to advertise on Meta.

Or, as Nilay Patel at The Verge put it, Mark Zuckerberg just declared war on the entire advertising industry. What Mark is describing here is a vision where a client comes to Meta and says I want customers for my product, and Meta does everything else. It generates photos and videos of those products using AI, writes copy about those products with AI, assembles that into an infinite number of ads with AI, targets those ads to all the people on its platforms with AI, measures which ads perform best and iterates on them with AI, and then has those customers buy the actual products on its platforms using its systems.

But the mind-control ray story, if true, would affect more companies, and functions within companies, than just advertising. Myles Younger writes, Zuck Says AI Will Make Advertising So Good Its Share of GDP Will Grow. Is That Really Possible? In the Meta version of the future, somehow the advertising share of the economy grows to include media, sales, and customer service. And a business that wants to sell a product or service would be able to change the number of units sold with one setting—the amount of money sent to Meta. That means the marketing department within the business can also be dramatically reduced. Or do you even need a marketing department when the one decision it has to make is how much money to send to Meta to move how many units? That could be handled as part of some other job.

TheZvi, in Zuckerberg’s Dystopian AI Vision, writes,

When asked what he wants to use AI for, Zuckerberg’s primary answer is advertising, in particular an ultimate black box where you ask for a business outcome and the AI does what it takes to make that outcome happen. I leave all the do not want and misalignment maximalist goal out of what you are literally calling a black box, film at 11 if you need to watch it again and general dystopian nightmare details as an exercise to the reader.

Such rightsizing, much futuristic! But wait a minute, this has been done. Centrally planned economies are already a thing, and have had well-known challenges for a while. On paper, the central planners decide up front how much of each product or service will be produced and consumed, but in reality the system ends up with everyone faking their numbers and gaming the system. For surveillance capitalism, that’s already happening. A majority of US teens have lost trust in Big Tech, and advertisers are starting to walk away from platforms’ AI solutions that once promised them everything. The top-down AI-driven social media story is less of a nightmare and more just more slop.

Related

Antitrust Policy for the Conservative by FTC Commissioner Mark R. Meador. (This is basically a good memo but is not going to have much impact in a political environment where a powerful monopoly can avoid government action by showing up at Mar-A-Lago to invest in a memecoin or settle a lawsuit. If we get to the point where there is a reasonably powerful honest conservative movement in the USA, then Meador’s work will be useful, probably with not too many updates.)

Bonus links

Even a Broken Clock Can Lower Drug Prices by Joan Westenberg. The CBO has repeatedly found that negotiated drug pricing—including international benchmarking—can save significant amounts of public money.

Crypto Is Still for Criming by Paul Krugman. (Will cryptocurrencies go mainstream, or will they be stuck as just a crime thing? Turns out the answer is both because crime is going mainstream.)

The AI Slop Presidency by Matthew Gault. (This kind of thing is a good reason to avoid generative AI header images in blog posts. The AI look has become the signature style of the pro-oligarch, pro-surveillance side. This is particularly obvious on LinkedIn. An AI-look image tends to mean a growth hacking or pro-Big Tech post, while pro-human-rights or pro-decentralization posters tend to use original graphics or stock photos.)

Monopoly Round-Up: China Is Not Why America Is Sputtering by Matt Stoller. Simply put, modern American law is oriented towards ensuring very high returns on capital to benefit Wall Street and hinder the ability to make things. (fwiw, surveillance capitalism is probably part of the problem too. Creepy negative-sum games to move more units of existing products have higher and more predictable ROI than product innovation does.)

Industry groups are not happy about the imminent demise of Energy Star by Marianne Lavelle. The nonprofit Alliance to Save Energy has estimated that the Energy Star program costs the government about $32 million per year, while saving families more than $40 billion in annual energy costs.

The Mozilla Firefox New Terms of Use Disaster: What Actually Happened? by Youssuff Quips. It is clear that Mozilla wants to be able to unambiguously claim to regulators that people agreed to have their data sold – they want that permission to be persistent, and they want it to be modifiable in perpetuity. That changes what Firefox has been, and the Firefox I loved is gone. (For what it’s worth I don’t think it’s as bad as all that. In a seemingly never-ending quest to get extra income that’s not tied to the Google search deal, Mozilla management has done a variety of stupid shit but they always learn from it and move on. They’ll drop their risky adfraud in the browser thing too at some point. More: why privacy-enhancing advertising technologies failed)

Mozilla Security Blog: Firefox Security Response to pwn2own 2025

At Mozilla, we consider security to be a paramount aspect of the web. This is why Firefox has not only a long-running bug bounty program but also mature release management and security engineering practices. These practices, combined with well-trained and talented Firefox teams, are also the reason why we respond to security bugs as quickly as we do. This week at the security hacking competition pwn2own, security researchers demonstrated two new content-process exploits against Firefox. Neither of the attacks managed to break out of our sandbox, which is required to gain control over the user’s system.

Out of an abundance of caution, we just released new Firefox versions in response to these attacks – all within the same day of the second exploit announcement. The updated versions are Firefox 138.0.4, Firefox ESR 128.10.1, Firefox ESR 115.23.1 and Firefox for Android. Despite the limited impact of these attacks, all users and administrators are advised to update Firefox as soon as possible.

Just last year at the same security event, we responded to an exploitable security bug within 21 hours, earning an award as the fastest to patch. But this year was special: two groups of security researchers signed up to attack Firefox at pwn2own, and we continued the same rapid security response.

Background

Pwn2Own is an annual computer hacking contest where participants aim to find security vulnerabilities in major software such as browsers. This year, the event was held in Berlin, Germany, and a lot of popular software was listed as potential targets for security research. As part of the event preparation, we were informed that Firefox was also listed as a target. But it was not until the day before the event that we learned that not just one but two groups had signed up to demonstrate their work.

Typically, attacking a browser requires a multi-step exploit. First, attackers need to compromise the web browser tab to gain limited control of the user’s system. But due to Firefox’s robust security architecture, another bug (a sandbox escape) is required to break out of the current tab and gain wider system access. Unlike in prior years, neither participating group was able to escape our sandbox this year. We have verbal confirmation that this is attributable to the recent architectural improvements to the Firefox sandbox, which have neutered a wide range of such attacks. This continues to build confidence in Firefox’s strong security posture.

To review and fix the reported exploits, a diverse team of people from across the world, in various roles (engineering, QA, release management, security, and many more), rushed to work. We rapidly tested and released a new version of Firefox for all of our supported platforms, operating systems, and configurations.

Our work does not end here. We continue to use opportunities like this to improve our incident response. We will also continue to study the reports to identify new hardening features and security improvements to keep all of our Firefox users across the globe protected.

Related Resources

If you’re interested in learning more about Mozilla’s security initiatives or Firefox security, here are some resources to help you get started:

Mozilla Security
Mozilla Security Blog
Bug Bounty Program

Furthermore, if you want to kickstart your own security research in Firefox, we invite you to follow our deeply technical blog at Attack & Defense – Firefox Security Internals for Engineers, Researchers, and Bounty Hunters.

The post Firefox Security Response to pwn2own 2025 appeared first on Mozilla Security Blog.

Don Marti: why privacy-enhancing advertising technologies failed

Previously: PET projects or real privacy?

From recent news: Google is reportedly wrapping up work on in-browser privacy-enhancing advertising features. Instead, they’re keeping third-party cookies and even encouraging going back to older user tracking methods like fingerprinting.

Google’s Privacy Sandbox projects were their own special case, and it certainly looks possible that their continued struggles were mostly because of trying to replicate a bunch of anticompetitive tricks from Google’s old ad stack inside the browser. Privacy-enhancing technologies are hard enough without adding in all the anticompetitive stuff too. But by now it looks more and more clear that it wasn’t just a problem with Privacy Sandbox trying to do too much. Most of the hard problems of PETs for advertising are more general. Although in-browser advertising features persist, for practical purposes they’re already dead code. Right now we’re in a period of adjustment, and some of the interesting protocols and code will probably end up being adaptable to other areas, just not advertising.

While PETs for advertising were a bad idea for a lot of reasons, all I’m going to list here are the big problems they couldn’t get over.

PETs without consent didn’t work. The original plan in the early days of Privacy Sandbox was to deploy to users with a simple Got it! dialog. That didn’t work. Regulators in the UK wrote (PDF),

We believe that further user research and testing of the dialogue box, using robust methodologies and a representative sample of users, is critical to resolve these concerns. Also, it is not clear if users will be prompted to revisit their choices, and the frequency of this.

In the real world, PETs will be required to get the same kind of consent that other adtech is. Buried in a clickwrap agreement isn’t going to pass inspection. PETs are catching the same kinds of complaints over lack of consent as any other adtech. And getting consent will be hard, because…

Users are about as creeped out by PETs as by other kinds of tracking. Jereth et al. find that perceived privacy violations for a browser-based system that does not target people individually are similar to the perceived violations for conventional third-party cookies. Co-author Klaus M. Miller presented the research at FTC PrivacyCon (PDF):

So keeping your data safer on your device seems to help in terms of consumer perceptions, but it doesn’t make any difference whether the firm is targeting the consumer at the individual or group level in the perceived privacy perceptions.

Martin et al. find substantial differences between the privacy that users expect and the privacy (ish) features of PETs. In fact, users might actually feel better about old-fashioned web tracking than about the PET kind.

In sum, the use of inferences rather than raw data collected by a primary site is not a privacy solution for users. In most instances, respondents judged the use of raw data such as browsing history, location, search terms, and engagement data to be statistically the same as using inferences based on that same data. Further, for improving services across contexts, consumers judged the use of raw data as more appropriate compared to using inferences based on that same raw data.

PET developers tried to come up with solutions that would work as a default for all web users, but that’s just not realistic considering that the research consistently shows that people are different. About 30% of people prefer cross-context personalized advertising, 30% really don’t want it, and for 40% it depends how you ask. PETs are too lossy for people who want cross-context personalized ads and too creepy for people who don’t. (In addition to this published research, there is also in-house research at a variety of companies, including at some of the companies that had been most enthusiastically promoting PETs.)

PETs never had a credible anti-fraud story. One of the immutable laws of adtech is that you can take any adtech term and put fraud after it, and it’s a thing. PETs are no exception.

  • Anti-fraud lesson of the 1990s: never trust the client

  • Anti-fraud lesson of the 2000s: use machine learning on lots of data to spot patterns of fraud

  • PETs: trust the client to obfuscate the data that your ML would have needed to spot fraud. (how was this even supposed to work?)

If PET developers could count on an overwhelming percentage of users to participate in PETs honestly, then there might not be a problem. A few people would try fraud but they would get lost in the noise created by PET math. But active spoofing of PETs, if they ever caught on, would have the same triad of user motivations that open-source software does: it feels like the right thing to do (since the PETs come from the same big evil companies that people are already protesting), you would have been able to make money doing it, and it’s fun. Any actual data collected by PETs would have been drowned out by fake data generated either on principle, for money, or for lulz.
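To make the spoofing problem concrete, here’s a minimal, hypothetical Python sketch (the noise scheme and all numbers are made up for illustration, not any real PET protocol): a server aggregating noisy client reports has no way to distinguish a spoofed report from an honest one, so a single fake value can swamp the whole aggregate.

```python
import random

random.seed(0)

def honest_report(converted: bool) -> float:
    # An honest client adds local noise to its true 0/1 conversion
    # signal, as a PET-style aggregation scheme would require.
    return float(converted) + random.gauss(0, 0.5)

# 1,000 honest clients with a true ~10% conversion rate.
reports = [honest_report(random.random() < 0.10) for _ in range(1000)]
honest_estimate = sum(reports) / len(reports)

# One spoofing client. Because reports are noisy by design, the server
# cannot validate any single value, so one fake report ruins the total.
reports.append(500.0)
spoofed_estimate = sum(reports) / len(reports)

print(f"honest estimate:  {honest_estimate:.3f}")
print(f"spoofed estimate: {spoofed_estimate:.3f}")
```

Real protocols bound client contributions more carefully than this toy does, but the underlying tension is the same: the obfuscation that protects honest users also shields dishonest ones.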

PETs didn’t change the market. The original, optimistic pitch for PETs was that they would displace other surveillance advertising technologies in marketing budgets and VC portfolios. That didn’t happen. The five-year publicity frenzy around Google’s Privacy Sandbox might actually have had the opposite effect. The project’s limitations, well understood by adtech developers and later summarized in an IAB Tech Lab report, encouraged more investment in the kinds of non-cookie, non-PET tracking methods that Mozilla calls unintended identification techniques.

Just as we didn’t see articles written for end users recommending PETs as a privacy tip—because the privacy they provide isn’t the privacy that users want—we also didn’t see anyone in the advertising business saying they were cutting back on other tracking to do PETs instead. Even Google, which was the biggest proponent of PETs for a while, lifted its 2019 ban on fingerprinting as Privacy Sandbox failed to take off.

PETs would create hard-to-predict antitrust issues. If users are still creeped out by PETs, and advertisers find PET features too limiting, then the designers of PETs must be splitting the difference and doing something right, right? Well, no. PETs aren’t just about users vs. advertisers, they’re about large-scale platforms vs. smaller companies. PETs introduce noise and obfuscation, to make data interpretation only practical above a certain data set size—for a few large companies, or one. Designers of PETs can tune the level of obfuscation introduced to make their systems practical for any desired minimum size of company.
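A toy model of that obfuscation knob, assuming the usual Laplace-noise approach (the specific scale and counts are illustrative, not any particular PET’s spec): the same noise level that makes a small advertiser’s numbers useless is a rounding error for a platform-scale aggregate.

```python
import math
import random

random.seed(1)

NOISE_SCALE = 50.0  # the obfuscation level, tunable by the PET's designer

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(true_count: int) -> float:
    # A PET-style measurement API only ever reports counts with noise added.
    return true_count + laplace_noise(NOISE_SCALE)

def avg_relative_error(true_count: int, trials: int = 2000) -> float:
    errs = [abs(noisy_count(true_count) - true_count) / true_count
            for _ in range(trials)]
    return sum(errs) / trials

small = avg_relative_error(100)        # a small advertiser's campaign
large = avg_relative_error(1_000_000)  # a platform-scale aggregate
print(f"small advertiser: ~{small:.0%} error; large platform: ~{large:.4%} error")
```

With this noise scale, a campaign with 100 conversions sees roughly 50% average error while a million-conversion aggregate sees a negligible fraction of a percent, which is exactly how a designer can tune the minimum company size for which the system is practical.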

The math is complicated enough, and competition regulators have enough on their to-do lists, to make it hard to tell when PET competition issues will come up. But they will eventually.

PETs would have made privacy enforcement harder. This year’s most promising development in privacy news in the USA is the Honda case. Companies that had been getting by with non-compliant opt outs and Right to Know forms are finally fixing their stuff. CCPA+CPRA are progressing to their (intended?) true form, as a kind of RCRA for PII. Back in the 1980s, companies that had a bunch of random hazardous materials around decided that it was easier to safely get rid of them than to deal with RCRA paperwork, and something similar is happening for surveillance marketing today.

PETs would have interfered with this trend by making it harder for researchers to spot problematic data usage practices, and helping algorithmic discrimination to persist.

Conclusion: learning from the rise and fall of PETs. In most cases, there should be little or no shame in chasing a software fad. At their best, hyped-up technologies can open up a stale industry to new people by way of hiring frenzies, and create change that would have been harder to do otherwise. (All right, the cryptocurrency and AI bubbles might be an exception because of the environmental impact, but the PET fad wasn’t that big.) Having been into last year’s trendy thing can feel a little embarrassing, but really, a trend-driven industry has two advantages.

  • A trend can give you a face-saving way to dig up and re-try a previous good project idea that didn’t get funded at the time. (this could still happen with prediction markets)

  • Investing in a trend can be an excuse to fix your dependencies (I once got to work on fixing software builds, making RPMs, and automating a GPL corresponding source release, because Docker containers were a big thing at the time) and produce software that’s useful later (PDF-to-structured-text tools, so hot right now)

In the case of PETs there probably should have been more user research earlier, to understand that the default PETs without consent idea wouldn’t have worked and save development time, but that’s a deeper problem with the relative influence of people who write code and people who do user research within companies, and not just a PET thing.

The development work that went into PETs wasn’t wasted, because PETs are still really promising in other areas, just not advertising. For example, energy markets could benefit from being able to predict demand without revealing when individual utility customers are at home or away. PETs are already valuable for software telemetry—for example, revealing that a certain web page crashed the browser without telling the browser maintainer which users visited which pages—and could end up being more widely used for other products, where the manufacturer and user have a shared interest in facilitating maintenance and improving quality. But advertising is different, mostly because it’s unavoidably adversarial. Every market has honest and dishonest advertisers, and advertising’s main job is to build reputation by doing something that’s practical for a legit advertiser to do and as difficult as possible for a dishonest advertiser. As the shift to a low-trust economy continues, and more software companies see their reputations continue to slide, real ad reform solutions will need to come from somewhere else. More: Sunday Internet optimism
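The telemetry case can be sketched with classic randomized response (an illustrative stand-in here, not necessarily the protocol any real telemetry system uses): no individual report reveals anything definite about that user’s browser, but the aggregate crash rate is still recoverable.

```python
import random

random.seed(2)

def randomized_response(truth: bool) -> bool:
    # Each client flips a coin: heads, report the truth; tails, report
    # the result of a second coin flip. Any single report is deniable.
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

TRUE_CRASH_RATE = 0.02  # illustrative ground truth, unknown to the server
N = 100_000

reports = [randomized_response(random.random() < TRUE_CRASH_RATE)
           for _ in range(N)]

# P(report = yes) = 0.5 * true_rate + 0.25, so the server inverts:
observed_yes = sum(reports) / N
estimated_rate = 2 * (observed_yes - 0.25)
print(f"estimated crash rate: {estimated_rate:.4f}")
```

This works because the browser maintainer and the user want the same thing (fewer crashes), so nobody has an incentive to spoof, which is the key difference from the adversarial advertising case above.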

Bonus links

Time To Get Serious by Brian Jacobs. (A must-follow RSS feed for anyone interested in #adReform.) We have spent decades building up thoughtful measurement systems through collaboration and compromise. And yet we are prepared to believe what suits the largest vendors without question and without any hint of criticism. Indeed, we build what suits them into our thinking, with scant regard as to whether it fits with what we know. We are in a crisis in large part of our own making.

What Do We Do With All This Consumer Rage? by Anne Helen Petersen. As consumers, the globalized marketplace (with a noted assist from venture capital) has taught us to expect and demand levels of seamless service at low prices. But the companies that provide seamless service at low prices often provide lower-quality products and service. Or, now that VC-backed enterprises like Uber and DoorDash have ceased to subsidize the on-demand lifestyle, they provide lower quality products or experiences at higher prices.

The anatomy of Anatomy of Humbug, ten years on by Paul Feldwick. The text of Anatomy emerged (over many years) as my attempt to articulate the unspoken assumptions that underlay the way we made advertising, in the thirty years I worked at a successful agency. It seemed to me that the theories we all uncritically believed fitted rather badly with the kind of advertising we produced, and, more worryingly, with the kind of advertising that we increasingly knew worked best. We agonised over single-minded propositions and consumer benefits; then we created singing polar bears, comic Yorkshiremen, and laughing aliens, and the public loved them. Something didn’t quite make sense.

Rage of the Oligarchs by Naomi Klein: ’What They Want Is Absolutely Everything’. (I’m more worried about evil oligarchs who know how to hire and use a good PR team than I am about the guys who are willing to be the main character on Twitter or whatever, but maybe that’s just me.)

Costco’s Kirkland brand is bigger than Nike—and it’s about to get even bigger by Rob Walker. Like all private labels, it competes with brand-name consumer products largely on price—an obvious advantage in belt-tightening times. But Kirkland is also the rare private label that’s developed its own powerful, and surprisingly elastic, brand identity. (Kirkland might be a success story for brand building by investing in measurable improvements. There is a content niche for posts like What to Buy from Costco & What to Avoid, which means an opportunity for Costco in offering low-overhead bargains and leaving it to independent content creators to get the word out.)

The Mozilla Blog: The future of the web depends on getting this right

The remedies phase of the U.S. v. Google LLC search case wrapped up last week. As the Court weighs how to restore competition in the search market, Mozilla is asking it to seriously consider the unintended consequences of some of the proposed remedies, which, if adopted, could harm browser competition, weaken user choice and undermine the open web.

Mozilla has long supported competition interventions in tech markets. Recent highlights include campaigns to pass the American Innovation and Choice Online Act, reports detailing the operating system power wielded by Apple, Google and Microsoft (among others), and detailed research into remedy design on Android and Windows to support the enforcement of the EU Digital Markets Act.

In relation to the Google Search case, our message is simple: search competition must improve, but this can be done without harming browser competition.

As the maker of Firefox and Gecko, the only major browser engine left competing with Big Tech, we know what it means to fight for privacy, innovation and real choice online. That is why we have filed an amicus brief, urging the Court not to prohibit Google from making search revenue payments to independent browsers (i.e., browser developers that do not provide desktop or mobile devices or operating systems). Such a ban would destroy valuable competition in browsers and browser engines by crippling their ability to innovate and serve users in these fundamentally important areas. As explained in our amicus brief:

  • Mozilla has spent over two decades fighting for an open and healthy internet ecosystem. Through developing open source products, advancing better web standards, and advocating for competition and user choice, Mozilla has tangibly improved privacy, security, and choice online. Much of this work is funded by Firefox’s search revenue and implemented in Gecko—the last remaining cross-platform browser engine challenger to Google’s Chromium.   
  • Firefox offers unparalleled search choice. Mozilla has tried alternatives (like Yahoo! in 2014–2017) and knows that Google Search is the preferred option of Firefox users. While Google provides the default search engine, Firefox offers multiple, dynamic ways for people to change their search engine.
  • Banning search payments to independent browsers would threaten the survival of Firefox and Gecko. The Court previously recognized that Mozilla depends on revenue share payments from Google. This was underlined by testimony the Court heard from Eric Muhlheim, Mozilla’s CFO. Eric explained how complex and expensive it is to maintain Firefox and Gecko and why switching to another search provider would result in a “precipitous” decline in revenue. Undermining Mozilla’s ability to fund this work risks handing control of the web to Apple and Google and further entrenching the power of the largest tech companies.
  • Banning search payments to independent browsers would not improve search competition. Independent browsers play an important role in the ecosystem, far beyond their market share. The Court previously found that they account for 2.3% of US search traffic covered by Google’s contracts. As a result, the DOJ’s expert calculated that banning payments to independent browsers would shift only 0.6% of Google’s current market share to another search engine. This is not a prize worth destroying browser competition for.

At Mozilla, we believe that a more tailored approach to the remedies is absolutely critical. The Court should permit independent browsers like Firefox to continue to receive revenue share payments from Google to avoid further harm to competition. This would be consistent with the approach of other jurisdictions that have sought to improve search competition and would not undermine the effectiveness of any remedies the court orders.

To learn more about Mozilla’s position and why we’re urging the Court to carefully consider the unintended consequences of these proposed remedies, read our full amicus brief.

The post The future of the web depends on getting this right appeared first on The Mozilla Blog.

Niko Matsakis: Rust turns 10

Today is the 10th anniversary of Rust’s 1.0 release. Pretty wild. As part of RustWeek there was a fantastic celebration and I had the honor of giving some remarks, both as a long-time project member but also as representing Amazon as a sponsor. I decided to post those remarks here on the blog.

“It’s really quite amazing to see how far Rust has come. If I can take a moment to put on my sponsor hat, I’ve been at Amazon since 2021 now and I have to say, it’s been really cool to see the impact that Rust is having there up close and personal.

“At this point, if you use an AWS service, you are almost certainly using something built in Rust. And how many of you watch videos on PrimeVideo? You’re watching videos on a Rust client, compiled to WebAssembly, and shipped to your device.

“And of course it’s not just Amazon, it seems like all the time I’m finding out about this or that surprising place that Rust is being used. Just yesterday I really enjoyed hearing about how Rust was being used to build out the software for tabulating votes in the Netherlands elections. Love it.

“On Tuesday, Matthias Endler and I did this live podcast recording. He asked me a question that has been rattling in my brain ever since, which was, ‘What was it like to work with Graydon?’

“For those who don’t know, Graydon Hoare is of course Rust’s legendary founder. He was also the creator of Monotone, which, along with systems like Git and Mercurial, was one of the crop of distributed source control systems that flowered in the early 2000s. So definitely someone who has had an impact over the years.

“Anyway, I was thinking that, of all the things Graydon did, by far the most impactful one is that he articulated the right visions. And really, that’s the most important thing you can ask of a leader, that they set the right north star. For Rust, of course, I mean first and foremost the goal of creating ‘a systems programming language that won’t eat your laundry’.

“The specifics of Rust have changed a LOT over the years, but the GOAL has stayed exactly the same. We wanted to replicate that productive, awesome feeling you get when using a language like Ocaml – but be able to build things like web browsers and kernels. ‘Yes, we can have nice things’, is how I often think of it. I like that saying also because I think it captures something else about Rust, which is trying to defy the ‘common wisdom’ about what the tradeoffs have to be.

“But there’s another North Star that I’m grateful to Graydon for. From the beginning, he recognized the importance of building the right culture around the language, one committed to ‘providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, disability, nationality, or other similar characteristic’, one where being ‘kind and courteous’ was prioritized, and one that recognized ‘there is seldom a right answer’ – that ‘people have differences of opinion’ and that ‘every design or implementation choice carries a trade-off’.

“Some of you will probably have recognized that all of these phrases are taken straight from Rust’s Code of Conduct which, to my knowledge, was written by Graydon. I’ve always liked it because it covers not only treating people in a respectful way – something which really ought to be table stakes for any group, in my opinion – but also things more specific to a software project, like the recognition of design trade-offs.

“Anyway, so thanks Graydon, for giving Rust a solid set of north stars to live up to. Not to mention for the fn keyword. Raise your glass!

“For myself, a big part of what drew me to Rust was the chance to work in a truly open-source fashion. I had done a bit of open source contribution – I wrote an extension to the ASM bytecode library, I worked some on PyPy, a really cool Python compiler – and I loved that feeling of collaboration.

“I think at this point I’ve come to see both the pros and cons of open source – and I can say for certain that Rust would never be the language it is if it had been built in a closed source fashion. Our North Star may not have changed but oh my gosh the path we took to get there has changed a LOT. So many of the great ideas in Rust came not from the core team but from users hitting limits, or from one-off suggestions on IRC or Discord or Zulip or whatever chat forum we were using at that particular time.

“I wanted to sit down and try to cite a bunch of examples of influential people but I quickly found the list was getting ridiculously long – do we go all the way back, like the way Brian Anderson built out the #[test] infrastructure as a kind of quick hack, but one that lasts to this day? Do we cite folks like Sophia Turner and Esteban Kuber for their work on error messages? Or do we look at the many people stretching the definition of what Rust is today… the reality is, once you start, you just can’t stop.

“So instead I want to share what I consider to be an amusing story, one that is very Rust somehow. Some of you may have heard that in 2024 the ACM, the major academic organization for computer science, awarded their SIGPLAN Software Award to Rust. A big honor, to be sure. But it caused us a bit of a problem – what names should be on there? One of the organizers emailed me, Graydon, and a few other long-time contributors to ask us our opinion. And what do you think happened? Of course, we couldn’t decide. We kept coming up with different sets of people, some of them absurdly large – like thousands of names – others absurdly short, like none at all. Eventually we kicked it over to the Rust Leadership Council to decide. Thankfully they came up with a decent list somehow.

“In any case, I just felt that was the most Rust of all problems: having great success but not being able to decide who should take credit. The reality is there is no perfect list – every single person who got named on that award richly deserves it, but so do a bunch of people who aren’t on the list. That’s why the list ends with All Rust Contributors, Past and Present – and so a big shout out to everyone involved, covering the compiler, the tooling, cargo, rustfmt, clippy, core libraries, and of course organizational work. On that note, hats off to Mara, Erik Jonkers, and the RustNL team that put on this great event. You all are what makes Rust what it is.

“Speaking for myself, I think Rust’s penchant to re-imagine itself, while staying true to that original north star, is the thing I love the most. ‘Stability without stagnation’ is our most important value. The way I see it, as soon as a language stops evolving, it starts to die. Myself, I look forward to Rust getting to a ripe old age, interoperating with its newer siblings and its older aunts and uncles, part of the ‘cool kids club’ of widely used programming languages for years to come. And hey, maybe we’ll be the cool older relative some day, the one who works in a bank but, when you talk to them, you find out they were a rock-and-roll star back in the day.

“But I get ahead of myself. Before Rust can get there, I still think we’ve some work to do. And on that note I want to say one other thing – for those of us who work on Rust itself, we spend a lot of time looking at the things that are wrong – the bugs that haven’t been fixed, the parts of Rust that feel unergonomic and awkward, the RFC threads that seem to just keep going and going, whatever it is. Sometimes it feels like that’s ALL Rust is – a stream of problems and things not working right.

“I’ve found there’s really only one antidote, which is getting out and talking to Rust users – and conferences are one of the best ways to do that. That’s when you realize that Rust really is something special. So I do want to take a moment to thank all of you Rust users who are here today. It’s really awesome to see the things you all are building with Rust and to remember that, in the end, this is what it’s all about: empowering people to build, and rebuild, the foundational software we use every day. Or just to ‘hack without fear’, as Felix Klock legendarily put it.

“So yeah, to hacking!”

Mozilla ThunderbirdThunderbird for Mobile April 2025 Progress Report

Here is an update on what Thunderbird’s mobile community has been up to in April 2025. With a new team member, we’re getting Thunderbird for iOS out in the open and continuing to work on release feedback from Thunderbird for Android.

The Team is Growing

Last month we introduced Todd and Ashley to the MZLA mobile team, and now we have another new face on the team! Rafael Tonholo joins us as a Senior Android Engineer to focus on Thunderbird for Android. He also has extensive experience with Kotlin Multiplatform, which will benefit Thunderbird for iOS as well.

Thunderbird for iOS

We’ve published the initial repository of Thunderbird for iOS! The application doesn’t do much at the moment, since we intend to work very incrementally and start in the open. You’ll see a familiar welcome screen, slightly nicer than Thunderbird for Android’s, and have the opportunity to make a financial contribution.

Testflight Distribution

We’re planning to distribute Thunderbird for iOS through TestFlight. To support that, we’ve set up an Apple Developer account and completed the required verification steps.

Unlike Android, where we maintain separate release and beta versions, the iOS App Store will have a single “Thunderbird” app. Apple prefers not to list beta versions as separate apps, and their review process tends to be stricter. Once the main app is published, we’ll be able to use TestFlight to offer a beta channel.

Before the App Store listing goes live, we’ll use TestFlight to distribute our builds. Apple provides an internal TestFlight option that doesn’t require a review, but it only works if testers have access to the developer account. That makes it unsuitable for community testing.

Initial Features for the Public Testflight Alpha

To share a public TestFlight link, we need to pass an initial App Store review. Apple expects apps to meet a minimum bar for functionality, so we can’t publish something like a simple welcome screen. Our goal for the first public TestFlight build is to support manual account setup and display emails in the inbox. Here are the specifics:

  • Initial account setup will be manual with hostname/username/password.
  • There will be a simple message list showing only messages from the INBOX folder, with sender, subject, and maybe 2–3 preview lines.
  • You’ll be able to pull to refresh your inbox.

That is certainly not what you’d call a fully functional email client, but it should meet the bare minimum functionality required for the Apple review. We have more details and a feature comparison in this document.

In other exciting news, we’re going to build Thunderbird for iOS with JMAP support first and foremost. While support on the email provider side is limited, starting with JMAP gives us a modern email stack. This will allow us to build toward some of the features that email from the late ’80s was missing. We’re designing the code architecture so that adding IMAP support is simple, and it will ideally follow soon after.
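For context on what a “modern email stack” means here: JMAP (RFC 8620/8621) is JSON over HTTP, where a client POSTs a batch of method calls to the server. Below is a rough, illustrative sketch of such a request body, using only the Rust standard library; the function name, account id, and mailbox id are made up, and a real client would use a JSON library plus the server’s session object.

```rust
// Build an illustrative JMAP request body that asks for the newest
// messages in one mailbox (an "Email/query" method call, RFC 8621).
// Everything here is a hand-rolled sketch, not Thunderbird's code.
fn jmap_query_inbox(account_id: &str, mailbox_id: &str) -> String {
    format!(
        concat!(
            "{{\"using\":[\"urn:ietf:params:jmap:core\",",
            "\"urn:ietf:params:jmap:mail\"],",
            "\"methodCalls\":[[\"Email/query\",",
            "{{\"accountId\":\"{}\",",
            "\"filter\":{{\"inMailbox\":\"{}\"}},",
            "\"limit\":20}},\"c0\"]]}}"
        ),
        account_id, mailbox_id
    )
}

fn main() {
    let body = jmap_query_inbox("a1", "inbox-id");
    // The request names the core + mail capabilities and one method call.
    assert!(body.contains("\"Email/query\""));
    assert!(body.contains("urn:ietf:params:jmap:mail"));
    println!("{body}");
}
```

Every JMAP operation (mailboxes, threads, submission) uses this same uniform method-call shape, which is part of what makes it simpler to build on than IMAP’s stateful text protocol.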

iOS Release Engineering and Localization

We’ve also gone through a few initial conversations on what the release workflow might look like. We’re currently deciding between:

  • GitHub Actions with Upload Actions (Pro: very open, re-use of some work on the Thunderbird for Android side. Con: Custom work, not many well-supported upload actions)
  • GitHub Actions with Fastlane (Pro: very open, well-supported, uses the same listing metadata structure we already have on Android. Con: Ruby as yet another language, no prior releng work)
  • Xcode Cloud (Pro: built in to Xcode, easy to configure, we’ll probably get by with the free tier for quite some time. Con: Not very open, increasing build cost)
  • Bitrise (Pro: Easy to configure, used by Firefox for iOS, we’ll get some support from Mozilla on this. Con: Can be pricy, not very open)

For now, our release process is pressing a button every once in a while. Xcode makes this very easy, which gives us more time to plan a long-term release engineering solution.

For localization, we’re aiming to use Weblate, just as with Thunderbird for Android. The strings will mostly be the same, so we don’t need to ask our localizers to do double work.

Thunderbird for Android

We’re still focusing on release feedback by working on the drawer and looking to improve stability. April has very much been focused on onboarding the new team. I’ll keep the updates in this section a bit more brief, as we have less to explore and more to fix 🙂

  • We’ve accepted a new ADR to change the shared modules package from app.k9mail and com.fsck to net.thunderbird. We’ll be doing this gradually as we migrate legacy code.
  • Ashley has fixed a few keyboard accessibility issues to get started. She has also resolved a crash related to duplicate folder ids in the drawer. Her next projects are improving our sync debug tooling and other projects to resolve stability issues in retrieving emails.
  • Clément Rivière added initial support for showing hierarchical folders. The work is behind a feature flag for now, as we need to do some additional refactoring and crash fixes before we can release it. You can however try it out on the beta channel.
  • Fishkin removed a deprecated progress indicator, which provides slightly better support for Android watches.
  • Rafael fixed an issue related to Outlook/Microsoft accounts. If you have received the “Authentication Unsuccessful” message in the past, please try again on our beta channel.
  • Shamim continues on his path to refactor and move over some of our legacy code into the new modular structure. He also added support to attach files from the camera, and has resolved an issue in the drawer where the wrong folder was selected.
  • Timur Erofeev added support for algorithmic darkening where supported. This makes dark mode work better for a wider range of emails, following the same method that is used on web pages.
  • Wolf has been working diligently to improve our settings and drawer infrastructure. He took a number of much needed detours to refactor legacy code, which will make future work easier. Most notably, we have a new settings system based on Jetpack Compose, to which we will eventually migrate all the settings screens.

That’s a wrap for April! Let us know if you have comments, or see opportunities to help out. See you soon!

The post Thunderbird for Mobile April 2025 Progress Report appeared first on The Thunderbird Blog.

The Rust Programming Language BlogAnnouncing Rust 1.87.0 and ten years of Rust!

Live from the 10 Years of Rust celebration in Utrecht, Netherlands, the Rust team is happy to announce a new version of Rust, 1.87.0!

picture of Rustaceans at the release party

Today's release day happens to fall exactly on the 10-year anniversary of Rust 1.0!

Thank you to the myriad contributors who have worked on Rust, past and present. Here's to many more decades of Rust! 🎉


As usual, the new version includes all the changes that have been part of the beta version in the past six weeks, in keeping with the regular release cycle we have maintained since Rust 1.0.

If you have a previous version of Rust installed via rustup, you can get 1.87.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.87.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.87.0 stable

Anonymous pipes

1.87 adds access to anonymous pipes to the standard library. This includes integration with std::process::Command's input/output methods. For example, joining the stdout and stderr streams into one is now relatively straightforward, as shown below, while it used to require either extra threads or platform-specific functions.

use std::io::Read;
use std::process::Command;

fn main() -> std::io::Result<()> {
    let (mut recv, send) = std::io::pipe()?;

    let mut command = Command::new("path/to/bin")
        // Both stdout and stderr will write to the same pipe, combining the two.
        .stdout(send.try_clone()?)
        .stderr(send)
        .spawn()?;

    let mut output = Vec::new();
    recv.read_to_end(&mut output)?;

    // It's important that we read from the pipe before the process exits, to avoid
    // filling the OS buffers if the program emits too much output.
    assert!(command.wait()?.success());
    Ok(())
}

Safe architecture intrinsics

Most std::arch intrinsics that are unsafe only due to requiring target features to be enabled are now callable in safe code that has those features enabled. For example, the following toy program which implements summing an array using manual intrinsics can now use safe code for the core loop.

#![forbid(unsafe_op_in_unsafe_fn)]

use std::arch::x86_64::*;

fn sum(slice: &[u32]) -> u32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: We have detected the feature is enabled at runtime,
            // so it's safe to call this function.
            return unsafe { sum_avx2(slice) };
        }
    }

    slice.iter().sum()
}

#[target_feature(enable = "avx2")]
#[cfg(target_arch = "x86_64")]
fn sum_avx2(slice: &[u32]) -> u32 {
    // SAFETY: __m256i and u32 have the same validity.
    let (prefix, middle, tail) = unsafe { slice.align_to::<__m256i>() };
    
    let mut sum = prefix.iter().sum::<u32>();
    sum += tail.iter().sum::<u32>();
    
    // Core loop is now fully safe code in 1.87, because the intrinsics require
    // matching target features (avx2) to the function definition.
    let mut base = _mm256_setzero_si256();
    for e in middle.iter() {
        base = _mm256_add_epi32(base, *e);
    }
    
    // SAFETY: __m256i and u32 have the same validity.
    let base: [u32; 8] = unsafe { std::mem::transmute(base) };
    sum += base.iter().sum::<u32>();
    
    sum
}

asm! jumps to Rust code

Inline assembly (asm!) can now jump to labeled blocks within Rust code. This enables more flexible low-level programming, such as implementing optimized control flow in OS kernels or interacting with hardware more efficiently.

use std::arch::asm;

fn main() {
    unsafe {
        asm!(
            "jmp {}",
            label {
                println!("Jumped from asm!");
            }
        );
    }
}

For more details, please consult the reference.

Precise capturing (+ use<...>) in impl Trait in trait definitions

This release stabilizes the ability to specify which generic types and lifetimes are captured by impl Trait return types in trait definitions, expanding on the stabilization for non-trait functions in 1.82.

Some example desugarings:

trait Foo {
    fn method<'a>(&'a self) -> impl Sized;
    
    // ... desugars to something like:
    type Implicit1<'a>: Sized;
    fn method_desugared<'a>(&'a self) -> Self::Implicit1<'a>;
    
    // ... whereas with precise capturing ...
    fn precise<'a>(&'a self) -> impl Sized + use<Self>;
    
    // ... desugars to something like:
    type Implicit2: Sized;
    fn precise_desugared<'a>(&'a self) -> Self::Implicit2;
}
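Beyond the desugaring, the practical effect of + use<...> is controlling what the opaque type is allowed to borrow. Here is a minimal free-function sketch (precise capturing for non-trait functions has been stable since 1.82; the function name is illustrative):

```rust
// `use<>` captures no generics, so the returned iterator may not
// borrow from `data`; it must own everything it yields.
fn no_capture<'a>(data: &'a [u32]) -> impl Iterator<Item = u32> + use<> {
    data.to_vec().into_iter()
}

fn main() {
    let iter = {
        let local = vec![1, 2, 3];
        // Fine: the returned iterator holds no borrow of `local`,
        // so it can outlive this block.
        no_capture(&local)
    };
    assert_eq!(iter.sum::<u32>(), 6);
}
```

In the 2024 edition, impl Trait return types capture all in-scope lifetimes by default; use<> opts out, which is what lets the iterator above escape the block that owns local.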

Stabilized APIs

These previously stable APIs are now stable in const contexts:

i586-pc-windows-msvc target removal

The Tier 2 target i586-pc-windows-msvc has been removed. Its difference from the much more popular Tier 1 target i686-pc-windows-msvc is that it does not require SSE2 instruction support. But Windows 10, the minimum required OS version for all Windows targets (except the win7 targets), itself requires SSE2 instructions.

All users currently targeting i586-pc-windows-msvc should migrate to i686-pc-windows-msvc.

You can check the Major Change Proposal for more information.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.87.0

Many people came together to create Rust 1.87.0. We couldn't have done it without all of you. Thanks!

This Week In RustThis Week in Rust 599

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is brush, a bash compatible shell implemented completely in Rust.

Thanks to Josh Triplett for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

397 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Lots of changes this week. The overall result is positive, with one large win in type check.

Triage done by @panstromek. Revision range: 62c5f58f..718ddf66

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.5%   [0.2%, 1.4%]      113
Regressions ❌ (secondary)    0.5%   [0.1%, 1.5%]      54
Improvements ✅ (primary)    -2.5%   [-22.5%, -0.3%]   45
Improvements ✅ (secondary)  -0.9%   [-2.3%, -0.2%]    10
All ❌✅ (primary)            -0.3%   [-22.5%, 1.4%]    158

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Reference, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-05-14 - 2025-06-11 🦀

Virtual
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If a Pin drops in a room, and nobody around understands it, does it make an unsound? #rustlang

Josh Triplett on fedi

Thanks to Josh Triplett for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog‘Shifting left’ for better accessibility in Firefox

Illustration showing accessibility features: a microphone icon for voice input, "Aa" for text size, a hand tapping gesture, and an eye icon for visual settings.

As a product manager for Firefox, one of the areas I’m most passionate about is accessibility. This is not only because I’m a disabled person myself, but also because I’ve seen firsthand that building in accessibility from the beginning results in better outcomes for everyone. Our new profile management feature is a great example of this approach.

Shifting left means building accessibility in from the start

If you picture the product development process as a horizontal line, with “user research” on the extreme left and “launch to market” on the extreme right, accessibility tends to fall on the right side of the line. On the right side of the line, we are reactive: the product is already built for the needs of non-disabled users, so we’re just checking it for accessibility bugs. On the right side of the line, it’s often too late or very expensive to fix accessibility bugs, so they don’t get fixed. On the right side of the line, the best we can hope for is accessibility compliance with an industry standard like WCAG. On the right side of the line, we are more likely to build something unusable – even if we checked all the accessibility compliance boxes. 

So how do we ensure that accessibility moves to the other end of the line, the left side? One of the most powerful ways to “shift left” is to include disabled people in the process as early as possible. On the left side of the line, we become proactive: we build products with disabled folks, not for them. On the left side of the line, we prevent accessibility bugs from ever happening because we spot them in the designs. On the left side of the line, we have a chance to go beyond compliance and achieve accessibility delight. On the left side of the line, working together, we have a better chance to discover curb cut effects: solutions designed with people with disabilities that end up benefitting everyone.

How Firefox profiles shifted left

Firefox is not always on the left side of the line, but we’ve been working hard over the last couple years to “shift left.” 

A Firefox browser window labelled “Choose a Firefox profile” with options to select a green “work” profile with a briefcase avatar or a lavender “personal” profile with a flower avatar, create a new profile, or set a specific profile when Firefox opens.

I’m a proudly disabled university student who works full time and is passionate about rowing and musical theater. I made four profiles: medical, school, work and personal. Each profile has its own unique avatar, color theme and name so I can easily recognize and switch between them in one click. I especially love that browsing history, bookmarks and tabs no longer intermix. I’m now much less likely to accidentally share my health information with my professors or my strategic work plans with fellow Sondheim nerds.

Throughout this project, we partnered with disabled folks to aim for accessibility compliance and, more importantly, delight. They gave us valuable feedback from our very first user research studies and continue to do so. 

One group dreamed up brand new ideas and suggested enhancements during an in-depth review of an early prototype (including an awesome curb-cut effect we hope to share with you later this year). Testers who are experts in assistive tech (AT) pinpointed areas where we still needed to improve. 

This truly was a community effort. We learned a lot, and we have more work to do.

Try profiles now and help shape what’s next

While we’d love to make it available to everyone immediately, profile management is more complex than it probably appears: It’s built on core Firefox code, and it interacts with and affects several other features and essential systems. To ensure Firefox and the profile management feature remain stable and compatible, we need to continue our incremental rollout for now.

In the meantime, we’d love for you to use profile management on Nightly and Beta, where it’s on by default for everyone, then share your thoughts in this thread on Mozilla Connect, our forum for community feedback and ideas. You’ll help us validate fixes and catch new bugs, as well as get early access to new features and enhancements. 

At least 29% of the population is disabled, which means many of you have the insight and lived experience to help Firefox “shift left” on accessibility. That collaboration is already shaping a better browser — and a better web.

Get the browser that puts your privacy first – and always has

Download Firefox

The post ‘Shifting left’ for better accessibility in Firefox appeared first on The Mozilla Blog.

The Mozilla BlogJump into Firefox Labs: A place to try new features and help shape Firefox

Ever thought, “I wish I could try that new Firefox feature early?” Good news – we’ve been trying out new features and now, you can try them out, too.

Firefox Labs is our space for sharing experimental features with our community before they’re fully baked. It’s a chance to play around with new ideas, tell us what’s working (and what’s not) and help shape the future of Firefox together.

Early access to what we’re building

Firefox Labs is built on a simple idea: If we’re building for Firefox users, we should be building with them, too.

“We created Firefox Labs to get features into users’ hands earlier,” said Karen Kim, senior product manager at Mozilla. “It’s a safe space where people can turn things on, play around, and help us learn faster.”

In the past, testing out new ideas usually meant downloading special builds like Nightly or digging into advanced settings. That’s not for everyone. Firefox Labs makes it way easier — just head to your Firefox settings, flip a switch, and try something new.

It’s inspired by our old Test Pilot program (shoutout to longtime Firefox fans!), which helped launch popular features like picture-in-picture. Firefox Labs carries that same spirit — but with a closer connection to the people using Firefox today.

Try these Firefox Labs features now 

We’ve got a couple of features live in Firefox Labs that you can try today:

🎨 Custom wallpapers for new tab

Inspired by your feedback, you can now upload your own image or choose from a set of new wallpapers and colors to customize your Firefox home screen.

Click on choose a custom wallpaper or color for New Tab

“You can choose your own color — go bold, go subtle, it’s completely up to you,” said Amber Meryman, product manager for the New Tab team. “We’ve added a new celestial category, plus even more images across all your favorite themes. These new wallpapers are all about making Firefox feel more like you.”

Pet photos, space scenes, whatever you’re into – the choice is up to you.

🔍 Link previews

Not sure if that link is worth clicking? Link previews give you a quick snapshot of what’s behind a link — so you can decide if it’s relevant before opening a new tab.

“Link previews are about saving time and reducing clutter,” said Joy Chen, who works on Firefox’s AI Experiences team. “When you’re scanning a lot of content, it’s easy to feel overwhelmed. Link Previews helps you quickly assess what’s most relevant to you, so you can browse and learn more efficiently.”

The team is already seeing valuable feedback in Firefox Labs, from shortcut suggestions to content quality questions. 

“All of it helps — even critical feedback gives us a clearer picture of how people might use or feel about these tools,” Joy said.

Link previews are especially handy for staying focused while doing research, browsing news, or avoiding tab overload.

How to share feedback (yes, we’re listening)

Each experiment includes a link to Mozilla Connect — our community hub for feedback, suggestions, and discussion. If you sign in or create an account, it’s where you can:

  • Share what you love (or what’s confusing)
  • Suggest improvements
  • See what others are saying
  • Help guide what we build next
  • Hear directly from product teams and engineers who regularly jump into the conversation

How to get started with Firefox Labs

First, check to make sure you’re using the latest version of Firefox. Then:

  • Go to Settings > Firefox Labs (it only shows up if a feature is available).
  • Turn on a feature and give it a try.
  • Head to Connect to share your thoughts!

Your ideas help shape Firefox. Many features, like custom wallpapers, got their start from community posts. Your idea could be next: head to Mozilla Connect.

So whether you want to test new features, share your thoughts, or just peek at what’s coming, Firefox Labs is your front-row seat to the future of Firefox.

Update: The post was revised on May 14 to clarify a quote about link previews.

Get the browser that puts your privacy first – and always has

Download Firefox

The post Jump into Firefox Labs: A place to try new features and help shape Firefox appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Monthly Developer Digest – April 2025

Hello from the Thunderbird development team! With some of our time spent onboarding new team members and interviewing for open positions, April was a fun and productive month. Our team grew and we were amazed at how smooth the onboarding process has been, with many contributions already boosting the team’s output.

Gearing up for our annual Extended Support Release 

We have now officially entered the release cycle that will become our annual “ESR” at the end of June. The code we’re writing, the features we’re adding, and the bugs we’re fixing at the moment should all make their way into the next major update, to be enjoyed by millions of users. This most stable release is used by enterprises, governments, and institutions that have specific requirements around consistency, long-term support, and minimized change over time.

If waiting a whole year doesn’t sound appealing to you, our Monthly release may be better suited. It offers access to the latest features, improvements, and fixes as soon as they’re ready. Watch out for an in-app invitation to upgrade or install over ESR to retain your profile settings.

Calendar UI Rebuild

The implementation of the new event dialog hit some challenges in April: dialog positioning and its associated tests caused more than a few headaches when our CI started reporting test failures that were not easy to debug. Not surprising, given the 60,000 tests which run for this one patch alone!

The focus on loading data into the various containers continues, so that we can enable this feature and begin the QA process.

Keep track of feature delivery via the [meta] bug 

Exchange Web Services support in Rust

Our 0.2 release will make it into the hands of Daily and QA testers this month, with only a handful of smaller items left in our current milestone, before the “polish” milestone begins. The following items were completed in April:

  • Connectivity check for EWS accounts
  • Threading support
  • Folder updates & deletions in sync
  • Folder cache cleanup
  • Folder copy/move
  • Bug fixes!

Our hope is to deliver this feature set to users on beta and monthly releases in 140 or 141.

Keep track of feature delivery here.

Account Hub

The new email account feature was “preffed on” as the default experience for the Daily build, but recent changes to our OAuth process have required some rework to this user experience. We’re currently working on designing a UX and associated functionality that can detect whether account autodiscovery requires a password, and react accordingly.

The redesigned UI for Address Book account additions is also underway and planned for release to users on 25th May.

Global Message Database

We welcomed a new team member in April so technical onboarding has been a priority. In addition, a long list of patches landed, with the team focused on refactoring core code responsible for the management of common folders such as Drafts or Sent Mail, and significant changes to nsIMsgPluggableStore.

Time was also spent researching and planning a path to tackle dangling folders in May.

To follow their progress, the team maintains documentation in Sourcedocs, visible here.

New Features Landing Soon

A number of requested features and important fixes have reached our Daily users this month. We want to give special thanks to the contributors who made the following possible…

If you would like to see new features as they land, and help us squash some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

Thunderbird

The post Thunderbird Monthly Developer Digest – April 2025 appeared first on The Thunderbird Blog.

Mozilla Addons BlogNew Extension Data Consent Experience now available in Firefox Nightly

In a previous blog post I explained that we’re working to streamline the data consent experience for extensions by allowing users to consent to sharing data with extensions directly in the Firefox add-on installation flow itself — rather than in a separate post-install experience, which today requires developers to build their own custom consent flows.

We are not changing our policies on data collection, nor are we changing how extensions can collect data. Our goal is to simplify how a developer can be compliant with our existing policies so that we can dramatically reduce the:

  1. development effort required to be compliant with Firefox data policies
  2. confusion users face when installing extensions, by providing a more consistent experience that gives them more confidence and control around the data collected or transmitted
  3. time it takes for an extension to be reviewed to ensure it’s compliant with our data collection policies

I’m pleased to announce that the initial version of this feature is now available in Firefox Nightly version 139 (and later) for extension developers to test out and provide feedback.

We need your help!

We want to make sure that the new data consent experience is easy for extension developers to adopt, and works as a drop-in replacement for any existing custom consent experiences you may have created. We also need to know if the data categories available to choose from are appropriate for your extension.

We encourage extension developers to test out this new experience with their own extensions in Firefox Nightly, and let us know what they think by posting on this Mozilla Connect thread, or by reaching out to me directly on BlueSky!

To install an extension that has this experience configured, you will need to install it from a file, after first setting the xpinstall.signatures.required preference to false in about:config. This works only on Nightly, not on release versions of Firefox.

How it works

Developers can specify what data they wish to collect or transmit in their extension’s manifest.json file. This information will be parsed by the browser and shown to the user when they first install the extension. A user can then choose to accept or reject the data collection, just like they do with extension permissions. The developer can also specify that the extension collects no data.

To standardize this information for both developers and end users, we have created categories based on data types that extensions might be using today. In line with our current policies, there are two types of data: Personal data, and Technical and interaction data.

To provide feedback on these categories, please let us know via our research survey. Please note that these options are subject to change based on the feedback we receive during this initial phase.

Personal data

Personally identifiable information can be actively provided by the user or obtained through extension APIs. It includes, but is not limited to, names, email addresses, search terms, and browsing activity data, as well as access to and placement of cookies.

 

Each data type below is listed with the name visible during install, the data collection permission used in the manifest (in parentheses), and a definition or examples:

  • Personally identifying information (personallyIdentifyingInfo): contact information like name and address, email, and phone number, as well as other identifying data such as ID numbers, voice or video recordings, age, demographic information, or biometric data.
  • Health information (healthInfo): medical history, symptoms, diagnoses, treatments, procedures, or heart rate data.
  • Financial and payment information (financialAndPaymentInfo): credit card numbers, transactions, credit ratings, financial statements, or payment history.
  • Authentication information (authenticationInfo): passwords, usernames, personal identification numbers (PINs), security questions, and registration information for extensions that offer account-based services.
  • Personal communications (personalCommunications): emails, text or chat messages, social media posts, and data from phone calls and conference calls.
  • Location (locationInfo): region, GPS coordinates, or information about things near a user’s device.
  • Browsing activity (browsingActivity): information about the websites you visit, like specific URLs, domains, or categories of pages you view over time.
  • Website content (websiteContent): anything visible on a website — such as text, images, videos, and links — as well as anything embedded like cookies, audio, page headers, and request and response information.
  • Website activity (websiteActivity): interactions and mouse and keyboard activity like scrolling, clicking, and typing, and actions such as saving and downloading.
  • Search terms (searchTerms): search terms entered into search engines.
  • Bookmarks (bookmarksInfo): information about Firefox bookmarks, including specific websites, bookmark names, and folder names.

Technical and interaction data

Technical data describes information about the environment the user is running, such as browser settings, platform information, and hardware properties. User interaction data includes how the user interacts with Firefox and the installed add-on, metrics for product improvement, and error information.

This category has a single data type, again listed with the permission used in the manifest:

  • Technical and interaction data (technicalAndInteraction): device and browser info, extension usage and settings data, crash and error reports.

Specifying data types

You specify data types your extension transmits in the browser_specific_settings.gecko key in the manifest.json file. As a reminder, our policies state that data transmission refers to any data that is collected, used, transferred, shared, or handled outside of the add-on or the local browser.

Personal data

Personal data permissions can either be required or optional (the one exception is technicalAndInteraction, which cannot be required; this is documented later):

"browser_specific_settings": {
  "gecko": {
    "data_collection_permissions": {
      "required": [...],
      "optional": [...]
    }
  }
}

The rest of this section describes each key in the data_collection_permissions object.

Required data

When types of data are specified in the required list, users must opt in to this data collection to use the extension; they cannot opt out. Figure 1 gives an example of how it could look. If a user does not agree to the data collection, the extension is not installed. Unlike today, this gives the user a chance to review an extension’s data collection requirements before it is installed in their browser.

In the manifest.json file below, the developer specifies a single type of required data: locationInfo.

{
  "manifest_version": 2,
  "name": "Example - Data collection with fallback",
  "version": "1.0.0",
  "permissions": [
    "storage",
    "management"
  ],
  "browser_specific_settings": {
    "gecko": {
        "id": "example-data-collection-with-fallback@test.mozilla.org",
        "data_collection_permissions": {
          "required": [
             "locationInfo"
          ],
          "optional": [
             "technicalAndInteraction"
          ]
         }
      }
  },
  "background": {
    "scripts": [
      "background.js"
    ]
  },
  "browser_action": {},
  "options_ui": {
     "page": "options/page.html"
  }
}

This results in a new paragraph in the installation prompt (see Figure 1). The data permissions are also listed in about:addons as shown in Figure 2.

Screenshot of Firefox extension installation popup showing the new data collection settings

Figure 1: Installation prompt with data types as specified in the manifest

Screenshot of a Firefox extensions Permissions and Data tab showing the new data collection options

Figure 2: The data permissions are also listed in about:addons

Optional data

Optional data collection permissions can be specified using the optional list. These are not surfaced during installation (except technicalAndInteraction; see the next section), and they are not granted by default. The extension can prompt the user to opt in to this data collection after installation, and the user can enable or disable this optional data collection at any time in about:addons, in the Permissions and data section of the extension settings.
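As a sketch of how an extension might implement this post-install opt-in using the permissions API covered later in this post (the helper name is illustrative, and `permissionsApi` stands in for `browser.permissions` so the logic can also be exercised against a mock):

```javascript
// Sketch: prompt for the optional "technicalAndInteraction" data
// permission if the user has not granted it yet. `permissionsApi`
// is browser.permissions in a real extension; it is a parameter
// here only so the flow can be tested with a mock.
async function ensureTelemetryConsent(permissionsApi) {
  const granted = await permissionsApi.getAll();
  if ((granted.data_collection || []).includes("technicalAndInteraction")) {
    return true; // the user has already opted in
  }
  // In a real extension, permissions.request() must be called from
  // a user gesture, e.g. a click handler in the options page.
  return permissionsApi.request({
    data_collection: ["technicalAndInteraction"],
  });
}
```

In Firefox you would call `ensureTelemetryConsent(browser.permissions)`; Firefox then shows its own consent prompt and resolves the request with the user’s choice.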

Technical and interaction data

The technicalAndInteraction data type behaves differently compared to all others. This data permission can only be optional, but unlike the other optional data collection options, the user has the opportunity to enable or disable it during the installation flow. In Figure 1, we can see this choice available in the optional settings section of the installation prompt.

No data collection

We also want to be clear to users when an extension collects no data. To enable this, developers can explicitly indicate that their extension does not collect or transmit any data by specifying "none" as the only required permission in the manifest, as follows:

{
  "manifest_version": 2,
  "name": "extension without data collection",
  "version": "1.0.0",
  "browser_specific_settings": {
    "gecko": {
      "id": "@extension-without-data-collection",
      "data_collection_permissions": {
        "required": ["none"]
      }
    }
  },
  "permissions": [
    "bookmarks",
    "<all_urls>"
  ]
}

When a user attempts to install this extension, Firefox will show the usual installation prompt with the description of the required (API) permissions as well as a new description to indicate that the extension does not collect any data (see Figure 3).

Screenshot of the Firefox extension installation dialog showing no data collection by the extension

Figure 3: Installation prompt with no data transmission defined in the manifest

 

The “no data collected” type is also listed in the “Permissions and data” tab of the extension in about:addons as shown in Figure 4.

Screenshot of a Firefox extensions Permissions and data tab in about:addons showing no data collection by the extension

Figure 4: The “no data collected” permission is listed in about:addons

Note: The none data type can only be required, and it cannot be combined with other data types, including optional ones. If it is, Firefox will ignore the none type and only consider the other data types (see the next section for more information). In addition, Firefox will show a warning message intended for developers in about:debugging, as shown in Figure 5.

Screenshot showing a warning message about the data collection settings configured in the manifest.json file

Figure 5: A warning message is displayed when the none type
is combined with other data collection permissions

Accessing the data permissions programmatically

Extension developers can use the browser.permissions API (MDN docs) to interact with the optional data permissions. Specifically, the getAll() method now returns the list of granted optional data permissions as follows:

await browser.permissions.getAll()
{
  origins: ["<all_urls>"],
  permissions: ["bookmarks"],
  // In this case, the permission is granted.
  data_collection: ["technicalAndInteraction"]
}

Extension developers can also use the browser.permissions.request() API method (MDN docs) to get consent from users for ancillary data collection (defined in the optional list):

await browser.permissions.request({ data_collection: ["healthInfo"] });

This will show the following message to the Firefox user, giving them the choice to opt in to this data collection or not.

Firefox optional data collection consent message

Updates

When an extension is updated, Firefox will only show the newly added required data permissions, unless it’s the special none data type, since we don’t need to bother the user when the extension does not collect any data. This matches how traditional permissions behave today.

Please try it out and let us know what you think!

As I mentioned, we really want to make sure that the new data consent experience is easy for extension developers to adopt, and works as a drop-in replacement for any existing custom consent experiences you may have created.

Please test out this new experience with your own extensions in Firefox Nightly, and let us know what you think by posting on this Mozilla Connect thread.

The post New Extension Data Consent Experience now available in Firefox Nightly appeared first on Mozilla Add-ons Community Blog.

The Servo BlogTwo months in Servo: CSS nesting, Shadow DOM, Clipboard API, and more!

Before we start, let’s address the elephant in the room. Last month, we proposed that we would change our AI contributions policy to allow the use of AI tools in some situations, including GitHub Copilot for code. The feedback we received from the community was overwhelmingly clear, and we’ve listened. We will keep the AI contributions ban in place, and any future proposals regarding this policy will be discussed together, as a community.

At the same time, we have other big news! Complex sites such as Gmail and Google Chat are now usable in Servo, with some caveats. This milestone is only possible through the continued hard work of many Servo contributors across the engine, and we’re thankful for all of the efforts to reach this point.

Google Chat rendering in Servo
Gmail rendering in Servo

Servo now supports single-valued <select> elements (@simonwuelker, #35684, #36677), disabling stylesheets with <link disabled> (@Loirooriol, #36446), and the Refresh header in HTTP responses and <meta> (@sebsebmc, #36393), plus several new CSS features:

We’ve also landed a bunch of new web API features:

servoshell showing new support for ‘image-set()’, ‘fit-content()’, ‘scale’, ‘translate’, ‘rotate’, ‘setLineDash()’, caret and text selection in <input>, and single-valued <select>

The biggest engine improvements we’ve made recently were in Shadow DOM (+70.0pp to 77.9%), the Trusted Types API (+57.8pp to 57.8%), Content Security Policy (+54.0pp to 54.8%), the Streams API (+31.9pp to 68.1%), and CSS Text (+20.4pp to 57.6%).

We’ve enabled Shadow DOM by default after significantly improving support, allowing Servo to render sites like wpt.fyi correctly (@simonwuelker, @longvatron111, @elomscansio, @jdm, @sakupi01, #35923, #35899, #35930, #36104, #34964, #36024, #36106, #36173, #36010, #35769, #36230, #36620).

wpt.fyi rendering in Servo

ReadableStream, WritableStream, DOMPoint, DOMPointReadOnly, and DOMException can now be sent over postMessage() and structuredClone() (@gterzian, @kkoyung, @jdm, @mrobinson, #36181, #36588, #36535, #35989).

We’ve started working on support for stream transforms (@Taym95, #36470) and the trusted types API (@TimvdLippe, @jdm, #36354, #36355, #36422, #36454, #36409, #36363, #36511, #36596). We’ve also laid the groundwork for supporting the ::marker pseudo element (@mrobinson, #36202), animated images in web content (@rayguo17, #36058, #36141), and getClientRects() and getBoundingClientRect() on Range (@simonwuelker, #35993).

Servo can now render the caret and text selection in input fields (@dklassic, @webbeef, #35830, #36478), and we’ve landed a few fixes to radio buttons (@elomscansio, #36252, #36431), file inputs (@sebsebmc, #36458), and input validation (@MDCODE247, #36236).

Having disabled Servo’s original, experimental layout implementation by default back in November 2024, we’ve now taken the step of deleting all of the disabled code (@Loirooriol, @TimvdLippe, @mrobinson, #35943, #36281, #36698) and moving all of the remaining layout code to layout (@mrobinson, #36613). Our new layout engine is improving significantly month over month!

We’ve added a new --enable-experimental-web-platform-features option that enables all engine features, even those that may not be stable or complete. This works much like Chromium’s option with the same name, and it can be useful when a page is not functioning correctly, since it may allow the page to make further progress. Servo now uses this option when running the Web Platform Tests (@Loirooriol, #36335, #36519, #36348, #36475), and the features enabled by this option are expected to change over time.

Servo-the-browser (servoshell)

Our devtools integration now supports iframes (@simonwuelker, #35874) and color scheme simulation (@uthmaniv, #36253, #36168, #36297), shows computed display values when inspecting elements (@stephenmuss, #35870), and supports multiple tabs open in the servoshell browser (@atbrakhi, #35884). We’ve also landed the beginnings of a Sources panel (@delan, @atbrakhi, #36164, #35971, #36631, #36632, #36667). To use devtools, we now require Firefox 133 or newer (@atbrakhi, #35792).

Dialogs support keyboard interaction to close and cancel them (@chickenleaf, #35673), and the URL bar accepts any domain-like input (@kafji, #35756). We’ve also enabled sRGB colorspaces on macOS for better colour fidelity (@IsaacMarovitz, #35683). Using the --userscripts option without providing a path defaults to resources/user-agent-js. Finally, we’ve renamed the OpenHarmony app bundle (@jschwe, #35790).

Servo-the-engine (embedding)

We’ve landed some big changes to our webview API:

Embedders can now inject userscript sources into all webviews (@Legend-Master, #35388). Links can be opened in a new tab by pressing the Ctrl or ⌘ modifier (@webbeef, @mrobinson, #35017). Delegates will receive send error notifications for requests (@delan, #35668), and we made progress towards a per-webview renderer model (@mrobinson, @delan, #35701, #35716).

We fixed a bug causing flickering cursors (@DevGev, #35934), and now create the config directory if it does not exist (@yezhizhen, #35761). We also fixed a number of bugs in the WebDriver server related to clicking on elements, opening and closing windows, and returning references to exotic objects (@jdm, #35737).

Under the hood

We’ve finally finished splitting up our massive script crate (@jdm, #35988, #35987, #36107, #36216, #36220, #36095, #36323), which should cut incremental build times for that crate by 60%. This is something we’ve wanted to do for over eleven years (@kmcallister, #1799)!

webgpu rebuilds are now faster as well, with changes to that crate no longer requiring a script rebuild (@mrobinson, #36332, #36320).

Stylo has been upgraded to 2025-03-15 (@nicoburns, @Loirooriol, #35782, #35925, #35990), and we upgraded to the 2024 Rust edition (@simonwuelker, #35755).

We added a memory usage view for Servo embedders: visit about:memory for a breakdown of identified allocations (@webbeef, @jdm, #35728, #36557, #36558, #36556, #36581, #36553, #36664).

about:memory screenshot

Perf and stability

We’ve started building an incremental layout system in Servo (@mrobinson, @Loirooriol, #36404, #36448, #36447, #36513), with a huge speedup to offsetWidth, offsetHeight, offsetLeft, offsetTop, and offsetParent layout queries (@mrobinson, @Loirooriol, #36583, #36629, #36681, #36663). Incremental layout will allow Servo to respond to page updates and layout queries without repeating layout work, which is critical on today’s highly dynamic web.

OffscreenRenderingContext is no longer double buffered, which can improve rendering performance in embeddings that rely on it. We also removed a source of canvas rendering latency (@sagudev, #35719), and fixed performance cliffs related to the Shadow DOM (@simonwuelker, #35802, #35725). We improved layout performance by reducing allocations (@jschwe, #35781) and caching layout results (@Loirooriol, @mrobinson, #36082), and reduced the latency of touch events when they are non-cancelable (@kongbai1996, #35785).

We also fixed crashes involving touch events (@kongbai1996, @jschwe, #35763, #36531, #36229), service workers (@jdm, #36256), WritableStream (@Taym95, #36566), Location (@jdm, #36494), <canvas> (@tharkum, @simonwuelker, #36569, #36705), <input> (@dklassic, #36461), <iframe> (@leftmostcat, #35742), ‘min-content’ and ‘max-content’ (@Loirooriol, #36518, #36571), flexbox (@mrobinson, #36123), global objects (@jdm, #36491), resizing the viewport (@sebsebmc, #35967), and --pref shell_background_color_rgba (@boluochoufeng, #35865). Additionally, we removed undefined behaviour from the Rust bindings to the SpiderMonkey engine (@gmorenz, #35892, #36160, #36161, #36158).

The project to decrease the risk of intermittent GC-related crashes continues to make progress (@jdm, @Arya-A-Nair, @Dericko681, @yerke, #35753, #36014, #36043, #36156, #36116, #36180, #36111, #36375, #36371, #36395, #36392, #36464, #36504, #36495, #36492).

More changes

Our flexbox implementation supports min/max keyword sizes for both cross and main axes (@Loirooriol, #35860, #35961), as well as keyword sizes for non-replaced content (@Loirooriol, #35826) and min and max sizing properties (@Loirooriol, #36015). As a result, we now have full support for size keywords in flexbox!

We made lots of progress on web API features:

On security and networking:

On the DOM:

And on many other bugs:

Donations

Thanks again for your generous support! We are now receiving 4664 USD/month (+6.8% over February) in recurring donations. This helps cover the cost of our self-hosted CI runners and our latest Outreachy interns, Usman Baba Yahaya (@uthmaniv) and Jerens Lensun (@jerensl)!

Servo is also on thanks.dev, and already 24 GitHub users (+3 over February) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

Josh Matthews (@jdm) will be speaking about Servo at RustWeek 2025, on Tuesday 13 May at 17:05 local time (15:05 UTC). See you there!

The Rust Programming Language BlogAnnouncing Google Summer of Code 2025 selected projects

The Rust Project is participating in Google Summer of Code (GSoC) again this year. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open-source.

In March, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories, even before GSoC officially started!

After the initial discussions, GSoC applicants prepared and submitted their project proposals. We received 64 proposals this year, almost exactly the same number as last year. We are happy to see that there was again so much interest in our projects.

A team of mentors primarily composed of Rust Project contributors then thoroughly examined the submitted proposals. GSoC required us to produce a ranked list of the best proposals, which was a challenging task in itself since Rust is a big project with many priorities! Same as last year, we went through several rounds of discussions and considered many factors, such as prior conversations with the given applicant, the quality of their proposal, the importance of the proposed project for the Rust Project and its wider community, but also the availability of mentors, who are often volunteers and thus have limited time available for mentoring.

As is usual in GSoC, even though some project topics received multiple proposals, we had to pick only one proposal per project topic. We also had to choose between great proposals targeting different work to avoid overloading a single mentor with multiple projects.

In the end, we narrowed the list down to a smaller number of the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.

Selected projects

On the 8th of May, Google announced the accepted projects. We are happy to share that 19 Rust Project proposals were accepted by Google for Google Summer of Code 2025. That's a lot of projects, which makes us super excited about GSoC 2025!

Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

Congratulations to all applicants whose project was selected! The mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

We would also like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors who would like to work on projects that would help the Rust Project maintainers and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.

There is also a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!

The accepted GSoC projects will run for several months. After GSoC 2025 finishes (in autumn of 2025), we will publish a blog post in which we will summarize the outcome of the accepted projects.

Steve FinkSinful Debugging

Recently, I was debugging my SpiderMonkey changes when running a JS test script, and got annoyed at the length of the feedback cycle: I’d make a change to the test script or the C++ code, rerun (under rr), go into the debugger, stop execution at a point where I knew what variable was what, set […]

Tantek ÇelikRunning For Re-election in the 2025 W3C Advisory Board (AB) Election

Tantek Çelik is nominated by Mozilla Foundation.
Nomination statement from Tantek Çelik:

Hi, I'm Tantek Çelik and I'm running for the W3C Advisory Board (AB) to build on the momentum the AB has built with transitioning W3C to a community-led and values-driven organization. I have been participating in and contributing to W3C groups and specifications for over 25 years.

I am Mozilla’s Advisory Committee (AC) representative and previously served on the AB for several terms, starting in 2013, with a two year break before returning in 2020. In early years I drove the movement to shift W3C to more open licenses for specifications, and more responsiveness to the needs of open source communities and independent website publishers.

Most recently on the AB I led the AB’s Priority Project for a W3C Vision as contributor and editor, taking it through wide review, and consensus at the AB to a vote by the AC to adopt the Vision as an official W3C Statement.

Previously I also co-chaired the W3C Social Web Working Group that produced several widely interoperably deployed Social Web Standards. Mastodon and other open source software projects built a social network on ActivityPub and other social web specs which now require maintenance from implementation experience. As such, I have participated in the Social Web Incubator Community Group and helped draft a new charter to restart the Social Web Working Group and maintain these widely adopted specifications.

With several members stepping down, the AB is experiencing much higher than usual turnover in this election.

I am running for re-election to both help with continuity, on the Vision project and other efforts, and work with new and continuing Advisory Board members to build a fresh, forward looking focus for the AB.

I believe governance of W3C, and advising thereof, is most effectively done by those who have the experience of actively collaborating in working groups producing interoperable specifications, and especially those who directly create on the web using W3C standards. This direct connection to the actual work of the web is essential to prioritizing the purpose & scope of governance of that work.

Beyond effective governance, the AB has played the more crucial role of a member-driven change agent for W3C. While the Board and Team focus on the operations of keeping the W3C legal entity running smoothly, the AB has been and should continue to be where Members go to both fix problems and drive forward-looking improvements in W3C to better fulfill our Vision and Mission.

I have Mozilla's financial support to spend my time pursuing these goals, and ask for your support to build the broad consensus required to achieve them.

I post on my personal site tantek.com. You may follow my posts there or from Mastodon: @tantek.com@tantek.com

If you have any questions or want to chat about the W3C Advisory Board, Values, Vision, or anything else W3C related, please reach out by email: tantek at mozilla.com. Thank you for your consideration.

Addendum: More Candidates Blogged Nomination Statements

Several other candidates (all new candidates) have also blogged their nomination statements, on their personal websites, naturally. This is the first AB election I know of where more than one candidate blogged their nomination statement. Ordered earliest published first:

And one more candidate blogged about why he is running:

Data@MozillaData and Firefox Suggest

Introduction

Firefox Suggest is a feature that displays direct links to content on the web based on what users type into the Firefox address bar. Some of the content that appears in these suggestions is provided by partners, and some of the content is sponsored. It may also include locally-stored items from the user’s history or bookmarks.

In building Firefox Suggest, we have followed our long-standing Lean Data Practices and Data Privacy Principles. Practically, this means that we take care to limit what we collect, and to limit what we pass on to our partners. The behavior of the feature is straightforward: suggestions are shown as you type, and are directly relevant to what you type.

We take the security of the datasets needed to provide this feature very seriously. We pursue multi-layered security controls and practices, and strive to make as much of our work as possible publicly verifiable.

In this post, we wanted to give more detail about what data is needed to provide this feature, and about how we handle it.

What is Firefox Suggest?


The address bar experience in Firefox has long been a blend of results provided by partners (such as the user’s default search provider) and information local to the client (such as recently visited pages). Firefox Suggest augments these data sources with search completions from Mozilla, which it displays alongside the local and default search engine suggestions.

Firefox Suggest data flow diagram

Suggest is currently available by default to users in the following countries:

  • The United States
  • The United Kingdom
  • France
  • Germany
  • Poland
  • Italy

Data Collected by Mozilla for an improved experience

Users with access to Suggest can choose to enable an expanded version of the feature. This expanded version requires access to additional data and is only available to users who have opted in (via an opt-in prompt or the Settings menu). When users have opted in to the improved experience, Mozilla collects the following information to power Firefox Suggest.

  • Clicks and impressions: Mozilla receives notice that a suggestion was shown (an impression). When a user clicks on a suggestion, Mozilla receives notice that the suggested link was clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.
  • Search keywords: Firefox Suggest sends Mozilla information about certain search keywords, which may be shared with partners (after being stripped of any personally identifiable information) to fetch the suggested content and improve the Suggest feature.

How Data is Handled and Shared

Mozilla handles this data conservatively. When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.

For example, we do not share a user’s specific search queries (except where the user has opted in to the enhanced experience), we do not identify which specific user sent the request, and we do not use cookies to track users’ online activity after their search is performed.

Similarly, while a Firefox client’s location can typically be determined from their IP address, we convert a user’s IP address to a more general location immediately after we receive it, and we remove it from all datasets and reports downstream. Access to machines and (temporary, short-lived) datasets that might include the IP address is highly restricted, and limited only to a small number of administrators. We don’t enable or allow analysis on data that includes IP addresses.

We’re excited to be bringing Firefox Suggest to you. See the product announcement to learn more!

EDIT: May 7, 2025: Updated to clarify product details and reflect changes.


This Week In RustThis Week in Rust 598

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is structstruck, a proc-macro crate for enabling nested struct/enum definitions.

Thanks to Julius Michaelis for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

447 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Rustfmt
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A relatively noisy week due to addition of new benchmarks as part of our 2025 benchmark update, and a number of large regressions in a rollup landing late in the week (and so not yet investigated).

Triage done by @simulacrum. Revision range: 25cdf1f6..62c5f58f

2 Regressions, 2 Improvements, 6 Mixed; 3 of them in rollups. 31 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas
Cargo
Rust RFCs

No Items entered Final Comment Period this week for Language Reference, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-05-07 - 2025-06-04 🦀

Virtual
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Well, the answer is basically yes. Our firmware is all Rust. Every component of our autonomy stack is Rust. Our app is 50% in Rust. And, our visualization tools are in Rust. Our production tools are in rust. The production QC software, which we ship to China, is in rust. Our internal websites are in rust. It's rust all over. We’ve drank the Rust Kool-Aid. In fact, there is no Python installed on the robots. This is not to dis Python at all, but it’s just simply not there.

We use Python for neural network training. But Python is boxed to that. Everything else is Rust. And, the advantage of using Rust exponentially builds up.

Vivek Bagaria on filtra.io

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyKeep on Rolling with Profile Improvements – These Weeks in Firefox: Issue 180

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Gautam Panakkal
  • Harold Camacho
  • Julian Gaibler

New contributors (🌟 = first patch)

  • Brian Ouyang: Bug 1955567 – Fix spacing of search bar in overflow menu in customize mode
  • Chris Shiohama: Bug 1920146 – Fixed sorting of duration column in network monitor
  • Gautam Panakkal: Bug 1895516 – Align ‘x’ button in ‘Import history…’ banner with right edge of card’s inner padding
  • Harold Camacho
    • Bug 1323331 – Improve reader mode code dealing with detected language/direction
    • Bug 1824630 – TabStateCache documentation/function signatures are misleading
  • Jason Jones: Bug 1960383 – Remove vestigial logic related to `browser.translations.panelShown`
  • John McCann [:johnm]: Bug 1952307 – Use hasAttribute instead of getAttribute in restoreWindowFeatures
  • Abdelaziz Mokhnache: Bug 1953454 – Extract shared helper to compute the title of File, Url and Path columns
  • Ricardo Delgado Gomez: Bug 1960409 – Add mozMessageBar.ftl localization link to about:translations

Project Updates

Accessibility

  • James Teh [:Jamie] has flipped the prefs enabling native UIA support for many assistive technologies on Windows (in addition to IA2). For instance, Windows’ own Narrator (a screen reader) will be better able to access the accessibility tree from Firefox (Meta bug 762769):
    • When you get a report of an assistive technology (e.g. the JAWS screen reader) not working properly with Firefox, try toggling `accessibility.uia.enable` between 1 and 0 to find out whether UIA or IA2 is to blame

Add-ons / Web Extensions

Add-ons Manager & about:addons
  • Colorways built-in themes cleanup has been completed in Firefox 139, and all of the expired Colorways built-in themes have been removed from mozilla-central
    • Most clients were migrated to AMO-hosted themes two years ago. The subset of clients that could not be migrated automatically to the AMO themes are being notified about how to reach the Colorways themes hosted on AMO (with a notification box shown at browser startup and/or a message bar shown in about:addons), and will also be switched automatically to the default theme.
  • Deprecated app-system-default XPIProvider location has been removed (followup to migrating all system add-ons into the omni jar)
WebExtensions Framework
WebExtension APIs

DevTools

WebDriver BiDi

Lint, Docs and Workflow

Migration Improvements

  • We discovered last week that Chrome on Windows is using Application Bound Encryption for various data stores, which is great for protecting those data stores from malware, but also means that it’s very difficult for us to import things like credentials and payment methods automatically from those local data stores.
    • The current workaround is to use the same approach we use for Safari, and guide the user through exporting Chrome passwords to a CSV file, and importing that CSV file. We have some WIP patches up to add support for this in the migration wizard, but are exploring other options as well.
    • Thanks to the Credential Management team for their help with the analysis!

New Tab Page

Profile Management

  • On track for a Nimbus-driven rollout in 138, starting at 0.5% but may go larger
  • Sorry, we broke profiles in 139 Nightly last Wed/Thurs
    • Bug 1962531 – Profiles lost when the startup profile selector is enabled
    • If you updated and restarted and lost your profile group, you got stung by this bug.
    • We paused updates Friday until the fix landed (thanks Mossop!), so if you haven’t seen the bug by now, you won’t see it.
    • Your data is not lost! We’ve just accidentally broken the link between your default profile and the profile group database.
    • For help – join us in #fx-profile-eng on Matrix and we’ll help you get reconnected (also blog post coming with details for a self-service fix)
  • So what happened that caused the bug? A huge refactoring landed that split the profiles feature toggle from the cross-profile shared database, and we missed the edge case where we startup into the profile selector window. See the bug 1953884 and its parent metabug 1953861 for details.

Search and Navigation

  • Daisuke enabled weather for Firefox Suggest by default – 1961069
  • Daisuke added getFaviconForPage to nsIFaviconService – 1915762
  • Dale added “save page” as a term one can use to see the “Save page as PDF” Quick Actions button – 1953492
  • Dale also added “manage” keyword to see quick actions related to managing settings – 1953486
  • Moritz landed a couple patches related to telemetry – 1788088, 1915252
  • Mark expanded search config with a new record type to allow easy retrieval of all locales used in search config – 1962432

Storybook/Reusable Components/Acorn Design System

  • Metrics updates
  • More design system components show code examples in Figma now
  • Acorn newsletter went out last Wednesday Web preview (images are currently broken 😞)

Tab Groups

  • Firefox 138, released 29 April 2025:
    • Rolled out to 95% of users worldwide
    • You can now drag and drop entire tab groups in the tab strip
    • tabs Web Extension API additions to support tab groups (see also the Add-ons / Web Extensions section)
  • Planned for Firefox 139, now in Beta, releasing 27 May 2025:
    • Enabled by default worldwide
    • tabGroups Web Extension API additions to support tab groups (see also the Add-ons / Web Extensions section)

William DurandMoziversary #7

A few days ago, this was my seventh Moziversary 🎂 I joined Mozilla as a full-time employee on May 1st, 2018. I previously blogged in 2019, 2020, 2021, 2022, 2023, and 2024.

While I may not have the energy to reflect extensively on the past year right now, I can say with confidence that the last 12 months have been incredibly productive, and things are generally going well for me.

Seven years later, I am still part of the Add-ons team. As a senior staff engineer, I am no longer working full time on the WebExtensions team. Instead, I am spending my time on anything related to Add-ons within Mozilla (be it Firefox, AMO, etc.).

My team went through a lot of changes over the last few years1, with some years more memorable than others. About a year ago, things started to head in the right direction, and I am rather hopeful. It’s going to take some time, but the team is really set up for success again!

Shout-out to all my amazing colleagues at Mozilla, I wouldn’t be where I am today without y’all ❤️

  1. Let’s talk briefly about the elephant. Mozilla has changed a lot too but I don’t have much control over that so I tend to not think too much about it 🤷 

The Rust Programming Language BlogAnnouncing rustup 1.28.2

The rustup team is happy to announce the release of rustup version 1.28.2. Rustup is the recommended tool to install Rust, a programming language that empowers everyone to build reliable and efficient software.

What's new in rustup 1.28.2

The headlines of this release are:

  • The cURL download backend and the native-tls TLS backend are now officially deprecated and a warning will start to show up when they are used. pr#4277

    • While rustup predates reqwest and rustls, the rustup team has long wanted to standardize on an HTTP + TLS stack with more components in Rust, which should increase security, potentially improve performance, and simplify maintenance of the project. With the default download backend already switched to reqwest since 2019, the team thinks it is time to focus maintenance on the default stack powered by these two libraries.

    • For people who have set RUSTUP_USE_CURL=1 or RUSTUP_USE_RUSTLS=0 in their environment to work around issues with rustup, please try to unset these after upgrading to 1.28.2 and file an issue if you still encounter problems.

  • The version of rustup can be pinned when installing via rustup-init.sh, and rustup self update can be used to upgrade/downgrade rustup 1.28.2+ to a given version. To do so, set the RUSTUP_VERSION environment variable to the desired version (for example 1.28.2). pr#4259

  • rustup set auto-install disable can now be used to disable automatic installation of the toolchain. This is similar to the RUSTUP_AUTO_INSTALL environment variable introduced in 1.28.1 but with a lower priority. pr#4254

  • Fixed a bug in Nushell integration that might generate invalid commands in the shell configuration. Reinstalling rustup might be required for the fix to work. pr#4265

How to update

If you have a previous version of rustup installed, getting the new one is as easy as stopping any programs which may be using rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

Rustup's documentation is also available in the rustup book.

Caveats

Rustup releases can come with problems not caused by rustup itself but just due to having a new release.

In particular, anti-malware scanners might block rustup or stop it from creating or copying files, especially when installing rust-docs which contains many small files.

Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release.

Thanks

Thanks again to all the contributors who made this rustup release possible!

Tantek ÇelikCSF_02: Entropy Is Your Friend In Security

Deliberate use of entropy, randomness, even changing routines can provide a layer of defense for cybersecurity.

More Steps for Cybersecurity

Here are three more steps (in addition to Three Steps for IndieWeb Cybersecurity) that you can take to add obstacles for any would-be attackers and further secure your online presence.

  1. Different email address for each account, AKA email masking. Use or create a different email alias for each service you sign up for. With a single email inbox, like any username at Gmail, you can often append a plus sign (+) and a brief random string. If you use your own #indieweb domain for email addresses, pick a different name at that domain for each service, with a bit of entropy like a short number. A third option is to use an email masking service — try a web search for that phrase for options to check out. Each of these works to limit or at least slow down an attacker, because even if they gain control of one email alias or account, any “forgot password” (AKA password reset, account reset, or recovery) attempts with that same email on other services won’t work, since each service only knows an email address unique to it.
  2. Different password for each account. This is a well known security technique against credential stuffing attacks. I.e. if someone retrieves your username and password from a data breach, or guesses them, or tricks (phishes) you into entering them for one service, they may try to “stuff” those “credentials” into other services. Using different passwords for all online services you use can thwart that attack. Note however that different passwords with the same email address will not stop an account reset attack, which is why this tip is second to email masking.
  3. Use a password manager to autofill. All modern browsers and many operating systems have built-in password managers, most of which also offer free sync services across devices. There is also third party password manager software and third party password manager services which are designed to work across devices, browsers, and operating systems. Regardless of which option you choose, always using a password manager to autofill your login username (or email) and password can be a very effective method of reducing the chances of being phished. Password managers will not autofill forms on fake phishing domains that are pretending to be a legitimate service. Password managers can also help with keeping track of unique email addresses and passwords for each service. Most will also auto-generate long and random (high entropy) passwords for you.

I’ll close with a reminder that Perfect is the enemy of good. This post has been a draft for a while so I decided to publish it as a summary, rather than continuing to iterate on it. I’m sure others have written much longer posts. Similarly, even if you cannot take all these actions immediately everywhere, you can benefit by incrementally taking some of these steps on some accounts. Prioritize important accounts and take steps to increase their security.

Previous post in this series: CSF_01: Three Steps for IndieWeb Cybersecurity

Glossary

Glossary for some terms, phrases, and further reading on each.

credential stuffing
https://en.wikipedia.org/wiki/Credential_stuffing
data breach
https://en.wikipedia.org/wiki/Data_breach
entropy
https://en.wikipedia.org/wiki/Entropy_(information_theory)
password manager
https://en.wikipedia.org/wiki/Password_manager
phish, phished, phishes, phishing
https://en.wikipedia.org/wiki/Phishing

Syndicated to: IndieNews

The Mozilla BlogMozilla’s CEO discusses testimony in U.S. v. Google search case

Today, Mozilla Chief Financial Officer, Eric Muhlheim, testified in the U.S. v. Google LLC search trial, highlighting the potential impacts this case could have on small and independent browsers, and the overall ecosystem. 

There are a few key themes of Muhlheim’s testimony that we’ll expound on: 

Mozilla’s search options are based on user choice 

Firefox users view Google as the best quality search engine. Mozilla experienced this firsthand when we switched the Firefox browser’s default search engine from Google to Yahoo between 2014 and 2017 in an effort to support search competition. Firefox users found Yahoo’s search quality lacking and some switched to Google search while others left the Firefox browser altogether.

Firefox offers its users greater and more easily accessible search engine choice than any major browser. From providing search engine shortcuts, to easy default settings and a range of options in the address bar, alternative search engines are readily available within Firefox. Put simply, our long-standing search strategy has been to evaluate and select the best search experience region by region, enabling choice for Firefox users with more than 50 search providers across more than 90 locales. We make sure our agreements do not make Google an exclusive search provider on Firefox or impede our ability to promote choice.

The breaking point

It’s no secret that search revenue accounts for a large portion of Mozilla’s annual revenue. Firefox is an independent browser — we don’t have our own OS, devices, or app store. Without this revenue, Mozilla and other small, independent browsers may be forced to scale back operations and cut support for critical projects like Gecko, the only remaining browser engine competing with Google’s Chromium and Apple’s WebKit. 

Innovation, privacy and user choice can only thrive when browser engines compete. Without that, there’s no push to make the web faster, safer, or more inclusive. If we lose or weaken Gecko, the web will be optimized for commercial business models and priorities, not the values that Mozilla champions for the web such as privacy, accessibility and user choice. The open web only stays open if websites, apps, and content interoperate and work everywhere.

A remedy cannot truly improve competition and choice if it solves one problem by creating another.

The path forward

Following the testimony, Laura Chambers, CEO of Mozilla, emphasized what we’d like to see coming out of the trial by stating: “This case will shape the competitive landscape of the internet for years to come, and any remedy must strengthen, rather than weaken, the independent alternatives that people rely on for privacy, innovation, and choice.

Smaller, independent browsers, like Firefox, rely on monetization through search partnerships to sustain our work and invest in user-focused innovation. Without these partnerships, we’d face serious constraints—limiting not just our ability to grow but also our ability to provide a non-profit-backed alternative to Chrome, Edge, and Safari. 

This case is also about user choice. Mozilla’s approach to search is built around giving people options. Time and again, we’ve seen people leave our browser when forced to use a search engine they don’t prefer. Without search partnerships, independent browsers — like Mozilla’s Firefox browser and Gecko browser engine — would face severe constraints.

We recognize the importance of improving search competition. However, doing so shouldn’t come at the cost of browser competition. We believe the court should ensure that small and independent browsers are not harmed in any final remedies. Without this, we risk trading one monopoly for another, and the vibrant, people-first web we’ve spent decades fighting for could begin to fade.”

The post Mozilla’s CEO discusses testimony in U.S. v. Google search case appeared first on The Mozilla Blog.

Don Martinew browser buying rules for states?

The W3C TAG, in Third Party Cookies Must Be Removed, writes, Third-party (AKA cross-site) cookies are harmful to the web, and must be removed from the web platform.

But, because of a variety of business, legal and/or political reasons, that’s not happening right now. As power users know but a lot of people don’t, a typical web browser is not really usable out of the box. (Remember when Linux distributions came with a mail server set up as an open SMTP relay? And you had to learn how to turn that off or have your Linux box used by email spammers? Good times.) Some of the stuff that needs to get fixed before using a browser seriously includes:

Not every user can be expected to reconfigure their browser and install extensions. In a higher-trust society users would not have to learn this stuff—the browser vendors would have been taking their Fiduciary Duties seriously all along. But that’s not the way it is. So the responsibility ends up falling on the company or school desktop administrator, or family computer person, to fix as much as possible (turning off browser ad features from the command line).

Power users and support people (paid and unpaid) can do some of the work, and another place to pay attention to browser problems is at the state level. States buy a lot of desktop computers, and the procurement process is an opportunity to require some fixes. Back in the late 1990s, the Microsoft Windows game Minesweeper caused a moral panic over government employee time wasting, and three states required that computers purchased by the government must have the pre-installed games removed.

Web surveillance has a much bigger set of risks than just time-suckage, so states could add the necessary browser reconfiguration or extensions to their requirements. The purchasing policy change to remove third-party cookies is about as easy as the change to remove Minesweeper. Requiring a complete ad blocker would be going too far because of speech issues and the use of ads to support legit sites, so a state requirement could result in funding for a blocklist that covers just the super crime-filled and otherwise risky ad services and leaves the rest alone for now.

More: there ought to be a law

Bonus links

The hammer falls on Apple’s malicious-compliance scheme by Jason Snell, A senior Apple exec could be jailed in Epic case; it’s time to end this disaster by Ben Lovejoy. (Nice economic stimulus—more subscription money will go to small content or software businesses that are more likely to hire people, and less to a big company that’s just going to sit on it or do stock buybacks or whatever.)

Why I’m getting off US tech by Paris Marx. A proper response to the dominance of US tech firms and the belligerence of the US government won’t come through individual actions; it requires governments in Europe, Canada, Brazil, and many other parts of the world to strategize and deploy serious resources to develop an alternative.

Mozilla Addons BlogWebExtensions Support for Tab Groups

Exciting news: with yesterday’s release of Firefox 138, tab groups are now available to all users! Tab groups have been a long standing feature request for users, so it’s wonderful to see this go out to everyone.

New browser features are great, but what’s even better is when they’re backed by WebExtensions APIs that allow our amazing developer community to deeply integrate with those features. So, without further ado, let’s get into the new capabilities available in this release.

What’s new in 138

Firefox 138 includes initial support for tab group management in WebExtensions APIs. More specifically, we’ve updated the Tabs API with a few new tricks that allow extension developers to create tab groups, modify a group’s membership, and ungroup tabs:

  • tabs.group() creates a new tab group that contains the specified tab(s) (MDN, bug 1959714)
  • tabs.ungroup() removes the specified tab(s) from their associated tab groups (MDN, bug 1959714)
  • tabs.query() can now be used to query for tabs with a given groupId (MDN, bug 1959715)
  • Tab objects now have a groupId property that identifies the group the tab belongs to, if any (MDN, bug 1959713)
  • The tabs.onUpdated event now emits updates for tab group membership changes (MDN, bug 1959716)
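As a minimal sketch of how these calls fit together (the `groupMatchingTabs` helper and its URL pattern are illustrative, not part of the API), an extension with the `tabs` permission could gather ungrouped tabs matching a pattern into a new group:

```javascript
// Sketch: collect ungrouped tabs matching a URL pattern into one new group.
// Assumes an extension background script; pass the global browser.tabs
// namespace as `tabsApi`. A groupId of -1 marks a tab that is not grouped.
async function groupMatchingTabs(tabsApi, urlPattern) {
  const tabs = await tabsApi.query({ currentWindow: true, url: urlPattern });
  const ungrouped = tabs.filter((tab) => tab.groupId === -1);
  if (ungrouped.length === 0) {
    return null; // nothing to group
  }
  // tabs.group() resolves to the id of the newly created group.
  return tabsApi.group({ tabIds: ungrouped.map((tab) => tab.id) });
}
```

Called as `groupMatchingTabs(browser.tabs, "*://example.com/*")`, the promise resolves to the new group’s id, which can then be fed back into `tabs.query({ groupId })`.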

Best practices

As we learn more about how users interact with Tab Groups and how extensions integrate Tab Groups into their features, we’ll build out and expand on these suggestions to help add-on developers create better interactions for users. Here are the suggestions we have so far.

Moving tabs

Be aware that changing a tab’s position in the tab strip may change its group membership, and that your users may not expect that moving tabs using your add-on will move tabs in or out of their tab groups. Use the groupId property on Tab instances to ensure that the tab is or is not grouped as expected.

Reorganizing tabs

Take tab groups into consideration when organizing tabs. For example, Firefox Multi-Account Containers has a “sort tabs by container” feature that reorganizes tabs so that tabs in the same container are grouped together. Since moving a tab can change its group membership, this could have unexpected consequences for users. To avoid this destructive operation, the add-on was updated to skip over grouped tabs.

To avoid destructive changes to a user’s tab groups, we recommend reorganizing only ungrouped tabs, or only the tabs within a given tab group, rather than all tabs in a window.
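A hedged sketch of that recommendation (the helper name and the alphabetical sort key are invented for illustration): sort only the ungrouped tabs and leave every group untouched:

```javascript
// Sketch: reorder only ungrouped tabs (here, alphabetically by URL),
// skipping grouped tabs entirely so no tab changes group membership.
// `tabsApi` stands in for the browser.tabs namespace.
async function sortUngroupedTabs(tabsApi) {
  const tabs = await tabsApi.query({ currentWindow: true });
  const ungrouped = tabs.filter((tab) => tab.groupId === -1); // -1 = not grouped
  const sorted = [...ungrouped].sort((a, b) => a.url.localeCompare(b.url));
  for (const tab of sorted) {
    // index -1 appends at the end of the tab strip. Per the caution above,
    // a careful add-on would re-check tab.groupId after each move.
    await tabsApi.move(tab.id, { index: -1 });
  }
  return sorted.map((tab) => tab.id);
}
```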

What’s coming

In addition to the features added in 138, we are also looking to further expand tab group support with the introduction of the Tab Groups API in Firefox 139. This will address a few gaps in our tab group support, including the ability to:

  • set a tab group’s title, color, and collapsed state (tabGroups.update())
  • move an entire tab group (tabGroups.move())
  • get info about a single tab group (tabGroups.get())
  • get info about all tab groups (tabGroups.query())
  • subscribe to specific tab group events (onUpdated, onMoved, onCreated, onRemoved)
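To make the new surface concrete, here is a hedged sketch against the Firefox 139 Beta API (the `collapseOtherGroups` helper is hypothetical, and the API may still change before release):

```javascript
// Sketch: collapse every tab group except the one passed in, using the
// tabGroups namespace slated for Firefox 139. `tabGroupsApi` stands in
// for browser.tabGroups.
async function collapseOtherGroups(tabGroupsApi, keepGroupId) {
  const groups = await tabGroupsApi.query({});
  const collapsedIds = [];
  for (const group of groups) {
    if (group.id !== keepGroupId && !group.collapsed) {
      await tabGroupsApi.update(group.id, { collapsed: true });
      collapsedIds.push(group.id);
    }
  }
  return collapsedIds; // ids of the groups we just collapsed
}
```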

We’ve already landed the initial implementation of this API in Firefox 139 Beta, but we’d love to get feedback on the API design and capabilities from our development community. If you’re feeling adventurous, you can start experimenting with these new capabilities and sharing feedback with us today. We encourage you to share your experiences and thoughts with us on Discourse.

If everything proceeds smoothly during the next beta cycle, we anticipate that the Tab Groups API will be available with the release of Firefox 139. We look forward to seeing what you build!

The post WebExtensions Support for Tab Groups appeared first on Mozilla Add-ons Community Blog.

The Mozilla BlogAn NYC culture reporter on YouTube’s influence and the tab that got away

Adlan Jackson is a writer, editor and worker-owner at Hell Gate, a New York City news publication founded as a journalist-run cooperative.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Adlan Jackson, the culture reporter and editor at Hell Gate, a reader-supported New York City news site owned and run by journalists. He talks about YouTube’s cultural influence, the browser tab he shouldn’t have closed and joining his first online forum at age 11 (with parental permission).

What is your favorite corner of the internet? 

I’m a millennial, so I still think YouTube is maybe the most important and underrated social network. I feel like so much culture runs downstream from YouTube. 

I’ve got a few different niches. One is “A Song of Ice and Fire,” the “Game of Thrones” book series. I’m into the deep lore and theory videos, especially the esoteric stuff decoding symbolism. That’s my “chew through some hours” zone.

I also love watching performance videos. The YouTube of the late 2000s and early 2010s had this thriving music community. People would post covers, concert footage, TV performances — all of it. I feel like I developed my entire music taste and sensibility from those videos. That scene has kind of dropped off in the Instagram era, which is a shame, because Instagram just doesn’t archive like YouTube does.

There are still some people out there doing it, though. There’s someone on YouTube right now who’s super active in New York — they go to a ton of indie shows and tape them. I’ve actually been DMing them to ask for an interview, but they haven’t responded.

And yeah, I read the comments. YouTube comments on music videos are famously sentimental and mostly pretty positive. But I like the arguments, too. There’s a lot of generational overlap in the YouTube community, so you’ll see these debates play out that don’t really happen on other platforms.

What is an internet deep dive that you can’t wait to jump back into?

I’ve been really trying to understand online gambling.

I’m not a sports person, so the whole legalization and mainstreaming of sports betting completely passed me by. But it feels like it’s everywhere now — so pervasive that I feel like I’m missing out by not understanding the culture, how it works and why it seems to have hooked people so universally. Lately, I’ve been trying to spend more time in online gambling communities to figure it out.

What is the one tab you always regret closing?

I kind of have this eternal regret that there was some tab I closed that I shouldn’t have — and if I hadn’t, my life would be completely different and better. I have no idea what it was, but I’m sure it mattered.

I used to have hundreds of tabs open all the time. I’ve recently resolved to stop doing that and just close everything out regularly. But back then, I definitely felt like there were essays and Substack posts that were going to lead me to my next big story — and now they’re just gone.

What can you not stop talking about on the internet right now?

I try to avoid posting [on social media] too much. I used to tweet a lot.  Now, in my capacity as a blogger at Hell Gate, I can’t stop talking about the local music scene.

What was the first online community you engaged with?

It was probably this MMO RPG I used to play called “MapleStory” — a Korean side-scrolling, action-adventure, anime-style RPG. There was a forum called sleepywood.net. Sleepywood was a town in MapleStory, so that’s what the website was named after.

I was in there at 11 years old. I remember signing up for the forum — it was just an old style web forum. You had to be 13 or older, and I wasn’t. So I asked my mom, “Can you give me permission to be on this forum?” She wrote a thing, and they let me on.

What’s funny is, I could have just made it up. But I specifically remember that I didn’t. I really got my mom’s permission.

If you could create your own corner of the internet, what would it look like?

I think it would be a place where people feel empowered to create on their own terms. A space where independent media is thriving, and where people are more motivated to pay for work created by people they personally value — not by large conglomerates.

So, someone who skips a Netflix subscription but pays for their friend’s blog. Or someone who doesn’t have Amazon Prime, but subscribes to a local newspaper. 

What articles and/or videos are you waiting to read/watch right now?

Let me look. What do I have opened? The first thing on my YouTube is a Lord of the Rings lore video by In Deep Geek, which is a channel I follow pretty regularly. It’s about the Dead Men of Dunharrow,  the ghost warriors who join Aragorn at the gates of Mordor. I’ll probably watch that later today.

If the internet were designed to strengthen local news, what would that look like? Who should be responsible for making that happen?

I think the government should give money to local news outlets because we’re an important part of civil society. Mostly, I think the government should support local media. But it’s also nice when people really believe in it, too.

As for tech companies — it depends on the company. Some shouldn’t play a role at all. But unconditional cash? That would be great. Cash with no conditions attached.


Adlan Jackson is a writer, editor and worker-owner at Hell Gate, a New York City news publication founded as a journalist-run cooperative. He joined the team in 2023 to focus on arts and culture coverage — a beat Hell Gate has always embraced, but Adlan is the first staffer dedicated specifically to it. He covers what’s happening around the city and keeps readers up to date on the local art scene. His work has also appeared on Pitchfork, the New York Times Magazine and The New Yorker.

The post An NYC culture reporter on YouTube’s influence and the tab that got away appeared first on The Mozilla Blog.

Mike HommeyFirefox Git Migration, the unofficial guide

The migration is imminent. By the time you read this, it has probably already happened.

The official procedure to migrate a developer's workstation is to create a fresh clone and manually transfer local branches through patch files.

That can be a bit limiting, so here I'm going to lay out an alternative (unofficial) path for the more adventurous who want to convert their working tree in-place.

The first step—if you don't already have it—is to install git-cinnabar (version 0.7.0 or newer), because it will be temporarily used for the migration. Then jump to the section that applies to your setup.

Edit: But what section applies? You might ask.

If you're using Mercurial, you already know ;)

If you're using Git, the following commands will help you figure it out (assuming you already installed git-cinnabar, see below):

git cinnabar git2hg 05e5d33a570d48aed58b2d38f5dfc0a7870ff8d3

If the command prints out 9b2a99adc05e53cd4010de512f50118594756650, you want the section for gecko-dev. If it prints 0000000000000000000000000000000000000000, try the next command.

git cinnabar git2hg 028d2077b6267f634c161a8a68e2feeee0cfb663

If this command prints out 9b2a99adc05e53cd4010de512f50118594756650, go to the section for a pure git-cinnabar clone. Otherwise, try the next command.

git cinnabar git2hg 2ca566cd74d5d0863ba7ef0529a4f88b2823eb43

If this command prints out 9b2a99adc05e53cd4010de512f50118594756650, congratulations, you're already using the new repository. This can happen if you bootstrapped during roughly the second half of April. Go to the section for a recently bootstrapped clone for some extra cleanup.

If none of the commands above returned the expected output, I don't know what to tell you, unfortunately :(

Installing git-cinnabar

Run the following commands:

$ mkdir -p ~/.mozbuild/git-cinnabar
$ cd ~/.mozbuild/git-cinnabar
$ curl -sOL https://raw.githubusercontent.com/glandium/git-cinnabar/master/download.py
$ python3 download.py && rm download.py
$ PATH=$PATH:$HOME/.mozbuild/git-cinnabar

Migrating from Mercurial

This was covered in my previous post about the migration, but here is an up-to-date version:

As a preliminary to simplify the conversion, in your local clone of the Mercurial repository, apply your MQ stacks and create bookmarks for each of the heads in the repository.

Something like the following should list all your local heads:

$ hg log -r 'head() & draft()'

And for each of them, you can create a bookmark with:

$ hg bookmark local/<name> -r <revision>

(the local/ part is a namespace used to simplify the conversion below)

Then run the following commands:

$ git init
$ git remote add origin https://github.com/mozilla-firefox/firefox
$ git remote update origin
$ git -c cinnabar.graft=true fetch hg://hg.mozilla.org/mozilla-unified
$ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/local/*:refs/heads/*
$ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))

And you're all set. The local master branch will point to the same commit your Mercurial repository was checked out at. If you had local uncommitted changes, they are also preserved. Once you've verified everything is in order and have converted everything you need, you can run the following commands:

$ rm -rf .hg
$ git cinnabar clear

That will remove both the Mercurial repository and the git-cinnabar metadata, leaving you with only a git repository.

Migrating from gecko-dev

If for some reason you have a gecko-dev clone that you never used with git-cinnabar, you first need to initialize git-cinnabar, running the following command in your working copy:

$ git -c cinnabar.graft=true fetch hg://hg.mozilla.org/mozilla-unified

Once the above command has run, or if you have already used gecko-dev with git-cinnabar, you can proceed with the conversion. Assuming the remote that points to https://github.com/mozilla/gecko-dev is origin, run:

$ git remote set-url origin hg://hg.mozilla.org/mozilla-unified

Then, follow the instructions in the section below for migrating from a plain git-cinnabar clone.

Migrating from a plain git-cinnabar clone

Run the following command from your local working copy:

$ git -c cinnabar.graft=https://github.com/mozilla-firefox/firefox cinnabar reclone --rebase

That command will automatically rebase all your local branches on top of the new git repository.

If the reclone command output something like the following:

Could not rewrite the following refs:
   refs/heads/<name>
They may still be based on the old remote branches.

it means your local clone may have contained branches based on a different root, and the corresponding branches couldn't be converted. You'll have to go through them to rebase them manually.

Once everything is in order, you can finish the setup by following the instructions in the section below for migrating from a recently bootstrapped clone.

Migrating from a recently bootstrapped clone

Assuming the remote that points to the Mercurial repository is origin, run:

$ git remote set-url origin https://github.com/mozilla-firefox/firefox
$ git -c fetch.prune=true remote update origin
$ git cinnabar clear

Once you've run that last command, the git-cinnabar metadata is gone, and you're left with a pure git repository, as if you had cloned from scratch (except for some now-dangling git objects that will be cleaned up later by git gc).

You may need to adjust the upstream branches your local branches track. Run git remote show -n origin to check which remote branch each local branch is set to merge with. If you see entries like merges with remote branches/<something> or merges with remote bookmarks/<something>, you'll need to update your Git configuration accordingly. You can inspect those settings using the output of git config --get-regexp 'branch.*.merge'.

If you encounter any problem, please leave a comment below or ping @glandium on #git-cinnabar on Element.

This Week In RustThis Week in Rust 597

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is rust-sel4, a no_std crate to bind to the seL4 microkernel APIs.

Thanks to Robbie VanVossen for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Rust

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

389 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Strange week with lots of noise peeking through the performance runs. The only really significant change was a performance improvement that comes from allowing out of order encoding of the dep graph.

Triage done by @rylev. Revision range: 8f2819b0..25cdf1f6

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%   [0.1%, 3.0%]      77
Regressions ❌ (secondary)   0.6%   [0.1%, 2.4%]      77
Improvements ✅ (primary)   -0.7%   [-1.3%, -0.2%]   106
Improvements ✅ (secondary) -0.7%   [-1.2%, -0.2%]    29
All ❌✅ (primary)           -0.2%   [-1.3%, 3.0%]    183

4 Regressions, 2 Improvements, 4 Mixed; 2 of them in rollups. 38 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas
Language Reference

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-04-30 - 2025-05-28 🦀

Virtual
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

With Bevy clearly being an extended test suite for Rust's trait solver, how did you get the idea to also turn it into a game engine?

Every sufficiently advanced test is indistinguishable from a game engine 🙂

/u/0x564A00 and /u/_cart on /r/rust

Thanks to Ludwig Stecher and Josh Triplett for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don Martilinks for 30 April 2025

Some legal risk avoidance notes for independent sites: Online Safety Act Notes for Small Sites — Russ Garrett, A guide to potential liability pitfalls for people running a Mastodon instance by Denise. (Not legal advice, consult your lawyer…)

The one interview question that will protect you from North Korean fake workers by Iain Thomson. (yes, it’s a clickbait headline, but worth it.)

Europe Has Failed, But Ukraine Might Still Save It by Phillips P. O’Brien. Even though the return of the openly pro-Putin Donald Trump to the White House was at least a 50-50 proposition for most of 2024, European states refused to accept the reality staring them straight in the face.

Docling Technical Report. This technical report introduces Docling, an easy to use, self-contained, MIT-licensed open-source package for PDF document conversion. It is powered by state-of-the-art specialized AI models for layout analysis (DocLayNet) and table structure recognition (TableFormer), and runs efficiently on commodity hardware in a small resource budget. The code interface allows for easy extensibility and addition of new features and models.

Two Peoples by Brian Jacobs. The likes of Google and Meta realised early and have exploited brilliantly the reality that he who controls measurement controls revenue.

Why Individual Rights Can’t Protect Privacy by Daniel Solove. While I admire the CPPA’s effort to educate, the notion that the ball is in the individuals’ court is not a good one. This puts the onus on individuals to protect their privacy when they are ill-equipped to do so and then leads to blaming them when they fail to do so.

Cutting back poppies and harvesting seed on Weeding Wild Suburbia, How to Make Seed Bombs on MrBrownThumb. Happy California Golden Poppy season to all who celebrate.

Molly White’s napkin math: I saved hundreds or even thousands of dollars a month just from switching from Substack to self-hosted Ghost

BPS is a GPS alternative that nobody’s heard of by Jeff Geerling. (Not for navigation, for timekeeping)

Ian Jackson: Rust is indeed woke In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

Firefox Developer ExperienceFirefox WebDriver Newsletter 138

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 138 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 138, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

New: --remote-enable-system-access Firefox argument

A new Firefox argument, --remote-enable-system-access, was added to enable sensitive features, such as interacting with Browsing Contexts in the parent process (e.g., Browser UI) or using privileged APIs in content processes. This will be used for WebDriver BiDi features in the next releases, and can already be used with Marionette (see the Marionette section below).

Bug fixes:

WebDriver BiDi

Updated: webExtension.install command now installs web extensions temporarily

The webExtension.install command now installs web extensions temporarily by default, allowing it to be used with unsigned extensions – either as an XPI file or as an unpacked folder. A new Firefox-specific parameter, moz:permanent, has been added to force installation as a regular extension instead.
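As a hedged illustration of the raw protocol traffic (the command id and the path are hypothetical), installing an unsigned extension from an unpacked folder could look along these lines:

```json
{
  "id": 8,
  "method": "webExtension.install",
  "params": {
    "extensionData": {
      "type": "path",
      "path": "/home/user/my-extension"
    }
  }
}
```

Adding the Firefox-specific `"moz:permanent": true` parameter to `params` would instead request installation as a regular extension.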

Updated: browsingContext.setViewport command now supports userContexts

The browsingContext.setViewport command now supports a userContexts parameter, which must be an array of user context (Firefox container) ids. When provided, the viewport configuration will be applied to all Browsing Contexts belonging to those user contexts, as well as any future contexts created within them. This parameter cannot be used together with the existing context parameter.
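For illustration (the command id and viewport values are hypothetical; `"default"` is the built-in user context), a BiDi payload applying a viewport to a whole container might look like:

```json
{
  "id": 14,
  "method": "browsingContext.setViewport",
  "params": {
    "viewport": { "width": 390, "height": 844 },
    "userContexts": ["default"]
  }
}
```

Because `userContexts` and `context` are mutually exclusive, a payload carries one or the other, never both.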

Updated: browsingContext.Info now includes a clientWindow property

The browsingContext.Info type now includes a clientWindow property corresponding to the ID of the window owning the Browsing Context. It is typically returned by browsingContext.getTree or included in the payload of events such as browsingContext.contextCreated.

Marionette

Updated: --remote-enable-system-access required to use chrome context

Switching to the chrome (parent process) context with Marionette now requires using the --remote-enable-system-access command-line flag when starting Firefox.

The Mozilla BlogTackle tab overload with Firefox Tab Groups

A Firefox browser window showing the Mozilla homepage, with a "Create tab group" pop-up open to name a group "Mozilla" and select a color.

Open a web browser and you step into a garden of forking paths — where news, messages, memes, work, learning and cat videos all compete for your attention. Every click sprouts another tab. Before you know it, your author has opened 68,000 tabs in one year alone, while some people manage to keep 7,000 tabs open for years. Firefox Tab Groups is designed to help you organize tabs, offering a better way to browse.

(Eager to try it out? Jump to getting started with Tab Groups.)

How too many tabs drain your focus and time

Research shows that tab overload can start with just five to eight tabs open. And when you factor in how many tabs we open each day, it’s clear we need better tools to stay organized.

If you use a browser at work or school, this scenario may sound familiar: You are writing in one tab when you realize you need information from another. You go on a tab hunt, scanning the tiny icons in your tab bar, skimming the first few letters of each tab title, navigating from window to window, searching for that elusive document or presentation.

You finally find it. Victory is mine! But before you can celebrate, you have to find your way back to that first tab. And so the cycle of tab re-finding begins anew.

Or you glance at your browser and feel overwhelmed. So many semi-random tabs open that the only solution seems to be to declare tab bankruptcy, close everything and start over.

These are all symptoms of information overload, the condition of having more information than you need to make a decision or complete a task efficiently. It is one of the defining challenges of our digital age. As former Google CEO Eric Schmidt once said, “There were 5 exabytes of information created between the dawn of civilization and 2003, but that much information is now created every two days.” Whether or not you believe the numbers, the feeling is real: We are surrounded by more information, more distractions and more tabs.

Firefox Tab Groups are designed to give you more control, whether you manage thousands of tabs or prefer to keep just a few open.

How to organize your tabs with Tab Groups 

Tab Groups add a layer of color-coded organization to your browser, making it easier to keep related tabs together. You can create groups for topics, projects or recurring tasks — like the news sites you read daily, ideas for a new woodworking hobby or research for an upcoming trip to Thailand.

Ideas for organizing your tabs:

  • By urgency: Tabs for tasks you need to finish soon, like a “Friday to-do” list
  • By frequency: Sites you visit daily, such as news or email
  • By topic: Tabs for different courses, hobbies, or areas of interest
  • By project: Resources and tools collected for an important project
  • By type: Similar tabs grouped together, like PDFs or pages from the same site

Once you have a few groups set up, it becomes much faster to find your tabs and switch between tasks.

Getting started with Tab Groups

Starting with Firefox Version 138, available April 29, you can manage your tabs more easily with the new Tab Groups feature.

Here’s how to get started:

  • Create a group: drag a tab on top of another, pause and drop
    A Firefox browser window showing a "Mozilla" tab group collapsing, with individual tabs combining into a single group tab.
  • Name and color the group (name is optional)
    A Firefox browser window showing a "Manage tab group" pop-up where a tab group named "Mozilla" is being edited, with color options visible.
  • Add or remove tabs from a group by dragging tabs in and out of a group
    A Firefox browser window showing a tab being dragged into the "Mozilla" tab group.
  • Manage the group by right-clicking the group label. From there you can:
    • Create a new tab in the group
    • Move group to new window
    • Save and close the group to free up space on the tab bar
    • Ungroup tabs
    • Delete the group
  • Reposition a group on the tab bar by dragging it
  • Expand or collapse a group when you single-click the group label
    A Firefox browser window displaying the animation of a "Mozilla" tab group expanding to show its tabs.
  • Retrieve a group. Browse all groups in the List all tabs menu (a downward caret in the top right corner of the tab bar)
    A Firefox browser window showing the "Recent tab groups" menu, highlighting the "Mozilla" group with two tabs inside.

See our support page for Firefox Tab Groups for more details.

Other ways to use Tab Groups

If you manage a lot of tabs, you might want to explore Firefox’s new Vertical Tabs mode. With vertical tabs, you can expand or collapse the amount of the tab title you see, which can make it easier to re-find the tab you need.

You can also combine Tab Groups with add-ons to manage your tabs even more efficiently. If you’ve ever closed a tab and wished you could get it back, Firefox has plenty of add-ons to help you recover and organize your tabs.

New APIs (tools that help programs work together) are on the way, giving add-on developers even more ways to manage tabs and groups. If you’re using add-ons with Tab Groups today, just keep in mind that some add-ons may move tabs into or out of groups, or close grouped tabs.

Making tab management even smarter with AI

Tab Groups make it easier to stay organized, but even with better tools, tab management can still be a chore — especially as your habits and needs change. 

To make it even easier over time, we’re exploring new AI-powered tools for organizing tabs by topic. You can try an early prototype today with on-device AI in Firefox Nightly, our next-generation browser for testing and development.

Shape what’s next for managing tabs

Try out Tab Groups in Firefox and tell us what you think in our community forum, Mozilla Connect.

If you want a sneak peek at what’s next, you can also test an early AI-powered prototype in Firefox Nightly and look for the “Suggest more of my tabs” button when creating a group. And unlike other browsers, with Firefox you can always feel confident that no one sees your tabs except you, even if you organize them with AI.

The web will keep growing and changing. With Firefox, you stay in control of your tabs and the path you choose to take.

Get the browser that puts your privacy first — and always has

Download Firefox

The post Tackle tab overload with Firefox Tab Groups appeared first on The Mozilla Blog.

The Mozilla BlogYou asked, we built it: Firefox tab groups are here

What happens when 4,500 people ask for the same feature? At Firefox, we build it.

Tab groups have long been the most requested idea on Mozilla Connect – our community platform – and thanks to thousands of votes, comments and passionate feedback, it’s finally here. 🎉

But this is more than just a feature launch. It’s the story of what happens when community insight, real-world pain points, and a whole lot of curiosity come together.

A feature the community asked for, loud and clear

Just one day after Mozilla Connect quietly launched in March 2022, a request for tab groups appeared. We hadn’t even promoted the platform. There were no announcements. But the community found Mozilla Connect and rallied behind the request for tab groups.

“It’s still the number one most upvoted post on Mozilla Connect,” said Jon Siddoway, who helps surface user insights to Firefox teams. “Even when the feature was in beta, people were still voting for it and saying, ‘We want this.’”

At Mozilla, we work hard to make Firefox the best browser for you. Last year, we shared what we were working on – features that help you stay organized, like our handy sidebar, vertical tabs and tab groups. As we noted then, community feedback directly shaped what came next. 

“It’s still the number one most upvoted post on Mozilla Connect.”

Jon Siddoway, product manager of Mozilla Connect

That early request kicked off a collaboration between the Firefox team and community. Before any code was written, Jon summarized comments, tracked trends across 64+ pages of feedback and brought key themes to the team.

That enthusiasm spilled into beta testing. Before the official invite to community members went out, many of them discovered the hidden toggle in the Nightly release, turned it on themselves, and started sharing how to use it. The team watched, learned and iterated from the sidelines.

Listening and learning from thousands of voices

Stefan Smagula, product manager for the tab groups feature, didn’t just skim the posts – he dove in.

“I read Mozilla Connect every day for the first month,” he said. “Sometimes the ideas confirmed what we were already thinking. Other times they were totally new and unexpected, like requests for nested tab groups.”

But with over 1,000 comments and many differing opinions, how do you make decisions?

“Sometimes the ideas confirmed what we were already thinking. Other times they were totally new and unexpected.”

Stefan Smagula, product manager at Firefox

“You try to get to the underlying needs behind each request,” Stefan explained. “Instead of just implementing one person’s idea, you look for the broader pattern — the thing that could help the most people.”

This approach helped shape a feature that balances flexibility with simplicity. With tab groups, you can drag and drop tabs into organized groups, label them by name or color, and stay focused. Whether you’re a minimalist with 10 tabs or a power user juggling 10,000 (seriously — one of our colleagues does this), tab groups can help.

Browser window showing Firefox's tab grouping feature. A 'Create tab group' pop-up is open, with 'Thailand Trip' entered as the group name and a purple color selected. Tabs for 'Thailand Trip,' 'Google Flights,' 'Hotels.com,' and a 'New Tab' are visible, along with existing tab groups labeled 'Work,' 'Reading,' and 'Shopping.'
Keep it together — group your tabs by trip, work, or whatever you need!

“Tab groups aren’t just about decluttering,” Stefan said. “It’s about reclaiming your flow and finding focus again.”

It also reinforced the team’s belief that done is never truly done.

“Tab groups aren’t just about decluttering. It’s about reclaiming your flow and finding focus again.”

Stefan Smagula, product manager at Firefox

What’s next: Make tab groups smarter

Once early testers began using tab groups in Firefox Nightly and Beta, feedback kept rolling in – both on Mozilla Connect and in places like Reddit and X, where Stefan scouts for feedback. Many users wanted less friction and more flow when managing their tabs, which inspired the team to explore the next step: having the browser help organize things automatically. 

Now, the team is experimenting with smart tab groups, a new AI-powered feature that suggests names and groups based on the tabs you have open. Other browsers might send your tab info to the cloud, but Firefox does the work on your device, so your tabs stay private and never leave it.

“I used to have 30 windows open, each with 30 or 40 tabs. Smart tab groups changed the way I work. It made it easier to find what I need and resume tasks faster,” said Stefan.

It’s just the beginning of what’s possible when you pair smart tech with real human needs.

“I used to have 30 windows open, each with 30 or 40 tabs. Smart tab groups changed the way I work. It made it easier to find what I need and resume tasks faster.”

Stefan Smagula, product manager at Firefox

Thank you – and keep the ideas coming

This feature wouldn’t exist without you. Your upvotes, comments, ideas and testing helped bring it to life.

As Stefan put it: “It’s extremely motivating to know how many people want this. It makes the hard work easier and more meaningful.”

So if you’ve ever felt tab overload — or if you just want your browser to feel a bit more like your own — try out tab groups. Share what you love and what you’d change.

“It’s extremely motivating to know how many people want this. It makes the hard work easier and more meaningful.”

Stefan Smagula, product manager at Firefox

You can join the conversation anytime on Mozilla Connect. 💬

Get the browser that puts your privacy first — and always has

Download Firefox

The post You asked, we built it: Firefox tab groups are here  appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox DevTools Newsletter — 137

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 137 release cycle. As always, I’m quite late writing these updates, but better late than never, so here we go 🙂

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

Firefox inspector markup view. A search input is filled in with "button", a label next to it indicates that the current search result is the second out of 27. Next to this label, there are two icons, to navigate to respectively the previous and the next result (the tooltip for the second button indicates "Next result")

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Network Monitor

Julian added a new feature for the Network Monitor: network response override (#1849920). You can check the nicely detailed blog post we wrote about it: https://fxdx.dev/network-override-in-firefox-devtools/. TL;DR: you can override any network request with a local file, which can be very handy when you need to fix something without having the ability to modify the file on the server.


Hubert added early hint headers in the headers panel (#1932069)

A new section in Netmonitor Headers sidebar, whose title is "Early Hints Response Headers", with the size of said Headers  (339 B) There are 3 headers displayed : link, original-trial and X-Firefox-Spdy

The Early hints response headers section shows the headers provided by HTTP 103 informational response. For each line in the early hints response headers section, a question mark links to the documentation for that response header, if one is available.


In some cases, subsequent cached requests for scripts would not appear in the list of requests; this is now fixed (#1945492)

Finally, following what we did in Firefox 136 in the JSON Viewer, the Netmonitor Response sidebar will show the source value, as well as a badge showing the JS-parsed value for numbers that can’t be accurately represented in JavaScript (for example, JSON.parse('{"large": 1516340399466235648}') returns { large: 1516340399466235600 }) (#1942072)

Debugger

There was a bug in the variable tooltip where it wasn’t possible to inspect a variable’s properties (#1944408). This is now fixed, and in general, the tooltip should be more reliable (#1938418). We also fixed a couple of issues with navigating to a function definition from the tooltip (#1947692, #1932021)

Inspector

You might not know this (I definitely didn’t), but font files contain very handy metadata like the font version, designer URL, license information, and more.
We’re now displaying those in the Fonts panel (under the “All Fonts on Page” section), so you can, for example, find other awesome fonts from the designer of a font you like.

In Firefox DevTools fonts panel, in the "All Fonts on Page" section, the "Mozilla Headline" font is being used, and some new information is now visible (e.g. Version: 0.100, Designer: Studio DRAMA, …)

CSS Nesting usage is on the rise, and with that, we’re getting reports of issues in the Inspector, especially since the change in the specification that resulted in the addition of CSSNestedDeclarations rules. In 137, we fixed a couple issues:

  • Declarations after a nested rule were incorrectly displayed in their parent rule (#1946445)
  • Adding a declaration in the Rules view would add it after any nested declaration (#1954704)

We know we still have other issues with those CSSNestedDeclarations (#1946439, #1960123, #1951605) and we’re actively working on fixing them.

Misc

We made the search feature in the Style Editor much more usable; you can now hit Enter multiple times to navigate through the results in the stylesheet (#1846465).

Finally, we fixed an important issue that could lead to a blank screen when using about:debugging to inspect a page in Firefox for Android (#1931651).

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

Full list of fixed bugs in DevTools for the Firefox 137 release:

Mozilla Localization (L10N)Lost (and Found) in Translation: My Internship Story

If you were to ask my parents or sister what my favourite hobby was as a child, they’d say something along the lines of “sitting in front of our family computer”. I’d spend hours browsing the internet, usually playing Flash games or watching early YouTube videos. Most of my memories of using the computer are now a blur; however, one detail stands out. I distinctly remember that our family computer used Mozilla Firefox as our primary internet browser. So imagine my surprise when I was offered an opportunity to intern here at Mozilla!

In the midst of my third year studying Computer Engineering at the University of Toronto, I had been searching for a 12-month internship to complete my Professional Experience Year (PEY) Co-op credit. Incredibly, I landed the privilege of working at Mozilla for 12 months alongside 17 other students. Coincidentally, one of my closest friends from high school would also be completing his internship at Mozilla!

As a Software Engineer (SWE) Intern, I had been hired on the Localization (L10N) team, and would be based out of the Toronto office. I had already connected with both my manager, Francesco “Flod” Lodolo, and my mentor, Matjaž Horvat, before my start date. I couldn’t wait to begin my internship, and after I finished my final exam for third year, I began counting the days before my start date.

LGTM! (Onboarding)

From our first day at the office, I knew I was going to love working here. The Toronto office is so vibrant and filled with some truly amazing people! After finishing the office tour with the rest of the interns, we booted up our computers and began installing all our tools. Luckily for me, Ayanaa (who was the previous SWE Intern on the Localization team) was in the office too. She would be here until the end of August, helping to mentor and guide me along the way.

With her help, I got started on some bug fixes in Pontoon, Mozilla’s translation management system. I was mainly using Python (specifically the Django framework) and JavaScript/TypeScript (React) for the duration of the internship. Since I had some prior internship experience with these tools, I was able to hit the ground running, and by the end of my third month I had already completed 12 tickets! Matjaž and Flod were both instrumental in my progress, and with their help, I narrowed down the larger projects I wanted to work on for the rest of my internship.

I also took an interest in web standards within my first few months. Eemeli, the other engineer on our team, was an active contributor to the MessageFormat2 API, a new Unicode standard for localization. With his support, I was able to attend the Working Group’s weekly meetings. These meetings included some of the most influential and experienced people in this domain, spanning across many large companies and organizations.

Our first day tour of the Toronto office!

Coast to Continent to Coast (MozWeek and Work Week)

Around the middle of August, we were given the opportunity to attend MozWeek 2024, which is our annual week-long, company-wide conference. MozWeek 2024 was being held in Dublin, Ireland, so this was my first time ever travelling to Europe! From day one, the atmosphere at The Convention Centre Dublin was electric. I could tell a lot of thought, planning, and care went into creating the best possible experience for all employees. Throughout the week, we attended plenary talks, workshops, and strategic meetings.

Seeing how Mozilla is a remote-first international company, this was the first time I had met any of my full-time colleagues in person. It was so nice to finally see and chat with them outside my laptop screen. We even had our team dinner next to the famous Temple Bar! In our free time, the other interns and I had a blast walking through the streets of Dublin, and exploring what Ireland has to offer.

The interns and I at the MozWeek 2024 Closing Party, hosted at the Guinness Storehouse.

Dublin wasn’t my only travel destination though. Each team meets up once a year in one of Mozilla’s many office spaces across the world. Owing to our remote-first policy, these ‘Work Weeks’ are an opportunity for teams to reflect on the past year and align on OKRs for the coming year. Our Work Week happened in November, in sunny San Mateo, California, marking my first time on the West Coast! The Work Week was a great experience filled with good food, and it was super fun to explore San Francisco in my free time.

L10N team dinner at Porterhouse Restaurant San Mateo!

Building for a Better Web (Projects Overview)

One of my favourite parts of working at Mozilla was that almost all of my work was public-facing. I worked on three major projects during my internship, so here’s a brief description of each:

Pontoon Search

My first major project had me improving Pontoon’s search capabilities. Despite the many filters Pontoon already contained to sift through over 4.5 million strings, there were still no options for common filters like ‘Match Case’ or to limit a search to specific elements, like source text. My job was to create a new full-stack feature to enable users to refine their search queries. By leveraging TypeScript, React, and Django’s ORM capabilities, I created a new search panel with 5 options for users to toggle:

Multiple options

Improving search in Pontoon not only made the user experience more streamlined, but also improved Pontoon’s API capabilities, which were later used in the Mozilla Language Portal (see below).

Pontoon Achievement Badges

My second major project involved adding gamification elements into Pontoon. In a nutshell, we wanted to implement achievement badges into Pontoon to recognize contributions made by our vibrant volunteer community, while also further promoting positive behaviours on the platform. Ayanaa had created both the proposal document and technical specification before her term ended, so it was my job to implement the feature. This project mainly involved TypeScript and a bit of Django for counting badge actions, and the initial user feedback was overwhelmingly positive! For more information, check out the blog post I wrote to announce the feature.

Achievement badges

Mozilla Language Portal

My final project, and the one I had the most ownership over, was the creation of the Mozilla Language Portal. For a long time, the localization industry was missing a central hub for sharing knowledge, best practices, and searchable translation memories. We decided it was a good idea to leverage our influence to create the Mozilla Language Portal, in hopes of filling this gap and making localization tools themselves more accessible. We decided to create the Portal using Birdbox, an internal tool created by the websites team to quickly spin up Mozilla-branded web pages. The deployment of the Portal was handled primarily through Google Cloud Services and Terraform, which was a whole new set of tools for me to learn. The website itself was made using Wagtail CMS, built on top of Django. With the help of the Websites and Site Reliability Engineering teams, I was able to both create the MVP and deploy the site.

Closing Thoughts

Since taking an anthropology course in my third year of university, I’ve come to appreciate how important human connection and social interactions are, especially in this day and age. Most people would agree that technology (in particular the internet) has now thoroughly integrated itself into the fabric of our societies, so I believe it’s in our collective best interest to keep the internet in a healthy and open state. In recent years, it sadly seems like many bad actors are increasing their influence and control over what should be a vital and protected resource. As one of my long-term goals, I want to focus my career towards improving the internet and using its influence over society for good.

So naturally with this goal in mind, Mozilla’s position as a non-profit organization dedicated to creating an open and accessible web was a perfect fit for me. Coincidentally, Localization was also the perfect team for me. As a very community-facing team, Localization gave me the unique chance to see the direct results of creating technology to make the internet more accessible, and I was able to explore my burning interests such as web standards.

I think it goes without saying that the lessons I learned at Mozilla, both from an engineering perspective and from a community perspective, will stick with me for the rest of my career. Regardless of whether I continue to be a SWE in the future, I want to focus on creating technology to grow and help humanity, and thus I’ve promised myself to only work for organizations whose missions I align with.

To me, my time at Mozilla will always be emblematic of my growth: as a student, as an engineer, and as an individual. They say all good things must come to an end, but I oddly don’t feel as though my time at Mozilla is coming to an end. The lessons instilled in me and the drive to keep fighting for an open web won’t ever leave me.

Team photo with everyone! Taken in August 2024

 

Acknowledgements

I’d like to dedicate this section to my amazing team that has supported me and helped me grow both professionally and personally this past year.

To Ayanaa, thank you for being a great coworker, mentor and friend. I’ve been following the path you carved out, both at Mozilla and beyond, and I’m extremely grateful for all the advice and support you gave me throughout.

To Matjaž, I can’t really put into words how helpful and kind you have been to me. You truly have a talent for mentoring, and I’m so incredibly grateful you were my mentor. I hope you continue to inspire others the way you’ve inspired me. Let’s hope Lebron and Luka can win it all (eventually).

To Flod, your support as my manager has been monumental to my professional development. Thank you for being patient with me, and for supporting all of my interests and endeavors during my term. It sounds cliché, but I truly couldn’t have asked for a better manager.

To Eemeli, thank you for supporting my interest in MessageFormat2. Your great sense of humour will definitely stick with me, and you’ve inspired me to carry on your tradition of taking walks during online meetings.

To Bryan, it was always such a pleasure to speak and work with you. I’m glad I had someone else to nerd-out with about Pokémon! I really appreciate how we could always find something to talk about.

To Peiying, I loved hearing all about your travel anecdotes during MozWeek and our Work Week. I promise to keep my photo blog updated as long as you do too! I hope to see you and Leo again soon.

To Delphine, your enthusiasm and bubbly personality always brought a smile to my face. It was so nice to finally have met you during our Work Week! Congrats again on all your personal achievements in this past year.

And thank you to all the Mozillians I’ve had the privilege to work with this past year, both in the Toronto office and across the globe. I’m sure our paths will cross again! As they say, “once a Mozillian, always a Mozillian”.

*Thanks for reading, and if you’d like to learn more or connect with me, please feel free to add me on LinkedIn*

Don MartiWinners don’t click search ads

Another FBI announcement about search ads is up: Cyber Criminals Impersonating Employee Self-Service Websites to Steal Victim Information and Funds.

Cyber criminals use advertisements that imitate legitimate companies to misdirect targets conducting an internet search for a specific website. The fraudulent URL appears at the top of search results and mimics the legitimate business URL with minimal differences, such as a minor misspelling. When targets click on the fraudulent advertisement link, they are redirected to a phishing website that closely mirrors the legitimate website. When the target enters login credentials, the cyber criminal intercepts the credentials.

And the FBI repeats the advice from last time (don’t make me tap the sign).

Use an ad blocking extension when performing internet searches.

The previous announcement has been removed but the FBI is still on about this problem.

It is possible to fix Google Search to remove the ads, among other things. For now, as the FBI points out, the safest thing to do is block the ads now and turn them back on for legit sites later. And it is still a good idea to get into the habit of using a browser bookmark, not the search box, to navigate to sites you have an account on, especially SaaS applications and financial services sites.

This isn’t just the FBI giving Google grief because of some political issues. It looks like Google’s Ad Safety Report for 2024 got edited with a view to making it more Russia/Republican-friendly—Google is no longer removing ads for misinformation, which is an issue for that faction here—but the big issue is that they’re understaffing the ad review department. More: Google Ads Shitshow Report 2024

(source of the title for this post: Winners Don’t Use Drugs)

Related

how to break up Google This kind of thing would not be so much of an issue if the search market were more competitive. IT departments would be able to configure the search engine for employee use based in part on security issues like this. Legit b2b search advertisers could still get their ads seen, instead of getting blocked along with the fraud.

Return of the power user Advanced PC users used to have a better experience because they could customize early microcomputers that were poorly set up by default and get them to work right. Then the mainstream mass-market computers entered the Windows XP/Mac OS X era, when the hardware was easier to set up correctly and the software was more stable, better designed, and updated automatically—so the upside of learning to dink with your computer was lower. Now, the mainstream computers are designed to surveil and upsell users to other products and services, so dinking with your computer can make it a lot better again. (Another good recent example: New Windows 11 trick lets you bypass Microsoft Account requirement)

time to sharpen your pencils, people The fraud issue might be another good, politically neutral way to justify moving ad budgets away from surveillance oligarchs and toward legit content.

Bonus links

Bot farms invade social media to hijack popular sentiment by Eric Schwartzman. In a world where all information is now suspect and decisions are based on sentiment, bot farm amplification has democratized market manipulation. But stock trading is only one application. Anyone can use bot farms to influence how we invest, make purchasing decisions, or vote. (icymi: The majority of traffic from Elon Musk’s X may have been fake during the Super Bowl, report suggests by Matt Binder)

How to Prepare (Not Prep) for Uncertain Times… and Build a Better World in the Process by Susan Kaye Quinn. Look around, see what’s already happening in your community: generally a really good first option before assuming you have to do everything yourself or start something new.

Bringing Pollinators Back to Bay Farm Island: A Community Effort by Kristen Smeal and Mike Nettles. (open for volunteers)

Resistance from the tech sector by Drew De Vault The fact of the matter is that the tech sector is extraordinarily important in enabling and facilitating the destructive tide of contemporary fascism’s ascent to power….It’s clear that the regime will be digital. The through line is tech – and the tech sector depends on tech workers. That’s us. This puts us in a position to act, and compels us to act.

Don’t Forget The Forgotten Tech User by Ernie Smith. Fact is, there are a lot of people like this out there, who don’t necessarily want to be forced to buy the latest and greatest thing….

It’s Safer in the Front: Taking the Offensive against Tyranny Faced with intensifying repression and state violence, there is an understandable inclination to seek safety by avoiding confrontation. But this is not always the most effective strategy.

Law professors side with authors battling Meta in AI copyright case by Kyle Wiggers. The brief, filed on Friday in the U.S. District Court for the Northern District of California, San Francisco Division, calls Meta’s fair use defense a breathtaking request for greater legal privileges than courts have ever granted human authors.

The Shocking Far-Right Agenda Behind the Facial Recognition Tech Used by ICE and the FBI by Luke O’Brien. This story, based on interviews with insiders and thousands of newly obtained emails, texts, and other records, including internal ICE communications, provides the fullest account to date of the extent of the company’s far-right origins and of the implementation of its facial recognition technology within the federal government’s immigration enforcement apparatus.

Why Are All the Smart People So Bad at History? by Joan Westenberg. This is a subculture that praises nuance and complexity in physics and economics but laps up the most simplistic historical narratives imaginable. (fwiw, it’s the same in advertising. People who can learn hella math don’t bother to learn the human factors. Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them. There’s already a hyper-personalized medium, it’s called cold calls. And people hang up on those.)

Heat pumps outsold gas furnaces again last year — and the gap is growing by Alison F. Takemura. According to data from the Air-Conditioning, Heating, and Refrigeration Institute released last week, Americans bought 21 percent more heat pumps in 2023 than the next-most popular heating appliance, fossil gas furnaces. That’s the biggest lead heat pumps have opened up over conventional furnaces in the two decades of data available from the trade group.

The Mismeasure of Man by Mandy Brown. As Gould capably shows, every effort to quantify intelligence has been beset by racist tautologies, errors of logic, mathematical mistakes, and repeated instances of fraud. We presume that intelligence is quantifiable but more than a century of efforts to adequately quantify it have failed. (icymi: IQ is largely a pseudoscientific swindle by Nassim Nicholas Taleb)

a promise, a threat, and a proportional response (Why declaring yourself a Nazi is different from declaring yourself as a member of other movements or factions.)

Cameron KaiserA PowerBook G4 reporting the news

The San Francisco Chronicle had an article today on the retirement of KCBS political reporter Doug Sovern. I'm an all-news-radio junkie and I happen to enjoy his pieces when I'm in the Bay Area, but that wouldn't merit a mention here except for this photo:
This is a KCBS photo of Sovern filing a report, or something, at the 2008 Republican National Convention in St. Paul, Minnesota. (No politics in the comment section, please.) Although the camera's white balance was displeasingly set somewhere between lemon and urine sample, or there was an inopportune incandescent bulb in the way, he's quite clearly typing on a late-model 15" PowerBook G4 — besides the dead-on match for the ports and power supply, the MacBooks of the era have a different keyboard and an iSight in the screen bezel, which this one lacks. The screen is difficult to see clearly but looks like Safari viewing Sovern's own site ("Sovern Nation") on KCBS, and the menu bar seems consistent with Tiger. While it would have been only a couple of years into the Intel transition at this point, it's nice to see it still being used.

Other points of interest include all the good old analogue equipment (probably for pool audio), an ugly PC laptop with what looks like a Designed for Windows XP sticker being used by somebody with a bandanna, and in the foreground a touch-tone landline phone, which might as well be an alien artifact to anyone younger than a certain age. Enjoy your retirement, Doug.

Mozilla Localization (L10N)L10n report: April 2025 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

What’s new or coming up in Firefox desktop

There are a number of new features launched recently or upcoming in Nightly to look out for.

Smart Tab Grouping

With the recent release of Tab Groups in Firefox 137, we’ll see additional enhancements in the future. Currently available only in English on Nightly, Smart Tab Grouping uses a local AI model to suggest similar tabs to group together.

Link Previews

Coming to Firefox Labs in 138, Link Previews uses a local AI model to let you quickly see what’s behind a link by distilling key points from the page.

Signing in PDFs

You have likely seen these strings while working on Beta, but the ability to add signatures using the built-in PDF editor will be released fully in the upcoming 138 release on April 29.

What’s new or coming up in mobile

We’re adding customization options for Firefox icons on mobile! Some of the icon names may be tricky to localize, so we’ll be sharing a reference sheet that includes each icon along with its visual and contextual usage. This will help you choose the most accurate and user-friendly translations for your locale. Keep an eye out for upcoming Pontoon notifications for more details!

What’s new or coming up in web projects

AMO and AMO Frontend

To enhance user experience, the AMO team has established a minimum translation completion threshold of 80% for locales to remain on production sites. The team will start implementing the new policy in May. Last month, locales with a completion rate of 40% or lower were removed from the production site. However, affected communities can continue making progress in Pontoon, and their status will change once they meet the threshold.

Once this new standard is fully implemented, the addon team will reassess the list of locales on a monthly basis, evaluating those that have met or fallen below the 80% threshold. Based on this review, they will determine which locales to retain and which to remove from the production site. Regardless of your locale’s current status, you can check your work in context using the links to the production, staging, and developer sites which can be found on the top left of the project dashboards.

What’s new or coming up in Pontoon

We’re working on some sizable back-end improvements to how Pontoon internally represents and deals with translatable messages, i.e. source-locale entries and their translations. Thus far we’ve refactored Pontoon’s sync code (how it reads from and writes data to project repositories) and the serialization of our supported file formats; the next step will be replacing our file format parsers.

Mostly this work should remain invisible to users, though it has already allowed us to fix quite a few long-standing bugs and improved sync performance. Eventually, this will make it much easier for us to expand the file formats and features supported by Pontoon.

Events

We are hosting our first localization office hour on Apr 30, 2025 at 3:30pm UTC. It will be live streamed on both AirMozilla and YouTube (recordings can be found at the same links). This session will focus on common errors localizers may encounter and how to overcome them. Feel free to ask questions beforehand via the Google form or reach out directly to delphine at mozilla dot com.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Mozilla Blog: Mozilla, EleutherAI launch toolkits to help AI builders create open datasets

Wireframe illustration of a dense urban cityscape with overlapping geometric buildings and cylindrical structures on a black background.

Easy-to-follow guides on how to transcribe audio files into text using privacy-friendly tools, and how to convert different documents into a single format. Watch the live demo here.

As concerns around AI transparency grow, datasets remain one of the least visible and least standardized parts of the pipeline. Many are assembled behind closed doors, with little documentation or clarity around sourcing. Independent developers are often left without the infrastructure or tools needed to do things differently. 

Mozilla and EleutherAI’s year-long collaboration aims to change that. They’re releasing two toolkits that help developers build large-scale datasets from scratch—whether that means extracting content from PDFs, structuring web archives, or simply documenting what they’re using in a clear and reusable way.

These toolkits help developers get started with creating open datasets. The code and demos will be available on the Mozilla.ai Blueprints hub, a platform that helps developers prototype with open-source AI using out-of-the-box workflows. 

Toolkit 1: Transcribing Audio Files with Open-Source Whisper Models

This Blueprint guides developers through transcribing audio using open-source Whisper models via Speaches, a self-hosted server similar to the OpenAI Whisper API. Designed for local use, this privacy-focused setup offers a secure alternative to commercial APIs, making it ideal for handling sensitive or private audio data. Inspired by real-world use cases, the toolkit features an easy-to-follow setup using either Docker or the CLI. 

Toolkit 2: Converting Unstructured Documents into Markdown Format

This toolkit helps developers convert diverse document formats (PDFs, DOCX, HTML, etc.) into Markdown using Docling, a command-line tool with powerful Optical Character Recognition and image-handling capabilities. Ideal for building open-text datasets for use in downstream applications, this toolkit emphasizes accessibility and versatility, including batch-processing capabilities. 

Mozilla and EleutherAI’s partnership included an AI dataset convening, which brought together 30 leading scholars and practitioners from prominent open-source AI startups, nonprofit AI labs, and civil society organizations to discuss emerging practices for a new focus within the open LLM community, culminating in the publication of the research paper “Towards Best Practices for Open Datasets for LLM Training.” The new toolkits are a final milestone in this partnership and a resource to help builders put the previously shared best practices into action. 

“As AI development continues to move at warp speed, we must ask ourselves ‘how can we responsibly curate and govern data so that the AI ecosystem becomes more equitable and transparent,’” says Ayah Bdeir, Mozilla Foundation Senior Advisor, AI Strategy. “Today’s open data ecosystem depends on the community sharing its expertise, and our partnership with EleutherAI is part of our commitment to support incredible builders who are iterating and experimenting on the front lines of open source AI.”

Currently, the threat of litigation is often cited as a reason for minimizing dataset documentation, hindering transparency and innovation. Building open-access data is the antidote. Building a future of responsibly curated, openly licensed datasets requires collaboration across legal, technical, and policy fields, along with investments in standards and digitization. In short, open-access data can address many AI challenges, but creating it is difficult. The toolkits from EleutherAI and Mozilla are a crucial step in making this process easier.

“Creating high-quality, large-scale datasets is one of the biggest bottlenecks in AI development,” says Stella Biderman, Executive Director, EleutherAI. “Developers—especially those outside of major tech firms—often resort to whatever data is easiest to access, even when more valuable sources are trapped in PDFs or audio. These tools make it easier for open-source developers to unlock that data and build stronger, more diverse datasets.”

Update: On April 28, EleutherAI and Mozilla hosted an event to demo the two blueprints. Watch the demo here.

The post Mozilla, EleutherAI launch toolkits to help AI builders create open datasets appeared first on The Mozilla Blog.

Data@Mozilla: Comparing data-stewardship at Mozilla with Lauren Maffeo’s book “Designing Data Governance from the Ground Up”

Data Stewardship: A Mozilla Perspective

In Designing Data Governance from the Ground Up, author Lauren Maffeo presents data stewardship as a pivotal role in data governance that is focused on maintaining data quality, consistency, and usability. Data stewards, in her view, are operational experts who ensure that data is of the highest quality, aligns with organizational standards, and supports business objectives.

At Mozilla, rather than taking such a broad role in data governance, a data steward’s responsibilities are deeply intertwined with the organization’s commitment to user privacy and ethical data practices. This approach reflects Mozilla’s mission to promote an open and accessible internet while safeguarding user trust.

Maffeo’s Framework: Operational Excellence

Maffeo outlines data stewards as key players in:

  • Ensuring Data Accuracy: Identifying and correcting data quality issues.
  • Maintaining Metadata: Documenting data definitions and standards.
  • Enforcing Policies: Applying data governance policies consistently.
  • Facilitating Collaboration: Bridging gaps between technical and business teams.

This model emphasizes the importance of data stewards in operationalizing data governance to enhance data quality, decision-making, and organizational efficiency. This work is spread amongst the product, data, data-engineering, and other organizations at Mozilla.

Mozilla’s Approach: Privacy-Centric Stewardship

At Mozilla, data stewards focus on:

  • Evaluating Data Collection Requests: As outlined in Mozilla’s Data Collection documentation, data stewards are responsible for reviewing proposed data collections to ensure they align with Mozilla’s Data Privacy Principles, which emphasize user control, transparency, and minimal data collection.
  • Collaborating Across Teams: Working with engineers, product managers, and legal teams to assess the necessity and impact of data collection and helping to ensure the collection is properly categorized and documented in a public way that is accessible to our users.
  • Advocating for Lean Data Practices: Promoting the collection of only essential data needed to improve user experiences, in line with Mozilla’s commitment to user privacy.
  • Guiding Data Publishing: Ensuring that any data shared publicly adheres to Mozilla’s Data Publishing policies, which categorize data sensitivity and dictate appropriate aggregation levels to protect user anonymity.

This stewardship model is proactive, emphasizing ethical considerations and user trust over data quality and operational efficiency.

Mozilla’s Data Stewardship in Practice

Mozilla’s data stewards operate within a structured framework that includes:

Data Collection Review: Any new data collection undergoes a review process to assess its necessity, potential privacy impact, and alignment with Mozilla’s principles. This includes ensuring data is correctly categorized by its sensitivity in order to ensure it is properly handled.

User Control and Transparency: Mozilla ensures users have meaningful choices regarding data collection, including the ability to opt-out and have their data deleted.

Public Data Sharing: When publishing data, Mozilla applies rigorous standards to prevent the release of sensitive information, following guidelines outlined in their Data Publishing documentation.

This approach means that data stewardship at Mozilla is less about managing data and more about upholding the organization’s core values of user privacy and transparency.

Conclusion

Lauren Maffeo’s framework provides a solid foundation for understanding the operational aspects of data governance. Mozilla’s implementation of data stewardship focuses this role on ethical responsibility and user advocacy. At Mozilla, data stewards are less “custodians of data quality” and more “champions of user privacy”, ensuring that every data-related decision aligns with the organization’s mission to foster an open and trustworthy internet.

If you’re interested in learning more about Mozilla’s data practices or becoming involved in data stewardship initiatives, feel free to reach out to the Data Stewardship team.

Don Marti: remove AI from Google Search on Safari

It turns out that it is possible to remove the AI slop and other extra crap from the top of Google Search in the Safari browser. These steps are based on a helpful post on the Apple Community board.

  1. Install Customize Search Engine from the Mac App Store.

  2. In Safari, select Settings from the Safari menu, then select Extensions and enable CSE.

  3. From the Applications folder, open CSE then click Default Search Engine. From Recommended Search Engines, choose Google &udm=14.

Set AI-free Google search as the default search engine for Apple Safari, using Customize Search Engine

And that’s it. Works for me.
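Under the hood, the extension is essentially filling in a search-URL template; the udm=14 parameter selects Google’s plain “Web” results view, which skips the AI Overview and most extra modules. A minimal sketch of such a template in JavaScript (the function name here is ours, for illustration):

```javascript
// Build an AI-free Google search URL: udm=14 selects the plain
// "Web" results view.
function webOnlySearchUrl(query) {
  const u = new URL("https://www.google.com/search");
  u.searchParams.set("q", query);
  u.searchParams.set("udm", "14");
  return u.toString();
}

console.log(webOnlySearchUrl("firefox nightly"));
// https://www.google.com/search?q=firefox+nightly&udm=14
```

Pasting such a URL into any browser is a quick way to check the effect before installing an extension.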

Related

remove AI from Google Search on Firefox

Bonus links

util-ai is one package that contains literally every function imaginable. (powered by ChatGPT…what could possibly go wrong?)

The Rise of Slopsquatting by Sarah Gooding. It refers to the practice of registering a non-existent package name hallucinated by an LLM, in hopes that someone, guided by an AI assistant, will copy-paste and install it without realizing it’s fake. It’s a twist on typosquatting: instead of relying on user mistakes, slopsquatting relies on AI mistakes.

OpenAI offers to buy the Chrome web browser from Google. Uh huh. by David Gerard. This is beyond the vibes of AOL buying Time Warner…

Google Isn’t Launching A User Choice Prompt For Third-Party Cookies In Chrome by Allison Schiff. (In case you still need to turn them off, along with some related stuff: Google Chrome ad features checklist)

Firefox Nightly: A Tab Groups Scoop – These Weeks in Firefox: Issue 179

Highlights

  • The WebExtensions team is fast-tracking support for “tab groups”-related updates to the tabs API (the updates have landed in Nightly 139 and been uplifted to Beta 138)
  • New Picture-in-Picture captions support was added to several sites including iq.com, rte.ie and joyn.de. Thanks to kernp25 and cmhernandezdev for their contributions!
  • The Profiles team is happy to report that the feature is currently in 138 beta with no open blockers from QA!
    • Next up, we plan to do a 0.5% rollout in 138 release. We’re being extremely cautious because profiles are where user data is stored, and we need to get this right.
  • The WebExtensions team has introduced a new pref to allow developers to more easily test the add-on update flow from about:addons. Setting extensions.webextensions.prefer-update-over-install-for-existing-addon to true changes the behavior of the “Install Add-on From File…” menu item to use the update flow rather than the install flow for pre-existing add-ons (Bug 1956540)
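For reference, the same pref can be flipped persistently from a user.js file in your profile directory (a sketch; toggling it in about:config works equally well):

```js
// user.js: make "Install Add-on From File…" use the update flow
// instead of the install flow for add-ons that are already installed.
user_pref("extensions.webextensions.prefer-update-over-install-for-existing-addon", true);
```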

Friends of the Firefox team

Introductions/Shout-Outs

  • Welcome to Joel Kelly who is joining the New Tab front-end team!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gautam Panakkal
  • Jason Jones
  • Kernp25
  • Ricardo Delgado Gomez

New contributors (🌟 = first patch)

  • 🌟 Yakub Abdulrahman Alada: Bug 1917908 – Improve layout of the about:translations controls for small-width screens
  • Brian Ouyang: Bug 1948995 – Allow Full-Page Translations on moz-extension URLs
  • Cruz Hernandez: Bug 1958974 – Updated disneyplus PiP wrapper
  • Gautam Panakkal
    • Bug 1640117 – update console warnings to state ‘enhanced tracking protection’ instead of ‘content blocking’
    • Bug 1860037 – Split up and clean up browser/extensions/formautofill/test/browser/creditCard/browser_creditCard_telemetry.js
    • Bug 1955584 – Set right margin equal to top/bottom margin on vertical tab close buttons
  • 🌟 Isaac Briandt: Bug 1944944 – Update code to align with A11y Audit for Full-Page ARIA Attribute Translations
  • 🌟 Jason Jones
    • Bug 1176600 – Remove defunct pref listed in m-c prefs/all.js
    • Bug 1689254 – Lazily initialize zoom UI
    • Bug 1855787 – Persist translation panel intro until first translation is complete
    • Bug 1956009 – Remove browser/base/content/test/zoom/browser_default_zoom_multitab_002.js
  • joel.mozillaosi: Bug 1953387 – [devtools] Display empty string for undefined/NaN in netmonitor time columns
  • keanucuco: Bug 1855839 – [Translations] Refresh offline translation language list via TranslationsView observer
  • Abdelaziz Mokhnache: Bug 1957554 – add sort predicate for path column
  • 🌟 Ricardo Delgado Gomez
    • Bug 1815793 – Display error when failing to load supported languages
    • Bug 1952132 – Add a border-radius to new-tab broken-image tiles, for consistency with other tiles
  • 🌟 Sangie[:sangie50]: Bug 1958324 – Rephrase history clearing to not include search in sanitize dialog
  • Shane Ziegler: Bug 1957495 – Move ToolbarIconColor helper object from `browser.js` into its own module `browser/themes/ToolbarIconColor.sys.mjs`
  • 🌟 Raksha Kumari: Bug 1947278 – Replace div with moz-card in Button Group story for emphasis

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Localized strings using the i18n WebExtensions API will cascade through locale subtags to find translations before falling back to the extension’s default language (Bug 1381580)
    • Thanks to Carlos for contributing this enhancement to the i18n API 🎉
  • A new text-to-audio task type has been added to the trialML API to allow extensions to use the Xenova/speecht5_tts model for text to speech tasks (Bug 1959146).
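The locale-subtag cascade described above can be sketched in plain JavaScript (a hypothetical helper for illustration, not the actual i18n implementation):

```javascript
// Hedged sketch of locale-subtag fallback: for "de-DE", try "de-DE",
// then "de", then the extension's default locale.
function resolveMessage(key, locale, catalogs, defaultLocale) {
  const chain = [];
  const parts = locale.split("-");
  for (let i = parts.length; i > 0; i--) {
    chain.push(parts.slice(0, i).join("-"));
  }
  chain.push(defaultLocale);
  for (const tag of chain) {
    const msg = catalogs[tag] && catalogs[tag][key];
    if (msg !== undefined) return msg;
  }
  return undefined;
}

const catalogs = {
  de: { greeting: "Hallo" },
  en: { greeting: "Hello" },
};
console.log(resolveMessage("greeting", "de-DE", catalogs, "en")); // "Hallo"
console.log(resolveMessage("greeting", "fr", catalogs, "en"));    // "Hello"
```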

DevTools

WebDriver BiDi

Lint, Docs and Workflow

New Tab Page

Picture-in-Picture

Performance Tools (aka Firefox Profiler)

  • Profiling xpcshell tests locally just became easier: ./mach test <path to xpcshell test> --profiler will open a profile of the test at the end.

Profile Management

  • 139 is a catchup / blocker uplift / bugfix release. Main focus is making the cross-profile shared database and cross-instance notifier code independent of the profiles feature, to support Nimbus and OMC storing cross-profile data there even if the profiles feature isn’t enabled (metabug 1953861).
  • Recently fixed bugs:
    • Jared fixed bug 1957924, ensuring the profile group ID gets correctly set across a profile group if a user disables, then re-enables data collection
    • Jared fixed bug 1958196, fixing visibility issues in profiles using the System Theme after an OS theme change
    • Niklas fixed bug 1956111, App menu back button hover coloring is incorrect in HCM
    • Teddy fixed bug 1941576, Profile manager: Subcopy missing from “Open” checkbox
    • Teddy fixed bug 1956350, Limit theme chip labels to one truncated line
    • Sammy fixed bug 1957767, When account is disconnected, cannot log back into profile
    • Dave fixed some expiring probes in bugs 1958163 and 1958171

Search and Navigation

  • Dao fixed the toolbar context menu being shown on address bar results @ 1957448
  • Drew fixed several bugs relating to sponsored suggestions @ 1955360 + 1955257 + 1958038 + 1958421
  • James has been fixing TypeScript definitions across multiple files @ 1958104 + 1958102 + 1958640
  • Moritz implemented a patch to reset Search settings when they are corrupt instead of just failing @ 1945178
  • Daisuke fixed the unified search button being persisted incorrectly @ 1957630
  • Moritz fixed quick actions not being shown after tab switch @ 1958878

Storybook/Reusable Components/Acorn Design System

  • Jules created a new color palette and all of the colors are now available in the design tokens (not just the ones that are being used already 🎉)
    • You can check out the Colors section of the Tokens Table in Storybook
    • Grays were not updated in this, that is being done in another bug
  • We’re playing with the idea of being called the “Acorn Engineering”/”Design System Engineering” team, so if you see those names, it’s still us 🙂

The Mozilla Blog: Ads performance, re-imagined. Now in beta: Anonym Private Audiences.

A pixelated padlock icon with a fingerprint pattern, symbolizing digital privacy and security, on a mint green background.

Together, Mozilla and Anonym are proving that effective advertising doesn’t have to come at the cost of user privacy. It’s possible to deliver both — and we’re building the tools to show the industry how.

Today, we’re unveiling Anonym Private Audiences: a confidential computing solution allowing advertisers to securely build new audiences and boost campaign results.

Powered by advanced privacy-preserving machine learning, Anonym Private Audiences enables advertisers and platforms to work together using first-party data to create targeted audiences without ever handing their users’ information to one another. Brands can discover and engage look-alike communities — reaching new high-value customers — without sending or exposing their customers’ data to ad platforms. As the evolving advertising landscape makes third-party data less viable, Private Audiences supports privacy while enabling the performance advertisers have come to expect.

Private Audiences employs differential privacy and secure computation to minimize the sharing of data commonly passed between advertisers and ad networks. It operates separately, and is not integrated with, our flagship Firefox browser.

Why advertisers are turning to Private Audiences

Advertisers today are facing a difficult challenge: how to grow their business without breaking the trust of the people they’re trying to reach. Private Audiences was built to meet that moment — helping teams use the data they already have to find new high-value customers, without giving up data control along the way.

Early adopters are already seeing meaningful gains, with campaign performance improving an average of 30% compared to traditional broad targeting. And the reasons why it’s resonating are relevant to any brand looking to grow smarter and more sustainably:

  • Find the right people, not just more people. Predictive machine learning helps advertisers reach new audiences that look and behave like their best customers — improving efficiency without ramping up spend.
  • Keep trust intact. In sectors where privacy expectations are highest, early adopters are showing that it’s possible to respect users’ privacy and still drive results.
  • Use what you already know. Private Audiences works with the tools teams already rely on. Audiences show up in platform-native interfaces, so there’s nothing new to learn or configure.
  • Stay ahead of shifting standards. Private Audiences is built on privacy-first architecture — helping brands keep pace with evolving norms, expectations, and technical requirements.

How Private Audiences protects user privacy

In most audience-building workflows today, advertisers integrate directly with ad platforms to share customer data — whether through raw file uploads or automated server-to-server transfers. The platform then uses that data to build ‘look-alike’ audiences or, in some cases, retarget those same individuals directly. Anonym’s approach enables businesses to retain full control over their user data and employ gold standard protections, which are particularly important in privacy-sensitive industries and regions. 

Private Audiences takes a fundamentally different approach

Instead of sharing data directly with platforms, brands securely upload a list of high-value customers using a simple drag-and-drop interface. That data is encrypted and processed inside Anonym’s Trusted Execution Environment (TEE), where audience modeling happens in isolation. No data is exposed — not to Anonym, and not to the platform. Anonym trains the model, ranks eligible audiences based on likely performance, and returns a ready-to-use audience segment. Anonym’s ad platform partners only learn which of their existing users to include in the audience – they receive no new personal information or audience attributes. When the process is finished, the TEE is wiped clean.

The result: strong performance, without giving up data control or compromising on privacy.

Diagram illustrating how Anonym's machine learning identifies users similar to an advertiser's high-value customers based on shared attributes.

Breakthrough performance and privacy capabilities with Private Audiences, and more

Private Audiences joins the ranks of Anonym’s other solutions: Private Attribution, which enables accurate view-through attribution without user tracking, and Private Lift, which helps advertisers understand incrementality without exposing identities. Together, Anonym’s tools represent a new foundation for digital advertising trust — a solution portfolio built on transparency, accountability, and respect for the people it reaches. 

Because trust isn’t optional — it’s foundational

Mozilla has always believed privacy is a fundamental human right, and we will continue our relentless focus on designing and delivering products and services to protect it. Advertising performance — as much as privacy — is a foundational part of this journey. 

Anonym Private Audiences is currently in closed beta, supporting early use cases where privacy matters most. We’re excited to partner with advertisers seeking a better way to build high-performing audiences without compromising their customers’ trust.

For a deeper dive or beta participation details, get in touch with us here.

A teal lock icon next to the bold text "Anonym" on a black background.

Performance, powered by privacy

Learn more about Anonym

The post Ads performance, re-imagined. Now in beta: Anonym Private Audiences. appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 596

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is Maycoon, an experimental Vello/wgpu-based UI framework.

Thanks to DraftedDev for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

465 pull requests were merged in the last week

Compiler
Miri
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly positive week. Most of the improvements come from a revert of a regression from a few weeks ago, but we also get nice wins from re-using Sized fast-path, coming from Sized hierarchy implementation work.

Triage done by @panstromek. Revision range: 15f58c46..8f2819b0

Summary:

(instructions:u)              mean    range              count
Regressions ❌ (primary)      1.3%    [0.4%, 2.1%]       7
Regressions ❌ (secondary)    -       -                  0
Improvements ✅ (primary)     -1.0%   [-12.9%, -0.1%]    144
Improvements ✅ (secondary)   -2.2%   [-12.3%, -0.2%]    111
All ❌✅ (primary)            -0.9%   [-12.9%, 2.1%]     151

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas
Cargo Language Reference

No items entered Final Comment Period this week for Rust RFCs, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-04-23 - 2025-05-21 🦀

Virtual
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I don’t think about rust either. That’s a compiler’s job

Steve Klabnik on Bluesky

Thanks to Matt Wismer for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Spidermonkey Development Blog: 5 Things You Might Not Know about Developing Self-Hosted Code

Self-hosted code is JavaScript code that SpiderMonkey uses to implement some of its intrinsic functions for JavaScript. Because it is written in JavaScript, it gets all the benefits of our JITs, like inlining and inline caches.

Even if you are just getting started with self-hosted code, you probably already know that it isn’t quite the same as your typical, day-to-day JavaScript. You’ve probably already been pointed at the SMDOC, but here are a couple tips to make developing self-hosted code a little easier.

1. When you change self-hosted code, you need to build

When you make changes to SpiderMonkey’s self-hosted JavaScript code, you will not automatically see your changes take effect in Firefox or the JS Shell.

SpiderMonkey’s self-hosted code is split up into multiple files and functions to make it easier for developers to understand, but at runtime, SpiderMonkey loads it all from a single, compressed data stream. This means that all those files are gathered together into a single script file and compressed at build time.

To see your changes take effect, you must remember to build!

2. dbg()

Self-hosted JavaScript code is hidden from the JS Debugger, and it can be challenging to debug JS using a C++ debugger. You might reach for console.log() to help debug your code, but it is not available in self-hosted code!

In debug builds, you can print out messages and objects using dbg(), which takes a single argument to print to stderr.
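To illustrate the shape of the API, here is a plain-JS stand-in for dbg() plus a hypothetical self-hosted-style function instrumented with it (everything here is an assumption for demonstration; the real intrinsic is built into debug builds of SpiderMonkey):

```javascript
// Stand-in for the debug-build dbg() intrinsic: it takes exactly one
// argument and writes it to stderr.
function dbg(value) {
  const text = typeof value === "object" && value !== null
    ? JSON.stringify(value)
    : String(value);
  console.error(text);
}

// Hypothetical self-hosted-style function instrumented with dbg():
function SumElements(list) {
  dbg("SumElements called with:");
  dbg(list); // objects can be printed too, not just strings
  let total = 0;
  for (let i = 0; i < list.length; i++) {
    total += list[i];
  }
  return total;
}

dbg(SumElements([1, 2, 3])); // logs 6 to stderr
```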

3. Specification step comments

If you are stuck trying to figure out how to implement a step in the JS specification or a proposal, you can see if SpiderMonkey has implemented a similar step elsewhere and base your implementation off that. We try to diligently comment our implementations with references to the specification, so there’s a good chance you can find what you are looking for.

For example, if you need to use the specification function CreateDataPropertyOrThrow(), you can search for it (SearchFox is a great tool for this) and discover that it is implemented in self-hosted code using DefineDataProperty().
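For illustration only, here is a plain-JavaScript approximation of that spec function. Actual self-hosted code uses the DefineDataProperty intrinsic; we stand in for it with Object.defineProperty, so this is a sketch of the behavior, not SpiderMonkey's real implementation:

```javascript
// Approximation of the spec function CreateDataPropertyOrThrow().
// Self-hosted code would use the DefineDataProperty intrinsic; here we
// stand in for it with Object.defineProperty.
function CreateDataPropertyOrThrow(obj, key, value) {
  // Step: Let success be ? CreateDataProperty(O, P, V).
  // Step: If success is false, throw a TypeError exception.
  Object.defineProperty(obj, key, {
    value,
    writable: true,
    enumerable: true,
    configurable: true,
  }); // throws TypeError if the property cannot be defined
  return true;
}
```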

4. getSelfHostedValue()

If you want to explore how a self-hosted function works directly, you can use the JS Shell helper function getSelfHostedValue().

We use this method to write many of our tests. For example, unicode-extension-sequences.js checks the implementation of the self-hosted functions startOfUnicodeExtensions() and endOfUnicodeExtensions().

You can also use getSelfHostedValue() to get C++ intrinsic functions, like how toLength.js tests ToLength().
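To give a sense of what such a test exercises, ToLength() clamps its argument to an integer in the range [0, 2^53 - 1]. A plain-JavaScript approximation (ours, not the C++ intrinsic) might look like:

```javascript
// Plain-JS approximation of the ToLength() spec operation that toLength.js
// exercises via getSelfHostedValue("ToLength").
const MAX_LENGTH = 2 ** 53 - 1; // Number.MAX_SAFE_INTEGER

function ToLength(value) {
  const len = Math.trunc(Number(value)); // roughly ToIntegerOrInfinity
  if (Number.isNaN(len) || len <= 0) {
    return 0;
  }
  return Math.min(len, MAX_LENGTH);
}
```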

5. You can define your own self-hosted functions

You can write your own self-hosted functions and make them available in the JS Shell and XPC shell. For example, you could write a self-hosted function to print a formatted error message:

  function report(msg) {
      dbg("|ERROR| " + msg + "|");
  }

Then, while you are setting up globals for your JS runtime, call JS_DefineFunctions(cx, globalObject, funcs):

  static const JSFunctionSpec funcs[] = {
      JS_SELF_HOSTED_FN("report", "report", 1, 0),
      JS_FS_END,
  };

  if (!JS_DefineFunctions(cx, globalObject, funcs)) {
      return false;
  }

The JS_SELF_HOSTED_FN() macro takes the following parameters:

  1. name - The name you want your function to have in JS.
  2. selfHostedName - The name of the self-hosted function.
  3. nargs - Number of formal JS arguments to the self-hosted function.
  4. flags - This is almost always 0, but could be any combination of JSPROP_*.

Now, when you build the JS Shell or XPC Shell, you can call your function:

js> report("BOOM!");
Iterator.js#6: |ERROR| BOOM!|

Mitchell Baker: Global AI Summit on Africa: my experience

The Mozilla Blog: Exploring on-device AI link previews in Firefox

Ever opened a bunch of tabs only to realize none of them have what you need? Or felt like you’re missing something valuable in a maze of hyperlinks? In Firefox Labs 138, we introduced an optional experimental feature to enhance your browsing experience by showing a quick snapshot of what’s behind a link before you open it. This post provides some technical details of this early exploration for the community to help shape this feature and set the stage for deeper discussions into specific areas like AI models.

Interaction

To activate a Link Preview, hover over a link and press Shift (⇧) plus Alt (Option ⌥ on macOS), and a card appears including the title, description, image, reading time, and 3 key points generated by an on-device language model. This is built on top of the Firefox behavior to show the URL when over a link, so it also works when links are focused with the keyboard. We picked this keyboard shortcut to try avoiding conflicts with common shortcuts, e.g., opening tabs or Windows menus. Let us know: do you prefer some keyboard shortcut or potentially other triggers like long press, context menu, or maybe hover with delay?

animation showing shift+alt keyboard presses triggering link preview

The card appears in a panel separate from the page, allowing it to extend past the edges of the window. This helps us position the link within the card near your mouse cursor, making it convenient to visit the previewed page, while also reinforcing that this comes from Firefox and not the page. We’re also exploring the possibility of making the card part of the page, so that it scrolls with the content, or keeping it more separate, such as in a persistent space that gathers multiple previews for cross-referencing or follow-up actions. Let us know: which approaches better support your browsing workflows?

Page fetching and extraction

This initial implementation uses credentialless HTTPS requests to retrieve a page’s HTML and parses it without actually loading the page or executing scripts. While we don’t currently send cookies, we do send a custom x-firefox-ai header, allowing website authors to decide what content can be previewed. Let us know: would you want previews of content requiring login, perhaps with the risk of accidentally changing related logged-in state?
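For example, a site could inspect that header to recognize preview fetches. The helper below is purely illustrative (the header name comes from the post above; how a site would respond, such as serving a trimmed page or opting out, is an assumption):

```javascript
// Illustrative helper: detect a Firefox Link Preview fetch by the
// x-firefox-ai request header described above. What a site does with
// this information is up to its author.
function isLinkPreviewRequest(headers) {
  // Header names are case-insensitive per HTTP, so normalize before checking.
  return Object.keys(headers).some(
    (name) => name.toLowerCase() === "x-firefox-ai"
  );
}
```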

With the parsed page, we look for metadata, such as Open Graph tags, which are commonly used for social media link sharing, to display the title, description, and image. We also reuse Firefox’s Reader View capabilities for extracting reading time and the main article content to generate key points. Improvements to page parsing capabilities can enhance both Reader View and Link Previews. Let us know: on which sites you find the feature useful, and where it might pull the wrong information.

Key points, locally generated 

To ensure user privacy, we run inference on-device with Reader View’s content. This is currently powered by wllama (WebAssembly llama.cpp) with SmolLM2-360M from HuggingFace, chosen based on our evaluation of performance, relevance, consistency, etc. Testing so far shows most people can see the first key point within 4 seconds and each additional point within a second, so let us know: how that feels for you and if you’d want it faster or smarter.

There are various optimizations to speed things up, such as downloading the AI model (369MB) when you first enable the feature in Firefox Labs, as well as limiting how much content is provided to the model to match up with the intent of a preview. We also use pre-processing and post-processing heuristics that are English-focused, but some in the community have already configured the language limiting pref from “en” and provided helpful feedback that this model can work for other languages too.

Next steps

We’re actively working on improving support for multiple languages, the quality and length of key points, and general polish to the feature’s capabilities and user experience, as well as exploring how to bring this to Android. We invite you to try Link Preview and look forward to your feedback in enhancing how Firefox helps users accomplish more on the web. You can also chat with us on the AI@Mozilla discord in #firefox-ai.

The post Exploring on-device AI link previews in Firefox appeared first on The Mozilla Blog.

Don Marti: Google Ads Shitshow Report 2024

Google ads are full of crime and most web users should block them. If you don’t believe the FBI, or Malwarebytes, believe Google. Their 2024 Ads Safety Report is out (Search Engine Land covered it) and things do not look good. The report is an excellent example of some of the techniques that big companies use to misrepresent an ongoing disaster as somehow improving, so I might as well list them. If I had to do a corporate misinformation training session, I’d save this PDF for a reading assignment.

release bad news when other news is happening This was a big news week for Google, which made it the best time to release this embarrassing report. Editors aren’t going to put their Google reporter to work on an ad safety story when there’s big news from the Federal courthouse.

counting meaningless numbers Somehow our culture teaches us to love to count, so Google gives us a meaningless number when the meaningful numbers would look crappy.

Last year, we continued to invest heavily in making our LLMs more advanced than ever, launching over 50 enhancements to our models which enabled more efficient and precise enforcement at scale.

The claim is that Google continued to invest heavily and that’s the kind of statement that’s relatively easy to back up with a number that has meaningful units attached. Currency units, head count, time units, even lines of code. Instead, the count is enhancements which could be almost anything. Rebuild an existing package with different compiler optimizations? Feed an additional data file to some ML system? What this looks like from the outside is that the meaningful numbers are going in the wrong direction (maybe some of the people who would have made them go up aren’t there any more?) so they decided to put out a meaningless number instead.

control the denominator to juice the ratio Only takes elementary school math to spot this, but might be easy to miss if you’re skimming.

Our AI-powered models contributed to the detection and enforcement of 97% of the pages we took action on last year.

Wow, 97%, that’s a big number. But it’s out of pages we took action on which is totally under Google’s control. There are a bunch of possible meaningful ratios to report here, like

  • (AI-flagged ads)/(total ads)

  • (ads removed)/(AI-flagged ads)

  • (bad ad impressions)/(total ad impressions)

and those could have been reported as a percentage, but it looks like they wanted to go for the big number.
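To see how much the choice of denominator matters, here is a quick sketch with invented numbers (none of these figures come from the report):

```javascript
// Invented numbers to show how choosing the denominator juices the ratio.
const totalAds = 10_000_000;
const pagesActionedOn = 100_000;  // a denominator Google controls
const aiFlaggedActioned = 97_000; // AI "contributed" to these

// The reported ratio: out of pages they took action on.
const reported = aiFlaggedActioned / pagesActionedOn; // 0.97, i.e. "97%"

// A more meaningful ratio: out of all ads.
const meaningful = aiFlaggedActioned / totalAds; // 0.0097, under 1%
```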

pretend something that’s not working is working The AI models contributed to 97% of the actions, but contributed isn’t defined. Does it count as contributed if, say, human reviewers flagged 1,000 ads, the AI flagged 100,000 ads, and 970 ads were flagged by both? If AI were flagging ads that had been missed by other methods, this would have been the place to put it.

This is an obvious fake “Continue” button, running as a Google ad. The same advertiser has many other ads that are misleading “Play Game” or “Download” buttons. If Google is really good at AI, why are they running so many of these?

The newsworthy claim that’s missing is the count of bad ads first detected by AI before getting caught by a human reviewer. Contributed to the detection could be a lot of things. (If this were a report on a free trial of an AI-based abuse detection service, contributed wouldn’t get me to upgrade to the paid plan.)

report the number caught, not the number that get through Numbers of abusers caught is always the easiest number to juice. The simplest version is to go home at lunch hour, code up the world’s weakest bot, start it running from a non-work IP address, then go back to work and report some impressive numbers.

To put this into perspective: we suspended over 39.2 million accounts in total, the vast majority of which were suspended before they ever served an ad.

Are any employees given target numbers of suspensions to issue? Can anyone nail their OKRs by raising the number of accounts suspended? If this number is unreliable enough that a company wouldn’t use it for management, it’s not reliable enough to pay attention to. They’re also reporting the number of accounts, not individuals or companies. If some noob wannabe scammer writes a script to POST the new account form a million times, do they count for a million?

don’t compare to last year Here’s the graph of bad ads caught by Google in 2024.

5.1 billion bad ads were stopped in 2024

And here’s the same graph from the 2023 report.

5.5 billion bad ads were stopped in 2023

The total number isn’t as interesting as the individual, really problematic categories. The number caught for enabling dishonest behavior went down from about 20 million in 2023 to under 9 million in 2024.

Did the number of attempts at dishonest behavior with Google ads really go down by more than half in a single year? Or did Google catch fewer of them? From the outside, it’s fairly easy to tell that Google Ads is understaffed and the remaining employees are in the weeds, but it’s hard to quantify the problem. What’s really compelling about this report is that the staffing situation has gotten bad enough that it’s even showing up in Google’s own hand-picked numbers. In general when a report doesn’t include how a number has changed since the last report, the number went in the wrong direction and there’s no good explanation for why. And the number of ads blocked or removed for misinformation went from 30 million in 2023 to (checks notes) zero in 2024. Yes, misinformation has friends in high places now, but did all of the sites worldwide that run Google ads just go from not wanting to run misinformation to being fine with it?

report detection, not consequences Those numbers on bad ads are interesting, but pay attention to the text. These are numbers for ads blocked or removed, and repeat offenders drive the bulk of tech support scams via Google Ads. Does an advertiser caught doing misrepresentation in one ad get to keep going with different ads?

don’t compare to last year, part 2 The previous two graphs showed Google’s bad ads/good site problem, so here’s how they’re doing on their good ad/bad site problem. Here’s 2024:

1.3 billion pages taken action against in 2024

And 2023:

2.1 billion pages taken action against in 2023

Ad-supported AI slop is on the way up everywhere, making problem pages easier to create at scale, but Google somehow caught 800 million fewer pages than in 2023. How many pages they took action against isn’t even a good metric (and I would be surprised if anyone is incentivized based on it). Some more useful numbers would be stuff like

  • What percentage of advertisers had their ad run on a page that later had action taken against it?

  • How much money was paid out to sites that were later removed for violating the law or Google policy?

But as in the previous graph, the big problem is in one of the categories. Google caught fewer pages for malicious or unwanted software in 2024 than in 2023. Is there a good explanation for why Google is taking less action on malicious or unwanted software in 2024 than in 2023? As far as I know, nobody is claiming that developers are writing less of this kind of software, or promoting it less. (icymi: Researcher uncovers dozens of sketchy Chrome extensions with 4 million installs) Google management is just so mad about the union situation that they’re willing to make the users suffer in order to keep threatening workers with more layoffs. And did the amount of dangerous or derogatory content really go down from 104 million pages to 24.8 million pages in a year? Or did something happen on the Google side? (icymi: A trillion-dollar problem: how a broken digital ad industry is fracturing society – and how we can fix it)

A real Ad Safety Report would help an advertiser answer questions about how likely they are to sponsor illegal content when they buy Google ads. And it would help a publisher understand how likely they are to have an ad for malware show up on their pages. No help from this report. Even though from the outside we can see that Google runs a bunch of ads on copyright-infringing sites, not only does Google not report the most meaningful numbers, they’re doing worse than before on the less meaningful numbers they do choose to report.

Google employees, (yes, both FTEs and TVCs) are doing a lot of good work trying to do the right thing on the whole ads/crime problem, but management just isn’t staffing and funding the ad safety stuff at the level it needs. A company with real competition would have had to straighten this situation out by now, but that’s not the case for Google. Google’s services like Search are both free and overpriced—users don’t pay in money, but in over-exposure to fraud and malware risks that would be lower in a competitive market. If a future Google breakup works, one of the best indicators of success will be more meaningful, and more improved, metrics in future ad safety reports.

More: Winners don’t click search ads

just in case anyone wants to release better numbers: How to leak to a journalist by Laura Hazard Owen

Related

Click this to buy better stuff and be happier. Another protection measure that’s quick to do.

fix Google Search. Get rid of the AI slop and other growth hacking features, and you can almost get Google back to where it was.

Bonus links

Flaming Fame. by George Tannenbaum. We don’t see shitty work and say that’s shitty. It’s worse than that. We simply don’t see it at all.

LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions by Scharon Harding. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. (What happens when you do a Right to Know for the family TV?)

Former Substack creators say they’re earning more on new platforms that offer larger shares of subscription revenue by Alexander Lee. Since leaving Substack, some writers’ subscriber counts have plateaued over the past year, while others have risen — but in both cases, creators said that their share of revenue has increased because Ghost and Beehiiv charge creators flat monthly rates that scale based on their subscriber counts, rather than Substack’s 10 percent cut of all transaction fees.

The Mediocrity of Modern Google by Om Malik. What’s particularly ironic is that today’s Google has become exactly what its founders warned against in their 1998 paper: an advertising company whose business model fundamentally conflicts with serving users’ needs.

With Support of Check My Ads Institute’s Advocacy, Senator Warner Urges FTC and DOJ to Investigate Ad Fraud Affecting U.S. Government Agencies Senator Warner’s letters to FTC Chairman Andrew Ferguson and DOJ Attorney General Pam Bondi cite new research by cybersecurity and digital forensics firm Adalytics, exposing how major adtech vendors have failed to deliver the “real-time bot detection” that they promised. As a result, advertisements intended for human audiences instead were shown, for at least five years, to easily-identifiable bots operated from data centers, including bots on industry group bot lists. (Adalytics: The Ad Industry’s Bot Problem Is Worse Than We Thought)

Git turns 20: A Q&A with Linus Torvalds by Taylor Blau. So I was like, okay, I’ll do something that works for me, and I won’t care about anybody else. And really that showed in the first few months and years—people were complaining that it was kind of hard to use, not intuitive enough. And then something happened, like there was a switch that was thrown.

I’m not an expert on electric cars, so I don’t know enough to criticize some of the hard parts of the design of a Tesla. But when they get obvious stuff like getting out without power wrong, that’s a pretty good sign to stay away.

How the U.S. Became A Science Superpower by Steve Blank. Post war, it meant Britain’s early lead was ephemeral while the U.S. built the foundation for a science and technology innovation ecosystem that led the world—until now.

Mitchell Baker: Expanding Mozilla’s Boards in 2020

Mozilla is a global community that is building an open and healthy internet. We do so by building products that improve internet life, giving people more privacy, security and control over the experiences they have online. We are also helping to grow the movement of people and organizations around the world committed to making the digital world healthier.

As we grow our ambitions for this work, we are seeking new members for the Mozilla Foundation Board of Directors. The Foundation’s programs focus on the movement building side of our work and complement the products and technology developed by Mozilla Corporation.

What is the role of a Mozilla board member?

I’ve written in the past about the role of the Board of Directors at Mozilla.

At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for the Executive Director to do his or her job. I wrote in my previous post that “We feel differently”. This is still true today. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

It’s worth noting that Mozilla is an unusual organization. We’re a technology powerhouse with broad internet openness and empowerment at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the technology industry.

It’s important that our board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

What are we looking for?

Last time we opened our call for board members, we created a visual role description. Below is an updated version reflecting the current needs for our Mozilla Foundation Board.

Here is the full job description: https://mzl.la/MoFoBoardJD

Here is a short explanation of how to read this visual:

  • In the vertical columns, we have the particular skills and expertise that we are looking for right now. We expect new board members to have at least one of these skills.
  • The horizontal lines speak to things that every board member should have. For instance, to be a board member, you should have some cultural sense of Mozilla. They are a set of things that are important for every candidate. In addition, there is a set of things that are important for the board as a whole, such as international experience. The board makeup overall should cover these areas.
  • The horizontal lines will not change too much over time, whereas the vertical lines will change, depending on who joins the Board and who leaves.

Finding the right people who match these criteria and who have the skills we need takes time. We hope to have extensive discussions with a wide range of people. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.

We want your suggestions

We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to msurman@mozillafoundation.org. We will use real discretion with the names you send us.

Chris H-C: Perfect is the Enemy of Good Enough

My Papa, my mother’s father, C. J. Mortimer died in Saint John, New Brunswick in 2020. Flying through the Toronto and Montreal airports in September to his funeral was one of the surreal experiences of my life, with misting tunnels of aerosolized alcohol to kill any microbe on your skin, hair, clothes, and luggage; airport terminals with more rodent traps than people; and a hypersensitivity to everyone’s cough and sniffle that I haven’t been able to shake.

I was angry, then. I’m still angry. Angry that I couldn’t hug my grandmother. Angry that weeping itself was complicated and contagious. Angry that I couldn’t be together or near or held. Angry that I was putting my family at home at risk by even going. Angry that we didn’t hold the line on the lockdowns long enough to manage the disease properly. Angry at the whiners.

This isn’t a pandemic post, though. Well, no more than any post I’ve made since 2020. No more than any post I will make for the foreseeable.

This is a post about what my grandfather gave to me.

Y’see, I’m not the first computer nerd in the family. My Grampy, my father’s father, was and my father is a computer nerd. Grampy’s memoirs were typed into a Commodore 64. Dad is still fighting with Enterprise Java, of all things, to help his medical practice run smoothly.

And Papa? In the 60s he was offered lucrative computer positions at Irving Oil in Saint John and IBM in the US. Getting employment in the tech industry was different in those days, not leastwise because the tech industry didn’t really exist yet. You didn’t get jobs because you studied it in school, because there weren’t classes in it. You didn’t get jobs because of your experience in the field, because the most experienced you could be was the handful of years they’d been at it. You didn’t get jobs because of your knowledge of a programming language, because there were so few of them and they were all so new (and proprietary).

So what was a giant like International Business Machines to do? How could it identify in far-flung, blue-collar Atlantic Canada a candidate computer programmer? Because though the tech industry didn’t exist in a way we’d necessarily recognize, it was already hungrier for men to work in it than the local markets could supply.

In my Papa’s case, they tested his aptitude with the IBM Aptitude Test for Programmer Personnel (copyright 1964):

Logo and explanation from the front cover of a "IBM Aptitude Test for Programmer Personnel" with directions that read "1. Do not make any marks in this booklet. 2. On the separate answer sheet print your name, date, and other requested information in the proper spaces. 3. Then wait for further instructions.". The design is geometric and bland but not unpleasant.

Again, though, how do you evaluate programmer aptitude without a common programming language? Without common idioms? Without even a common vocabulary of what “code” could mean or be?

IBM used pattern-matching questions with letters:

Instructions reading "In Part I you will be given some problems like those on this page. The letters in each series follow a certain rule. For each series of letters you are to find the correct rule and complete the series. One of the letters at the right side of the page is the correct answer. Look at the example below." The provided example, marked "W." reads "a b a b a b a b" followed by five possible numbered answers "a b c d e".

And pattern-matching questions with pictures:

Instructions reading "In Part II you will be given some problems like those on this page. Each row is a problem. Each row consists of four figures on the left-hand side of the page and five figures on the right-hand side of the page. The four figures on the left make a series. You are to find out which one of the figures on the right-hand side would be the next or the fifth one in the series. Now look at example X". Example X is four squares, each with a single quadrant shaded. In order, top-right, bottom-right, bottom-left, top-left. The five possible answers labeled A through E are squares with one quadrant shaded (A bottom-right, B top-right, C bottom-left, D top-left), and a square with no quadrant shaded (E).

And arithmetic reasoning questions:

Instructions reading "In Part III you will be given some problems in arithmetical reasoning. After each problem there are five answers, but only one of them is the correct answer. You are to solve each problem and indicate the correct answer on the answer sheet. The following problems have been done correctly. Study them carefully." followed by "Example X: How many apples can you buy for 80 cents at the rate of 3 for 10 cents? (a) 6 (b) 12 (c) 18 (d) 24 (e) 30"

And that was it. For the standardized portion of the process, at least.

Papa delivered this test to my siblings and me when, I think, I was in Grade 9, so about 15 years of age. Even my 2- and 4-year-younger siblings performed well, and I and my 2-year-older sibling did nearly perfectly. Apparently the public education system had adapted to turning out programming personnel of high aptitude in the forty years or so since the test had been printed.

I was gifted Papa’s aptitude test booklet, some IBM flowcharting and diagramming worksheets, and a couple example punchcards before his death. I was thrilled to be entrusted with them. I had great plans for high-quality preservation digitization. If my Brother multi-function’s flatbed scanner wouldn’t do the trick, I’d ask the local University’s library for help. Or the Internet Archive itself!

The test booklet sat on my desk for years. And then Papa died. I placed the bulletin from the funeral service next to it on my desk. They both sat on my desk for further years.

I couldn’t bring myself to start the project of digitizing and preserving these things. I just couldn’t.

Part of it was how my brain works. But I didn’t need a diagnosis to develop coping mechanisms for projects that were impossible to start. I bragged about having it to my then-coworker Mike Hoye, the sort who cared about things like this. Being uncharacteristically prideful in front of a peer, a mentor, that’d surely force me to start.

They sat on my desk for years more.

We renovated the old spare room into an office for my wife and moved her desk and all her stuff out so I could have the luxury of an office to myself. We repainted and reorganized my office.

I looked at the test booklet.

I filed it away. I forgot where. I gave up.

But then, today, I read an essay that changed things. I read Dr. Cat Hicks’ Why I Cannot Be Technical. Not only does she reference Papa’s booklet (“Am I the only person who is extremely passionate about getting their hands on a copy of things like the IBM programmer aptitude tests from the 60s?”) but what she writes and how she writes reminds me of what drew me to blogging. What I wanted to contribute to and to change in this industry of ours. The feeling of maybe being a part of a movement, not a part of a machine.

I searched and searched and found the booklet. I looked at the flatbed scanner and remembered my ideas of finding the ideal digitization. The perfect preservation.

I said “Fuck it” and put it on the ground and started taking pictures with my phone.

To hell with perfect, I needed good enough.

I don’t remember what else was involved in IBM’s test of my Papa. I don’t even know if they conducted it in Canada or flew him to the States. He probably told me. I’m sorry I don’t remember.

I don’t know why he never kept up with programming. I don’t remember him ever working, just singing beautifully in the church choir, stuttering when speaking on the telephone, playing piano in the living room. He did like tech gadgets, though. He converted all our old home movies to DVD without touching a mouse or keyboard. I should’ve asked him why he never owned a minicomputer.

I do know why he didn’t choose the IBM job, though. Sure, yes, he could stay closer to his family in Nova Scotia. Sure, he wouldn’t have to wear quite as many suits. But the real crux was the offer that Irving gave him. IBM wanted him as a cog in their machine. Another programming person to feed into their maw and… well, who knows what next. But Irving? Well, Irving _also_ wanted that, true. They needed someone to operate their business machines for payroll and accounts and stuff.

But when the day’s work was done? And all the data entry girls (because of course they were all women) were still on the clock? And there were CPU cycles available?

Irving offered to let my Papa input and organize his record collection.1

My recollection of my grandfather isn’t perfect. But perhaps it’s good enough.

:chutten

  1. Another thing I have in my good enough memory is that, to have the mainframe index his 78s, Papa needed to know the longest title of all the sides in his collection. It’s a war song. And prepare your historical appreciation goggles because it’s sexist as hell in 2025. But I may never forget 1919’s “Would You Rather Be A Colonel With an Eagle on Your Shoulder or a Private With a Chicken on Your Knee?”

Mitchell BakerThe Ethos of Open Source

A couple of months ago I started posting about how I want to build a better world through technology and how I’ll be doing that outside of Mozilla going forward.  The original post has many references to “open” and “open source.”   It’s easy to think that we all understand open source and we just need to apply it to new settings.   I feel differently:  we need to shape our collective understanding of the ethos of open source.   

Open source has become mainstream as a part of the software development process.   We can rightly say that the open source movement “won.”  However, this isn’t enough for the future. 

The open source movement was about more than convenience and avoiding payment. For many of us, open source was both a tool and an end in itself.   Open source software allows people to participate in creating the software that has such great impact on our lives.   The “right to fork” allows participants to try to correct wrongs in the system; it provides a mechanism for alternatives to emerge. This isn’t a perfect system of course, and we’ve seen how businesses can wrap open source and the right to fork with other systems that diminish the impact of this right.  So the past is not “The Perfect Era” that we should aim to replicate.  The history of open source gives us valuable learning into what works and what doesn’t so we can iterate towards what we need in this era.  

The practical utility of open source software has become mainstream.  The time is ripe to reinforce the deeper ethos of participation, opportunity, security and choice that drove the open source movement.  

I’m looking for a good conversation about these topics.  If you know of a venue where such conversations are happening in a thoughtful, respectful way please do let me know.  

Don Martipicking up cheap shoes in front of a steamroller

Here’s another privacy paradox for people who collect them.

  • On the web, the average personalized ad is probably better than the average non-personalized ad. (The same ad campaigns that have a decent budget for ad creative also have a budget for targeting data.)

  • But users who block personalized ads, or avoid personalization by using privacy tools and settings, are, on average, better off than users who get personalized ads.

There’s an expression in finance: Picking Up Nickels In Front Of A Steam Roller. For some kinds of investing decisions, the investor is more likely to make a small gain than to lose money in each individual trade. But the total expected return over time is negative, because a large loss is an unlikely outcome of each trade. The decision to accept personalized ads or try to avoid them might be a similar bet.
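The shape of that bet can be made concrete with a toy expected-value calculation. All the numbers below are invented purely for illustration; the point is only that a wager which usually pays a little but occasionally costs a lot can have a negative expectation overall:

```javascript
// Toy illustration (all numbers invented): a bet that usually pays a
// little but rarely costs a lot can still have negative expected value.
const outcomes = [
  { p: 0.98, value: 5 },    // usual case: a small win (e.g. cheaper shoes)
  { p: 0.02, value: -500 }, // rare case: a large loss (e.g. falling for a scam)
];

// Expected value = sum of probability-weighted payoffs.
const expectedValue = outcomes.reduce((sum, o) => sum + o.p * o.value, 0);

console.log(expectedValue); // ≈ -5.1: negative overall, despite a 98% chance of winning
```

Under these made-up numbers you win 98 times out of 100, yet the expected return per bet is about -$5.10.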

For example, a typical positive outcome of getting personalized ads might be getting better shoes, cheaper. There’s a company in China that is working the personalized ad system really well. Instead of paying for high production value ads featuring high-profile athletes in the USA, they’re just doing the incremental data-driven marketing thing. Make shoes, experiment with the personalized ad system, watch the numbers, reinvest in both shoe improvements and improvements to the personalized ads. For customers, the shoe company represents the best-case scenario for turning on the personalized ads. You get a pair of shoes from China for $40 that are about as good as the $150 shoes from China that you would get from a big-name brand. (The shoes might even be made by the same people out of the same materials.) I don’t need to link to the company, just turn on personalized ads and if you want the shoes they’ll find you.

That example might be an outlier on the win-win side, though. On average, personalized (behaviorally targeted) ads are likely to be associated with lower quality vendors and higher product prices compared to competing alternatives found among search results (Mustri et al.). But let’s pretend for a minute and say you figured out how to get targeted in the best possible way and come out on the winning side. That’s pretty sweet—personalized ads save you more than a hundred bucks on shoes, right?

Here comes the steamroller, though.

In recent news, Baltimore sues 2 sportsbooks over alleged exploitative practices. Some people are likely to develop a gambling problem, and if you don’t know in advance whether or not you’re one of them, should you have the personalized ads turned on? You stand to lose a lot more than you would have gained by getting the cheap shoes or other miscellaneous stuff. It is possible that machine learning on the advertising or recommended content side could know more about you than you do, and the negative outcomes from falling for an online elder fraud scheme tend to be much larger than the positive outcomes from selecting the best of competing legitimate products.

The personalized advertising system can facilitate both win-win offers like the good shoes from an unknown brand or win-lose offers like those from sports betting apps that use predatory practices. The presence of both win-win and win-lose offers in the market is a fact that keeps getting oversimplified away by personalized advertising’s advocates in academia. In practice, ad personalization gives an advantage to deceptive sellers. Another good example comes from the b2b side: malware in search ads personalized to an employee portal or SaaS application. From the CIO point of view, are you better off having employees get better-personalized search ads at work, or better off blocking a security incident before it starts?

People’s reactions to personalization are worth watching, and reflect a more widely held understanding of how information works in markets than personalized ad fandom does. The fact that Google may have used this data to conduct focused ad campaigns targeted back to you was disclosed as if it were a security issue, which makes sense. Greg Knauss writes, “Blue Shield says that no bad actor was involved, but is that really true? Shouldn’t a product that, apparently by default, takes literally anything it can—privacy be damned—and tosses it into the old ad-o-matic not be considered the output of a bad actor?” Many people (but not everybody) consider being targeted for a personalized ad as a threat in itself. More: personalization risks

Bonus links

What If We Made Advertising Illegal? by Kōdō Simone. The traditional argument pro-advertising—that it provides consumers with necessary information—hasn’t been valid for decades. In our information-saturated world, ads manipulate, but they don’t inform. The modern advertising apparatus exists to bypass rational thought and trigger emotional responses that lead to purchasing decisions. A sophisticated machine designed to short-circuit your agency, normalized to the point of invisibility. (Personally I think it would be hard to come up with a law that would squeeze out all incentivized communication intended to cause some person to purchase some good or service, but it would be possible to regulate the information flows in the other direction—surveillance of audience by advertiser and intermediaries—in a way that would mostly eliminate surveillance advertising as we know it: Big Tech platforms: mall, newspaper, or something else?)

Meta secretly helped China advance AI, ex-Facebooker will tell Congress by Ashley Belanger. In her prepared remarks, which will be delivered at a Senate subcommittee on crime and counterterrorism hearing this afternoon, Wynn-Williams accused Meta of working hand in glove with the Chinese Communist Party (CCP). That partnership allegedly included efforts to construct and test custom-built censorship tools that silenced and censored their critics as well as provide the CCP with access to Meta user data—including that of Americans. (And if they’re willing to do that, then the elder fraud ads on Facebook are just business as usual.)

Protecting Privacy, Empowering Small Business: A Path Forward with S.71 (A privacy law with private right of action gets enforced based on what makes sense to normal people in a jury box, not to bureaucrats who think it’s normal to read too many PDFs. Small businesses are a lot better off with this common-sense approach instead of having to feed the compliance monster.)

This startup just hit a big milestone for green steel production by Casey Crownhart. Boston Metal uses electricity in a process called molten oxide electrolysis (MOE). Iron ore gets loaded into a reactor, mixed with other ingredients, and then electricity is run through it, heating the mixture to around 1,600 °C (2,900 °F) and driving the reactions needed to make iron. That iron can then be turned into steel. Crucially for the climate, this process emits oxygen rather than carbon dioxide…

Spidermonkey Development BlogShipping Temporal

The Temporal proposal provides a replacement for Date, a long-standing pain point in the JavaScript language. This blog post describes some of the history and motivation behind the proposal. The Temporal API itself is well documented on MDN.
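To make the "pain point" concrete, here is a small sketch (plain Date only, runnable in any JavaScript engine, no Temporal required) of two classic Date gotchas that Temporal's explicit, immutable types are designed to avoid: zero-indexed months and shared mutable state.

```javascript
// Gotcha 1: Date months are zero-indexed, so 11 means December.
const d = new Date(2025, 11, 1); // December 1, 2025 — not November
console.log(d.getMonth()); // 11

// Gotcha 2: Date objects are mutable, so passing one around invites
// spooky action at a distance.
const deadline = new Date(2025, 0, 31); // January 31, 2025
function extend(date) {
  date.setDate(date.getDate() + 7); // mutates the caller's object in place
  return date;
}
extend(deadline);
// Jan 31 + 7 days overflows into February; the original object changed.
console.log(deadline.getDate());  // 7
console.log(deadline.getMonth()); // 1 (February)
```

By contrast, Temporal objects are immutable and use one-based months, e.g. `Temporal.PlainDate.from("2025-12-01").month` is 12, and `add()` returns a new object instead of mutating.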

Temporal reached Stage 3 of the TC39 process in March 2021. Reaching Stage 3 means that the specification is considered complete, and that the proposal is ready for implementation.

SpiderMonkey began our implementation that same month, with the initial work tracked in Bug 1519167. Incredibly, our implementation was not developed by Mozilla employees, but was contributed entirely by a single volunteer, André Bargull. That initial bug consisted of 99 patches, but the work did not stop there, as the specification continued to evolve as problems were found during implementation. Beyond contributing to SpiderMonkey, André filed close to 200 issues against the specification. Bug 1840374 is just one example of the massive amount of work required to keep up to date with the specification.

As of Firefox 139, we’ve enabled our Temporal implementation by default, making us the first browser to ship it. Sometimes it can seem like the ideas of open source, community, and volunteer contributors are a thing of the past, but the example of Temporal shows that volunteers can still have a meaningful impact both on Firefox and on the JavaScript language as a whole.

Interested in contributing?

Not every proposal is as large as Temporal, and we welcome contributions of all shapes and sizes. If you’re interested in contributing to SpiderMonkey, please have a look at our mentored bugs. You don’t have to be an expert :). If your interests are more on the specification side, you can also check out how to contribute to TC39.

Mozilla ThunderbirdVIDEO: The New Account Hub

In this month’s Community Office Hours, we’re chatting with Vineet Deo, a Software Engineer on the Desktop team, who walks us through the new Account Hub on the Desktop app. If you want a sneak peek at this new streamlined experience, you can find it in the Daily channel now and the Beta channel towards the end of April.

Next month, we’ll be chatting with our director Ryan Sipes. We’ll be covering the new Thunderbird Pro and Thundermail announcement and the structure of MZLA compared to the Mozilla Foundation and Corporation. And we’ll talk about how Thunderbird put the fun in fundraising!

March Office Hours: The New Account Hub

Setting up a new email account in Thunderbird is already a solid experience, so why the update? First, account setup is the first thing new users see in the app, so it’s important it has the same clean, cohesive look that is becoming the new Thunderbird design standard. It’s also helpful for users coming from other email clients to have a familiar, wizard-like experience. And while the current account setup works well, it’s browser-based, which makes it possible for a user to exit before finishing and get lost before they’ve even started. That’s the opposite of what we want for potential users!

Vineet and his team are also working to make the new Account Hub ready for Exchange. They also have plans for a similar hub for setting up new address books and calendars. We’re proud of the collaboration between the backend and frontend teams, and between designers and engineers, that made the Account Hub possible.

Watch, Read, and Get Involved

But don’t take our word for it! Watch Vineet’s Account Hub talk and demo, along with a Q&A session. If you’re comfortable testing Daily, you can test this new feature now. (Go to File > New > Email Account to start the experience.) Otherwise, keep an eye on our Beta release channel at the end of April. And if you’re watching this after Account Hub is part of the regular release, now you know the feature’s story!

VIDEO (Also on Peertube):

Get Involved

The post VIDEO: The New Account Hub appeared first on The Thunderbird Blog.

The Rust Programming Language Blogcrates.io security incident: improperly stored session cookies

Today the crates.io team discovered that the contents of the cargo_session cookie were being persisted to our error monitoring service, Sentry, as part of event payloads sent when an error occurs in the crates.io backend. The value of this cookie is a signed value that identifies the currently logged in user, and therefore these cookie values could be used to impersonate any logged in user.

Sentry access is limited to a trusted subset of the crates.io team, Rust infrastructure team, and the crates.io on-call rotation team, who already have access to the production environment of crates.io. There is no evidence that these values were ever accessed or used.

Nevertheless, out of an abundance of caution, we have taken these actions today:

  1. We have merged and deployed a change to redact all cookie values from all Sentry events.
  2. We have invalidated all logged in sessions, thus making the cookies stored in Sentry useless. In effect, this means that every crates.io user has been logged out of their browser session(s).

Note that API tokens are not affected by this: they are transmitted using the Authorization HTTP header, and were already properly redacted before events were stored in Sentry. All existing API tokens will continue to work.
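The actual fix lives in the crates.io backend (which is written in Rust), but the general technique, scrubbing sensitive headers and cookie values from error events before they reach the monitoring service, can be sketched in a few lines. The event shape and function names below are invented for the demo:

```javascript
// Sketch of the general redaction technique (not the actual crates.io
// patch): blank out cookie values in an error event before it is sent
// to an error-monitoring service.
const SENSITIVE_HEADERS = new Set(['cookie', 'set-cookie']);

function redactEvent(event) {
  const req = event.request;
  if (!req) return event;
  // Redact Cookie/Set-Cookie headers, whatever their capitalization.
  for (const name of Object.keys(req.headers || {})) {
    if (SENSITIVE_HEADERS.has(name.toLowerCase())) {
      req.headers[name] = '[REDACTED]';
    }
  }
  // Blank every parsed cookie value rather than dropping the keys, so
  // debugging can still see *which* cookies were present.
  for (const key of Object.keys(req.cookies || {})) {
    req.cookies[key] = '[REDACTED]';
  }
  return event;
}

// Example: the signed session value never reaches the monitoring service.
const event = redactEvent({
  request: {
    headers: { Cookie: 'cargo_session=signed-secret-value' },
    cookies: { cargo_session: 'signed-secret-value' },
  },
});
console.log(event.request.cookies.cargo_session); // "[REDACTED]"
```

Error-monitoring SDKs typically expose a hook of roughly this shape (for example, beforeSend in Sentry's JavaScript SDK) where such scrubbing is applied; the announcement above describes the equivalent server-side change on crates.io.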

We apologise for the inconvenience. If you have any further questions, please contact us on Zulip or GitHub.

Mozilla ThunderbirdThunderbird for Android March 2025 Progress Report

Hello, everyone, and welcome to the Thunderbird for Android March 2025 Progress Report. We’re keeping our community updated on everything that’s been happening in the Android team, which is quickly becoming a more general mobile team with some recent hires. In addition to team news, we’re talking about our roadmap board on GitHub.

Team Changes

In March we said goodbye to cketti, the K-9 Mail maintainer who joined the team when Thunderbird first announced plans for an Android app. We’re very grateful for everything he’s created, and for his trust that K-9 Mail and Thunderbird for Android are in good hands. But we also said hello to Todd Heasley, our new iOS engineer, who started March 26. We also have just added Ashley Soucar, an Android/iOS engineer, who joined us on April 7. If all continues to go well, we’ll also be adding another Android engineer in the next couple of weeks.

Our Roadmap Board

Our roadmap board is now available! We’re grateful to the Council for their trust and support in approving it. As the board will reflect any changes in our planning, this is the most up-to-date source for our upcoming development. Each epic will show its objective and what’s in scope – and as importantly, what’s out of scope. The project information on the side will tell you if an epic is in the backlog or work in progress.

If you’d like to know what we’re working on right now, check out our sprint board.

Contribute by Triaging GitHub Issues

One way to contribute to Thunderbird for Android is by triaging open GitHub Issues. In March, we did a major triage in which over 150 issues were closed as duplicates, marked ‘works for me,’ or promoted to the efforts and features described in the roadmap above. Especially since we’re a small team, triaging helps us know where to act on incoming issues. This is a great way to get started as a Thunderbird for Android contributor.


To start triaging bugs, have a look at the ‘unconfirmed’ issues. Try to reproduce an issue to help verify that it exists, then add a comment with your results and any other information you found that might help narrow it down. If you see users generally saying “it doesn’t work,” ask them for more details or to enable logs. This way we know when to remove the unconfirmed label. If you have questions along the way or need someone to confirm a thought, feel free to ask in the community support channel.

Account Drawer

Our main engineering focus in March has been the account drawer we shared screenshots of in the January/February update. Given that the settings design includes a few non-standard components, we took the opportunity to write a modern settings framework based on Jetpack Compose and to use it for the new drawer. There will be opportunities to contribute here in the future, as we’d like to migrate our old settings UI to the new system.

We have a few crashes and rough edges to polish, but are very close to enabling the feature flag in beta. If you aren’t already using it and want to get early access, install our beta today.

I’d also like to call out a pull request by Clément, who contributed support for a folder hierarchy. The amazing thing here—our design folks were working out a proposal because we were interested in this as well, and without knowing, Clément came up with the same idea and came in with a pull request that really hit the spot. Great work!

Community Contributions

In addition to the folder hierarchy mentioned above, here are a few community activities in March:

  • Shamim made sure the Unified Inbox shows up when you add your second account, retained scroll position in the drawer when rotating, removed font size customizations in favor of Android OS controls, flipped the default for being notified about new email and helped out with a few refactorings to make our codebase more modern.
  • Sergio has improved back button navigation when editing drafts.
  • Salkinnoma made our workflow runs more efficient and fixed an issue in the find folders view where a menu item was incorrectly shown.
  • Smatek improved our edge-to-edge support by making the bottom Android navigation bar background transparent.
  • Husain fixed some inconsistencies when toggling “Show Unified Inbox”.
  • Vayun has begun work to update the Thunderbird for Android app widgets to Jetpack Compose (including dark theming).
  • SttApollo has made the logo size more dynamic in the onboarding screen.

This is quite a list, great work! When you think about Thunderbird for Android or K-9 Mail, what was the last major annoyance you stumbled upon? If you are an Android developer, now is a good time to fix it. You’ll see your name up here next time as well 🙂

The post Thunderbird for Android March 2025 Progress Report appeared first on The Thunderbird Blog.

Mozilla ThunderbirdThunderbird Monthly Development Digest – March 2025

Hello again Thunderbird Community! It’s been almost a year since I joined the project and I’ve recently been enjoying the most rewarding and exciting work days in recent memory. The team who works on making Thunderbird better each day is so passionate about their work and truly dedicated to solving problems for users and supporting the broader developer community. If you are reading this and wondering how you might be able to get started and help out, please get in touch and we would love to get you off the ground!

Paddling Upstream

As many of you know, Thunderbird relies heavily on the Firefox platform and other lower-level code that we build upon. We benefit immensely from the constant flow of improvements, fixes, and modernizations, many of which happen behind the scenes without requiring our input. 

The flip side is that changes upstream can sometimes catch us off guard – and from time to time we find ourselves firefighting after changes have been made. This past month has been especially busy as we’ve scrambled to adapt to unexpected shifts, with our team hunting down places to adjust Content Security Policy (CSP) handling and finding ways to integrate a new experimental whitespace normalizer. Very much not part of our plan, but critical nonetheless.

Calendar UI Rebuild

The implementation of the new event dialog is moving along steadily with the following pieces of the puzzle recently landing:

  • Title
  • Border
  • Location Row
  • Join Meeting button
  • Time & Recurrence

The focus has now turned to loading data into the various containers so that we can enable this feature later this month and ask our QA team and Daily users to help us catch early problems.

Keep track of feature delivery via the [meta] bug 

Exchange Web Services support in Rust

We’re aiming to get a 0.2 release into the hands of Daily and QA testers by the end of April, so a number of remaining tasks are in the queue. March saw several features completed and pushed to Daily:

  • Folder copy/move
  • Sync folder – update
  • Complete composition support (reply/forward)
  • Bug fixes!

Keep track of feature delivery here.

Account Hub

This feature was “preffed on” as the default experience for the Daily build, but recent changes to our OAuth process have required some rework of the user experience, so it won’t hit Beta until the end of the month. It’s beautiful, and well worth considering a switch to Daily if you are currently running Beta.

Global Message Database

The New Zealand team completed a successful work week and have since pushed through a significant chunk of the research and refactoring necessary to integrate the new database with existing interfaces.

The patches are pouring in and are enabling data adapters, sorting, testing and message display for the Local Folders Account, with an aim to get all existing tests to pass with the new database enabled. The path to this goal is often meandering and challenging but with our most knowledgeable and experienced team members dedicated to the project, we’re seeing inspiring progress.

The team maintains their documentation in Sourcedocs, which are visible here.

In-App Notifications

A few last-minute changes were made and uplifted to our ESR version early this month so if you use the ESR and are in the lucky 2% of users targeted, watch out for an introductory notification!
We’ve also wrapped up work on two significant enhancements, which are now on Daily and will make their way to other releases over the course of the month:

  • Granular control of notifications by type via EnterprisePolicy
  • Enhanced triggering mechanism to prevent launch when Thunderbird is in the background

 Meta Bug & progress tracking.

New Features Landing Soon

A number of requested features and important fixes have reached our Daily users this month. We want to give special thanks to the contributors who made the following possible…

As usual, if you want to see and use new features as they land, and help us squash some early bugs, you can try running Daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – March 2025 appeared first on The Thunderbird Blog.

The Rust Programming Language BlogMarch Project Goals Update

The Rust project is currently working towards a slate of 40 project goals, with 3 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Why this goal? This work continues our drive to improve support for async programming in Rust. In 2024H2 we stabilized async closures; explored the generator design space; and began work on the dynosaur crate, an experimental proc-macro to provide dynamic dispatch for async functions in traits. In 2025H1 our plan is to deliver (1) improved support for async-fn-in-traits, completely subsuming the functionality of the async-trait crate; (2) progress towards sync and async generators, simplifying the creation of iterators and async data streams; and (3) improved ergonomics for Pin, making lower-level async coding more approachable. These items together start to unblock the creation of the next generation of async libraries in the wider ecosystem, as progress there has been blocked on a stable solution for async traits and streams.

What has happened? Generators. Initial implementation work has started on an iter! macro experiment in https://github.com/rust-lang/rust/pull/137725. Discussions have centered around whether the macro should accept blocks in addition to closures, whether thunk closures with an empty arguments list should implement IntoIterator, and whether blocks should evaluate to a type that is Iterator as well as IntoIterator. See the design meeting notes for more.

dynosaur. We released dynosaur v0.2.0 with some critical bug fixes and one breaking change. We have several more breaking changes queued up for an 0.3 release line that we also plan to use as a 1.0 candidate.

Pin ergonomics. https://github.com/rust-lang/rust/pull/135733 landed to implement &pin const self and &pin mut self sugars as part of the ongoing pin ergonomics experiment. Another PR is open with an early implementation of applying this syntax to borrowing expressions. There has been some discussion within parts of the lang team on whether to prefer this &pin mut T syntax or &mut pin T, the latter of which applies equally well to Box<pin T> but requires an edition.

No detailed updates available.

Why this goal? May 15, 2025 marks the 10-year anniversary of Rust's 1.0 release; it also marks 10 years since the creation of the Rust subteams. At the time there were 6 Rust teams with 24 people in total. There are now 57 teams with 166 people. In-person All Hands meetings are an effective way to help these maintainers get to know one another with high-bandwidth discussions. This year, the Rust project will be coming together for RustWeek 2025, a joint event organized with RustNL. Participating project teams will use the time to share knowledge, make plans, or just get to know one another better. One particular goal for the All Hands is reviewing a draft of the Rust Vision Doc, a document that aims to take stock of where Rust is and lay out high-level goals for the next few years.

What has happened?

  • Invite more guests, after deciding on who else to invite. (To be discussed today in the council meeting.)
  • Figure out if we can fund the travel+hotel costs for guests too. (To be discussed today in the council meeting.)

Mara has asked all attendees for suggestions for guests to invite. Based on that, Mara has invited roughly 20 guests so far. Only two of them needed funding for their travel, which we can cover from the same travel budget.

  • Open the call for proposals for talks for the Project Track (on Wednesday) as part of the RustWeek conference.

The Rust Project Track at RustWeek has been published: https://rustweek.org/schedule/wednesday/

This track is filled with talks that are relevant to folks attending the all-hands afterwards.

1 detailed update available.

Comment by @m-ou-se posted on 2025-04-01:

  • Invite more guests, after deciding on who else to invite. (To be discussed today in the council meeting.)
  • Figure out if we can fund the travel+hotel costs for guests too. (To be discussed today in the council meeting.)

I've asked all attendees for suggestions for guests to invite. Based on that, I've invited roughly 20 guests so far. Only two of them needed funding for their travel, which we can cover from the same travel budget.


Why this goal? This goal continues our work from 2024H2 of supporting experimental Rust development in the Linux kernel. Whereas in 2024H2 we were focused on stabilizing required language features, our focus in 2025H1 is stabilizing compiler flags and tooling options. We will (1) implement RFC #3716, which lays out a design for ABI-modifying flags; (2) take the first step towards stabilizing build-std by creating a stable way to rebuild core with specific compiler options; and (3) extend rustdoc, clippy, and the compiler with features that extract metadata for integration into other build systems (in this case, the kernel's build system).

What has happened? Most of the major items are in an iteration phase. The rustdoc changes for exporting doctests are the furthest along, with a working prototype; the RFL project has been integrating that prototype and providing feedback. Clippy stabilization now has a pre-RFC and there is active iteration towards support for build-std.

Other areas of progress:

  • We have an open PR to stabilize -Zdwarf-version.
  • The lang and types teams have been discussing the best path forward to resolve #136702. This is a soundness concern that was raised around certain casts, specifically, casts from a type like *mut dyn Foo + '_ (with some lifetime) to *mut dyn Foo + 'static (with a static lifetime). Rust's defaulting rules mean that the latter is more commonly written with a defaulted lifetime, i.e., just *mut dyn Foo, which makes this an easy footgun. This kind of cast has always been dubious, as it disregards the lifetime in a rather subtle way, but when combined with arbitrary self types it permits users to disregard safety invariants, making it hard to enforce soundness (see #136702 for details). The current proposal under discussion in #136776 is to make this sort of cast a hard error, at least outside of an unsafe block; we evaluated the feasibility of doing a future-compatibility warning and found it was infeasible. Crater runs suggest very limited fallout from this soundness fix, but discussion continues about the best set of rules to adopt so as to balance minimizing fallout with overall language simplicity.
2 detailed updates available.

Comment by @nikomatsakis posted on 2025-03-13:

Update from our 2025-03-12 meeting (full minutes):

  • RFL team requests someone to look at #138368 which is needed by kernel, @davidtwco to do so.
  • -Zbinary-dep-info may not be needed; RFL may be able to emulate it.
  • rustdoc changes for exporting doctests are being incorporated. @GuillaumeGomez is working on the kernel side of the feature too. @ojeda thinks it would be a good idea to do it in a way that does not tie both projects too much, so that rustdoc has more flexibility to change the output later on.
  • Pre-RFC authored for clippy stabilization.
  • Active iteration on the build-std design; feedback being provided by cargo team.
  • @wesleywiser sent a PR to stabilize -Zdwarf-version.
  • RfL doesn't use cfg(no_global_oom_handling) anymore. Soon, stable/LTS kernels that support several Rust versions will not use it either. Thus upstream Rust could potentially remove the cfg without breaking Linux, though other users like Windows may be still using it (#t-libs>no_global_oom_handling removal).
  • Some discussion about best way forward for disabling orphan rule to allow experimentation with no firm conclusion.

Comment by @nikomatsakis posted on 2025-03-26:

Updates from today's meeting:

Finalizing 2024h2 goals

ABI-modifying compiler flags

Extract dependency information, configure no-std externally (-Zcrate-attr)

Rustdoc features to extract doc tests

  • No update.

Clippy configuration

  • Pre-RFC was published but hasn't (to our knowledge) made progress. Would be good to sync up on next steps with @flip1995.

Build-std

  • No update. Progress will resume next week when the contributor working on this returns from holiday.

-Zsanitize-kcfi-arity


Goals looking for help

Help wanted: Help test the deadlock code in the issue list and try to reproduce the issues. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @SparrowLii posted on 2025-03-18:

  • Key developments: Several deadlock issues that had remained open for more than a year were resolved by #137731. The new test suite for the parallel front end is being improved.
  • Blockers: none
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issues

Help wanted: T-compiler people to work on the blocking issues #119428 and #71043. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @epage posted on 2025-03-17:

  • Key developments: @tgross35 got rust-lang/rust#135501 merged, which made progress on rust-lang/rust#119428, one of the two main blockers. In rust-lang/rust#119428, we've discussed further designs and trade-offs.
  • Blockers: Further work on rust-lang/rust#119428 and rust-lang/rust#71043
  • Help wanted: T-compiler people to work on those above issues.

Other goal updates

1 detailed update available.

Comment by @BoxyUwU posted on 2025-03-17:

camelids PR has been merged, we now correctly (to the best of my knowledge) lower const paths under mgca. I have a PR open to ensure that we handle evaluation of paths to consts with generics or inference variables correctly, and that we do not attempt to evaluate constants before they have been checked to be well formed. I'm also currently mentoring someone to implement proper handling of normalization of inherent associated constants under mgca.

1 detailed update available.

Comment by @davidtwco posted on 2025-03-03:

A small update, @adamgemmell shared revisions to the aforementioned document, further feedback to which is being addressed.

Earlier this month, we completed one checkbox of the goal: #[doc(hidden)] in sealed trait analysis, live in cargo-semver-checks v0.40. We also made significant progress on type system modeling, which is part of two more checkboxes.

  • We shipped method receiver types in our schema, enabling more than a dozen new lints.
  • We have a draft schema for ?Sized bounds, and are putting the finishing touches on 'static and "outlives" bounds. More lints will follow here.
  • We also have a draft schema for the new use<> precise capturing syntax.

Additionally, cargo-semver-checks is participating in Google Summer of Code, so this month we had the privilege of merging many contributions from new contributors who are considering applying for GSoC with us! We're looking forward to this summer, and would like to wish the candidates good luck in the application process!

1 detailed update available.

Comment by @obi1kenobi posted on 2025-03-08:

Key developments:

  • Sealed trait analysis correctly handles #[doc(hidden)] items. This completes one checkbox of this goal!
  • We shipped a series of lints detecting breakage in generic types, lifetimes, and const generics. One of them has already caught accidental breakage in the real world!

cargo-semver-checks v0.40, released today, includes a variety of improvements to sealed trait analysis. They can be summarized as "smarter, faster, more correct," and will have an immediate positive impact on popular crates such as diesel and zerocopy.

While we already shipped a series of lints detecting generics-related breakage, more work is needed to complete that checkbox. This, and the "special cases like 'static and ?Sized", will be the focus of upcoming work.

1 detailed update available.

Comment by @tmandry posted on 2025-03-25:

Since our last update, there has been talk of dedicating some time at the Rust All Hands for interop discussion; @baumanj and @tmandry are going to work on fleshing out an agenda. @cramertj and @tmandry brainstormed with @oli-obk (who was very helpful) about ways of supporting a more ambitious "template instantiation from Rust" goal, and this may get turned into a prototype at some point.

There is now an early prototype available that allows you to write x.use; if the type of x implements UseCloned, then this is equivalent to x.clone(), otherwise it is equivalent to a move. This is not the desired end semantics in a few ways, just a step along the road. Nothing to see here (yet).

1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update: rust-lang/rust#134797 has landed.

Semantics as implemented in the PR:

  • Introduced a trait UseCloned implemented for Rc and Arc types.
  • x.use checks whether x's type X implements the UseCloned trait; if so, then x.use is equivalent to x.clone(), otherwise it is a copy/move of x;
  • use || ...x... closures act like move closures but respect the UseCloned trait, so they will either clone, copy, or move x as appropriate.

Next steps:

  • Modify codegen so that we guarantee that x.use will do a copy if X: Copy is true after monomorphization. Right now the desugaring to clone occurs before monomorphization and hence it will call the clone method even for those instances where X is a Copy type.
  • Convert x.use to a move rather than a clone if this is a last-use.
  • Make x equivalent to x.use but with an (allow-by-default) lint to signal that something special is happening.

Notable decisions made and discussions:

  • Opted to name the trait that controls whether x.use does a clone or a move UseCloned rather than Use. This is because the trait does not control whether or not you can use something but rather controls what happens when you do.
  • A question was raised on Zulip as to whether x.use should auto-deref. After thinking it over, we reached the conclusion that it should not, because x and x.use should eventually behave the same modulo lints, but that (as ever) a &T -> T coercion would be useful for ergonomic reasons.
1 detailed update available.

Comment by @ZuseZ4 posted on 2025-03-25:

I just noticed that I missed my February update, so I'll keep this update a bit more high-level, to not make it too long.

Key developments:

  1. All key autodiff PRs got merged. So after building rust-lang/rust with the autodiff feature enabled, users can now use it, without the need for any custom fork.
  2. std::autodiff received the first PRs from new contributors who have not previously been involved in rustc development! My plan is to grow a team to maintain this feature, so that's a great start. The PRs are here, here and here. Over time I hope to hand over increasingly larger issues.
  3. I received an offer to join the Rust compiler team, so now I can also officially review and approve PRs! For now I'll focus on reviewing PRs in the fields I'm most comfortable with, so autodiff, batching, and soon GPU offload.
  4. I implemented a standalone batching feature. It was a bit larger (~2k LoC) and needed some (back then unmerged) autodiff PRs, since they both use the same underlying Enzyme infrastructure. I therefore did not push for merging it.
  5. I recently implemented batching as part of the autodiff macro, for people who want to use both together. I subsequently split out a first set of code improvements and refactorings, which already got merged. The remaining autodiff feature PR is only 600 LoC, so I'm currently cleaning it up for review.
  6. I spent time preparing an MCP to enable autodiff in CI (and therefore nightly). I also spent a lot of time discussing a potential MLIR backend for rustc. Please reach out if you want to be involved!

**Help wanted:** We want to support autodiff in lib builds, instead of only binaries. oli-obk and I recently figured out the underlying bug, and I started on a PR in https://github.com/rust-lang/rust/pull/137570. The problem is that autodiff assumes fat-lto builds, but lib builds compile some of the library code using thin-lto, even if users specify lto=fat in their Cargo.toml. As a temporary solution we want to move everything to fat-lto if autodiff is enabled, and later move towards embed-bc as a longer-term solution. If you have some time to help, please reach out! Some of us have already looked into it a little but got side-tracked, so it's better to talk first about which code to re-use, rather than starting from scratch.

I also booked my RustWeek ticket, so I'm happy to talk about all types of Scientific Computing, HPC, ML, or cursed Rust(c) and LLVM internals! Please feel free to dm me if you're also going and want to meet.

1 detailed update available.

Comment by @Eh2406 posted on 2025-03-14:

Progress continues to be stalled by high priority tasks for $DAY_JOB. It continues to be unclear when the demands of work will allow me to return focus to this project.

1 detailed update available.

Comment by @epage posted on 2025-03-17:

  • Key developments:
    • Between tasks on #92, I've started to refresh myself on the libtest-next code base
  • Blockers:
  • Help wanted:

We've started work on implementing #[loop_match] on this branch. For the time being, integer and enum patterns are supported. The benchmarks are extremely encouraging, showing large improvements over the status quo, and significant improvements versus -Cllvm-args=-enable-dfa-jump-thread.

Our next steps can be found in the todo file, and focus mostly on improving the code quality and robustness.

3 detailed updates available.

Comment by @folkertdev posted on 2025-03-18:

@traviscross how would we make progress on that? So far we've mostly been talking to @joshtriplett, under the assumption that a #[loop_match] attribute on loops combined with a #[const_continue] attribute on "jumps to the next iteration" will be acceptable as a language experiment.

Our current implementation handles the following:

#![feature(loop_match)]

enum State {
    A,
    B,
}

fn main() {
    let mut state = State::A;
    #[loop_match]
    'outer: loop {
        state = 'blk: {
            match state {
                State::A =>
                {
                    #[const_continue]
                    break 'blk State::B
                }
                State::B => break 'outer,
            }
        }
    }
}

Crucially, this does not add syntax, only the attributes and internal logic in MIR lowering for statically performing the pattern match to pick the right branch to jump to.

The main challenge is then to implement this in the compiler itself, which we've been working on (I'll post our tl;dr update shortly)

Comment by @folkertdev posted on 2025-03-18:

Some benchmarks (as of March 18th)

A benchmark of https://github.com/bjorn3/comrak/blob/loop_match_attr/autolink_email.rs, basically a big state machine that is a perfect fit for loop match

Benchmark 1: ./autolink_email
  Time (mean ± σ):      1.126 s ±  0.012 s    [User: 1.126 s, System: 0.000 s]
  Range (min … max):    1.105 s …  1.141 s    10 runs
 
Benchmark 2: ./autolink_email_llvm_dfa
  Time (mean ± σ):     583.9 ms ±   6.9 ms    [User: 581.8 ms, System: 2.0 ms]
  Range (min … max):   575.4 ms … 591.3 ms    10 runs
 
Benchmark 3: ./autolink_email_loop_match
  Time (mean ± σ):     411.4 ms ±   8.8 ms    [User: 410.1 ms, System: 1.3 ms]
  Range (min … max):   403.2 ms … 430.4 ms    10 runs
 
Summary
  ./autolink_email_loop_match ran
    1.42 ± 0.03 times faster than ./autolink_email_llvm_dfa
    2.74 ± 0.07 times faster than ./autolink_email

#[loop_match] beats the status quo, but also beats the llvm flag by a large margin.


A benchmark of zlib decompression with chunks of 16 bytes (this makes the impact of loop_match more visible)

Benchmark 1 (65 runs): target/release/examples/uncompress-baseline rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          77.7ms ± 3.04ms    74.6ms … 88.9ms          9 (14%)        0%
  peak_rss           24.1MB ± 64.6KB    24.0MB … 24.2MB          0 ( 0%)        0%
  cpu_cycles          303M  ± 11.8M      293M  …  348M           9 (14%)        0%
  instructions        833M  ±  266       833M  …  833M           0 ( 0%)        0%
  cache_references   3.62M  ±  310K     3.19M  … 4.93M           1 ( 2%)        0%
  cache_misses        209K  ± 34.2K      143K  …  325K           1 ( 2%)        0%
  branch_misses      4.09M  ± 10.0K     4.08M  … 4.13M           5 ( 8%)        0%
Benchmark 2 (68 runs): target/release/examples/uncompress-llvm-dfa rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          74.0ms ± 3.24ms    70.6ms … 85.0ms          4 ( 6%)        🚀-  4.8% ±  1.4%
  peak_rss           24.1MB ± 27.1KB    24.0MB … 24.1MB          3 ( 4%)          -  0.1% ±  0.1%
  cpu_cycles          287M  ± 12.7M      277M  …  330M           4 ( 6%)        🚀-  5.4% ±  1.4%
  instructions        797M  ±  235       797M  …  797M           0 ( 0%)        🚀-  4.3% ±  0.0%
  cache_references   3.56M  ±  439K     3.08M  … 5.93M           2 ( 3%)          -  1.8% ±  3.6%
  cache_misses        144K  ± 32.5K     83.7K  …  249K           2 ( 3%)        🚀- 31.2% ±  5.4%
  branch_misses      4.09M  ± 9.62K     4.07M  … 4.12M           1 ( 1%)          -  0.1% ±  0.1%
Benchmark 3 (70 runs): target/release/examples/uncompress-loop-match rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          71.6ms ± 2.43ms    69.3ms … 78.8ms          6 ( 9%)        🚀-  7.8% ±  1.2%
  peak_rss           24.1MB ± 72.8KB    23.9MB … 24.2MB         20 (29%)          -  0.0% ±  0.1%
  cpu_cycles          278M  ± 9.59M      270M  …  305M           7 (10%)        🚀-  8.5% ±  1.2%
  instructions        779M  ±  277       779M  …  779M           0 ( 0%)        🚀-  6.6% ±  0.0%
  cache_references   3.49M  ±  270K     3.15M  … 4.17M           4 ( 6%)        🚀-  3.8% ±  2.7%
  cache_misses        142K  ± 25.6K     86.0K  …  197K           0 ( 0%)        🚀- 32.0% ±  4.8%
  branch_misses      4.09M  ± 7.83K     4.08M  … 4.12M           1 ( 1%)          +  0.0% ±  0.1%
Benchmark 4 (69 runs): target/release/examples/uncompress-llvm-dfa-loop-match rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          72.8ms ± 2.57ms    69.7ms … 80.0ms          7 (10%)        🚀-  6.3% ±  1.2%
  peak_rss           24.1MB ± 35.1KB    23.9MB … 24.1MB          2 ( 3%)          -  0.1% ±  0.1%
  cpu_cycles          281M  ± 10.1M      269M  …  312M           5 ( 7%)        🚀-  7.5% ±  1.2%
  instructions        778M  ±  243       778M  …  778M           0 ( 0%)        🚀-  6.7% ±  0.0%
  cache_references   3.45M  ±  277K     2.95M  … 4.14M           0 ( 0%)        🚀-  4.7% ±  2.7%
  cache_misses        176K  ± 43.4K      106K  …  301K           0 ( 0%)        🚀- 15.8% ±  6.3%
  branch_misses      4.16M  ± 96.0K     4.08M  … 4.37M           0 ( 0%)        💩+  1.7% ±  0.6%

The important points: loop-match is faster than llvm-dfa, and when the two are combined, performance is slightly worse than with loop-match on its own.

Comment by @traviscross posted on 2025-03-18:

Thanks for that update. Have reached out separately.

1 detailed update available.

Comment by @celinval posted on 2025-03-17:

We have been able to merge the initial support for contracts in the Rust compiler under the contracts unstable feature. @tautschnig has created the first PR to incorporate contracts in the standard library and uncovered a few limitations that we've been working on.

1 detailed update available.

Comment by @jieyouxu posted on 2025-03-15:

Update (2025-03-15):

  • Doing a survey pass on compiletest to make sure I have the full picture.
1 detailed update available.

Comment by @yaahc posted on 2025-03-03:

After further review, I've decided to limit scope initially and not get ahead of myself, so I can make sure the schemas I'm working with can support the kinds of queries and charts we're going to eventually want in the final version of the unstable feature usage metric. I'm hoping that by limiting scope I can have most of the items currently outlined in this project goal done ahead of schedule, so I can move on to building the proper foundations based on the proof of concept and start to design more permanent components. As such I've opted for the following:

  • Making the minimal change I need to the current JSON format, which is including the timestamp
  • Gaining clarity on exactly what questions I should be answering with the unstable feature usage metrics, the desired graphs and tables, and how this influences what information I need to gather and how to construct the appropriate queries within Grafana
  • Gathering a sample dataset from docs.rs rather than viewing it as the long-term integration, since there are definitely some sample-set bias issues in that dataset, based on initial conversations with docs.rs
    • Figuring out the proper hash/id to use in the metrics file names to avoid collisions between different conditional compilation variants of the same crate with different features enabled

For the second item above, I need to have more detailed conversations with both @rust-lang/libs-api and @rust-lang/lang.

1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update:

@tiif has been working on integrating const-generic effects into a-mir-formality and making good progress.

I have begun exploring integration of the MiniRust definition of MIR. This doesn't directly work towards the goal of modeling coherence but it will be needed for const generic work to be effective.

I am considering some simplification and cleanup work as well.

1 detailed update available.

Comment by @lcnr posted on 2025-03-17:

The two cycle handling PRs mentioned in the previous update have been merged, allowing nalgebra to compile with the new solver enabled. I have now started to work on opaque types in borrowck again. This is a quite involved issue and will likely take a few more weeks until it's fully implemented.

1 detailed update available.

Comment by @veluca93 posted on 2025-03-17:

Key developments: Started investigating how the proposed SIMD multiversioning options might fit in the context of the efforts for formalizing a Rust effect system

1 detailed update available.

Comment by @blyxyas posted on 2025-03-17:

Monthly update!

  • https://github.com/rust-lang/rust-clippy/issues/13821 has been merged. This has successfully optimized the MSRV extraction from the source code.

With the old MSRV extraction, Symbol::intern use was sky-high, about 3.5 times higher than the rest of the compilation combined. Now, it's at normal levels. Note that Symbol::intern is a very expensive and locking function, so this is very notable. Thanks to @Alexendoo for this incredible work!

As a general note on the month, I'd say that we've experimented a lot.

  • Starting efforts on parallelizing the lint system.
  • Started taking a deeper look into our dependence on libLLVM.so and heavy relocation problems (https://github.com/rust-lang/rust-clippy/issues/14423).
  • I took a look into heap allocation optimization; it seems that we are fine. For the moment, rust-clippy#14423 is the priority.
1 detailed update available.

Comment by @oli-obk posted on 2025-03-20:

I opened an RFC (https://github.com/rust-lang/rfcs/pull/3762) and we had a lang team meeting about it. After some design exploration and bikeshedding, we have settled on using (const) instead of ~const, along with some more annotations for explicitness and some fewer annotations in other places. The RFC has been updated accordingly. There are still ongoing discussions about reintroducing the "fewer annotations" for redundancy and easier processing by humans.

2 detailed updates available.

Comment by @JoelMarcey posted on 2025-03-14:

Key Developments: Working on a public announcement of Ferrous' contribution of the FLS. Goal is to have that released soon. Also working out the technical details of the contribution, particularly around how to initially integrate the FLS into the Project itself.

Blockers: None yet.

Comment by @JoelMarcey posted on 2025-04-01:

Key Developments: Public announcement of the FLS donation to the Rust Project.

Blockers: None

2 detailed updates available.

Comment by @celinval posted on 2025-03-20:

We have proposed a project idea to Google Summer of Code to implement the refactoring and infrastructure improvements needed for this project. I'm working on breaking down the work into smaller tasks so they can be implemented incrementally.

Comment by @celinval posted on 2025-03-20:

I am also happy to share that @makai410 is joining us in this effort! 🥳

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-03-03:

Update: February goal update has been posted. We made significant revisions to the way that goal updates are prepared. If you are a goal owner, it's worth reading the directions for how to report your status, especially the part about help wanted and summary comments.

Comment by @nikomatsakis posted on 2025-03-17:

Update: We sent out the first round of pings for the March update. The plan is to create the document on March 25th, so @rust-lang/goal-owners please get your updates in by then. Note that you can create a TL;DR comment if you want to add 2-3 bullet points that will be embedded directly into the final blog post.

In terms of goal planning:

  • @nandsh is planning to do a detailed retrospective on the goals program in conjunction with her research at CMU. Please reach out to her on Zulip (Nandini) if you are interested in participating.
  • We are planning to overhaul the ping process as described in this hackmd. In short, pings will come on the 2nd/3rd Monday of the month. No pings will be sent if you've posted a comment that month. The blog post will be prepared on the 3rd Friday.
  • We've been discussing how to structure 2025H2 goals and are thinking of making a few changes. We'll break out three categories of goals (Flagship / Core / Stretch), with "Core" goals being those deemed most important. We'll also have a 'pre-read' before the RFC opens with team leads to look for cross-team collaborative opportunities. At least that's the current plan.
  • We drafted a Rust Vision Doc Action Plan.
  • We expect to publish our announcement blog post by the end of the month, including a survey requesting volunteers to speak with us. We are also creating plans for interviews with company contacts, global community groups, and Rust maintainers.
1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update:

I've asked @jackh726 to co-lead the team with me. Together we pulled together a Rust Vision Doc action plan.

The plan begins by posting a blog post (draft available here) announcing the effort. We are coordinating with the Foundation to create a survey which will be linked from the blog post. The survey questions ask about users' experience but also look for volunteers we can speak with.

We are pulling together the team that will perform the interviewing. We've been in touch with UX researchers who will brief us on some of the basics of UX research. We're finalizing team membership now, plus the set of focus areas; we expect to cover at least users/companies, Rust project maintainers, and Rust global communities. See the Rust Vision Doc action plan for more details.

1 detailed update available.

Comment by @davidtwco posted on 2025-03-03:

A small update, @Jamesbarford aligned with @kobzol on a high-level architecture and will begin fleshing out the details and making some small patches to rustc-perf to gain familiarity with the codebase.

1 detailed update available.

Comment by @lqd posted on 2025-03-24:

Here are the key developments for this update.

Amanda has continued on the placeholder removal task, in particular on the remaining issues with rewritten type tests. The in-progress work caused incorrect errors to be emitted under the rewrite scheme, and a new strategy to handle these was discussed. This has been implemented in the PR, and seems to work as hoped. So the PR should now be in a state that is ready for a more in-depth review pass, and should hopefully land soon.

Tage has started his master's thesis with a focus on the earliest parts of the borrow checking process, in order to experiment with graded borrow-checking, incrementalism, avoiding work that's not needed for loans that are not invalidated, and so on. A lot of great progress has been made on these parts already, and more are being discussed even in the later areas (live and active loans).

I have focused on taking care of the remaining diagnostics and test failures of the location-sensitive analysis. For diagnostics in particular, the PRs mentioned in the previous updates have landed, and I've fixed a handful of NLL spans, all the remaining differences under the compare-mode, and blessed the differences that were improvements. For the test failures, handling liveness differently in traversal fixed most of the remaining failures, while a couple are due to friction with the mid-point avoidance scheme. For these, we have a few different paths forward, but with different trade-offs, and we'll be discussing and evaluating these in the very near future. Another two are still left to analyze in depth to see what's going on.

Our near-future focus will be to continue down the path to correctness while also expanding test coverage that feels lacking in certain very niche areas that we want to improve. At the same time, we'll also work on figuring out a better architecture to streamline the entire end-to-end process, to allow early outs, avoid work that is not needed, etc.

1 detailed update available.

Comment by @lqd posted on 2025-03-26:

This project goal was actually carried over from 2024h2, in https://github.com/rust-lang/rust-project-goals/pull/294

2 detailed updates available.

Comment by @davidtwco posted on 2025-03-03:

A small update, we've opened a draft PR for the initial implementation of this - rust-lang/rust#137944. Otherwise, just continued to address feedback on the RFCs.

Comment by @davidtwco posted on 2025-03-18:

  • We've been resolving review feedback on the implementation of the Sized Hierarchy RFC on rust-lang/rust#137944. We're also working on reducing the performance regression in the PR, by avoiding unnecessary elaboration of sizedness supertraits and extending the existing Sized case in type_op_prove_predicate query's fast path.
  • There's not been any changes to the RFC, there's minor feedback that has yet to be responded to, but it's otherwise just waiting on t-lang.
  • We've been experimenting with rebasing rust-lang/rust#118917 on top of rust-lang/rust#137944 to confirm that const sizedness allows us to remove the type system exceptions that the SVE implementation previously relied on. We're happy to confirm that it does.
1 detailed update available.

Comment by @Muscraft posted on 2025-03-31:

While my time was limited these past few months, lots of progress was made! I was able to align annotate-snippets internals with rustc's HumanEmitter and get the new API implemented. These changes have not been merged yet, but they can be found here. As part of this work, I started making rustc use annotate-snippets as its only renderer, which turned out to be a huge benefit: I was able to get a feel for the new API while addressing rendering divergences. As of the time of writing, all but ~30 of the roughly 18,000 UI tests are passing.

test result: FAILED. 18432 passed; 29 failed; 193 ignored; 0 measured; 0 filtered out; finished in 102.32s

Most of the failing tests are caused by a few things:

  • annotate-snippets right aligns numbers, whereas rustc left aligns
  • annotate-snippets doesn't handle multiple suggestions for the same span very well
  • Problems with handling FailureNote
  • annotate-snippets doesn't currently support colored labels and titles, i.e., the magenta highlight rustc uses
  • rustc wants to pass titles similar to error: internal compiler error[E0080], but annotate-snippets doesn't support that well
  • differences in how rustc and annotate-snippets handle term width during tests
    • When testing, rustc uses DEFAULT_COLUMN_WIDTH and does not subtract the code offset, while annotate-snippets does
  • Slight differences in how "newline"/end of line highlighting is handled
  • JSON output rendering contains color escapes

Frederik Braun: With Carrots & Sticks - Can the browser handle web security?

NB: This is the blog version of my keynote from Measurements, Attacks, and Defenses for the Web (MADWeb) 2025, earlier this year. It was not recorded.

In my keynote, I examined web security through the browser's perspective. Various browser features have helped fix transport security issues and increase HTTPS adoption …

Firefox Nightly: Putting up Wallpaper – These Weeks in Firefox: Issue 178


Highlights

  • Custom Wallpapers for New Tab are undergoing further refinement and bugfixing! Amy just fixed an issue which would cause the custom wallpaper image to flash under certain circumstances.
    • This can be tested in Nightly by visiting Firefox Labs in about:preferences and making sure “Choose a custom wallpaper or colour for New Tab” is checked.
  • Profile Management
    • We are on track to ship our initial feature set to Beta and 0.5% of Release in Firefox 138!
    • We’ve been enabled in Nightly for a while, but to try this out in 138 Beta/Release, flip the browser.profiles.enabled pref to true
  • Nicolas Chevobbe fixed an 11-year-old bug by improving the performance of StyleEditor autocomplete for a specific case that would end up freezing/crashing Firefox!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug
  • Carlos
  • Chris Shiohama
  • cob.bzmoz
  • Harold Camacho
  • Shane Ziegler
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions
Addon Manager & about:addons
WebExtensions Framework
  • Thanks to Florian, the last remaining WebExtensions telemetry recorded as legacy telemetry scalars and histograms have been migrated to Glean (and mirrored to legacy telemetry through GIFFT) – Bug 1953106
  • Fixed manifest validation error on manifests using an empty background.scripts property – Bug 1954637
DevTools
WebDriver BiDi
Fluent
Lint, Docs and Workflow
  • Julien fixed an issue where the ESLint configuration was defining ContentTaskUtils as a global variable available for all tests (it is only available within certain test functions).
New Tab Page
  • New Tab is now packaged as a built-in addon on the Beta channel! This sets us up to try our first pilot out-of-band update to New Tab sometime in May.
  • Nathan has added ASRouter / OMC plumbing to make it possible to show onboarding messages inline within New Tab. The first use of this capability will be to highlight the new Custom Wallpapers feature
Performance
Places
  • Moritz has fixed cases where bookmarks were sorted wrongly in the view after moving multiple of them at once. Bug 1557853
Profile Management
  • Big picture
    • 100% release timeline planning is well underway with Nimbus, OMC, and DI teams
    • We are starting to look at testing across multiple Firefox instances using Marionette or using background tasks. Reach out if you have suggestions or ideas.
  • Bugs fixed in the past 2 weeks:
    • tschuster fixed bug 1883387, suppressing a telemetry error shown at startup on linux
    • Jared fixed bug 1933264 and bug 1956105 to locally propagate changes to data policy preferences between profiles in a group
    • Teddy fixed bug 1934921 – Voice over reads the “Edit your profile” title as Article
    • Cieara fixed bug 1949022 – ‘Customize your new profile’ does not have the correct heading level
    • Teddy fixed bug 1950198 – Correct styling details for profile editor
    • Teddy fixed bug 1950199 – Correct styling details for profile toolbar menu
    • Niklas fixed bug 1950250 – <img> used in Theme radios buttons need null alt text
    • Niklas fixed bug 1952985 – Update theme names
    • Jared fixed bug 1955222 – remove profile name input autofocus on about:editprofile and about:newprofile pages to improve screen reader usability with NVDA
    • Dave fixed bug 1926997 – Selectable Profile directory permissions are incorrect
    • Cieara fixed bug 1955036 – Double focus ring on edit button in Profiles submenu of FxA toolbar button menu
    • Niklas fixed bug 1955397 – Avatars and profiles panel and cards display mixed themes colours after switching themes
    • Teddy fixed bug 1956286 – The selected theme and avatar don’t remain focused, as specified in the Figma guidelines
    • Cieara fixed bug 1955244 – Update the SUMO URL in the profiles Learn More links
    • Dave fixed bug 1954832 – [macOS] The Original profile can’t be reached when all the other profiles are Unicode-named
Search and Navigation

Don Marticonverting PDFs for Tesseract

There are two kinds of PDFs. Some have real embedded text that you can select in a PDF reader, and some are just images.

The second kind is what I sometimes get in response to a CCPA/CPRA Right to Know (RtK). Some companies, for whatever reason, want to make it harder to do automated processing of multiple RtKs. This should make privacy researchers more likely to look at them: what are they hiding? They must be up to something.

But the PDF still needs to get run through some kind of OCR. Tesseract OCR has been giving me pretty good results, but it needs to be fed images, not PDFs.

So I have been feeding the PDFs to pdf2image in Python, then passing the images to Tesseract. But it turns out that Tesseract works a lot better with higher-resolution images, and the default for pdf2image is 200 DPI. So I’m getting a lot more accurate OCR by making the images oversized with the dpi named parameter:

pages = pdf2image.convert_from_bytes(blob, dpi=600)

I might tweak this and try 300 DPI, or also try passing grayscale=True to preserve more information. Some other approaches to try next, if I need them.
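Put together, the whole pipeline described above can be sketched as follows. pdf2image and pytesseract are third-party packages (and pytesseract needs the Tesseract binary installed); ocr_pdf is a hypothetical helper name, not code from this post:

```python
def ocr_pdf(blob: bytes, dpi: int = 600) -> str:
    """Render each PDF page at high DPI, then OCR every page with Tesseract."""
    # Third-party packages, imported lazily so the sketch stands alone:
    # pip install pdf2image pytesseract (pdf2image also needs poppler).
    import pdf2image
    import pytesseract

    # Oversized DPI trades rendering time and memory for noticeably
    # better OCR accuracy; 300 may be a reasonable middle ground.
    pages = pdf2image.convert_from_bytes(blob, dpi=dpi)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)
```

Calling ocr_pdf(open("rtk-response.pdf", "rb").read()) would return the recognized text for all pages joined with newlines.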

Anyway, Meta (Facebook) made some of their info easy to parse (in JSON format) and got some of us to do research on them. Some of the other interesting companies, though, are going to be those who put in the time to obfuscate their responses to RtKs.

Related

OCRmyPDF is an all-in-one tool that adds a text layer to the PDF. Uses Tesseract internally. When possible, inserts OCR information as a “lossless” operation without disrupting any other content. Thanks to Gaurav Ujjwal for the link. (I’m doing an OCR step as part of ingesting PDFs into a database, so I don’t need to see the text, but this could be good for PDFs that you actually want to read and not just do aggregated reporting on.)

Example of where GDPR compliance doesn’t get you CCPA compliance: This is the mistake that Honda recently made. CCPA/CPRA is not just a subset of GDPR. GDPR allows a company to verify an objection to processing, but CCPA does not allow a company to verify an opt out of sale. (IMHO the EU should harmonize by adopting the California good-faith, reasonable, and documented belief that a request to opt-out is fraudulent standard for objections to processing.)

New Report: Many Companies May Be Ignoring Opt-Out Requests Under State Privacy Laws - Innovation at Consumer Reports The study examined 40 online retailers and found that many of them appear to be ignoring opt-out requests under state privacy laws. (A lot more companies are required to comply with CCPA/CPRA than there are qualified compliance managers. Even if companies fix some of the obvious problems identified in this new CR report, there are still a bunch of data transfers that are obvious detectable violations if a GPC flag wasn’t correctly set for a user in the CRM system. You can’t just fix the cookie—GPC also has to cover downstream usage such as custom audiences and server-to-server APIs.)

Bonus links

EU may “make an example of X” by issuing $1 billion fine to Musk’s social network by Jon Brodkin at Ars Technica. (A lot of countries don’t need to raise their own tariffs in order to retaliate against the USA’s tariffs. They just need to stop letting US companies slide when they violate laws over there. If they can’t rely on the USA for regional security, there’s no reason not to. Related: US Cloud soon illegal? at noyb.eu)

Big Tech Backed Trump for Acceleration. They Got a Decel President Instead by Emanuel Maiberg and Jason Koebler at 404 Media. Unless Trump folds, the tariffs will make the price of everything go up. Unemployment will go up. People will buy less stuff, and companies will spend less money on advertising that powers tech platforms. The tech industry, which has thrived on the cheap labor, cheap parts, cheap manufacturing, and supply chains enabled by free and cheap international trade, will now have artificial costs and bureaucracy tacked onto all of this. The market knows this, which is why tech stocks are eating shit. (Welcome to the weak men create hard times phase—but last time we had one of these, the dismal Microsoft monopoly days are when we got the web and Linux scenes that evolved into today’s Big Tech. Whatever emerges from the high-unemployment, import-denied generation, it’s going to surprise us.)

Alternative to Starlink: Eutelsat Provides Ukraine With Access to Satellite Internet by Taras Safronov. According to Berneke, Eutelsat has been providing high-speed satellite Internet services in Ukraine through a German distributor for about a year.

The coming pro-smoking discourse by Max Read. (Then: social media is the new smoking. Now: smoking is the new social media?)

Signal sees its downloads double after scandal by Sarah Perez on TechCrunch. Appfigures chalks up the doubling of downloads to the old adage all press is good press, as the scandal increased Signal’s visibility and likely introduced the app to thousands of users for the first time. (Signal is also, according to traders on Manifold Markets, the e2e messaging program least likely to provide message content to US law enforcement. Both Apple, the owner of iMessage, and Meta, the owner of WhatsApp, have other businesses that governments can lean on in order to get cooperation. Signal just has e2e software and reputation, so fewer points of leverage.)

Substack rival Ghost is now connected to the fediverse also by Sarah Perez. Per-byline RSS feeds ftw. Check some reporter pages on there, such as Sarah Perez, Author at TechCrunch and Natasha Lomas, Author at TechCrunch with the RSSPreview extension installed. (I’m cautiously optimistic that ActivityPub might be able to address the comments and pingback problems for blogs and small sites in ways that SaaS comments didn’t and Twitter at its peak almost did.)

YouTube removes ‘gender identity’ from hate speech policy by Taylor Lorenz (In the medium term, a lot of the moderation changes at Big Tech are going to turn into a recruiting challenge for hiring managers in marketing departments. If an expected part of working in marketing is going to be mandatory involvement in sending money to weird, creepy right-wing dudes, that means you’re mostly going to get to hire…weird, creepy right-wing dudes.) Related: slop capitalism and dead internet theory by Adam Aleksic. Our best way of fighting back? Spend as little time on algorithmic media as possible, strengthen our social ties, and gather information from many different sources—remembering that the platforms are the real enemy.

William LachanceElectrification and solar

I did up an evidence dashboard with some (hopefully) data-driven thoughts on the environmental and financial aspects of heat pump and solar technology in the Greater Toronto / Hamilton area:

wlach.github.io/gtha-electrification

Evidence is pretty neat: very close to what I originally had in mind when building Irydium a few years ago at Mozilla and Recurse (see previous entries in this journal).

Mozilla ThunderbirdThundermail and Thunderbird Pro Services

Today we’re pleased to announce what many in our open source contributor community already know. The Thunderbird team is working on an email service called “Thundermail” as well as file sharing, calendar scheduling and other helpful cloud-based services that, as a bundle, we have been calling “Thunderbird Pro.”

First, a point of clarification: Thunderbird, the email app, is and always will be free. We will never place features that can be delivered through the Thunderbird app behind a paywall. If something can be done directly on your device, it should be. However, there are things that can’t be done on your computer or phone that many people have come to expect from their email suites. This is what we are setting out to solve with our cloud-based services.

All of these new services are (or soon will be) open source software under true open source licenses. That’s how Thunderbird does things and we believe it is our superpower. It is also a major reason we exist: to create open source communication and productivity software that respects our users. Because you can see how it works, you can know that it is doing the right thing.

The Why for offering these services is simple. Thunderbird loses users each day to rich ecosystems that are both products and services, such as Gmail and Office365. These ecosystems have both hard vendor lock-ins (through interoperability issues with 3rd-party clients) and soft lock-ins (through convenience and integration between their clients and services). It is our goal to eventually have a similar offering so that a 100% open source, freedom-respecting alternative ecosystem is available for those who want it. We don’t even care if you use our services with Thunderbird apps, go use them with any mail client. No lock-in, no restrictions – all open standards. That is freedom.

What Are The Services?

Thunderbird Appointment

Appointment is a scheduling tool that allows you to send a link to someone, allowing them to pick a time on your calendar to meet. The repository for Appointment has been public for a while and has seen pretty remarkable development so far. It is currently in a closed Beta and we are letting more users in each day.

Appointment has been developed to make meeting with others easier. We weren’t happy with the existing tools as they were either proprietary or too bloated, so we started building Appointment.

Thunderbird Send

Send is an end-to-end encrypted file sharing service that allows you to upload large files to the service and share links to download those files with others. Many Thunderbird users have expressed interest in the ability to share large files in a privacy-respecting way – and it was a problem we were eager to solve.

Thunderbird Send is the rebirth of Firefox Send – well, kind of. At this point, we have a bit of a Ship of Theseus situation – having rebuilt much of the project to allow for a more direct method of sharing files (from user-to-user without the need to share a link). We opened up the repo to the public earlier this week. So we encourage everyone interested to go and check it out.

Thunderbird Send is currently in Alpha testing, and will move to a closed Beta very soon.

Thunderbird Assist

Assist is an experiment, developed in partnership with Flower AI, a flexible open-source framework for scalable, privacy-preserving federated learning, that will enable users to take advantage of AI features. The hope is that processing can be done on-device where the hardware can support the models; for devices that are not powerful enough to run the language models locally, we are making use of Flower Confidential Remote Compute in order to ensure private remote processing (very similar to Apple’s Private Cloud Compute).

Given some users’ sensitivity to this, these types of features will always be optional and something that users will have to opt into. As a reminder, Thunderbird will never train AI with your data. The repo for Assist is not public yet, but it will be soon.

Thundermail

Thundermail is an email service (with calendars and contacts as well). We want to provide email accounts to those who love Thunderbird, and we believe that we are capable of providing a better service than the other providers out there. Email that aligns with our values of privacy, freedom and respect of our users. No ads, no selling or training AI on your data – just your email and it is your email.

With Thundermail, it is our goal to create a next generation email experience that is completely, 100% open source and built by all of us, our contributors and users. Unlike the other services, there will not be a single repository where this work is done. But we will try to share relevant places to contribute in future posts like this.

The email domain for Thundermail will be Thundermail.com or tb.pro. Additionally, you will be able to bring your own domain on day 1 of the service.

Heading to thundermail.com you will see a sign up page for the beta waitlist. Please join it!

Final Thoughts

Don’t services cost money to run?

You may be thinking: “this all sounds expensive, how will Thunderbird be able to pay for it?” And that’s a great question! Services such as Send are actually quite expensive (storage is costly). So here is the plan: at the beginning, there will be paid subscription plans at a few different tiers. Once we have a sufficiently strong base of paying users to sustainably support our services, we plan to introduce a limited free tier to the public. You see this with other providers: limitations are standard as free email and file sharing are prone to abuse.

It’s also important to highlight again that Thunderbird Pro will be a completely separate offering from the Thunderbird you already use. While Thunderbird and the additional new services may work together and complement each other for those who opt in, they will never replace, compromise, or interfere with the core features or free availability of Thunderbird. Nothing about your current Thunderbird experience will change unless you choose to opt in and sign up with Thunderbird Pro. None of these features will be automatically integrated into Thunderbird desktop or mobile or activated without your knowledge.

The Realization of a Dream

This has been a long time coming. It is my conviction that all of this should have been a part of the Thunderbird universe a decade ago. But it’s better late than never. Just like our Android client has expanded what Thunderbird is (as will our iOS client), so too will these services.

Thunderbird is unique in the world. Our focus on open source, open standards, privacy and respect for our users is something that should be expressed in multiple forms. The absence of Thunderbird web services means that our users must make compromises that are often uncomfortable ones. This is how we correct that.

I hope that all of you will check out this work and share your thoughts and test these things out. What’s exciting is that you can run Send or Appointment today, on your own server. Everything that we do will be out in the open and you can come and help us build it! Together we can create amazing experiences that enhance how we manage our email, calendars, contacts and beyond.

Thank you for being on this journey with us.

Ryan Sipes
Managing Director of Product
Thunderbird

The post Thundermail and Thunderbird Pro Services appeared first on The Thunderbird Blog.

The Rust Programming Language BlogHelp us create a vision for Rust's future

tl;dr: Please take our survey here

Rust turns 10 this year. It's a good time to step back and assess where we are at and to get aligned around where we should be going. Where is Rust succeeding at empowering everyone to build reliable, efficient software (as it says on our webpage)? Where are there opportunities to do better? To that end, we have taken on the goal of authoring a Rust Vision RFC, with the first milestone being to prepare a draft for review at the upcoming Rust All Hands.

Goals and non-goals

The vision RFC has two goals

  • to build a shared understanding of where we are and
  • to identify where we should be going at a high-level.

The vision RFC also has a non-goal, which is to provide specific designs or feature recommendations. We'll have plenty of time to write detailed RFCs for that. The vision RFC will instead focus more on higher-level recommendations and on understanding what people need and want from Rust in various domains.

We hope that by answering the above questions, we will then be able to evolve Rust with more confidence. It will also help Rust users (and would-be users) to understand what Rust is for and where it is going.

Community and technology are both in scope

The scope of the vision RFC is not limited to the technical design of Rust. It will also cover topics like

  • the experience of open-source maintainers and contributors, both for the Rust project and for Rust crates;
  • integrating global Rust communities across the world;
  • and building momentum and core libraries for particular domains, like embedded, CLI, or gamedev.

Gathering data

To answer the questions we have set, we need to gather data - we want to do our best not to speculate. This is going to come in two main formats:

  1. A survey about people's experiences with Rust (see below). Unlike the Annual Rust survey, the questions are open-ended and free-form, and cover somewhat different topics. This also allows us to gather a list of people to potentially interview.
  2. Interviews of people from various backgrounds and domains. In an ideal world, we would interview everyone who wants to be interviewed, but in reality we're going to try to interview as many people as we can to form a diverse and representative set.

While we have some idea of who we want to talk to, we may be missing some! We're hoping that the survey will not only help us connect to the people that we want to talk to, but also potentially help us uncover people we haven't yet thought of. We are currently planning to talk to

  • Rust users, novice to expert;
  • Rust non-users (considering or not);
  • Companies using (or considering) Rust, from startup to enterprise;
  • Global or language-based Rust affinity groups;
  • Domain-specific groups;
  • Crate maintainers, big and small;
  • Project maintainers and contributors, volunteer or professional;
  • Rust Foundation staff.

Our roadmap and timeline

Our current "end goal" is to author and open a vision RFC sometime during the second half of the year, likely in the fall. For this kind of RFC, though, the journey is really more important than the destination. We plan to author several drafts along the way and take feedback, both from Rust community members and from the public at large. The first milestone we are targeting is to prepare an initial report for review at the Rust All Hands in May. To that end, the data gathering process starts now with the survey, but we intend to spend the month of April conducting interviews (and more after that).

How you can help

For starters, fill out our survey here. This survey has three sections

  1. To put the remaining responses into context, the survey asks a few demographic questions to allow us to ensure we are getting good representation across domains, experience, and backgrounds.
  2. It asks a series of questions about your experiences with Rust. As mentioned before, this survey is quite different from the Annual Rust survey. If you have experiences in the context of a company or organization, please feel free to share those (submitting this separately is best)!
  3. It asks for recommendations as to whom we ought to speak to. Please only recommend yourself or people/companies/groups for which you have a specific contact.

Note: The first part of the survey will only be shared publicly in aggregate, the second may be made public directly, and the third section will not be made public. For interviews, we can be more flexible with what information is shared publicly or not.

Of course, other than taking the survey, you can also share it with people. We really want to reach people that may not otherwise see it through our typical channels. So, even better if you can help us do that!

Finally, if you are active in the Rust maintainer community, feel free to join the #vision-doc-2025 channel on Zulip and say hello.

The Rust Programming Language BlogC ABI Changes for `wasm32-unknown-unknown`

The extern "C" ABI for the wasm32-unknown-unknown target has used a non-standard definition since the target's inception: it does not implement the official C ABI of WebAssembly, and it additionally leaks internal implementation details of both the Rust compiler and LLVM. This will change in a future version of the Rust compiler and the official C ABI will be used instead.

This post details some history behind this change and the rationale for why it's being announced here, but you can skip straight to "Am I affected?" as well.

History of wasm32-unknown-unknown's C ABI

When the wasm32-unknown-unknown target was originally added in 2017, not much care was given to the exact definition of the extern "C" ABI. In 2018 an ABI definition was added just for wasm, and the target is still using it to this day. This definition has become more and more problematic over time, and while some issues have been fixed, the root cause still remains.

Notably, this ABI definition does not match the tool-conventions definition of the C ABI, which is the current standard for how WebAssembly toolchains should talk to one another. Originally this non-standard definition was used for all WebAssembly-based targets except Emscripten, but this changed in 2021 when the WASI targets for Rust switched to a corrected ABI definition. Still, however, the non-standard definition remained in use for wasm32-unknown-unknown.

The time has now come to correct this historical mistake and the Rust compiler will soon be using a correct ABI definition for the wasm32-unknown-unknown target. This means, however, that generated WebAssembly binaries will be different than before.

What is a WebAssembly C ABI?

The definition of an ABI answers questions along the lines of:

  • What registers are arguments passed in?
  • What registers are results passed in?
  • How is a 128-bit integer passed as an argument?
  • How is a union passed as a return value?
  • When are parameters passed through memory instead of registers?
  • What is the size and alignment of a type in memory?

For WebAssembly these answers are a little different than on native platforms. For example, WebAssembly does not have physical registers, and functions must all be annotated with a type. What WebAssembly does have is types such as i32, i64, f32, and f64. This means that for WebAssembly an ABI needs to define how to represent values in these types.

This is where the tool-conventions document comes in. That document provides a definition for how to represent primitives in C in the WebAssembly format, and additionally how function signatures in C are mapped to function signatures in WebAssembly. For example, a Rust u32 is represented by a WebAssembly i32 and is passed directly as a function argument. If the Rust structure #[repr(C)] struct Pair(f32, f64) is returned from a function, then a return pointer is used which must have alignment 8 and size of 16 bytes.

In essence, the WebAssembly C ABI is acting as a bridge between C's type system and the WebAssembly type system. This includes details such as in-memory layouts and translations of a C function signature to a WebAssembly function signature.

How is wasm32-unknown-unknown non-standard?

Despite the ABI definition today being non-standard, many aspects of it are still the same as what tool-conventions specifies. For example, size/alignment of types is the same as it is in C. The main difference is how function signatures are calculated. An example (where you can follow along on godbolt) is:

#[repr(C)]
pub struct Pair {
    x: u32,
    y: u32,
}

#[unsafe(no_mangle)]
pub extern "C" fn pair_add(pair: Pair) -> u32 {
    pair.x + pair.y
}

This will generate the following WebAssembly function:

(func $pair_add (param i32 i32) (result i32)
  local.get 1
  local.get 0
  i32.add
)

Notably you can see here that the struct Pair was "splatted" into its two components, so the actual $pair_add function takes two arguments, the x and y fields. The tool-conventions document, however, specifically says that "other struct[s] or union[s]" are passed indirectly, notably through memory. We can see this by compiling this C code:

struct Pair {
    unsigned x;
    unsigned y;
};

unsigned pair_add(struct Pair pair) {
    return pair.x + pair.y;
}

which yields the generated function:

(func (param i32) (result i32)
  local.get 0
  i32.load offset=4
  local.get 0
  i32.load
  i32.add
)

Here we can see, sure enough, that pair is passed in linear memory and this function only has a single argument, not two. This argument is a pointer into linear memory which stores the x and y fields.

The Diplomat project has compiled a much more comprehensive overview than this and it's recommended to check that out if you're curious for an even deeper dive.

Why hasn't this been fixed long ago already?

It was well known in 2021, when WASI's ABI was updated, that wasm32-unknown-unknown's ABI was non-standard too. Why then has the ABI not been fixed like with WASI? The main reason originally was the wasm-bindgen project.

In wasm-bindgen the goal is to make it easy to integrate Rust into a web browser with WebAssembly. JavaScript is used to interact with host APIs and the Rust module itself. Naturally, this communication touches on a lot of ABI details! The problem was that wasm-bindgen relied on the above example, specifically having Pair "splatted" across arguments instead of passed indirectly. The generated JS wouldn't work correctly if the argument was passed in-memory.

When this was discovered, it was found to be significantly difficult to fix wasm-bindgen to not rely on this splatting behavior. At the time it also wasn't thought to be a widespread issue, nor was it costly for the compiler to have a non-standard ABI. Over the years, though, the pressure has mounted. The Rust compiler is carrying an ever-growing list of hacks to work around the non-standard C ABI on wasm32-unknown-unknown, and more projects have started to rely on the "splatting" behavior, raising the risk that unknown projects depend on the non-standard ABI.

In late 2023 the wasm-bindgen project fixed bindings generation to be unaffected by the transition to the standard definition of extern "C". In the following months a future-incompat lint was added to rustc to specifically migrate users of old wasm-bindgen versions to a "fixed" version. This was in anticipation of changing the ABI of wasm32-unknown-unknown once enough time had passed. Since early 2025 users of old wasm-bindgen versions will now receive a hard error asking them to upgrade.

Despite all this heroic effort done by contributors, however, it has now come to light that there are more projects than wasm-bindgen relying on this non-standard ABI definition. Consequently this blog post is intended to serve as a notice to other users on wasm32-unknown-unknown that the ABI break is upcoming and projects may need to be changed.

Am I affected?

If you don't use the wasm32-unknown-unknown target, you are not affected by this change. If you don't use extern "C" on the wasm32-unknown-unknown target, you are also not affected. If you use both, however, you may be affected!

To determine the impact to your project there are a few tools at your disposal:

  • A new future-incompat warning has been added to the Rust compiler which will issue a warning if it detects a signature that will change when the ABI is changed.
  • In 2023 a -Zwasm-c-abi=(legacy|spec) flag was added to the Rust compiler. It defaults to -Zwasm-c-abi=legacy, the non-standard definition. A crate can build with -Zwasm-c-abi=spec to use the standard definition of the C ABI and test whether everything still works.

The best way to test your crate is to compile with nightly-2025-03-27 or later, ensure there are no warnings, and then test your project still works with -Zwasm-c-abi=spec. If all that passes then you're good to go and the upcoming change to the C ABI will not affect your project.
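As a hedged sketch, the check described above might look like the following shell session; it assumes a rustup-managed toolchain, and uses RUSTFLAGS as one common way to pass a rustc flag through cargo:

```shell
# Install the suggested nightly and the wasm target for it.
rustup toolchain install nightly-2025-03-27
rustup target add wasm32-unknown-unknown --toolchain nightly-2025-03-27

# 1. Build normally and look for the future-incompat warning.
cargo +nightly-2025-03-27 build --target wasm32-unknown-unknown

# 2. Rebuild with the standard ABI and re-run your project's tests.
RUSTFLAGS=-Zwasm-c-abi=spec \
    cargo +nightly-2025-03-27 build --target wasm32-unknown-unknown
```

If step 1 produces no warnings and step 2 builds and behaves correctly, the upcoming default change should not affect the crate.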

I'm affected, now what?

So you're using wasm32-unknown-unknown, you're using extern "C", and the nightly compiler is giving you warnings. Additionally your project is broken when compiled with -Zwasm-c-abi=spec. What now?

Unfortunately, this will be a somewhat rough transition period for you. There are a few options at your disposal, but they all have downsides:

  1. Pin your Rust compiler version to the current stable, don't update until the ABI has changed. This means that you won't get any compiler warnings (as old compilers don't warn) and additionally you won't get broken when the ABI changes (as you're not changing compilers). Eventually when you update to a stable compiler with -Zwasm-c-abi=spec as the default you'll have to port your JS or bindings to work with the new ABI.

  2. Update to Rust nightly as your compiler and pass -Zwasm-c-abi=spec. This is front-loading the work required in (1) for your target. You can get your project compatible with -Zwasm-c-abi=spec today. The downside of this approach is that your project will only work with a nightly compiler and -Zwasm-c-abi=spec and you won't be able to use stable until the default is switched.

  3. Update your project to not rely on the non-standard behavior of -Zwasm-c-abi=legacy. This involves, for example, not passing structs-by-value in parameters. You can pass &Pair above, for example, instead of Pair. This is similar to (2) above where the work is done immediately to update a project but has the benefit of continuing to work on stable Rust. The downside of this, however, is that you may not be able to easily change or update your C ABI in some situations.

  4. Update to Rust nightly as your compiler and pass -Zwasm-c-abi=legacy. This will silence compiler warnings for now but be aware that the ABI will still change in the future and the -Zwasm-c-abi=legacy option will be removed entirely. When the -Zwasm-c-abi=legacy option is removed the only option will be the standard C ABI, what -Zwasm-c-abi=spec today enables.
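As a sketch of option (3), the Pair example from earlier can be changed to take a reference: a reference lowers to a single pointer parameter, which the legacy and standard ABI definitions agree on (the no_mangle attribute is incidental here, kept only to mirror the earlier export example):

```rust
// Sketch of option (3): avoid passing an aggregate by value across
// `extern "C"`, which is where the legacy and standard ABIs disagree.
#[repr(C)]
pub struct Pair {
    x: u32,
    y: u32,
}

// Before: `pair: Pair` was "splatted" into two i32 parameters by the
// legacy ABI but passed indirectly (via memory) by the standard one.
// After: a reference lowers to one i32 pointer under either definition.
#[no_mangle]
pub extern "C" fn pair_add(pair: &Pair) -> u32 {
    pair.x + pair.y
}

fn main() {
    let pair = Pair { x: 1, y: 2 };
    assert_eq!(pair_add(&pair), 3);
}
```

The trade-off, as noted above, is that callers (e.g. hand-written JS glue) must now pass a pointer into linear memory instead of two scalar arguments.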

If you have uncertainties, questions, or difficulties, feel free to reach out on the tracking issue for the future-incompat warning or on Zulip.

Timeline of ABI changes

At this time there is no exact timeline for how the default ABI is going to change. It's expected to take on the order of 3-6 months, however, and will look roughly like this:

  • March 2025 (soon): a future-incompat warning will be added to the compiler to warn projects if they're affected by this ABI change.
  • May 15, 2025: the future-incompat warning will reach the stable Rust channel in 1.87.0.
  • Summer 2025 (approximately): the -Zwasm-c-abi flag will be removed from the compiler and the legacy option will be entirely removed.

Exactly when -Zwasm-c-abi is removed will depend on feedback from the community and on how often the future-incompat warning triggers. The hope, though, is that the old legacy ABI behavior can be removed soon after Rust 1.87.0 reaches stable.

Mozilla Addons BlogRethinking Extension Data Consent: Clarity, Consistency, and Control

Hello, extension developers! I’m Alan, the Product Manager at Mozilla responsible for the Firefox add-ons ecosystem.

I wanted to share news about a project we’re working on that will streamline how extension developers implement user data consent experiences.

Firefox extension data collection policies protect our users

Today, our Add-on policies dictate that any extension that collects or transmits user data must create and display a data consent dialog. This consent dialog must clearly state what type of data is being collected and inform the user about the impact of accepting or declining the data collection.

Whilst the policy is a great example of Firefox’s commitment to transparency and protecting user data, it can add significant overhead for developers who want to build on our platform, and it creates a confusing experience for end users who often encounter many different data consent experiences for every extension they install. These custom data consent experiences also increase the time it takes for add-on reviewers to process a new extension version, as they need to verify this custom code is compliant with our policies.

We’re simplifying how extensions get consent to collect data

In 2025 we will launch a new data consent experience for extensions, built into the Firefox add-on installation flow itself. This will dramatically reduce the:

  1. development effort required to be compliant with Firefox data policies
  2. confusion users face when installing extensions, by providing a more consistent experience that gives them more confidence and control over the data collected or transmitted
  3. effort it takes AMO reviewers to evaluate an extension version to ensure it’s compliant with our data collection policies

Developers won’t need to bother with creating their own custom data consent experiences. Soon, developers will simply be able to specify in the manifest what types of data the extension collects/transmits and this will automatically be reflected in a unified consent experience across all Firefox extensions.
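To give a rough idea of what such a declaration might look like, here is a purely hypothetical sketch; the `data_collection_permissions` key and the category names are invented for illustration, since Mozilla has not yet published the actual schema:

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "_comment": "Hypothetical sketch only; the real manifest keys have not been announced.",
  "data_collection_permissions": {
    "required": ["authenticationInformation"],
    "optional": ["technicalAndUsageData"]
  }
}
```

Whatever the final shape, the point is that this declaration replaces custom consent dialogs: Firefox reads it and renders the unified consent UI itself.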

When a user then adds an extension to Firefox, the installation prompt will show what required types of data the extension collects, if any, alongside the list of permissions that the extension requests. Users will be able to opt in or out of providing optional technical and usage data, as well as any other optional data collection the developer requests. As always, the user can then continue adding the extension if they agree to the required permissions and data collection, or cancel the installation. We plan to extend the existing WebExtensions permissions APIs to include these data collection options, making it as easy as possible for developers to adopt this new functionality.

The data collection information will also be displayed on AMO extension listing pages to help Firefox users make informed download decisions. We’re also exploring ways to let developers provide more context about their data practices, if they wish.

We will eventually accept this standardized approach instead of requiring a developer to build custom consent screens, but we acknowledge this will take time as we gather feedback from our community of developers and users. To begin with, we will be adding this functionality to the Nightly version of Firefox for desktop in an upcoming release so that we can gather feedback on how this approach compares with developers' existing consent experiences. We'll be sure to announce further technical details about how to use it here on this blog, so stay tuned!

Help us make this better

We would love our Firefox extension developers to help us shape the future of this feature and we encourage you to test it out in Nightly when it’s released and send us your feedback. Finally, if you’re an extension developer, please help us build this feature by completing a survey about how you’re using permissions and data in your own extensions. This will help us make sure we’re not missing anything important during this stage of design!

Complete the extension permissions and data collection survey

The post Rethinking Extension Data Consent: Clarity, Consistency, and Control appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language BlogAnnouncing Rust 1.86.0

The Rust team is happy to announce a new version of Rust, 1.86.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.86.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.86.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.86.0 stable

Trait upcasting

This release includes a long awaited feature — the ability to upcast trait objects. If a trait has a supertrait you can coerce a reference to said trait object to a reference to a trait object of the supertrait:

trait Trait: Supertrait {}
trait Supertrait {}

fn upcast(x: &dyn Trait) -> &dyn Supertrait {
    x
}

The same would work with any other kind of (smart-)pointer, like Arc<dyn Trait> -> Arc<dyn Supertrait> or *const dyn Trait -> *const dyn Supertrait.

Previously this would have required a workaround in the form of an upcast method in the Trait itself, for example fn as_supertrait(&self) -> &dyn Supertrait, and this would work only for one kind of reference/pointer. Such workarounds are not necessary anymore.
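As a small self-contained sketch of the smart-pointer case (the trait and type names are invented for illustration; this requires Rust 1.86 or later):

```rust
use std::sync::Arc;

trait Supertrait {
    fn name(&self) -> &'static str;
}
trait Trait: Supertrait {}

struct Concrete;
impl Supertrait for Concrete {
    fn name(&self) -> &'static str {
        "concrete"
    }
}
impl Trait for Concrete {}

// Since Rust 1.86 this coercion compiles without any helper method:
fn upcast_arc(x: Arc<dyn Trait>) -> Arc<dyn Supertrait> {
    x
}
```

Before 1.86, `upcast_arc` would have needed a dedicated `as_supertrait`-style method on the trait, and a separate one for each pointer kind.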

Note that this means that raw pointers to trait objects carry a non-trivial invariant: "leaking" a raw pointer to a trait object with an invalid vtable into safe code may lead to undefined behavior. It is not decided yet whether creating such a raw pointer temporarily in well-controlled circumstances causes immediate undefined behavior, so code should refrain from creating such pointers under any conditions (and Miri enforces that).

Trait upcasting may be especially useful with the Any trait, as it allows upcasting your trait object to dyn Any to call Any's downcast methods, without adding any trait methods or using external crates.

use std::any::Any;

trait MyAny: Any {}

impl dyn MyAny {
    fn downcast_ref<T: Any>(&self) -> Option<&T> {
        (self as &dyn Any).downcast_ref()
    }
}

You can learn more about trait upcasting in the Rust reference.

HashMaps and slices now support indexing multiple elements mutably

The borrow checker prevents simultaneous usage of references obtained from repeated calls to get_mut methods. To safely support this pattern the standard library now provides a get_disjoint_mut helper on slices and HashMap to retrieve mutable references to multiple elements simultaneously. See the following example taken from the API docs of slice::get_disjoint_mut:

let v = &mut [1, 2, 3];
if let Ok([a, b]) = v.get_disjoint_mut([0, 2]) {
    *a = 413;
    *b = 612;
}
assert_eq!(v, &[413, 2, 612]);

if let Ok([a, b]) = v.get_disjoint_mut([0..1, 1..3]) {
    a[0] = 8;
    b[0] = 88;
    b[1] = 888;
}
assert_eq!(v, &[8, 88, 888]);

if let Ok([a, b]) = v.get_disjoint_mut([1..=2, 0..=0]) {
    a[0] = 11;
    a[1] = 111;
    b[0] = 1;
}
assert_eq!(v, &[1, 11, 111]);

Allow safe functions to be marked with the #[target_feature] attribute

Previously only unsafe functions could be marked with the #[target_feature] attribute as it is unsound to call such functions without the target feature being enabled. This release stabilizes the target_feature_11 feature, allowing safe functions to be marked with the #[target_feature] attribute.

Safe functions marked with the target feature attribute can only be safely called from other functions marked with the target feature attribute. However, they cannot be passed to functions accepting generics bounded by the Fn* traits and only support being coerced to function pointers inside of functions marked with the target_feature attribute.

Inside of functions not marked with the target feature attribute they can be called inside of an unsafe block, however it is the caller's responsibility to ensure that the target feature is available.

#[target_feature(enable = "avx2")]
fn requires_avx2() {
    // ... snip
}

#[target_feature(enable = "avx2")]
fn safe_callsite() {
    // Calling `requires_avx2` here is safe as `safe_callsite`
    // requires the `avx2` feature itself.
    requires_avx2();
}

fn unsafe_callsite() {
    // Calling `requires_avx2` here is unsafe, as we must
    // ensure that the `avx2` feature is available first.
    if is_x86_feature_detected!("avx2") {
        unsafe { requires_avx2() };
    }
}

You can check the target_feature_11 RFC for more information.

Debug assertions that pointers are non-null when required for soundness

The compiler will now insert debug assertions that a pointer is not null upon non-zero-sized reads and writes, and also when the pointer is reborrowed into a reference. For example, the following code will now produce a non-unwinding panic when debug assertions are enabled:

let _x = *std::ptr::null::<u8>();
let _x = &*std::ptr::null::<u8>();

Trivial examples like this have produced a warning since Rust 1.53.0, but the new runtime check will detect these scenarios regardless of complexity.

These assertions only take place when debug assertions are enabled which means that they must not be relied upon for soundness. This also means that dependencies which have been compiled with debug assertions disabled (e.g. the standard library) will not trigger the assertions even when called by code with debug assertions enabled.

Make missing_abi lint warn by default

Omitting the ABI in extern blocks and functions (e.g. extern {} and extern fn) will now result in a warning (via the missing_abi lint). Omitting the ABI after the extern keyword has always implicitly meant the "C" ABI. It is now recommended to specify the "C" ABI explicitly (e.g. extern "C" {} and extern "C" fn).
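In other words, the fix is purely a spelling change; both declarations below (function names are illustrative) use the same "C" ABI:

```rust
// Implicitly "C": this now triggers the `missing_abi` warning.
extern fn implicit_abi() -> i32 {
    1
}

// Explicitly "C": the recommended spelling; behavior is identical.
extern "C" fn explicit_abi() -> i32 {
    2
}
```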

You can check the Explicit Extern ABIs RFC for more information.

Target deprecation warning for 1.87.0

The tier-2 target i586-pc-windows-msvc will be removed in the next version of Rust, 1.87.0. It differs from the much more popular i686-pc-windows-msvc in that it does not require SSE2 instruction support. However, Windows 10, the minimum OS version required by all Windows targets (except the win7 targets), itself requires SSE2 instructions.

All users currently targeting i586-pc-windows-msvc should migrate to i686-pc-windows-msvc before the 1.87.0 release.

You can check the Major Change Proposal for more information.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.86.0

Many people came together to create Rust 1.86.0. We couldn't have done it without all of you. Thanks!

Mozilla Open Policy & Advocacy BlogNew Mozilla Research: Civil Liability Along the AI Value Chain

What happens when AI systems fail? Who should be held responsible when they cause harm? And how can we ensure that people harmed by AI can seek redress?

READ THE REPORT HERE

As AI is increasingly integrated in products and services across sectors, these questions will only become more pertinent. In the EU, a proposal for an AI Liability Directive (AILD) in 2022 catalyzed debates around this issue. Its recent withdrawal by the European Commission leaves a wide range of open questions lingering, as businesses and consumers will need to navigate fragmented liability rules across the EU’s 27 member states.

To answer these questions, policymakers will need to ask themselves: what does an effective approach to AI and liability look like?

New research published by Mozilla tackles these thorny issues and explores how liability could and should be assigned across AI’s complex and heterogeneous value chain.

Solving AI’s “problem of many hands” 

The report, commissioned from Beatriz Botero Arcila, a professor at Sciences Po Law School and a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society, explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm caused within a value chain in which a variety of companies and actors may contribute to the development of any given AI system. The problem is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, but we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a 3-step analysis to consider how liability should be allocated along the value chain: 1) the choice of liability regime, 2) how liability should be shared amongst actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework.

Why Now?

We have already seen examples of AI causing harm, from biased automated recruitment systems to predictive AI tools used in public services and law enforcement generating faulty outputs. As the number of such examples increases with AI’s diffusion across the economy, affected individuals should have effective ways of seeking redress and justice, as we already argued in our initial response to the AILD proposal in 2022, and businesses should be incentivized to take sufficient measures to prevent harm. At the same time, businesses should not be overburdened with ineffective rules, and they should have legal certainty rather than face a patchwork of varying rules across the different jurisdictions in which they operate. A well-designed, targeted, and robust liability regime for AI could address all of these challenges, and we hope the research released today contributes to a more grounded debate around this issue.

The post New Mozilla Research: Civil Liability Along the AI Value Chain appeared first on Open Policy & Advocacy.

Mozilla Security BlogUpdated GPG key for signing Firefox Releases

The GPG key used to sign the Firefox release manifests is expiring soon, and so we’re going to be switching over to a new signing subkey shortly.

The fingerprint of the primary GPG key is 14F2 6682 D091 6CDD 81E3 7B6D 61B7 B526 D98F 0353. The new signing subkey’s fingerprint is 09BE ED63 F346 2A2D FFAB 3B87 5ECB 6497 C1A2 0256, and it expires 2027-03-13.

The public key can be fetched from the KEY files in the latest Firefox Nightly, from keys.openpgp.org, or from below. It can be used to validate existing releases signed with the current key, or future releases signed with the new key.

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFWpQAQBEAC+9wVlwGLy8ILCybLesuB3KkHHK+Yt1F1PJaI30X448ttGzxCz
PQpH6BoA73uzcTReVjfCFGvM4ij6qVV2SNaTxmNBrL1uVeEUsCuGduDUQMQYRGxR
tWq5rCH48LnltKPamPiEBzrgFL3i5bYEUHO7M0lATEknG7Iaz697K/ssHREZfuuc
B4GNxXMgswZ7GTZO3VBDVEw5GwU3sUvww93TwMC29lIPCux445AxZPKr5sOVEsEn
dUB2oDMsSAoS/dZcl8F4otqfR1pXg618cU06omvq5yguWLDRV327BLmezYK0prD3
P+7qwEp8MTVmxlbkrClS5j5pR47FrJGdyupNKqLzK+7hok5kBxhsdMsdTZLd4tVR
jXf04isVO3iFFf/GKuwscOi1+ZYeB3l3sAqgFUWnjbpbHxfslTmo7BgvmjZvAH5Z
asaewF3wA06biCDJdcSkC9GmFPmN5DS5/Dkjwfj8+dZAttuSKfmQQnypUPaJ2sBu
blnJ6INpvYgsEZjV6CFG1EiDJDPu2Zxap8ep0iRMbBBZnpfZTn7SKAcurDJptxin
CRclTcdOdi1iSZ35LZW0R2FKNnGL33u1IhxU9HRLw3XuljXCOZ84RLn6M+PBc1eZ
suv1TA+Mn111yD3uDv/u/edZ/xeJccF6bYcMvUgRRZh0sgZ0ZT4b0Q6YcQARAQAB
tC9Nb3ppbGxhIFNvZnR3YXJlIFJlbGVhc2VzIDxyZWxlYXNlQG1vemlsbGEuY29t
PokCTwQTAQIAIgUCValABAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AAIQkQ
Ybe1JtmPA1MWIQQU8maC0JFs3YHje21ht7Um2Y8DU1CqD/9Gvr9Xu4uqsjDHRQWS
fI0lqxElmFSRjF0awsPXzM7Q1rxV7dCxik4LeiOmpoVTOmqboo2/x5d938q7uPdY
av2Q+RuNk2CG/LpXku9rgmTE7oszEqQliqKoXajUZ91rw19wrTwYXLgLQvzM3CUA
O+Z0yjjfza2Yc0ZtNN+3sF5VpGsT3Fb14aYZDaNg6yPFvkyxp0B1lS4rwgL3lkeV
QNHeAf0qqF9tBankGj3bgqK/5/YlTM2usb3x46bVBvwX2t4/NnYM5hEnI57inwam
X6SiMJc2e2QmBzAnVrXJETrDL1HOl4GUJ6hC4tL3Yw2d7515BlSyRNkWhhdRp1/q
9t1+ovSe48Ip2X2WF5/VA3ATfQhHKa3p+EkIV98VCMZ14x9KIIeBwjyJyFBuvOEE
IYZHdsAdqf1zYRtD6m6obcBrRiNfoNsYmNY4joDrVupI96ksIxVpepXaZkQhplZ1
mQ4eOdGtToIl1cb/4PibVgFnBgzrR4mQ27h4wzAwWdGweJZ/tuGoqm3C6TwfIgan
ajiPyKqsVFUkRsr9y12EDcfUCUq6D182t/AJ+qE0JIGO73tXTdTbqPTgkyf2etnZ
QQZum3L7w41NvfxZfn+gLrUGDBXwqLjovDJvt8iZTPPyMTzemOHuzf40Iq+9sf5V
9PXZ/5X9+ymE3cTAbAk9MLd9fbkCDQRkVUBzARAA1cD3n5ue0sCcZmqX2FbtIFRs
k39rlGkvuxYABsWBTzr0RbRW7h46VzWbOcU5ZmbJrp/bhgkSYRR3drmzT63yUZ62
dnww6e5LJjGSt19zzcber9BHELjqKqfAfLNsuZ7ZQ5p78c6uiJhe8WpbWogbspxJ
20duraLGmK4Kl23fa3tF0Gng1RLhoFcSVK/WtDZyC+elPKpch1Sru6sw/r8ktfuh
NIRGxdbj/lFHNVOzCXb3MTAqpIynNGMocFFnqWLZLtItphHxPUqVr6LKvc3i3aMl
C6IvLNg0Nu8O088Hg3Ah9tRmXKOshLjYjPeXqM9edqoWWqpzxDTNl6JlFMwP+Oac
MKsyX7Wq+ZXC/o3ygC/oclYUKtiuoGg47fSCN2GS3V2GX2zFlT6SEvEQQb2g5yIS
LX9Q/g9AyJdqtfaLe4Fv6vM4P1xhOUDnjmdoulm3FGkC701ZF7eFhMSRUM9QhkGH
6Yz2TvS4ht6Whg7aVt4ErIoJfj9jzJOp6k9vna5Lmgkj8l19NTiUQ7gk98H3wW4m
RrINxZ2yQD47V/LJ+tUamJc5ac+I0VP7c15xmKEJ2rfGCGhiSWQwZZw7Y2/qoADS
BlI28RlBTuRP2i6AdwyJU+75CzxGzMpr/wBLhZT+fNRV4HHd5dgR3YxajpkzZ6wX
L2aaJhznFEmLBLokOwMAEQEAAYkEcgQYAQoAJhYhBBTyZoLQkWzdgeN7bWG3tSbZ
jwNTBQJkVUBzAhsCBQkDwmcAAkAJEGG3tSbZjwNTwXQgBBkBCgAdFiEErdcHlHlw
Dcrf3VM34207E/PZMnQFAmRVQHMACgkQ4207E/PZMnRgdg/+LAha8Vh1SIVpXzUH
Vdx81kPyxBSaXtOtbBw6u9EiPW+xCUiF/pyn7H1lu+hAodeNFADsXmmONKcBjURV
fwO81s60gLKYBXxpcLLQXrfNOLrYMnokr5FfuI3zZ0AoSnEoS9ufnf/7spjba8Rl
dV1q2krdw1KtbiLq3D8v4E3qRfx5SqCA+eJSavaAh3aBi6lvRlUSZmz8RWwq6gP9
Z4BiTTyFp5jQv1ZKJb5OJ+44A0pS+RvGDRq/bAAUQULLIJVOhiTM74sb/BPmeRYU
S++ee10IFW4bsrKJonCoSQTXQexOpH6AAFXeZDakJfyjTxnl3+AtA4VEp1UJIm0Y
we0h6lT0isSJPVp3RFZRPjq0g+/VniBsvYhLE/70ph9ImU4HXdNumZVqXqawmIDR
wv7NbYjpQ8QnzcP3vJ5XQ4/bNU/xWd1eM2gdpbXI9B46ER7fQcIJRNrawbEbfzuH
y5nINAzrznsg+fAC76w2Omrn547QiY2ey7jy7k79tlCXGXWAt9ikkJ95BCLsOu5O
TxPi4/UUS2en1yDbx5ej7Hh79oEZxzubW1+v5O1+tXgMOWd6ZgXwquq50vs+X4mi
7BKE2b1Mi6Zq2Y+Kw7dAEbYYzhsSA+SRPu5vrJgLTNQmGxxbrSA+lCUvQ8dPywXz
00vKiQwI9uRqtK0LX1BLuHKIhg4OgxAAnmFSZgu7wIsE2kBYwabCSIFJZzHu0lgt
RyYrY8Xh7Pg+V9slIiMGG4SIyq5eUfmU8bXjc4vQkE6KHxsbbzN6gFVLX1KDjxRK
h+/nG/RDtfw/ic7iiXZfgkEqzIVgIrtlDb/DK6ZDMeABnJcZZTJMAC4lWpJGgmnZ
xfAIGmtcUOA0CKGT43suyYET7L7HXd0TM+cJRnbEb7m8OexT9Xqqwezfqoi1MGH2
g8lRKQE4Z2eEFvCiuJnCw547wtpJWEQrGw1eqL3AS8Y051YqblbXLbgf5Oa49yo6
30ehq9OxoLd7+GdWwYBlr/0EzPUWezhdIKKvh1RO+FQGAlzYJ6Pq7BPwvu3dC3YY
dN3Ax/8dj5036Y+mHgDsnmlUk8dlziJ0O3h1fke/W81ABx4ASBktXAf1IweRbbxq
W8OgMhG6xHTeiEjjav7SmlD0XVOxjhI+qBoNPovWlChqONxablBkuh0Jd6kdNiaS
EM9cd60kK3GT/dBMyv0yVhhLci6HQZ+Mf4cbn0KtayzuQLOcdRCN3FF/JNQH3v6L
A1MdRfmJlgC4UdiepBb1uCgtVIPizRuXWDjyjzePZRN/AqaUbEoNBHhIz0nKhQGD
bst4ugIzJWIX+6UokwPC3jvJqQQttccjAy6kXBmxfxyRMB5BEeLY0+qVPyvOxpXE
GnlSHYmdIS65Ag0EZ9KQfQEQAOVIyh0sZPPFLWxoFT0WhPzHw8BhgnCBNdZAh9+S
M0Apq2VcQKSjBjKiterOTtc6EVh0K2ikbGKHQ1SvwNdsYL01cSkJSJORig/1Du1e
h+2nlo8nut7xT//V+2FQyWFCLDeQvLlAs3QHMrMYxTcwNk3qi/z1Z5Q4e6Re2aKR
U00LtSomD6CKWy9nAaqTRNzzdndJwIyCyshX4bbUzAzE7Wbgh/E0/FgBGw87LYIT
qyU6US4lvoUXB+89XxwMxO9I74L118gXEyybz+JN0/w87hXAKnaKjasSvobKE4ma
u8SXqmOO66MxiMaF4Xsmr3oIwo8q9W5d+hA+t225ipq2rZZErmPL44deMCeKmepj
LTa9CoxX2oVpDWGOYFRyJRkLDyyH4O3gCo/5qv4rOTJqPFfKPtrjWFJKGf4P4UD0
GSBX2Q+mOf2XHWsMJE4t8T7jxQCSAQUMwt6M18h1auIqcfkuNvdJhcl2GvJyCMIb
kA3AoiuKaSPgoVCmJdbc6Ao9ydmMUB5Q1rYpMNKCMsuVP9OcX8FoHEVMXOvr0f6W
fj+iHytfO2VTqrw/cqoCyuPoSrgxjs1/cRSz5g9fZ0zrOtQyNB5yJ3YPTG3va1/X
LflrjPcT4ZUkej9nkFpCNWdEZVWD/z3vXBGSV11N9Cdy60QbD4yZvDjV2GQ+dwAF
1o1BABEBAAGJBHIEGAEKACYWIQQU8maC0JFs3YHje21ht7Um2Y8DUwUCZ9KQfQIb
AgUJA8JnAAJACRBht7Um2Y8DU8F0IAQZAQoAHRYhBAm+7WPzRiot/6s7h17LZJfB
ogJWBQJn0pB9AAoJEF7LZJfBogJW9I4QAJbv4Rhb4x6Jl75x2Lfp46/e3fZVDhzU
dLjK8A/acRF7JRBuJVJRaijJ5tngdknmlmbzfqlyzsMWUciAwVJRvijNFDeicet5
zJpBRsXEUAug3iVCD1KlVvLzjCi9Eb9s6xCQjSJ8DZE020s41wdqtb1nziDASAkg
+YH2DzpTEaZVNM39uNDKbaJLYIjKA9MV1YHArqUldFsoofBe4zIZRFyvMD7Gmr7X
m0IWYLrfmnenm1JJYIkvGUeVoP8dEonAVhLVwvwwufobV0qdtMfhZsgFwf1XSHI9
MtD4yAVtBqBTkfFeRLnBjJK/ywYxGqbadt1b57I4ywTQ16oXNrlTF1Su0I8i/fo0
i/9ohNl3opN3LbaEbhT37M4xpy4MgL2Fthddc2gWvF/8TFRaXw7LaLSR7HwO+Y0C
pOtV/Ct4RzKEulY5DpV9b1JQJhpLcjMz+pBDAM3KJuiV6Bcfoz5PZowFy74UmE02
Vzk/oyuI/o4KMihy0UzWQVkOZTTu4eONktgGiZOnRFdiLKVgeLEDXTLdhbuwGS2+
wX3I7lLP9AWpK8Ahc81eUwU6MwdbfwfJ1ELtKaa/JmMjaWkr5aGrp88d8ePR9jYA
47Z2q0esB67pRJVe0McVJlu9GQGq05S7lZKs6mi9dHTzeHwua//IXHMK0s3WhMU7
vGwJ3E2+pTstf8AQALSwkezD3QchPV+5CAUYY7CmMXB6zzIU18wCS61Y8QdDvqmt
WHdMVTp4xT14fS6cvB4uFzacGQJ7CVIWeZgwEFzZiev3dKpnUOGg0WQSwmQQA0JC
g6/qS0AeUPINjhWtNcR7voCqAYeRcjo47UJclD/KKNTCn27btHRaEmpTdTtC6sxi
VElFObb3a9tHXqwLWp8gJ+NZ+6mlrvvH2hm1CAyQTDRYC7nN69QJrKHR8HA3AeR5
figQHLwvmfQlV2erZE17GT+L5t0HxX/HKZCim91PApqa+7iY0eKPAG5iacABrBi9
zzh/ex0ovvuxsBDKUFCSu7HIivnAVrdS/kbO1qJ5I3MBMp0dlQ6PS6LeZIRhxts0
aPPZedsXytoL7kFLISfJ55AuhJpskz+55uviJhp/H3zNBYtQ+dmFmp4RRk/Nvu0z
v6OGtaZy6M5X24Pbzb/OApBML84cEmb3iZie9J2ZYW68/D96sP09x6GItCJlCIdQ
ZkRcwmkQwgtq9sJDw92/vSGeYdRn+oCAxJ14eObCsVwcfJARLt45btEnx+zRCAHA
HQHpV6qTGT6nqg57XuM9iNNdyTGKRU+Iklgb9LRxVAQfbn5uXYb5j2ox5pjxtbXT
f9Lbo7RkygcWSKZPWmYgGsKS6jmXkDa/TyOlPxkbaknpPbYMBztRT4Ju0VU4
=8qIP
-----END PGP PUBLIC KEY BLOCK-----

The post Updated GPG key for signing Firefox Releases appeared first on Mozilla Security Blog.

Mozilla Open Policy & Advocacy BlogMozilla Mornings: Unleashing PETs – Regulating Online Ads for a Privacy-First Future

Our first edition of Mozilla Mornings in 2025 will explore the state of online advertising and what needs to change to ensure a fairer, healthier, and privacy-respecting ads ecosystem where everyone stands to benefit.

The European regulatory landscape for online advertising is at a turning point. Regulators are stepping up enforcement under the GDPR, the DMA, and the DSA, while industry players explore alternatives to cookies. Despite these advancements, online advertising remains an area where users do not experience strong privacy protections, and the withdrawal of the ePrivacy Regulation proposal can only exacerbate these concerns.

The industry’s reliance on invasive tracking, excessive profiling, and opaque data practices makes the current model deeply flawed. At the same time, online advertising remains central to the internet economy, supporting access to information, content creators, and journalism.

This Mozilla Mornings session will bring together policymakers, industry experts and civil society to discuss how online advertising can evolve in a way that benefits both users and businesses.

  • How can we move towards a more privacy-respecting and transparent advertising ecosystem while maintaining the economic sustainability of the open web?
  • How can regulatory reforms, combined with developments in the space of Privacy-Enhancing Technologies (PETs) and Privacy-Preserving Technologies (PPTs), provide a viable alternative to today’s surveillance-based advertising?
  • And what are the key challenges in making this shift at both the policy and technological levels?

To discuss these issues, the panel will welcome:

  • Rob van Eijk, Managing Director at Future of Privacy Forum
  • Svea Windwehr, Associate Director Public Policy at Electronic Frontier Foundation
  • Petra Wikström, Senior Director Public Policy at Schibsted
  • Martin Thomson, Distinguished Engineer at Mozilla

The discussion will also feature a fireside chat with Prof. Dr. Max von Grafenstein from Einstein Center Digital Future at the UdK Berlin.

  • Date: Wednesday 9th April 2025
  • Time: 08:45-10:15 CET
  • Venue: L42, Rue de la Loi 42, 1000 Brussels

To register, click here.

The post Mozilla Mornings: Unleashing PETs – Regulating Online Ads for a Privacy-First Future appeared first on Open Policy & Advocacy.

Firefox Developer ExperienceNetwork override in Firefox DevTools

With Firefox 137 comes a new feature for the Network Monitor: network response override!

Screenshot of Firefox DevTools network panel, showing several requests and a context menu with the "Set Network Override" item selected

Override all the things!

A long, long time ago, when I was building rather serious web applications, one of my worst fears was getting a frontend bug which only occurred with some specific live production data. We didn’t really have source maps back then, so debugging the minified code was already complicated. And if I was lucky enough to understand the issue and write a fix, it was hard to be sure that the fix would fully address it.

Thankfully, all that changed the moment I installed a proxy tool (in my case, Fiddler). I could then pick any request captured on the network and set up a rule to redirect future similar requests to a local file. All of a sudden, minified JS files could be replaced with development versions, API endpoints could be mocked to return reduced test cases, and, most importantly, I could verify a frontend patch against live production data from my machine without having to wait for a 2-month release cycle.

We really wanted to bring this feature to Firefox DevTools, but it was a complex task, and took quite some time and effort. But we are finally there, so let’s take a look at what is available in Firefox 137.

Debugger Local Script Override

You may or may not know, but Firefox DevTools already had an override feature: the Debugger Local Script Override. This functionality was added in Firefox 113 and allows you to override JavaScript files from the Debugger Source Tree.

Screenshot of the Firefox DevTools’ Debugger Source Tree, with a context menu opened on a JS file showing the “Add script override” menu item. (Caption: Debugger Source Tree context menu to “Add script override”)

After opening the context menu on a JS file in the Debugger Source Tree and selecting “Add script override”, you are prompted to create a new local file which initially has the same contents as the file you selected. But since this is a local file, you can modify it. The next time the script is loaded, Firefox will use your local file as the response.

Thanks to Local Script Override, you could already modify and test JavaScript changes on any website, even if you didn’t have direct access to the sources. However, this feature was limited to JS. If you had to modify inline scripts in HTML pages, or the data returned by an API endpoint, you were out of luck.

Introducing Network Response Override

Without going into details, the main reason this feature was limited to the Debugger and to JS files was because our trick to override responses involved redirecting the request to a data URI containing the overridden content. And while this was OK for scripts, this hack didn’t work for HTML files or other resources. But in the meantime, we also worked on overriding responses for WebDriver BiDi and we implemented a solution that worked for any response. After that, it was only a matter of reusing this solution in DevTools and updating the UI to support overriding responses of any request in Firefox DevTools.

The workflow is similar to the Debugger Local Script Override. First you find the request you want to override in the Network Monitor, open the context menu and select “Set Network Override”.

(Caption: Network Panel context menu: Set Network Override)

After that you will also be prompted to create a new local file, which will have the same content as the original response you want to override. Open this file in the editor of your choice to modify it. Back in the DevTools’ Network panel, you should notice that a new column called “Override” has appeared and shows a purple circle on the line where you added the override.

(Caption: Network Panel shows a purple circle for overridden requests)

In case you forgot the path of the file you created, just hover over the override icon and it will display the path again. Note that the Override column cannot be hidden manually. It is automatically displayed if you have any override enabled, and it disappears after all overrides have been removed.

Now that the override is set, go ahead and modify the file locally, reload your tab, and you should see the updated content. You might want to check the “Disable Cache” option in the network panel to make sure the browser sends a new request and your override is used; we have a bug filed to do this automatically. Again, you can use this feature with any request from the network monitor: HTML, CSS, JS, images, etc.

Once you are done with testing you can remove the override by opening the context menu again and selecting “Remove Network Override”.

(Caption: Network Panel context menu: Remove Network Override)

Limitations and next steps

I am very happy to be able to use network overrides directly from Firefox DevTools without any additional tool, but I should still mention some known limitations and issues with the current feature.

First of all, overrides are not persisted after you close DevTools or the tab. In a sense that’s good, because it makes it easy to get rid of all your overrides at once. But if you have a complicated setup that requires overriding several requests, it would be nice to be able to persist some of that configuration.

Also, the Override “status” only indicates that you enabled an override for a given request; it would be great if it also indicated whether the response was actually overridden (bug).

We also currently don’t support network overrides in remote debugging (bug).

In terms of user experience, we might also look into what Chrome DevTools is doing for network overrides, where you can set a folder to store all your network overrides.

Finally, we are open to suggestions on which network debugging tools could be useful to you. For example, it would be nice to allow modifying response headers or delaying responses. But you probably have other ideas, and we would be happy to read them, either in the comments down below or directly on Discourse, Bugzilla, or Element.

In the meantime, thanks for reading and happy overrides!

Firefox Developer ExperienceFirefox DevTools Newsletter — 136

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 136 release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • like Karan Yadav, who fixed the box model dimensions for elements with display: none (#1007374), made it possible to save a single Network request to HAR (#1513984), and fixed the “offline” setting in the throttling menu of the Responsive Design Mode (#1873929).
  • Meike [:mei] added the pt unit in the Fonts panel (#1940009)

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Highlights

  • Show profile details in the network throttling menu items (#1770932)
  • (Screenshot: the throttling menu in the Network Monitor; each throttling profile lists the simulated download, upload and ping values, e.g. GPRS: download 50Kbps, upload 20Kbps, ping 500ms)
  • The JSON Viewer parses the JSON string it displays, which caused trouble when values can’t be accurately represented in JS (for example, JSON.parse('{"large": 1516340399466235648}') returns { large: 1516340399466235600 }). In such cases, we now properly show the source value, as well as a badge showing the JS-parsed value, to avoid any confusion (#1431808)
  • (Screenshot: the Firefox JSON viewer showing a `big: 1516340399466235648` property with a `JS:1516340399466235600` badge next to it, and a tooltip reading “Javascript parsed value”)
  • Links to MDN were added in the Network Monitor for Cross-Origin-* headers (#1943610)
  • We made the “Raw” network response toggle persist: once you check it, every request you click on will show you the raw response (until you uncheck it) (#1555647)
  • We drastically improved the Network Monitor performance, especially when it has a lot of requests (#1942149, #1943339)
  • A couple of issues were fixed in the Inspector Rules view autocomplete (#1184538, #1444772), as well as in the autocomplete for classes with non-alpha characters in the markup view search (#1220387)
  • Firefox 132 added support for CSSNestedDeclarations rules, which changed how declarations set after a nested declaration are handled. Previously, those declarations were “moved up”, before any nested declarations. This could be confusing, and the specification was updated to better align with developers’ expectations. Unfortunately, this caused a few breakages in the Inspector when dealing with nested rules; for example, when adding a declaration, it would appear twice and wouldn’t be placed at the right position. This should now behave correctly (#1946445), and we have a few other fixes coming for nested declarations.
  • We fixed an issue that prevented deleting cookies with a Domain attribute (#1947240)
  • Finally, after many months of hard work, we successfully migrated the Debugger to use CodeMirror 6 (#1942702)
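The JSON Viewer change above is easy to reproduce: JS numbers are IEEE-754 doubles, exact only up to Number.MAX_SAFE_INTEGER (2^53 − 1), so JSON.parse silently rounds larger integers. Here is a minimal sketch; the regex-based workaround is purely illustrative (a real solution would need a proper tokenizer), and the 16-digit threshold is an assumption for this example.

```javascript
// JSON.parse rounds integers beyond Number.MAX_SAFE_INTEGER (2^53 - 1).
const parsed = JSON.parse('{"large": 1516340399466235648}');
console.log(parsed.large);                       // 1516340399466235600
console.log(Number.isSafeInteger(parsed.large)); // false

// Illustrative workaround: quote long digit runs before parsing, then
// revive them as BigInt so no precision is lost. The /\d{16,}/ cutoff
// is a simplifying assumption, not a general-purpose solution.
const raw = '{"large": 1516340399466235648}';
const quoted = raw.replace(/:\s*(\d{16,})/g, ': "$1"');
const safe = JSON.parse(quoted, (key, value) =>
  typeof value === "string" && /^\d{16,}$/.test(value) ? BigInt(value) : value
);
console.log(safe.large); // 1516340399466235648n
```

The badge in the JSON Viewer saves you from being surprised by exactly this rounding.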

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

Full list of fixed bugs in DevTools for the Firefox 136 release:

Firefox Developer Experience: Firefox DevTools Newsletter — 135

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 135 release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Highlights

  • You can now navigate the Debugger call stack panel using the keyboard up/down arrow keys (#1843324)
  • We added an option in the Debugger so you can control whether WebExtensions scripts should be visible in the sources tree (#1413872, #1933194)
  (Screenshot: the Debugger sources tree with a cog-icon button in the top-right corner; the menu anchored to the button includes a checked “Show content script” item, and the tree shows entries for a few WebExtensions, for example uBlock Origin)
  • Did you know that you can set a name for Workers? The threads panel in the Debugger will now use this name when it’s defined for a worker (#1589908)
  • We fixed an issue where the Preview popup wouldn’t show the value for the hovered expression (#1941269)
  • File and data URI requests are now visible in the Network Monitor (#1903496, #972821)
  • The partition key for CHIPS cookies is now displayed in the storage panel (#1895215)
  • You probably know that the WebConsole comes with nice helpers to interact with the page. For example, $$() is kind of an alias for document.querySelectorAll() (except it returns an Array, while the latter returns a NodeList). In Firefox 135, we added a $$$() helper, which returns the elements matching the passed selector, including elements in the shadow DOM.
  • Finally, we improved the stability of the toolbox, especially when your machine is under heavy load (#1918267)

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

Full list of fixed bugs in DevTools for the Firefox 135 release:

Mitchell Baker: Global AI Summit on Africa