
William Durand

Moziversary #8

Today is my eighth Moziversary 🎂 I joined Mozilla as a full-time employee on May 1st, 2018. I previously blogged in 2019, 2020, 2021, 2022, 2023, 2024, and 2025.

You might have come across this built-in data consent thing for extensions in Firefox. I spent a good chunk of last year working on this project, from developing a technical proposal to implementing the feature in Gecko, Firefox for desktop and Firefox for Android.

Speaking of Android, I became the module owner for Fenix::Add-ons, a module for all the code related to add-ons in Firefox for Android (which we call “Fenix” internally). Between the creation of this new module and an ever-solidifying collaboration between the Add-ons and Android teams, the support for extensions in Firefox for Android has a bright future! Having started my Android journey in 2023, this feels like a noteworthy achievement.

Near the end of last year, I moved back to being a full-time AMO engineer to support a team that was down to two engineers. I redesigned the detail page, and started some refactoring on our security scanners, which I had originally created back in 2019 😬

In other news, I joined the AI/LLM/vibe-coding crowd thanks to my colleague Paul, and it took me about a month to get brain-fried. AI fatigue is real, indeed. That said, Claude Code has been somewhat useful to me, and I don’t hate it, but I also don’t love it.

Thank you to everyone in the Add-ons team as well as to all the folks I had the pleasure to work with so far. Cheers!

The Rust Programming Language Blog

Raising the baseline for the `nvptx64-nvidia-cuda` target

The nvptx64-nvidia-cuda target is a compilation target for NVIDIA GPUs. When using this target, the final output is PTX. Two version choices shape that output:

  • a GPU architecture (for example, sm_70, sm_80, …), which determines which GPUs can run the PTX, and
  • a PTX ISA version, which determines which CUDA driver versions can load (and JIT-compile) the PTX.

In Rust 1.97 (scheduled for release on July 9, 2026), the baseline PTX ISA version and GPU architecture for nvptx64-nvidia-cuda will be increased. These changes affect both the Rust compiler (rustc) and related host tooling, and they make it impossible to generate PTX artifacts compatible with older GPUs and older CUDA drivers.

The new minimum supported versions will be:

  • PTX ISA 7.0 (requires a CUDA 11 driver or newer)
  • SM 7.0 (GPUs with compute capability below 7.0 are no longer supported)

Why are the requirements being changed?

Until now, Rust has supported emitting PTX for a wide range of GPU architectures and PTX ISA versions. In practice, several defects existed that could cause valid Rust code to trigger compiler crashes or miscompilations. Raising the baseline addresses these issues and enables more complete support for the remaining supported hardware.

Removing support affects users of the architectures being removed. In this case, the most recent affected GPU architectures date back to 2017 and are no longer actively supported by NVIDIA. We therefore expect the overall impact of this change to be limited.

Maintaining support for these architectures would require substantial effort. These removals let us focus development efforts on improving correctness and performance for currently supported hardware.

What happens when I update to Rust 1.97?

If you need to target a CUDA driver that does not support PTX ISA 7.0 (CUDA 10-era drivers and older), Rust 1.97 will no longer be able to generate PTX compatible with that environment. Similarly, if you need to run on GPUs with compute capability below 7.0 (for example, Maxwell or Pascal), Rust 1.97 will no longer be able to generate compatible PTX for those GPUs.

Assuming you are targeting a CUDA driver compatible with CUDA 11 or newer and using GPUs with compute capability 7.0 or newer:

  • If you do not specify -C target-cpu, the new default will be sm_70, and your build should continue to work (but will no longer be compatible with pre-Volta GPUs).
  • If you currently specify an older -C target-cpu (for example, sm_60), you will need to either:
    • remove that flag and let it default to sm_70, or
    • update it to sm_70 or a newer architecture.
  • If you already specify -C target-cpu=sm_70 (or newer), there should be no behavioral changes from this update.
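
Assuming a recent Rust toolchain with the tier-2 target installed, pinning the new baseline explicitly might look like the following sketch (the flag and target names come from the text above; the build setup itself is illustrative):

```shell
# Install the tier-2 GPU target (ships a prebuilt core library).
rustup target add nvptx64-nvidia-cuda

# Pin the GPU architecture explicitly; with Rust 1.97, omitting
# -C target-cpu gives the same sm_70 default.
RUSTFLAGS="-C target-cpu=sm_70" \
    cargo build --release --target nvptx64-nvidia-cuda
```

The emitted PTX artifact then requires a CUDA 11 (or newer) driver to load, per the PTX ISA 7.0 baseline.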

For more details on building and configuring nvptx64-nvidia-cuda, see the platform support documentation.

Firefox Nightly

Import-ant Updates – These Weeks in Firefox: Issue 201

Highlights

  • Import attributes will be supported for WebExtensions, starting in Firefox 150!
    • This allows WebExtension authors to import CSS module scripts and JSON into their JavaScript modules.
    • Examples:
      • import sheet from "./styles.css" with { type: "css" };
      • import schema from "./policies-schema.json" with { type: "json" };
  • The Web Serial API is now available for testing in Firefox Nightly!
  • Dharma created a new quick action for Firefox Library
    • You can test this out in Firefox Nightly 151 by typing “library” in the URL bar

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Chris Vander Linden
  • Chukwuka Rosemary
  • DrSeed
  • EJiro Oghenekome
  • Frédéric Wang Nélar
  • Itoro James
  • japandi
  • John Iweh
  • jonathancabera
  • Josh Aas
  • Justin Peter
  • Keji Bakare
  • kofoworola shonuyi
  • konyhéa
  • Noble Chinonso
  • Okhuomon Ajayi
  • Oluwatobi
  • Pranjali Srivastava
  • ROSHAAN
  • Sameeksha

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Removed obsolete migration logic that forced distribution language packs to be reinstalled when upgrading from Firefox versions older than 67 – Bug 2000797
    • Thanks to Aloys for contributing the changes needed to clean up this old XPIProvider migration logic
WebExtensions Framework
  • Fixed a regression where WebRTC permission popups were queued and suppressed while an extension popup was open – Bug 1982832
WebExtension APIs
  • Fixed an edge case where tabs.move would revert the order of split view tabs when moving a split view tab to a new window – Bug 2028832
  • Fixed windows API reporting window type normal instead of popup for windows opened via window.open() – Bug 2030631
    • Thanks to Brandon Lucier for contributing this small but very much appreciated fix to the windows WebExtensions API!

DevTools

WebDriver

Lint, Docs and Workflow

New Tab Page

  • You can preview Nova on the New Tab page by setting browser.newtabpage.activity-stream.nova.enabled to true and then opening a few tabs.
    • The browser.nova.enabled pref was just introduced to turn on the Nova design tokens. That’s still very much a work in progress.

Picture-in-Picture

Search and Urlbar

Search
  • Mandy updated the search-config-v2 schema for partnerCodes (2027191)
  • Florian fixed a high frequency intermittent test failure in the search code (2009494)
  • Standard8 fixed a bug with duplicate keywords for search engines, so that we now prefer the default search engine (2024714)
Nova
  • mconley made a rounder search input for about:newtab and about:privatebrowsing (2027144)
  • Drew is continuing his work on making Nova updates to the urlbar (2026859)
New searchbar
Urlbar
  • Dharma also fixed a quick action telemetry error (1955058)
  • James has been working on Adaptive History for Autofill Improvements (2019695, 2019719, 2021036, 2021039, 2021079, 2028730, 2019626)
  • Gijs landed a patch so that we use aria-notify instead of A11yUtils.announce for UrlbarView’s “special” announcements (2026753)
  • Dale added a tooltip to engines in the unified search panel (2028668)
  • Drew has been working on sports suggestions (2025052)
  • James fixed a bug where persisted search was not working for Fx versions prior to 148 (2025933)

Smart Window

Storybook/Reusable Components/Acorn Design System

UX Fundamentals

  • Felt Privacy error pages now support more NSS errors instead of falling through to the legacy page. Updated introductory text for the denied-port-access error. – 2024150
  • Fixed a test in browser_aboutCertError.js that was failing on Linux opt standalone and removed the platform skip. – 2028651
  • Added clock skew detection to the Felt Privacy error pages. When a certificate error is caused by a wrong system clock, the Felt Privacy error pages now show the same dedicated clock-skew message that the legacy error pages had, and helps guide users to correct their system time. – 2025049
  • Fixed misaligned bullet points in the “What can you do?” section of Felt Privacy network error pages, restoring correct visual indentation for that list. – 2028632

The Mozilla Blog

Welcoming Abigail Besdin, Mozilla’s new Chief Operating Officer

We’re delighted that Abigail Besdin has joined Mozilla as our new Chief Operating Officer.

This is an incredibly exciting time for Mozilla. Our focus is to become the world’s most trusted software company by building products that let people use the internet openly, safely, and on their terms. As technology changes rapidly, we are working to strengthen the business foundation and infrastructure that champions our mission. Delivering on that ambition takes more than great products; it demands operational rigor. Abigail will lead this effort, demonstrating how values-driven organizations can scale with discipline, speed, and trust in the AI era.

As COO, Abigail will drive company strategy and oversee Mozilla’s Core Services teams: Business Operations, Data, Infrastructure, IT, Legal, People, Security, and Strategy. These are the functions that enable us to move quickly and scale with focus. Abigail will sharpen how we plan, prioritize, and execute across the company.

Abigail brings more than 18 years of experience building and scaling high-impact platforms. She co-founded Great Jones, a venture-backed property management startup where she raised $30M, reached $10M in ARR, and led a successful acquisition by Roofstock. At Roofstock, she served as Chief of Staff to the CEO — functioning as an internal COO — where she launched new product lines, closed and integrated two acquisitions, and led the company’s strategic planning process. 

Earlier in her career, she spent six years at Skillshare, where she launched the company’s online learning platform and built its growth and content engines from the ground up.

That combination of founder’s instinct and operator’s discipline is exactly what Mozilla needs right now. Abigail will report directly to our CEO and join the executive team.

I’ve learned firsthand that ambitious product goals are only as effective as the operations underpinning them. Mozilla’s mission is as big as it gets, and I’m thrilled to lead our Core Services organization to enable rigorous, smart, and quick decision-making across the business. With a powerful execution engine, we can make sure the best of Mozilla’s mission materializes. 

Abigail Besdin, Chief Operating Officer

Abigail studied Philosophy at NYU, with a focus on Ethics and Mathematical Logic. Born and raised in New York City, she still lives there with her husband and three kids. 

Please join us in welcoming Abigail to Mozilla.

The post Welcoming Abigail Besdin, Mozilla’s new Chief Operating Officer appeared first on The Mozilla Blog.

Firefox Tooling Announcements

Firefox Profiler Deployment (April 28, 2026)

The latest version of the Firefox Profiler is now live! Check out the full changelog below to see what’s changed:

Highlights:

  • [fatadel] Dim non-matching nodes in the stack chart when searching (#5935)

  • [Markus Stange] Always render the CPU-usage-aware activity graph when CPU information is available (#5918)

  • [fatadel] Add CounterDisplayConfig to counters in the processed profile format (#5912)

  • [Nazım Can Altınova] Fallback to javascript highlighting in the source view as a backup (#5936)

  • [fatadel] Replace 4 counter track components with a single generic TrackCounter (#5944)

  • [Ryan Hunt] Add a fullscreen button to the bottom box (#5605)

  • [Nazım Can Altınova] Add “Include idle samples” toggle to the call tree settings (#5968)

  • [Markus Stange] Update the hovered item when panning any viewport canvas (#5903)

  • [Nazım Can Altınova] Fix loading .json.gz profiles from inside zip archives (#5959)

  • [Markus Stange] Replace symbolicator-cli with a profiler-edit node tool (#5965)

Other Changes:

  • [fatadel] Fix arrow panel appearing behind marker tooltips (#5926)

  • [fatadel] Upgrade Node.js from v22 to v24 (#5923)

  • [Markus Stange] Use createStackTableBySkippingDiscarded in focusSelf. (#5916)

  • [Markus Stange] Propagate isJS to symbolicated funcs (#5907)

  • [Nazım Can Altınova] Properly type the return value of _languageExtForPath (#5937)

  • [Nazım Can Altınova] Update typescript eslint dependencies (#5938)

  • [Markus Stange] Modernize more of the transform functions (#5934)

  • [Paul Adenot] Fix extractGeckoLogs for structured Log marker format (bug 2022540) (#5927)

  • [Nazım Can Altınova] Move some profile fetching code into a separate module. (#5939)

  • [Markus Stange] Migrate Home page animation to CSS transitions and remove react-transition-group (#5649)

  • [Nazım Can Altınova] Fix test/lint commands on Windows and fix CI (#5947)

  • [Nazım Can Altınova] Convert profile-logic/js-tracer.tsx to a ts file (#5942)

  • [Markus Stange] Remove panelLayoutGeneration (#5946)

  • [Nazım Can Altınova] Fix eslint-config-prettier silently overriding custom rules (#5955)

  • [Markus Stange] Speed up _computeCallNodeTableHierarchy by keeping siblings ordered by func (#5964)

  • [Nazım Can Altınova] Add dark mode versions of the fullscreen icons (#5972)

  • [fatadel] Use ephemeral port for esbuild’s internal dev server (#5974)

  • [carverdamien] Remove category from LongTaskMarkerPayload (#5975)

Big thanks to our amazing localizers for making this release possible:

  • de: Ger

  • de: Michael Köhler

  • el: Jim Spentzos

  • en-GB: Ian Neal

  • es-CL: ravmn

  • fr: Théo Chevalier

  • ia: Melo46

  • it: Francesco Lodolo [:flod]

  • nl: Mark Heijl

  • pt-BR: Marcelo Ghelman

  • ru: Valery Ledovskoy

  • ru: berry

  • sv-SE: Andreas Pettersson

  • tr: Grk

  • zh-CN: Olvcpr423

  • zh-CN: wxie

  • zh-TW: Pin-guang Chen

Find out more about the Firefox Profiler on profiler.firefox.com! If you have any questions, join the discussion on our Matrix channel!


Mozilla Data YouTube Channel

Outreachy Mentorship: A Retrospective

Will Lachance gives a retrospective on the Glean Dictionary Outreachy internship. See also “Linh’s Outreachy Internship Highlights” at https://www.youtube.com/watch?v=UJdIkHDPgGQ. To learn more about Outreachy, see https://www.outreachy.org/

Mozilla Localization (L10N)

L10n Report: April Edition 2026

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

What’s new or coming up in Firefox desktop

Firefox string deadline changes

Starting with Firefox 149, changes to the developer deadlines for Nightly and Beta have resulted in a slight shift in string translation deadlines, giving us 2 extra days to land strings. Previously, deadlines in Pontoon were set to the Sunday ahead of the final Release Candidate; going forward they will be set to a Tuesday. For example, the upcoming deadline for Firefox 151 is Tuesday, May 12.

If you’re interested in more details on upcoming Firefox releases and milestones, https://whattrainisitnow.com has all the latest information.

UI Refresh

Behind the scenes, a refresh of Firefox’s visual look has been ongoing under the internal name “Nova”. You may have seen some blog posts about this recently, or noticed bugs in Bugzilla with “Nova” in the title. New strings related to these changes will land here and there as development work progresses; however, we don’t expect a large number of string changes to stem from this work.

That being said, these updates also bring some changes in how we communicate directly with our users within Firefox. One of these changes you may have already met: our new mascot, Kit. If you missed the announcement, give it a read here. You may also notice a shift in voice for user-directed messages, with source strings becoming more Genuine, Fiery, and Playful. See this recent update on Firefox’s brand voice for more details.

Settings redesign

Localization for the update to about:settings has been going on for some time (starting early this year), and the bulk of the translation work is behind us at this point. You may see some new strings (particularly around Privacy & Security), but many of the strings are already in a viewable/testable state in Nightly 152. You can check your translations and test out the redesign by typing about:config into your URL bar, proceeding past the warning message, searching for browser.settings-redesign.enabled, and setting the value to true.

What’s new or coming up in mobile

Things have been particularly busy on mobile over the past couple of months. For example, Firefox for Android saw a significant spike in April, with the number of new strings increasing to over 200 compared to fewer than 50 in March — more than eight times the typical monthly volume*.

There are two main drivers behind this increase. First, Firefox for Android is introducing a built-in VPN feature, bringing it in line with the functionality already available in Firefox. Second, both iOS and Android teams are working on a new widget for the upcoming 2026 World Cup, allowing users to follow their team directly from the browser.

Given the short turnaround time for this feature, you will notice that many strings are intentionally kept consistent across platforms — and have started landing on Desktop as well. We’re also pre-landing as many strings as possible ahead of implementation, to give localizers more time to complete translations.

* Did you know that you can track the number of new strings in a project from the Insights page in Pontoon? Check for example Firefox for Android. In the Translation activity chart, click on New source strings in the legend to display this data. Given the difference in scale, it can also help to hide other metrics to make the chart easier to read.

What’s new or coming up in Pontoon

New documentation system. Pontoon now features a brand-new, unified documentation system. This new hub brings together previously scattered resources into a single, streamlined experience, consolidating developer, localizer, and admin documentation from three separate sites into one cohesive platform. By centralizing content, the new system makes it easier to find, navigate, and maintain documentation, ensuring contributors of all roles have quick access to up-to-date and consistent guidance.

Search. You can now set default search options directly in your profile. This allows you to tailor your search without having to adjust filters each time.

The same settings are also applied when using the recently introduced global search page, which brings a major step forward in unifying localization across Mozilla by allowing users to search for strings across all projects and locales in one place. Inspired by Transvision and designed as its successor, the feature integrates deeply with Pontoon, making it easy to filter results, compare translations across languages, and jump directly into the translation workflow.

AI integration. We’ve also refined the prompt used by the LLM-powered translation feature. The goal is not to change how the feature works, but to make its output more consistent and better aligned with the context available in Pontoon. For example, the updated prompt improves how punctuation is handled, reducing variability in suggestions.

In addition, the prompt now includes more contextual data:

  • String ID.
  • Comments, including pinned comments from project managers.
  • Matches from terminology.

This additional context helps the model generate more relevant suggestions. It also represents a first step toward making LLM suggestions more useful, ahead of potential experiments with displaying them by default alongside suggestions from traditional machine translation.

New contributors. We’re also excited to welcome a group of new contributors who have started making an impact on Pontoon over the past few months. MundiaNderi, nishitmistry, dannycolin, first-afk, wassafshahzad, huseynovvusal, and Peacanduck have all contributed valuable improvements across different parts of the project, helping us move faster and improve the overall experience.

A special shoutout goes to Serah (MundiaNderi), who not only made significant contributions but also shared insights into her work in a recent blog post about enhancing comment management in Pontoon—an excellent example of the kind of collaboration and knowledge sharing we love to see in the community.

Newly published localizer facing documentation

As part of the recent documentation update for Pontoon, we’ve reorganized the content around pretranslation to make it clearer and easier to navigate. There is now a dedicated page outlining the criteria required to enable pretranslation for a locale, along with guidance on how to monitor its effectiveness over time (for example, by tracking metrics like acceptance rate or time to review). If you’re a locale manager and want to try pretranslation for your locale, you can request it directly from Pontoon.

Over the past 12 months, we also ran a limited experiment using paid translation agencies for two locales. The goal was to restore the localization level of Firefox for Android in cases where the community was inactive — situations that have since improved, with both communities now active again.
Because volunteer communities remain the foundation of Mozilla’s localization model, we wanted to be transparent about when and why this approach was used, and what it means in practice. This includes clarifying how external support fits within a community-driven ecosystem, where localizers retain ownership and responsibility for quality and direction. You can find more details in this page.

Friends of the Lion

Image by Elio Qoshi

We continue the localizer spotlight series this year.

  • Meet Oliver from China: Firefox localizer, accounting student, former Minecraft translator, and Bocchi the Rock! fan. He talks about starting with a single typo, why Firefox’s independence matters to him, and how the Simplified Chinese community keeps quality high with cross-review and shared responsibility.
  • Marcelo from Argentina needs no introduction to the localization communities. From Phoenix 0.3 to 24 years later, he shares how he got started, what it meant to be part of the Firefox 1.0 release, his experience as an l10n manager, and why using Mozilla products in his own language — Spanish (Argentina) — continues to motivate him.
  • What does 18 years of volunteer localization look like? From discovering Firefox and Linux out of curiosity to leading the Portuguese translation team, Cláudio from Portugal reflects on why localization is a form of digital activism, and how every translated word helps build a more inclusive internet.
  • Baurzhan from Kazakhstan began his localization journey with a simple question: why wasn’t Kazakh available in widely used software? That curiosity grew into a long-term commitment to localization, leading to the successful translation of Firefox and many other open source projects. His work demonstrates the power of perseverance in making technology accessible to all.

If you enjoy the series, please help us identify the localizers you’d like to see featured by filling out this nomination form. If you have stories to share, tell us in your own words.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Servo Blog

March in Servo: keyboard navigation, better debugging, FreeBSD support, and more!

Servo 0.1.0 represents Servo’s biggest month ever, with a record 530 commits and our first ever release on crates.io! For security fixes, see § Security.

With this release Servo becomes more accessible, thanks to tab navigation (@mrobinson, @Loirooriol, #42952, #43019, #43058, #43246, #43267, #43067), keyboard navigation with Alt+Shift and the accesskey attribute (@mrobinson, #43031, #43144, #43434), and keyboard scrolling with Space and Shift+Space (@mrobinson, #43322).

We’ve shipped several new web platform features:

Plus a bunch of new DOM APIs:

servoshell is now installed as servoshell or servoshell.exe, rather than servo or servo.exe (@jschwe, @mrobinson, #42958). --userscripts has been removed for now, but anyone who uses it is welcome to reinstate it as a wrapper around UserContentManager::add_script (@jschwe, #43573). We’ve fixed a bug where link hover status lines are sometimes not legible (@simartin, #43320), and we’re working on getting servoshell signed for macOS to avoid getting blocked by Gatekeeper (@jschwe, #42912).

After a long effort by @valpackett, @dlrobertson, and more recently @nortti0 and @sagudev (#43116, #43134), we can now build Servo for FreeBSD! Note that Servo 0.1.0 still has some issues that need to be worked around, but you can get all the details in #44601.

A great deal of work went into making the crates.io release possible, including renaming libservo to just servo (@jschwe, #43141), making each package self-contained (@jschwe, #43180, #43165), fixing build issues (@delan, @jschwe, #43170, #43458, #43463) and crates.io compliance issues (@jschwe, #43459), configuring package metadata (@jschwe, @StaySafe020, #43078, #43264, #43451, #43457, #43654), and organising our dependency tree (@jschwe, @yezhizhen, @webbeef, @mrobinson, #42916, #43243, #43263, #43516, #43526, #43552, #43615, #43622, #43273, #43092). As a result, you can now take your first step towards embedding Servo in a Rust app with:

$ cargo add servo

This is another big update, so here’s an outline:

Security

In crypto.subtle, the all-zero-secret check in deriveBits() for X25519 and the signature comparison in verify() for HMAC are now done in constant time (@kkoyung, #43775, #43773).
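
The idea behind these fixes can be sketched with a generic constant-time byte comparison (a minimal illustration of the technique, not Servo’s actual code; the function name is made up):

```rust
// Compare two byte strings without short-circuiting on the first
// mismatch, so the running time does not leak where the difference is.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // OR together the XOR of every byte pair; the loop always runs
    // over the full length regardless of the input contents.
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"expected-mac", b"expected-mac"));
    assert!(!constant_time_eq(b"expected-mac", b"attacker-mac"));
    println!("ok");
}
```

A naive == comparison returns as soon as bytes differ, which can let an attacker infer a valid signature prefix byte by byte from timing; the accumulate-then-check pattern avoids that.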

‘Content-Security-Policy’ now handles redirects correctly (@TimvdLippe, #43438), and sends violation reports with the correct blockedURI and referrer (@TimvdLippe, #43367, #43645, #43483). The policy in <meta> now combines with the policy sent in HTTP headers, rather than overriding it (@TimvdLippe, @elomscansio, #43063). When checking nonces, we now reject elements with duplicate attributes (@dyegoaurelio, #43216).

The document containing an <iframe> can no longer access the contents of error pages (@TimvdLippe, #43539), and CSP violations inside an <iframe> are now correctly reported (@TimvdLippe, #43652).

Work in progress

We’ve landed more work towards supporting IndexedDB, under --pref dom_indexeddb_enabled (@arihant2math, @gterzian, @Taym95, @jerensl, #42139, #42727, #43096, #43041, #42451, #43721, #43754, #42786), and towards supporting IntersectionObserver, under --pref dom_intersection_observer_enabled (@stevennovaryo, @mrobinson, #42251).

We’re continuing to implement document.execCommand() for rich text editing (@TimvdLippe, #43177), under --pref dom_exec_command_enabled. ‘beforeinput’ and ‘input’ events are now fired when executing supported and enabled commands (@TimvdLippe, #43087), the ‘defaultParagraphSeparator’ and ‘styleWithCSS’ commands are now supported (@TimvdLippe, #43028), and the ‘delete’ command is partially supported (@TimvdLippe, #43016, #43082).

We’re also working on the Font Loading API (@simonwuelker, #43286), under --pref dom_fontface_enabled. new FontFace() now accepts ArrayBuffer in its source argument (@simonwuelker, #43281).

All of the features above are enabled in servoshell’s experimental mode.

Work on accessibility support for web contents continues under --pref accessibility_enabled. There was a breaking change in the embedding API (@delan, @alice, #43029), and we’ve landed support for “grafting” the accessibility tree of a document into that of its containing webview (@delan, @alice, #43012, #43013, #43556). As a result, when you navigate, separate documents can have separate accessibility trees without complicating the embedder.

<link rel=modulepreload> is now partially supported (@Gae24, #42964), though recursive fetching of descendants is gated by --pref dom_allow_preloading_module_descendants (@Gae24, #43353).

For a long time, Servo has had some support for the Web Bluetooth API under --pref dom_bluetooth_enabled. We’ve recently reworked our implementation to adopt btleplug, the cross-platform Rust-native Bluetooth LE library (@webbeef, #43529, #43581).

We’re now implementing the Web Animations API, starting with AnimationTimeline and DocumentTimeline (@mrobinson, #43711).

We’ve landed more fixes to Servo’s async parser (@simonwuelker, #42930, #42959), under --pref dom_servoparser_async_html_tokenizer_enabled. If we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don’t use document.write(), and even improve the html5ever API for the ecosystem.

For developers

Servo’s DevTools feature now has partial support for inspecting service workers (@CynthiaOketch, #43659), as well as using the navigation controls along the top of the UI (@brentschroeter, @eerii, #43026).

In the Inspector tab, we’ve fixed a bug where the UI stops updating when navigating to a new page (@brentschroeter, #43153).

In the Console tab, you can now evaluate JavaScript in web workers and service workers (@SharanRP, #43361, #43492).

In the Debugger tab, you can now Step In, Step Out, and Step Over (@eerii, @atbrakhi, #42907, #43040, #43042, #43135). We’ve landed partial support for the Scopes panel (@eerii, @atbrakhi, #43166, #43167, #43232), the Call stack panel (@atbrakhi, @eerii, #43015, #43039), and showing you information when hovering over objects, arrays, functions, and other values (@atbrakhi, @eerii, #43319, #43356, #43456, #42996, #42936, #42994).

We’ve fixed some long-outstanding bugs where the DevTools UI may stop responding due to protocol desyncs (@brentschroeter, @eerii, #43230, #43236), or due to messages from multiple Servo threads being interleaved (@brentschroeter, @eerii, #43472).

For developers of Servo itself, mach can be a bit opaque at times. To make mach more transparent and composable, we’ve added mach print-env and mach exec commands (@jschwe, #42888).

We’re also working on a new dev container, which will provide an alternative to our usual procedures for setting up a Servo build environment (@jschwe, @sagudev, #43127, #43131, #43139).

Embedding and automation

Breaking changes:

Removed from our API:

You can now read and write cookies with SiteDataManager::cookies_for_url() and set_cookie_for_url() (@longvatrong111, #43600).

ClipboardDelegate and StringRequest are now exposed to the public API, allowing you to implement custom clipboard delegates (@jdm, @chrisduerr, #43203, #43261). You can pass your custom delegate to WebViewBuilder::clipboard_delegate().

You can now get the EmbedderControlId associated with an InputMethodControl by calling InputMethodControl::id() (@chrisduerr, #43248).

PixelFormat now implements Debug (@chrisduerr, @mrobinson, #43249).

We’ve improved the docs for Servo, ServoBuilder, WebViewBuilder, RenderingContext (@chrisduerr, #43229), EmbedderControlId, EmbedderControlRequest, EmbedderControlResponse, SimpleDialogRequest, AlertResponse, ConfirmResponse, PromptResponse, EmbedderMsg (@mukilan, #43564), ResourceReaderMethods (@jschwe, @mrobinson, #43769), servo::input_events (@mukilan, #43681), and WheelDelta (@yezhizhen, @mrobinson, #43210).

We fixed a deadlock in WebDriver that occurs under heavy use of actions from multiple input sources (@yezhizhen, #43202, #43169, #43262, #43275, #43301). ‘pointerMove’ actions with a ‘duration’ are now smoothly interpolated (@yezhizhen, #42946, #43076).

Add Cookie is now more conformant (@yezhizhen, #43690), which led to Servo developers landing a spec patch. ‘pause’ actions are now slightly more efficient (@yezhizhen, #43014), and we’ve fixed a bug where ‘wheel’ actions fail to interleave with other actions (@yezhizhen, #43126).

More on the web platform

Carets now blink in text fields (@mrobinson, #43128). You can configure or disable blinking carets with --pref editing_caret_blink_time, set to 0 to disable blinking or to a duration in milliseconds. Clicking to move the caret is more forgiving now (@mrobinson, #43238), and moving the caret by a word at a time is more conventional on Windows and Linux, with Ctrl instead of Alt (@mrobinson, #43436). We’ve also fixed a bug where pressing the arrow keys in text fields both moves the caret (good) and scrolls the page (bad), and fixed a bug where the caret fails to render on empty lines (@mrobinson, @freyacodes, #43247, #42218).

Input has improved, with more responsive touchpad scrolling on Linux (@mrobinson, @chrisduerr, #43350). Pointer events and mouse events can now be captured across shadow DOM boundaries (@simonwuelker, #42987), and we’ve now started working towards shadow-DOM-compatible focus (@mrobinson, #43811). Pressing Space or Enter inside text fields no longer causes them to be clicked (@mrobinson, #43343).

The lang attribute is now taken into account when shaping, which is important for the correct rendering of Chinese and Japanese text (@RichardTjokroutomo, @mrobinson, #43447). ‘font-weight’ is now matched more accurately when no available font is an exact match (@shubhamg13, #43125).

Navigation is one of the most complicated parts of HTML: navigating can run some JavaScript that replaces the page, just run some JavaScript, or depending on the response, do nothing at all. <iframe> makes navigation doubly complicated: the document containing an <iframe> can observe and interact with the document inside the <iframe> in various ways, often synchronously. This has been the source of many bugs over the years, but we’ve recently fixed one of those major issues (@jdm, #43496).

javascript: URLs are a massive special case with many quirks, and <iframe> has its own big edge cases.

new Worker() now supports JS modules (@pylbrecht, @Gae24, #40365), and CanvasRenderingContext2D now supports drawing text with Variation Selectors, allowing you to control things like emoji presentation and CJK shaping (@yezhizhen, #43449).

Servo now fires ‘pointerover’, ‘pointerout’, ‘pointerenter’, and ‘pointerleave’ events on web content (@webbeef, #42736), ‘scroll’ events on VisualViewport (@stevennovaryo, #42771), and ‘scrollend’ events on Document, Element, and VisualViewport (@abdelrahman1234567, @mrobinson, #38773). We also fire ‘error’ events when event handler attributes contain syntax errors (@simonwuelker, #43178).

We’ve improved the default appearance of <summary> (@Loirooriol, #43111), <select> (@lukewarlow, #43175), <input type=file> (@lukewarlow, @AlexVasiluta, @lukewarlow, #43498, #43186), and <textarea> and <input type=text> and friends (@mrobinson, #43132), plus ‘::marker’ in mixed LTR/RTL content (@Loirooriol, #43201). <select> also now requires user interaction to open the picker (@SharanRP, #43485).

<form action>, <iframe src>, open(url) on XMLHttpRequest, new EventSource(url), and new Worker(url) now correctly resolve the URL with the page encoding (@SharanRP, @jdm, @jayant911, @Veercodeprog, @sabbCodes, #43521, #43554, #43572, #43537, #43634, #43588).

‘direction’ now works on grid containers (@nicoburns, #42118), SVG images can now be used in ‘border-image’ (@shubhamg13, #42566), ‘linear-gradient()’ now dithers to reduce banding (@Messi002, #43603), ‘letter-spacing’ no longer applies to invisible zero-width formatting characters (@simonwuelker, #42961), and ‘:active’ now matches disabled or non-focusable elements too, as long as they are being clicked (@webbeef, #42935).

DOMContentLoaded timings in PerformanceNavigationTiming are more accurate (@simonwuelker, #43151). PerformancePaintTiming and LargestContentfulPaint are more accurate too, taking <iframe> into account (@shubhamg13, #42149), and checking for and ignoring things like broken images and transparent backgrounds (@shubhamg13, #42833, #42975, #43475).

We’ve improved the conformance of JS modules (@Gae24, #43585), <button command> (@lukewarlow, #42883), <font size> (@shubhamg13, #43103), <link media> and <link type> (@TimvdLippe, #43043), <option selected> (@SharanRP, #43582), <script integrity> and <style integrity> (@Gae24, #42931), EventSource (@mishop-15, #42179), SubtleCrypto (@kkoyung, #42984, #43315, #43533, #43519), Worker (@simonwuelker, #43329), HTMLVideoElement (@shubhamg13, #43341), dataset on Element (@TimvdLippe, #43046), and querySelector() and querySelectorAll() (@simonwuelker, #42991).

We’ve fixed bugs related to error reporting (@simonwuelker, @xZaisk, @yezhizhen, @eyupcanakman, #43191, #43323, #43101, #43560), event loops (@jayant911, #43523), focus (@jakubadamw, #43431), quirks mode (@mrobinson, @Loirooriol, @lukewarlow, #42960, #43368), <iframe> (@TimvdLippe, @jdm, #43539, #43732), the ‘animationstart’ and ‘animationend’ events (@simonwuelker, #43454), the ‘touchmove’ event (@yezhizhen, #42926), CanvasRenderingContext2D (@simonwuelker, #43218), Worker (@bruno-j-nicoletti, #43213), ‘:active’ on <input> (@mrobinson, #43722), ‘overflow: scroll’ on ‘::before’ and ‘::after’ (@stevennovaryo, #43231), ‘position: absolute’ (@yoursanonymous, @Loirooriol, #43084), and <img> and <svg> without width or height attributes (@Loirooriol, #42666). Fixing that last bug led to Servo developers finding two spec issues!

We’ve landed partial support for using CSS counters in ‘list-style-type’ on ‘display: list-item’ and ‘content’ on ‘::marker’, but the counter values themselves are not calculated yet, so all list items still read as “0.” or similar. In any case, you can use a <counter-style-name> or ‘symbols()’ in ‘list-style-type’, and ‘counter()’ and ‘counters()’ in ‘content’ (@Loirooriol, #43111).

We’ve also landed partial support for <marquee> and the HTMLMarqueeElement interface, including basic layout, but the contents are not animated yet (@mrobinson, @lukewarlow, #43520, #43610).

Servo now exposes several attributes that have no direct effect, but are needed for web compatibility (@lukewarlow, #43500, #43499, #43502, #43518):

  • noHref on HTMLAreaElement
  • hreflang, type, charset on HTMLAnchorElement
  • useMap on HTMLInputElement and HTMLObjectElement
  • longDesc on HTMLIFrameElement and HTMLFrameElement

Performance and stability

We’ve fixed sluggish scrolling on long documents like this page on docs.rs (@webbeef, @yezhizhen, #43074, #43138), and reduced the memory usage of BoxFragment by 10% (@stevennovaryo, #43056). about:memory now has a Force GC button (@webbeef, #42798), and no longer reports all processes as content processes in multiprocess mode (@webbeef, #42923).

Web fonts are no longer fetched more than once, and they no longer cause reflow when they fail to load (@minghuaw, #43382, #43595). We’re also working towards better caching for shaping results (@mrobinson, @lukewarlow, @Loirooriol, #43653). Event handler attribute lookup is more efficient now (@Narfinger, #43337), and we’ve made DOM tree walking more efficient in many cases (@Narfinger, #42781, #42978, #43476).

crypto.subtle.encrypt(), decrypt(), sign(), verify(), digest(), importKey(), unwrapKey(), decapsulateKey(), and decapsulateBits() are more efficient now (@kkoyung, #42927), thanks to a recent spec update.

More of Servo now uses cheaper crossbeam channels instead of IPC channels, unless Servo is running in multiprocess mode, or avoids IPC altogether (@Narfinger, @jschwe, @Taym95, #42077, #43309, #42966). We’ve also reduced clones, allocations, conversions, comparisons, and borrow checks in many parts of Servo (@simonwuelker, @kkoyung, @mrobinson, @Narfinger, @yezhizhen, @TG199, #43212, #43055, #43066, #43304, #43452, #43717, #43780, #43088, #43226).

DOM data structures (#[dom_struct]) can refer to one another, with the help of garbage collection. But when DOM objects are being destroyed, those references can become invalid for a brief moment, depending on the order the GC finalizers run in. This can be unsound if those references are accessed, which is a very easy mistake to make if the type has an impl Drop. To help prevent that class of bug, we’re reworking our DOM types so that none of them have #[dom_struct] and impl Drop at the same time (@willypuzzle, #42937, #42982, #43018, #43071, #43222, #43288, #43544, #43563, #43631).
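
The hazard can be sketched in safe Rust (a hypothetical illustration; `Window`, `Document`, and `peer_name` here are invented stand-ins, not Servo's actual DOM types). A `Weak` reference makes the finalization-order problem observable without the unsoundness: once the peer has been destroyed, the upgrade fails instead of dangling.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Invented stand-ins for two DOM structs that reference each other.
struct Document {
    name: String,
}

struct Window {
    // Weak reference to the peer object, mirroring a cross-object
    // reference between GC-managed DOM types.
    document: RefCell<Weak<Document>>,
}

impl Window {
    // What a finalizer would have to do: check whether the peer is
    // still alive. With raw GC pointers there is no such check, so an
    // access after the peer's finalizer ran would be unsound.
    fn peer_name(&self) -> Option<String> {
        self.document.borrow().upgrade().map(|d| d.name.clone())
    }
}

fn demo() -> Option<String> {
    let win = Window { document: RefCell::new(Weak::new()) };
    {
        let doc = Rc::new(Document { name: "index.html".into() });
        *win.document.borrow_mut() = Rc::downgrade(&doc);
        // `doc` is destroyed at the end of this scope, before `win`.
    }
    // If this ran inside `impl Drop for Window`, a non-Weak reference
    // would now be dangling; Weak lets us observe the death safely.
    win.peer_name()
}

fn main() {
    assert_eq!(demo(), None);
}
```

With raw GC pointers there is no `upgrade()` to fail gracefully, which is why keeping `impl Drop` off `#[dom_struct]` types removes the temptation to touch a possibly-dead peer.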

We’ve fixed a crash caused by an IPC resource leak when making many requests over time (@yezhizhen, #43381), and some bugs found by ThreadSanitizer and --debug-mozjs (@jdm, @Loirooriol, #42976, #42963, #43487). We’ve also fixed crashes in CanvasRenderingContext2D (@yezhizhen, #43449), Crypto (@rogerkorantenng, #43501), devtools (@simonwuelker, #43133), event handler attributes (@simonwuelker, #43178), Promise (@Narfinger, @jdm, #43470), and WebDriver (@Tarmil, @yezhizhen, #42739, #43381).

We’ve continued our long-running effort to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@Narfinger, @Gae24, @Uiniel, @TimvdLippe, @yezhizhen, @sagudev, @PuercoPop, @pylbrecht, @arabson99, @jayant911, #42957, #43108, #43130, #43215, #43183, #43219, #43245, #43220, #43252, #43268, #43184, #43277, #43278, #43284, #43302, #43312, #43348, #43327, #43362, #43365, #43383, #43432, #43259, #43439, #43473, #43481, #43480, #43479, #43525, #43535, #43543, #43549, #43570, #43571, #43569, #43579, #43584, #43657, #43713).
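
The idea behind this effort can be sketched with a toy example (hypothetical types, not Servo's actual code): a `RefCell` borrow is checked at runtime and fails with a panic, while restructuring the same state to take `&mut self` turns any overlapping borrow into a compile-time error.

```rust
use std::cell::RefCell;

// Dynamic borrows: checked at runtime, so a conflicting borrow
// elsewhere on the call stack becomes a panic in production.
struct DynCounter {
    value: RefCell<u32>,
}

impl DynCounter {
    fn bump(&self) {
        // If any caller still holds a borrow() of `value`, this
        // borrow_mut() panics with "already borrowed".
        *self.value.borrow_mut() += 1;
    }
}

// The static alternative: the same invariant, enforced by the
// type system through exclusive references.
struct StaticCounter {
    value: u32,
}

impl StaticCounter {
    fn bump(&mut self) {
        // An overlapping mutable access is now rejected by the
        // compiler instead of failing at runtime.
        self.value += 1;
    }
}

fn main() {
    let dynamic = DynCounter { value: RefCell::new(0) };
    dynamic.bump();
    assert_eq!(*dynamic.value.borrow(), 1);

    let mut fixed = StaticCounter { value: 0 };
    fixed.bump();
    assert_eq!(fixed.value, 1);
}
```

The trade-off is that `&mut self` must be threaded through call sites, which is exactly the kind of refactoring the PRs above perform.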

Thanks to a wide range of people, many of whom were contributing to Servo for their first time, we’ve also landed a bunch of architectural improvements (@elomscansio, @mukilan, #43646), cleanups (@simartin, @SharanRP, @TG199, @sabbCodes, @niyabits, @eerii, @atbrakhi, #43276, #43285, #43532, #43778, #43771, #43566, #43567, #43587, #43140, #43316), and refactors (@sabbCodes, @arabson99, @jayant911, @StaySafe020, @saydmateen, @eerii, @TimvdLippe, @elomscansio, @CynthiaOketch, #43614, #43641, #43619, #43642, #43623, #43656, #43644, #43672, #43664, #43676, #43684, #43679, #43678, #43655, #43675, #43731, #43729, #43728, #43740, #43751, #43748, #43747, #43752, #43745, #43724, #43723, #43765, #43767, #43181, #43269, #43270, #43279, #43437, #43597, #43607, #43602, #43616, #43609, #43612, #43647, #43651, #43662, #43714, #43774).

Donations

Thanks again for your generous support! We are now receiving 7167 USD/month (+2.6% from February) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.

Servo is also on thanks.dev, and already 37 GitHub users (+5 from February) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. If you’re interested in this kind of sponsorship, please contact us at join@servo.org.

Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

The Rust Programming Language Blog

Announcing Google Summer of Code 2026 selected projects

As previously announced, the Rust Project is participating in Google Summer of Code (GSoC) 2026. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open source.

A few months ago, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories before GSoC officially started!

The applicants prepared and submitted their project proposals by the end of March. This year, we received 96 proposals, which is a 50% increase from last year. We are glad that there was again a lot of interest in our projects! Like many other GSoC organizations this year, we struggled somewhat with AI-generated proposals and low-quality contributions produced by AI agents, but it stayed manageable.

GSoC requires us to produce an ordered list of the best proposals, which is always challenging, as Rust is a big project with many priorities. Our mentors examined the submitted proposals and evaluated them based on their prior interactions with the given applicant, their contributions so far, the quality of the proposal itself, but also the importance of the proposed project for the Rust Project and its wider community. We also had to take mentor bandwidth and availability into account. Unfortunately, we had to cancel some projects due to several mentors losing their funding for Rust work in the past few weeks.

As is usual in GSoC, even though some project topics received multiple proposals1, we had to pick only one proposal per project topic. We also had to choose between proposals targeting different work to avoid overloading a single mentor with multiple projects. In the end, we narrowed the list down to the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.

Selected projects

On the 30th of April, Google announced the accepted projects. We are happy to share that 13 Rust Project proposals were accepted by Google for Google Summer of Code 2026. That is a lot of projects! We are really happy and excited about GSoC 2026!

Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

Congratulations to all applicants whose project was selected! Our mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

We are excited to mentor three contributors who already experienced GSoC with us in the previous year. Welcome back, Kei, Marcelo and Shourya!

We would like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors that would like to work on projects that would help the Rust Project and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.

There is a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!

The accepted GSoC projects will run for several months. After GSoC 2026 finishes (in autumn of 2026), we will publish a blog post in which we will summarize the outcome of the accepted projects.

Firefox Tooling Announcements

MozPhab 2.14.0 Released

Bugs resolved in Moz-Phab 2.14.0:

  • bug 2032102 Parallelize revision creation and diff property calls in submit for faster stack submission

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

This Week In Rust

This Week in Rust 649

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is dithr, a buffer-first dithering and halftoning library.

Thanks to pbkx for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Relatively few perf-affecting changes this week. The perf report is more positive than what users will actually experience, due to the -Zincremental-verify-ich related improvements in #155473.

Triage done by @simulacrum. Revision range: 9ab01ae5..ca9a134e

1 Regression, 5 Improvements, 3 Mixed; 3 of them in rollups. 32 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust, Cargo, Compiler Team (MCPs only), Rust RFCs, Unsafe Code Guidelines

No Items entered Final Comment Period this week for Language Reference, Language Team or Leadership Council. Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-04-29 - 2026-05-27 🩀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Sometimes, the best projects are the ones you never thought you could build.

– Chris Dell on his blog

Another week bereft of any quote suggestions. llogiq is glad to have found this anyway.

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Data YouTube Channel

Glean Dictionary Looker Demo

A quick demonstration of the Glean Dictionary's new integration with Mozilla's instance of Looker.

Firefox Tooling Announcements

MozPhab 2.13.1 Released

Bugs resolved in Moz-Phab 2.13.1:

  • bug 2033054 Add AGENTS.md/CLAUDE.md for moz-phab
  • bug 2034269 reorg --force aborts on abandoned-revision ghost links in stackGraph

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

Jonathan Almeida

Rebase all WIPs to the latest upstream head

A small pet-peeve with fetching the latest main in jujutsu is that I then want to move all my WIP patches onto the new head. The nice part is that jj doesn't make me fix the conflicts immediately!

The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.

jj git fetch
# assuming 'z' is the rev-id of the previous main.
jj rebase -s "mutable()&z+" -d main

I haven't learnt how to make aliases accept params with it yet, so this will have to do for now.

Update: After a bit of searching, it seems that today this is only possible by wrapping it in a shell script. Based on the examples in the jj documentation an alias would look like this:

Update 2: After some months of usage across multiple repositories, I've found it better to be clear with the destination since main, trunk or others can be tracked with a combination of repository aliases too.

[aliases]
# Update all revs to the latest main; point to the previous one.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "$2"
""", ""]

You can use this to rebase all your WIPs like so:

$ jj hoist <prev_main> <current_main>

If my previous main revision was kz, this is what I would end up doing:

$ jj fetch origin
$ jj hoist kz main@origin

Thunderbird Blog

Thunderbird Pro April 2026 Update

One of the most exciting aspects of bringing Thunderbird Pro to life is the opportunity to build an email service from Thunderbird together with our community, giving users the control and freedom they expect without relying on third party email service providers.

Over the past few months, we’ve been checking in with our community through quick surveys, and the feedback is clear: people care most about Thundermail. We’re listening and working to deliver what you expect as quickly as possible, focusing our resources on building a great Thundermail experience first, with Appointment and Send as power features alongside that foundation. We’re also adjusting the initial price to better align with your expectations.

We’ll be sending out the first wave of Early Bird Beta invites next month. If you haven’t already, please join the waitlist HERE and keep an eye on your inbox. We’re excited to get Thundermail into your hands and continue building it together.

Latest Thundermail Developments

Our work right now is focused on making Thundermail reliable, easy to set up, and ensuring a smooth onboarding experience with an intuitive design, both visually and functionally.

Sign-in and Setup

A new connection flow is in development that will make it much easier to add a Thundermail account to Thunderbird, including options like QR code setup and deeper integration within the app. We have also fixed a range of sign-in issues, improved domain setup, and made it easier to move from account creation to actually using the service.

The account dashboard has been updated for a cleaner look, smoother onboarding, and easier access to the key details our users care about. Settings like app passwords, custom domains, and aliases are now front and center when you first sign in.

Infrastructure

On the infrastructure side, we’re continuing to improve stability and performance. This includes completed work on upgrading Stalwart to strengthen spam detection so legitimate emails are far less likely to end up in spam, along with improvements to how we monitor the services so problems are easier to catch and less likely to affect users. Everyday actions like archiving and managing settings should feel more intuitive for users, and the web app, add-ons, and related services now work together more smoothly.

April Onward

  • Next up for the account experience is better alias and custom-domain handling, and even better integration between Thunderbird and the web account flow.
  • The dashboard is also getting another round of refinement so settings, account details, and subscription information are easier to understand at a glance.
  • Thundermail work continues by focusing on reliability and security, including aliases, delivery, transport security, and admin access controls.
  • There will also be a final layer of polish across the entire experience between the web app, add-on, and desktop flows.
  • Finally: Webmail is moving up our priority list. While still early, development is actively progressing and we’re aiming to bring a usable experience much sooner than originally planned.

Progress on Appointment and Send

While Thundermail is our primary focus, work on other Thunderbird Pro services is continuing.

For Appointment, we’ve made progress on reliability and backend performance, including improvements to how calendar tasks are processed and fixes to event handling. Our priorities heading up to the release are also focused on reliability, with refinement on calendar connections, event syncing, Zoom access, and a simpler first-time setup flow.

For Send, we’ve made substantial visual improvements so that it feels like a more natural part of Thunderbird Pro. We’ve also made a number of security improvements and are continuing to evaluate infrastructure choices to ensure long term reliability. Our priorities for Send in the coming months include better encryption-key handling and clearer password-protected downloads.

What’s Next

We’ll begin inviting people from the waitlist into the Early Bird beta shortly. If you haven’t signed up yet, now’s the time. Your feedback will directly shape how Thundermail evolves.


For more up to date news, check out our services roadmap at: https://roadmaps.thunderbird.net/services/

If you want to get involved in the direction of these features or want to contribute ideas to the team, you can visit https://ideas.tb.pro/.

The post Thunderbird Pro April 2026 Update appeared first on The Thunderbird Blog.

Firefox Nightly

VPN, Split View, and Other Goodies – These Weeks in Firefox: Issue 200!

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Chris Vander Linden
  • EJiro Oghenekome
  • Keji Bakare
  ‱ konyhéa
  • Noble Chinonso
  • Pranjali Srivastava
  • Sameeksha

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  ‱ Fixed a long-standing issue where extension paths stored in extensions.json (and addonStartup.json.lz4) became incorrect after restoring a Firefox profile to a different location, which caused all previously installed add-ons to fail to load – Bug 1429838

DevTools

WebDriver

Lint, Docs and Workflow

New Tab Page

Search and Urlbar

  ‱ Dao and Moritz worked on follow-ups and fixed several bugs related to the new search bar implementation, which shipped in Fx 149 (2026248, 2023611, 2025746, 2022159, 2022809, 2023656, 2023141)
  • Mandy added a new `hasBeenUsed` attribute to search engines to enable better messaging system targeting (2024078)
  • Mandy also added a new telemetry category for when search mode is activated via a feature callout (2018806)
  • Drew and Daisuke are continuing their work on making Nova updates to the urlbar (2019165)
  • Dale fixed two bugs related to the new Unified Trust Panel (2019928, 2013044)
  • Florian used Claude to fix intermittent bugs related to urlbar and search telemetry (2009767, 2024301)
  • James fixed a high frequency intermittent test failure in the urlbar code (2001962)
  • Marco fixed a high frequency intermittent test failure in the places database code (1981199)

Tab Groups

DJ is adding the ability to copy all URLs from the tabs in a tab group (bug 1984338 – Add a way to share/send/copy all tabs urls from a given tab group).

Jonathan Almeida

My Firefox for Android local build environment

The Firefox for Android app has always had a complicated build process - we're cramming a complex cross-platform browser engine and all the related components that make it work on Android into one package. In its current form, it lives in the Firefox mono-repo at mozilla-central (now mozilla-firefox using the git repository).

I wanted to document my "artifact-mode" environment here since it's worked quite successfully for me for many years with minor changes.

NOTE: After a fresh clone of the mono-repo, don't forget to first run ./mach bootstrap and follow the prompts.

mozconfig

My mozconfig below is enabled for artifact mode, but occasionally I switch between various configurations. You can see those commented out, with these few extra notes:

  ‱ I like to separate out my objdirs to avoid cache pollution between the different build types. I think you can get away without specifying this; an objdir for your build type and arch will be generated.
  ‱ sccache speeds up the native portion of full builds after the first slow one, but it's hit or miss if you fetch from the remote repository and don't rebuild as often.
  ‱ I don't care to manually run the clobber step, and I don't truly appreciate why it isn't always done automatically.
  • Emilio's mozconfig manager looks like a better solution, however my needs are very simple.
# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android
# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
# ac_add_options --target=i686
# For newer phones or Apple silicon
ac_add_options --target=aarch64
# For x86_64 emulators (and x86_64 devices, which are even less common):
# ac_add_options --target=x86_64
# sccache will significantly speed up your builds by caching
# compilation results. The Firefox build system will download
# sccache automatically.
# This only works for non-artifact builds.
#ac_add_options --with-ccache=sccache
# Enable artifact builds; manager-mode.
ac_add_options --enable-artifact-builds
# Write build artifacts to..
## Full build dir
#mk_add_options MOZ_OBJDIR=./objdir-droid
#mk_add_options MOZ_OBJDIR=./objdir-desktop
## Artifact builds
mk_add_options MOZ_OBJDIR=./objdir-frontend
# Automatic clobbering; don't ask me.
mk_add_options AUTOCLOBBER=1

JAVA_HOME

Sometimes you might find yourself needing to run a (non-mach) command in the terminal. Those typically need to invoke parts of Gradle for an Android build, so it's best to make sure they use the same JDK as the bootstrapped one in the mono-repo. This avoids weird build errors where something that compiles in one place isn't working in another (like Android Studio).

The JDKs are typically located in ~/.mozbuild/jdk/, and if you've been around for ~6 months, you end up with multiple versions after every JDK bump:

$ ls -l ~/.mozbuild/jdk/
drwxr-xr-x@ - jalmeida 15 Apr  2025 jdk-17.0.15+6
drwxr-xr-x@ - jalmeida 15 Jul  2025 jdk-17.0.16+8
drwxr-xr-x@ - jalmeida 21 Oct  2025 jdk-17.0.17+10
drwxr-xr-x@ - jalmeida 20 Jan 09:00 jdk-17.0.18+8
drwxr-xr-x@ - jalmeida 26 Feb 15:04 mozboot

You could point the latest JDK to one stable location, or you can be lazy like me and pick the latest version as your JAVA_HOME by adding this to your shell's RC file:

export JAVA_HOME="$(ls -1dr -- $HOME/.mozbuild/jdk/jdk-* | head -n 1)/Contents/Home"

Android Studio

Similarly for Android Studio, let's do the same so that the environment is identical. Head to Settings | Build, Execution, Deployment | Build Tools | Gradle, and ensure that the "Gradle JDK" path is set to JAVA_HOME.

Lately, the default seems to follow GRADLE_LOCAL_JAVA_HOME, a property we can't easily override, so we have to set this manually ourselves.

Using the same Android SDK also helps speed things up and avoids source confusion. You can typically find it in ~/.mozbuild/android-sdk-macosx and update it at Settings | Languages & Frameworks | Android SDK.

Debugging

This section is for miscellaneous build error situations that come up. Assuming mach build works and there are no known Android build changes, my solution has typically always been the same.

For example, the other day I fetched another engineer's patch to test out locally1 as part of a review, and I hit the error message below:

Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
   > Internal compiler error. See log for more details
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to generate a Build Scan (powered by Develocity).
> Get more help at https://help.gradle.org.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:135)
	at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:288)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:133)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:121)
	at org.gradle.api.internal.tasks.execution.ProblemsTaskPathTrackingTaskExecuter.execute(ProblemsTaskPathTrackingTaskExecuter.java:41)
	at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
	at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
	at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
	at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
	at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
	at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
	at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
	at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
	at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
	at org.gradle.execution.plan.DefaultNodeExecutor.executeLocalTaskNode(DefaultNodeExecutor.java:55)
	at org.gradle.execution.plan.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:34)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:339)
	at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:84)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:339)
	at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:328)
	at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
	at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
	at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: org.gradle.workers.internal.DefaultWorkerExecutor$WorkExecutionException: A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
	at org.gradle.workers.internal.DefaultWorkerExecutor$WorkItemExecution.waitForCompletion(DefaultWorkerExecutor.java:289)
	at org.gradle.internal.work.DefaultAsyncWorkTracker.lambda$waitForItemsAndGatherFailures$2(DefaultAsyncWorkTracker.java:130)
	at org.gradle.internal.Factories$1.create(Factories.java:33)
	at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withoutLocks$2(DefaultWorkerLeaseService.java:344)
	at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:342)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:326)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLock(DefaultWorkerLeaseService.java:331)
	at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:126)
	at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:92)
	at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForAll(DefaultAsyncWorkTracker.java:78)
	at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForCompletion(DefaultAsyncWorkTracker.java:66)
	at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:260)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
	at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:237)
	at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:220)
	at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:203)
	at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:170)
	at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
	at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
	at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
	at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
	at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
	at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
	at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
	at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
	at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
	at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
	at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
	at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
	at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
	at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
	at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
	at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
	at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
	at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:39)
	at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:28)
	at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
	at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
	at org.gradle.internal.Either$Right.fold(Either.java:176)
	at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
	at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
	at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
	at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
	at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
	at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
	at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
	at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
	at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
	at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
	at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
	at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
	at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
	at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
	at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
	at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:64)
	at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:35)
	at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:62)
	at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:40)
	at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:76)
	at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:45)
	at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:136)
	at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:66)
	at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:38)
	at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
	at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
	at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
	at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
	at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
	at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
	at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:297)
	at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
	at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
	at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
	at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
	at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
	at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
	at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
	at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
	at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
	at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
	at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:31)
	at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:64)
	at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:132)
	... 30 more
Caused by: org.jetbrains.kotlin.gradle.tasks.FailedCompilationException: Internal compiler error. See log for more details
	at org.jetbrains.kotlin.gradle.tasks.TasksUtilsKt.throwExceptionIfCompilationFailed(tasksUtils.kt:22)
	at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:112)
	at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:75)
	at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:68)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:64)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:61)
	at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:61)
	at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
	at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
	at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
	at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
	at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
	at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:58)
	at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:176)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:194)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:127)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:169)
	at org.gradle.internal.Factories$1.create(Factories.java:33)
	at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withLocksAcquired$0(DefaultWorkerLeaseService.java:269)
	at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withLocksAcquired(DefaultWorkerLeaseService.java:267)
	at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:259)
	at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:127)
	at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:132)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:164)
	at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:133)
	... 2 more

The full trace was long and didn't seem related to a code failure in the module itself. So I employed the solution, which is always the same:

  1. ./mach build
  2. In Android Studio, File > Sync Project with Gradle Files.

Yup, that's all. Very simple and boring.


1

With Jujutsu, this is the moz-phab command I use which has made it easier to manage review patches: moz-phab patch <patch-id> --no-branch --apply-to main@origin


Mozilla Addons Blog

WebExtensions API Changes (Firefox 149-152)

Intro

Hey everyone, we’ve been working on some exciting changes, and want to share them with you.

But first, let me introduce myself. I am Christos, the new Sr. Developer Relations engineer in Add-ons, and I’m excited to write my first post on the Add-ons engineering blog.

Deprecations and changes

To start, I’m looking at a few features that are changing or going away: content script execution in extension contexts, file access coupled to host permissions, and the automatic CSS filter applied to pageAction SVG icons.

executeScript / registerContentScript in moz-extension documents

Deprecated: Firefox 149  Removed: Firefox 152

Deprecated starting in Firefox Nightly 149 and scheduled for removal in Firefox 152, the scripting and tabs injection APIs will no longer inject into moz-extension:// documents. This change brings the API in line with broader efforts to discourage string-based code execution in extension contexts, alongside the default CSP that restricts script-src to extension URLs and the removal of remote-source allowlisting in MV3 (bug 1581608).

Firefox emits a warning when this restriction is hit, so you can spot and address any use of this pattern in your extensions. Here is an example of the warning message:

Content Script execution in moz-extension document has been deprecated and it has been blocked

To work around this change, you can:

  • Import scripts directly in the extension page’s HTML.
  • Use module imports or standard <script> tags in extension documents.
  • Restructure code to avoid dynamic code execution patterns. An extension can run code in its documents dynamically by registering a runtime.onMessage listener in the document’s script, then sending a message to trigger execution of the required code.
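
The message-dispatch pattern from the last bullet can be sketched as follows. This is a minimal sketch with hypothetical action names; instead of executing strings of code in the extension page, the page registers the code it may need up front and the background script triggers it by name:

```javascript
// Named actions the extension page is willing to run (hypothetical examples).
const actions = {
  refreshList: () => "list refreshed",
  clearCache: () => "cache cleared",
};

// Pure dispatcher, kept separate from the browser wiring so it is easy to test.
function dispatch(message) {
  const handler = actions[message && message.action];
  return handler ? handler() : undefined;
}

// In the moz-extension document's script, wire the dispatcher to messaging:
//   browser.runtime.onMessage.addListener((msg) => Promise.resolve(dispatch(msg)));
// From the background script, trigger an action instead of injecting code:
//   browser.runtime.sendMessage({ action: "refreshList" });
```

The dispatcher only runs code that was statically declared, which is exactly what the CSP changes are nudging extensions toward.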

File access becomes opt-in

Target: Firefox 152

Extensions requesting file://*/ or <all_urls> currently trigger the “Access your data for all websites” permission message and, when granted, can run content scripts on file: URLs. Starting in Firefox 152, file access requires an explicit opt-in for all extensions, including those already installed (bug 2034168).

pageAction SVG icon CSS filter (automatic color scheme)

Removed: Firefox 152

Firefox has been automatically applying a greyscale and brightness CSS filter to pageAction (address bar button) SVG icons when a dark theme is active. This was intended to improve contrast, but it actually reduced contrast for multi-color icons and caused poor visibility for some extensions, such as Firefox Multi-Account Containers.

For icons that adapt to light and dark color schemes, you can now use @media (prefers-color-scheme: dark) in the SVG icon, or use the MV3 action manifest key and specify theme_icons.

Here is an example of how to use a `prefers-color-scheme` media query in a pageAction SVG icon to control how the icon adapts to dark mode:

manifest.json

"page_action": {
  "default_icon": "icons/icon.svg"
}

icons/icon.svg

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
  <style>
    :root { color: black; }
    @media (prefers-color-scheme: dark) { :root { color: white; } }
  </style>
  <path fill="currentColor" d="M2 2h12v12H2z"/>
</svg>

Use of prefers-color-scheme media queries is also allowed in MV2 browserAction and MV3 action SVG icons as an alternative to the theme_icons manifest properties.
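
For comparison, here is a sketch of the theme_icons alternative mentioned above (file names and sizes are illustrative; per MDN, the light variant is displayed when a dark theme is active, and the dark variant when a light theme is active):

```json
"action": {
  "default_icon": "icons/icon.svg",
  "theme_icons": [
    { "light": "icons/icon-light.svg", "dark": "icons/icon-dark.svg", "size": 16 },
    { "light": "icons/icon-light.svg", "dark": "icons/icon-dark.svg", "size": 32 }
  ]
}
```
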

There are additional examples at the Mozilla Developer Network on how to test your extension pageAction icon with and without the implicit CSS filter.


New APIs & Capabilities

Now to the new stuff. Here, you get the ability to use popups without user activation, initial support for the new tab split view feature, and WebAuthn RP ID assertion.

openPopup without user activation (Firefox Desktop)

Available: Firefox 149 Desktop

action.openPopup() and browserAction.openPopup() no longer require a user gesture on Firefox Desktop. You can open your extension’s popup programmatically, e.g., in response to a native-messaging event, an alarm, or a background-script condition.

This change is part of the ongoing cross-browser alignment work in the WebExtensions Community Group to harmonize popup behavior across engines.

Example

Before (Firefox < 149): must hang off a user gesture, e.g., a context menu click:

browser.menus.create({
  id: "nudge",
  title: "Open popup",
  contexts: ["all"],
});

browser.menus.onClicked.addListener((info) => {
  if (info.menuItemId === "nudge") {
    browser.action.openPopup(); // user clicked the menu → allowed
  }
});

 

After (Firefox ≄ 149) — same intent, no user gesture needed, fires from a timer:

browser.alarms.create("nudge", { delayInMinutes: 1 });

browser.alarms.onAlarm.addListener((alarm) => {
  if (alarm.name === "nudge") {
    browser.action.openPopup(); // works without a click
  }
});

It’s the same call with the same result, but only the trigger changes from a user-action handler to any background event.


splitViewId in the tabs API

Available: Firefox 149

Firefox 149 introduces a new read-only splitViewId property on the tabs.Tab object to expose Firefox’s new split view feature (where two tabs are displayed side-by-side in one window). Split views are treated as one unit, and WebExtensions APIs treat them the same way.

In Firefox 150, extensions can swap tabs within a split view. This update also resolves a confusing issue where reversing the tab order through the user interface reported the tabs.onMoved event with inaccurate values. Additionally, Firefox introduces unsplitting behavior for extensions: when tabs.move() is called with split-view tabs positioned non-adjacently in the array, Firefox now removes the split view rather than keeping the tabs locked together.

Here is an example of using the new splitViewId property.

// Log whenever a tab joins or leaves a split view.
browser.tabs.onUpdated.addListener((tabId, changeInfo) => {
  if (!("splitViewId" in changeInfo)) return;

  if (changeInfo.splitViewId === browser.tabs.SPLIT_VIEW_ID_NONE) {
    console.log(`Tab ${tabId} left its split view`);
  } else {
    console.log(`Tab ${tabId} joined split view ${changeInfo.splitViewId}`);
  }
});
// Firefox desktop also supports a filter to limit onUpdated events:
// }, { properties: ["splitViewId"] });
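
Building on that, here is a sketch of grouping a window's tabs by split view. The tab objects stand in for tabs.Tab results from browser.tabs.query(), and SPLIT_VIEW_ID_NONE is a placeholder for the real browser.tabs.SPLIT_VIEW_ID_NONE sentinel (its actual value is not assumed here):

```javascript
const SPLIT_VIEW_ID_NONE = -1; // placeholder value, not the real constant

// Build a Map of splitViewId -> [tabId, ...], skipping tabs not in a split view.
function groupBySplitView(tabs) {
  const groups = new Map();
  for (const tab of tabs) {
    if (tab.splitViewId === SPLIT_VIEW_ID_NONE) continue;
    if (!groups.has(tab.splitViewId)) groups.set(tab.splitViewId, []);
    groups.get(tab.splitViewId).push(tab.id);
  }
  return groups;
}

// In an extension, feed it real tabs:
//   const tabs = await browser.tabs.query({ currentWindow: true });
//   const splits = groupBySplitView(tabs);
```
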

 

Firefox 151 enables extensions to move split views in tab groups. More improvements are coming, such as the ability to create split views from extensions (bug 2016928).

 

WebAuthn RP ID assertion

Available: Firefox 150

Previously, web extensions couldn’t use WebAuthn credentials registered on their company’s website or mobile apps. When extensions tried to set a custom Relying Party ID (RP ID) in navigator.credentials.create() or navigator.credentials.get(), Firefox rejected it with “SecurityError: The operation is insecure.”

With Firefox 150, extensions can now assert a WebAuthn RP ID for any domain they have host permissions for when calling navigator.credentials.create() or navigator.credentials.get(). This applies to both the publicKey.rp.id field during credential creation and the publicKey.rpId field during authentication.

A critical detail for server-side validation: When relying party servers validate credentials created by extensions, they must account for different origin formats across browsers. In Chrome, the origin follows the pattern chrome-extension://extensionid, which matches the extension’s location.origin. Firefox 150 introduces a new stable origin format: moz-extension://hash, where the hash is a 64-character SHA-256 representation of the extension ID (using characters a-p to represent hex values). Importantly, this hash-based origin is the same for all users, unlike Firefox’s existing UUID-based moz-extension:// URLs used for extension documents.

To extract the origin from a credential for validation:

let clientData = JSON.parse(new TextDecoder().decode(
  publicKeyCredential.response.clientDataJSON
));
console.log(clientData.origin);

For more details, see Use Web Authn API in web extensions on MDN.

Summary

Change | Type | Firefox Version
executeScript / registerContentScript in moz-extension documents | Deprecation → Removal | Deprecated 149, removed 152
File access opt-in | Change | 152
pageAction SVG CSS filter | Removal | 152
openPopup() without user activation | New capability | 149 (Desktop only)
splitViewId on tabs.Tab | New API | 149
WebAuthn RP ID assertion | New capability | 150

Need more?

You can always find detailed information about WebExtensions API and Add-ons updates in the MDN release notes, e.g., for Firefox 149 and Firefox 150.

For any help or questions navigating any changes, don’t hesitate to post your topic on the Add-ons Discourse.

 

The post WebExtensions API Changes (Firefox 149-152) appeared first on Mozilla Add-ons Community Blog.

Thunderbird Blog

Mobile Progress Report – April 2026

It’s been a very busy couple of months as we’ve reworked processes & priorities and established a roadmap for both iOS and Android.  We are determining how best we can coordinate with the community, and think that our roadmap for the year has a good balance of fixes and features.  Today, I want to talk about our contributors and pull requests, Notifications in the Android app, progress in the iOS app, and an overview of our roadmap for both apps this year.

Contributors & Pull Requests

We are so grateful for the support and code contributions of our many community members, whether they are building items on our roadmap, improving the user experience, or, of course, translating. As we work through our roadmap priorities, we will make time to review PRs and discuss them weekly, prioritizing those that help solve issues and bugs or align with our roadmap items. Please be patient with our Pull Request pipeline; when working with the community, we typically try to react very quickly.

Roadmap

For Android, we’ve chosen the items on our roadmap because we think these will be the highest-impact features and bring the most value to everyone.  Our focus this year is to simplify and modernize the Android codebase.  This means reworking some of the architecture. This will be super helpful for us to move more quickly and will reduce complex bugs. The app has an older codebase, and like many older ones, it has its challenges. We have three full-time Android engineers and several community contributors, and we hope to better position ourselves to move quickly.  At a high level, Android is focusing on the rearchitecture, a better Message List experience, and Message Reader screens.  We are also simplifying how users can connect to Thunder Mail as we open it up.

Notifications

One thing at the top of my mind right now is Push Notifications, specifically changes Google has made to background processes that affect our Notifications. We are looking into what we can do to solve this, so know that it has become a top priority for us. I’ve been asked, “Why is it so hard for Thunderbird to get Push Notifications right?”, so I want to speak to some of the challenges we face. Most apps’ notifications are triggered by their own web services, which send notifications through Apple or Google, who pass them on to users. But email is different. An email client typically doesn’t own its backend services; other companies do (Microsoft, Google, Hotmail, Yahoo, Proton, etc.). Each can have its own flavor of IMAP – how we retrieve the emails – and no standard Push Notification implementation.

So we have a workaround: polling those providers every X minutes to ask for new emails and triggering local notifications – but we can’t hook into a native Push Notification pipeline the way your banking app can, for example. This is how our IMAP implementation works. The JMAP implementation (think modern email protocols) has a push mechanism in place that we can more readily consume. Another challenge is the battery impact of how often we poll the providers, and we need specific permissions from Google to run this process in the background. Those permissions changed recently, which is why Notifications are having issues.
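In spirit, the IMAP-side workaround boils down to a loop like the one below. This is a simplified, illustrative Python sketch (Thunderbird’s actual implementation is Android code with proper background scheduling and permissions); all names here are hypothetical.

```python
import time

def poll_for_mail(fetch_new_messages, notify, interval_minutes=15, cycles=1):
    """Illustrative polling loop: ask the provider for new mail every
    X minutes and raise a *local* notification for each message found.
    There is no server-side push involved at any point."""
    for _ in range(cycles):
        # One poll: whatever the provider returned since last time.
        for message in fetch_new_messages():
            notify(message)
        # Wait until the next poll. Polling more often means fresher
        # notifications but more battery drain -- the tradeoff the
        # post describes.
        time.sleep(interval_minutes * 60)
```

The `cycles` argument exists only so the sketch terminates; a real poller runs until the OS stops it.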

I’ve simplified some pieces here, but hopefully that gives you an idea of the complexity and tradeoffs we are working with. With all of that said, this is very important to us: it is our users’ biggest pain point and has become our biggest need for a fix. I’ll give an update on where it sits within the roadmap in the next progress report, once we have explored which solutions we can provide.

iOS Progress

For the iOS roadmap, everything is moving along well. We have been wrapping up most of our IMAP and SMTP tickets, and we are moving into the Account Data pieces to manage accounts and authorizations. We will also have a new member joining us in the next couple of weeks. That will add some speed, but we’ve already made good progress on the inner pieces – what I consider the most complex parts. As we move to more standard mobile backend pieces and more standard UI, we leave the world of unknown unknowns and will pick up steam.

At a high level, our iOS roadmap is to build out these screens:

  • Account Setup and Drawer
  • Messages: List, Reader, Compose, Search

And have these pieces in place: 

  • IMAP
  • SMTP
  • MIME
  • OAuth
  • Encryption
  • Email Composition 

And our target for the iOS release is still the end of the year.

Thank You!

Again we are so grateful to you, our community, for your support, and we are excited for this next quarter as we start to see the fruits of our labors.  

The post Mobile Progress Report – April 2026 appeared first on The Thunderbird Blog.

Wil Clouser

Firefox Sync adds official PostgreSQL support

The Sync Storage team has landed official PostgreSQL support for Firefox Sync.

Historically, Sync has only officially supported Google Spanner as a storage backend, with MySQL working unofficially. That has been a pretty high barrier to entry for people self-hosting their own services.

With PostgreSQL support, we hope to make self-hosting more approachable and continue supporting people who want the agency of hosting their data on infrastructure they control.

There is updated documentation for running it with Docker, including a one-shot docker compose setup:

https://mozilla-services.github.io/syncstorage-rs/how-to/how-to-run-with-docker.html

Mozilla is publishing Docker images for the PostgreSQL build here:

https://ghcr.io/mozilla-services/syncstorage-rs/syncstorage-rs-postgres

If you’ve been interested in self-hosting Sync but were put off by the storage requirements, take another look. If you run into bugs or have feedback, please file issues here:

https://github.com/mozilla-services/syncstorage-rs/issues

Jonathan Almeida

Gmail filters based on X-Phabricator-Stamps header

I want Phabricator emails to have a Gmail label so I can see which patches I reviewed that later received follow-up comments from other folks.

This is useful when I review a patch and then need to respond in a more timely manner to discussions in comment threads that I've created.

It's difficult to do this today the way I do with Bugzilla Gmail filters, because there are fewer identifiers that Gmail's fairly simple filter parameters can match on.

Today I learnt that these Phabricator emails carry an X-Phabricator-Stamps header that lets you identify yourself as a reviewer on a patch. Using that information, I wrote the Google Apps Script below, which runs every minute and avoids re-processing the same email.

A couple of variables are defined at the top, and some console.logs are sprinkled around for my own debugging.

Code
var REVIEWER = "jonalmeida";
var LABEL_NAME = "Phabricator/Comments";
var BODY_MATCH = "commented on this revision.";
var SENDER = "phabricator@mozilla.com";

/**
 * Run once manually to install the per-minute trigger.
 */
function install() {
  uninstall();
  ScriptApp.newTrigger('processInbox')
    .timeBased()
    .everyMinutes(1)
    .create();
}

/**
 * Run once manually to remove the trigger.
 */
function uninstall() {
  ScriptApp.getProjectTriggers().forEach(function(t) {
    ScriptApp.deleteTrigger(t);
  });
  PropertiesService.getScriptProperties().deleteProperty('lastRun');
}

/**
 * Every run, we try to avoid processing the same email twice because
 * there is no API trigger to run a script on every new email received.
 */
function processInbox() {
  var props = PropertiesService.getScriptProperties();
  var lastRun = parseInt(props.getProperty('lastRun') || '0');
  var now = Math.floor(Date.now() / 1000);

  // On first run, look back 2 minutes
  if (lastRun === 0) {
    lastRun = now - 120;
  }

  var label = GmailApp.getUserLabelByName(LABEL_NAME);
  if (!label) {
    label = GmailApp.createLabel(LABEL_NAME);
  }

  console.log("last run: " + lastRun);
  var threads = GmailApp.search("from:" + SENDER + " after:" + lastRun);
  console.log("threads to process: " + threads.length);

  for (var i = 0; i < threads.length; i++) {
    var thread = threads[i];
    var messages = thread.getMessages();
    console.log("messages to process: " + messages.length);
    for (var j = 0; j < messages.length; j++) {
      if (hasReviewerStamp(messages[j])) {
        thread.addLabel(label);
        console.log(thread.getFirstMessageSubject());
        break;
      }
    }
  }

  props.setProperty('lastRun', String(now));
}

function hasReviewerStamp(message) {
  var raw = message.getRawContent();
  var match = raw.match(/^X-Phabricator-Stamps:\s*(.+)$/m);
  if (!match) {
    return false;
  }
  var stamps = match[1].trim().split(/\s+/);
  return (stamps.indexOf("reviewer(@" + REVIEWER + ")") > -1) && raw.indexOf(BODY_MATCH) > -1;
}

/**
 * For debugging - see the list of labels you can search which
 * differs from what is used in the Gmail UI filter.
 */
function listAllLabels() {
  console.log("All labels");
  var labels = GmailApp.getUserLabels();
  for (var i = 0; i < labels.length; i++) {
    console.log(labels[i].getName());
  }
}

Mozilla Data YouTube Channel

Towards a Telemetry Taxonomy

Leif Oines talks about an effort to define a more complete taxonomy for Mozilla's data.

Frederik Braun

Multiple things can be true at the same time

Dear reader. I am sure you have read a lot of blog posts about AI in the past weeks or months. And now I too am writing. Mostly to help me cope with what my kind of hacker people would call out as hypocrisy or cognitive dissonance.

There are various 


This Week In Rust

This Week in Rust 648

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
  • RustConf 2026 schedule and registration are live! Early bird ticket prices are available through April 29.
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is farben, a German-named macro crate for terminal colors.

Thanks to Nik Revenco for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • EuroRust | 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
  • NDC Techtown | 2026-05-03 | Kongsberg, Norway | 2026-09-21 - 2026-09-23

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

542 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

This week was a bit all over the place, but the largest regressions were either already fixed or are being investigated. There were also a couple of nice perf wins.

Triage done by @Kobzol. Revision range: dab8d9d1..9ab01ae5

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)     0.7%    [0.2%, 4.6%]      39
Regressions ❌ (secondary)   0.6%    [0.2%, 1.4%]      31
Improvements ✅ (primary)    -0.6%   [-4.8%, -0.1%]    70
Improvements ✅ (secondary)  -0.7%   [-4.1%, -0.0%]    93
All ❌✅ (primary)           -0.1%   [-4.8%, 4.6%]     109

3 Regressions, 4 Improvements, 6 Mixed; 4 of them in rollups. 41 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Cargo Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Language Reference, Language Team, Leadership Council, Rust RFCs or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-04-22 - 2026-05-20 🩀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

in Rust we pay the price of composition up-front

– Nadieril on rust zulip

Thanks to Nadieril for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Performance Blog

Telemetry Alerting: How It Works

We recently released the telemetry alerting beta and announced it in a blog post here! This post dives into the details of how it works across Treeherder and MozDetect. At a high level, MozDetect handles the change point detection for telemetry probes, while Treeherder handles storing the detections and producing the emails and bugs for them.

MozDetect

All of the existing, and any future, change point detection techniques used for telemetry alerting are built in MozDetect. Having these live outside of Treeherder gives a low barrier to entry for adding new features and testing existing ones, without having to set up everything needed for alerting in Treeherder. It’s built as a Python module that is run through uv, which makes it very easy for anyone to run the code thanks to uv’s excellent Python version and dependency management. How to work with the code in this repository is outlined here, along with how to add your own techniques to it (note that access to mozdata through gcloud is required for this).

Detectors are split into two parts: (i) a detector that performs a comparison between two groups, and (ii) a detector that performs detection on a time series (using the detector from (i)). Our default detection technique, called cdf_squared, lives here. The timeseries_detector_name is the name used to access the detector from the telemetry probe side through the change_detection_technique field. The only method that absolutely needs to be implemented is detect_changes, and it must return a list of Detection objects; these objects contain all the information necessary for producing an alert. There is also an optional_detection_info field that can contain extras such as attachments to be added to Bugzilla bugs, and an additional_data field that can hold JSON data for storage in the DB. The cumulative distribution function (CDF) squared technique uses these to store the CDF before and after the detection, along with a graph of the two as an attachment for the Bugzilla bug.
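Following that contract, a custom time-series detector might look roughly like this. Everything beyond the described detect_changes method, Detection object, and optional_detection_info field is an illustrative assumption, not the real MozDetect API:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """Stand-in for MozDetect's Detection object (fields assumed)."""
    location: int                # index in the time series where the change was found
    direction: str               # "up" or "down"
    optional_detection_info: dict = field(default_factory=dict)

class JumpDetector:
    """Toy detector: flags any day-over-day jump larger than a threshold."""
    timeseries_detector_name = "jump_detector"  # name referenced by change_detection_technique

    def __init__(self, threshold=10):
        self.threshold = threshold

    def detect_changes(self, timeseries):
        # The one required method: return a list of Detection objects.
        detections = []
        for i in range(1, len(timeseries)):
            delta = timeseries[i] - timeseries[i - 1]
            if abs(delta) > self.threshold:
                detections.append(Detection(
                    location=i,
                    direction="up" if delta > 0 else "down",
                ))
        return detections
```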

Example of a CDF graph that is provided in bugs.

CDF Squared Detection Technique

The CDF squared technique detects changes in time-series histogram data by comparing CDFs between consecutive windows. It takes two CDFs, each representing the distribution of measurements over a time window, and computes the sum of squared differences between the two CDFs at each bin. The sign of the summed linear difference is then used to assign a direction to the squared difference score so that the output encodes whether the distribution moved to higher values (right shift) or lower values (left shift).
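As a rough sketch of that per-window comparison (an illustration of the description above, not the MozDetect code; the sign convention is an assumption), the score might be computed like this:

```python
import numpy as np

def cdf_squared_score(cdf_before, cdf_after):
    """Compare two CDFs evaluated over the same bins.

    Returns the sum of squared per-bin differences, signed by the
    summed linear difference. A distribution that shifts toward higher
    values has a lower CDF at each bin, so under this (assumed)
    convention a negative score indicates a right shift and a positive
    score a left shift.
    """
    diff = np.asarray(cdf_after, dtype=float) - np.asarray(cdf_before, dtype=float)
    magnitude = float(np.sum(diff ** 2))
    direction = float(np.sign(np.sum(diff)))
    return direction * magnitude
```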

For time-series detection, this base comparison is applied in a rolling fashion across the full history of data. Each day’s 7-day smoothed CDF is compared against the next one, producing a continuous signal of squared CDF differences over time. A Butterworth low-pass filter is then applied to that signal to remove high-frequency noise while preserving genuine trend changes. Finally, scipy’s find_peaks function is used to locate statistically significant peaks and valleys in the filtered signal using a dynamic alert threshold based on the historical data. Information is extracted from those areas and then used to build the detection information needed for the alert generation process.
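The rolling stage could be sketched with scipy's butter, filtfilt, and find_peaks as below. The filter order, cutoff, and threshold formula here are illustrative assumptions, not the production settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_change_points(scores, cutoff=0.2, threshold_sigmas=2.0):
    """Locate significant peaks and valleys in a rolling CDF-difference signal.

    `scores` is the time series of (signed) squared CDF differences.
    `cutoff` is the Butterworth low-pass cutoff as a fraction of the
    Nyquist frequency; `threshold_sigmas` scales an assumed dynamic
    threshold derived from the historical signal.
    """
    scores = np.asarray(scores, dtype=float)

    # Zero-phase Butterworth low-pass filtering removes high-frequency
    # noise while preserving genuine trend changes.
    b, a = butter(N=2, Wn=cutoff)
    smoothed = filtfilt(b, a, scores)

    # Dynamic alert threshold based on the historical data
    # (this particular formula is an illustrative assumption).
    height = np.mean(np.abs(smoothed)) + threshold_sigmas * np.std(smoothed)

    # Peaks are upward changes; valleys are found by negating the signal.
    peaks, _ = find_peaks(smoothed, height=height)
    valleys, _ = find_peaks(-smoothed, height=height)
    return sorted(np.concatenate([peaks, valleys]).astype(int).tolist())
```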


Alerting

Our alerting tooling lives in the Treeherder codebase. It’s run through our PerfSheriff Bot (called Sherlock) and runs once per day. When a detection is produced from MozDetect, a telemetry alert is added to the database and then the TelemetryAlertManager is called to handle it. The manager’s tasks are split into 6 ordered phases:

  1. Update alerts with changes from Bugzilla. This step ensures that any changes that happen in the bugs filed are mirrored into our database. Currently, we only track resolution changes here.
  2. Comment on existing bugs. This step is for updating existing bugs with information from new alerts. This step is not currently being used. In the future, this could be used to inform probe owners that a probe which doesn’t produce bugs has produced an alert in the same time range.
  3. File new bugs for alerts. This step handles filing bugs for any new alerts on probes set up for producing bugs.
  4. Modify existing bugs with new alerts. This step handles any modifications needed to existing bugs based on the new bugs that were created. Currently, the “See Also” field is modified for existing bugs to include the new bugs.
  5. Produce emails for new alerts. This step handles producing emails for any alerts set up to produce emails.
  6. Housekeeping. This step retries any failures from the steps above, whether from the current run or past runs. Currently, it’s used to retry bug modifications and email sending when we encounter a failure there. It excludes retrying bug filing, since in that case we delete the alert and retry it the next time the alert is generated.
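As a hypothetical sketch of that flow (all method names are illustrative stand-ins, not the real Treeherder code), the manager runs its six phases in a fixed order each day:

```python
class TelemetryAlertManager:
    """Toy model of the daily run: six ordered phases, as described above."""

    def __init__(self):
        self.log = []  # records the order phases ran in

    def sync_bugzilla_resolutions(self): self.log.append("sync")       # 1. mirror Bugzilla changes
    def comment_on_existing_bugs(self):  self.log.append("comment")    # 2. currently unused
    def file_new_bugs(self):             self.log.append("file")       # 3. file bugs for new alerts
    def update_see_also(self):           self.log.append("see_also")   # 4. link existing bugs to new ones
    def send_emails(self):               self.log.append("email")      # 5. email-based alerts
    def housekeeping(self):              self.log.append("housekeeping")  # 6. retry past failures

    def run_daily(self):
        # Fully automated: each phase runs once per day, in order,
        # with no human input required.
        self.sync_bugzilla_resolutions()
        self.comment_on_existing_bugs()
        self.file_new_bugs()
        self.update_see_also()
        self.send_emails()
        self.housekeeping()
```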

After the housekeeping step, the manager is done for the day and runs again on the next day to handle any updates and new alerts. Contrary to how alerting works for performance tests in CI, this process is fully automated and requires no human input at any point.

Setting up telemetry probes for alerting happens on the mozilla-central side in the probe schema, using the new monitor field in the metadata section (example for email alerts, example for bug alerts). The telemetry alerting documentation has information about how to do this. We then use an index.json file from the telemetry dictionary to gather all the probes that should be alerting. That information is supplemented by more granular information later in the pipeline, such as the time unit used by the probe, to better format the Bugzilla bug table.

Once a telemetry probe is set up for alerting and is found by our system, the owners (those listed in the email notification fields) will begin either receiving emails or have bugs produced for them. These can also be viewed by everyone on this dashboard.

Example of an alert being viewed in the dashboard.


Acknowledgements

Getting the project to this point involved work from people across multiple teams here at Mozilla. Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder-related changes.

If you hit any issues with the telemetry alerting system, or have any suggestions feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or in #perftest on Matrix.

Mozilla Data YouTube Channel

Data Incident Process

Mike Droettboom talks about Data @ Mozilla's process for handling incidents.

Mozilla Performance Blog

Telemetry Alerting Beta Announcement

We’re happy to announce that the Telemetry Alerting beta is now open to everyone!

Monitoring the telemetry probes that you own for changes on a regular, continuous basis can be difficult. With telemetry alerting, that changes today! You can now quickly set up your timing distribution probes for automated monitoring on Windows, with notifications through email or a Bugzilla bug.

To get started, if you only need email alerts, simply add  monitor: True  to the  metadata  section of your probe (example).

Example of an email alert.

If you would prefer to receive Bugzilla bugs when a change is detected, set the monitor field like so (example):

monitor:
    alert: True
    lower_is_better: True/False # Optional
    bugzilla_notification_emails:
        - <YOUR-BUGZILLA-EMAIL-HERE>

Example of an alert bug.


More information about telemetry alerting, and how to set up a probe can be found here in the documentation. There’s also a dashboard that can show you all of the existing telemetry alerts along with some detection information. For now, we only support change detection on Windows for `timing_distribution` probes (see here for other desktop platforms, and android).

Please note that this is an open beta and we are actively looking for feedback on this system. If you hit any issues, or have any suggestions feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or in #perftest on Matrix.

Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder changes.

For a more detailed look at how this works, see this blog post.

The Mozilla Blog

What’s new in Firefox mobile: Less clutter, more control and a free built-in VPN

Mobile browsing hasn’t kept up with how people actually use their phones.

Right now, even basic tasks can feel harder than they should. Finding what you need can mean scrolling through ads and filler content, keeping track of too many tabs, or thinking twice about how private your connection is.

A mobile browser should do more — and we’re raising the bar. Firefox is rolling out a set of updates that build on our most popular desktop features and adapt them for how you browse on the go. Here’s what’s out now, and what’s coming next.

Get the key points with Shake to Summarize 

When you’re following a recipe, reading a product review, or deciding whether a long article is worth your time, getting to the useful part can take longer than it should. 

With Shake to Summarize, you can shake or tap your phone to generate a quick summary of the page. It’s currently available for iOS users in English, and we’re expanding availability to all iOS users in German, French, Spanish, Portuguese, Italian and Japanese starting with Firefox 150 on April 21. We’ll also soon be making Shake to Summarize available to Android users in English, so they too can get to the key points of any article in seconds.

Take control of how AI shows up

AI features are becoming a more common part of browsers — but not everyone wants the same experience. Firefox gives you a say in how they’re used. With AI Controls, you can turn AI features off entirely, enable only the ones you want, or adjust things over time. Rolling out on Android and iOS beginning May 21.

Stay protected with a free, built-in VPN

Firefox’s free built-in VPN covers up to 50 gigabytes of your browsing in Firefox each month, across desktop and mobile devices. It adds a layer of protection to your browsing activity by masking your IP address – especially useful when you’re on public Wi-Fi. Unlike many “free VPNs” that rely on ads or selling user data to generate revenue, Firefox is built with a different model: no selling your browsing data, no injecting ads into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service. Rolling out on Android soon.

Keep your tabs organized with Tab Groups

Tab Groups have been among the most-requested mobile features from our Mozilla community, and they’re coming to mobile soon. You’ll be able to group related tabs to stay organized, whether you’re comparing restaurants, planning a trip or saving articles to read later.

We’re also building toward smart groupings, where Firefox can automatically suggest tab groups for you. Rolling out on Android soon. 

More updates, built around how you browse on mobile

Your phone comes with a browser. That doesn’t mean it has to stay your default.

“Firefox exists to give people a better way to experience the web, and that has to be just as true on mobile as it is on desktop,” said Ajit Varma, head of Firefox. “For many people, their phone is their primary way of getting online, and they deserve a browser that’s fast, intuitive and built around their needs. That’s why we’re investing in mobile more than ever before. We’re building for the millions of people who choose Firefox every day, and giving even more people a reason to do the same.”

Firefox is building a mobile experience designed around how people browse — with tools that help you move faster, stay organized and stay in control.

These updates begin rolling out in April with more on the way.

Take Firefox with you

Download Firefox mobile

The post What’s new in Firefox mobile: Less clutter, more control and a free built-in VPN appeared first on The Mozilla Blog.

The Mozilla Blog

The zero-days are numbered

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.

As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.

As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.

Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.

Until now, the industry has largely fought security to a draw. Vendors of critical internet-exposed software like Firefox take security extremely seriously and have teams of people who get out of bed every morning thinking about how to keep users safe. Nevertheless, we’ve all long quietly acknowledged that bringing exploits to zero was an unrealistic goal. Instead, we aimed to make them so expensive that only actors with functionally unlimited budgets can afford them, and that the cost of burning such an expensive asset disincentivizes those actors against casual use.

This is because security to date has been offensively-dominant: the attack surface isn’t infinite, but it’s large enough to be difficult to defend comprehensively with the tools we’ve had available. This gives attackers an asymmetric advantage, since they only need to find one chink in the armor.

We use defense-in-depth to apply multiple layers of overlapping defenses, but no layer is bulletproof. Firefox runs each website in a separate process sandbox, but attackers try to combine bugs in the rendering code with bugs in the sandbox to escape to a more privileged context. We’ve led the industry in building and adopting Rust, but we still can’t afford to stop everything to rewrite decades of C++ code, especially since Rust only mitigates certain (very common) classes of vulnerabilities.

We pair defense-in-depth engineering with an internal red team tasked with staying on the leading edge of automated analysis techniques. Until recently, these have largely been dynamic analysis techniques like fuzzing. Fuzzing is quite fruitful in practice, but some parts of the code are harder to fuzz than others, leading to uneven coverage.

Elite security researchers find bugs that fuzzers can’t largely by reasoning through the source code. This is effective, but time-consuming and bottlenecked on scarce human expertise. Computers were completely incapable of doing this a few months ago, and now they excel at it. We have many years of experience picking apart the work of the world’s best security researchers, and Mythos Preview is every bit as capable. So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t.

This can feel terrifying in the immediate term, but it’s ultimately great news for defenders. A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug. Closing this gap erodes the attacker’s long-term advantage by making all discoveries cheap.

Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher. Some commentators predict that future AI models will unearth entirely new forms of vulnerabilities that defy our current comprehension, but we don’t think so. Software like Firefox is designed in a modular way for humans to be able to reason about its correctness. It is complex, but not arbitrarily complex1.

The defects are finite, and we are entering a world where we can finally find them all.


1  There’s a risk that codebases begin to surpass human comprehension as a result of more AI in the development process, scaling bug complexity along with (or perhaps faster than) discovery capability. Human-comprehensibility is an essential property to maintain, especially in critical software like browsers and operating systems.

The post The zero-days are numbered  appeared first on The Mozilla Blog.

Niko Matsakis

Symposium: community-oriented agentic development

I’m very excited to announce the first release of the Symposium project as well as its inclusion in the Rust Foundation’s Innovation Lab. Symposium’s goal is to let everyone in the Rust community participate in making agentic development better. The core idea is that crate authors should be able to vend skills, MCP servers, and other extensions, in addition to code. The Symposium tool then installs those extensions automatically based on your dependencies. After all, who knows how to use a crate better than the people who maintain it?

If you want to read more details about how Symposium works, I refer you to the announcement post from Jack Huey on the main Symposium blog. This post is my companion post, and it is focused on something more personal – the reasons that I am working on Symposium.

I believe in extensibility everywhere

The short version is that I believe in extensibility everywhere. Right now, the Rust language does a decent job of being extensible: you can write Rust crates that offer new capabilities that feel built-in, thanks to proc-macros, traits, and ownership. But we’re just getting started at offering extensibility in other tools, and I want us to hurry up!

I want crate authors to be able to supply custom diagnostics. I want them to be able to supply custom lints. I want them to be able to supply custom optimizations. I want them to be able to supply custom IDE refactorings. And, as soon as I started messing around with agentic development, I wanted extensibility there too.

Symposium puts crate authors in charge

The goal of Symposium is to give crate authors, and the broader Rust community, the ability to directly influence the experience of people writing Rust code with agents. Rust is a really popular target language for agents because the type system provides strong guardrails and it generates efficient code – and I predict it’s only going to become more popular.

Despite Rust’s popularity as an agentic coding target, the Rust community is currently little more than a bystander when it comes to the experience of people writing Rust with agents; I want us to have a means of influencing it directly.

Enter Symposium. With Symposium, crate authors can package up skills and other extensions, and Symposium will automatically make them available to your agent. Symposium also takes care of bridging the small-but-very-real gaps between agents (e.g., each has its own hook format, and some use .agents/skills while others use .claude/skills).
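That directory-layout bridging can be pictured as a tiny sync step. Here is a purely illustrative sketch — this is not Symposium’s actual implementation, and `mirror_skills` is a hypothetical name — that mirrors a vendor-neutral skills directory into an agent-specific one:

```rust
use std::fs;
use std::path::Path;

// Hypothetical helper: copy every skill file from a vendor-neutral
// directory (e.g. ".agents/skills") into an agent-specific one
// (e.g. ".claude/skills"), creating the destination if needed.
fn mirror_skills(src: &Path, dst: &Path) -> std::io::Result<()> {
    fs::create_dir_all(dst)?;
    for entry in fs::read_dir(src)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            fs::copy(entry.path(), dst.join(entry.file_name()))?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let src = std::env::temp_dir().join("skills-src");
    let dst = std::env::temp_dir().join("agent-skills");
    fs::create_dir_all(&src)?;
    fs::write(src.join("SKILL.md"), "example skill")?;
    mirror_skills(&src, &dst)?;
    println!("mirrored into {}", dst.display());
    Ok(())
}
```

The real tool has to do more than this (per-agent hook formats, updates, removals), but the core idea is the same: one canonical source of extensions, fanned out to whatever layout each agent expects.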

Example: the assert-struct crate

Let me give you an example. Consider the assert-struct crate, recently created by Carl Lerche. assert-struct lets you write convenient assertions that test the values of specific struct fields:

assert_struct!(val, _ { items: [1, 2, ..], tags: #("a", "b", ..), .. });

The problem: agents don’t know about it

This crate is neat, but of course no model is going to know how to use it – it’s not part of their training sets. An agent can figure it out by reading the docs, but that burns extra tokens (expensive, slow, and carbon-hungry), so it’s not a great default.

You could teach the agent how to use it


In practice, what people do today is add skills to their project – for example, in his toasty crate, Carl has a testing skill that also shows how to use assert-struct. But it seems silly for everybody who uses the crate to repeat that content.


…but wouldn’t it be better if the crate could teach the agent itself?

With Symposium, teaching your agent how to use your dependencies should not be necessary. Instead, your crates can publish their own skills or other extensions.

The way this works is that the assert-struct crate defines the skill once, centrally, in its own repository1. Then there is a separate file in Symposium’s central recommendations repository with a pointer to the assert-struct repository. Any time the assert-struct repository updates that skill, the updates are automatically synchronized for you. Neat! (You can also embed skills directly in the recommendations repository, but then updating them requires a PR to that repo.)

Frequently asked questions

How do I add support for my crate to Symposium?

It’s easy! Check out the docs here:

https://symposium.dev/crate-authors/supporting-your-crate.html

What kind of extensions does Symposium support?

Skills, hooks, and MCP Servers, for now.

Why does Symposium have a centralized repository?

Currently, we allow skill content to be defined in a decentralized fashion, but we require that a plugin be added to our central recommendations repository. This is a temporary limitation; we eventually expect to allow crate authors to add skills and plugins in a fully decentralized fashion.

We chose to limit ourselves to a centralized repository early on for three reasons:

  • Even when decentralized support exists, a centralized repository will be useful, since there will always be crates that choose not to provide that support.
  • Having a central list of plugins will make it easy to update people as we evolve Symposium.
  • Having a centralized repository will help protect against malicious skills while we look for other mechanisms, since we can vet the crates that are added and easily scan their content.

What if I want to add skills for crates private to my company? I don’t want to put those in the central repository!

No problem, you can add a custom plugin source.

Are you aware of the negative externalities of LLMs?

I am, very much so. I feel that many of the uses of LLMs we see today are not great – e.g., chat bots hijack conversational and social cues to earn trust they don’t deserve, and reconfirm people’s biases instead of challenging their ideas. I’m worried about the environmental cost of data centers and the way companies have retreated from their climate goals. And I don’t like how centralized models concentrate economic power.2 So yeah, I see all that. And I also see how LLMs enable people to build things they couldn’t build before and help make previously intractable problems soluble – and that includes more and more people who never thought of themselves as programmers3. My goal with Symposium and other projects is to be part of the solution, finding ways to leverage LLMs that are net positive: opening doors, not closing them.

Extensibility: because everybody has something to offer

Fundamentally, the reason I am working on Symposium is that I believe everybody has something unique to offer. I see the appeal of strongly opinionated systems that reflect the brilliant vision of a particular person. But to me, the most beautiful systems are the ones that everybody gets to build together4. This is why I love open source. This is why I love emacs5. It’s why I love VSCode’s extension system, which has so many great gems6.

To me, Symposium is a double win in terms of empowerment. First, it makes agents extensible, which is going to give crate authors more power to support their crates. But it also helps make agentic programming better, which I believe will ultimately open up programming to a lot more people. And that is what it’s all about.


  1. Actually as of this posting, the assert-struct skill is embedded directly in the recommendations repo. But I opened a PR to put it on assert-struct and I’ll port it over once it lands. ↩

  2. I’m very curious to do more with open models. ↩

  3. Within Amazon, it’s been amazing to watch how many people who never thought of themselves as software developers are starting to build software. Considering the challenges the software industry has with representation, I find this very encouraging. Diverse teams are stronger, better teams! ↩

  4. None of this is to say I don’t believe in good defaults; there’s a reason I use Zed and VSCode these days, and not emacs, much as I love it in concept. ↩

  5. OMG. A college friend of mine wrote this amazing essay some time back on emacs. Next time you’re doomscrolling on the toilet or whatever, pop over to this essay instead. Fair warning: it’s long, so it’ll take you a while to read, but I think it nails what people love about emacs. ↩

  6. These days I’m really enjoying Zed, but I have to say, I really miss kahole/edamagit! Which of course is inspired by the magit emacs package. ↩

Firefox Developer Experience

Firefox WebDriver Newsletter 150

WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 150 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, reported bugs, and submitted patches.

In Firefox 150, Khalid AlHaddad contributed several improvements:

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

WebDriver BiDi

Marionette

The Rust Programming Language Blog

Announcing Rust 1.95.0

The Rust team is happy to announce a new version of Rust, 1.95.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.95.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.95.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.95.0 stable

cfg_select!

Rust 1.95 introduces a cfg_select! macro that acts roughly like a compile-time match on cfgs. This fulfills the same purpose as the popular cfg-if crate, although with a different syntax. cfg_select! expands to the right-hand side of the first arm whose configuration predicate evaluates to true. Some examples:

cfg_select! {
    unix => {
        fn foo() { /* unix specific functionality */ }
    }
    target_pointer_width = "32" => {
        fn foo() { /* non-unix, 32-bit functionality */ }
    }
    _ => {
        fn foo() { /* fallback implementation */ }
    }
}
let is_windows_str = cfg_select! {
    windows => "windows",
    _ => "not windows",
};
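The expression form can be approximated on stable Rust today with the `cfg!` macro. This is only a rough equivalent, though: unlike `cfg_select!`, both branches of the `if` must type-check on every platform.

```rust
fn main() {
    // cfg!(windows) expands to a compile-time boolean; the dead branch is
    // optimized away, but it still has to compile on all targets.
    let is_windows_str = if cfg!(windows) { "windows" } else { "not windows" };
    println!("{is_windows_str}");
}
```

That difference is the main reason `cfg_select!` (like cfg-if) exists: each arm only needs to be valid code under its own configuration.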

if-let guards in matches

Rust 1.88 stabilized let chains. Rust 1.95 brings that capability into match expressions, allowing for conditionals based on pattern matching.

match value {
    Some(x) if let Ok(y) = compute(x) => {
        // Both `x` and `y` are available here
        println!("{}, {}", x, y);
    }
    _ => {}
}

Note that the compiler will not currently consider the patterns matched in if let guards as part of the exhaustiveness evaluation of the overall match, just like if guards.
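The same rule already applies to plain `if` guards on stable Rust, which makes it easy to illustrate: the compiler ignores the guard when checking exhaustiveness, so the remaining arms are still required. A minimal sketch (the `classify` function is hypothetical):

```rust
fn classify(v: Option<i32>) -> &'static str {
    match v {
        // The guard is ignored for exhaustiveness checking, so arms for the
        // remaining Some(..) values and for None are still mandatory.
        Some(x) if x > 0 => "positive",
        Some(_) => "non-positive",
        None => "none",
    }
}

fn main() {
    println!("{}", classify(Some(3)));
}
```

Deleting the `Some(_)` or `None` arm makes this fail to compile with a non-exhaustive-patterns error, even though the guard might seem to cover the cases.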

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Destabilized JSON target specs

Rust 1.95 removes support on stable for passing a custom target specification to rustc. This should not affect any Rust users using a fully stable toolchain, as building the standard library (including just core) already required using nightly-only features.
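For context, a custom target specification was a JSON file passed via `--target`. A minimal sketch of its general shape follows — the field names come from rustc’s target-spec format, but the values here are illustrative and this is not a complete or usable spec:

```json
{
  "llvm-target": "x86_64-unknown-none",
  "arch": "x86_64",
  "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
  "target-pointer-width": "64",
  "os": "none",
  "panic-strategy": "abort"
}
```

Such files remain usable on nightly, which is consistent with the fact that building core for a custom target already required nightly features.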

We're also gathering use cases for custom targets on the tracking issue as we consider whether some form of this feature should eventually be stabilized.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.95.0

Many people came together to create Rust 1.95.0. We couldn't have done it without all of you. Thanks!

Mozilla Localization (L10N)

Localizer Spotlight: Baurzhan

About you

My name is Baurzhan Muftakhidinov. I’m from Kazakhstan. I speak Kazakh, Russian, English and I have been contributing to Mozilla localization for more than 18 years.

From Linux Curiosity to Mozilla Localization

Q: How did you get involved in localization, and what drew you to Mozilla?

A: I came to Mozilla through Linux during my student years. I became interested in Linux at university, and very quickly I noticed how closely the open source world was connected: where there was Linux, Firefox was usually nearby.

When installing Linux distributions, one of the first things I noticed was language support. Many languages were available, but Kazakh was often missing or only partially supported. That made me ask a simple question: why is that, and what can be done about it?

Through Ubuntu’s CD distribution program, I discovered Launchpad and began translating Firefox there. Around the same time, through a local Linux forum, I connected with Timur Timirkhanov, who already had experience with Mozilla localization. He helped me understand Mozilla’s processes, pointed me to packages that needed translation, and opened a locale registration ticket for Kazakh in Bugzilla.

Soon after, Dauren Sarsenov joined, and in the beginning it was mainly the two of us working on Firefox. When Kazakh first appeared in a Firefox beta in spring 2009, we were extremely proud. It felt like a real milestone — not just translating isolated strings, but seeing a major global product appear in Kazakh.

For me, that was bigger than one browser. At the time, we were dreaming about a fully usable open source desktop in Kazakh, and Mozilla localization became one important part of that larger goal. What started as curiosity became a long-term commitment: making technology more accessible in Kazakh and proving that our language belongs in modern software.

Q: Which Mozilla products are closest to you? Do you use them regularly?

A: Firefox is definitely the product closest to me because I use it every day — both desktop and mobile. It never feels like I am translating something distant from my real life. I see the interface, the wording choices, and the practical impact of localization almost daily.

What makes Firefox especially meaningful is that it is both symbolic and practical. Symbolically, it showed that Kazakh could be present in one of the most important pieces of everyday software. Practically, it gave users a browser they could use in their own language. A browser is the gateway to the internet, so localizing Firefox means much more than translating one application.

I also use Thunderbird from time to time and visit MDN quite often. Even when I am not translating, I interact with Mozilla products as a user, so there is always a natural connection between volunteer work and daily habits.

People around me know me through Firefox localization more than through anything else. Very often I am simply “the person who translated Firefox into Kazakh.” That says a lot about how visible Firefox has been.

Promoting Kazakh Localization and Building an Ecosystem

Q: How have you promoted Kazakh-localized software?

A: Most of my promotion work has been grassroots. In earlier years, I shared updates on Linux and open source forums, especially communities already interested in free software. Even when people were not personally interested in contributing, many showed strong support and encouragement. That confirmed that localization mattered beyond just the translation team.

One of my bigger efforts was Kazsid, a Debian-based Linux distribution I built and maintained from 2012 to 2015, partly to test how Kazakh localization worked across multiple applications in a real desktop environment. I included programs that already had Kazakh translations — Firefox, LibreOffice, desktop environments, and other tools — set Kazakh as the default language, and tested how everything worked together.

I shared the builds on forums, and some people downloaded and tried them. It was one of the most practical ways I encouraged interest in Linux and localized software.

Later, as translations matured upstream, maintaining a separate distribution was no longer necessary. That was actually a positive sign — users could install standard distributions and get the same localized experience.

Today I post updates on LinkedIn. It helps maintain visibility, even if it does not often bring in new contributors.

Working Independently — and Working Systematically

Q: What does the Kazakh localization community look like today?

A: At the moment, I am effectively the only active contributor across several major open source localization efforts in Kazakh, including Mozilla products, LibreOffice, GNOME, Xfce, and others.

In the early years, several people made meaningful contributions, but most eventually moved on. Timur helped significantly, especially in the earlier stages and in understanding Mozilla’s processes, and I still occasionally consult trusted people when I need a second opinion.

The challenge for smaller languages is not only starting a translation but maintaining it over the long term. From early on, I was not thinking about one application. My goal was broader: to help create a real open source desktop experience in Kazakh. A browser translated into Kazakh is important, but a full ecosystem is even more meaningful. Sustainability is the hardest part.

Q: How do you approach quality when you are the main translator?

A: Direct user feedback is rare. So QA depends largely on my own testing, judgment, and systems.

I test software in real use, especially Firefox. In earlier years, I also used Nightly builds. Before settling on new terminology, I check dictionaries and reference materials. I consult fluent speakers when needed, and sometimes I discuss wording with my wife to see how natural it sounds.

My principle is that translations should feel clear and alive, not mechanically imported. I studied in Kazakh and remember the terms we were actually taught in IT-related subjects, and that background matters to me.

Because of my scripting background, I have written small tools in Python to help verify translations, track terminology, and maintain consistency. QA is not just “reading it once and hoping for the best.” It is a combination of linguistic judgment, real usage, consultation, and automated checking.

More recently, I have been exploring how AI can assist localization. By testing translations through tools like the Google Gemini API and guiding terminology carefully, I have been able to close significant translation gaps. For Kazakh, newer models understand context much better than traditional machine translation systems. AI does not replace judgment, but it can make the work faster and more effective.

Professional Background

Q: How does your professional background influence your localization work?

Baurzhan at GIS Day 2025

A: My background is partly technical and partly analytical. I studied IT, worked as a Linux system administrator, and later moved into data analysis and GIS.

Those technical skills helped significantly. Automation makes a long-term localization effort much more manageable, especially when one person is doing most of the work.

Localization has strengthened my discipline and consistency. It requires patience and regular effort. Over time, I developed an instinct for terminology and phrasing — whether a term feels natural or artificial in context.

A Few Personal Notes

I have loved reading since I was four years old. My favorite genres are science fiction and popular science. Reading is still how I recharge.

I have lived in several cities in Kazakhstan, so I sometimes joke that I am a true nomad.

My family has always been supportive of my open source work. And when I run into a particularly difficult translation, I can still discuss it with my wife and get a fresh perspective.

Firefox Tooling Announcements

Happy BMO Push Day! (20260415.1)

Github Link

The following changes have been pushed to bugzilla.mozilla.org:

  • Bug 2023761 - [GITHUB] Allow use of individual api keys for pull requests and push comments instead of single share secret
  • Bug 2012634 - “Phabricator Revisions” table overflows on X axis on mobile
  • Bug 2028222 - Pasting multi-line text after selecting multi-line text does not overwrite, but applies markup for link
  • Bug 2029522 - CI workflow uses deprecated docker-compose v1 and actions/checkout@v3
  • Bug 2031520 - Missing space in “Throw away my changes, andrevisit bug NNN” message (when marking a bug as a duplicate of a hidden bug)
  • Bug 2030581 - REST API: PUT /rest/bug/attachment/{id} does not pass is_markdown when adding comment
  • Bug 2018260 - “Fields You Can Search On” is blocking people from making it through quicksearch.html doc
  • Bug 2028240 - Cloned security bugs should default to being secure
  • Bug 2031007 - When linking a Github pull request to a BMO bug, the attachment filename should contain the repository name in addition to the pull request ID

Discuss these changes in the BMO Matrix Room

1 post - 1 participant

Read full topic

Firefox Nightly

QR Codes, Speed Calculators, Better RAM Usage – These Weeks in Firefox: Issue 199

Highlights

  • Thanks to overholt, macOS Nightly now has support for sharing the current tab’s URL via QR Code (Right-click tab, Share > Generate QR Code). This is held to Nightly for now.
  • Scott Downe fixed an issue where newtab background GIFs could max out RAM while idle in the background by throttling animated background decoding on backgrounded tabs. This fix is going out in Firefox 150.
  • Special thanks to volunteer contributor 1justinpeter who just added speed unit conversion support to the AwesomeBar!
    • You can try it in Nightly – try 1000 km/h to m/s
    • This is tentatively slated to go out in Firefox 150
  • New Tab Sections have been enabled by default in Canada!

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Chris Vander Linden
  • Itiel
  • Justin Peter
  • Khalid AlHaddad
  • Mauro V [:cheff]
  • Sam Johnson
  • Sebastian Zartner [:sebo]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed a structuredClone regression in the MV2 userScripts sandbox, bringing it in line with the fix already applied to content scripts and MV3 userScripts sandboxes; the fix landed in Nightly 150 and was uplifted to Beta 149 – Bug 2020773
  • Fixed a tab crash triggered by calling Document.parseHTMLUnsafe() from a browser extension content script (an assertion failure was hit because parseHTMLUnsafe wrongly tried to create a document belonging to the expanded privileged principal that originated the call). The fix makes sure parseHTMLUnsafe uses the webpage document’s principal, which prevents the crash as a side effect – Bug 1912587
  • Fixed a regression where the load event on about:blank iframes would not fire when a content script injected a style element (regressed in Firefox 148 as a side effect of the changes applied by Bug 543435; the fix landed in Nightly 150 and was uplifted to Beta 149 and Release 148) – Bug 2020300
    • Thanks to Vincent Villa for promptly investigating and fixing this regression!
WebExtension APIs
  • As part of followups to the work allowing action.openPopup calls without user activation, action.openPopup() now rejects when another panel, context menu, doorhanger, or notification is already open in the window – Bug 2022281
  • As a followup to the splitView mode support introduced in the tabs API, tabs.move() has been tweaked to correctly return all specified tabs moved in a split view – Bug 2022372
Addon Manager & about:addons
  • Fixed a rendering regression where the context menu on about:addons cards would appear with a transparent background at non-default zoom levels – Bug 2006926
    • Thanks to Botond Ballo for investigating and fixing this small rendering regression!

DevTools

WebDriver

Lint, Docs and Workflow

New Tab Page

Search

  • Marco worked on replacing ContentTask.spawn in various tests @ 2023131, 2023134
  • Dale worked on several TrustPanel UX fixes @ 2017369, 2017376
  • Daisuke fixed an issue with SwitchTab results that have no title @ 2020341
  • Moritz worked on several telemetry issues relating to the new search bar @ 2021927
  • Moritz fixed the unified search button having no focus border @ 2023656
  • Dao added support for middle clicking to search in the unified search menu @ 2011220

Smart Window

  • “Beta” badges 2017667
  • Added AI Controls integration, supporting the blocked state (2010599)
  • Added more user choice with split memory generation (2017428)
  • Added chat search in Firefox View (2009070)
  • Follow-up prompts now persist across tab switches (2019696)
  • Conversation starter prompts now update with the foregrounded tab (2013657)
  • Current tab context is now removable (2018802)
  • Keyboard fixes for smartbar input (2019556 2017939 2015090)
  • Show sign in button in chat if signed out (2015720)

Storybook/Reusable Components/Acorn Design System

  • Tim Giles fixed some issues with panel-list using popover:
    • A macOS issue where keyboard events went to the wrong <moz-select> – Bug 2017668
    • Scrolling now dismisses an open panel-list – Bug 2018563
  • akulyk updated the panel-list variant of moz-select to use role="combobox", improving screen reader semantics and keyboard navigation consistency.
  • Dustin Whisman updated the moz- widgets CSS to pass use-design-tokens:
  • Dustin Whisman updated some design token names for consistency (Bug 2013342):
    • --font-size-heading-* is now a font-size variant (was --heading-font-size)
    • --card-border-color, --card-box-shadow, --card-box-shadow-hover, --popup-box-shadow, --tab-box-shadow (previously nested as variants, now under their component names)

UX Fundamentals

  • Added keyboard autofocus to the “Try Again” button in Felt Privacy error pages so users who land on an error page can immediately press enter to retry. (2021447)
  • Added keyboard access keys to the three primary error page buttons: G (Go Back), T (Try Again), and P (Proceed to Site). (404501)
  • In progress: refactoring net error illustrations into a shared object and adding alt text so assistive technology can read out meaningful descriptions. (2022033)
  • In progress: adding improved messaging to the file-not-found error page. (2018850)
  • In progress: restoring the error page for Work Offline mode so users see messaging that accurately reflects that they’re in Offline mode, not that there’s a network problem.

Mozilla Open Policy & Advocacy Blog

Mozilla Urges the FTC to Tackle Harmful Design Practices

In response to concerns from both consumers and the industry, the US Federal Trade Commission (FTC) invited public comment on whether it should amend the current Rule Concerning the Use of Prenotification Negative Option Plans to address deceptive or unfair negative option practices.

Negative option marketing is a practice in which a seller treats a consumer’s silence or failure to take action as consent to be charged for goods or services. This technique is often used in subscription services, where users may be guided toward accepting recurring charges through default selections or obscure disclosures. These design practices, also known as “dark patterns,” successfully manipulate and influence user behavior on a systematic level and are often employed in all aspects of digital markets, not just with subscriptions.

As a browser developer, Mozilla is well-acquainted with the negative impacts of manipulative design. The web browser market provides a documented case study illustrating how operating systems deploy deceptive design practices to weaponize friction and status-quo bias to influence consumer behavior. As such, Mozilla was eager to provide feedback and encourage the Commission to examine the breadth of deceptive design practices that undermine choice.

Dark patterns are a byproduct of power asymmetry between companies and consumers. If we don’t protect meaningful choice and effective competition now, we risk giving even more control to the biggest players — and losing what makes the web open and innovative in the first place.

The FTC has a critical opportunity, both in this rulemaking and more broadly, to modernize consumer protection for the realities of digital markets. We encourage the FTC to:

  • Make clear that practices which manipulate, coerce, or mislead users through interface design, defaults, or friction fall within the scope of unfair or deceptive acts or practices.
  • Investigate remedies for digital markets to operate with meaningful consumer choice.
  • Prioritize targeted enforcement against well-documented uses of deceptive design, such as tactics prevalent on the Windows operating system, designed to push users to the Edge browser.

We welcome the opportunity to share our relevant experiences in the browser space and look forward to continuing the conversation.

Read our full comments to the FTC for more details on our recommendations.

The post Mozilla Urges the FTC to Tackle Harmful Design Practices appeared first on Open Policy & Advocacy.

Firefox Tooling Announcements

MozPhab 2.13.0 Released

Bugs resolved in Moz-Phab 2.13.0:

  • bug 1925717 stop calling edge.search in moz-phab patch by making use of the stackGraph revision field
  • bug 2030443 Switch to uv for package management in moz-phab
  • bug 2031283 Parallelize network requests in moz-phab patch

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

This Week In Rust

This Week in Rust 647

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is Myth Engine, a high-performance, cross-platform rendering engine.

Thanks to Pan Xinmiao for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

519 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

This week was negative overall, mainly due to a type system fix and a temporary revert of some attribute cleanups that had previously improved performance.

Triage done by @panstromek. Revision range: e73c56ab..dab8d9d1

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      0.4%   [0.2%, 0.7%]     46
Regressions ❌ (secondary)    0.5%   [0.1%, 2.3%]     102
Improvements ✅ (primary)    -0.5%   [-0.6%, -0.4%]   4
Improvements ✅ (secondary)  -0.4%   [-0.6%, -0.2%]   5
All ❌✅ (primary)            0.4%   [-0.6%, 0.7%]    50

4 Regressions, 1 Improvement, 5 Mixed; 6 of them in rollups. 41 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Cargo Compiler Team (MCPs only) Rust RFCs Leadership Council

No Items entered Final Comment Period this week for Language Reference, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2026-04-15 - 2026-05-13 🩀

Virtual
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

the amount of times that I spend 15 min in the docs + coding which end up in a monstrous or().flatten().map().is_ok_and() only to get slapped by clippy saying replace your monster with this single function please is way too high 😀

– Teufelchen on RIOT off-topic matrix chat

Thanks to chrysn for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Application Security Team

Firefox Security & Privacy Newsletter 2026 Q1

Welcome to the Q1 2026 edition of the Firefox Security & Privacy Newsletter.

Security and privacy are foundational to Mozilla’s manifesto and central to how we build Firefox. In this edition, we highlight key security and privacy work from Q1 2026, organized into the following areas:

  • Firefox Product Security & Privacy — new security and privacy features and integrations in Firefox
  • Community Engagement — updates from our security research and bug bounty community
  • Web Security & Standards — advancements that help websites better protect their users from online threats

Preface

Note: Some of the bugs linked below might not be accessible to the general public, as they are restricted to specific work groups. We de-restrict fixed security bugs after a grace period, once the majority of our user population has received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.

Firefox Product Security & Privacy

Collaboration with Anthropic: A few weeks ago, Anthropic’s Frontier Red Team shared the results of a new AI-assisted vulnerability detection approach. Using this method, we have identified more than a dozen confirmed security issues, each supported by reproducible test cases. Learn more in our blog: Hardening Firefox with Anthropic’s Red Team. Leveraging our Firefox Security expertise, we ended up finding dozens of additional vulnerabilities that were fixed in the following Firefox updates.

YouTube coverage of Firefox at pwn2own 2025: To demonstrate Firefox’s focus on user security and Mozilla’s commitment to openness, we invited LiveOverflow to follow us during the prestigious hacking competition pwn2own last year. LiveOverflow’s four-part documentary provides behind-the-scenes coverage of our quick response to fixing two Firefox 0-day security bugs. The videos go from preparation (part 1), to exploit analysis (part 2) and disclosure (part 3), all the way to the rapid release of a Firefox update (part 4), covering the two-day event.

Trustworthy JavaScript for the Open Web: Alongside partners from Meta, Proton AG, Cloudflare, and the Freedom of the Press Foundation, we presented our plans to improve the trustworthiness of JavaScript on the Web at Real World Crypto.

SafeBrowsing: Firefox 147 shipped with SafeBrowsing v5 support, protecting users against malicious URLs. And starting with v149, Firefox blocks and revokes website permissions for sites on the SafeBrowsing lists (Bug 1986300), leveling up the built-in protection from online threats.

Stronger XSS Protection through the Sanitizer API: Starting with v148, Firefox was the first browser to add support for the Sanitizer API, helping prevent XSS attacks on the web. Learn more in our blog post, Goodbye innerHTML, Hello setHTML: Stronger XSS Protection in Firefox 148, or tune in to the ShopTalk Show podcast, where Freddy Braun discusses the details of the Sanitizer API.

2048-bit Minimum for RSA Certificates: Firefox now enforces a minimum 2048-bit RSA key size for certificates issued by Mozilla’s built-in root CAs. As publicly trusted CAs already meet this requirement, no significant impact to the broader web is expected.

Community Engagement

Bug Bounty Program Updates: As the threat landscape evolves, including an increasing volume of AI-assisted security bug reports, we’re evolving our security program alongside it. With continued advances in browser security architecture, our bug bounty program is refining its incentives to prioritize the highest-impact research and the most critical classes of vulnerabilities while focusing on novelty. Learn more in our blog post: Bug Bounty Program Updates 2026. We have also just updated our Bug Bounty hall of fame to list all the people who helped us find and fix security vulnerabilities in Q1 of 2026.

Web Security & Standards

Storage-Access Headers: Firefox 147 is shipping an extension of the Storage Access API to improve both web compatibility and parity with Chrome. These Storage Access headers allow web pages to opt out of storage isolation upfront and without the need to first load a document.
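In practice, the mechanism is header-based: an embedded resource's fetch advertises its storage-access state, and the server can ask the browser to activate (or retry with) unpartitioned storage. A sketch of the exchange, with header names as defined in the Storage Access Headers proposal and an illustrative origin:

```http
GET /widget HTTP/1.1
Host: embed.example
Sec-Fetch-Storage-Access: inactive

HTTP/1.1 401 Unauthorized
Activate-Storage-Access: retry; allowed-origin="https://site.example"
```

On the retry, the request is sent with storage access active, so the embedded document gets its unpartitioned cookies without first having to load and call the Storage Access API from script.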

Going Forward

As a Firefox user, you automatically benefit from the security and privacy improvements described above through Firefox’s regular automatic updates. If you’re not using Firefox yet, you can download it to enjoy a fast, secure browsing experience—while supporting Mozilla’s mission of a healthy, safe, and accessible web for everyone.

We’d like to thank everyone who helps make Firefox and the open web more secure and privacy-respecting.

See you next time with the Q2 2026 report.

— The Firefox Security and Privacy Teams

Mozilla Data YouTube Channel

Responsible Data Collection is Good, Actually (Ubisoft Data Summit 2021)

Firefox Telemetry Engineer and Data Steward Chris H-C (:chutten) gives a talk at Ubisoft's Data Summit 2021 about how Responsible Data Collection as practised at Mozilla makes cataloguing easy, stops instrumentation mistakes before they ship, and allows you to build self-serve analysis tooling that gets everyone invested in data quality. Oh, and it's cheaper, too.

Spidermonkey Development Blog

Benchmark Mode in SpiderMonkey

You ever get to the end of running benchmarks, maybe a long running one, and realize
 “Oh no. I forgot to set that important option, and these results are useless”

Yeah. I have. Too many times.

So I’ve added --benchmark-mode and --strict-benchmark-mode to SpiderMonkey.

These options configure the shell for benchmarking, boiling the team’s accumulated wisdom about multiple shell options down to a single --benchmark-mode flag. The --strict-benchmark-mode variant additionally aborts the run if the shell is configured in a way where effective benchmarking is unlikely to be possible (e.g. benchmarking a debug build!).
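Usage then collapses to a single flag (an invocation sketch; the shell binary path and script name are illustrative):

```shell
# One flag instead of a hand-maintained list of options:
js --benchmark-mode my-benchmark.js

# Strict mode additionally refuses to run if the configuration makes
# benchmarking pointless (e.g. a debug build):
js --strict-benchmark-mode my-benchmark.js
```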

The nice thing about nailing this down is that this is something we can point anyone to and know that their shell is following the rules any of us would follow.

The general design philosophy of benchmark mode is to disable things you wouldn’t see enabled in Firefox in normal configuration, as well as debugging code that maybe makes sense for test suites but doesn’t make sense for a benchmark.

Hopefully this is the end of me realizing that I forgot to pass --no-async-stacks yet again.

Mozilla Open Policy & Advocacy Blog

Anti-hacking laws should not be used to lock up the open internet

Mozilla has joined EFF, the Alliance for Responsible Data Collection, Digital Medusa, and EleutherAI in filing an amicus brief in Amazon v. Perplexity, urging the Ninth Circuit not to stretch the Computer Fraud and Abuse Act (CFAA) far beyond its intended purpose.

We have said this before, and it remains true: laws designed to protect the security of the internet should not be used to undermine how people want to use it.

Our mission is grounded in the idea that the internet must remain open and accessible to all, and that privacy and security online are fundamental. Mozilla joined this brief because overly broad interpretations of computer crime laws can put those values at risk.

The CFAA is an anti-hacking law. It was meant to address break-ins to computer systems — not to criminalize tools that enable people to access and engage with information that is publicly available on the web. While there are no-doubt many challenging legal and policy questions around the growth and use of agentic AI tools, we believe expanding the reach of CFAA to address these issues would threaten innovation, chill the development of useful tools and services for researchers and journalists, and undermine competition online.

The post Anti-hacking laws should not be used to lock up the open internet appeared first on Open Policy & Advocacy.

The Servo Blog

Servo is now available on crates.io

Today the Servo team has released v0.1.0 of the servo crate. This is our first crates.io release of the servo crate that allows Servo to be used as a library.
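Pulling it into a project is now an ordinary Cargo dependency (version as per this announcement):

```toml
[dependencies]
servo = "0.1.0"
```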

We currently do not have any plans to publish our demo browser servoshell to crates.io. In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main “bottleneck” now being the human-written monthly blog post. Since we’re quite excited about this release, we decided not to wait for the monthly blog post to be finished, but promise to deliver the monthly update in the coming weeks.

As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo’s embedding API and its ability to meet some users’ needs.

In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.

Andreas Farre

How to make Firefox builds [1] 17% faster [2]

In the previous post, I mentioned that buildcache has some unique properties compared to ccache and sccache. One of them is its Lua plugin system, which lets you write custom wrappers for programs that aren’t compilers in the traditional sense. With Bug 2027655 now merged, we can use this to cache Firefox’s WebIDL binding code generation.

What’s the WebIDL step?

When you build Firefox, one of the earlier steps runs python3 -m mozbuild.action.webidl to generate C++ binding code from hundreds of .webidl files. It produces thousands of output files: headers, cpp files, forward declarations, event implementations, and so on. The step isn’t terribly slow on its own, but it runs on every clobber build, and the output is entirely deterministic given the same inputs. That makes it a perfect candidate for caching.

The problem was that the compiler cache was never passed to this step. Buildcache was only wrapping actual compiler invocations, not the Python codegen.

The change

The fix in Bug 2027655 is small. In dom/bindings/Makefile.in, we now conditionally pass $(CCACHE) as a command wrapper to the py_action call:

WEBIDL_CCACHE=
ifdef MOZ_USING_BUILDCACHE
WEBIDL_CCACHE=$(CCACHE)
endif

webidl.stub: $(codegen_dependencies)
	$(call py_action,webidl $(relativesrcdir),$(srcdir),,$(WEBIDL_CCACHE))
	@$(TOUCH) $@

The py_action macro in config/makefiles/functions.mk is what runs Python build actions. The ability to pass a command wrapper as a fourth argument was also introduced in this bug. When buildcache is configured as the compiler cache, this means the webidl action is invoked as buildcache python3 -m mozbuild.action.webidl ... instead of just python3 -m mozbuild.action.webidl .... That’s all buildcache needs to intercept it.

Note the ifdef MOZ_USING_BUILDCACHE guard. This is specific to buildcache because ccache and sccache don’t have a mechanism for caching arbitrary commands. Buildcache does, through its Lua wrappers.

The Lua wrapper

Buildcache’s Lua plugin system lets you write a script that tells it how to handle a program it doesn’t natively understand. The wrapper for WebIDL codegen, webidl.lua, needs to answer a few questions for buildcache:

  • Can I handle this command? Match on mozbuild.action.webidl in the argument list.
  • What are the inputs? All the .webidl source files, plus the Python codegen scripts. These come from file-lists.json (which mach generates) and codegen.json (which tracks the Python dependencies from the previous run).
  • What are the outputs? All the generated binding headers, cpp files, event files, and the codegen state files. Again derived from file-lists.json.

With that information, buildcache can hash the inputs, check the cache, and either replay the cached outputs or run the real command and store the results.

The wrapper uses buildcache’s direct_mode capability, meaning it hashes input files directly rather than relying on preprocessed output. This is the right approach here since we’re not dealing with a C preprocessor but with a Python script that reads .webidl files.
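Conceptually, direct mode boils down to hashing the command line and the raw bytes of every input file, then replaying stored outputs on a hit. A minimal Python sketch of that idea (not buildcache's actual implementation; names are illustrative):

```python
import hashlib


def cache_key(command, input_files):
    """Direct-mode style key: hash the command line plus every input file's bytes."""
    h = hashlib.sha256()
    h.update("\0".join(command).encode())
    # Sort so the key is independent of enumeration order.
    for path in sorted(input_files):
        h.update(path.encode())
        h.update(input_files[path])
    return h.hexdigest()


def run_cached(command, input_files, cache, run):
    """Replay cached outputs on a hit; otherwise run the real command and store them."""
    key = cache_key(command, input_files)
    if key not in cache:
        cache[key] = run(command)
    return cache[key]
```

Any change to a .webidl file (or to the codegen scripts, when they are listed as inputs) changes the key, so stale outputs can never be replayed; identical inputs skip the real command entirely.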

Numbers

Here are build times for ./mach build on Linux, comparing compiler caches. Each row shows a clobber build with an empty cache (cold), followed by a clobber build with a filled cache (warm):

tool        cold    warm    with plugin
none        5m35s   n/a     n/a
ccache      5m42s   3m21s   n/a
sccache     9m38s   2m49s   n/a
buildcache  5m43s   1m27s   1m12s

The “with plugin” column is buildcache with the webidl.lua wrapper active. It shaves another 15 seconds [1], bringing the total down to 1m12s [2]. Not a revolutionary improvement on its own, but it demonstrates the mechanism. The WebIDL step is just the first Python action to get this treatment; there are other codegen steps in the build that could benefit from the same approach.

More broadly, these numbers show buildcache pulling well ahead on warm builds. Going from a 5m35s clean build to a 1m12s cached rebuild is a nice improvement to the edit-compile-test cycle.

These are single runs on one machine, not rigorous benchmarks, but the direction is clear enough.

Setting it up

If you’re already using buildcache with mach, the Makefile change is available when updating to today’s central. To enable the Lua wrapper, clone the buildcache-wrappers repo and point buildcache at it via lua_paths in ~/.buildcache/config.json:

{
  "lua_paths": ["/path/to/buildcache-wrappers/mozilla"],
  "max_cache_size": 10737418240,
  "max_local_entry_size": 2684354560
}

Alternatively, you can set the BUILDCACHE_LUA_PATH environment variable. A convenient place to do that is in your mozconfig:

mk_add_options "export BUILDCACHE_LUA_PATH=/path/to/buildcache-wrappers/mozilla/"

The large max_local_entry_size (2.5 GB) is needed because some Rust crates produce very large cache entries.

What’s next

The Lua plugin system is the interesting part here. The WebIDL wrapper is a proof of concept, but the same technique applies to any deterministic build step that takes known inputs and produces known outputs. There are other codegen actions in the Firefox build that could get the same treatment, and I plan to explore those next.

Notes
  1. For a clobber build with a warm cache

  2. On my machine

The Mozilla Blog

Old habits die hard: Microsoft tries to limit our options, this time with AI

Microsoft recently announced it’s pulling back Copilot from several of its core Windows apps — Photos, Notepad, the Snipping Tool, and Widgets. Rolling back these forced AI integrations is the right move, but this is just the most recent example of Microsoft going too far without user consent. 

Copilot was pushed onto users

Over the past year, Copilot wasn’t offered to Windows users — it was installed on them. The M365 Copilot app began auto-installing on any Windows device running Microsoft 365 desktop apps, with no prompt and no consent. A new physical keyboard key was added to laptops that launched Copilot by default, with no simple way to remap it. By default, Copilot was pinned to the taskbar starting with Windows 11 PCs. And, going a step further, Microsoft planned to embed it into three of the most fundamental surfaces for the operating system: the Windows notification center, the Settings app, and File Explorer. 

Then came the user backlash. 

When Microsoft says it now wants to be “intentional” about Copilot, they’re really admitting that they made repeated choices to serve their business over their customers. 

This isn’t the first time – Microsoft has a long history of deceptive design patterns

The pattern of behavior here isn’t new. Independent research commissioned by Mozilla has documented how Microsoft uses design and distribution tactics to override user choice — from deliberately complicated processes for changing your default browser, to UI that routes users back to Microsoft’s Edge browser even after they’ve explicitly chosen something else.

Since Mozilla published that research, Microsoft has continued to escalate its use of dark patterns to force behaviors that help the bottom line, not people’s lives. Here are a few examples from the rollout of Windows 11 that have continued to strip users of their choice: 

  • The Windows Search bar, embedded in the taskbar on both Windows 10 and Windows 11, is hardcoded to only open Microsoft Edge, regardless of your default browser.
  • Windows has not implemented a true device migration system, like we see with Android, iOS, and MacOS, where your apps, settings and data are all reflected on your new device when you buy a new computer. Instead, the defaults are changed back to Microsoft’s own products. 
  • Microsoft Outlook and Microsoft Teams by default ignore your default browser selection and open links directly in Edge.
  • Windows does not offer a simple prompt that other browsers can trigger asking to become your default browser. Instead, other browsers have to direct you to Windows settings and hope you finish the multi-step process.

The Copilot rollout followed the same playbook we’ve come to expect from Microsoft: use automatic installs, physical hardware, and default settings to force behaviors. In the most recent instance, they allowed their AI to learn and gather data as quickly as possible before people had a choice. 

What ‘genuinely useful’ AI integration actually looks like

We, like Microsoft and basically every tech company, have been asking ourselves the same question: What does it mean for AI to be genuinely useful? For us, the answer is simple. AI should work on your terms, not ours. Firefox’s goal is to create AI enhancements that are made for people, not just because they can increase profit. 

We’ve rolled out AI-enhanced features that make browsing smarter, faster, and more personalized, such as translations that stay local on your device to help you browse the web in your preferred language, alt text in PDFs to add accessibility descriptions to images in PDF pages and tab grouping which suggests related tabs and group names.

But we also know users deserve a choice. We built our answer into Firefox 148, introducing a centralized AI Controls panel in your browser settings including a single “Block AI Enhancements” switch that turns off every AI feature at once. Each option is also individually controllable. 

The premise is simple: You should decide whether AI is part of your browsing experience at all. Not Big Tech. Not Mozilla. You.

And critically, your preferences also persist across browser updates, which means AI tools won’t silently re-enable themselves after a major upgrade. No reinstalling. No opting out again after the fact. It’s designed for people who care about what’s happening on their computer but shouldn’t have to become a systems administrator to stay in control of it.

The stakes are bigger than one rollback

When a company with Microsoft’s reach continues to control users — and only walks it back when the noise gets loud enough — it shapes what people expect from technology. It tells people that their only real move is to complain until, hopefully, the company relents. It also makes it harder for alternatives to compete when a company uses its reach and control to steer people back into its own products.  

We don’t think that’s the internet we have to accept. People have been clear about what they want when it comes to this era of the internet. They want to feel like they’re in control of their own devices and their own data. That’s the internet we’re trying to build. 

The post Old habits die hard: Microsoft tries to limit our options, this time with AI appeared first on The Mozilla Blog.

The Mozilla Blog

0DIN is open-sourcing AI security and the hard-earned knowledge behind it

Image generated by Nano Banana 2 in response to a request for a “Retro-futuristic collage of a scientist using an open-source AI scanner to analyze floating vintage tech and digital data streams.”

We’re launching across the developer and security community this week on Product Hunt and Hacker News. If you’ve been following AI security, we’d love your support and your feedback. 

At Mozilla, open source has never been just a licensing choice. It’s a conviction: the internet gets healthier when tools and knowledge circulate freely, when anyone can audit what’s running, extend what exists, and build on what came before. That’s why we built Firefox in the open. It’s why we’ve kept building that way ever since.

0DIN, Mozilla’s AI security team, is working from the same premise. This week we’re releasing the 0DIN AI Security Scanner as open source software under the Apache 2.0 license, along with 179 community probes covering 35 vulnerability families, plus six specialty probes drawn exclusively from our bug bounty library.

The scanner, and the intelligence behind it

The 0DIN Scanner isn’t another benchmark suite built from textbook examples. We’re seeding it with probes drawn directly from our bug bounty program, where security researchers compete to find novel techniques to manipulate, extract data from, and subvert AI systems. As new vulnerabilities are discovered and disclosed through that program, we’ll continue adding probes to the open-source library over time.

That loop, from researcher discovery to packaged reusable test, is what separates 0DIN Scanner from generic tooling. It’s high impact intelligence on jailbreaks, updated frequently as our researchers find new techniques.

Built on NVIDIA’s GARAK open-source framework, the 0DIN Scanner adds a graphical interface, automated scan scheduling, cross-model comparative analysis, and enterprise-grade reporting. It runs against frontier models, open source LLMs, chatbots and anything with a prompt interface. Security teams can see attack success rates, a vulnerability breakdown, and a comparison against the frontier models that attackers are also probing every day.

Six of those bug bounty probes are named here for the first time: Placeholder Injection, Incremental Table Completion, Technical Field Guide, Chemical Compiler Debug, Correction, and Hex Recipe Book. Each represents a real technique that worked against production AI systems before we closed the loop.

These probes are scored using JEF (Jailbreak Evaluation Framework), our open-source library for measuring prohibited content output, which is also seeing major updates this week.

The code is at github.com/0din-ai/ai-scanner. Fork it, extend it, build on it.

Knowing your risk before attackers do

Not every organization has a red team or the bandwidth to run adversarial testing. Many companies are deploying AI in production right now without a clear picture of where they’re exposed. To help close that gap, we’re offering free security assessments for enterprise AI deployments.

The assessment delivers an attack success rate against your systems, a breakdown across prompt injection, jailbreaks, and data extraction categories, and a benchmark comparison against major frontier models. The process takes a few minutes to set up, with scan duration varying based on the number of probes chosen. If you’re actively deploying AI and haven’t tested it under adversarial conditions, this is a good place to start.

For teams that don’t want to manage the open source scanner on their own, we also offer a managed Enterprise edition with access to nearly 500 pre-disclosure probes from the bug bounty program, giving organizations advance notice of emerging techniques before they’re publicly known.

Why open source, and why now

AI is moving fast enough that no single team will solve this alone. There are too many threats, too many models, too much attack surface. Keeping our tools locked away would make 0DIN marginally stronger while leaving the broader internet weaker.

The researchers who submitted findings through our bug bounty program earned bounties for their work. We’re releasing a meaningful portion of that intelligence as open source and we’ll keep doing so as new vulnerabilities are discovered and disclosed. That’s the deal Mozilla has always offered: we build in the open, the community helps make it better, and the web gets a little healthier for it.

Get involved

The post 0DIN is open-sourcing AI security and the hard-earned knowledge behind it appeared first on The Mozilla Blog.

Andreas Farre

BuildCache now works with mach

I’m happy to announce that buildcache is now a first-class compiler cache in mach. This has been a long time coming, and I’m excited to finally see it land.

For those unfamiliar, buildcache is a compiler cache that can drastically cut down your rebuild times by caching compilation results. It’s similar to ccache, and even more so to sccache, in that it supports C/C++ out of the box as well as Rust. It has some unique properties of its own though, which we’ll look at more closely in following posts.

Getting started

Setting it up is straightforward. Just add the following to your mozconfig:

ac_add_options --with-ccache=buildcache

Then build as usual:

./mach build

That’s it.

Give it a try

If you run into any issues, please file a bug and tag me. I’d love to hear how it works out for people, and any rough edges you might hit.

Firefox Tooling Announcements

MozPhab 2.12.0 Released

Bugs resolved in Moz-Phab 2.12.0:

  • bug 2029015 Clean up previous_commit state tracking
  • bug 2029072 Using moz-phab uplift --assessment-id shouldn’t require extra browser clicks

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

This Week In Rust

This Week in Rust 646

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is aimdb-core, a type-safe and platform-agnostic data pipeline where the Rust type system is the schema and trait implementations define its behavior.

Thanks to sounds.like.lx for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • NDC Techtown | CFP open until 2026-04-14 | Kongsberg, Norway | 2026-09-09 - 2026-09-12.
  • EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

479 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A shorter week than normal (probably due to later perf triage last week). Overall fairly small changes scattered across various PRs, though the net effect was slightly positive (-0.5% avg change). All changes ended up either mixed or improvements this week.

Triage done by @simulacrum. Revision range: cf7da0b7..e73c56ab

0 Regressions, 3 Improvements, 8 Mixed; 5 of them in rollups. 26 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Rust RFCs, Language Reference, Language Team, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-04-08 - 2026-05-06 🩀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust tried to have polymorphic generics in the early pre-1.0 days, and they quite reasonably gave up because it was too much work. For real Swift, great fucking work for getting all of this to work!

– Aria Desires on her blog

llogiq thanks himself for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Tooling Announcements

New Deploy of PerCompare April 7th

Firefox Tooling Announcements

Engineering Effectiveness Newsletter (Q1 2026 Edition)

Welcome to the Q1 edition of the Engineering Effectiveness Newsletter! The Engineering Effectiveness org makes it easy to develop, test and release Mozilla software at scale. See below for some highlights, then read on for more detailed info!

Highlights

  ‱ Suhaib integrated Review Helper with Phabricator and moz-phab, making AI-powered code review quick and simple.
  • Connor Sheehan implemented ETL from Lando to STMO, which allows us to get better visibility into lando’s performance and usage.
  • Firefox 150 will ship with new PDF editing features completed by Calixte, letting users delete, copy, move, and export pages to a new PDF.

Detailed Project Updates

AI for Development

  ‱ Suhaib Mujahid integrated Review Helper with Phabricator, enabling AI-powered code review directly from patches: clicking a “Request AI Review” button triggers it to analyze the patch and post comments with any findings.
  • Suhaib Mujahid extended moz-phab to support requesting an AI review at patch submission time, enabling contributors to trigger Review Helper analysis directly from the command line via moz-phab --ai.

Bugzilla

  ‱ Marco trained a new model in bugbug to detect bugs that are accessibility-related and missing the “access” keyword, to bring them to the attention of the accessibility team.
  • Two fixes from dkl to improve the reliability of the background bot that syncs Phabricator revisions with Bugzilla bugs.
  ‱ Kohei updated the markdown comment editor so that it now intelligently handles pasting URLs: when you paste a URL while text is selected, it is automatically formatted as a markdown link with the selected text as the link text.
  ‱ Kohei has also made significant improvements to the Guided Bug Entry page for new Bugzilla bugs, which should be going live soon.

Build System and Mach Environment

  ‱ Better scheduling of Rust dependencies through Bug 2011880 saves ~1 minute of build time for an opt build with a hot cache.
  ‱ Warning flags can no longer be added directly to CFLAGS or CXXFLAGS in moz.build; they have to go in COMPILE_FLAGS["WARNINGS_CFLAGS"] (resp. COMPILE_FLAGS["WARNINGS_CXXFLAGS"]) (see Bug 1986258).

Firefox-CI, Taskcluster and Treeherder

  • Matt Boris upgraded FxCI to use RabbitMQ quorum queues and upgraded pulse to the latest available version for performance, security, and reliability.
  • Abhishek Madan migrated schema validation from Voluptuous to msgspec across taskgraph, mozilla-taskgraph, and firefox, resulting in a 30% improvement to decision task times.
  • Abhishek Madan moved Firefox from a vendored copy of taskgraph to PyPI installs at setup time, enabling support for packages that include compiled components.
  ‱ Andrew Halberstadt made lots of progress migrating CI to GitHub; it is currently being used by mozilla/enterprise-firefox.
  • Andrew Halberstadt wrote a patch implementing the ability for the Taskcluster Github service to trigger hooks listed in .taskcluster.yml files. This will pave the way to share cross-project workflows and simplify in-repo configuration.
  • Cameron Dawson upgraded major frontend libraries of Treeherder

Lint, Static Analysis and Code Coverage

  ‱ A new linter for header guards landed through Bug 2009182, triggered by mach lint --linter header-guards. It enforces our code style.
  ‱ A limited subset of clang-tidy’s static analysis is now run and enforced on our whole codebase. It is also reported during review on Phabricator (see Bug 2023518 and related bugs).
  • ESLint and Prettier have been updated to the latest versions.
  • eslint-env comments are being removed as ESLint v9 does not support them (use eslint-file-globals.config.mjs instead). ESLint v10 (currently in rc) will raise errors for them.
  ‱ More eslint-plugin-jsdoc rules have been enabled across the whole tree, namely the ones relating to valid-jsdoc. A few remain, but they will need work by teams to fix the failures.
  • The “Black” python formatter has now been replaced by “Ruff”.
  • Marco greatly simplified the code coverage infrastructure, getting rid of two Heroku services, a frontend service, and a lot of code. The code coverage official UI is now Searchfox.
  • Marco added a new mach command (“./mach coverage-report”) to generate a coverage report from a push. The command is documented on the code coverage page in the Firefox source docs.
  ‱ Teklia added support for GitHub pull requests to the Code Review Bot (prototype)

PDF.js

  • Calixte finished the implementation of the new reorganize and split functionality in PDF, which will ship in Firefox 150! Users will be able to delete, copy, move pages, and to export a subset of pages to a new PDF.
  ‱ NicolĂČ Ribaudo implemented the ability to open context menus on images in PDFs, allowing users to perform actions they are used to (such as downloading images). This was a long-standing feature request (11 years!).

Firefox Translations

Phabricator, moz-phab, and Lando

  ‱ Connor Sheehan implemented ETL from Lando to STMO, which allows us to get better visibility into Lando’s performance and usage, e.g., for the new uplift feature.
  • Zeid continues spear-heading the GitHub PR pilot, gathering feedback and fixing usability issues as they are reported. One key focus was on supporting triggering the Code Review Bot on request, via pushes to try.
  ‱ Olivier Mehani added backward-compatible support for try pushes in the new instance of Lando. It will become the default soon, but you can try it out now by setting LANDO_TRY_CONFIG=lando-prod-new in your environment prior to running `mach try`.
  ‱ Olivier Mehani landed a small change to Lando to make the current Tree Status visible on the main landing pages (Bug 2025629). This, along with the landing queue visible on the job details pages, should help give a better understanding of why jobs sometimes seem to take longer than expected to land.
  • moz-phab had several new releases:

Release Engineering and Release Management

  • Ben Hearsum added new tests to verify update integrity on mozilla-central.
  ‱ Julien Cristau updated the docker images for many build and related tasks from Debian 12 to Debian 13.
  • Relman streamlined the release process by removing the Nightly soft code freeze and adjusting the Beta schedule to reduce end-of-cycle friction, create more effective stabilization time, and simplify release candidate workflows.
  • We now ship to the Xiaomi Store.
  • Delivered mid-cycle ESR dot releases to address critical security fixes ahead of the standard cadence, improving responsiveness while coordinating across multiple ESR versions and release channels.
  • Andrew Halberstadt helped support and build out the Firefox Enterprise release pipeline.

Release Operations

  • Mark Cornmesser improved Windows hardware management, including self-configuration and self-deployment capabilities, automated BIOS management, and standardization of BIOS settings across performance testing environments to ensure consistency and reliability.

Other

  ‱ Thanks to Bug #2013401, mozilla::Maybe<scalar_type> generates better and denser code, which led to a reduction of 300kB in libxul.so

  ‱ Thanks to a new clang-tidy pass, we’ve been able to automatically add std::move in locations where it could improve performance (see Bug 2012658)

Thanks for reading and see you next quarter!

1 post - 1 participant

Read full topic

The Rust Programming Language Blog

docs.rs: building fewer targets by default

Building fewer targets by default

On 2026-05-01, docs.rs will make a breaking change to its build behavior.

Today, if a crate does not define a targets list in its docs.rs metadata, docs.rs builds documentation for a default list of five targets.

Starting on 2026-05-01, docs.rs will instead build documentation for only the default target unless additional targets are requested explicitly.

This is the next step in a change we first introduced in 2020, when docs.rs added support for opting into fewer build targets. Most crates do not compile different code for different targets, so building fewer targets by default is a better fit for most releases. It also reduces build times and saves resources on docs.rs.

This change only affects:

  1. new releases
  2. rebuilds of old releases

How is the default target chosen?

If you do not set default-target, docs.rs uses the target of its build servers: x86_64-unknown-linux-gnu.

You can override that by setting default-target in your docs.rs metadata:

[package.metadata.docs.rs]
default-target = "x86_64-apple-darwin"

How do I build documentation for additional targets?

If your crate needs documentation to be built for more than the default target, define the full list explicitly in your Cargo.toml:

[package.metadata.docs.rs]
targets = [
    "x86_64-unknown-linux-gnu",
    "x86_64-apple-darwin",
    "x86_64-pc-windows-msvc",
    "i686-unknown-linux-gnu",
    "i686-pc-windows-msvc"
]

When targets is set, docs.rs will build documentation for exactly those targets.

docs.rs still supports any target available in the Rust toolchain. Only the default behavior is changing.

The Rust Programming Language Blog

Changes to WebAssembly targets and handling undefined symbols

Rust's WebAssembly targets are soon going to experience a change which has a risk of breaking existing projects, and this post is intended to notify users of this upcoming change, explain what it is, and describe how to handle it. Specifically, all WebAssembly targets in Rust have been linked using the --allow-undefined flag to wasm-ld, and this flag is being removed.

What is --allow-undefined?

WebAssembly binaries in Rust today are all created by linking with wasm-ld. This serves a similar purpose to ld, lld, and mold, for example; it takes separately compiled crates/object files and creates one final binary. Since the first introduction of WebAssembly targets in Rust, the --allow-undefined flag has been passed to wasm-ld. This flag is documented as:

  --allow-undefined       Allow undefined symbols in linked binary. This options
                          is equivalent to --import-undefined and
                          --unresolved-symbols=ignore-all

The term "undefined" here specifically means with respect to symbol resolution in wasm-ld itself. Symbols used by wasm-ld correspond relatively closely to what native platforms use, for example all Rust functions have a symbol associated with them. Symbols can be referred to in Rust through extern "C" blocks, for example:

unsafe extern "C" {
    fn mylibrary_init();
}

fn init() {
    unsafe {
        mylibrary_init();
    }
}

The symbol mylibrary_init is an undefined symbol. This is typically defined by a separate component of a program, such as an externally compiled C library, which will provide a definition for this symbol. By passing --allow-undefined to wasm-ld, however, it means that the above would generate a WebAssembly module like so:

(module
    (import "env" "mylibrary_init" (func $mylibrary_init))
    ;; ...
)

This means that the undefined symbol was ignored and ended up as an imported symbol in the final WebAssembly module that is produced.

The precise history here is somewhat lost to time, but the current understanding is that --allow-undefined was effectively required in the very early days of introducing wasm-ld to the Rust toolchain. This historical workaround stuck around till today and hasn't changed.

What's wrong with --allow-undefined?

By passing --allow-undefined on all WebAssembly targets, rustc is introducing diverging behavior between other platforms and WebAssembly. The main risk of --allow-undefined is that misconfiguration or mistakes in building can result in broken WebAssembly modules being produced, as opposed to compilation errors. This means that the proverbial can is kicked down the road and lengthens the distance from where the problem is discovered to where it was introduced. Some example problematic situations are:

  • If mylibrary_init was typo'd as mylibraryinit then the final binary would import the mylibraryinit symbol instead of calling the linked mylibrary_init C symbol.

  • If mylibrary was mistakenly not compiled and linked into a final application then the mylibrary_init symbol would end up imported rather than producing a linker error saying it's undefined.

  ‱ If external tooling is used to process a WebAssembly module, such as wasm-bindgen or wasm-tools component new, these tools don't know what to do with "env" imports by default and are likely to produce an error message of some form that isn't clearly connected back to the original source code and where the symbol was imported from.

  ‱ For web users: if you've ever seen an error along the lines of Uncaught TypeError: Failed to resolve module specifier "env". Relative references must start with either "/", "./", or "../"., this can mean that "env" leaked into the final module unexpectedly, and the true error is the undefined symbol, not the lack of "env" items provided.

All native platforms consider undefined symbols to be an error by default, and thus by passing --allow-undefined rustc is introducing surprising behavior on WebAssembly targets. The goal of the change is to remove this surprise and behave more like native platforms.

What is going to break, and how to fix?

In theory, not a whole lot is expected to break from this change. If the final WebAssembly binary imports unexpected symbols, then it's likely that the binary won't be runnable in the desired embedding, as that embedding probably doesn't provide a definition for the symbol. For example, if you compile an application for wasm32-wasip1 and the final binary imports mylibrary_init, then it'll fail to run in most runtimes because it's considered an unresolved import. This means that most of the time this change won't break users, but will instead provide better diagnostics.

The reason for this post, however, is that it's possible users could be intentionally relying on this behavior. For example your application might have:

unsafe extern "C" {
    fn js_log(n: u32);
}
// ...

And then perhaps some JS code that looks like:

let instance = await WebAssembly.instantiate(module, {
    env: {
        js_log: n => console.log(n),
    }
});

Effectively it's possible for users to explicitly rely on the behavior of --allow-undefined generating an import in the final WebAssembly binary.

If users encounter this then the code can be fixed through a #[link] attribute which explicitly specifies the wasm_import_module name:

#[link(wasm_import_module = "env")]
unsafe extern "C" {
    fn js_log(n: u32);
}
// ...

This will have the same behavior as before and will no longer be considered an undefined symbol to wasm-ld, and it'll work both before and after this change.

Affected users can also compile with -Clink-arg=--allow-undefined to quickly restore the old behavior.
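For project-wide use, that linker flag could also be set once in Cargo configuration rather than on every invocation. A minimal sketch (assuming the wasm32-unknown-unknown target; adjust the triple for your wasm target):

```toml
# .cargo/config.toml -- a hypothetical sketch, not an endorsed setup:
# this restores the pre-change linker behavior for one WebAssembly target.
[target.wasm32-unknown-unknown]
rustflags = ["-C", "link-arg=--allow-undefined"]
```

Note that this deliberately keeps the old, error-prone behavior; the #[link(wasm_import_module = "...")] attribute remains the more robust fix, since it preserves undefined-symbol errors for genuine mistakes.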

When is this change being made?

Removing --allow-undefined on wasm targets is being done in rust-lang/rust#149868. That change is slated to land in nightly soon, and will then be released with Rust 1.96 on 2026-05-28. If you see any issues as a result of this change, please don't hesitate to file an issue on rust-lang/rust.

Mozilla Localization (L10N)

Enhancing Comment Management in Pontoon

We’re excited to highlight the work of Serah Nderi, a volunteer contributor to Pontoon who has quickly made a meaningful impact on the project. Since getting involved earlier this year, Serah has contributed a steady stream of improvements — including 10 patches in just the past two months — ranging from good-first issues to fully fledged features.

Serah joined the Mozilla community as an Outreachy intern on the SpiderMonkey team, where she demonstrated both strong technical skills and a passion for languages. That combination naturally led her to Pontoon, where she has been contributing not only as a developer but also as a localizer, exploring translations for languages like Kiswahili and Kikuyu.

Her latest contribution introduces long-awaited functionality for editing and deleting comments in Pontoon, improving collaboration and moderation workflows for translators and project managers alike.

You can follow Serah’s work on GitHub and connect with her on LinkedIn.

Last year, I earned a B1 certification in German and TOPIK I certification in Korean. This year, I decided to explore something at the intersection of technology and languages, which led me to start contributing to Pontoon.

Pontoon is Mozilla’s web-based localization platform, used by thousands of contributors to translate Firefox and other Mozilla projects into hundreds of languages.

I began by adding Kiswahili translations and exploring localization for my mother tongue, Kikuyu. While Kikuyu doesn’t yet have a project manager and presents unique challenges, it made the experience even more interesting. After working on a few good-first issues, I decided to take on a larger challenge: implementing a full feature—the ability for users to edit and delete comments.

Previously, users could only add comments. If a comment contained a typo or needed clarification, the only option was to add another comment. This often led to cluttered discussions and made collaboration less efficient. I set out to improve this experience.

Under the hood

The frontend implementation had a natural starting point. Pontoon comments already included actions like pinning, so adding Edit and Delete followed a similar interaction pattern.

One of the main challenges was handling comment content. Comments in Pontoon are stored as serialized HTML paragraphs with support for @mentions. To enable editing, I needed to deserialize this stored content back into the editor so that users would see a fully functional input field pre-populated with their original comment—including mentions. When saving, the content is serialized again before being stored.
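As a rough illustration of that round-trip, here is a hypothetical Python sketch (not Pontoon's actual code; the mention markup and the /contributors/ link shape are invented for illustration):

```python
import re

def serialize(text, usernames):
    """Store a comment: wrap each line in <p> and link known @mentions."""
    def link(match):
        name = match.group(1)
        if name in usernames:
            return f'<a href="/contributors/{name}/">@{name}</a>'
        return match.group(0)  # unknown names stay plain text
    lines = (re.sub(r"@(\w+)", link, line) for line in text.split("\n"))
    return "".join(f"<p>{line}</p>" for line in lines)

def deserialize(html):
    """Load a stored comment back into the editor: invert serialize()."""
    text = re.sub(r'<a href="/contributors/(\w+)/">@\1</a>', r"@\1", html)
    return "\n".join(re.findall(r"<p>(.*?)</p>", text, re.S))
```

Editing then becomes: deserialize the stored HTML into the input field, let the user change it, and serialize again on save, so mentions survive the trip intact.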

In addition to the UI changes, I implemented the backend views for editing and deleting comments, along with the necessary tests. The final result allows users to edit and delete their own comments, while project managers can delete any comment for moderation purposes.

This feature makes discussions in Pontoon more flexible, reduces noise from duplicate comments, and improves the overall collaboration experience for localization teams.

Firefox Tooling Announcements

MozPhab 2.11.1 Released

Bugs resolved in Moz-Phab 2.11.1:

  • bug 2028700 Only request AI review for updates if the --ai flag is passed

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

Firefox Tooling Announcements

MozPhab 2.11.0 Released

Bugs resolved in Moz-Phab 2.11.0:

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

Mozilla Localization (L10N)

Localizer Spotlight: ClĂĄudio

About you

My name is Cláudio Esperança, I’m from Portugal. I speak Portuguese and English. I have been contributing to Mozilla localization projects for more than 18 years.

Mozilla localization

Q: How did you first get involved in localization, and what drew you to Mozilla?

A: Curiosity has always driven me to understand how things work. Discovering open-source software, specifically Firefox and Linux, opened a world of limitless possibilities. I saw software translation not only as a way to improve my English but also as a great opportunity to start collaborating and contributing to the Mozilla mission. I began by following the community email list, contributing translations, and attending events. Before I knew it, I was leading the Portuguese translation team.

Q: You contribute across many projects in Pontoon. Is there a product that stands out to you? Have you shared with family and friends what you have been doing and promoting the products?

A: Firefox is always my favorite and the browser I use most regularly, as I trust it with my personal data. However, I contribute to all projects to provide users with more people-focused, secure, and private options, in a market often dominated by other vested interests.

I don’t actively promote my work, as I prefer when people discover Mozilla products because they are the best solution for their needs. It may seem counterintuitive, but I actually love seeing someone use Firefox, or another Mozilla product, not because they feel pressured by something I said, but because they’ve discovered it’s the best solution for them. It is very gratifying to know that the strings I translate are used by thousands of people every day, including family, friends, coworkers, and many other people whom I will probably never know.

Q: What have been some of the most rewarding or impactful projects you’ve localized?

A: Firefox is undoubtedly the most impactful due to its fundamental role on the web. I also found Firefox OS particularly interesting: the concept was great, and it had great potential, but unfortunately it didn’t go as far as I would have liked. I still hope to see it reborn in some form one day.

Q: What advice would you give to someone considering contributing to Mozilla localization today?

A: One of the best things about L10n at Mozilla is how accessible localization has become. You don’t need to be a developer to make a difference. Whether you start with a smaller project to build up confidence, dive straight into a high-impact application, focus on a tool you love, or explore something entirely new, the choice is yours. The most important step is simply to begin. And there’s no such thing as a ‘small’ contribution — every translated word helps to build a more inclusive internet for everyone.

Community & leadership

ClĂĄudio and Kit, celebrating 18+ years of Mozilla localization.

Q: How does the Portuguese localization community collaborate today?

A: The Portuguese community is small, and we don’t have many members with recurring contributions. One of the reasons they give for this disengagement is that they feel their help isn’t needed because our translation completion rate is high (which isn’t true at all). There are other reasons, like lack of time (the main reason), and the fact that a large portion of the user base is pretty comfortable using software in English, Brazilian Portuguese, or Spanish.

Regarding community communication, while we previously used various discussion groups, we now primarily communicate via email and direct contact, with most of the work happening directly on Pontoon.

Q: You’ve been leading the team for many years. How do you approach mentorship and conflict resolution?

A: When I started, I didn’t have a mentor, so I had to rely on Mozilla’s resources and some reverse engineering. Today, platforms like Pontoon and SUMO make the process much easier for volunteers. Regarding conflicts, like all communities, we sometimes face significant challenges regarding personality and linguistic differences. Overall, we try to maintain a positive, constructive, and inclusive attitude, where all well-founded contributions are welcome. We use a democratic process for most decisions, with a “benevolent dictator” model as a final fallback if consensus cannot be reached.

Professional background & skills

Q: What is your professional background, and how has it influenced your localization work?

A: I have a background in software engineering (Master’s in Mobile Computing, Bachelor’s in Information Systems, technical training in TCP/IP networks, Linux, and other technologies). This experience helps me handle technical aspects of software translation like placeholder syntax, HTML tags, and technical terminology, though modern tools like Pontoon have made localization much more accessible to everyone.

Q: How has localization influenced your professional work?

A: Localization provides a unique perspective on applications by allowing a deeper understanding of how they work. We get to learn about the various options available in the software, sometimes hidden in the more obscure areas of the application. Unlike more traditional applications that rely on older technologies, applications developed within the Mozilla ecosystem are at the forefront of web innovation, allowing early exposure to the future of the Internet. As a software engineer, I incorporate these insights into my own projects to create more modern and user-friendly solutions.

Q: After 18+ years, what keeps you motivated to continue contributing?

A: Our mission remains unfinished. We have a responsibility to ensure the internet remains a global public resource that doesn’t require English as a barrier to entry. In an era where AI and massive platforms are consolidating power, the need for diverse alternatives has never been more urgent. Localizing Mozilla products into my native language is my way of practicing digital activism. It’s incredibly rewarding to know that a handful of translated sentences can improve the lives of so many people instantly. The mission continues


Interesting facts

Q: Tell us something unexpected about yourself.

A: How someone born on an island in the Azores, who lived in half a dozen different cities in a country as small as Portugal, and who has worked as a farmer, shepherd, beekeeper, construction worker, electrician, trainer, programmer, and software engineer ended up translating world-class open-source software is a difficult story to explain. Ultimately, I think it all comes back to curiosity


Firefox Tooling Announcements

MozPhab 2.10.0 Released

Bugs resolved in Moz-Phab 2.10.0:

  • bug 2024404 Add --ai flag to moz-phab to trigger Review Helper automatically
  • bug 2028164 moz-phab test failure: TypeError: Object of type AiReviewState is not JSON serializable

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

1 post - 1 participant

Read full topic

Thunderbird Blog

Thunderbird Monthly Development Digest: March 2026

Welcome back from the Thunderbird development team!

Reflecting back, the first quarter of the year has been a mix of deep technical focus and forward-looking planning. Much of the team’s energy has gone into tackling some of the more complex, “gnarly” parts of our projects to land key milestones. In parallel, we’ve been laying the groundwork for what’s next, from ongoing hiring efforts to aligning our goals with broader company initiatives that support the roadmap ahead.

Security & Hardening

We’ve continued to make good progress on improving Thunderbird’s security and privacy model, not just at a technical level, but in ways that are more usable and transparent for everyday users.

Unobtrusive Signatures

Kai recently presented his work at the IETF on Unobtrusive Signatures, which aims to make email signatures more reliable and less intrusive. The goal is to ensure message authenticity can be verified automatically and consistently, without requiring constant user attention or confusing workflows.

Improving Key Safety and Revocation

We’re also exploring better ways to handle key revocation. Today, users often have no reliable way to know when a key should no longer be trusted. A proposed revocation service aims to improve how this information is distributed, while avoiding overly centralized or privacy-invasive approaches.

Moving Beyond “Encrypted or Not”

A major shift underway is how we present trust in encrypted email.

Instead of treating encryption as a simple on/off state, we’re moving toward a graduated confidence model. Thunderbird will evaluate the strength of each recipient’s key (whether it’s manually verified, CA-backed, or unverified) and present an overall confidence level to the user.

This allows encryption to work more automatically, while still giving users clear insight into how much trust they can place in a given message. Kai has worked with the design team and internal subject matter experts to refine the UX in this area and is getting close to a final UI. 

Ongoing Security Fixes and Improvements

Alongside these larger initiatives, Kai, Magnus, and Justin have been actively triaging and addressing security issues and long-standing feature gaps. Recent work includes:

  • Enabling search within encrypted messages
  • Fixing issues with incorrect IMAP literal size handling
  • Addressing a link spoofing vulnerability (CVE-2025-13015)

Together, these efforts reflect a broader direction: making strong security more accessible, while ensuring users remain informed and in control.

Exchange Email Support

Since our last update in February, the team has been moving quickly and has now completed Phase 1 and Phase 2 of the Graph API implementation for email, with Phase 3 already underway.

These phases focused on establishing a solid foundation and delivering core functionality required for real-world usage. Highlights include:

  • Graph API login with OAuth
  • Connectivity checks and account validation
  • Autodiscover support for Graph endpoints
  • Folder synchronization (fetching and populating folder hierarchy)
  • Sending messages (including support for different recipient types)
  • Support for POST requests and improved request handling
  • Delta query support for efficient syncing
  • Support for pageable results (x-ms-pageable)
  • Test infrastructure for Graph (xpcshell and mochitests)
  • Continued backend refactoring and interoperability work (C++/Rust integration, shared protocol components)

With these milestones in place, Phase 3 is now underway, focusing on deeper message handling (such as fetching message headers) and continued feature expansion.

Keep track of our Graph API implementation here. 

Add-ons, Extensions and Experiments

While onboarding a new junior team member, John has also made a strong impact on the add-ons ecosystem, reaching an important milestone in the effort to move away from legacy, insecure experiments.

A key piece of this work is the VFS Toolkit, which leverages the Origin Private File System and introduces a more secure and maintainable way for WebExtensions to interact with the file system. As part of this, John developed a provider that allows extensions to access a user’s local home folder through a controlled interface.

Under the hood, this works by combining WebExtensions with a small native helper application. The extension communicates with this helper via native messaging, allowing safe, permissioned access to local files, something that modern WebExtensions cannot do directly.

The current focus is to enhance the Calendar API ahead of the next ESR release with some of this work tracked here.

Linux System Tray – Contributor Spotlight

We’d like to give a special shoutout this month to Christophe Henry, who has gone above and beyond with an ambitious contribution to improve Thunderbird’s system tray integration on Linux.

This work isn’t a small patch and spans multiple parts of the codebase, including JavaScript, C++, and Rust, and even bridges into XPCOM interfaces. The goal is to unify how unread mail indicators and tray icons behave across platforms, which is a surprisingly complex problem once you account for the differences between Linux environments, Windows, and macOS.

What really stood out was the level of persistence behind this contribution. Over multiple iterations, Christophe worked through build failures, lint issues, platform quirks, and detailed review feedback, all while tackling tricky problems like image encoding, system tray APIs, and cross-language integration.

This kind of work is rarely straightforward, and often requires deep dives into unfamiliar parts of the stack. Seeing it pushed forward with this level of care and determination is exactly what makes open source collaboration so powerful.

Thank you for the dedication and effort! It truly makes a difference.

Calendar UI Rebuild – Front End Team shoutout

A huge shoutout to the Front End team, who recently met in person in London for a work week and absolutely delivered.

Getting the chance to collaborate face-to-face made a real difference. The team came together to align on priorities, cut through complexity, and focus on what mattered most – and the results speak for themselves. They successfully pushed through the Event Read and Enhancements milestones at an impressive pace, clearing the path to shift full attention onto the First Time User Experience (FTUE) work.

It’s not easy to balance quality, speed, and coordination across a distributed team, but this was a great example of what happens when everything clicks. Thoughtful planning, strong collaboration, and excellent execution all came together to move things forward in a big way.

Stay tuned to our milestones here:

First Time User Experience (FTUE)

Following that strong push on Calendar, the front end team turned their focus to the First Time User Experience and made remarkable progress in a very short time.

In just a few weeks, the majority of the FTUE work has been completed, with only a handful of smaller items remaining in review. This included not only delivering the core experience, but also laying the groundwork for future improvements (such as early components of the “Sign in with Thundermail” flow, already available behind a preference).

Pulling together a milestone of this size on such a tight timeline is no small feat. It reflects both the clarity of planning coming out of the work week, and the team’s ability to execute quickly without losing sight of the bigger picture.

Maintenance, Upstream adaptations, Recent Features and Fixes

Over the past couple of months, the team has continued to navigate changes from upstream dependencies that occasionally impact build stability, test reliability, and CI. While this is a normal part of working in a large, shared ecosystem, it does require ongoing attention, particularly when tracking down the root cause of regressions and ensuring Thunderbird-specific changes remain on solid ground. Some days it feels like a full-time job!

Alongside this, we’ve seen strong support from both the team and the wider contributor community, with a steady stream of fixes and improvements landing across the codebase.

This collective effort has resulted in a number of impactful patches landing recently, with the following being particularly helpful:

If you would like to see new features as they land, and help us find some early bugs, you can try running the Daily build and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

—

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: March 2026 appeared first on The Thunderbird Blog.

The Servo Blog

February in Servo: faster layout, pause and resume scripts, and more!

Servo 0.0.6 includes some exciting new features:

Plus a bunch of new DOM APIs:

This is a big update, so here’s an outline:

Work in progress

We’ve started working on accessibility support for web content (@alice, @delan, #42333, #42402), gated by a pref (--pref accessibility_enabled). Each webview will be able to expose its own accessibility tree, which the embedder can then integrate into its own accessibility tree. As part of this work:

We’ve started implementing document.execCommand() (@TimvdLippe, #42621, #42626, #42750), gated by a pref (--pref dom_exec_command_enabled). This feature is also enabled in experimental mode, and together with contenteditable, it’s critical for rich text editing on the web. The work done in February includes:

Developer tools

DevTools has seen some big improvements in February!

When enabled in servoshell, the DevTools server is more secure by default, listening only on localhost when only a port number is specified (@Narfinger, #42502). You can open the port for remote debugging by passing a full SocketAddr, such as --devtools=[::]:6080 or --devtools=0.0.0.0:6080.
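The distinction between a bare port and a full socket address could be sketched as follows (a hypothetical Python illustration of the behaviour described above, not Servo's implementation):

```python
import ipaddress


def parse_listen_address(value: str) -> tuple[str, int]:
    """Interpret a --devtools value: a bare port number means
    listen on localhost only, while a full address:port such as
    "0.0.0.0:6080" or "[::]:6080" opens the port for remote access."""
    if value.isdigit():
        return ("127.0.0.1", int(value))
    # Split on the last colon so IPv6 addresses like [::]:6080 work.
    host, _, port = value.rpartition(":")
    host = host.strip("[]")
    ipaddress.ip_address(host)  # validate; raises ValueError if malformed
    return (host, int(port))
```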

In the Inspector tab, you can now edit DOM attributes, and the DOM tree updates when attributes change (@simonwuelker, #42601, #42785). You can now list the event type and phase of event listeners attached to a DOM node as well (@simonwuelker, #42355).

In the Console tab, objects can now be previewed when passed to console.log() and friends (@simonwuelker, #42296, #42510, #42752), and boolean values are now syntax highlighted (@pralkarz, #42513).

In the Debugger tab, you can now pause and resume script execution, both manually and when breakpoints are hit (@eerii, @atbrakhi, #42599, #42580, #42874). We’ve also started working on other debugger features (@atbrakhi, @eerii, #42306), including stepping execution (@eerii, @atbrakhi, #42844, #42878, #42906), so once again stay tuned!

servoshell

Back in August, we added a servo:preferences page to servoshell that allows you to set some of Servo’s most common preferences at runtime (@jdm, #38159).

servoshell now has a servo:config page (@arihant2math, #40324), allowing you to set any preference, even internal ones. Note that preference changes are not yet persistent, and not all prefs take effect when changed at runtime.

You can now press F5 to reload the page in servoshell (@Narfinger, #42538), in addition to pressing Ctrl+R or ⌘R.

We’ve fixed a regression where the caret stopped being visible in the location bar (@mrobinson, #42470).

Embedding API

Servo is now easier to build offline, using the complete source tarball included in each release (@jschwe, #42852). Go to a release on GitHub, then download servo-[version]-src-vendored.tar.gz to get started.

You can now add and remove user stylesheets with User­Content­Manager::add­_stylesheet and remove­_stylesheet, and remove user scripts with User­Content­Manager::remove­_script (@mukilan, #42288). Previously user stylesheets were only configurable via servoshell’s --user-stylesheet option.

Before opening any context menus on behalf of web content, Servo now closes any context menus that were opened by web content (@mrobinson, #42487), to avoid UI problems on some platforms. This is done by calling WebView­Delegate::hide­_embedder­_control before calling show­_embedder­_control in those cases.

Input method events from web content now indicate whether or not the virtual keyboard should be shown (@stevennovaryo, @mrobinson, #42467), with the new Input­Method­Control::allow­_virtual­_keyboard method. Generally the virtual keyboard should only be shown when the page has sticky activation.

We’re reworking our gamepad API, with WebView­Delegate::play­_gamepad­_haptic­_effect and stop­_gamepad­_haptic­_effect being replaced by a new API that (as of the end of February at least) is known as GamepadProvider (@atbrakhi, #41568). The old methods are no longer called (#43743), and may be removed at some point.

We now have better diagnostic output when we fail to create an OpenGL context (@mrobinson, #42873), including when the OpenGL versions supported by the device are too old.

Servo::constellation_sender was removed (@jdm, #42389), since it was never useful to embedders.

We’ve also made some changes to Preferences:

  • devtools­_server­_port is now devtools­_server­_listen­_address, and can now take either a port number (as before) or a full SocketAddr (@Narfinger, #42502)

  • dom­_worklet­_blockingsleep is now dom­_worklet­_blockingsleep­_enabled (@mukilan, #42897)

  • Removed many unused preferences (@mukilan, #42897) – js­_asyncstack, js­_discard­_system­_source, js­_dump­_stack­_on­_debuggee­_would­_run, js­_ion­_offthread­_compilation­_enabled, js­_mem­_gc­_allocation­_threshold­_avoid­_interrupt­_factor, js­_mem­_gc­_allocation­_threshold­_factor, js­_mem­_gc­_allocation­_threshold­_mb, js­_mem­_gc­_decommit­_threshold­_mb, js­_mem­_gc­_dynamic­_heap­_growth­_enabled, js­_mem­_gc­_dynamic­_mark­_slice­_enabled, js­_shared­_memory, js­_throw­_on­_asmjs­_validation­_failure, js­_throw­_on­_debuggee­_would­_run, js­_werror­_enabled, and network­_mime­_sniff

More on the web platform

If you navigate to a video file or audio file as a document, the player now has controls (@webbeef, #42488).

Images now rotate according to their EXIF metadata by default (@rayguo17, #42567), like they would once we add support for ‘image-orientation: from-image’.

We’re implementing system-font-aware font fallback (@mrobinson, #42466), with support for this on macOS landing this month (@mrobinson, #42776). This allows Servo to render text in scripts that are not covered by web fonts or any of the fonts on Servo’s built-in lists of fallback fonts, as long as they are covered by fonts installed on the system.

Servo now supports the newer pointermove, pointerdown, pointerup, and pointercancel events (@webbeef, #41290). The older touchmove, touchstart, touchend, and touchcancel events continue to be supported.

The default language in ‘Accept-Language’ and navigator.language is now taken from the $LANG environment variable if present (@webbeef, #41919), rather than always being set to en-US.
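The mapping from a POSIX $LANG value such as fr_FR.UTF-8 to a language tag like fr-FR can be sketched as below (a hypothetical helper illustrating the conversion, not Servo's code):

```python
def lang_to_accept_language(lang_env: str, default: str = "en-US") -> str:
    """Derive a BCP 47-style language tag from a POSIX locale string.
    "C" and "POSIX" are not real languages, so fall back to the default."""
    if not lang_env or lang_env in ("C", "POSIX"):
        return default
    # Strip the encoding (".UTF-8") and modifier ("@euro"), then
    # replace the POSIX underscore with the BCP 47 hyphen.
    tag = lang_env.split(".")[0].split("@")[0]
    return tag.replace("_", "-") or default
```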

<input type=color> now supports any CSS color value (@simonwuelker, #42275), including the more complex values like color-mix(). We’ve also landed the colorspace attribute (@simonwuelker, #42279), but only in the web-facing side of Servo for now, not the embedding API or in servoshell.

‘vertical-align’ is now a shorthand for ‘alignment-baseline’ and ‘baseline-shift’ (@Loirooriol, #42361), and scrollParent on HTMLElement is now a function per this recent spec update (@TimurBora, #42689).

Cookies are now more conformant (@sebsebmc, #42418, #42427, #42435). ‘Expires’ and ‘Max-Age’ attributes are now handled correctly in ‘Set-Cookie’ headers, get() and getAll() on CookieStore now trim whitespace in cookie names and values, and the behaviour of set() on CookieStore has been improved.

<iframe> elements are now more conformant in how load events are fired on the element and its contentWindow (@TimvdLippe, #42254), although there are still some bugs. This has long behaved incorrectly in Servo, and it has historically caused many problems in the Web Platform Tests.

IndexedDB is now more conformant in our handling of transactions (@Taym95, #41508, #42732), and when opening and closing connections (@gterzian, @Taym95, #42082, #42669).

We’ve started implementing Largest Contentful Paint timings (@shubhamg13, #42024), and we’ve landed a bunch of improvements to how First Contentful Paint timings work in Servo:

new WebSocket() now resolves relative URLs (@webbeef, #42425).

requestFullscreen() on Element now requires user activation (@stevennovaryo, #42060).

performance.getEntries() now returns PerformanceResourceTiming entries for navigations in <iframe> (@muse254, #42270).

When geolocation is enabled (--pref dom_geolocation_enabled), navigator­.geolocation­.get­Current­Position() and watch­Position() now support the optional errors argument (@arihant2math, #42295).

We now support the ‘-webkit-text-security’ property in CSS (@mrobinson, #42181), which is not specified anywhere but required for MotionMark.

Performance and stability

Our about:memory page now knows how to report many new kinds of memory usage, including the DevTools server (@Narfinger, #42478, #42480), WebGL (@sagudev, #42570), localStorage and sessionStorage (@arihant2math, #42484), and some of the memory used by IndexedDB (@arihant2math, #42486). We’ve also started internally tracking the memory usage of the media subsystem (@Narfinger, #42504) and WebXR (@Narfinger, #42505).

Layout has seen a lot of performance work in February, with our main focus being on improving incremental layout of the box tree and fragment tree.

We now have our first truly incremental box tree layout (@mrobinson, @Loirooriol, @lukewarlow, #42700), rather than our previous “dirty roots”-based approach. Depending on how they were damaged, some boxes for floats (as above, #42816), independent formatting contexts (as above, #42783), and their descendants (as above, #42582) can now be reused, and they avoid damaging their parents (as above, #42847). We also destroy boxes with ‘display: none’ earlier in the layout process (as above, #42584).

Incremental fragment tree layout is improving too! Whereas we previously had to decide whether to run fragment tree layout in an “all or nothing” way, we can now reuse cached fragments in independent formatting contexts (@mrobinson, @Loirooriol, @lukewarlow, #42687, #42717, #42871). We can also measure how much work is being done on each layout (as above, #42817).

Servo uses shared memory for many situations where copying data over channels would be too expensive, such as for images and fonts. In multiprocess mode (--multiprocess), we use the operating system to create the shared memory in a way that can be shared with other processes, such as shm_open(3) or CreateFileMappingW, but this consumes resources that can sometimes be exhausted. We only need to use those kinds of shared memory in multiprocess mode, so we’ve reworked Servo to use Arc<Vec<u8>> in single-process mode (@Narfinger, #42083), which should avoid resource exhaustion.

Parsing web pages is complicated: we want pages to render incrementally as they stream in from the network, and we want to prefetch resources, but scripts can call document.write(), which injects markup “on the spot”. This is further complicated if that markup also contains a <script>.

We’ve recently landed some fixes to Servo’s async parser (@simonwuelker, #42882, #42910), which handles these issues more efficiently. This is currently an obscure and somewhat buggy feature (--pref dom­_servoparser­_async­_html­_tokenizer­_enabled), but if we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don’t use document.write(), and even improve the html5ever API for the ecosystem.

We’ve also landed optimisations for ‘Content-Security-Policy’ (@Narfinger, #42716), IntersectionObserver (@Narfinger, @mrobinson, @stevennovaryo, #42366, #42390), layout queries (@webbeef, #42327), the bfcache (@Narfinger, #42703), loading images (@Narfinger, #42684), and checks for multiprocess mode (@Narfinger, #42782), as well as the interfaces between Servo and SpiderMonkey (@sagudev, #42135, #42576).

We’ve continued our long-running effort to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@Gae24, @pralkarz, @BryanSmith00, @sagudev, @Narfinger, @TimvdLippe, @kkoyung, @TimurBora, @onsah, #42342, #42294, #42370, #42417, #42619, #42616, #42637, #42640, #42662, #42679, #42681, #42665, #42667, #42699, #42712, #42725, #42729, #42726, #42720, #42738, #42737, #42735, #42751, #42805, #42809, #42780, #42820, #42715, #42635, #42880, #42846).

Bug fixes

We’ve landed some fixes for issues preventing Servo from being built on Windows arm64 (@dpaoliello, @npiesco, #42371, #42341). Work to enable Windows arm64 as a build platform is ongoing (@npiesco, #42312).

<img height> now takes the default <img width> from the aspect ratio of the image (@Loirooriol, #42577), rather than using a width of 300px by default. <svg width=0> and <svg height=0> now take the default width and height (respectively) from the aspect ratio of the <svg viewBox> (@Loirooriol, #42545).

We’ve fixed a bug in the result of layout queries, such as getBoundingClientRect(), on inline <svg> (@jdm, @Loirooriol, #42594), and we’ve fixed layout bugs related to ‘display: table-cell’ (@Loirooriol, #42778), ‘display: list-item’ (@Loirooriol, #42825, #42864), ‘inset: auto’ (@Loirooriol, #42586), ‘width: max-content’ (@mrobinson, @Loirooriol, @lukewarlow, #42574), ‘align-self: last baseline’ (@rayguo17, #42724), ‘list-style-image’ (@lukewarlow, #42332), ‘content: <image>’ (@lukewarlow, #42332), negative ‘margin’ (@Loirooriol, #42889), and ink overflow (@mrobinson, #42403).

HTML and CSS bugs:

  • Empty ‘url()’ values making requests when they shouldn’t (@rayguo17, #42622)
  • <template> failing to throw HierarchyRequestError when a DOM API is used to create an invalid hierarchy (@TimvdLippe, #42276)
  • <input> and <textarea> selection behaviour being incorrect when the text contains more than one script (@mrobinson, #42399)
  • <script nonce> validation failing to work correctly in some cases (@dyegoaurelio, #40956)
  • <a target> failing to work correctly after the related <iframe> is removed and a new one added with the same name (@jdm, #42344)
  • <base> not taking effect in some cases, or taking effect when given a data: or javascript: URL (@TimvdLippe, #42255, #42339)

JavaScript and DOM bugs:

  • event.target being incorrect on touchmove, touchend, and touchcancel events (@yezhizhen, #42654)
  • touchmove events not being fired when part of a two-finger pinch zoom (@yezhizhen, #42528)
  • touchend events erroneously firing after touchcancel events (@yezhizhen, #42654)
  • assignedNodes() on HTMLSlotElement returning incorrect results after the <slot> was removed from the shadow tree (@rayguo17, #42250)
  • Largest Contentful Paint timings no longer being collected after reloading or navigating (@shubhamg13, #41169)
  • PerformancePaintTiming being exposed to Worker globals when they shouldn’t be (@shubhamg13, #42409)
  • JavaScript modules resolved incorrectly when there are overlapping .imports or .scopes or import maps (@Gae24, #42668, #42630, #42754, #42821)
  • changes to how we trigger garbage collection breaking Speedometer (@sagudev, #42271)

WebDriver bugs:

We’ve fixed crashes in DevTools, in the Inspector tab (@eerii, @mrobinson, #42330), when exiting Servo while DevTools is connected (@simonwuelker, #42543), when setting breakpoints (@atbrakhi, #42810), and after clients disconnect (@simonwuelker, #42583).

We’ve fixed crashes in layout, when using ‘background-repeat: round’ (@mrobinson, #42303), when using ‘list-style-image’ or ‘content: <image>’ (@lukewarlow, #42332), when calling elementFromPoint() on Document (@mrobinson, @Loirooriol, @lukewarlow, #42822), and when handling layout queries like getBoundingClientRect() on inline <svg> (@jdm, @Loirooriol, #42594).

We’ve fixed crashes related to stylesheets, when removing stylesheets from the DOM (@TimvdLippe, #42273), when changing the href of a <link rel=stylesheet> (@TimvdLippe, #42481), and when loading stylesheets with --layout-threads=1 (@mrobinson, @Loirooriol, @lukewarlow, #42685).

We’ve also fixed crashes when using multitouch input (@yezhizhen, #42350), when using MediaStreamAudioSourceNode (@mrobinson, #42914), when calling add() on HTMLOptionsCollection (@mrobinson, #42263), when calling elementFromPoint() on Document or ShadowRoot, when we fail to open a database for IndexedDB (@jdm, @mrobinson, #42444), and when certain pages are run with a mozjs debug build (@Gae24, #42428).

Donations

Thanks again for your generous support! We are now receiving 6985 USD/month (−0.4% from January) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and maintainer work that helps more people contribute to Servo.

Servo is also on thanks.dev, and already 32 GitHub users (–1 from January) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. If you’re interested in this kind of sponsorship, please contact us at join@servo.org.

Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

Cameron Kaiser

So long, cheesegrater

9To5Mac is reporting that Apple has confirmed the Mac Pro is no longer for sale, and indeed, although it was up yesterday, today it's gone.

And are you surprised? After all, Macs have their own bespoke GPUs now, and RAM is on-die. (Glad I sprang for the 16GB option on my M1 Air — that has greatly lengthened its useful service life.) If Apple isn't shipping computers with DIMM slots anymore, then why would they ship PCIe slots for anything else? It wasn't like there were many options you could put in the last iteration anyway, because it too had a non-upgradeable GPU and fixed RAM. Okay, okay, you could stick a whole bunch of NVMe sticks in it and it had good cooling. Was that worth it?

This marks the end of the venerable tower Macs that we loved in the PowerPC days. The Mac Studio is the new Mac Pro. We were always at war with Eastasia.

Firefox Tooling Announcements

MozPhab 2.9.1 Released

Bugs resolved in Moz-Phab 2.9.1:

  • bug 2026194 moz-phab uplift should not set a reviewerless patch as WIP
  • bug 2026300 Remove redundant “Figuring out who you are” wait message

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.

The Mozilla Blog

Mozilla and Mila announce strategic research partnership to advance open source and sovereign AI capabilities

The future of AI should belong to all of humanity, well beyond a handful of countries or companies. For that to happen, AI needs to be open, trusted, and built in ways that give people, institutions, and nations real choices. That’s why, today, Mozilla is announcing a strategic partnership with Mila – Quebec Artificial Intelligence Institute to advance open source and sovereign AI capabilities.

This partnership marks a landmark strategic collaboration for both organizations and Mozilla’s first-ever partnership with a major AI research lab. It is designed to grow over time, with an inaugural project that focuses on the intersection of trust and usability, including private memory architectures for AI agents.

Mila brings world-class research depth and a proven track record moving ideas into systems — from fundamental breakthroughs to applied tools and the diffusion of technology. Mozilla brings deep open source experience, a vibrant developer community, and the ecosystem instincts needed to turn research into something that spreads. The partnership is designed to show that open source AI can close the gap between cutting-edge research and real-world impact. 

As we saw in the web era, having a robust open source software stack can democratize and accelerate innovation in dramatic ways. The same opportunity exists in AI — across compute, models, data, and developer experience — and much of the stack is already being built in the open. But gaps remain, particularly in the layers that determine whether AI is trustworthy, private, and built for a world with many languages, many cultures, and many legitimate ways of organizing society. If we can close those gaps, open source AI becomes a genuine option for the people and institutions that need it most.

“We are working to build a future where AI development is rooted in openness, privacy, and humanity,” said Mark Surman, president of Mozilla. “This partnership is a delivery vehicle for that vision — and for breakthroughs that will help governments, developers, and companies alike. Canada can lead on AI sovereignty; we’re joining with Mila to make it happen.”

“Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn’t just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn’t a compromise on ambition. It’s the higher bar,” said ValĂ©rie Pisano, president and CEO of Mila.

Together, Mila and Mozilla will develop the technologies and approaches that reduce dependence on closed systems and create more room for transparency, accountability, and shared innovation. The partnership also lays the groundwork for middle-power cooperation in AI: Open source projects have consistently provided the framework for technical collaboration across geographies and jurisdictions. Both organizations welcome research institutions, developers, and like-minded organizations to help fill the stack.

This is the first of what both organizations intend to be a sustained and growing body of work. 

Read more about our Open Source AI Strategy here. Learn more about Mila here.

The post Mozilla and Mila announce strategic research partnership to advance open source and sovereign AI capabilities appeared first on The Mozilla Blog.

The Rust Programming Language Blog

Announcing Rust 1.94.1

The Rust team has published a new point release of Rust, 1.94.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.94.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.94.1

Rust 1.94.1 resolves three regressions that were introduced in the 1.94.0 release.

And a security fix:

Contributors to 1.94.1

Many people came together to create Rust 1.94.1. We couldn't have done it without all of you. Thanks!

Hacks.Mozilla.Org

Firefox Developer Edition and Beta: Try out Mozilla’s .rpm package!

In January, we introduced our Nightly package for RPM-based Linux distributions. Today, we are thrilled to announce it is now available for Firefox Beta!

Firefox Beta is great for testing your sites in a version of Firefox that will reach regular users in the coming weeks. If you find any issues, please file them on Bugzilla.

Switching to Mozilla’s RPM repository allows Firefox Beta to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:

  • Better performance thanks to our advanced compiler-based optimizations,
  • Updates as fast as possible because the .rpm management is integrated into Firefox’s release process,
  • Hardened binaries with all security flags enabled during compilation,
  • No need to create your own .desktop file.

If you have Mozilla’s RPM repository already set up, you can simply install Firefox Beta with your package manager. Otherwise, follow the setup steps below.


If you are on Fedora (41+), or any other distribution using dnf5 as the package manager

 

sudo dnf config-manager addrepo --id=mozilla --set=baseurl=https://packages.mozilla.org/rpm/firefox --set=gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg --set=gpgcheck=1 --set=repo_gpgcheck=0
sudo dnf makecache --refresh
sudo dnf install firefox-beta

Note: repo_gpgcheck=0 disables GPG signature verification of the repository metadata. However, this is safeguarded instead by HTTPS and per-package signatures (gpgcheck=1).

If you are on openSUSE or any other distribution using zypper as the package manager

sudo rpm --import https://packages.mozilla.org/rpm/firefox/signing-key.gpg
sudo zypper ar --gpgcheck-allow-unsigned-repo https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-beta

For other RPM based distributions (RHEL, CentOS, Rocky Linux, older Fedora versions)

sudo tee /etc/yum.repos.d/mozilla.repo > /dev/null << EOF
[mozilla]
name=Mozilla Packages
baseurl=https://packages.mozilla.org/rpm/firefox
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg
EOF
# For dnf users
sudo dnf makecache --refresh
sudo dnf install firefox-beta
# For zypper users
sudo zypper refresh
sudo zypper install firefox-beta

The firefox-beta package will not conflict with your distribution’s Firefox package if you have it installed; you can have both at the same time!

Adding language packs

If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):

sudo dnf install firefox-beta-l10n-fr

You can list the available languages with the following command:

dnf search firefox-beta-l10n

Don’t hesitate to report any problem you encounter to help us make your experience better.

The post Firefox Developer Edition and Beta: Try out Mozilla’s .rpm package! appeared first on Mozilla Hacks - the Web developer blog.

Jonathan Almeida

Use Android Studio for resolving conflicts in Jujutsu

You can use JJ's built-in editor for conflict resolution, but I've found it difficult to follow. Co-workers recommended Meld, and that has worked quite well once I (begrudgingly) accepted that I needed to download another single-purpose app.

Today, another co-worker, Andrey Zinovyev, found out that we can use Android Studio's (really IntelliJ IDEA's) built-in merge tool to resolve the three-way merge. This is more convenient for me since I spend most of my time here already, so using it as a general-purpose merge editor for my work projects is quite nice.

[ui]
merge-editor = "studio"
[merge-tools.studio]
merge-args = ["merge", "$left", "$right", "$base", "$output"]
program = "/Users/jalmeida/Applications/Android Studio Nightly.app/Contents/MacOS/studio"

Presto!

The Mozilla Blog

A free VPN you can trust, now built into Firefox

Today we’re introducing a free built-in VPN in Firefox, a new IP-protection feature designed to keep you even more private while you browse. We’re starting by offering an industry-leading 50 gigabytes of free VPN browsing each month.

Firefox has long focused on building privacy tools directly into the browser to protect you online. Over the years, we’ve introduced world-class protections that block known trackers, reduce fingerprinting and limit how companies can follow people across the web. Our goal has been consistent: make meaningful privacy protections accessible to Firefox users every day.

Firefox is the only major browser to include a built-in VPN like this for free — giving you more control over your privacy, right where you browse.

Privacy built into the browser

Every time you visit a website, your IP address is shared automatically. IP addresses help websites know where to send information back to your device, but they can also be used to approximate your location, link your browsing activity across sites, and keep logs about your online behavior. It’s one of many ways companies track activity across the internet.

Additionally, when you’re using public Wi-Fi while at a coffee shop, in a hotel, or in your dorm, people can spy on your network traffic and see which websites you might be visiting. 

At Mozilla, we believe people should have stronger protections against this kind of tracking and spying, and that those protections should be easy to use.

Introducing built-in VPN

Our free built-in VPN is designed to make IP protection simple to use in Firefox.

The built-in VPN includes an unprecedented 50 GB per month of free VPN browsing, enough to cover everyday activities like shopping, banking, and reading.

Turn it on in Firefox with a single click. No extra apps. No downloads. Once it’s on, Firefox routes your browsing traffic through a proxy network that replaces your IP address before it reaches a website. The sites you visit see the proxy’s IP address rather than your own. Firefox already encrypts your traffic with HTTPS, but masking your IP adds another layer of privacy. You can mask the URLs you’re visiting from anyone trying to spy on your network traffic on public Wi-Fi, like while you’re enjoying a latte at your favorite coffee shop. 

If you reach the monthly limit, IP protection is paused until the next cycle. Firefox will require you to confirm before proceeding without the VPN so your browsing doesn’t unintentionally continue without IP protection.

Browser-level protection and full-device protection

The free built-in VPN helps secure your traffic while browsing in Firefox, making it a simple way to protect your IP address from being tracked by big tech. However, it does not offer full device protection. 

For those looking for broader coverage, you can also choose protection that extends across your entire device, including other apps. The standalone Mozilla VPN subscription offers this capability with unlimited data across multiple devices. Depending on your needs, you can pick the level of privacy and protection that suits you. 

We’ve heard concerns about so-called “free VPNs,” which often rely on advertising or selling user data to generate revenue. Firefox’s built-in VPN is designed differently. It does not sell your browsing data and does not inject advertising into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service. 

Read more about the differences between VPNs and web proxies.

Rolling out to Firefox users

The free built-in VPN is currently rolling out as a beta to Firefox desktop users in the United States, the United Kingdom, Germany and France, with plans to expand to additional countries over the next several releases.

As with many Firefox features, we’re introducing it gradually starting in Firefox 149 so we can learn from user feedback and continue improving the experience.

Building a more private web

Protecting privacy online is an ongoing effort. As the web evolves, new technologies create both opportunities and challenges for keeping personal information safe.

Mozilla has spent years building privacy protections — from Total Cookie Protection to Private browsing mode to anti-fingerprinting — directly into Firefox so people have more control over how they experience the web. This built-in VPN is one more way Firefox helps you browse with less exposure and more peace of mind.

By continuing to build these protections into Firefox, we aim to make the web safer, more transparent and more respectful of the people who use it.

Take control of your internet

Download Firefox

The post A free VPN you can trust, now built into Firefox appeared first on The Mozilla Blog.

Firefox Developer Experience

Firefox WebDriver Newsletter 149

WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 149 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 149, multiple WebDriver bugs were fixed by contributors:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

WebDriver BiDi

Marionette

The Mozilla Blog

Split View in Firefox: Two tabs side by side, right where you need them

Much of what we do on the web involves looking at more than one thing at a time – booking tickets while checking your calendar, taking notes as you go through a report, or comparing options before making a purchase.

The web is inherently multidimensional. For years, browsing this way meant bouncing back and forth between multiple open tabs, or spinning up multiple windows and using other tools to organize them side-by-side.

The new Split View feature makes these moments easier. It lets you place two tabs next to each other in the same Firefox window so you can see both at once and keep the context you need right in front of you. 

Split View is available to all Firefox users starting with Firefox 149, rolling out on March 24. If you’d like to give it a go: 

  • Make sure you’ve got the latest version of Firefox.
  • Right-click a tab and choose Add Split View. You can also select two tabs, right-click, and choose Open in Split View.

How the Firefox team uses Split View

The team behind Split View has been using it actively over the past few months, and a few workflows quickly stood out. Here are some of the ways people on our team have been using it:

Planning and comparing

Sometimes, you just need two things visible at once.

Gabriel: I’ve been using Split View to plan camping trips. I open a map on one side and a campsite booking page on the other. This makes it easy to explore locations and check availability without constantly switching tabs.

Everyday tasks

Split View is also helpful for small administrative tasks, the kind that involve copying information from one place to another.

Jonathan: I used Split View while filing my taxes. All my documents – W-2s and other forms – were online, so I kept them open on one side while filling things out on the FreeTaxUSA site on the other. Having both visible made the process much easier.

Note-taking

Ania: I often use Split View when reading and writing at the same time. I’ll keep a PDF or article open on one side and take notes on the other as I go. Recently, I’ve been using this setup while preparing notes for my reading group. It helps me stay focused and quickly organize what I want to share.

What’s next for Split View

We built Split View to support the way people naturally move through information on the web – comparing, referencing and writing along the way. This first version focuses on making the most common side-by-side workflows easy. 

If you try it, we’d love your feedback on how it fits into your day-to-day browsing and what would make it even more useful.

Take control of your internet

Download Firefox

The post Split View in Firefox: Two tabs side by side, right where you need them appeared first on The Mozilla Blog.

The Mozilla Blog

Try Tab Notes in Firefox to leave a note on any page

Don’t remember why you have all those webpages open? Now you can leave yourself a note for any tab.

Tab Notes — our latest experimental feature in Firefox — are designed to help you remember, reflect, and pick up where you left off on the web by letting you attach a short note to a webpage. 

Indicated by a sticky note icon and visible when hovering over tabs, Tab Notes remain connected to the page’s URL until you delete them. Your notes are yours. They remain private and accessible only to you. Firefox stores them locally in your browser and doesn’t send them to Mozilla.

Starting March 24, you can try Tab Notes by following these steps:

  • Go to Settings.
  • Navigate to Firefox Labs (or enter about:preferences#experimental in the address bar). 
  • Tick the box beside Tab notes.

Now you’re all set! Just right-click or hover over a tab and choose “Add Note” to create your first tab note!

This work is inspired by user research that we conducted last year, which explored how people resume tasks after interruptions. One key insight we learned is that when we are interrupted, even a small reminder or message can significantly improve our ability to resume a task. 

Many people use a variety of analog (e.g., sticky notes) and digital tools (e.g., note-taking apps) for these purposes as well, and Tab Notes are our exploration of that idea in a practical, lightweight way. These notes are easy to create, edit, and delete.

This is an early experiment, part of the Firefox Labs program. We are eager for feedback, which you can share on Mozilla Connect or by filing a ticket in Bugzilla.

Take control of your internet

Download Firefox

The post Try Tab Notes in Firefox to leave a note on any page appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy Blog

Competition, Innovation, and the Future of the Web – Why Independent Browser Engines Matter

Gecko matters because it ensures there’s an independent voice shaping how the internet evolves. Without Gecko, the landscape would be dominated by Apple and Google alone.

From accessing information and communicating with others to shopping, working, learning, and entertainment, the vast majority of our time online is spent within a browser. While there are many browsers out there, there are only a few browser engines, the technology necessary to render the data that makes up the web as websites we can use.

Browser engines are among the most complex and consequential pieces of infrastructure on the modern internet. They determine how web standards are implemented, how security and privacy protections are enforced, and which actors ultimately shape the evolution of the web.

As the internet increasingly fragments into walled gardens, and as new technologies like artificial intelligence (AI) are integrated directly into browsers, the influence of browser engines is only growing. When innovation is built on a single dominant engine, it concentrates technical and economic power, narrows choice, and risks steering the web toward the priorities of a few large platforms rather than the public interest.

Gecko is Mozilla’s browser engine that powers Firefox. It is one of only three widely used engines and the only independent browser engine. In other words, it is not governed by a company that also runs an operating system to distribute its own browser.

Why Browser Engines Matter 

Browser engines (not to be confused with search engines) are the lesser-known technology powering your web browsers.

As the core software layer responsible for interpreting and rendering web content, browser engines play the fundamental role of turning HTML, CSS, and JavaScript into webpages users can interact with.

While browsers are user-facing products, engines are the layer where structural decisions about the web are made. Examples include privacy and security protections, performance characteristics, and the support of APIs. Browser engines are at the heart of the web.

Gecko and the Browser Monoculture

The browser engine landscape is highly concentrated. In 2013, there were five major browser engines. In 2026, there are only three left: Apple’s WebKit (which companies are required to use to build on iOS), Google’s Blink, and Mozilla’s Gecko. Gecko is the only remaining independent browser engine and it powers Firefox.

When engine diversity declines, so does the practical ability to challenge dominant business models or introduce alternative implementations that can put users first through security, privacy, or other features.

There are only three major browser engines left — Apple’s WebKit, Google’s Blink and Gecko from Mozilla. Apple’s WebKit mainly runs on Apple devices, making Gecko the only cross-platform challenger to Blink.

 

This concentration increasingly risks hard-coding a single company’s technical assumptions into the future of the web. Market pressures often turn standards-compliant but differing implementation choices into “bugs” that need fixing.

As both human and AI-driven browsing expand in use, choices about API implementation, data access, and security boundaries at the browser engine level become even more critical. A monoculture at the engine layer could extend to producing a monoculture in AI browsing experiences as well.

Maintaining an Independent Browser Engine Allows Mozilla to be More User-centric

Gecko, as an independent browser engine, tangibly allows Mozilla to build and operate in a way that is aligned with our mission:  keeping the web open, secure, privacy-first, and accessible to everyone. It ensures that Mozilla is not only advocating for these principles but actively building the underlying infrastructure that makes them possible.

Through Gecko, we have the freedom to design and ship features based on what is best for users, rather than what is easiest or most profitable within another company’s technology stack.

In practice, this enables us to:

  • Introduce privacy and security protections that go beyond industry defaults, such as strong cross-site tracking protections and anti-fingerprinting measures.
  • Experiment with new user interface designs and customization options that give people more control over how they use the web.
  • Build features that reflect Mozilla’s mission-driven priorities, even when they diverge from dominant commercial models.

If a small number of vertically integrated companies (AI assistants, search, operating systems, ads) completely control browser engines, then competition, transparency, and user choice on the open web will be much harder to achieve. They will have strong incentives to favour their own services, limit interoperability, and steer defaults and standards to their advantage.

Maintaining an independent engine also lowers barriers for others. Newer entrants to the browser space can rely on interoperability as defined in specifications. If they are not building their own engine, building on Gecko can help sustain a more competitive browser ecosystem. Engine diversity at this foundational layer enables innovation, which is shaped by multiple actors and multiple visions, rather than it being dictated by a single dominant platform.

Browser Engine Plurality Ensures Tech is Built For People, Not Shareholders 

In an era defined by platform consolidation and AI-driven change, browser engines can’t be treated as invisible infrastructure. Independent engines like Gecko provide a structural counterbalance. Browser engine plurality is needed to ensure competition, transparency, and technology built for people, not shareholders.

As governments increasingly focus on security, resilience and sustainable growth, browser engine competition has a central role to play in avoiding single points of vulnerability or failure. Meaningful competition and a focus on open source approaches help ensure that economies are not locked into a single company’s infrastructure and that governments, companies, and people retain real choice over where to build and how to optimize for their needs.

Mozilla has long engaged with policymakers and regulators on the importance of competition and openness at the browser and engine layer. As the web and broader technology landscape continue to evolve, especially in the face of AI, we will continue to advance policies that protect engine diversity, promote fair competition, and ensure the web evolves in the public interest.

The post Competition, Innovation, and the Future of the Web – Why Independent Browser Engines Matter appeared first on Open Policy & Advocacy.

Niko Matsakis

Maximally minimal view types, a follow-up

A short post to catalog two interesting suggestions that came in from my previous post, and some other related musings.

Syntax with .

It was suggested to me via email that we could use . to eliminate the syntax ambiguity:

let place = &mut self.{statistics};

Conceivably we could do this for the type, like:

fn method(mp: &mut MessageProcessor.{statistics}, ...)

and in self position:

fn foo(&mut self.{statistics}) { }

I have to sit with it but
 I kinda like it?

I’ll use it in the next example to try it on for size.

Coercion for calling public methods that name private types

In my post I said that if you have a public method whose self type references private fields, you would not be able to call it from another scope:

mod module {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        statistics: Statistics,
    }

    pub struct Statistics { .. }

    impl MessageProcessor {
        pub fn push_message(
            &mut self.{messages},
            //         -------- private field
            message: String,
        ) { }
    }
}

pub fn main() {
    let mut mp = module::MessageProcessor::default();
    mp.push_message(format!("Hi"));
    //  ------------ Error!
}

The error arises from desugaring push_message to a call that references private fields:

MessageProcessor::push_message(
    &mut mp.{messages},
    //       -------- not nameable here
    format!("Hi"),
)

I proposed we could lint to avoid this situation.

But an alternative was proposed where we would say that, when we introduce an auto-ref, if the callee references local variables not visible from this point in the program, we just borrow the entire struct rather than borrowing specific fields.

So then we would desugar to:

MessageProcessor::push_message(
    &mut mp,
    //   -- borrow the whole struct
    format!("Hi"),
)

If we then say that &mut MessageProcessor is coercible to a &mut MessageProcessor.{messages}, then the call would be legal.

Interestingly, the autoderef loop already considers visibility: if you do a.foo, we will deref until we see a foo field visible to you at the current point.

Oh and a side note, assigning etc

This raises an interesting question I did not discuss. What happens when you write a value of a type like MessageProcessor.{messages}?

For example, what if I do this:

fn swap_fields(
    mp1: &mut MessageProcessor.{messages},
    mp2: &mut MessageProcessor.{messages},
) {
    std::mem::swap(mp1, mp2);
}

What I expect is that this would just swap the selected fields (messages, in this case) and leave the other fields untouched.

The basic idea is that a type MessageProcessor.{messages} indicates that the messages field is initialized and accessible and the other fields must be completely ignored.

Another possible future extension: moved values

This represents another possible future extension. Today if you move out of a field in a struct, then you can no longer work with the value as a whole:

impl MessageProcessor {
    fn example(mut self) {
        // move from self.statistics
        std::mem::drop(self.statistics);

        // now I cannot call this method,
        // because I can't borrow `self`:
        self.push_message(format!("Hi again"));
    }
}

But with selective borrowing, we could allow this, and you could even return “partially initialized” values:

impl MessageProcessor {
    fn take_statistics(mut self) -> MessageProcessor.{messages} {
        std::mem::drop(self.statistics);
        self
    }
}

That’d be neat.

Jonathan Almeida

Use |mach try --no-push| for a configuration dry run

I wanted to see what the generated try configuration would be for a new preset I made, and I did this by submitting real try pushes (empty ones, so they wouldn't actually consume execution resources). What I was looking for was a "dry run" option in the help files, but I recently discovered it is actually --no-push.

$ jj try-push --preset fenix --no-push # 'fenix' as an example preset
Artifact builds enabled, pass --no-artifact to disable
Commit message:
Fuzzy (preset: fenix) query='build-apk-fenix-debug&query='signing-apk-fenix-debug&query='build-apk-fenix-android-test-debug&query='signing-apk-fenix-android-test-debug&query='test-apk-fenix-debug&query='ui-test-apk-fenix-arm-debug&query=^source-test 'fenix&query='generate-baseline-profile-firebase-fenix
mach try command: `./mach try --preset fenix --no-push`
Pushed via `mach try fuzzy`
Calculated try_task_config.json:
{
    "parameters": {
        "optimize_target_tasks": false,
        "try_task_config": {
            "disable-pgo": true,
            "env": {
                "TRY_SELECTOR": "fuzzy"
            },
            "tasks": [
                "build-apk-fenix-android-test-debug",
                "build-apk-fenix-debug",
                "generate-baseline-profile-firebase-fenix",
                "source-test-android-detekt-detekt-fenix",
                "source-test-android-l10n-lint-l10n-lint-fenix",
                "source-test-android-lint-fenix",
                "source-test-buildconfig-buildconfig-fenix",
                "source-test-ktlint-fenix",
                "source-test-mozlint-android-fenix",
                "test-apk-fenix-debug",
                "ui-test-apk-fenix-arm-debug",
                "ui-test-apk-fenix-arm-debug-smoke"
            ],
            "use-artifact-builds": true
        }
    },
    "version": 2
}

Here, jj try-push is a personal alias I keep around ./mach try to simplify my workflow.
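For anyone curious, an alias like that can be defined in jj's user config. This is just a sketch of one way to do it, assuming a jj release that ships `jj util exec` (the alias name mirrors the post; everything else is an assumption):

```toml
# ~/.config/jj/config.toml (sketch)
[aliases]
# `jj try-push <args>` expands to `jj util exec -- ./mach try <args>`
try-push = ["util", "exec", "--", "./mach", "try"]
```

With that in place, `jj try-push --preset fenix --no-push` runs `./mach try --preset fenix --no-push` from the working copy.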

Jonathan Almeida

Create new revisions in Jujutsu with multiple heads

It was one of those "ah ha!" moments for me when I finally used it. Chris Krycho covers the concept of megamerges with this diagram:

       m --- n
      /       \
a -- b -- c -- [merge] -- [wip]
      \       /
       w --- x

I've found a more realistic example that best relates to my natural workflow: implementing a feature (A) benefitted from having the changes of another tooling patch upgrade (B), which led to discovering and fixing a bug (C).

      (B)
       m ----- n
      /         \           (A)
a -- b --------- [merge] --- y -- z
      \                     /      \                (C)
        -------------------         ----- [merge] -- w -- x
        \                                /
          -------------------------------

In this case, separating these into distinct streams of work is quite logical, but we also don't need to leave them unlinked: keeping them merged lets them benefit from each other.

This is what my jj log ended up looking like:

@  oppmsuvz jxxxxxxxxxxxx@gmail.com 2026-03-22 00:34:10 firefox@ 05259417
│  Bug xxxxxxx - Simplify the tests
○  ultowtnr jxxxxxxxxxxxx@gmail.com 2026-03-22 00:34:04 100c4cce
│  Bug xxxxxxx - Include private flag in ShareData
○    lorusmuo jxxxxxxxxxxxx@gmail.com 2026-03-21 20:19:30 905b0460
├─╼  (empty) (no description set)
│ ○  sumqskuu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:22:00 92f6028b
│ │  Add a new secret settings fragment
│ ○  oylmprpu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:22:00 18931825
│ │  Create a new feature for receiving and sending commands.
│ ○  xrnnoonu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:21:48 618020c7
╭──  (empty) (no description set)
│ ○  rqlyqqzx jxxxxxxxxxxxx@gmail.com 2026-03-19 17:20:20 c9b5323c
│ │  Bug xxxxxxx - Part 2: Create new android gradle module skill
│ ○  txvozpwz jxxxxxxxxxxxx@gmail.com 2026-03-19 17:20:13 cee18510
├─╯  Bug xxxxxxx - Part 1: Add new gradle example module
◆  pwsnmryn vxxxxxxxxxxxx@gmail.com 2026-03-18 13:21:47 main@origin fa20ce29
│  Bug xxxxxxx - Make my feature work for everyone
~

When I need to submit these, moz-phab has support for specifying revset ranges with moz-phab start_rev end_rev. However, I can also use jj rebase -s <rev> -d main@origin to put out some try pushes to validate they still work separately - so far, no conflicts in this step.
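For completeness, the merge revisions in a graph like the one above come from giving `jj new` more than one parent. A sketch using the (hypothetical) change IDs from the first diagram:

```shell
# Create a new working-copy revision with two parents: the head of one
# stream (n) and the head of another (x). The IDs are placeholders
# standing in for the change IDs in the diagram.
jj new n x

# Later, test one stream on its own by rebasing it back onto main:
jj rebase -s <rev> -d main@origin
```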

Niko Matsakis

Maximally minimal view types

This blog post describes a maximally minimal proposal for view types. It comes out of a conversation at RustNation I had with lcnr and Jack Huey, where we were talking about various improvements to the language that are “in the ether”, that basically everybody wants to do, and what it would take to get them over the line.

Example: MessageProcessor

Let’s start with a simple example. Suppose we have a struct MessageProcessor which gets created with a set of messages. It will process them and, along the way, gather up some simple statistics:

pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive] // Not relevant to the example, just good practice!
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

The basic workflow for a message processor is that you

  • accumulate messages by pushing them into the self.messages vector
  • drain the accumulated messages and process them
  • reuse the backing buffer to push future messages

Accumulating messages

Accumulating messages is easy:

impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }
}

Processing a single message

The function to process a single message takes ownership of the message string because it will send it to another thread. Before doing so, it updates the statistics:

impl MessageProcessor {
    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}

Draining the accumulated messages

The final function you need is one that will drain the accumulated messages and process them. Writing this ought to be straightforward, but it isn’t:

impl MessageProcessor {
    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }
}

The problem is that self.messages.drain(..) takes a mutable borrow on self.messages. When you call self.process_message, the compiler assumes you might modify any field, including self.messages. It therefore reports an error. This is logical, but frustrating.

Experienced Rust programmers know a number of workarounds. For example, you could swap the messages field for an empty vector. Or you could invoke self.messages.pop(). Or you could rewrite process_message to be a method on the Statistics type. But all of them are, let’s be honest, suboptimal. The code above is really quite reasonable; it would be nice if you could make it work in a straightforward way, without needing to restructure it.

What’s needed: a way for the borrow checker to know what fields a method may access

The core problem is that the borrow checker does not know that process_message will only access the statistics field. In this post, I’m going to focus on an explicit, and rather limited, notation, but I’ll also talk about how we might extend it in the future.

View types extend struct types with a list of fields

The basic idea of a view type is to extend the grammar of a struct type to optionally include a list of accessible fields:

RustType := StructName<...>
         |  StructName<...> { .. }         // <-- what we are adding
         |  StructName<...> { (fields),* } // <-- what we are adding

A type like MessageProcessor { statistics } would mean “a MessageProcessor struct where only the statistics field can be accessed”. You could also include a .., like MessageProcessor { .. }, which would mean that all fields can be accessed, which is equivalent to today’s struct type MessageProcessor.

View types respect privacy

View types would respect privacy, which means you could only write MessageProcessor { messages } in a context where you can name the field messages in the first place.

View types can be named on self arguments and elsewhere

You could use this to define that process_message only needs to access the field statistics:

impl MessageProcessor {
    fn process_message(&mut self {statistics}, message: String) {
        //             ----------------------
        //             Shorthand for: `self: &mut MessageProcessor {statistics}`
        // ... as before ...
    }
}

Of course you could use this notation in other arguments as well:

fn silly_example(.., mp: &mut MessageProcessor {statistics}, ..) { }

Explicit view-limited borrows

We would also extend borrow expressions so that it is possible to specify precisely which fields will be accessible from the borrow:

let messages = &mut some_variable {messages}; // Ambiguous grammar? See below.

When you do this, the borrow checker produces a value of type &mut MessageProcessor {messages}.

Sharp-eyed readers will note that this is ambiguous. The above could be parsed today as a borrow of a struct expression like some_variable { messages } or, more verbosely, some_variable { messages: messages }. I’m not sure what to do about that. I’ll note some alternative syntaxes below, but I’ll also note that it would be possible for the compiler to parse the AST in an ambiguous fashion and disambiguate later on once name resolution results are known.

We automatically introduce view borrows in an auto-ref

In our example, though, the user never writes the &mut borrow explicitly. It results from the auto-ref added by the compiler as part of the method call:

pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        self.process_message(message); // <-- auto-ref occurs here
    }
}

The compiler internally rewrites method calls like self.process_message(message) to fully qualified form based on the signature declared in process_message. Today that results in code like this:

MessageProcessor::process_message(&mut *self, message)

But because process_message would now declare &mut self { statistics }, we can instead desugar to a borrow that specifies a field set:

MessageProcessor::process_message(&mut *self {statistics}, message)

The borrow checker would respect views

Integrating views into the borrow checker is fairly trivial. The way the borrow checker works is that, when it sees a borrow expression, it records a “loan” internally that tracks the place that was borrowed, the way it was borrowed (mut, shared), and the lifetime for which it was borrowed. All we have to do is to record, for each borrow using a view, multiple loans instead of a single loan.

For example, if we have &mut self, we would record one mut-loan of self. But if we have &mut self {field1, field2}, we would record two mut-loans, one of self.field1 and one of self.field2.

Example: putting it all together

OK, let’s put it all together. This was our original example, collected:

pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }

    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }

    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}

Today, process_pushed_messages results in an error:

pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        //         ------------- borrows `self.messages`
        self.process_message(message); // <-- ERROR!
        //   --------------- borrows `self`
    }
}

The error arises from a conflict between two borrows:

  • self.messages.drain(..) desugars to Vec::drain(&mut self.messages, ..) which, as you can see, mut-borrows self.messages;
  • then self.process_message(..) desugars to MessageProcessor::process_message(&mut self, ..) which, as you can see, mut-borrows all of self, which overlaps self.messages.

But in the “brave new world”, we’ll modify the program in one place:

-    fn process_message(&mut self, message: String) {
+    fn process_message(&mut self {statistics}, message: String) {

and as a result, the process_pushed_messages function will now borrow check successfully. This is because the two loans are now issued for different places:

  • as before, self.messages.drain(..) desugars to Vec::drain(&mut self.messages, ..) which mut-borrows self.messages;
  • but now, self.process_message(..) desugars to MessageProcessor::process_message(&mut self {statistics}, ..) which mut-borrows self.statistics, which doesn’t overlap self.messages.

At runtime, this is still just a pointer

One thing I want to emphasize is that “view types” are a purely static construct and do not change how things are compiled. They simply give the borrow checker more information about what data will be accessed through which references. The process_message method, for example, still takes a single pointer to self.

This is in contrast with the workarounds that exist today. For example, if I were writing the above code, I might well rewrite process_message into an associated fn that takes a &mut Statistics:

impl MessageProcessor {
    fn process_message(statistics: &mut Statistics, message: String) {
        statistics.message_count += 1;
        statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}

This would be annoying, of course, since I’d have to write Self::process_message(&mut self.statistics, ..) instead of self.process_message(), but it would avoid the borrow check error.

Beyond being annoying, it would change the way the code is compiled. Instead of taking a reference to the MessageProcessor it now takes a reference to the Statistics.

In this example, the change from one type to another is harmless, but there are other examples where you need access to multiple fields, in which case it is less efficient to pass them individually.

Frequently asked questions

How hard would this be to implement?

Honestly, not very hard. I think we could ship it this year if we found a good contributor who wanted to take it on.

What about privacy?

I would require that the fields that appear in view types are ‘visible’ to the code that is naming them (this includes in view types that are inserted via auto-ref). So the following would be an error:

```rust
mod m {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        ...
    }

    impl MessageProcessor {
        pub fn process_message(&mut self {messages}, message: String) {
            //                           ----------
            //   It's *legal* to reference a private field here, but it
            //   results in a lint, just as it is currently *legal*
            //   (but linted) for a public method to take an argument of
            //   private type. The lint is because doing this is effectively
            //   going to make the method uncallable from outside this module.
            self.messages.push(message);
        }
    }
}

fn main() {
    let mut mp = m::MessageProcessor::default();
    mp.process_message(format!("Hello, world!"));
    // --------------- ERROR: field `messages` is not accessible here
    //
    // This desugars to:
    //
    // ```
    // MessageProcessor::process_message(
    //     &mut mp {messages},        // <-- names a private field!
    //     format!("Hello, world!"),
    // )
    // ```
    //
    // which names the private field `messages`. That is an error.
}
```

Does this mean that view types can’t be used in public methods?

More-or-less. You can use them if the view types reference public fields:

```rust
#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub average_bytes: usize,
    // ... maybe more fields will be added later ...
}

impl Statistics {
    pub fn total_bytes(&self {message_count, average_bytes}) -> usize {
        //              ----------------------------
        //       Declare that we only read these two fields.
        self.message_count * self.average_bytes
    }
}
```

Won’t view types be rather limited if they more-or-less only work for private methods?

Yes! But it’s a good starting point. And my experience is that this problem occurs most often with private helper methods like the one I showed here. It can occur in public contexts, but much more rarely, and in those circumstances it’s often more acceptable to refactor the types to better expose the groupings to the user. This doesn’t mean I don’t want to fix the public case too, it just means it’s a good use-case to cut from the MVP. In the future I would address public fields via abstract fields, as I described in the past.
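For comparison, here is a minimal sketch in today's Rust of the refactoring option mentioned above: grouping the related fields into their own public type so callers can borrow just that group. Names follow the post's running example but are otherwise hypothetical.

```rust
// Sketch of "refactor the types to expose the groupings" in current Rust.
// The fields that are read together live in their own public type.
pub struct Statistics {
    pub message_count: usize,
    pub average_bytes: usize,
}

impl Statistics {
    // Borrows only the statistics, leaving the rest of any containing
    // struct free to be borrowed simultaneously.
    pub fn total_bytes(&self) -> usize {
        self.message_count * self.average_bytes
    }
}

fn main() {
    let stats = Statistics { message_count: 3, average_bytes: 10 };
    assert_eq!(stats.total_bytes(), 30);
}
```

This achieves the public-API goal without view types, at the cost of committing the grouping into the type structure itself.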

What if I am borrowing the same sets of fields over and over? That sounds repetitive!

That’s true! It will be! I think in the future I’d like to see some kind of ‘ghost’ or ‘abstract’ fields, like I described in my abstract fields blog post. But again, that seems like a “post-MVP” sort of problem to me.

Must we specify the field sets being borrowed explicitly? Can’t they be inferred?

In the syntax I described, you have to write &mut place {field1, field2} explicitly. But there are many approaches in the literature to inferring this sort of thing, with row polymorphism perhaps being the most directly applicable. I think we could absolutely introduce this sort of inference, and in fact I’d probably make it the default, so that &mut place always introduces a view type, but it is typically inferred to “all fields” in practice. But that is a non-trivial extension to Rust’s inference system, introducing a new kind of inference we don’t do today. For the MVP, I think I would just lean on auto-ref covering by far the most common case, and have explicit syntax for the rest.

Man, I have to write the fields that my method uses in the signature? That sucks! It should be automatic!

I get that for many applications, particularly with private methods, writing out the list of fields that will be accessed seems a bit silly: the compiler ought to be able to figure it out.

On the flip side, this is the kind of inter-procedural inference we try to avoid in Rust, for a number of reasons:

  • it introduces dependencies between methods, which makes inference more difficult (even undecidable, in extreme cases);
  • it makes for ’non-local errors’ that can be really confusing as a user, where modifying the body of one method causes errors in another (think of the confusion we get around futures and Send, for example);
  • it makes the compiler more complex, and we would not be able to parallelize it as easily (not that we parallelize today, but that work is underway!).

The bottom line for me is one of staging: whatever we do, I think we will want a way to be explicit about exactly what fields are being accessed and where. Therefore, we should add that first. We can add the inference later on.

Why does this need to be added to the borrow checker? Why not desugar?

Another common alternative (and one I considered for a while
) is to add some kind of “desugaring” that passes references to individual fields instead of a single reference. I don’t like this for two reasons. First, I think it’s frankly more complex! View types are a fairly straightforward change to the borrow checker, whereas that desugaring would leak into code all over the compiler, and it would make diagnostics etc. much more complex.

But second, it would require changes to what happens at runtime, and I don’t see why that is needed in this example. Passing a single reference feels right to me.

What about the ambiguous grammar? What other syntax options are there?

Oh, right, the ambiguous grammar. To be honest I’ve not thought too deeply about the syntax. I was trying to have the type Struct { field1, field2 } reflect struct constructor syntax, since we generally try to make types reflect expressions, but of course that leads to the ambiguity in borrow expressions that causes the problem:

```rust
let foo = &mut some_variable { field1 };
//                             ------ is this a variable or a field name?
```

Options I see:

  • Make it work. It’s not truly ambiguous, but it does require some semantic disambiguation, i.e., in at least some cases, we have to delay resolving this until name resolution can complete. That’s unusual for Rust. We do it in some small areas, most notably around the interpretation of a pattern like None (is it a binding to a variable None or an enum variant?).
  • New syntax for borrows only. We could keep the type syntax but make the borrow syntax different, maybe &mut {field1} in some_variable or something. Given that you would rarely type the explicit borrow form, that seems good?
  • Some new syntax altogether. Perhaps we want to try something different, or introduce a keyword everywhere? I’d be curious to hear options there. The current one feels nice to me but it occupies a “crowded syntactic space”, so I can see it being confusing to readers who won’t be sure how to interpret it.

Conclusion: this is a good MVP, let’s ship it!

In short, I don’t really see anything blocking us from moving forward here, at least with a lang experiment.