Today is my eighth Moziversary! I joined Mozilla as a full-time employee on
May 1st, 2018. I previously blogged in 2019, 2020, 2021, 2022,
2023, 2024, and 2025.
You might have come across this built-in data consent thing for extensions
in Firefox. I spent a good chunk of last year working on this project, from
developing a technical proposal to implementing the feature in Gecko,
Firefox for desktop and Firefox for Android.
Speaking of Android, I became the module owner for
Fenix::Add-ons, a module for all the code related to
add-ons in Firefox for Android (which we call "Fenix" internally). Between the
creation of this new module, and an ever-solidifying collaboration between the
Add-ons and Android teams, the support for extensions in Firefox for Android has
a bright future! Having started my Android journey in 2023, this feels like a
noteworthy achievement.
Near the end of last year, I moved back to being a full-time AMO engineer to
support a team that was down to two engineers. I redesigned the detail page, and
started some refactoring on our security scanners, which I had originally
created back in 2019.
In other news, I joined the AI/LLM/vibe-coding crowd thanks to my colleague
Paul, and it took me about a month to get brain-fried… AI
fatigue is real, indeed. That said, Claude Code has been somewhat useful to
me, and I don't hate it, but I also don't love it.
Thank you to everyone in the Add-ons team as well as to all the folks I had the
pleasure to work with so far. Cheers!
The nvptx64-nvidia-cuda target is a compilation target for NVIDIA GPUs. When using this target, the final output is PTX. Two version choices shape that output:
a GPU architecture (for example, sm_70, sm_80, …), which determines which GPUs can run the PTX, and
a PTX ISA version, which determines which CUDA driver versions can load (and JIT-compile) the PTX.
In Rust 1.97 (scheduled for release on July 9, 2026), the baseline PTX ISA version and GPU architecture for nvptx64-nvidia-cuda will be increased. These changes affect both the Rust compiler (rustc) and related host tooling, and they make it impossible to generate PTX artifacts compatible with older GPUs and older CUDA drivers.
The new minimum supported versions will be:
PTX ISA 7.0 (requires a CUDA 11 driver or newer)
SM 7.0 (GPUs with compute capability below 7.0 are no longer supported)
Why are the requirements being changed?
Until now, Rust has supported emitting PTX for a wide range of GPU architectures and PTX ISA versions. In practice, several defects existed that could cause valid Rust code to trigger compiler crashes or miscompilations. Raising the baseline addresses these issues and enables more complete support for the remaining supported hardware.
Removing support affects users of the architectures being removed. In this case, the most recent affected GPU architectures date back to 2017 and are no longer actively supported by NVIDIA. We therefore expect the overall impact of this change to be limited.
Maintaining support for these architectures would require substantial effort. These removals let us focus development efforts on improving correctness and performance for currently supported hardware.
What happens when I update to Rust 1.97?
If you need to target a CUDA driver that does not support PTX ISA 7.0 (CUDA 10-era drivers and older), Rust 1.97 will no longer be able to generate PTX compatible with that environment. Similarly, if you need to run on GPUs with compute capability below 7.0 (for example, Maxwell or Pascal), Rust 1.97 will no longer be able to generate compatible PTX for those GPUs.
Assuming you are targeting a CUDA driver compatible with CUDA 11 or newer and using GPUs with compute capability 7.0 or newer:
If you do not specify -C target-cpu, the new default will be sm_70, and your build should continue to work (but will no longer be compatible with pre-Volta GPUs).
If you currently specify an older -C target-cpu (for example, sm_60), you will need to either:
remove that flag and let it default to sm_70, or
update it to sm_70 or a newer architecture.
If you already specify -C target-cpu=sm_70 (or newer), there should be no behavioral changes from this update.
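For instance, pinning the architecture explicitly might look like this – a minimal sketch, assuming a nightly toolchain with the nvptx64-nvidia-cuda target installed via rustup, applied to your own kernel crate:

# Pin the GPU architecture; sm_70 is both the new baseline and the new default.
RUSTFLAGS="-C target-cpu=sm_70" cargo +nightly build --target nvptx64-nvidia-cuda --release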
Removed obsolete migration logic that forced distribution language packs to be reinstalled when upgrading from Firefox versions older than 67 – Bug 2000797
Thanks to Aloys for contributing the changes needed to clean up this old XPIProvider migration logic
WebExtensions Framework
Fixed a regression where WebRTC permission popups were queued and suppressed while an extension popup was open – Bug 1982832
WebExtension APIs
Fixed an edge case where tabs.move would revert the order of split-view tabs when moving a split-view tab to a new window – Bug 2028832
Fixed the windows API reporting window type "normal" instead of "popup" for windows opened via window.open() – Bug 2030631
Thanks to Brandon Lucier for contributing this small but very much appreciated fix to the windows WebExtensions API!
Henrik Skupin landed support for browser.setClientWindowState, originally started by Dan and continued by Liam DeBeasi, with final fixes and improvements completed after earlier contributors were unable to continue.
This also updated some global styling for card, message bar, and text input border radii and colors, and the moz-promo component got a refactor with better support for image styling by default
Jules Simplicio has been updating variable names in the Figma Nova Components/Styles files so our design token naming is consistent between Figma and central (so we can run our import script for more Nova automation)
UX Fundamentals
Felt Privacy error pages now support more NSS errors instead of falling through to the legacy page. Updated introductory text for the denied-port-access error. – 2024150
Fixed a test in browser_aboutCertError.js that was failing on Linux opt standalone and removed the platform skip. – 2028651
Added clock skew detection to the Felt Privacy error pages. When a certificate error is caused by a wrong system clock, the Felt Privacy error pages now show the same dedicated clock-skew message that the legacy error pages had, helping guide users to correct their system time. – 2025049
Fixed misaligned bullet points in the "What can you do?" section of Felt Privacy network error pages, restoring correct visual indentation for that list. – 2028632
We're delighted that Abigail Besdin has joined Mozilla as our new Chief Operating Officer.
This is an incredibly exciting time for Mozilla. Our focus is to become the world's most trusted software company by building products that let people use the internet openly, safely, and on their terms. As technology changes rapidly, we are working to strengthen the business foundation and infrastructure that champions our mission. Delivering on that ambition takes more than great products; it demands operational rigor. Abigail will lead this effort, demonstrating how values-driven organizations can scale with discipline, speed, and trust in the AI era.
As COO, Abigail will drive company strategy and oversee Mozilla's Core Services teams: Business Operations, Data, Infrastructure, IT, Legal, People, Security, and Strategy. These are the functions that enable us to move quickly and scale with focus. Abigail will sharpen how we plan, prioritize, and execute across the company.
Abigail brings more than 18 years of experience building and scaling high-impact platforms. She co-founded Great Jones, a venture-backed property management startup where she raised $30M, reached $10M in ARR, and led a successful acquisition by Roofstock. At Roofstock, she served as Chief of Staff to the CEO – functioning as an internal COO – where she launched new product lines, closed and integrated two acquisitions, and led the company's strategic planning process.
Earlier in her career, she spent six years at Skillshare, where she launched the company's online learning platform and built its growth and content engines from the ground up.
That combination of founder's instinct and operator's discipline is exactly what Mozilla needs right now. Abigail will report directly to our CEO and join the executive team.
I've learned firsthand that ambitious product goals are only as effective as the operations underpinning them. Mozilla's mission is as big as it gets, and I'm thrilled to lead our Core Services organization to enable rigorous, smart, and quick decision-making across the business. With a powerful execution engine, we can make sure the best of Mozilla's mission materializes.
Abigail Besdin, Chief Operating Officer
Abigail studied Philosophy at NYU, with a focus on Ethics and Mathematical Logic. Born and raised in New York City, she still lives there with her husband and three kids.
Will Lachance does a retrospective on the Glean Dictionary Outreachy internship.
See also "Linh's Outreachy Internship Highlights" https://www.youtube.com/watch?v=UJdIkHDPgGQ
To learn more about Outreachy, see https://www.outreachy.org/
Please note that some of the information provided in this report may be subject to change, as we are sometimes sharing information about projects that are still in early stages and not yet final.
Welcome!
Are you a locale leader and want us to include new members in our upcoming reports? Contact us!
What's new or coming up in Firefox desktop
Firefox string deadline changes
Starting with Firefox 149, changes to developer deadlines for Nightly and Beta have resulted in a slight shift in string translation deadlines, giving us two extra days to land strings. Previously, deadlines in Pontoon were set to the Sunday ahead of the final Release Candidate; going forward they will be set to a Tuesday. For example, the upcoming deadline for Firefox 151 is Tuesday, May 12.
If you're interested in more details on upcoming Firefox releases and milestones, https://whattrainisitnow.com has all the latest details.
UI Refresh
Behind the scenes, a refresh of Firefox's visual look has been ongoing under the internal name "Nova". You may have seen some blog reports on this recently, or perhaps noticed bugs in Bugzilla with it in the title. We will start seeing new strings related to these changes here and there as development work progresses; however, we don't expect a large number of string changes stemming from this work.
That being said, these updates also bring some changes in how we communicate directly to our users within Firefox. One of these changes you may have already met: our new mascot, Kit. If you missed the announcement, give it a read here. You may also notice a shift in voice for user-directed messages, with source strings becoming more Genuine, Fiery, and Playful. See this recent update on Firefox's brand voice for more details.
Settings redesign
Localization for the update to about:settings has been going on for some time (starting early this year), and the bulk of the translation work is behind us at this point. You may still see some new strings (particularly around Privacy & Security), but many of the strings are already in a viewable/testable state in Nightly 152. You can check your translations and test out the redesign by typing about:config into your URL bar, proceeding past the warning message, searching for browser.settings-redesign.enabled, and setting the value to true.
What's new or coming up in mobile
Things have been particularly busy on mobile over the past couple of months. For example, Firefox for Android saw a significant spike in April, with the number of new strings increasing to over 200 compared to fewer than 50 in March – more than eight times the typical monthly volume*.
There are two main drivers behind this increase. First, Firefox for Android is introducing a built-in VPN feature, bringing it in line with the functionality already available in Firefox. Second, both iOS and Android teams are working on a new widget for the upcoming 2026 World Cup, allowing users to follow their team directly from the browser.
Given the short turnaround time for this feature, you will notice that many strings are intentionally kept consistent across platforms – and they have started landing on Desktop as well. We're also pre-landing as many strings as possible ahead of implementation, to give localizers more time to complete translations.
* Did you know that you can track the number of new strings in a project from the Insights page in Pontoon? Check for example Firefox for Android. In the Translation activity chart, click on New source strings in the legend to display this data. Given the difference in scale, it can also help to hide other metrics to make the chart easier to read.
What's new or coming up in Pontoon
New documentation system. Pontoon now features a brand-new, unified documentation system. This new hub brings together previously scattered resources into a single, streamlined experience, consolidating developer, localizer, and admin documentation from three separate sites into one cohesive platform. By centralizing content, the new system makes it easier to find, navigate, and maintain documentation, ensuring contributors of all roles have quick access to up-to-date and consistent guidance.
Search. You can now set default search options directly in your profile. This allows you to tailor your search without having to adjust filters each time.
The same settings are also applied when using the recently introduced global search page, which brings a major step forward in unifying localization across Mozilla by allowing users to search for strings across all projects and locales in one place. Inspired by Transvision and designed as its successor, the feature integrates deeply with Pontoon, making it easy to filter results, compare translations across languages, and jump directly into the translation workflow.
AI integration. We've also refined the prompt used by the LLM-powered translation feature. The goal is not to change how the feature works, but to make its output more consistent and better aligned with the context available in Pontoon. For example, the updated prompt improves how punctuation is handled, reducing variability in suggestions.
In addition, the prompt now includes more contextual data:
String ID.
Comments, including pinned comments from project managers.
Matches from terminology.
This additional context helps the model generate more relevant suggestions. It also represents a first step toward making LLM suggestions more useful, ahead of potential experiments with displaying them by default alongside suggestions from traditional machine translation.
New contributors. We're also excited to welcome a group of new contributors who have started making an impact on Pontoon over the past few months. MundiaNderi, nishitmistry, dannycolin, first-afk, wassafshahzad, huseynovvusal, and Peacanduck have all contributed valuable improvements across different parts of the project, helping us move faster and improve the overall experience.
A special shoutout goes to Serah (MundiaNderi), who not only made significant contributions but also shared insights into her work in a recent blog post about enhancing comment management in Pontoon – an excellent example of the kind of collaboration and knowledge sharing we love to see in the community.
Newly published localizer facing documentation
As part of the recent documentation update for Pontoon, we've reorganized the content around pretranslation to make it clearer and easier to navigate. There is now a dedicated page outlining the criteria required to enable pretranslation for a locale, along with guidance on how to monitor its effectiveness over time (for example, by tracking metrics like acceptance rate or time to review). If you're a locale manager and want to try pretranslation for your locale, you can request it directly from Pontoon.
Over the past 12 months, we also ran a limited experiment using paid translation agencies for two locales. The goal was to restore the localization level of Firefox for Android in cases where the community was inactive – situations that have since improved, with both communities now active again.
Because volunteer communities remain the foundation of Mozilla's localization model, we wanted to be transparent about when and why this approach was used, and what it means in practice. This includes clarifying how external support fits within a community-driven ecosystem, where localizers retain ownership and responsibility for quality and direction. You can find more details on this page.
Friends of the Lion
Image by Elio Qoshi
We continue the localizer spotlight series this year.
Meet Oliver from China: Firefox localizer, accounting student, former Minecraft translator, and Bocchi the Rock! fan. He talks about starting with a single typo, why Firefox's independence matters to him, and how the Simplified Chinese community keeps quality high with cross-review and shared responsibility.
Marcelo from Argentina needs no introduction to the localization communities. From Phoenix 0.3 to 24 years later, he shares how he got started, what it meant to be part of the Firefox 1.0 release, his experience as an l10n manager, and why using Mozilla products in his own language – Spanish (Argentina) – continues to motivate him.
What does 18 years of volunteer localization look like? From discovering Firefox and Linux out of curiosity to leading the Portuguese translation team, Cláudio from Portugal reflects on why localization is a form of digital activism, and how every translated word helps build a more inclusive internet.
Baurzhan from Kazakhstan began his localization journey with a simple question: why wasn't Kazakh available in widely used software? That curiosity grew into a long-term commitment to localization, leading to the successful translation of Firefox and many other open source projects. His work demonstrates the power of perseverance in making technology accessible to all.
If you enjoy the series, please help us identify the localizers you'd like to see featured by filling out this nomination form. If you have stories to share, tell us in your own words.
Know someone in your l10n community who's been doing a great job and should appear here? Contact us and we'll make sure they get a shout-out!
servoshell is now installed as servoshell or servoshell.exe, rather than servo or servo.exe (@jschwe, @mrobinson, #42958).
--userscripts has been removed for now, but anyone who uses it is welcome to reinstate it as a wrapper around UserContentManager::add_script (@jschwe, #43573).
We've fixed a bug where link hover status lines are sometimes not legible (@simartin, #43320), and we're working on getting servoshell signed for macOS to avoid getting blocked by Gatekeeper (@jschwe, #42912).
crypto.subtle.deriveBits() for X25519 now checks for all-zero secrets in constant time, and verify() for HMAC now compares signatures in constant time (@kkoyung, #43775, #43773).
'Content-Security-Policy' now handles redirects correctly (@TimvdLippe, #43438), and sends violation reports with the correct blockedURI and referrer (@TimvdLippe, #43367, #43645, #43483).
The policy in <meta> now combines with the policy sent in HTTP headers, rather than overriding it (@TimvdLippe, @elomscansio, #43063).
When checking nonces, we now reject elements with duplicate attributes (@dyegoaurelio, #43216).
The document containing an <iframe> can no longer access the contents of error pages (@TimvdLippe, #43539), and CSP violations inside an <iframe> are now correctly reported (@TimvdLippe, #43652).
We're continuing to implement document.execCommand() for rich text editing (@TimvdLippe, #43177), under --pref dom_exec_command_enabled.
'beforeinput' and 'input' events are now fired when executing supported and enabled commands (@TimvdLippe, #43087), the 'defaultParagraphSeparator' and 'styleWithCSS' commands are now supported (@TimvdLippe, #43028), and the 'delete' command is partially supported (@TimvdLippe, #43016, #43082).
All of the features above are enabled in servoshell's experimental mode.
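If you want to try these commands before they are enabled by default, one option is to flip the pref when launching servoshell – a sketch, assuming a locally built servoshell binary and the pref name quoted above:

# Launch servoshell with rich text editing enabled for a test page.
./servoshell --pref dom_exec_command_enabled https://example.com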
Work on accessibility support for web contents continues under --pref accessibility_enabled.
There was a breaking change in the embedding API (@delan, @alice, #43029), and we've landed support for 'grafting' the accessibility tree of a document into that of its containing webview (@delan, @alice, #43012, #43013, #43556).
As a result, when you navigate, separate documents can have separate accessibility trees without complicating the embedder.
<link rel=modulepreload> is now partially supported (@Gae24, #42964), though recursive fetching of descendants is gated by --pref dom_allow_preloading_module_descendants (@Gae24, #43353).
For a long time, Servo has had some support for the Web Bluetooth API under --pref dom_bluetooth_enabled.
We've recently reworked our implementation to adopt btleplug, the cross-platform Rust-native Bluetooth LE library (@webbeef, #43529, #43581).
We've landed more fixes to Servo's async parser (@simonwuelker, #42930, #42959), under --pref dom_servoparser_async_html_tokenizer_enabled.
If we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don't use document.write(), and even improve the html5ever API for the ecosystem.
For developers
Servo's DevTools feature now has partial support for inspecting service workers (@CynthiaOketch, #43659), as well as using the navigation controls along the top of the UI (@brentschroeter, @eerii, #43026).
In the Inspector tab, we've fixed a bug where the UI stops updating when navigating to a new page (@brentschroeter, #43153).
In the Console tab, you can now evaluate JavaScript in web workers and service workers (@SharanRP, #43361, #43492).
We've fixed some long-standing bugs where the DevTools UI may stop responding due to protocol desyncs (@brentschroeter, @eerii, #43230, #43236), or due to messages from multiple Servo threads being interleaved (@brentschroeter, @eerii, #43472).
For developers of Servo itself, mach can be a bit opaque at times.
To make mach more transparent and composable, weâve added mach print-env and mach exec commands (@jschwe, #42888).
The empty default implementation of EventLoopWaker::wake has been removed, because it almost never makes sense for a new custom impl to leave the method empty (@chrisduerr, @mrobinson, #43250).
Add Cookie is now more conformant (@yezhizhen, #43690), which led to Servo developers landing a spec patch.
'pause' actions are now slightly more efficient (@yezhizhen, #43014), and we've fixed a bug where 'wheel' actions fail to interleave with other actions (@yezhizhen, #43126).
More on the web platform
Carets now blink in text fields (@mrobinson, #43128).
You can configure or disable blinking carets with --pref editing_caret_blink_time=0 or a duration in milliseconds.
Clicking to move the caret is more forgiving now (@mrobinson, #43238), and moving the caret by a word at a time is more conventional on Windows and Linux, with Ctrl instead of Alt (@mrobinson, #43436).
We've also fixed a bug where pressing the arrow keys in text fields both moves the caret (good) and scrolls the page (bad), and fixed a bug where the caret fails to render on empty lines (@mrobinson, @freyacodes, #43247, #42218).
Input has improved, with more responsive touchpad scrolling on Linux (@mrobinson, @chrisduerr, #43350).
Pointer events and mouse events can now be captured across shadow DOM boundaries (@simonwuelker, #42987), and we've now started working towards shadow-DOM-compatible focus (@mrobinson, #43811).
Pressing Space or Enter inside text fields no longer causes them to be clicked (@mrobinson, #43343).
The lang attribute is now taken into account when shaping, which is important for the correct rendering of Chinese and Japanese text (@RichardTjokroutomo, @mrobinson, #43447).
'font-weight' is now matched more accurately when no available font is an exact match (@shubhamg13, #43125).
Navigation is one of the most complicated parts of HTML: navigating can run some JavaScript that replaces the page, just run some JavaScript, or depending on the response, do nothing at all.
<iframe> makes navigation doubly complicated: the document containing an <iframe> can observe and interact with the document inside the <iframe> in various ways, often synchronously.
This has been the source of many bugs over the years, but we've recently fixed one of those major issues (@jdm, #43496).
new Worker() now supports JS modules (@pylbrecht, @Gae24, #40365), and CanvasRenderingContext2D now supports drawing text with Variation Selectors, allowing you to control things like emoji presentation and CJK shaping (@yezhizhen, #43449).
Servo now fires 'pointerover', 'pointerout', 'pointerenter', and 'pointerleave' events on web content (@webbeef, #42736), 'scroll' events on VisualViewport (@stevennovaryo, #42771), and 'scrollend' events on Document, Element, and VisualViewport (@abdelrahman1234567, @mrobinson, #38773).
We also fire 'error' events when event handler attributes contain syntax errors (@simonwuelker, #43178).
'direction' now works on grid containers (@nicoburns, #42118), SVG images can now be used in 'border-image' (@shubhamg13, #42566), 'linear-gradient()' now dithers to reduce banding (@Messi002, #43603), 'letter-spacing' no longer applies to invisible zero-width formatting characters (@simonwuelker, #42961), and ':active' now matches disabled or non-focusable elements too, as long as they are being clicked (@webbeef, #42935).
DOMContentLoaded timings in PerformanceNavigationTiming are more accurate (@simonwuelker, #43151). PerformancePaintTiming and LargestContentfulPaint are more accurate too, taking <iframe> into account (@shubhamg13, #42149), and checking for and ignoring things like broken images and transparent backgrounds (@shubhamg13, #42833, #42975, #43475).
We've landed partial support for using CSS counters in 'list-style-type' on 'display: list-item' and 'content' on '::marker', but the counter values themselves are not calculated yet, so all list items still read as '0.' or similar.
In any case, you can use a <counter-style-name> or 'symbols()' in 'list-style-type', and 'counter()' and 'counters()' in 'content' (@Loirooriol, #43111).
We've also landed partial support for <marquee> and the HTMLMarqueeElement interface, including basic layout, but the contents are not animated yet (@mrobinson, @lukewarlow, #43520, #43610).
Servo now exposes several attributes that have no direct effect, but are needed for web compatibility (@lukewarlow, #43500, #43499, #43502, #43518):
noHref on HTMLAreaElement
hreflang, type, charset on HTMLAnchorElement
useMap on HTMLInputElement and HTMLObjectElement
longDesc on HTMLIFrameElement and HTMLFrameElement
Web fonts are no longer fetched more than once, and they no longer cause reflow when they fail to load (@minghuaw, #43382, #43595).
We're also working towards better caching for shaping results (@mrobinson, @lukewarlow, @Loirooriol, #43653).
Event handler attribute lookup is more efficient now (@Narfinger, #43337), and weâve made DOM tree walking more efficient in many cases (@Narfinger, #42781, #42978, #43476).
crypto.subtle.encrypt(), decrypt(), sign(), verify(), digest(), importKey(), unwrapKey(), decapsulateKey(), and decapsulateBits() are more efficient now (@kkoyung, #42927), thanks to a recent spec update.
DOM data structures (#[dom_struct]) can refer to one another, with the help of garbage collection.
But when DOM objects are being destroyed, those references can become invalid for a brief moment, depending on the order the GC finalizers run in.
This can be unsound if those references are accessed, which is a very easy mistake to make if the type has an impl Drop.
To help prevent that class of bug, we're reworking our DOM types so that none of them have #[dom_struct] and impl Drop at the same time (@willypuzzle, #42937, #42982, #43018, #43071, #43222, #43288, #43544, #43563, #43631).
Thanks again for your generous support!
We are now receiving 7167 USD/month (+2.6% from February) in recurring donations.
This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 37 GitHub users (+5 from February) that depend on Servo are sponsoring us there.
If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support.
If you're interested in this kind of sponsorship, please contact us at join@servo.org.
As previously announced, the Rust Project is participating in Google Summer of Code (GSoC) 2026. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open source.
A few months ago, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories before GSoC officially started!
The applicants prepared and submitted their project proposals by the end of March. This year, we received 96 proposals, which is a 50% increase from last year. We are glad that there was again a lot of interest in our projects! Like many other GSoC organizations this year, we somewhat struggled with some AI-generated proposals and low-quality contributions generated using AI agents, but it stayed manageable.
GSoC requires us to produce an ordered list of the best proposals, which is always challenging, as Rust is a big project with many priorities. Our mentors examined the submitted proposals and evaluated them based on their prior interactions with the given applicant, their contributions so far, the quality of the proposal itself, but also the importance of the proposed project for the Rust Project and its wider community. We also had to take mentor bandwidth and availability into account. Unfortunately, we had to cancel some projects due to several mentors losing their funding for Rust work in the past few weeks.
As is usual in GSoC, even though some project topics received multiple proposals, we had to pick only one proposal per project topic. We also had to choose between proposals targeting different work to avoid overloading a single mentor with multiple projects. In the end, we narrowed the list down to the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.
Selected projects
On the 30th of April, Google announced the accepted projects. We are happy to share that 13 Rust Project proposals were accepted by Google for Google Summer of Code 2026. That is a lot of projects! We are really happy and excited about GSoC 2026!
Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):
Congratulations to all applicants whose project was selected! Our mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.
We are excited to mentor three contributors who already experienced GSoC with us in the previous year. Welcome back, Kei, Marcelo and Shourya!
We would like to thank all the applicants whose proposal was sadly not accepted for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors who would like to work on projects that would help the Rust Project and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.
There is a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!
The accepted GSoC projects will run for several months. After GSoC 2026 finishes (in autumn of 2026), we will publish a blog post in which we will summarize the outcome of the accepted projects.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Relatively few perf-affecting changes this week. The perf report is more positive
than what users will actually see, due to the -Zincremental-verify-ich-related
improvements in #155473.
No items entered Final Comment Period this week for
Language Reference,
Language Team or
Leadership Council.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
A small pet peeve with fetching the latest main on jujutsu is that I like to move all my WIP patches onto the new one. That's also nice because jj doesn't make me fix the conflicts immediately!
The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.
jj git fetch
# assuming 'z' is the rev-id of the previous main.
jj rebase -s "mutable()&z+" -d main
I haven't learnt how to make aliases accept params with it yet, so this will have to do for now.
Update 2: After some months of usage across multiple repositories, I've found it better to be explicit about the destination, since main, trunk, or others can be tracked with a combination of repository aliases too.
[aliases]
# Update all revs to the latest main; point to the previous one.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "$2"
""", ""]
You can use this to rebase all your WIPs like so:
$ jj hoist <prev_main> <current_main>
If my previous main revision was kz, this is what I would end up doing:
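$ jj hoist kz main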
One of the most exciting aspects of bringing Thunderbird Pro to life is the opportunity to build an email service from Thunderbird together with our community, giving users the control and freedom they expect without relying on third-party email service providers.
Over the past few months, we've been checking in with our community through quick surveys, and the feedback is clear: people care most about Thundermail. We're listening and working to deliver what you expect as quickly as possible, focusing our resources on building a great Thundermail experience first, with Appointment and Send as power features alongside that foundation. We're also adjusting the initial price to better align with your expectations.
We'll be sending out the first wave of Early Bird Beta invites next month. If you haven't already, please join the waitlist HERE and keep an eye on your inbox. We're excited to get Thundermail into your hands and continue building it together.
Latest Thundermail Developments
Our work right now is focused on making Thundermail reliable and easy to set up, and on ensuring a smooth onboarding experience with an intuitive design, both visually and functionally.
Sign-in and Setup
A new connection flow is in development that will make it much easier to add a Thundermail account to Thunderbird, including options like QR code setup and deeper integration within the app. We have also fixed a range of sign-in issues, improved domain setup, and made it easier to move from account creation to actually using the service.
The account dashboard has been updated for a cleaner look, smoother onboarding, and easier access to the key details our users care about. Settings like app passwords, custom domains, and aliases are now front and center when you first sign in.
Infrastructure
On the infrastructure side, we're continuing to improve stability and performance. This includes completed work on upgrading Stalwart to strengthen spam detection so legitimate emails are far less likely to end up in spam, along with improvements to how we monitor the services so problems are easier to catch and less likely to affect users. Everyday actions like archiving and managing settings should feel more intuitive for users, and the web app, add-ons, and related services now work together more smoothly.
April Onward
Next up for the account experience is better alias and custom-domain handling, and even better integration between Thunderbird and the web account flow.
The dashboard is also getting another round of refinement so settings, account details, and subscription information are easier to understand at a glance.
Thundermail work continues by focusing on reliability and security, including aliases, delivery, transport security, and admin access controls.
There will also be a final layer of polish across the entire experience between the web app, add-on, and desktop flows.
Finally: Webmail is moving up our priority list. While still early, development is actively progressing and we're aiming to bring a usable experience much sooner than originally planned.
Progress on Appointment and Send
While Thundermail is our primary focus, work on other Thunderbird Pro services is continuing.
For Appointment, we've made progress on reliability and backend performance, including improvements to how calendar tasks are processed and fixes to event handling. Our priorities heading into the release are also focused on reliability, with refinements to calendar connections, event syncing, Zoom access, and a simpler first-time setup flow.
For Send, we've made substantial visual improvements so that it feels like a more natural part of Thunderbird Pro. We've also made a number of security improvements and are continuing to evaluate infrastructure choices to ensure long-term reliability. Our priorities for Send in the coming months include better encryption-key handling and clearer password-protected downloads.
What's Next
We'll begin inviting people from the waitlist into the Early Bird beta shortly. If you haven't signed up yet, now's the time. Your feedback will directly shape how Thundermail evolves.
Improvements to tabs.move() split-view support: extensions can now swap tabs within a split view, and passing a list of tabs that explicitly separates split-view members will properly unsplit the view to honor the requested ordering – Bug 2016762 / Bug 2022549
Alexandre Poirot [:ochameau] improved performance of the Inspector Rules view by supporting incremental updates, making some perf tests 2 times faster (#2018538)
The Tab Notes feature is now live in Firefox 149 in Firefox Labs! We have a few tweaks and fixes coming in Firefox 150 and 151, but we're mostly collecting feedback from users.
Fixed a long-standing issue where extension paths stored in extensions.json (and addonStartup.json.lz4) became incorrect after restoring a Firefox profile to a different location, causing all previously installed add-ons to fail to load – Bug 1429838
DevTools
geppy renamed some of our CSS variables to be more explicit (#1767617)
Sebastian Zartner [:sebo] moved the :visited pseudo-class to the element-specific section in the Inspector pseudo-class (:hov) panel (#2017985)
Ben improved the RDM "Add Custom Device" form by making sure it worked for any kind of locale (#1705177)
Nicolas Chevobbe [:nchevobbe] added a <global> node on objects which are not from the top-level debugged page (it's always visible in the browser console/browser toolbox) (#1962343)
Nicolas Chevobbe [:nchevobbe] added proper autocomplete (including things like interpolation/color space) for linear-gradient() (and repeating-linear-gradient()) (#2025761)
The main impact of this is that when changing globals in head.js files, VS Code's ESLint reporting will pick up the changes when you have the test files open that use the head.js file.
The Firefox for Android app has always had a complicated build process – we're cramming a complex cross-platform browser engine and all the related components that make it work on Android into one package. In its current form, it lives in the Firefox mono-repo at mozilla-central (now mozilla-firefox when using the git repository).
I wanted to document my "artifact-mode" environment here since it's worked quite successfully for me for many years with minor changes.
NOTE: After a fresh clone of the mono-repo, don't forget to first run ./mach bootstrap and follow its prompts.
mozconfig
My mozconfig below is enabled for artifact mode, but occasionally I switch between various configurations. You can see those commented out, with these few extra notes:
I like to separate out my objdirs to avoid cache pollution between the different build types. I think you can get away without specifying this; an objdir for your build type and arch will be generated.
sccache speeds up the native portion of full builds after the first slow one, but it's hit or miss if you fetch from the remote repository and don't rebuild as often.
I don't care to manually run the clobber step, and I don't fully understand why that isn't always done automatically.
# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android
# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
# ac_add_options --target=i686
# For newer phones or Apple silicon
ac_add_options --target=aarch64
# For x86_64 emulators (and x86_64 devices, which are even less common):
# ac_add_options --target=x86_64
# sccache will significantly speed up your builds by caching
# compilation results. The Firefox build system will download
# sccache automatically.
# This only works for non-artifact builds.
#ac_add_options --with-ccache=sccache
# Enable artifact builds; manager-mode.
ac_add_options --enable-artifact-builds
# Write build artifacts to...
## Full build dir
#mk_add_options MOZ_OBJDIR=./objdir-droid
#mk_add_options MOZ_OBJDIR=./objdir-desktop
## Artifact builds
mk_add_options MOZ_OBJDIR=./objdir-frontend
# Automatic clobbering; don't ask me.
mk_add_options AUTOCLOBBER=1
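With the mozconfig in place, the day-to-day loop is simply building through mach (nothing exotic here; this is the standard entry point):

# From the repo root; in artifact mode this fetches prebuilt native components.
./mach build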
JAVA_HOME
Sometimes you might find yourself needing to run a (non-mach) command in the terminal. Those typically need to invoke some parts of Gradle for an Android build, so it's best to make sure they are using the same JDK as the bootstrapped one in the mono-repo. This avoids weird build errors where something that compiles in one place isn't working in another (like Android Studio).
The JDKs typically live in ~/.mozbuild/jdk/, and if you've been around for ~6 months, you end up with multiple versions after every JDK bump:
$ ls -l ~/.mozbuild/jdk/
drwxr-xr-x@ - jalmeida 15 Apr 2025 jdk-17.0.15+6
drwxr-xr-x@ - jalmeida 15 Jul 2025 jdk-17.0.16+8
drwxr-xr-x@ - jalmeida 21 Oct 2025 jdk-17.0.17+10
drwxr-xr-x@ - jalmeida 20 Jan 09:00 jdk-17.0.18+8
drwxr-xr-x@ - jalmeida 26 Feb 15:04 mozboot
You can find some way to point your latest JDK to one location, or you can be lazy like me and pick the latest version to assign as your JAVA_HOME property by adding this to your shell's RC file:
export JAVA_HOME="$(ls -1dr -- $HOME/.mozbuild/jdk/jdk-* | head -n 1)/Contents/Home"
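To sanity-check the result, you can ask the resolved JDK for its version (this just invokes the java binary JAVA_HOME now points to):

$ "$JAVA_HOME/bin/java" -version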
Android Studio
Similarly for Android Studio, let's do the same so that environment is identical. Head to Settings | Build, Execution, Deployment | Build Tools | Gradle, and ensure that the "Gradle JDK" path is set to JAVA_HOME.
Lately, the default seems to be for it to follow GRADLE_LOCAL_JAVA_HOME, which is a property we can't easily override, so we have to set this manually ourselves.
Using the same Android SDK also helps speed things up and avoids source confusion. You can typically find it in ~/.mozbuild/android-sdk-macosx and update it at Settings | Languages & Frameworks | Android SDK.
Debugging
This section is for miscellaneous build-error situations that come up. Assuming mach build works and there are no known Android build changes, my solution has typically been the same.
For example, the other day I fetched another engineer's patch to test out locally as part of reviewing it, where I faced the error message below:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
> Internal compiler error. See log for more details
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to generate a Build Scan (powered by Develocity).
> Get more help at https://help.gradle.org.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:135)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:288)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:133)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:121)
at org.gradle.api.internal.tasks.execution.ProblemsTaskPathTrackingTaskExecuter.execute(ProblemsTaskPathTrackingTaskExecuter.java:41)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.DefaultNodeExecutor.executeLocalTaskNode(DefaultNodeExecutor.java:55)
at org.gradle.execution.plan.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:34)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:339)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:84)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:339)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:328)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: org.gradle.workers.internal.DefaultWorkerExecutor$WorkExecutionException: A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
at org.gradle.workers.internal.DefaultWorkerExecutor$WorkItemExecution.waitForCompletion(DefaultWorkerExecutor.java:289)
at org.gradle.internal.work.DefaultAsyncWorkTracker.lambda$waitForItemsAndGatherFailures$2(DefaultAsyncWorkTracker.java:130)
at org.gradle.internal.Factories$1.create(Factories.java:33)
at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withoutLocks$2(DefaultWorkerLeaseService.java:344)
at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:342)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:326)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLock(DefaultWorkerLeaseService.java:331)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:126)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:92)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForAll(DefaultAsyncWorkTracker.java:78)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForCompletion(DefaultAsyncWorkTracker.java:66)
at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:260)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:237)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:220)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:203)
at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:170)
at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:39)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:28)
at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
at org.gradle.internal.Either$Right.fold(Either.java:176)
at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:64)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:35)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:62)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:40)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:76)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:45)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:136)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:66)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:38)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:297)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:31)
at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:64)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:132)
... 30 more
Caused by: org.jetbrains.kotlin.gradle.tasks.FailedCompilationException: Internal compiler error. See log for more details
at org.jetbrains.kotlin.gradle.tasks.TasksUtilsKt.throwExceptionIfCompilationFailed(tasksUtils.kt:22)
at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:112)
at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:75)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:68)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:64)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:61)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:61)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:58)
at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:176)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:194)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:127)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:169)
at org.gradle.internal.Factories$1.create(Factories.java:33)
at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withLocksAcquired$0(DefaultWorkerLeaseService.java:269)
at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocksAcquired(DefaultWorkerLeaseService.java:267)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:259)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:127)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:132)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:164)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:133)
... 2 more
The full trace was long and didn't seem related to a code failure in the module itself. So I applied the usual fix, which is always the same:
./mach build
In Android Studio, File > Sync Project with Gradle Files.
Yup, that's all. Very simple and boring.
1. With Jujutsu, this is the moz-phab command I use, which has made it easier to manage review patches: moz-phab patch <patch-id> --no-branch --apply-to main@origin
Hey everyone, we've been working on some exciting changes, and want to share them with you.
But first, let me introduce myself. I am Christos, the new Sr. Developer Relations engineer in Add-ons, and I'm excited to write my first post on the Add-ons engineering blog.
Deprecations and changes
To start, I'm covering a few changes: avoiding content script execution in extension contexts, decoupling file access from host permissions, and improving the display of pageAction SVG icons.
executeScript / registerContentScript in moz-extension documents
Deprecated: Firefox 149. Removed: Firefox 152.
Starting in Firefox Nightly 149 and scheduled for Firefox 152, the scripting and tabs injection APIs no longer inject into moz-extension:// documents. This change brings the API in line with broader efforts to discourage string-based code execution in extension contexts, alongside the default CSP that restricts script-src to extension URLs and the removal of remote source allowlisting in MV3 (bug 1581608).
Firefox emits a warning when this restriction is hit, so you can identify and address any such usage in your extensions. This is an example of the warning message:
Content Script execution in moz-extension document has been deprecated and it has been blocked
To work around this change, you can:
Import scripts directly in the extension page's HTML.
Use module imports or standard <script> tags in extension documents.
Restructure code to avoid dynamic code execution patterns. An extension can run code in its documents dynamically by registering a runtime.onMessage listener in the document's script, then sending a message to trigger execution of the required code, as sketched below.
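For instance, a minimal sketch of that pattern (the action name and message shape are illustrative, not from the post):
// In the extension document's own script, statically loaded via a <script> tag:
browser.runtime.onMessage.addListener((message) => {
  if (message.action === "refresh-view") {
    // Statically defined code runs here instead of a dynamically injected string.
    document.body.dataset.state = "refreshed";
  }
});
// Elsewhere (e.g., the background script), trigger it with a message
// instead of injecting code into the moz-extension:// page:
browser.runtime.sendMessage({ action: "refresh-view" });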
File access becomes opt-in
Target: Firefox 152
Extensions requesting file://*/ or <all_urls> currently trigger the "Access your data for all websites" permission message and, when granted, can run content scripts on file: URLs. From Firefox 152, file access requires an opt-in for all extensions, including those already installed (bug 2034168).
pageAction SVG icon CSS filter (automatic color scheme)
Removed: Firefox 152
Firefox has been automatically applying a greyscale and brightness CSS filter to pageAction (address bar button) SVG icons when a dark theme is active. This was intended to improve contrast, but it actually reduced contrast for multi-color icons and caused poor visibility for some extensions, such as Firefox Multi-Account Containers.
For icons that adapt to light and dark color schemes, you can now use @media (prefers-color-scheme: dark) in the SVG icon, or use the MV3 action manifest key and specify theme_icons.
Here is an example of how to use a `prefers-color-scheme` media query in a pageAction SVG icon to control how the icon adapts to dark mode:
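(A minimal sketch; the path and colors below are placeholders.)
<!-- icon.svg: an illustrative pageAction icon -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">
  <style>
    path { fill: #333; } /* default (light) color scheme */
    @media (prefers-color-scheme: dark) {
      path { fill: #eee; } /* higher-contrast fill when a dark theme is active */
    }
  </style>
  <path d="M2 2h12v12H2z"/>
</svg>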
Use of prefers-color-scheme media queries is also allowed in MV2 browserAction and MV3 action SVG icons as an alternative to the theme_icons manifest properties.
Now to the new stuff. Here, you get the ability to use popups without user activation, initial support for the new tab split view feature, and WebAuthn RP ID assertion.
openPopup without user activation (Firefox Desktop)
Available: Firefox 149 Desktop
action.openPopup() and browserAction.openPopup() no longer require a user gesture on Firefox Desktop. You can open your extension's popup programmatically, e.g., in response to a native-messaging event, an alarm, or a background-script condition.
This change is part of the ongoing cross-browser alignment work in the WebExtensions Community Group to harmonize popup behavior across engines.
Example
Before (Firefox < 149): must hang off a user gesture, e.g., a context menu click:
browser.menus.create({
id: "nudge",
title: "Open popup",
contexts: ["all"],
});
browser.menus.onClicked.addListener((info) => {
if (info.menuItemId === "nudge") {
browser.action.openPopup(); // user clicked the menu, so this is allowed
}
});
After (Firefox >= 149): same intent, no user gesture needed, fired from a timer:
browser.alarms.create("nudge", { delayInMinutes: 1 });
browser.alarms.onAlarm.addListener((alarm) => {
if (alarm.name === "nudge") {
browser.action.openPopup(); // works without a click
}
});
It's the same call with the same result; only the trigger changes from a user-action handler to any background event.
splitViewId in the tabs API
Available: Firefox 149
Firefox 149 introduces a new read-only splitViewId property on the tabs.Tab object to expose Firefox's new split view feature (where two tabs are displayed side-by-side in one window). Split views are treated as one unit, and WebExtensions treat them the same way.
In Firefox 150, extensions can swap tabs within a split view. This update also fixes a confusing issue where reversing tab order through the user interface caused the tabs.onMoved event to report inaccurate values. Additionally, Firefox introduces unsplitting behavior for extensions: when tabs.move() is called with split-view tabs positioned non-adjacently in the array, Firefox now removes the split view after the call rather than keeping the tabs locked together.
Here is an example of using the new splitViewId property.
// Log whenever a tab joins or leaves a split view.
browser.tabs.onUpdated.addListener((tabId, changeInfo) => {
if (!("splitViewId" in changeInfo)) return;
if (changeInfo.splitViewId === browser.tabs.SPLIT_VIEW_ID_NONE) {
console.log(`Tab ${tabId} left its split view`);
} else {
console.log(`Tab ${tabId} joined split view ${changeInfo.splitViewId}`);
}
});
// Firefox desktop also supports a filter to limit onUpdated events:
// }, { properties: ["splitViewId"] });
Firefox 151 enables extensions to move split views in tab groups. More improvements are coming, such as the ability to create split views from extensions (bug 2016928).
WebAuthn RP ID assertion
Available: Firefox 150
Previously, web extensions couldn't use WebAuthn credentials registered on their company's website or mobile apps. When extensions tried to set a custom Relying Party ID (RP ID) in navigator.credentials.create() or navigator.credentials.get(), Firefox rejected it with "SecurityError: The operation is insecure."
With Firefox 150, extensions can now assert a WebAuthn RP ID for any domain they have host permissions for when calling navigator.credentials.create() or navigator.credentials.get(). This applies to both the publicKey.rp.id field during credential creation and the publicKey.rpId field during authentication.
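For example, a minimal sketch of an authentication call from an extension page ("example.com" is a placeholder domain the extension must hold host permissions for):
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // from your server in practice
    rpId: "example.com", // previously rejected with SecurityError; allowed from Firefox 150
  },
});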
A critical detail for server-side validation: when relying party servers validate credentials created by extensions, they must account for different origin formats across browsers. In Chrome, the origin follows the pattern chrome-extension://extensionid, which matches the extension's location.origin. Firefox 150 introduces a new stable origin format: moz-extension://hash, where the hash is a 64-character SHA-256 representation of the extension ID (using characters a-p to represent hex values). Importantly, this hash-based origin is the same for all users, unlike Firefox's existing UUID-based moz-extension:// URLs used for extension documents.
To extract the origin from a credential for validation:
let clientData = JSON.parse(new TextDecoder().decode(
publicKeyCredential.response.clientDataJSON
));
console.log(clientData.origin);
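And a minimal Node.js sketch of computing the expected Firefox origin for comparison, assuming the SHA-256 is taken over the extension ID string and each hex digit is mapped onto the letters a-p as described above:
const crypto = require("crypto");
// Hypothetical helper: derive the stable moz-extension:// origin for an extension ID.
function expectedFirefoxOrigin(extensionId) {
  const hex = crypto.createHash("sha256").update(extensionId).digest("hex");
  // Map each hex digit (0-f) onto a-p, yielding the 64-character hash described above.
  const mapped = [...hex].map((c) => "abcdefghijklmnop"[parseInt(c, 16)]).join("");
  return `moz-extension://${mapped}`;
}
// Compare clientData.origin (extracted above) against expectedFirefoxOrigin("<extension id>").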
It's been a very busy couple of months as we've reworked processes & priorities and established a roadmap for both iOS and Android. We are determining how best we can coordinate with the community, and think that our roadmap for the year has a good balance of fixes and features. Today, I want to talk about our contributors and pull requests, Notifications in the Android app, progress in the iOS app, and an overview of our roadmap for both apps this year.
Contributors & Pull Requests
We are so grateful for the support and code contributions of many members, whether building items on our roadmap, improving the user experience, or, of course, translating. As we work on our roadmap priorities, we will make time to review PRs, discuss them weekly, and prioritize those that help solve issues and bugs or align with our roadmap items. Please be patient with our Pull Request pipeline. Typically, in working with the community, we try to react very quickly.
Roadmap
For Android, we've chosen the items on our roadmap because we think these will be the highest-impact features and bring the most value to everyone. Our focus this year is to simplify and modernize the Android codebase. This means reworking some of the architecture. This will be super helpful for us to move more quickly and will reduce complex bugs. The app has an older codebase, and like many older ones, it has its challenges. We have three full-time Android engineers and several community contributors, and we hope to better position ourselves to move quickly. At a high level, Android is focusing on the rearchitecture, a better Message List experience, and Message Reader screens. We are also simplifying how users can connect to Thunder Mail as we open it up.
Notifications
One thing that is at the top of my mind right now, too, is Push Notifications, specifically changes that Google has made to background processes, which affect our Notifications. We are looking into what we can do to solve this, so know that it has become a top priority for us. I've been asked, "Why is it so hard for Thunderbird to get Push Notifications right?" and I wanted to speak to some of the challenges we have. Most apps' Notifications are triggered by their own web services, which then send Notifications through Apple or Google, who pass them to users. But email is different. In an email client, we typically don't own our own backend services; other companies do (Microsoft, Google, Hotmail, Yahoo, Proton, etc.). And they can have their own flavors of IMAP - the protocol we use to fetch emails - and no specific Push Notification implementation.
So we have a workaround: polling those providers every X minutes to ask for new emails and triggering local notifications - but we can't hook into a native Push Notification process like your banking app does, for example. This applies to the IMAP implementation. The JMAP implementation (think modern email protocols) has something in place that we can more readily consume. Another challenge is battery impact from how often we poll the providers, and we need specific permissions from Google to run this process in the background. Those permissions changed recently, which is why Notifications are having issues.
I've simplified some pieces here, but hopefully that gives you an idea of some of the complexity and tradeoffs that we are working with. With all of that said, this is very important to us, and it is our users' biggest pain point. It is becoming our biggest need for a fix. I'll give an update on where that sits within the roadmap in the next progress report, once we have explored what solutions we can provide.
iOS Progress
For the iOS roadmap, everything is moving along well. We have been wrapping up most of our IMAP & SMTP tickets, and we are moving into the Account Data pieces to manage accounts and authorizations. We will also have a new member join us in the next couple of weeks. This will add some speed, but we've already made good progress in getting the inner pieces together - what I consider the most complex parts. As we move to more standard mobile backend pieces and more standard UI, we leave the world of unknown unknowns and will be picking up steam.
At a high level, our iOS roadmap is to build out these screens:
Account Setup and Drawer
Messages: List, Reader, Compose, Search
And have these pieces in place:
IMAP
SMTP
MIME
OAuth
Encryption
Email Composition
And our target is still the end of the year for the iOS release.
Thank You!
Again, we are so grateful to you, our community, for your support, and we are excited for this next quarter as we start to see the fruits of our labors.
The Sync Storage team has landed official PostgreSQL support for Firefox Sync.
Historically, Sync has only officially supported Google Spanner as a storage backend, with MySQL working unofficially. That has been a pretty high barrier to entry for people self-hosting their own services.
With PostgreSQL support, we hope to make self-hosting more approachable and continue supporting people who want the agency of hosting their data on infrastructure they control.
There is updated documentation for running it with Docker, including a one-shot docker compose setup:
If you've been interested in self-hosting Sync but were put off by the storage requirements, take another look. If you run into bugs or have feedback, please file issues here:
I want Phabricator emails to have a Gmail label so I can know which patches I reviewed that then received follow-up comments from other folks.
This is useful when I review a patch and then need to respond in a timely manner to discussions in comment threads that I've created.
It's difficult to do this with Gmail filters the way you can for Bugzilla, because Phabricator emails have fewer identifiers that the more simplistic Gmail filter parameters can match on.
Today I learnt that there is an X-Phabricator-Stamps header in those Phabricator emails that lets you identify yourself as the reviewer on a patch. Using that information, I wrote the Google Apps Script below to run every minute while avoiding re-processing the same email twice.
A couple of variables were added at the top, and some console.logs are sprinkled around for my own debugging.
Code
var REVIEWER = "jonalmeida";
var LABEL_NAME = "Phabricator/Comments";
var BODY_MATCH = "commented on this revision.";
var SENDER = "phabricator@mozilla.com";
/**
* Run once manually to install the per-minute trigger.
*/
function install() {
uninstall();
ScriptApp.newTrigger('processInbox')
.timeBased()
.everyMinutes(1)
.create();
}
/**
* Run once manually to remove the trigger.
*/
function uninstall() {
ScriptApp.getProjectTriggers().forEach(function(t) {
ScriptApp.deleteTrigger(t);
});
PropertiesService.getScriptProperties().deleteProperty('lastRun');
}
/**
* Every run, we try to avoid processing the same email twice because
* there is no API trigger to run a script on every new email received.
*/
function processInbox() {
var props = PropertiesService.getScriptProperties();
var lastRun = parseInt(props.getProperty('lastRun') || '0');
var now = Math.floor(Date.now() / 1000);
// On first run, look back 2 minutes
if (lastRun === 0) {
lastRun = now - 120;
}
var label = GmailApp.getUserLabelByName(LABEL_NAME);
if (!label) {
label = GmailApp.createLabel(LABEL_NAME);
}
console.log("last run: " + lastRun);
var threads = GmailApp.search("from:" + SENDER + " after:" + lastRun);
console.log("threads to process: " + threads.length);
for (var i = 0; i < threads.length; i++) {
var thread = threads[i];
var messages = thread.getMessages();
console.log("messages to process: " + messages.length);
for (var j = 0; j < messages.length; j++) {
if (hasReviewerStamp(messages[j])) {
thread.addLabel(label);
console.log(thread.getFirstMessageSubject());
break;
}
}
}
props.setProperty('lastRun', String(now));
}
function hasReviewerStamp(message) {
var raw = message.getRawContent();
var match = raw.match(/^X-Phabricator-Stamps:\s*(.+)$/m);
if (!match) {
return false;
}
var stamps = match[1].trim().split(/\s+/);
return (stamps.indexOf("reviewer(@" + REVIEWER + ")") > -1) && raw.indexOf(BODY_MATCH) > -1;
}
/**
* For debugging - see the list of labels you can search which
* differs from what is used in the Gmail UI filter.
*/
function listAllLabels() {
console.log("All labels");
var labels = GmailApp.getUserLabels();
for (var i = 0; i < labels.length; i++) {
console.log(labels[i].getName());
}
}
Dear reader. I am sure you have read a lot of blog posts about AI in the past
weeks or months. And now I too am writing. Mostly to help me cope with what my
kind of hacker people would call out as hypocrisy or
cognitive dissonance.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
This week was a bit all over the place, but the largest regressions were either
already fixed or they are being investigated. There were also a couple of nice perf. wins.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
We recently released the telemetry alerting beta, and announced it in the blog post here! This blog post will dive into the details of how it works across Treeherder and MozDetect. At a high level, MozDetect handles the change point detection for telemetry probes, and Treeherder handles storing the detections and producing the emails/bugs for them.
MozDetect
All of the existing, and any future, change point detection techniques used for telemetry alerting are built in MozDetect. Having these live outside of Treeherder gives a low barrier to entry for adding new features and testing existing ones, without having to set up everything needed for alerting in Treeherder. It's built as a Python module that is run through uv, which makes it very easy for anyone to run the code thanks to uv's excellent Python version and dependency management. How to work with the code in this repository is outlined here, along with how to add your own techniques to it (note that access to mozdata through gcloud is required for this).
Detectors are split into two parts: (i) a detector that performs a comparison between two groups, and (ii) a detector that performs detection on a time series (using the detector from (i)). Our default detection technique, called cdf_squared, lives here. The timeseries_detector_name is the name that will be used to access the detector from the telemetry probe side through the change_detection_technique field. The only method that absolutely needs to be implemented is detect_changes, and it must return a list of Detection objects. These detection objects contain all the necessary information for producing an alert. There is also an optional_detection_info field that can contain additional things like attachments to be added to Bugzilla bugs, and additional_data that can hold JSON data for storage in the DB. The cumulative distribution function (CDF) squared technique uses these to store the CDF before and after the detection, along with a graph of these as an attachment for the Bugzilla bug.
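As a rough illustration of that interface (only detect_changes, Detection, timeseries_detector_name, optional_detection_info, and additional_data come from the post; the rest is assumed scaffolding, not MozDetect's actual API):
from dataclasses import dataclass, field

@dataclass
class Detection:
    location: int   # index in the time series where the change was found
    direction: str  # e.g. "up" or "down"
    optional_detection_info: dict = field(default_factory=dict)  # e.g. bug attachments
    additional_data: dict = field(default_factory=dict)          # JSON stored in the DB

class ThresholdTimeseriesDetector:
    # Referenced from a probe's change_detection_technique field.
    timeseries_detector_name = "threshold_example"

    def detect_changes(self, timeseries):
        # Toy logic: flag any day-over-day jump above a fixed threshold.
        detections = []
        for i in range(1, len(timeseries)):
            delta = timeseries[i] - timeseries[i - 1]
            if abs(delta) > 10:
                detections.append(Detection(
                    location=i,
                    direction="up" if delta > 0 else "down",
                ))
        return detections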
Example of a CDF graph that is provided in bugs.
CDF Squared Detection Technique
The CDF squared technique detects changes in time-series histogram data by comparing CDFs between consecutive windows. It takes two CDFs, each representing the distribution of measurements over a time window, and computes the sum of squared differences between the two CDFs at each bin. The sign of the summed linear difference is then used to assign a direction to the squared difference score so that the output encodes whether the distribution moved to higher values (right shift) or lower values (left shift).
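In formula form (my notation, not the codebase's), with F_before and F_after the two CDFs evaluated at each histogram bin b, the base comparison is roughly:
\mathrm{score} = \operatorname{sign}\Big(\sum_{b} \big(F_{\mathrm{after}}(b) - F_{\mathrm{before}}(b)\big)\Big) \cdot \sum_{b} \big(F_{\mathrm{after}}(b) - F_{\mathrm{before}}(b)\big)^{2}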
For time-series detection, this base comparison is applied in a rolling fashion across the full history of data. Each day's 7-day smoothed CDF is compared against the next one, producing a continuous signal of squared CDF differences over time. A Butterworth low-pass filter is then applied to that signal to remove high-frequency noise while preserving genuine trend changes. Finally, scipy's find_peaks function is used to locate statistically significant peaks and valleys in the filtered signal using a dynamic alert threshold based on the historical data. Information is extracted from those areas and then used to build the detection information needed for the alert generation process.
Alerting
Our alerting tooling lives in the Treeherder codebase. It's run through our PerfSheriff Bot (called Sherlock) and runs once per day. When a detection is produced from MozDetect, a telemetry alert is added to the database and then the TelemetryAlertManager is called to handle it. The manager's tasks are split into 6 ordered phases:
Update alerts with changes from Bugzilla. This step ensures that any changes that happen in the bugs filed are mirrored into our database. Currently, we only track resolution changes here.
Comment on existing bugs. This step is for updating existing bugs with information from new alerts. This step is not currently being used. In the future, this could be used to inform probe owners that a probe which doesn't produce bugs has produced an alert in the same time range.
File new bugs for alerts. This step handles filing bugs for any new alerts on probes set up for producing bugs.
Modify existing bugs with new alerts. This step handles any modifications needed to existing bugs based on the new bugs that were created. Currently, the "See Also" field is modified for existing bugs to include the new bugs.
Produce emails for new alerts. This step handles producing emails for any alerts set up to produce emails.
Housekeeping. This step handles redoing any failures that happened above, in either the current run or past runs. Currently, it's being used to retry bug modifications and re-send emails when we encounter a failure there. This excludes retrying bug filing, since we delete the alert in that case and retry it the next time the alert is generated.
After the housekeeping step, the manager is done for the day and runs again on the next day to handle any updates and new alerts. Contrary to how alerting works for performance tests in CI, this process is fully automated and requires no human input at any point.
Setting up telemetry probes for alerting happens on the mozilla-central side in their probe schema using the new monitor field in the metadata section (example for email alerts, example for bug alerts). The telemetry alerting documentation has information about how to do this. We then use an index.json file from the telemetry dictionary to gather all the probes that should be alerting. The information there is supplemented by more granular information later in the pipeline to gather things like the time unit used for the probe to be able to better format the Bugzilla bug table.
Once a telemetry probe is set up for alerting and is found by our system, the owners (those listed in the email notification fields) will begin either receiving emails or have bugs produced for them. These can also be viewed by everyone on this dashboard.
Getting the project to this point involved work from people across multiple teams here at Mozilla. Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder-related changes.
If you hit any issues with the telemetry alerting system, or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or in #perftest on Matrix.
We're happy to announce that the Telemetry Alerting beta is now open to everyone!
Monitoring for changes in telemetry probes that you own can be difficult to do on a regular and continuous basis. With telemetry alerting, that changes today! You can now quickly set up your timing distribution probes for automated monitoring on Windows with notifications through email or a Bugzilla bug.
To get started, if you only need email alerts, simply add monitor: True to the metadata section of your probe (example).
Example of an email alert.
If you would prefer to receive Bugzilla bugs when a change is detected, set the monitor field like so (example):
monitor:
  alert: True
  lower_is_better: True/False # Optional
  bugzilla_notification_emails:
    - <YOUR-BUGZILLA-EMAIL-HERE>
More information about telemetry alerting, and how to set up a probe, can be found here in the documentation. There's also a dashboard that can show you all of the existing telemetry alerts along with some detection information. For now, we only support change detection on Windows for `timing_distribution` probes (see here for other desktop platforms, and Android).
Please note that this is an open beta and we are actively looking for feedback on this system. If you hit any issues, or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or in #perftest on Matrix.
Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder changes.
For a more detailed look at how this works, see this blog post.
Mobile browsing hasn't kept up with how people actually use their phones.
Right now, even basic tasks can feel harder than they should. Finding what you need can mean scrolling through ads and filler content, keeping track of too many tabs, or thinking twice about how private your connection is.
A mobile browser should do more - and we're raising the bar. Firefox is rolling out a set of updates that build on our most popular desktop features and adapt them for how you browse on the go. Here's what's out now, and what's coming next.
When you're following a recipe, reading a product review, or deciding whether a long article is worth your time, getting to the useful part can take longer than it should.
With Shake to Summarize, you can shake or tap your phone to generate a quick summary of the page. Currently available for iOS users in English, we're expanding availability to all iOS users in German, French, Spanish, Portuguese, Italian and Japanese starting with Firefox 150 on April 21. We'll also soon be making Shake to Summarize available to Android users in English, so they too can get to the key points of any article in seconds.
AI features are becoming a more common part of browsers - but not everyone wants the same experience. Firefox gives you a say in how they're used. With AI Controls, you can turn AI features off entirely, enable only the ones you want, or adjust things over time. Rolling out on Android and iOS beginning May 21.
Firefox's free built-in VPN covers up to 50 gigabytes of your browsing in Firefox each month, across desktop and mobile devices. It adds a layer of protection to your browsing activity by masking your IP address - especially useful when you're on public Wi-Fi. Unlike many "free VPNs" that rely on ads or selling user data to generate revenue, Firefox is built with a different model: no selling your browsing data, no injecting ads into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service. Rolling out on Android soon.
Tab Groups have been among the most-requested mobile features from our Mozilla community, and they're coming to mobile soon. You'll be able to group related tabs to stay organized, whether you're comparing restaurants, planning a trip or saving articles to read later.
We're also building toward smart groupings, where Firefox can automatically suggest tab groups for you. Rolling out on Android soon.
More updates, built around how you browse on mobile
Your phone comes with a browser. That doesn't mean it has to stay your default.
"Firefox exists to give people a better way to experience the web, and that has to be just as true on mobile as it is on desktop," said Ajit Varma, head of Firefox. "For many people, their phone is their primary way of getting online, and they deserve a browser that's fast, intuitive and built around their needs. That's why we're investing in mobile more than ever before. We're building for the millions of people who choose Firefox every day, and giving even more people a reason to do the same."
Firefox is building a mobile experience designed around how people browse - with tools that help you move faster, stay organized and stay in control.
These updates begin rolling out in April with more on the way.
Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.
As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week's release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.
As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been a red alert in 2025, and so many at once makes you stop and wonder whether it's even possible to keep up.
Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn't finished, but we've turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.
Until now, the industry has largely fought security to a draw. Vendors of critical internet-exposed software like Firefox take security extremely seriously and have teams of people who get out of bed every morning thinking about how to keep users safe. Nevertheless, we've all long quietly acknowledged that bringing exploits to zero was an unrealistic goal. Instead, we aimed to make them so expensive that only actors with functionally unlimited budgets can afford them, and that the cost of burning such an expensive asset disincentivizes those actors against casual use.
This is because security to date has been offensively-dominant: the attack surface isn't infinite, but it's large enough to be difficult to defend comprehensively with the tools we've had available. This gives attackers an asymmetric advantage, since they only need to find one chink in the armor.
We use defense-in-depth to apply multiple layers of overlapping defenses, but no layer is bulletproof. Firefox runs each website in a separate process sandbox, but attackers try to combine bugs in the rendering code with bugs in the sandbox to escape to a more privileged context. We've led the industry in building and adopting Rust, but we still can't afford to stop everything to rewrite decades of C++ code, especially since Rust only mitigates certain (very common) classes of vulnerabilities.
We pair defense-in-depth engineering with an internal red team tasked with staying on the leading edge of automated analysis techniques. Until recently, these have largely been dynamic analysis techniques like fuzzing. Fuzzing is quite fruitful in practice, but some parts of the code are harder to fuzz than others, leading to uneven coverage.
Elite security researchers find bugs that fuzzers can't, largely by reasoning through the source code. This is effective, but time-consuming and bottlenecked on scarce human expertise. Computers were completely incapable of doing this a few months ago, and now they excel at it. We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't.
This can feel terrifying in the immediate term, but it's ultimately great news for defenders. A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug. Closing this gap erodes the attacker's long-term advantage by making all discoveries cheap.
Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher. Some commentators predict that future AI models will unearth entirely new forms of vulnerabilities that defy our current comprehension, but we don't think so. Software like Firefox is designed in a modular way so that humans can reason about its correctness. It is complex, but not arbitrarily complex.1
The defects are finite, and we are entering a world where we can finally find them all.
1. There's a risk that codebases begin to surpass human comprehension as a result of more AI in the development process, scaling bug complexity along with (or perhaps faster than) discovery capability. Human-comprehensibility is an essential property to maintain, especially in critical software like browsers and operating systems.
I'm very excited to announce the first release of the Symposium project as well as its inclusion in the Rust Foundation's Innovation Lab. Symposium's goal is to let everyone in the Rust community participate in making agentic development better. The core idea is that crate authors should be able to vend skills, MCP servers, and other extensions, in addition to code. The Symposium tool then installs those extensions automatically based on your dependencies. After all, who knows how to use a crate better than the people who maintain it?
If you want to read more details about how Symposium works, I refer you to the announcement post from Jack Huey on the main Symposium blog. This post is my companion post, and it is focused on something more personal - the reasons that I am working on Symposium.
I believe in extensibility everywhere
The short version is that I believe in extensibility everywhere. Right now, the Rust language does a decent job of being extensible: you can write Rust crates that offer new capabilities that feel built-in, thanks to proc-macros, traits, and ownership. But we're just getting started at offering extensibility in other tools, and I want us to hurry up!
I want crate authors to be able to supply custom diagnostics. I want them to be able to supply custom lints. I want them to be able to supply custom optimizations. I want them to be able to supply custom IDE refactorings. And, as soon as I started messing around with agentic development, I wanted extensibility there too.
Symposium puts crate authors in charge
The goal of Symposium is to give crate authors, and the broader Rust community, the ability to directly influence the experience of people writing Rust code with agents. Rust is a really popular target language for agents because the type system provides strong guardrails and it generates efficient code - and I predict it's only going to become more popular.
Despite Rust's popularity as an agentic coding target, the Rust community right now are basically bystanders when it comes to the experience of people writing Rust with agents; I want us to have a means of influencing it directly.
Enter Symposium. With Symposium, crate authors can package up skills and other extensions, and then Symposium will automatically make them available to your agent. Symposium also takes care of bridging the small-but-very-real gaps between agents (e.g., each has its own hook format, and some of them use .agents/skills while some use .claude/skills, etc.).
Example: the assert-struct crate
Let me give you an example. Consider the assert-struct crate, recently created by Carl Lerche. assert-struct lets you write convenient assertions that test the values of specific struct fields:
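Something along these lines (a sketch from memory of the macro's style; check the crate's docs for the exact syntax):
use assert_struct::assert_struct;

#[derive(Debug)]
struct Response {
    status: u16,
    body: String,
}

fn main() {
    let response = Response { status: 200, body: "ok".to_string() };
    // Assert on just the fields you care about; `..` ignores the rest.
    assert_struct!(response, Response {
        status: 200,
        ..
    });
}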
This crate is neat, but of course, no models are going to know how to use it - it's not part of their training set. They can figure it out by reading the docs, but that's going to burn more tokens (expensive, slow, consumes carbon), so that's not a great idea.
...but wouldn't it be better if the crate could teach the agent itself?
With Symposium, teaching your agent how to use your dependencies should not be necessary. Instead, your crates can publish their own skills or other extensions.
The way this works is that the assert-struct crate defines the skill once, centrally, in its own repository. Then there is a separate file in Symposium's central recommendations repository with a pointer to the assert-struct repository. Any time that the assert-struct repository updates that skill, the updates are automatically synchronized for you. Neat! (You can also embed skills directly in the recommendations repository, but then updating them requires a PR to that repo.)
Currently we allow skill content to be defined in a decentralized fashion, but we require that a plugin be added to our central recommendations repository. This is a temporary limitation. We eventually expect to allow crate authors to add skills and plugins in a fully decentralized fashion.
We chose to limit ourselves to a centralized repository early on for three reasons:
Even when decentralized support exists, a centralized repository will be useful, since there will always be crates that choose not to provide that support.
Having a central list of plugins will make it easy to update people as we evolve Symposium.
Having a centralized repository will help protect against malicious skills while we look for other mechanisms, since we can vet the crates that are added and easily scan their content.
What if I want to add skills for crates private to my company? I don't want to put those in the central repository!
No problem, you can add a custom plugin source.
Are you aware of the negative externalities of LLMs?
Extensibility: because everybody has something to offer
Fundamentally, the reason I am working on Symposium is that I believe everybody has something unique to offer. I see the appeal of strongly opinionated systems that reflect the brilliant vision of a particular person. But to me, the most beautiful systems are the ones that everybody gets to build together. This is why I love open source. This is why I love emacs. It's why I love VSCode's extension system, which has so many great gems.
To me, Symposium is a double win in terms of empowerment. First, it makes agents extensible, which is going to give crate authors more power to support their crates. But it also helps make agentic programming better, which I believe will ultimately open up programming to a lot more people. And that is what it's all about.
WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we've done as part of the Firefox 150 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.
In Firefox 150, Khalid AlHaddad contributed several improvements:
Added the emulation.setNetworkConditions command, which currently supports type: offline. Using this, you can emulate offline mode on specific browsing contexts, on user contexts (a.k.a. containers), or globally.
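As a rough sketch, the BiDi command payload for this might look like the following (the contexts parameter follows the usual BiDi emulation pattern; exact parameter names beyond type are assumptions):
{
  "method": "emulation.setNetworkConditions",
  "params": {
    "type": "offline",
    "contexts": ["<browsing context id>"]
  }
}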
The Rust team is happy to announce a new version of Rust, 1.95.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.95.0 with:
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.95.0 stable
cfg_select!
Rust 1.95 introduces a cfg_select! macro that acts roughly similar to a compile-time match on cfgs. This fulfills the same purpose as the popular cfg-if crate, although with a different syntax. cfg_select! expands to the right-hand side of the first arm whose configuration predicate evaluates to true. Some examples:
Rust 1.88 stabilized let chains. Rust 1.95 brings that capability into match expressions, allowing for conditionals based on pattern matching.
match value {
    Some(x) if let Ok(y) = compute(x) => {
        // Both `x` and `y` are available here
        println!("{}, {}", x, y);
    }
    _ => {}
}
Note that the compiler will not currently consider the patterns matched in if let guards as part of the exhaustiveness evaluation of the overall match, just like if guards.
Rust 1.95 removes support on stable for passing a custom target specification to rustc. This should not affect any Rust users using a fully stable toolchain, as building the standard library (including just core) already required using nightly-only features.
We're also gathering use cases for custom targets on the tracking issue as we consider whether some form of this feature should eventually be stabilized.
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.95.0
Many people came together to create Rust 1.95.0. We couldn't have done it without all of you. Thanks!
My name is Baurzhan Muftakhidinov. I'm from Kazakhstan. I speak Kazakh, Russian, and English, and I have been contributing to Mozilla localization for more than 18 years.
From Linux Curiosity to Mozilla Localization
Q: How did you get involved in localization, and what drew you to Mozilla?
A: I came to Mozilla through Linux during my student years. I became interested in Linux at university, and very quickly I noticed how closely the open source world was connected: where there was Linux, Firefox was usually nearby.
When installing Linux distributions, one of the first things I noticed was language support. Many languages were available, but Kazakh was often missing or only partially supported. That made me ask a simple question: why is that, and what can be done about it?
Through Ubuntu's CD distribution program, I discovered Launchpad and began translating Firefox there. Around the same time, through a local Linux forum, I connected with Timur Timirkhanov, who already had experience with Mozilla localization. He helped me understand Mozilla's processes, pointed me to packages that needed translation, and opened a locale registration ticket for Kazakh in Bugzilla.
Soon after, Dauren Sarsenov joined, and in the beginning it was mainly the two of us working on Firefox. When Kazakh first appeared in a Firefox beta in spring 2009, we were extremely proud. It felt like a real milestone - not just translating isolated strings, but seeing a major global product appear in Kazakh.
For me, that was bigger than one browser. At the time, we were dreaming about a fully usable open source desktop in Kazakh, and Mozilla localization became one important part of that larger goal. What started as curiosity became a long-term commitment: making technology more accessible in Kazakh and proving that our language belongs in modern software.
Q: Which Mozilla products are closest to you? Do you use them regularly?
A: Firefox is definitely the product closest to me because I use it every day - both desktop and mobile. It never feels like I am translating something distant from my real life. I see the interface, the wording choices, and the practical impact of localization almost daily.
What makes Firefox especially meaningful is that it is both symbolic and practical. Symbolically, it showed that Kazakh could be present in one of the most important pieces of everyday software. Practically, it gave users a browser they could use in their own language. A browser is the gateway to the internet, so localizing Firefox means much more than translating one application.
I also use Thunderbird from time to time and visit MDN quite often. Even when I am not translating, I interact with Mozilla products as a user, so there is always a natural connection between volunteer work and daily habits.
People around me know me through Firefox localization more than through anything else. Very often I am simply "the person who translated Firefox into Kazakh." That says a lot about how visible Firefox has been.
Promoting Kazakh Localization and Building an Ecosystem
Q: How have you promoted Kazakh-localized software?
A: Most of my promotion work has been grassroots. In earlier years, I shared updates on Linux and open source forums, especially communities already interested in free software. Even when people were not personally interested in contributing, many showed strong support and encouragement. That confirmed that localization mattered beyond just the translation team.
One of my bigger efforts was creating a Debian-based Linux distribution from 2012 to 2015 called Kazsid. I built it partly to test how Kazakh localization worked across multiple applications in a real desktop environment. I included programs that already had Kazakh translations - Firefox, LibreOffice, desktop environments, and other tools - set Kazakh as the default language, and tested how everything worked together.
I shared the builds on forums, and some people downloaded and tried them. It was one of the most practical ways I encouraged interest in Linux and localized software.
Later, as translations matured upstream, maintaining a separate distribution was no longer necessary. That was actually a positive sign - users could install standard distributions and get the same localized experience.
Today I post updates on LinkedIn. It helps maintain visibility, even if it does not often bring in new contributors.
Working Independently - and Working Systematically
Q: What does the Kazakh localization community look like today?
A: At the moment, I am effectively the only active contributor across several major open source localization efforts in Kazakh, including Mozilla products, LibreOffice, GNOME, Xfce, and others.
In the early years, several people made meaningful contributions, but most eventually moved on. Timur helped significantly, especially in the earlier stages and in understanding Mozilla's processes, and I still occasionally consult trusted people when I need a second opinion.
The challenge for smaller languages is not only starting a translation but maintaining it over the long term. From early on, I was not thinking about one application. My goal was broader: to help create a real open source desktop experience in Kazakh. A browser translated into Kazakh is important, but a full ecosystem is even more meaningful. Sustainability is the hardest part.
Q: How do you approach quality when you are the main translator?
A: Direct user feedback is rare. So QA depends largely on my own testing, judgment, and systems.
I test software in real use, especially Firefox. In earlier years, I also used Nightly builds. Before settling on new terminology, I check dictionaries and reference materials. I consult fluent speakers when needed, and sometimes I discuss wording with my wife to see how natural it sounds.
My principle is that translations should feel clear and alive, not mechanically imported. I studied in Kazakh and remember the terms we were actually taught in IT-related subjects, and that background matters to me.
Because of my scripting background, I have written small tools in Python to help verify translations, track terminology, and maintain consistency. QA is not just "reading it once and hoping for the best." It is a combination of linguistic judgment, real usage, consultation, and automated checking.
More recently, I have been exploring how AI can assist localization. By testing translations through tools like the Google Gemini API and guiding terminology carefully, I have been able to close significant translation gaps. For Kazakh, newer models understand context much better than traditional machine translation systems. AI does not replace judgment, but it can make the work faster and more effective.
Professional Background
Q: How does your professional background influence your localization work?
Baurzhan at GIS Day 2025
A: My background is partly technical and partly analytical. I studied IT, worked as a Linux system administrator, and later moved into data analysis and GIS.
Those technical skills helped significantly. Automation makes a long-term localization effort much more manageable, especially when one person is doing most of the work.
Localization has strengthened my discipline and consistency. It requires patience and regular effort. Over time, I developed an instinct for terminology and phrasing: whether a term feels natural or artificial in context.
A Few Personal Notes
I have loved reading since I was four years old. My favorite genres are science fiction and popular science. Reading is still how I recharge.
I have lived in several cities in Kazakhstan, so I sometimes joke that I am a true nomad.
My family has always been supportive of my open source work. And when I run into a particularly difficult translation, I can still discuss it with my wife and get a fresh perspective.
Bug 2023761 - [GITHUB] Allow use of individual API keys for pull requests and push comments instead of a single shared secret
Bug 2012634 - "Phabricator Revisions" table overflows on X axis on mobile
Bug 2028222 - Pasting multi-line text after selecting multi-line text does not overwrite, but applies markup for link
Bug 2029522 - CI workflow uses deprecated docker-compose v1 and actions/checkout@v3
Bug 2031520 - Missing space in "Throw away my changes, andrevisit bug NNN" message (when marking a bug as a duplicate of a hidden bug)
Bug 2030581 - REST API: PUT /rest/bug/attachment/{id} does not pass is_markdown when adding comment
Bug 2018260 - "Fields You Can Search On" is blocking people from making it through quicksearch.html doc
Bug 2028240 - Cloned security bugs should default to being secure
Bug 2031007 - When linking a Github pull request to a BMO bug, the attachment filename should contain the repository name in addition to the pull request ID
Thanks to overholt, macOS Nightly now has support for sharing the current tab's URL via QR Code (Right-click tab, Share > Generate QR Code). This is held to Nightly for now.
Fixed a structuredClone regression in the MV2 userScripts sandbox, bringing it in line with the fix already applied to content scripts and MV3 userScripts sandboxes; the fix was applied in Nightly 150 and uplifted to Beta 149 (Bug 2020773)
Fixed a tab crash triggered by calling Document.parseHTMLUnsafe() from a browser extension content script. The crash was an assertion failure hit because parseHTMLUnsafe wrongly tried to create a document belonging to the expanded privileged principal that originated the call; the fix makes sure parseHTMLUnsafe uses the webpage document's principal, which prevents the crash as a side effect (Bug 1912587)
Fixed a regression where the load event on about:blank iframes would not fire when a content script injected a style element (regressed in Firefox 148 as a side effect of the changes applied by Bug 543435). The fix landed in Nightly 150 and was uplifted to Beta 149 and Release 148 (Bug 2020300)
Thanks to Vincent Villa for promptly investigating and fixing this regression!
WebExtension APIs
As a follow-up to the work allowing action.openPopup() calls without user activation, action.openPopup() now rejects the request when another panel, context menu, doorhanger, or notification is already open in the window (Bug 2022281)
As a follow-up to the splitView mode support introduced in the tabs API, tabs.move() has been tweaked to correctly return all specified tabs moved in a split view (Bug 2022372)
Addon Manager & about:addons
Fixed a rendering regression where the context menu on about:addons cards would appear with a transparent background at non-default zoom levels (Bug 2006926)
Thanks to Botond Ballo for investigating and fixing this small rendering regression!
Sebastian Zartner [:sebo] added a new Element-specific pseudo-classes section in the pseudo-class panel and added support for the new :open pseudo-class (#2014442)
Luca Greco fixed an issue where the new tab page was broken for users who had moved their profile to a new directory. Special shout-out to Scott for driving that one over the line and getting the fix uplifted to Beta!
Maxx Crawford added the first version of a refined grid system for Nova, introducing column-based container queries and CSS tokens for more consistent responsive New Tab layouts. This is currently off by default.
Mike Conley enabled the pref that makes New Tab network calls to MARS go over OHTTP by default, reducing IP exposure for all channels. This was already enabled on Beta and Release via Nimbus, but this patch is the pref flip that will ride the trains.
Nathan Barrett landed sections layout support for Nova, refining section spacing and ordering using the new grid for a cleaner, denser above-the-fold experience.
Dustin Whisman updated some design token names for consistency (Bug 2013342):
--font-size-heading-* is now a font-size variant (was --heading-font-size)
--card-border-color, --card-box-shadow, --card-box-shadow-hover, --popup-box-shadow, --tab-box-shadow (these were previously nested as variants and are now under their component name)
UX Fundamentals
Added keyboard autofocus to the âTry Againâ button in Felt Privacy error pages so users who land on an error page can immediately press enter to retry. (2021447)
Added keyboard access keys to the three primary error page buttons: G (Go Back), T (Try Again), and P (Proceed to Site). (404501)
In progress: refactoring net error illustrations into a shared object and adding alt text so assistive technology can read out meaningful descriptions. (2022033)
In progress: adding improved messaging to the file-not-found error page. (2018850)
In progress: restoring the error page for Work Offline mode so users see messaging that accurately reflects that they're in Offline mode, not that there's a network problem.
Negative option marketing is a practice in which a seller treats a consumer's silence or failure to take action as consent to be charged for goods or services. This technique is often used in subscription services, where users may be guided toward accepting recurring charges through default selections or obscure disclosures. These design practices, also known as "dark patterns," systematically manipulate and influence user behavior, and they are employed across digital markets, not just in subscriptions.
As a browser developer, Mozilla is well-acquainted with the negative impacts of manipulative design. The web browser market provides a documented case study illustrating how operating systems deploy deceptive design practices to weaponize friction and status-quo bias to influence consumer behavior. As such, Mozilla was eager to provide feedback and encourage the Commission to examine the breadth of deceptive design practices that undermine choice.
Dark patterns are a byproduct of power asymmetry between companies and consumers. If we don't protect meaningful choice and effective competition now, we risk giving even more control to the biggest players, and losing what makes the web open and innovative in the first place.
The FTC has a critical opportunity, both in this rulemaking and more broadly, to modernize consumer protection for the realities of digital markets. We encourage the FTC to:
Make clear that practices which manipulate, coerce, or mislead users through interface design, defaults, or friction fall within the scope of unfair or deceptive acts or practices.
Investigate remedies for digital markets to operate with meaningful consumer choice.
Prioritize targeted enforcement against well-documented uses of deceptive design, such as tactics prevalent on the Windows operating system, designed to push users to the Edge browser.
We welcome the opportunity to share our relevant experiences in the browser space and look forward to continuing the conversation.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
This week was negative overall, mainly caused by a type system fix and a temporary revert of some attribute cleanups that had previously improved performance.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
the amount of times that I spend 15 min in the docs + coding which end up in a monstrous or().flatten().map().is_ok_and() only to get slapped by clippy saying replace your monster with this single function please is way too high
Welcome to the Q1 2026 edition of the Firefox Security & Privacy Newsletter.
Security and privacy are foundational to Mozillaâs manifesto and central to how we build Firefox. In this edition, we highlight key security and privacy work from Q1 2026, organized into the following areas:
Firefox Product Security & Privacy - new security and privacy features and integrations in Firefox
Community Engagement - updates from our security research and bug bounty community
Web Security & Standards - advancements that help websites better protect their users from online threats
Preface
Note: Some of the bugs linked below might not be accessible to the general public and may be restricted to specific work groups. We de-restrict fixed security bugs after a grace period, once the majority of our user population has received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.
Firefox Product Security & Privacy
Collaboration with Anthropic: A few weeks ago, Anthropic's Frontier Red Team shared the results of a new AI-assisted vulnerability detection approach. Using this method, we identified more than a dozen confirmed security issues, each supported by reproducible test cases. Learn more in our blog: Hardening Firefox with Anthropic's Red Team. Leveraging our Firefox Security expertise, we ended up finding dozens of additional vulnerabilities that were fixed in subsequent Firefox updates.
YouTube coverage of Firefox at Pwn2Own 2025: To demonstrate Firefox's focus on user security and Mozilla's commitment to openness, we invited LiveOverflow to follow us during the prestigious Pwn2Own hacking competition last year. LiveOverflow's four-part documentary provides behind-the-scenes coverage of our quick response to fixing two Firefox 0-day security bugs. Covering the two-day event, the videos go from preparation (part 1), to exploit analysis (part 2) and disclosure (part 3), all the way to the rapid release of a Firefox update (part 4).
SafeBrowsing: Firefox 147 shipped with SafeBrowsing v5 support, protecting users against malicious URLs. And starting with version 149, Firefox blocks and revokes website permissions for sites on the SafeBrowsing lists (Bug 1986300), leveling up the built-in protection from online threats.
2048-bit Minimum for RSA Certificates: Firefox now enforces a minimum 2048-bit RSA key size for certificates issued by Mozillaâs built-in root CAs. As publicly trusted CAs already meet this requirement, no significant impact to the broader web is expected.
Community Engagement
Bug Bounty Program Updates: As the threat landscape evolves, including an increasing volume of AI-assisted security bug reports, we're evolving our security program alongside it. With continued advances in browser security architecture, our bug bounty program is refining its incentives to prioritize the highest-impact research and the most critical classes of vulnerabilities while focusing on novelty. Learn more in our blogpost: Bug Bounty Program Updates 2026. We have also just updated our Bug Bounty hall of fame to list everyone who helped us find and fix security vulnerabilities in Q1 of 2026.
Web Security & Standards
Storage-Access Headers: Firefox 147 is shipping an extension of the Storage Access API to improve both web compatibility and parity with Chrome. These Storage Access headers allow web pages to opt out of storage isolation upfront and without the need to first load a document.
Going Forward
As a Firefox user, you automatically benefit from the security and privacy improvements described above through Firefox's regular automatic updates. If you're not using Firefox yet, you can download it to enjoy a fast, secure browsing experience, while supporting Mozilla's mission of a healthy, safe, and accessible web for everyone.
We'd like to thank everyone who helps make Firefox and the open web more secure and privacy-respecting.
Firefox Telemetry Engineer and Data Steward Chris H-C (:chutten) gives a talk at Ubisoft's Data Summit 2021 about how Responsible Data Collection as practised at Mozilla makes cataloguing easy, stops instrumentation mistakes before they ship, and allows you to build self-serve analysis tooling that gets everyone invested in data quality. Oh, and it's cheaper, too.
You ever get to the end of running benchmarks, maybe a long-running one, and realize... "Oh no. I forgot to set that important option, and these results are useless."
These options configure the shell for benchmarking, taking the wisdom of the team and boiling multiple shell options down to a single --benchmark-mode flag. In --strict-benchmark-mode, the shell will abort the run if it is configured in a way where effective benchmarking is unlikely to be possible (e.g. benchmarking a debug build!).
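Usage then comes down to a single flag on the shell invocation. A sketch, assuming the usual js binary name for the SpiderMonkey shell and a hypothetical bench.js workload:

js --benchmark-mode bench.js
js --strict-benchmark-mode bench.js

The strict variant is the one that aborts, rather than warns, when the shell or build is misconfigured for benchmarking.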
The nice thing about nailing this down is that this is something we can point anyone to and know that their shell is following the rules any of us would follow.
The general design philosophy of benchmark mode is to disable things you wouldn't see enabled in Firefox in normal configuration, as well as debugging code that maybe makes sense for test suites but doesn't make sense for a benchmark.
Hopefully this is the end of me realizing that I forgot to pass --no-async-stacks yet again.
Mozilla has joined EFF, the Alliance for Responsible Data Collection, Digital Medusa, and EleutherAI in filing an amicus brief in Amazon v. Perplexity, urging the Ninth Circuit not to stretch the Computer Fraud and Abuse Act (CFAA) far beyond its intended purpose.
We have said this before, and it remains true: laws designed to protect the security of the internet should not be used to undermine how people want to use it.
Our mission is grounded in the idea that the internet must remain open and accessible to all, and that privacy and security online are fundamental. Mozilla joined this brief because overly broad interpretations of computer crime laws can put those values at risk.
The CFAA is an anti-hacking law. It was meant to address break-ins to computer systems, not to criminalize tools that enable people to access and engage with information that is publicly available on the web. While there are no doubt many challenging legal and policy questions around the growth and use of agentic AI tools, we believe expanding the reach of the CFAA to address these issues would threaten innovation, chill the development of useful tools and services for researchers and journalists, and undermine competition online.
Today the Servo team has released v0.1.0 of the servo crate.
This is our first crates.io release of the servo crate that allows Servo to be used as a library.
We currently do not have any plans of publishing our demo browser servoshell to crates.io.
In the 5 releases since our initial GitHub release in October 2025, our release process has matured, with the main "bottleneck" now being the human-written monthly blog post.
Since we're quite excited about this release, we decided not to wait for the monthly blog post to be finished; we promise to deliver the monthly update in the coming weeks.
As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo.
Nevertheless, the increased version number reflects our growing confidence in Servoâs embedding API and its ability to meet some usersâ needs.
In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides.
For more details on the LTS release, see the respective section in the Servo book.
In the previous post, I mentioned that buildcache has some unique properties compared to ccache and sccache. One of them is its Lua plugin system, which lets you write custom wrappers for programs that aren't compilers in the traditional sense. With Bug 2027655 now merged, we can use this to cache Firefox's WebIDL binding code generation.
What's the WebIDL step?
When you build Firefox, one of the earlier steps runs python3 -m mozbuild.action.webidl to generate C++ binding code from hundreds of .webidl files. It produces thousands of output files: headers, cpp files, forward declarations, event implementations, and so on. The step isn't terribly slow on its own, but it runs on every clobber build, and the output is entirely deterministic given the same inputs. That makes it a perfect candidate for caching.
The problem was that the compiler cache was never passed to this step. Buildcache was only wrapping actual compiler invocations, not the Python codegen.
The change
The fix in Bug 2027655 is small. In dom/bindings/Makefile.in, we now conditionally pass $(CCACHE) as a command wrapper to the py_action call:
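A rough sketch of the shape of the change (target and variable names here are hypothetical; the real rule lives in dom/bindings/Makefile.in, and only the optional fourth argument, the command wrapper, is the point):

# Sketch only: names are placeholders, not the actual rule.
codegen.pp:
ifdef MOZ_USING_BUILDCACHE
	$(call py_action,webidl,$(webidl_args),$(webidl_deps),$(CCACHE))
else
	$(call py_action,webidl,$(webidl_args),$(webidl_deps))
endif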
The py_action macro in config/makefiles/functions.mk is what runs Python build actions. The ability to pass a command wrapper as a fourth argument was also introduced in this bug. When buildcache is configured as the compiler cache, this means the webidl action is invoked as buildcache python3 -m mozbuild.action.webidl ... instead of just python3 -m mozbuild.action.webidl .... That's all buildcache needs to intercept it.
Note the ifdef MOZ_USING_BUILDCACHE guard. This is specific to buildcache because ccache and sccache don't have a mechanism for caching arbitrary commands. Buildcache does, through its Lua wrappers.
The Lua wrapper
Buildcache's Lua plugin system lets you write a script that tells it how to handle a program it doesn't natively understand. The wrapper for WebIDL codegen, webidl.lua, needs to answer a few questions for buildcache:
Can I handle this command? Match on mozbuild.action.webidl in the argument list.
What are the inputs? All the .webidl source files, plus the Python codegen scripts. These come from file-lists.json (which mach generates) and codegen.json (which tracks the Python dependencies from the previous run).
What are the outputs? All the generated binding headers, cpp files, event files, and the codegen state files. Again derived from file-lists.json.
With that information, buildcache can hash the inputs, check the cache, and either replay the cached outputs or run the real command and store the results.
The wrapper uses buildcache's direct_mode capability, meaning it hashes input files directly rather than relying on preprocessed output. This is the right approach here since we're not dealing with a C preprocessor but with a Python script that reads .webidl files.
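To make this concrete, here is a minimal sketch of what such a wrapper can look like. The function names and signatures follow buildcache's Lua wrapper convention as I understand it; treat them as illustrative rather than a copy of webidl.lua:

-- Illustrative skeleton of a buildcache Lua wrapper (names approximate).
function can_handle_command (args)
  -- Claim the command if it is the WebIDL codegen action.
  for _, arg in ipairs(args) do
    if arg == "mozbuild.action.webidl" then
      return true
    end
  end
  return false
end

function get_capabilities ()
  -- Hash input files directly instead of running a preprocessor.
  return { "direct_mode" }
end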
Numbers
Here are build times for ./mach build on Linux, comparing compiler caches. Each row shows a clobber build with an empty cache (cold), followed by a clobber build with a filled cache (warm):
tool         cold    warm    with plugin
none         5m35s   n/a     n/a
ccache       5m42s   3m21s   n/a
sccache      9m38s   2m49s   n/a
buildcache   5m43s   1m27s   1m12s
The "with plugin" column is buildcache with the webidl.lua wrapper active. It shaves off another 15 seconds, bringing the total down to 1m12s. Not a revolutionary improvement on its own, but it demonstrates the mechanism. The WebIDL step is just the first Python action to get this treatment; there are other codegen steps in the build that could benefit from the same approach.
More broadly, these numbers show buildcache pulling well ahead on warm builds. Going from a 5m35s clean build to a 1m12s cached rebuild is a nice improvement to the edit-compile-test cycle.
These are single runs on one machine, not rigorous benchmarks, but the direction is clear enough.
Setting it up
If you're already using buildcache with mach, the Makefile change is available when updating to today's central. To enable the Lua wrapper, clone the buildcache-wrappers repo and point buildcache at it via lua_paths in ~/.buildcache/config.json:
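The config might then look something like this (the wrapper path is a placeholder for wherever you cloned the repo; both keys are the ones discussed in this post, with 2684354560 bytes being 2.5 GiB):

{
  "lua_paths": ["/path/to/buildcache-wrappers"],
  "max_local_entry_size": 2684354560
}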
The large max_local_entry_size (2.5 GB) is needed because some Rust crates produce very large cache entries.
What's next
The Lua plugin system is the interesting part here. The WebIDL wrapper is a proof of concept, but the same technique applies to any deterministic build step that takes known inputs and produces known outputs. There are other codegen actions in the Firefox build that could get the same treatment, and I plan to explore those next.
Microsoft recently announced it's pulling back Copilot from several of its core Windows apps: Photos, Notepad, the Snipping Tool, and Widgets. Rolling back these forced AI integrations is the right move, but this is just the most recent example of Microsoft going too far without user consent.
Copilot was pushed onto users
Over the past year, Copilot wasn't offered to Windows users; it was installed on them. The M365 Copilot app began auto-installing on any Windows device running Microsoft 365 desktop apps, with no prompt and no consent. A new physical keyboard key was added to laptops that launched Copilot by default, with no simple way to remap it. By default, Copilot was pinned to the taskbar starting with Windows 11 PCs. And, going a step further, Microsoft planned to embed it into three of the most fundamental surfaces of the operating system: the Windows notification center, the Settings app, and File Explorer.
When Microsoft says it now wants to be "intentional" about Copilot, they're really admitting that they made repeated choices to serve their business over their customers.
This isn't the first time: Microsoft has a pattern of deceptive design
The pattern of behavior here isn't new. Independent research commissioned by Mozilla has documented how Microsoft uses design and distribution tactics to override user choice, from deliberately complicated processes for changing your default browser, to UI that routes users back to Microsoft's Edge browser even after they've explicitly chosen something else.
Since Mozilla published that research, Microsoft has continued to escalate its use of dark patterns to force behaviors that help the bottom line, not people's lives. Here are a few examples from the rollout of Windows 11 that have continued to strip users of their choice:
The Windows Search bar, embedded in the taskbar on both Windows 10 and Windows 11, is hardcoded to only open Microsoft Edge, regardless of your default browser.
Windows has not implemented a true device migration system, like we see with Android, iOS, and macOS, where your apps, settings, and data all carry over when you buy a new computer. Instead, the defaults are changed back to Microsoft's own products.
Microsoft Outlook and Microsoft Teams by default ignore your default browser selection and open links directly in Edge.
Windows does not offer a simple prompt that other browsers can trigger asking to become your default browser. Instead, other browsers have to direct you to Windows settings and hope you finish the multi-step process.
The Copilot rollout followed the same playbook we've come to expect from Microsoft: use automatic installs, physical hardware, and default settings to force behaviors. In the most recent instance, they allowed their AI to learn and gather data as quickly as possible before people had a choice.
What "genuinely useful" AI integration actually looks like
We, like Microsoft and basically every tech company, have been asking ourselves the same question: What does it mean for AI to be genuinely useful? For us, the answer is simple. AI should work on your terms, not ours. Firefox's goal is to create AI enhancements that are made for people, not just because they can increase profit.
We've rolled out AI-enhanced features that make browsing smarter, faster, and more personalized, such as translations that stay local on your device to help you browse the web in your preferred language, alt text in PDFs to add accessibility descriptions to images in PDF pages, and tab grouping that suggests related tabs and group names.
But we also know users deserve a choice. We built our answer into Firefox 148, introducing a centralized AI Controls panel in your browser settings, including a single "Block AI Enhancements" switch that turns off every AI feature at once. Each option is also individually controllable.
The premise is simple: You should decide whether AI is part of your browsing experience at all. Not Big Tech. Not Mozilla. You.
And critically, your preferences also persist across browser updates, which means AI tools won't silently re-enable themselves after a major upgrade. No reinstalling. No opting out again after the fact. It's designed for people who care about what's happening on their computer but shouldn't have to become a systems administrator to stay in control of it.
The stakes are bigger than one rollback
When a company with Microsoft's reach continues to control users, and only walks it back when the noise gets loud enough, it shapes what people expect from technology. It tells people that their only real move is to complain until, hopefully, the company relents. It also makes it harder for alternatives to compete when a company uses its reach and control to steer people back into its own products.
We don't think that's the internet we have to accept. People have been clear about what they want when it comes to this era of the internet. They want to feel like they're in control of their own devices and their own data. That's the internet we're trying to build.
Image generated by Nano Banana 2 in response to a request for a "Retro-futuristic collage of a scientist using an open-source AI scanner to analyze floating vintage tech and digital data streams."
We're launching across the developer and security community this week on Product Hunt and Hacker News. If you've been following AI security, we'd love your support and your feedback.
At Mozilla, open source has never been just a licensing choice. It's a conviction: the internet gets healthier when tools and knowledge circulate freely, when anyone can audit what's running, extend what exists, and build on what came before. That's why we built Firefox in the open. It's why we've kept building that way ever since.
0DIN, Mozilla's AI security team, is working from the same premise. This week we're releasing the 0DIN AI Security Scanner as open source software under the Apache 2.0 license, along with 179 community probes covering 35 vulnerability families, plus six specialty probes drawn exclusively from our bug bounty library.
The scanner, and the intelligence behind it
The 0DIN Scanner isn't another benchmark suite built from textbook examples. We're seeding it with probes drawn directly from our bug bounty program, where security researchers compete to find novel techniques to manipulate, extract data from, and subvert AI systems. As new vulnerabilities are discovered and disclosed through that program, we'll continue adding probes to the open-source library over time.
That loop, from researcher discovery to packaged reusable test, is what separates the 0DIN Scanner from generic tooling. It's high-impact intelligence on jailbreaks, updated frequently as our researchers find new techniques.
Built on NVIDIA's GARAK open-source framework, the 0DIN Scanner adds a graphical interface, automated scan scheduling, cross-model comparative analysis, and enterprise-grade reporting. It runs against frontier models, open source LLMs, chatbots, and anything with a prompt interface. Security teams can see attack success rates, a vulnerability breakdown, and a comparison against the frontier models that attackers are also probing every day.
Six of those bug bounty probes are named here for the first time: Placeholder Injection, Incremental Table Completion, Technical Field Guide, Chemical Compiler Debug, Correction, and Hex Recipe Book. Each represents a real technique that worked against production AI systems before we closed the loop.
Not every organization has a red team or the bandwidth to run adversarial testing. Many companies are deploying AI in production right now without a clear picture of where they're exposed. To help close that gap, we're offering free security assessments for enterprise AI deployments.
The assessment delivers an attack success rate against your systems, a breakdown across prompt injection, jailbreak, and data extraction categories, and a benchmark comparison against major frontier models. The process takes a few minutes to set up, with scan duration varying based on the number of probes chosen. If you're actively deploying AI and haven't tested it under adversarial conditions, this is a good place to start.
For teams that don't want to manage the open source scanner on their own, we also offer a managed Enterprise edition with access to nearly 500 pre-disclosure probes from the bug bounty program, giving organizations advance notice of emerging techniques before they're publicly known.
Why open source, and why now
AI is moving fast enough that no single team will solve this alone. There are too many threats, too many models, too much attack surface. Keeping our tools locked away would make 0DIN marginally stronger while leaving the broader internet weaker.
The researchers who submitted findings through our bug bounty program earned bounties for their work. We're releasing a meaningful portion of that intelligence as open source and we'll keep doing so as new vulnerabilities are discovered and disclosed. That's the deal Mozilla has always offered: we build in the open, the community helps make it better, and the web gets a little healthier for it.
I'm happy to announce that buildcache is now a first-class compiler cache in mach. This has been a long time coming, and I'm excited to finally see it land.
For those unfamiliar, buildcache is a compiler cache that can drastically cut down your rebuild times by caching compilation results. It's similar to ccache, and even more so to sccache, in that it supports C/C++ out of the box as well as Rust. It has some nice unique properties of its own though, which we'll look at more closely in following posts.
Getting started
Setting it up is straightforward. Just add the following to your mozconfig:
ac_add_options --with-ccache=buildcache
Then build as usual:
./mach build
That's it.
Give it a try
If you run into any issues, please file a bug and tag me. I'd love to hear how it works out for people, and any rough edges you might hit.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
This week's crate is aimdb-core, a type-safe and platform-agnostic data pipeline where the Rust type system is the schema and trait implementations define its behavior.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
NDC Techtown | CFP open until 2026-04-14 | Kongsberg, Norway | 2026-09-09 - 2026-09-12.
EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
A shorter week than normal (probably due to last week's later perf triage).
Overall fairly small changes scattered across various PRs, though the net
effect was slightly positive (-0.5% average change). All changes ended up
either mixed or improvements this week.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
Rust tried to have polymorphic generics in the early pre-1.0 days, and they quite reasonably gave up because it was too much work. For real Swift, great fucking work for getting all of this to work!
Welcome to the Q1 edition of the Engineering Effectiveness Newsletter! The Engineering Effectiveness org makes it easy to develop, test and release Mozilla software at scale. See below for some highlights, then read on for more detailed info!
Highlights
Suhaib integrated Review Helper with Phabricator and moz-phab, making AI-powered code review quick and simple.
Connor Sheehan implemented ETL from Lando to STMO, which allows us to get better visibility into Lando's performance and usage.
Firefox 150 will ship with new PDF editing features completed by Calixte, letting users delete, copy, move, and export pages to a new PDF.
Detailed Project Updates
AI for Development
Suhaib Mujahid integrated Review Helper with Phabricator, enabling AI-powered code review directly from patches: clicking a "Request AI Review" button has it analyze the patch and post comments with any findings.
Suhaib Mujahid extended moz-phab to support requesting an AI review at patch submission time, enabling contributors to trigger Review Helper analysis directly from the command line via moz-phab --ai.
Bugzilla
Marco trained a new model in bugbug to detect bugs that are accessibility-related but missing the "access" keyword, to bring them to the attention of the accessibility team.
Two fixes from dkl to improve the reliability of the background bot that syncs Phabricator revisions with Bugzilla bugs.
Kohei updated the markdown comment editor to intelligently handle pasting URLs: when you paste a URL while text is selected, it automatically formats it as a markdown link using the selected text as the link text.
Kohei has also made significant improvements to the Guided Bug Entry page for new Bugzilla, which should be going live soon.
Build System and Mach Environment
Better scheduling of Rust dependencies through Bug 2011880 leads to a ~1 minute saving in build time for opt builds with a hot cache.
Warning flags can no longer be added directly to CFLAGS or CXXFLAGS in moz.build; they have to go in COMPILE_FLAGS["WARNINGS_CXXFLAGS"] (resp. COMPILE_FLAGS["WARNINGS_CFLAGS"]) (see Bug 1986258)
Firefox-CI, Taskcluster and Treeherder
Matt Boris upgraded FxCI to use RabbitMQ quorum queues and upgraded pulse to the latest available version for performance, security, and reliability.
Abhishek Madan migrated schema validation from Voluptuous to msgspec across taskgraph, mozilla-taskgraph, and firefox, resulting in a 30% improvement to decision task times.
Abhishek Madan moved Firefox from a vendored copy of taskgraph to PyPI installs at setup time, enabling support for packages that include compiled components.
Andrew Halberstadt wrote a patch implementing the ability for the Taskcluster Github service to trigger hooks listed in .taskcluster.yml files. This will pave the way to share cross-project workflows and simplify in-repo configuration.
Cameron Dawson upgraded major frontend libraries of Treeherder.
Lint, Static Analysis and Code Coverage
New linter for header guards, through bug 2009182, triggered by mach lint --linter header-guards . It enforces our code style.
A limited subset of clang-tidy's static analysis is now run and enforced on our whole codebase. It is also reported during review on Phabricator (see Bug 2023518 and related bugs)
eslint-env comments are being removed as ESLint v9 does not support them (use eslint-file-globals.config.mjs instead). ESLint v10 (currently in rc) will raise errors for them.
More eslint-plugin-jsdoc rules have been enabled across the whole tree. These are the ones relating to valid-jsdoc. A few remain, but will need work by teams to fix the failures.
Marco greatly simplified the code coverage infrastructure, getting rid of two Heroku services, a frontend service, and a lot of code. The code coverage official UI is now Searchfox.
Marco added a new mach command ("./mach coverage-report") to generate a coverage report from a push. The command is documented on the code coverage page in the Firefox source docs.
Teklia added support for GitHub pull requests to the Code Review Bot (prototype)
PDF.js
Calixte finished the implementation of the new reorganize and split functionality in PDF, which will ship in Firefox 150! Users will be able to delete, copy, and move pages, and to export a subset of pages to a new PDF.
Nicolò Ribaudo implemented the ability to open context menus on images in PDFs, allowing users to perform actions they are used to (such as downloading images). This was a long-standing feature request (11 years!).
Firefox Translations
Evgeny Pavlov, Jaume Zaragoza-Bernabeu, and Sergio Ortiz Rojas contributed to training both new and improved Translations models for use in Firefox.
Bosnian
Croatian
Norwegian Bokmål
Serbian
Thai
Traditional Chinese
Vietnamese
Erik Nordin fixed an issue where text contained within stand-alone SVG images was not being translated (Bug 2003545).
Erik Nordin reworked the Translations settings to be compatible with the upcoming about:settings redesign (Bug 2002127).
Erik Nordin helped design a system to control the enablement of AI Features within Firefox, and worked to make the entire Translations feature set have the capability to be turned off and back on within the same browsing session (Bug 2010922, Bug 2010993).
Thank you to Dasha Andriyenko for designing the visuals and UX of the page.
Thank you to Kim Bryant for managing the product and release considerations.
Thank you to Sam Foster and Greg Tatum who reviewed a significant portion of the code.
Thank you to Ciprian Georgiu and Giorgia Nichita for testing quality assurance.
Thank you to Anna Yeddi for reviewing engineering accessibility characteristics.
Thank you to Dale Harvey for designing the QuickAction system that this feature plugs into.
Leonardo Paffi improved our testing capabilities by allowing us to serve inline HTML on the fly, rather than having to add an HTML file into the repository. This eases the burden of overhead to test special-case language characteristics, and ultimately helped us release Norwegian Bokmål (Bug 1996967).
Leonardo Paffi improved our handling of the Norwegian (no) macrolanguage tag to be compatible with our support for Norwegian Bokmål translations (Bug 2019123).
Tyler Etchart removed in-code references to quality estimation models, which are not utilized during translation inference within Firefox (Bug 1889753).
Tyler Etchart updated the generated Translations WASM JavaScript code to have explicit comments expressing that the file is generated and should not be modified (Bug 1968038).
Tyler Etchart removed some old dead code related to prior ideas for Translations within Firefox (Bug 1996681).
Emilio Cobos Álvarez fixed an issue where the checkboxes within the Full-Page Translations Panel settings menu were no longer appearing (Bug 2010234).
Phabricator, moz-phab, and Lando
Connor Sheehan implemented ETL from Lando to STMO, which allows us to get better visibility into Lando's performance and usage, e.g., of the new uplift feature.
Zeid continues spearheading the GitHub PR pilot, gathering feedback and fixing usability issues as they are reported. One key focus was supporting triggering the Code Review Bot on request, via pushes to try.
Olivier Mehani added backward-compatible support for try pushes in the new Lando instance. It will become the default soon, but you can try it out now by setting LANDO_TRY_CONFIG=lando-prod-new in your environment prior to running `mach try`.
Olivier Mehani landed a small change to Lando to make the current Tree Status visible on the main landing pages (Bug 2025629). This, together with the landing queue visible on the job details pages, should help users better understand why jobs sometimes seem to take longer than expected to land.
moz-phab had several new releases:
Suhaib Mujahid added the --ai flag and submit.ai_review commit option to request an AI review of patches at submission time.
Johan Lorenzo added the --test-plan flag to enable submitting a test plan from the CLI, which is useful for working with AI agents.
Julien Cristau updated the docker images for many build and related tasks from Debian 12 to Debian 13
Relman streamlined the release process by removing the Nightly soft code freeze and adjusting the Beta schedule to reduce end-of-cycle friction, create more effective stabilization time, and simplify release candidate workflows.
We now ship to the Xiaomi Store.
Delivered mid-cycle ESR dot releases to address critical security fixes ahead of the standard cadence, improving responsiveness while coordinating across multiple ESR versions and release channels.
Andrew Halberstadt helped support and build out the Firefox Enterprise release pipeline.
Release Operations
Mark Cornmesser improved Windows hardware management, including self-configuration and self-deployment capabilities, automated BIOS management, and standardization of BIOS settings across performance testing environments to ensure consistency and reliability.
Other
Thanks to Bug 2013401, mozilla::Maybe<scalar_type> now generates better and denser code, which led to a reduction of 300 kB in libxul.so
Thanks to a new clang-tidy pass, we've been able to automatically add std::move in locations where it could improve performance (see Bug 2012658)
On 2026-05-01, docs.rs will make a breaking change to its build
behavior.
Today, if a crate does not define a targets list in its
docs.rs metadata, docs.rs builds documentation for a default
list of five targets.
Starting on 2026-05-01, docs.rs will instead build documentation for only
the default target unless additional targets are requested explicitly.
This is the next step in a change we first introduced in 2020, when docs.rs
added support for opting into fewer build targets. Most crates do not compile
different code for different targets, so building fewer targets by default is a
better fit for most releases. It also reduces build times and saves resources on
docs.rs.
This change only affects:
new releases
rebuilds of old releases
How is the default target chosen?
If you do not set default-target, docs.rs uses the target of its build
servers: x86_64-unknown-linux-gnu.
You can override that by setting default-target in your
docs.rs metadata:
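For example, in Cargo.toml (this uses the documented docs.rs metadata table; the second entry in the targets list is just an illustration of opting into an additional target):

[package.metadata.docs.rs]
default-target = "x86_64-unknown-linux-gnu"
# Request additional targets explicitly:
targets = ["x86_64-unknown-linux-gnu", "wasm32-unknown-unknown"]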
Rust's WebAssembly targets are soon going to experience a change which has a
risk of breaking existing projects, and this post is intended to notify users of
this upcoming change, explain what it is, and how to handle it. Specifically, all
WebAssembly targets in Rust have been linked using the --allow-undefined flag
to wasm-ld, and this flag is being removed.
What is --allow-undefined?
WebAssembly binaries in Rust today are all created by linking with wasm-ld.
This serves a similar purpose to ld, lld, and mold, for example; it
takes separately compiled crates/object files and creates one final binary.
Since the first introduction of WebAssembly targets in Rust, the
--allow-undefined flag has been passed to wasm-ld. This flag is documented
as:
--allow-undefined Allow undefined symbols in linked binary. This options
is equivalent to --import-undefined and
--unresolved-symbols=ignore-all
The term "undefined" here specifically means with respect to symbol resolution in wasm-ld itself. Symbols used by wasm-ld correspond relatively closely to what native platforms use, for example all Rust functions have a symbol associated with them. Symbols can be referred to in Rust through extern "C" blocks, for example:
The symbol mylibrary_init is an undefined symbol. This is typically defined by
a separate component of a program, such as an externally compiled C library,
which will provide a definition for this symbol. By passing --allow-undefined
to wasm-ld, however, it means that the above would generate a WebAssembly
module like so:
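Sketched in WebAssembly text format (simplified; "env" is the module name wasm-ld assigns to such imports, as mentioned later in this post):

(module
  ;; the unresolved symbol is silently turned into an import
  (import "env" "mylibrary_init" (func $mylibrary_init))
  ;; ... rest of the module ...
)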
This means that the undefined symbol was ignored and ended up as an imported
symbol in the final WebAssembly module that is produced.
The precise history here is somewhat lost to time, but the current understanding
is that --allow-undefined was effectively required in the very early days of
introducing wasm-ld to the Rust toolchain. This historical workaround stuck
around till today and hasn't changed.
What's wrong with --allow-undefined?
By passing --allow-undefined on all WebAssembly targets, rustc is introducing
diverging behavior between other platforms and WebAssembly. The main risk of
--allow-undefined is that misconfiguration or mistakes in building can
result in broken WebAssembly modules being produced, as opposed to compilation
errors. This means that the proverbial can is kicked down the road and lengthens
the distance from where the problem is discovered to where it was introduced.
Some example problematic situations are:
If mylibrary_init was typo'd as mylibraryinit then the final binary would
import the mylibraryinit symbol instead of calling the linked
mylibrary_init C symbol.
If mylibrary was mistakenly not compiled and linked into a final
application then the mylibrary_init symbol would end up imported rather than
producing a linker error saying it's undefined.
If external tooling is used to process a WebAssembly module, such as wasm-bindgen or wasm-tools component new, these tools don't know what to do with "env" imports by default and they are likely to provide an error message of some form that isn't clearly connected back to the original source code and where the symbols was imported from.
For web users if you've ever seen an error along the lines of Uncaught TypeError: Failed to resolve module specifier "env". Relative references must start with either "/", "./", or "../". this can mean that "env" leaked into the final module unexpectedly and the true error is the undefined symbol error, not the lack of "env" items provided.
All native platforms consider undefined symbols to be an error by default, and
thus by passing --allow-undefined rustc is introducing surprising behavior on
WebAssembly targets. The goal of the change is to remove this surprise and
behave more like native platforms.
What is going to break, and how to fix?
In theory, not a whole lot is expected to break from this change. If the final
WebAssembly binary imports unexpected symbols, then it's likely that the binary
won't be runnable in the desired embedding, as the desired embedding probably
doesn't provide the symbol as a definition. For example, if you compile an
application for wasm32-wasip1 if the final binary imports mylibrary_init
then it'll fail to run in most runtimes because it's considered an unresolved
import. This means that most of the time this change won't break users, but
it'll instead provide better diagnostics.
The reason for this post, however, is that it's possible users could be
intentionally relying on this behavior. For example your application might have:
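The elided example is presumably an extern block that names its import module explicitly, along these lines:

#[link(wasm_import_module = "env")]
extern "C" {
    // Declaring the import module marks this as an intentional import,
    // not an undefined symbol.
    fn mylibrary_init();
}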
This will have the same behavior as before and will no longer be considered an
undefined symbol to wasm-ld, and it'll work both before and after this change.
Affected users can also compile with -Clink-arg=--allow-undefined as well to
quickly restore the old behavior.
When is this change being made?
Removing --allow-undefined on wasm targets is being done in
rust-lang/rust#149868. That change is slated to land in nightly soon, and will then be released with Rust 1.96 on 2026-05-28. If you see any issues as a result of this change, please don't hesitate to file an issue on rust-lang/rust.
We're excited to highlight the work of Serah Nderi, a volunteer contributor to Pontoon who has quickly made a meaningful impact on the project. Since getting involved earlier this year, Serah has contributed a steady stream of improvements, including 10 patches in just the past two months, ranging from good-first issues to fully fledged features.
Serah joined the Mozilla community as an Outreachy intern on the SpiderMonkey team, where she demonstrated both strong technical skills and a passion for languages. That combination naturally led her to Pontoon, where she has been contributing not only as a developer but also as a localizer, exploring translations for languages like Kiswahili and Kikuyu.
Her latest contribution introduces long-awaited functionality for editing and deleting comments in Pontoon, improving collaboration and moderation workflows for translators and project managers alike.
You can follow Serahâs work on GitHub and connect with her on LinkedIn.
Last year, I earned a B1 certification in German and TOPIK I certification in Korean. This year, I decided to explore something at the intersection of technology and languages, which led me to start contributing to Pontoon.
Pontoon is Mozillaâs web-based localization platform, used by thousands of contributors to translate Firefox and other Mozilla projects into hundreds of languages.
I began by adding Kiswahili translations and exploring localization for my mother tongue, Kikuyu. While Kikuyu doesn't yet have a project manager and presents unique challenges, it made the experience even more interesting. After working on a few good-first issues, I decided to take on a larger challenge: implementing a full feature, the ability for users to edit and delete comments.
Previously, users could only add comments. If a comment contained a typo or needed clarification, the only option was to add another comment. This often led to cluttered discussions and made collaboration less efficient. I set out to improve this experience.
Under the hood
The frontend implementation had a natural starting point. Pontoon comments already included actions like pinning, so adding Edit and Delete followed a similar interaction pattern.
One of the main challenges was handling comment content. Comments in Pontoon are stored as serialized HTML paragraphs with support for @mentions. To enable editing, I needed to deserialize this stored content back into the editor so that users would see a fully functional input field pre-populated with their original comment, including mentions. When saving, the content is serialized again before being stored.
In addition to the UI changes, I implemented the backend views for editing and deleting comments, along with the necessary tests. The final result allows users to edit and delete their own comments, while project managers can delete any comment for moderation purposes.
This feature makes discussions in Pontoon more flexible, reduces noise from duplicate comments, and improves the overall collaboration experience for localization teams.
My name is Cláudio Esperança, and I'm from Portugal. I speak Portuguese and English. I have been contributing to Mozilla localization projects for more than 18 years.
Mozilla localization
Q: How did you first get involved in localization, and what drew you to Mozilla?
A: Curiosity has always driven me to understand how things work. Discovering open-source software, specifically Firefox and Linux, opened a world of limitless possibilities. I saw software translation not only as a way to improve my English but also as a great opportunity to start collaborating and contributing to the Mozilla mission. I began by following the community email list, contributing translations, and attending events. Before I knew it, I was leading the Portuguese translation team.
Q: You contribute across many projects in Pontoon. Is there a product that stands out to you? Have you shared with family and friends what you have been doing and promoting the products?
A: Firefox is always my favorite and the browser I use most regularly, as I trust it with my personal data. However, I contribute to all projects to provide users with more people-focused, secure, and private options, in a market often dominated by other vested interests.
I don't actively promote my work, as I prefer when people discover Mozilla products because they are the best solution for their needs. It may seem counterintuitive, but I love seeing someone use Firefox, or another Mozilla product, not because they feel pressured by something I said, but because they've discovered it's the best solution for them. It is very gratifying to know that the strings I translate are used by thousands of people every day, including family, friends, coworkers, and many other people whom I will probably never know.
Q: What have been some of the most rewarding or impactful projects youâve localized?
A: Firefox is undoubtedly the most impactful due to its fundamental role on the web. I also found Firefox OS particularly interesting: the concept was great, and it had great potential, but unfortunately it didn't go as far as I would have liked. I still hope to see it reborn in some form one day.
Q: What advice would you give to someone considering contributing to Mozilla localization today?
A: One of the best things about L10n at Mozilla is how accessible localization has become. You don't need to be a developer to make a difference. Whether you start with a smaller project to build up confidence or dive straight into a high-impact application, focus on a tool you love or explore something entirely new, the choice is yours. The most important step is simply to begin. And there's no such thing as a "small" contribution: every translated word helps to build a more inclusive internet for everyone.
Community & leadership
Cláudio and Kit, celebrating 18+ years of Mozilla localization.
Q: How does the Portuguese localization community collaborate today?
A: The Portuguese community is small, and we don't have many members with recurring contributions. One of the reasons they give for this disengagement is that they feel their help isn't needed because our translation completion rate is high (which isn't true at all). There are other reasons, like lack of time (the main reason), and the fact that a large portion of the user base is pretty comfortable using software in English, Brazilian Portuguese, or Spanish.
Regarding community communication, while we previously used various discussion groups, we now primarily communicate via email and direct contact, with most of the work happening directly on Pontoon.
Q: You've been leading the team for many years. How do you approach mentorship and conflict resolution?
A: When I started, I didn't have a mentor, so I had to rely on Mozilla's resources and some reverse engineering. Today, platforms like Pontoon and SUMO make the process much easier for volunteers. Regarding conflicts, like all communities, we sometimes face significant challenges regarding personality and linguistic differences. Overall, we try to maintain a positive, constructive, and inclusive attitude, where all well-founded contributions are welcome. We use a democratic process for most decisions, with a "benevolent dictator" model as a final fallback if consensus cannot be reached.
Professional background & skills
Q: What is your professional background, and how has it influenced your localization work?
A: I have a background in software engineering (a Master's in Mobile Computing, a Bachelor's in Information Systems, and technical training in TCP/IP networks, Linux, and other technologies). This experience helps me handle technical aspects of software translation like placeholder syntax, HTML tags, and technical terminology, though modern tools like Pontoon have made localization much more accessible to everyone.
Q: How has localization influenced your professional work?
A: Localization provides a unique perspective on applications by allowing a deeper understanding of how they work. We get to learn about the various options available in the software, sometimes hidden in the more obscure areas of the application. Unlike more traditional applications that rely on older technologies, applications developed within the Mozilla ecosystem are at the forefront of web innovation, allowing early exposure to the future of the Internet. As a software engineer, I incorporate these insights into my own projects to create more modern and user-friendly solutions.
Q: After 18+ years, what keeps you motivated to continue contributing?
A: Our mission remains unfinished. We have a responsibility to ensure the internet remains a global public resource, one where English is not a barrier to entry. In an era where AI and massive platforms are consolidating power, the need for diverse alternatives has never been more urgent. Localizing Mozilla products into my native language is my way of practicing digital activism. It's incredibly rewarding to know that a handful of translated sentences can instantly improve the lives of so many people. The mission continues...
Interesting facts
Q: Tell us something unexpected about yourself.
A: How someone born on an island in the Azores, who lived in half a dozen different cities in a country as small as Portugal, and who has worked as a farmer, shepherd, beekeeper, construction worker, electrician, trainer, programmer, and software engineer ended up translating world-class open-source software is a difficult story to explain. Ultimately, I think it all comes back to curiosity...
Welcome back from the Thunderbird development team!
Reflecting back, the first quarter of the year has been a mix of deep technical focus and forward-looking planning. Much of the team's energy has gone into tackling some of the more complex, "gnarly" parts of our projects to land key milestones. In parallel, we've been laying the groundwork for what's next, from ongoing hiring efforts to aligning our goals with broader company initiatives that support the roadmap ahead.
Security & Hardening
We've continued to make good progress on improving Thunderbird's security and privacy model, not just at a technical level, but in ways that are more usable and transparent for everyday users.
Unobtrusive Signatures
Kai recently presented his work at the IETF on Unobtrusive Signatures, which aims to make email signatures more reliable and less intrusive. The goal is to ensure message authenticity can be verified automatically and consistently, without requiring constant user attention or confusing workflows.
Improving Key Safety and Revocation
Weâre also exploring better ways to handle key revocation. Today, users often have no reliable way to know when a key should no longer be trusted. A proposed revocation service aims to improve how this information is distributed, while avoiding overly centralized or privacy-invasive approaches.
Moving Beyond "Encrypted or Not"
A major shift underway is how we present trust in encrypted email.
Instead of treating encryption as a simple on/off state, we're moving toward a graduated confidence model. Thunderbird will evaluate the strength of each recipient's key (whether it's manually verified, CA-backed, or unverified) and present an overall confidence level to the user.
This allows encryption to work more automatically, while still giving users clear insight into how much trust they can place in a given message. Kai has worked with the design team and internal subject matter experts to refine the UX in this area and is getting close to a final UI.
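As a rough sketch of what a graduated model like that could look like (hypothetical types only, not Thunderbird's actual code): each recipient key gets a confidence level, and a message is only as trustworthy as its weakest recipient key.

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum KeyConfidence {
    Unverified,       // key found, but nobody has vouched for it
    CaBacked,         // chains to a certificate authority
    ManuallyVerified, // user compared fingerprints out of band
}

fn message_confidence(recipients: &[KeyConfidence]) -> Option<KeyConfidence> {
    // The message-level confidence is the minimum across recipient keys.
    recipients.iter().copied().min()
}

fn main() {
    let keys = [KeyConfidence::ManuallyVerified, KeyConfidence::Unverified];
    assert_eq!(message_confidence(&keys), Some(KeyConfidence::Unverified));
}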
Ongoing Security Fixes and Improvements
Alongside these larger initiatives, Kai, Magnus, and Justin have been actively triaging and addressing security issues and long-standing feature gaps. Recent work includes:
Enabling search within encrypted messages
Fixing issues with incorrect IMAP literal size handling
Addressing a link spoofing vulnerability (CVE-2025-13015)
Together, these efforts reflect a broader direction: making strong security more accessible, while ensuring users remain informed and in control.
Exchange Email Support
Since our last update in February, the team has been moving quickly and has now completed Phase 1 and Phase 2 of the Graph API implementation for email, with Phase 3 already underway.
These phases focused on establishing a solid foundation and delivering core functionality required for real-world usage. Highlights include:
Graph API login with OAuth
Connectivity checks and account validation
Autodiscover support for Graph endpoints
Folder synchronization (fetching and populating folder hierarchy)
Sending messages (including support for different recipient types)
Support for POST requests and improved request handling
Delta query support for efficient syncing
Support for pageable results (x-ms-pageable)
Test infrastructure for Graph (xpcshell and mochitests)
Continued backend refactoring and interoperability work (C++/Rust integration, shared protocol components)
With these milestones in place, Phase 3 is now underway, focusing on deeper message handling (such as fetching message headers) and continued feature expansion.
While onboarding a new junior team member, John has also made a strong impact on the add-ons ecosystem, reaching an important milestone in the effort to move away from legacy, insecure experiments.
A key piece of this work is the VFS Toolkit, which leverages the Origin Private File System and introduces a more secure and maintainable way for WebExtensions to interact with the file system. As part of this, John developed a provider that allows extensions to access a user's local home folder through a controlled interface.
Under the hood, this works by combining WebExtensions with a small native helper application. The extension communicates with this helper via native messaging, allowing safe, permissioned access to local files, something that modern WebExtensions cannot do directly.
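The framing such a helper speaks is the standard WebExtensions native-messaging wire format: a 32-bit message length in native byte order, followed by that many bytes of UTF-8 JSON. Here is a minimal Rust sketch of a helper's read/reply loop; the request schema in the comment is hypothetical, not the VFS Toolkit's actual protocol.

use std::io::{self, Read, Write};

fn read_message(input: &mut impl Read) -> io::Result<String> {
    // 4-byte length prefix in native byte order, then the JSON body.
    let mut len_buf = [0u8; 4];
    input.read_exact(&mut len_buf)?;
    let len = u32::from_ne_bytes(len_buf) as usize;
    let mut body = vec![0u8; len];
    input.read_exact(&mut body)?;
    String::from_utf8(body).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}

fn write_message(output: &mut impl Write, json: &str) -> io::Result<()> {
    output.write_all(&(json.len() as u32).to_ne_bytes())?;
    output.write_all(json.as_bytes())?;
    output.flush()
}

fn main() -> io::Result<()> {
    // The extension writes requests to the helper's stdin and reads replies
    // from its stdout.
    let request = read_message(&mut io::stdin())?;
    let _ = request; // e.g. a hypothetical {"op": "list", "path": "~/Documents"}
    write_message(&mut io::stdout(), r#"{"ok": true}"#)
}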
The current focus is to enhance the Calendar API ahead of the next ESR release, with some of this work tracked here.
Linux System Tray: Contributor Spotlight
We'd like to give a special shoutout this month to Christophe Henry, who has gone above and beyond with an ambitious contribution to improve Thunderbird's system tray integration on Linux.
This work isn't a small patch: it spans multiple parts of the codebase, including JavaScript, C++, and Rust, and even bridges into XPCOM interfaces. The goal is to unify how unread mail indicators and tray icons behave across platforms, which is a surprisingly complex problem once you account for the differences between Linux environments, Windows, and macOS.
What really stood out was the level of persistence behind this contribution. Over multiple iterations, Christophe worked through build failures, lint issues, platform quirks, and detailed review feedback, all while tackling tricky problems like image encoding, system tray APIs, and cross-language integration.
This kind of work is rarely straightforward, and often requires deep dives into unfamiliar parts of the stack. Seeing it pushed forward with this level of care and determination is exactly what makes open source collaboration so powerful.
Thank you for the dedication and effort! It truly makes a difference.
Calendar UI Rebuild: Front End Team Shoutout
A huge shoutout to the Front End team, who recently met in person in London for a work week and absolutely delivered.
Getting the chance to collaborate face-to-face made a real difference. The team came together to align on priorities, cut through complexity, and focus on what mattered most, and the results speak for themselves. They successfully pushed through the Event Read and Enhancements milestones at an impressive pace, clearing the path to shift full attention onto the First Time User Experience (FTUE) work.
It's not easy to balance quality, speed, and coordination across a distributed team, but this was a great example of what happens when everything clicks. Thoughtful planning, strong collaboration, and excellent execution all came together to move things forward in a big way.
Following that strong push on Calendar, the front end team turned their focus to the First Time User Experience and made remarkable progress in a very short time.
In just a few weeks, the majority of the FTUE work has been completed, with only a handful of smaller items remaining in review. This included not only delivering the core experience, but also laying the groundwork for future improvements (such as early components of the "Sign in with Thundermail" flow, already available behind a preference).
Pulling together a milestone of this size on such a tight timeline is no small feat. It reflects both the clarity of planning coming out of the work week, and the teamâs ability to execute quickly without losing sight of the bigger picture.
Maintenance, Upstream adaptations, Recent Features and Fixes
Over the past couple of months, the team has continued to navigate changes from upstream dependencies that occasionally impact build stability, test reliability, and CI. While this is a normal part of working in a large, shared ecosystem, it does require ongoing attention, particularly when tracking down the root cause of regressions and ensuring Thunderbird-specific changes remain on solid ground. Some days it feels like a full-time job!
Alongside this, we've seen strong support from both the team and the wider contributor community, with a steady stream of fixes and improvements landing across the codebase.
This collective effort has resulted in a number of impactful patches landing recently, with the following being particularly helpful:
If you would like to see new features as they land, and help us find some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.
We've started working on accessibility support for web content (@alice, @delan, #42333, #42402), gated by a pref (--pref accessibility_enabled).
Each webview will be able to expose its own accessibility tree, which the embedder can then integrate into its own accessibility tree.
As part of this work:
we added a Servo API for activating accessibility features (@delan, @alice, #42336), although this has since become a WebView API
We've started implementing document.execCommand() (@TimvdLippe, #42621, #42626, #42750), gated by a pref (--pref dom_exec_command_enabled).
This feature is also enabled in experimental mode, and together with contenteditable, it's critical for rich text editing on the web.
The work done in February includes:
contentEditable on HTMLElement, for execCommand() only, excluding any support for interactive editing (@TimvdLippe, #42633, #42734)
Developer tools
DevTools has seen some big improvements in February!
When enabled in servoshell, the DevTools server is more secure by default, listening only on localhost when only a port number is specified (@Narfinger, #42502).
You can open the port for remote debugging by passing a full SocketAddr, such as --devtools=[::]:6080 or --devtools=0.0.0.0:6080.
In the Inspector tab, you can now edit DOM attributes, and the DOM tree updates when attributes change (@simonwuelker, #42601, #42785).
You can now list the event type and phase of event listeners attached to a DOM node as well (@simonwuelker, #42355).
In the Console tab, objects can now be previewed when passed to console.log() and friends (@simonwuelker, #42296, #42510, #42752), and boolean values are now syntax highlighted (@pralkarz, #42513).
Back in August, we added a servo:preferences page to servoshell that allows you to set some of Servo's most common preferences at runtime (@jdm, #38159).
servoshell now has a servo:config page (@arihant2math, #40324), allowing you to set any preference, even internal ones.
Note that preference changes are not yet persistent, and not all prefs take effect when changed at runtime.
You can now press F5 to reload the page in servoshell (@Narfinger, #42538), in addition to pressing Ctrl+R or Cmd+R.
We've fixed a regression where the caret stopped being visible in the location bar (@mrobinson, #42470).
Embedding API
Servo is now easier to build offline, using the complete source tarball included in each release (@jschwe, #42852).
Go to a release on GitHub, then download servo-[version]-src-vendored.tar.gz to get started.
We're reworking our gamepad API, with WebViewDelegate::play_gamepad_haptic_effect and stop_gamepad_haptic_effect being replaced by a new API that (as of the end of February at least) is known as GamepadProvider (@atbrakhi, #41568).
The old methods are no longer called (#43743), and may be removed at some point.
We now have better diagnostic output when we fail to create an OpenGL context (@mrobinson, #42873), including when the OpenGL versions supported by the device are too old.
Servo::constellation_sender was removed (@jdm, #42389), since it was never useful to embedders.
If you navigate to a video file or audio file as a document, the player now has controls (@webbeef, #42488).
Images now rotate according to their EXIF metadata by default (@rayguo17, #42567), like they would once we add support for "image-orientation: from-image".
Weâre implementing system-font-aware font fallback (@mrobinson, #42466), with support for this on macOS landing this month (@mrobinson, #42776).
This allows Servo to render text in scripts that are not covered by web fonts or any of the fonts on Servoâs built-in lists of fallback fonts, as long as they are covered by fonts installed on the system.
Servo now supports the newer pointermove, pointerdown, pointerup, and pointercancel events (@webbeef, #41290).
The older touchmove, touchstart, touchend, and touchcancel events continue to be supported.
The default language in "Accept-Language" and navigator.language is now taken from the $LANG environment variable if present (@webbeef, #41919), rather than always being set to en-US.
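The conversion involved looks roughly like this (a sketch only, not Servo's actual code): POSIX locale strings such as en_GB.UTF-8 map to BCP 47 language tags such as en-GB, with a fallback to en-US.

fn language_from_env(lang: Option<&str>) -> String {
    lang.and_then(|value| {
        // Strip encoding (".UTF-8") and modifier ("@euro") suffixes, then
        // turn the POSIX underscore into the BCP 47 hyphen.
        let base = value.split(['.', '@']).next()?;
        if base.is_empty() || base == "C" || base == "POSIX" {
            None
        } else {
            Some(base.replace('_', "-"))
        }
    })
    .unwrap_or_else(|| "en-US".to_string())
}

fn main() {
    assert_eq!(language_from_env(Some("fr_FR.UTF-8")), "fr-FR");
    assert_eq!(language_from_env(Some("C")), "en-US");
    assert_eq!(language_from_env(None), "en-US");
}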
<input type=color> now supports any CSS color value (@simonwuelker, #42275), including the more complex values like color-mix().
We've also landed the colorspace attribute (@simonwuelker, #42279), but only in the web-facing side of Servo for now, not the embedding API or in servoshell.
Cookies are now more conformant (@sebsebmc, #42418, #42427, #42435).
"Expires" and "Max-Age" attributes are now handled correctly in "Set-Cookie" headers, get() and getAll() on CookieStore now trim whitespace in cookie names and values, and the behaviour of set() on CookieStore has been improved.
<iframe> elements are now more conformant in how load events are fired on the element and its contentWindow (@TimvdLippe, #42254), although there are still some bugs.
This has long behaved incorrectly in Servo, and it has historically caused many problems in the Web Platform Tests.
We've started implementing Largest Contentful Paint timings (@shubhamg13, #42024), and we've landed a bunch of improvements to how First Contentful Paint timings work in Servo:
When geolocation is enabled (--pref dom_geolocation_enabled), navigator.geolocation.getCurrentPosition() and watchPosition() now support the optional errors argument (@arihant2math, #42295).
We now support the "-webkit-text-security" property in CSS (@mrobinson, #42181), which is not specified anywhere but required for MotionMark.
Layout has seen a lot of performance work in February, with our main focus being on improving incremental layout of the box tree and fragment tree.
We now have our first truly incremental box tree layout (@mrobinson, @Loirooriol, @lukewarlow, #42700), rather than our previous "dirty roots"-based approach.
Depending on how they were damaged, some boxes for floats (as above, #42816), independent formatting contexts (as above, #42783), and their descendants (as above, #42582) can now be reused, and they avoid damaging their parents (as above, #42847).
We also destroy boxes with "display: none" earlier in the layout process (as above, #42584).
Incremental fragment tree layout is improving too!
Whereas we previously had to decide whether to run fragment tree layout in an "all or nothing" way, we can now reuse cached fragments in independent formatting contexts (@mrobinson, @Loirooriol, @lukewarlow, #42687, #42717, #42871).
We can also measure how much work is being done on each layout (as above, #42817).
Servo uses shared memory for many situations where copying data over channels would be too expensive, such as for images and fonts.
In multiprocess mode (--multiprocess), we use the operating system to create the shared memory in a way that can be shared with other processes, such as shm_open(3) or CreateFileMappingW, but this consumes resources that can sometimes be exhausted.
We only need to use those kinds of shared memory in multiprocess mode, so we've reworked Servo to use Arc<Vec<u8>> in single-process mode (@Narfinger, #42083), which should avoid resource exhaustion.
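The idea, sketched very loosely (these are not Servo's actual types, and the OS-backed variant is elided): pick a cheap process-local backing store unless cross-process sharing is actually required.

use std::sync::Arc;

enum SharedBytes {
    // Single-process mode: plain reference counting, no OS resources held.
    Local(Arc<Vec<u8>>),
    // Multiprocess mode would add an OS shared-memory variant here, backed
    // by shm_open(3) or CreateFileMappingW (omitted from this sketch).
}

impl SharedBytes {
    fn new(data: Vec<u8>, multiprocess: bool) -> Self {
        if multiprocess {
            unimplemented!("would create an OS shared-memory segment here");
        }
        SharedBytes::Local(Arc::new(data))
    }

    fn bytes(&self) -> &[u8] {
        match self {
            SharedBytes::Local(data) => data,
        }
    }
}

fn main() {
    let font = SharedBytes::new(vec![0u8; 1024], false);
    assert_eq!(font.bytes().len(), 1024);
}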
Parsing web pages is complicated: we want pages to render incrementally as they stream in from the network, and we want to prefetch resources, but scripts can call document.write(), which injects markup "on the spot".
This is further complicated if that markup also contains a <script>.
We've recently landed some fixes to Servo's async parser (@simonwuelker, #42882, #42910), which handles these issues more efficiently.
This is currently an obscure and somewhat buggy feature (--pref dom_servoparser_async_html_tokenizer_enabled), but if we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don't use document.write(), and even improve the html5ever API for the ecosystem.
We've landed some fixes for issues preventing Servo from being built on Windows arm64 (@dpaoliello, @npiesco, #42371, #42341).
Work to enable Windows arm64 as a build platform is ongoing (@npiesco, #42312).
<img height> now takes the default <img width> from the aspect ratio of the image (@Loirooriol, #42577), rather than using a width of 300px by default.
<svg width=0> and <svg height=0> now take the default width and height (respectively) from the aspect ratio of the <svg viewBox> (@Loirooriol, #42545).
We've fixed crashes in layout: when using "background-repeat: round" (@mrobinson, #42303), when using "list-style-image" or "content: <image>" (@lukewarlow, #42332), when calling elementFromPoint() on Document or ShadowRoot (@mrobinson, @Loirooriol, @lukewarlow, #42822), and when handling layout queries like getBoundingClientRect() on inline <svg> (@jdm, @Loirooriol, #42594).
We've also fixed crashes when using multitouch input (@yezhizhen, #42350), when using MediaStreamAudioSourceNode (@mrobinson, #42914), when calling add() on HTMLOptionsCollection (@mrobinson, #42263), when we fail to open a database for IndexedDB (@jdm, @mrobinson, #42444), and when certain pages are run with a mozjs debug build (@Gae24, #42428).
Donations
Thanks again for your generous support!
We are now receiving 6985 USD/month (a 0.4% change from January) in recurring donations.
This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, where 32 GitHub users (a change of one from January) that depend on Servo are sponsoring us.
If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support.
If you're interested in this kind of sponsorship, please contact us at join@servo.org.
And are you surprised? After all, Macs have their own bespoke GPUs now, and RAM is on-die. (Glad I sprang for the 16GB option on my M1 Air; that has greatly lengthened its useful service life.) If Apple isn't shipping computers with DIMM slots anymore, then why would they ship PCIe slots for anything else? It wasn't like there were many options you could put in the last iteration anyway, because it too had a non-upgradeable GPU and fixed RAM. Okay, okay, you could stick a whole bunch of NVMe sticks in it and it had good cooling. Was that worth it?
This marks the end of the venerable tower Macs that we loved in the PowerPC days. The Mac Studio is the new Mac Pro. We were always at war with Eastasia.
The future of AI should belong to all of humanity, well beyond a handful of countries or companies. For that to happen, AI needs to be open, trusted, and built in ways that give people, institutions, and nations real choices. That's why, today, Mozilla is announcing a strategic partnership with Mila – Quebec Artificial Intelligence Institute to advance open source and sovereign AI capabilities.
This partnership marks a landmark strategic collaboration for both organizations and Mozilla's first-ever partnership with a major AI research lab. It is designed to grow over time, with an inaugural project that focuses on the intersection of trust and usability, including private memory architectures for AI agents.
Mila brings world-class research depth and a proven track record of moving ideas into systems, from fundamental breakthroughs to applied tools and the diffusion of technology. Mozilla brings deep open source experience, a vibrant developer community, and the ecosystem instincts needed to turn research into something that spreads. The partnership is designed to show that open source AI can close the gap between cutting-edge research and real-world impact.
As we saw in the web era, a robust open source software stack can democratize and accelerate innovation in dramatic ways. The same opportunity exists in AI, across compute, models, data, and developer experience, and much of the stack is already being built in the open. But gaps remain, particularly in the layers that determine whether AI is trustworthy, private, and built for a world with many languages, many cultures, and many legitimate ways of organizing society. If we can close those gaps, open source AI becomes a genuine option for the people and institutions that need it most.
"We are working to build a future where AI development is rooted in openness, privacy, and humanity," said Mark Surman, president of Mozilla. "This partnership is a delivery vehicle for that vision, and for breakthroughs that will help governments, developers, and companies alike. Canada can lead on AI sovereignty; we're joining with Mila to make it happen."
Together, Mila and Mozilla will develop the technologies and approaches that reduce dependence on closed systems and create more room for transparency, accountability, and shared innovation. The partnership also lays the groundwork for middle-power cooperation in AI: Open source projects have consistently provided the framework for technical collaboration across geographies and jurisdictions. Both organizations welcome research institutions, developers, and like-minded organizations to help fill the stack.
This is the first of what both organizations intend to be a sustained and growing body of work.
Read more about our Open Source AI Strategy here. Learn more about Mila here.
The Rust team has published a new point release of Rust, 1.94.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.94.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
What's in 1.94.1
Rust 1.94.1 resolves three regressions that were introduced in the 1.94.0 release.
In January, we introduced our Nightly package for RPM-based Linux distributions. Today, we are thrilled to announce it is now available for Firefox Beta!
Firefox Beta is great for testing your sites in a version of Firefox that will reach regular users in the coming weeks. If you find any issues, please file them on Bugzilla.
Switching to Mozilla's RPM repository allows Firefox Beta to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:
Better performance thanks to our advanced compiler-based optimizations,
Updates as fast as possible, because the .rpm management is integrated into Firefox's release process,
Hardened binaries with all security flags enabled during compilation,
No need to create your own .desktop file.
If you have Mozillaâs RPM repository already set up, you can simply install Firefox Beta with your package manager. Otherwise, follow the setup steps below.
If you are on Fedora (41+), or any other distribution using dnf5 as the package manager
Note: repo_gpgcheck=0 disables GPG signature checking of the repository metadata. This is instead safeguarded by HTTPS, and the packages themselves remain signed (gpgcheck=1).
If you are on openSUSE or any other distribution using zypper as the package manager
sudo rpm --import https://packages.mozilla.org/rpm/firefox/signing-key.gpg
sudo zypper ar --gpgcheck-allow-unsigned-repo https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-beta
For other RPM based distributions (RHEL, CentOS, Rocky Linux, older Fedora versions)
The firefox-beta package will not conflict with your distribution's Firefox package if you have it installed; you can have both at the same time!
Adding language packs
If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):
sudo dnf install firefox-beta-l10n-fr
You can list the available languages with the following command:
You can use JJ's built-in editor for conflict resolutions, but I've found it difficult to follow. A recommendation from co-workers was to use Meld, and that has worked quite well once I (begrudgingly) accepted that I needed to download another single-purpose app.
Today, another co-worker Andrey Zinovyev found out that we can use Android Studio's (IntelliJ IDEA's really) built-in merge tool to resolve the three-way merge. This is more convenient for me since I spend most of my time here already, so using it as a general purpose merge editor for my work projects is quite nice.
[ui]
# Resolve conflicts with the merge tool named "studio", defined below.
merge-editor = "studio"

[merge-tools.studio]
# IDEA's command-line merge mode: studio merge <left> <right> <base> <output>
merge-args = ["merge", "$left", "$right", "$base", "$output"]
program = "/Users/jalmeida/Applications/Android Studio Nightly.app/Contents/MacOS/studio"
Today we're introducing a free built-in VPN in Firefox, a new IP-protection feature designed to keep you even more private while you browse. We're starting by offering an industry-leading 50 gigabytes of free VPN browsing each month.
Firefox has long focused on building privacy tools directly into the browser to protect you online. Over the years, we've introduced world-class protections that block known trackers, reduce fingerprinting, and limit how companies can follow people across the web. Our goal has been consistent: make meaningful privacy protections accessible to Firefox users every day.
Firefox is the only major browser to include a built-in VPN like this for free, giving you more control over your privacy, right where you browse.
Privacy built into the browser
Every time you visit a website, your IP address is shared automatically. IP addresses help websites know where to send information back to your device, but they can also be used to approximate your location, link your browsing activity across sites, and build logs of your online behavior. It's one of many ways companies track activity across the internet.
Additionally, when you're using public Wi-Fi at a coffee shop, in a hotel, or in your dorm, people can spy on your network traffic and see which websites you might be visiting.
At Mozilla, we believe people should have stronger protections against this kind of tracking and spying, and that those protections should be easy to use.
Introducing built-in VPN
Our free built-in VPN is designed to make IP protection simple to use in Firefox.
The built-in VPN includes an unprecedented 50 GB per month of free VPN browsing, enough to cover everyday activities like shopping, banking, and reading.
Turn it on in Firefox with a single click. No extra apps. No downloads. Once it's on, Firefox routes your browsing traffic through a proxy network that replaces your IP address before it reaches a website. The sites you visit see the proxy's IP address rather than your own. Firefox already encrypts your traffic with HTTPS, but masking your IP adds another layer of privacy. You can also hide the URLs you're visiting from anyone trying to spy on your network traffic on public Wi-Fi, like while you're enjoying a latte at your favorite coffee shop.
If you reach the monthly limit, IP protection is paused until the next cycle. Firefox will ask you to confirm before proceeding without the VPN, so your browsing doesn't unintentionally continue without IP protection.
Browser-level protection and full-device protection
The free built-in VPN helps secure your traffic while browsing in Firefox, making it a simple way to protect your IP address from being tracked by big tech. However, it does not offer full-device protection.
For those looking for broader coverage, you can also choose protection that extends across your entire device, including other apps. The standalone Mozilla VPN subscription offers this capability with unlimited data across multiple devices. Depending on your needs, you can pick the level of privacy and protection that suits you.
We've heard concerns about so-called "free VPNs," which often rely on advertising or selling user data to generate revenue. Firefox's built-in VPN is designed differently. It does not sell your browsing data and does not inject advertising into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service.
The free built-in VPN is currently rolling out as a beta to Firefox desktop users in the United States, the United Kingdom, Germany, and France, with plans to expand to additional countries over the next several releases.
As with many Firefox features, we're introducing it gradually, starting in Firefox 149, so we can learn from user feedback and continue improving the experience.
Building a more private web
Protecting privacy online is an ongoing effort. As the web evolves, new technologies create both opportunities and challenges for keeping personal information safe.
Mozilla has spent years building privacy protections, from Total Cookie Protection to Private Browsing mode to anti-fingerprinting, directly into Firefox so people have more control over how they experience the web. This built-in VPN is one more way Firefox helps you browse with less exposure and more peace of mind.
By continuing to build these protections into Firefox, we aim to make the web safer, more transparent and more respectful of the people who use it.
WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we've done as part of the Firefox 149 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.
In Firefox 149, multiple WebDriver bugs were fixed by contributors:
Added the browser.setDownloadBehavior command, which lets clients allow or prohibit downloads, and also set a custom download folder. This behavior can be configured per session or per user context.
Updated the logic for applying settings to new browsing contexts, to make sure that when a browsing context is created with the window.open command, emulations, viewport overrides, and preload scripts are applied before the command returns.
Much of what we do on the web involves looking at more than one thing at a time: booking tickets while checking your calendar, taking notes as you go through a report, or comparing options before making a purchase.
The web is inherently multidimensional. For years, browsing this way meant bouncing back and forth between multiple open tabs, or spinning up multiple windows and using other tools to organize them side-by-side.
The new Split View feature makes these moments easier. It lets you place two tabs next to each other in the same Firefox window so you can see both at once and keep the context you need right in front of you.
Split View is available to all Firefox users starting with Firefox 149, rolling out on March 24. If you'd like to give it a go:
Make sure you've got the latest version of Firefox.
Right-click a tab and choose Add Split View. You can also select two tabs, right-click, and choose Open in Split View.
How the Firefox team uses Split View
The team behind Split View has been using it actively over the past few months, and a few workflows quickly stood out. Here are some of the ways people on our team have been using it:
Planning and comparing
Sometimes, you just need two things visible at once.
Gabriel: I've been using Split View to plan camping trips. I open a map on one side and a campsite booking page on the other. This makes it easy to explore locations and check availability without constantly switching tabs.
Everyday tasks
Split View is also helpful for small administrative tasks, the kind that involve copying information from one place to another.
Jonathan: I used Split View while filing my taxes. All my documents (W-2s and other forms) were online, so I kept them open on one side while filling things out on the FreeTaxUSA site on the other. Having both visible made the process much easier.
Note-taking
Ania: I often use Split View when reading and writing at the same time. I'll keep a PDF or article open on one side and take notes on the other as I go. Recently, I've been using this setup while preparing notes for my reading group. It helps me stay focused and quickly organize what I want to share.
What's next for Split View
We built Split View to support the way people naturally move through information on the web: comparing, referencing, and writing along the way. This first version focuses on making the most common side-by-side workflows easy.
If you try it, we'd love your feedback on how it fits into your day-to-day browsing and what would make it even more useful.
Don't remember why you have all those webpages open? Now you can leave yourself a note on any tab.
Tab Notes, our latest experimental feature in Firefox, is designed to help you remember, reflect, and pick up where you left off on the web by letting you attach a short note to a webpage.
Indicated by a sticky note icon and visible when hovering over tabs, Tab Notes remain connected to the page's URL until you delete them. Your notes are yours: they remain private and accessible only to you. Firefox stores them locally in your browser and doesn't send them to Mozilla.
Starting March 24, you can try Tab Notes by following these steps:
Go to Settings.
Navigate to Firefox Labs (or enter about:preferences#experimental in the address bar).
Tick the box beside Tab notes.
Now you're all set! Just right-click or hover over a tab and choose "Add Note" to create your first tab note.
This work is inspired by user research we conducted last year, which explored how people resume tasks after interruptions. One key insight we learned is that when we are interrupted, even a small reminder or message can significantly improve our ability to resume a task.
Many people use a variety of analog (e.g., sticky notes) and digital tools (e.g., note-taking apps) for these purposes as well, and Tab Notes are our exploration of that idea in a practical, lightweight way. These notes are easy to create, edit, and delete.
This is an early experiment, part of the Firefox Labs program. We are eager for feedback, which you can share on Mozilla Connect or by filing a ticket in Bugzilla.
Gecko matters because it ensures there's an independent voice shaping how the internet evolves. Without Gecko, the landscape would be dominated by Apple and Google alone.
From accessing information, communicating with others, shopping, working, learning, and entertainment, the vast majority of our time online is spent within a browser. While there are many browsers out there, there are only a few browser engines, the technology necessary to render the data that makes up the web as websites we can use.
Browser engines are among the most complex and consequential pieces of infrastructure on the modern internet. They determine how web standards are implemented, how security and privacy protections are enforced, and which actors ultimately shape the evolution of the web.
As the internet increasingly fragments into walled gardens, and as new technologies like artificial intelligence (AI) are integrated directly into browsers, the influence of browser engines is only growing. When innovation is built on a single dominant engine, it concentrates technical and economic power, narrows choice, and risks steering the web toward the priorities of a few large platforms rather than the public interest.
Gecko is Mozilla's browser engine that powers Firefox. It is one of only three widely used engines and the only independent browser engine. In other words, it is not governed by a company that also runs an operating system to distribute its own browser.
Why Browser Engines Matter
Browser engines (not to be confused with search engines) are the lesser-known technology powering your web browsers.
As the core software layer responsible for interpreting and rendering web content, browser engines play the fundamental role of turning HTML, CSS, and JavaScript into webpages users can interact with.
While browsers are user-facing products, engines are the layer where structural decisions about the web are made. Examples include privacy and security protections, performance characteristics, and the support of APIs. Browser engines are at the heart of the web.
Gecko and the Browser Monoculture
The browser engine landscape is highly concentrated. In 2013, there were five major browser engines. In 2026, only three are left: Apple's WebKit (which companies are required to use to build on iOS), Google's Blink, and Mozilla's Gecko. Gecko is the only remaining independent browser engine, and it powers Firefox.
When engine diversity declines, so does the practical ability to challenge dominant business models or introduce alternative implementations that can put users first through security, privacy, or other features.
There are only three major browser engines left: Apple's WebKit, Google's Blink, and Gecko from Mozilla. Apple's WebKit mainly runs on Apple devices, making Gecko the only cross-platform challenger to Blink.
This concentration increasingly risks hard-coding a single company's technical assumptions into the future of the web. Market pressures often turn standards-compliant but differing implementation choices into "bugs" that need fixing.
As both human and AI-driven browsing expand in use, choices about API implementation, data access, and security boundaries at the browser engine level become even more critical. A monoculture at the engine layer could extend to producing a monoculture in AI browsing experiences as well.
Maintaining an Independent Browser Engine Allows Mozilla to be More User-centric
Gecko, as an independent browser engine, tangibly allows Mozilla to build and operate in a way that is aligned with our mission: keeping the web open, secure, privacy-first, and accessible to everyone. It ensures that Mozilla is not only advocating for these principles but actively building the underlying infrastructure that makes them possible.
Through Gecko, we have the freedom to design and ship features based on what is best for users, rather than what is easiest or most profitable within another company's technology stack.
In practice, this enables us to:
Introduce privacy and security protections that go beyond industry defaults, such as strong cross-site tracking protections and anti-fingerprinting measures.
Experiment with new user interface designs and customization options that give people more control over how they use the web.
Build features that reflect Mozilla's mission-driven priorities, even when they diverge from dominant commercial models.
If a small number of vertically integrated companies (AI assistants, search, operating systems, ads) completely control browser engines, then competition, transparency, and user choice on the open web will be much harder to achieve. They will have strong incentives to favour their own services, limit interoperability, and steer defaults and standards to their advantage.
Maintaining an independent engine also lowers barriers for others. Newer entrants to the browser space can rely on interoperability as defined in specifications. If they are not building their own engine, building on Gecko can help sustain a more competitive browser ecosystem. Engine diversity at this foundational layer enables innovation, which is shaped by multiple actors and multiple visions, rather than it being dictated by a single dominant platform.
Browser Engine Plurality Ensures Tech is Built For People, Not Shareholders
In an era defined by platform consolidation and AI-driven change, browser engines can't be treated as invisible infrastructure. Independent engines like Gecko provide a structural counterbalance. Browser engine plurality is needed to ensure competition, transparency, and technology built for people, not shareholders.
As governments increasingly focus on security, resilience and sustainable growth, browser engine competition has a central role to play in avoiding single points of vulnerability or failure. Meaningful competition and a focus on open source approaches help ensure that economies are not locked into a single companyâs infrastructure and that governments, companies, and people retain real choice over where to build and how to optimize for their needs.
Mozilla has long engaged with policymakers and regulators on the importance of competition and openness at the browser and engine layer. As the web and broader technology landscape continue to evolve, especially in the face of AI, we will continue to advance policies that protect engine diversity, promote fair competition, and ensure the web evolves in the public interest.
The error arises from desugaring push_message to a call that references private fields:
MessageProcessor::push_message(
    &mut mp.{messages}, // -------- not nameable here
    format!("Hi"),
)
I proposed we could add a lint to avoid this situation.
But an alternative was proposed where we would say that, when we introduce an auto-ref, if the callee references local variables not visible from this point in the program, we just borrow the entire struct rather than borrowing specific fields.
So then we would desugar to:
MessageProcessor::push_message(
    &mut mp, // -- borrow the whole struct
    format!("Hi"),
)
If we then say that &mut MessageProcessor is coercible to a &mut MessageProcessor.{messages}, then the call would be legal.
Interestingly, the autoderef loop already considers visibility: if you do a.foo, we will deref until we see a foo field visible to you at the current point.
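If I understand that behaviour correctly, it is observable today with Deref; the types below are hypothetical, invented just to demonstrate it.

mod inner {
    use std::ops::Deref;

    #[allow(dead_code)]
    pub struct Outer {
        foo: &'static str, // private: not visible outside this module
        pub inner: Inner,
    }

    pub struct Inner {
        pub foo: &'static str,
    }

    impl Deref for Outer {
        type Target = Inner;
        fn deref(&self) -> &Inner {
            &self.inner
        }
    }

    pub fn make() -> Outer {
        Outer { foo: "private", inner: Inner { foo: "public" } }
    }
}

fn main() {
    let outer = inner::make();
    // Outside `inner`, the private `Outer::foo` is not visible, so autoderef
    // keeps going and resolves the access to the visible `Inner::foo`.
    assert_eq!(outer.foo, "public");
}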
Oh, and a side note: assigning
This raises an interesting question I did not discuss. What happens when you write a value of a type like MessageProcessor.{messages}?
What I expect is that this would just overwrite the selected fields (messages, in this case) and leave the other fields untouched.
The basic idea is that a type MessageProcessor.{messages} indicates that the messages field is initialized and accessible and the other fields must be completely ignored.
Another possible future extension: moved values
This represents another possible future extension. Today if you move out of a field in a struct, then you can no longer work with the value as a whole:
impl MessageProcessor {
    fn example(mut self) {
        // move from self.statistics
        std::mem::drop(self.statistics);

        // now I cannot call this method,
        // because I can't borrow `self`:
        self.push_message(format!("Hi again"));
    }
}
But with selective borrowing, we could allow this, and you could even return "partially initialized" values:
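Something like this, sketched in the proposed syntax (so it is not valid Rust today, and the method name is invented):

impl MessageProcessor {
    fn into_messages(mut self) -> MessageProcessor.{messages} {
        // Move out of `statistics`...
        std::mem::drop(self.statistics);
        // ...and return the rest: only `messages` is still initialized.
        self
    }
}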
I wanted to see what the generated try configuration would be for a new preset I made, and did this by submitting real try pushes (kept empty so they don't actually consume resources). I had been searching the help files for "dry run", but I recently discovered the option to be --no-push.
It was one of those "ah ha!" moments for me when I finally used it. Chris Krycho covers the concept of megamerges with this diagram:
       m --- n
      /       \
a -- b -- c -- [merge] -- [wip]
      \       /
       w --- x
I've found a more realistic example that best relates to my natural workflow: implementing feature (A) benefitted from having the changes of another tooling patch upgrade (B), which led to discovering and fixing a bug (C).
        (B)
      m ----- n
     /         \   (A)
a -- b --------- [merge] --- y -- z
      \                   \             (C)
       ------------------- [merge] -- w -- x
In this case, separating these into distinct streams of work is quite logical, but we also don't need to leave them unlinked, so that they can benefit from each other.
This is what my jj log ended up looking like:
@  oppmsuvz jxxxxxxxxxxxx@gmail.com 2026-03-22 00:34:10 firefox@ 05259417
│  Bug xxxxxxx - Simplify the tests
○  ultowtnr jxxxxxxxxxxxx@gmail.com 2026-03-22 00:34:04 100c4cce
│  Bug xxxxxxx - Include private flag in ShareData
○    lorusmuo jxxxxxxxxxxxx@gmail.com 2026-03-21 20:19:30 905b0460
├─╮  (empty) (no description set)
│ ○  sumqskuu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:22:00 92f6028b
│ │  Add a new secret settings fragment
│ ○  oylmprpu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:22:00 18931825
│ │  Create a new feature for receiving and sending commands.
│ ○  xrnnoonu jxxxxxxxxxxxx@gmail.com 2026-03-21 04:21:48 618020c7
├─╮  (empty) (no description set)
│ ○  rqlyqqzx jxxxxxxxxxxxx@gmail.com 2026-03-19 17:20:20 c9b5323c
│ │  Bug xxxxxxx - Part 2: Create new android gradle module skill
│ ○  txvozpwz jxxxxxxxxxxxx@gmail.com 2026-03-19 17:20:13 cee18510
├─╯  Bug xxxxxxx - Part 1: Add new gradle example module
○  pwsnmryn vxxxxxxxxxxxx@gmail.com 2026-03-18 13:21:47 main@origin fa20ce29
│  Bug xxxxxxx - Make my feature work for everyone
~
When I need to submit these, moz-phab has support for specifying revset ranges with moz-phab start_rev end_rev. However, I can also use jj rebase -s <rev> -d main@origin to put out some try pushes to validate that they still work separately; so far, no conflicts in this step.
This blog post describes a maximally minimal proposal for view types. It comes out of a conversation at RustNation I had with lcnr and Jack Huey, where we talked about various improvements to the language that are "in the ether", that basically everybody wants to do, and what it would take to get them over the line.
Example: MessageProcessor
Let's start with a simple example. Suppose we have a struct MessageProcessor which gets created with a set of messages. It will process them and, along the way, gather up some simple statistics:
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive] // Not relevant to the example, just good practice!
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}
The basic workflow for a message processor is that you
accumulate messages by pushing them into the self.messages vector
drain and process those accumulated messages in a batch
The function to process a single message takes ownership of the message string because it will send it to another thread. Before doing so, it updates the statistics:
impl MessageProcessor {
    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
Draining the accumulated messages
The final function you need is one that will drain the accumulated messages and process them. Writing this ought to be straightforward, but it isn't:
impl MessageProcessor {
    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }
}
The problem is that self.messages.drain(..) takes a mutable borrow on self.messages. When you call self.process_message, the compiler assumes you might modify any field, including self.messages. It therefore reports an error. This is logical, but frustrating.
Experienced Rust programmers know a number of workarounds. For example, you could swap the messages field for an empty vector, as shown below. Or you could invoke self.messages.pop(). Or you could rewrite process_message to be a method on the Statistics type. But all of them are, let's be honest, suboptimal. The code above is really quite reasonable; it would be nice if you could make it work in a straightforward way, without needing to restructure it.
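For concreteness, here is the first of those workarounds, spelled out with std::mem::take (this is my illustration, not code from the proposal; it slots into the MessageProcessor impl defined above and compiles today):

impl MessageProcessor {
    pub fn process_pushed_messages_workaround(&mut self) {
        // Temporarily move the vector out of `self`, leaving an empty one
        // behind, so the loop no longer borrows `self.messages`.
        let messages = std::mem::take(&mut self.messages);
        for message in messages {
            self.process_message(message); // OK: `self` is not otherwise borrowed
        }
    }
}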
What's needed: a way for the borrow checker to know which fields a method may access
The core problem is that the borrow checker does not know that process_message will only access the statistics field. In this post, I'm going to focus on an explicit, and rather limited, notation, but I'll also talk about how we might extend it in the future.
View types extend struct types with a list of fields
The basic idea of a view type is to extend the grammar of a struct type to optionally include a list of accessible fields:
Type := StructName<...>
      | StructName<...> { .. }           // <-- what we are adding
      | StructName<...> { (fields),* }   // <-- what we are adding
A type like MessageProcessor { statistics } would mean "a MessageProcessor struct where only the statistics field can be accessed". You could also include a .., like MessageProcessor { .. }, which would mean that all fields can be accessed; this is equivalent to today's struct type MessageProcessor.
View types respect privacy
View types would respect privacy, which means you could only write MessageProcessor { messages } in a context where you can name the field messages in the first place.
View types can be named on self arguments and elsewhere
You could use this to define that process_message only needs to access the field statistics:
implMessageProcessor{fnprocess_message(&mutself{statistics},message: String){// ----------------------
// Shorthand for: `self: &mut MessageProcessor {statistics}`
// ... as before ...
}}
Of course you could use this notation in other arguments as well:
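For example (a hypothetical free function of my own, written in the proposed syntax, so it does not compile today):

fn merge_statistics(target: &mut Statistics, source: &MessageProcessor {statistics}) {
    // `source` only grants access to `source.statistics`, so a caller could
    // still hold a live mutable borrow of `source.messages`.
    target.message_count += source.statistics.message_count;
    target.total_bytes += source.statistics.total_bytes;
}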
We would also extend borrow expressions so that it is possible to specify precisely which fields will be accessible from the borrow:
let messages = &mut some_variable {messages}; // Ambiguous grammar? See below.
When you do this, the borrow expression produces a value of type &mut MessageProcessor {messages}.
Sharp-eyed readers will note that this is ambiguous. The above could be parsed today as a borrow of a struct expression like some_variable { messages } or, more verbosely, some_variable { messages: messages }. I'm not sure what to do about that. I'll note some alternative syntaxes below, but I'll also note that it would be possible for the compiler to parse the AST in an ambiguous fashion and disambiguate later on, once name resolution results are known.
We automatically introduce view borrows in an auto-ref
In our example, though, the user never writes the &mut borrow explicitly. It results from the auto-ref added by the compiler as part of the method call:
pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        self.process_message(message); // <-- auto-ref occurs here
    }
}
The compiler internally rewrites method calls like self.process_message(message) to fully qualified form based on the signature declared in process_message. Today that results in code like this:
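MessageProcessor::process_message(
    &mut self, // <-- the auto-ref inserted by the compiler
    message,
)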
Integrating views into the borrow checker is fairly trivial. The way the borrow checker works is that, when it sees a borrow expression, it records a "loan" internally that tracks the place that was borrowed, the way it was borrowed (mut, shared), and the lifetime for which it was borrowed. All we have to do is to record, for each borrow using a view, multiple loans instead of a single loan.
For example, if we have &mut self, we would record one mut-loan of self. But if we have &mut self {field1, field2}, we would record two mut-loans, one of self.field1 and one of self.field2.
Example: putting it all together
OK, let's put it all together. This was our original example, collected:
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }

    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }

    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
Today, process_pushed_messages results in an error:
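The diagnostic is roughly the following (abbreviated; exact spans and wording depend on the compiler version):

error[E0499]: cannot borrow `*self` as mutable more than once at a time
  |
  | for message in self.messages.drain(..) {
  |                ------------------------ first mutable borrow occurs here
  |     self.process_message(message);
  |     ^^^^ second mutable borrow occurs here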
The error arises from a conflict between two borrows:
self.messages.drain(..) desugars to Vec::drain(&mut self.messages, ..) which, as you can see, mut-borrows self.messages;
then self.process_message(..) desugars to MessageProcessor::process_message(&mut self, ..) which, as you can see, mut-borrows all of self, which overlaps self.messages.
But in the "brave new world", we'll modify the program in one place:
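impl MessageProcessor {
    fn process_message(&mut self {statistics}, message: String) {
        //             ---------------------- the only change: declare the view
        // ... as before ...
    }
}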
and as a result, the process_pushed_messages function will now borrow check successfully. This is because the two loans are now issued for different places:
as before, self.messages.drain(..) desugars to Vec::drain(&mut self.messages, ..) which mut-borrows self.messages;
but now, self.process_message(..) desugars to MessageProcessor::process_message(&mut self {statistics}, ..) which mut-borrows self.statistics, which doesn't overlap self.messages.
At runtime, this is still just a pointer
One thing I want to emphasize is that "view types" are a purely static construct and do not change how things are compiled. They simply give the borrow checker more information about what data will be accessed through which references. The process_message method, for example, still takes a single pointer to self.
This is in contrast with the workarounds that exist today. For example, if I were writing the above code, I might well rewrite process_message into an associated fn that takes a &mut Statistics:
impl MessageProcessor {
    fn process_message(statistics: &mut Statistics, message: String) {
        statistics.message_count += 1;
        statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
This would be annoying, of course, since I'd have to write Self::process_message(&mut self.statistics, ..) instead of self.process_message(), but it would avoid the borrow check error.
Beyond being annoying, it would change the way the code is compiled. Instead of taking a reference to the MessageProcessor, it now takes a reference to the Statistics.
In this example, the change from one type to another is harmless, but there are other examples where you need access to multiple fields, in which case it is less efficient to pass them individually.
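For instance, if a helper needed both fields, today's workaround would have to pass two separate references where a view type could keep a single pointer (a hypothetical sketch, not part of the running example):

fn process_all(messages: &mut Vec<String>, statistics: &mut Statistics) {
    // two pointers passed at runtime; with a view type such as
    // `&mut self {messages, statistics}` this would remain one pointer
}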
Frequently asked questions
How hard would this be to implement?
Honestly, not very hard. I think we could ship it this year if we found a good contributor who wanted to take it on.
What about privacy?
I would require that the fields that appear in view types are "visible" to the code that is naming them (this includes view types that are inserted via auto-ref). So the following would be an error:
mod m {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        ...
    }

    impl MessageProcessor {
        pub fn process_message(&mut self {messages}, message: String) {
            //                 ----------
            // It's *legal* to reference a private field here, but it
            // results in a lint, just as it is currently *legal*
            // (but linted) for a public method to take an argument of
            // private type. The lint is because doing this is effectively
            // going to make the method uncallable from outside this module.
            self.messages.push(message);
        }
    }
}

fn main() {
    let mut mp = m::MessageProcessor::default();
    mp.process_message(format!("Hello, world!"));
    //                 --------------- ERROR: field `messages` is not accessible here
    //
    // This desugars to:
    //
    // ```
    // MessageProcessor::process_message(
    //     &mut mp {messages}, // <-- names a private field!
    //     format!("Hello, world!"),
    // )
    // ```
    //
    // which names the private field `messages`. That is an error.
}
Does this mean that view types can't be used in public methods?
More-or-less. You can use them if the view types reference public fields:
#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub average_bytes: usize,
    // ... maybe more fields will be added later ...
}

impl Statistics {
    pub fn total_bytes(&self {message_count, average_bytes}) -> usize {
        //             ----------------------------
        // Declare that we only read these two fields.
        self.message_count * self.average_bytes
    }
}
Won't it be limiting that view types more-or-less only work for private methods?
Yes! But it's a good starting point. And my experience is that this problem occurs most often with private helper methods like the one I showed here. It can occur in public contexts, but much more rarely, and in those circumstances it's often more acceptable to refactor the types to better expose the groupings to the user. This doesn't mean I don't want to fix the public case too, it just means it's a good use-case to cut from the MVP. In the future I would address public fields via abstract fields, as I described in the past.
What if I am borrowing the same sets of fields over and over? That sounds repetitive!
That's true! It will be! I think in the future I'd like to see some kind of "ghost" or "abstract" fields, like I described in my abstract fields blog post. But again, that seems like a "post-MVP" sort of problem to me.
Must we specify the field sets being borrowed explicitly? Can't they be inferred?
In the syntax I described, you have to write &mut place {field1, field2} explicitly. But there are many approaches in the literature to inferring this sort of thing, with row polymorphism perhaps being the most directly applicable. I think we could absolutely introduce this sort of inference, and in fact I'd probably make it the default, so that &mut place always introduces a view type, but it is typically inferred to "all fields" in practice. But that is a non-trivial extension to Rust's inference system, introducing a new kind of inference we don't do today. For the MVP, I think I would just lean on auto-ref covering by far the most common case, and have explicit syntax for the rest.
Man, I have to write the fields that my method uses in the signature? That sucks! It should be automatic!
I get that for many applications, particularly with private methods, writing out the list of fields that will be accessed seems a bit silly: the compiler ought to be able to figure it out.
On the flip side, this is the kind of inter-procedural inference we try to avoid in Rust, for a number of reasons:
it introduces dependencies between methods, which makes inference more difficult (even undecidable, in extreme cases);
it makes for "non-local errors" that can be really confusing as a user, where modifying the body of one method causes errors in another (think of the confusion we get around futures and Send, for example);
it makes the compiler more complex, and we would not be able to parallelize as easily (not that we parallelize today, but that work is underway!).
The bottom line for me is one of staging: whatever we do, I think we will want a way to be explicit about exactly what fields are being accessed and where. Therefore, we should add that first. We can add the inference later on.
Why does this need to be added to the borrow checker? Why not desugar?
Another common alternative (and one I considered for a while...) is to add some kind of "desugaring" that passes references to fields instead of a single reference. I don't like this for two reasons. One, I think it's frankly more complex! This is a fairly straightforward change to the borrow checker, but that desugaring would leave code all over the compiler, and it would make diagnostics etc. much more complex.
But second, it would require changes to what happens at runtime, and I don't see why that is needed in this example. Passing a single reference feels right to me.
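Concretely, such a desugaring would rewrite the call site to pass field references directly, along these lines (a sketch of the rejected alternative, not of the proposal):

// `self.process_message(message)` with a `{statistics}` view would become:
MessageProcessor::process_message(&mut self.statistics, message);
// a different pointer is passed at runtime, unlike the view-type approach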
What about the ambiguous grammar? What other syntax options are there?
Oh, right, the ambiguous grammar. To be honest I've not thought too deeply about the syntax. I was trying to have the type Struct { field1, field2 } reflect struct constructor syntax, since we generally try to make types reflect expressions, but of course that leads to the ambiguity in borrow expressions that causes the problem:
let foo = &mut some_variable {field1};
//                            ------ is this a variable or a field name?
Options I see:
Make it work. It's not truly ambiguous, but it does require some semantic disambiguation, i.e., in at least some cases, we have to delay resolving this until name resolution can complete. That's unusual for Rust. We do it in some small areas, most notably around the interpretation of a pattern like None (is it a binding to a variable None or an enum variant?).
New syntax for borrows only. We could keep the type syntax but make the borrow syntax different, maybe &mut {field1} in some_variable or something. Given that you would rarely type the explicit borrow form, that seems good?
Some new syntax altogether. Perhaps we want to try something different, or introduce a keyword everywhere? I'd be curious to hear options there. The current one feels nice to me but it occupies a "crowded syntactic space", so I can see it being confusing to readers who won't be sure how to interpret it.
Conclusion: this is a good MVP, let's ship it!
In short, I don't really see anything blocking us from moving forward here, at least with a lang experiment.