My name is Marcelo Poli. I live in Argentina, and I speak Spanish and English. I started contributing to Mozilla localization with Phoenix 0.3 — 24 years ago.
Mozilla Localization Journey
Q: How did you first get involved in localizing Mozilla products?
A: There was a time when alternative browsers were incompatible with many websites. “Best with IE” appeared everywhere. Then Mozilla was reborn with Phoenix. It was just the browser — unlike Mozilla Suite (the old name for SeaMonkey) — and it was the best option.
At first, it was only available in English, so I searched and found an opportunity to localize my favorite browser. There was already some Spanish localization work for the Suite, and that became the base for my own. It took me two releases to complete it, and Phoenix 0.3 shipped with a full language pack — the first Spanish localization in Phoenix history.
The most amazing part was that Mozilla let me do it.
Q: Do you have a favorite product? Do you use the ones you localize regularly?
A: Firefox is always my favorite. Thunderbird comes second — it’s the simplest and most powerful email software. Firefox has been my default browser since the Phoenix era, and since many Mozilla products are connected, working on one often makes you want to contribute to others as well.
Q: What moments stand out from your localization journey?
A: Being part of the Firefox 1.0 release was incredible. The whole world was talking about the new browser, and my localization was part of it.
Another unforgettable moment was seeing my name — along with hundreds of others — on the Mozilla Monument in San Francisco.
Q: Have you shared your work with family and friends?
A: Yes. I usually say, “Try this, it’s better,” and many times they agree. Sometimes I have to explain the concept of free software. When they say, “But I didn’t pay for the other browsers,” I use the classic explanation: “Free as in freedom and free as in free beer.”
I wear Mozilla T-shirts, but I don’t brag about managing the Argentinian localization. Still, some tech-savvy friends have found my name in the credits.
Community & Collaboration
Q: How does the Argentinian localization community work together today?
Marcelo (right) with fellow Argentinian Mozillians Gabriela and Guillermo
A: In the beginning, the Suite localization, Firefox localization, and the Argentinian community were separate. Mozilla encouraged us to join forces, and I eventually became the l10n manager. The community has grown and shrunk over time. Right now it’s smaller, but localization remains the most active part, keeping products up to date. We stay in touch through an old mailing list, Matrix, and direct messages. I’ve also participated in many community events, although living far from Buenos Aires limits how often I can attend.
Q: How do you coordinate translation, review, and testing?
A: We’re a small group, which actually makes coordination easier. Since we contribute in our free time, even small contributions matter, and three people can approve strings at any time.
We test using Nightly as our main browser. Priorities are set in Pontoon — once the five-star products are complete, we move on to others. Usually, the number of untranslated strings is small, so it’s manageable.
Q: How has your role evolved over time?
A: The old Mozilla folks — the “original cast,” you could say — were essential in the early days. Before collaborative tools existed, I explained DTD and properties file structures to others. Some contributors had strong language skills but less technical background.
Since the Phoenix years, I’ve been responsible for es-AR localization. At first, I worked alone; later others joined. Today, I hold the manager title in Pontoon. As Uncle Ben once said, “With great power comes great responsibility,” so I check Pontoon daily.
Q: What best practices would you share with other localizers?
A: Pontoon is easy to use. The key is respecting terminology and staying consistent across the localization.
If you find a typo or a better phrasing, suggest it directly in Pontoon. You don’t need to contact a manager, and it doesn’t matter how small the change is. Every contribution matters — even if it isn’t approved.
Professional Background & Skills
Q: What is your professional background, and how has it helped your localization work?
A: I studied programming, so I understand software structure and how it works. That helped a lot in the early days when localization required editing files directly — especially dealing with encoding and file structure.
Knowledge of web development also helped with Developer Tools strings, and as a heavy user, I’m familiar with the terminology for almost everything you can do in software.
Q: What have you gained beyond translation?
A: Mozilla allows you to be part of something global — meeting people from different countries and learning how similar or different we are. Through community events and hackathons, I learned how to collaborate internationally. As a side effect, I became more fluent speaking English face to face than I expected.
Q: After so many years, what keeps you motivated?
A: My main motivation is being able to use Mozilla products in my own language. Mozilla is unique in having four Spanish localizations. Most projects offer only one for all Spanish-speaking countries — or at best, one for Spain and one for Latin America.
I’m not the most social person in the community, so recruiting isn’t really my role. The best way I motivate others is simply by continuing to work on the projects. Many years ago, I contributed a few strings to Ubuntu localization — maybe they’re still there.
Fun Facts
Marcelo as a DJ
I was a radio DJ for many years — sometimes just playing music, sometimes talking about it.
Paraphrasing Sting, I was born in the ’60s and witnessed the first home computers like Texas Instruments and Commodore. My first personal computer was pre-Windows, with text-based screens, and I used Netscape Navigator on dial-up.
I still prefer a big screen over a cellphone and mechanical keyboards over on-screen ones. These days, I’m learning how to build mobile apps.
In my previous Dada blog post, I talked about how Dada enables composable sharing. Today I’m going to start diving into Dada’s permission system; permissions are Dada’s equivalent to Rust’s borrow checker.
Goal: richer, place-based permissions
Dada aims to exceed Rust’s capabilities by using place-based permissions. Dada lets you write functions and types that capture both a value and things borrowed from that value.
As a fun example, imagine you are writing some Rust code to process a comma-separated list, just looking for entries of length 5 or more:
let list: String = format!("...something big, with commas...");
let items: Vec<&str> = list
    .split(",")
    .map(|s| s.trim()) // strip whitespace
    .filter(|s| s.len() > 5)
    .collect();
One of the cool things about Rust is how this code looks a lot like some high-level language like Python or JavaScript, but in those languages the split call is going to be doing a lot of work, since it will have to allocate tons of small strings, copying out the data. But in Rust the &str values are just pointers into the original string and so split is very cheap. I love this.
On the other hand, suppose you want to package up some of those values, along with the backing string, and send them to another thread to be processed. You might think you can just make a struct like so…
struct Message {
    list: String,
    items: Vec<&str>,
    //        ----
    //        goal is to hold a reference
    //        to strings from `list`
}
…and then create the list and items and store them into it:
let list: String = format!("...something big, with commas...");
let items: Vec<&str> = /* as before */;
let message = Message { list, items };
//                      ----
//                      |
//                      This *moves* `list` into the struct.
//                      That in turn invalidates `items`, which
//                      is borrowed from `list`, so there is no
//                      way to construct `Message`.
But as experienced Rustaceans know, this will not work. When you have borrowed data like an &str, the data it borrows from cannot be moved. If you want to handle a case like this, you need to stop using &str and instead send indices, owned strings, or some other workaround. Argh!
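The usual workaround looks something like the following sketch: convert the borrowed slices into owned Strings before packaging them up (the struct and function names here are illustrative, not from any particular library).

```rust
// A Message that owns everything, so it can be moved freely
// (e.g. sent to another thread).
struct Message {
    list: String,
    items: Vec<String>, // owned copies instead of `&str` borrows
}

fn build_message() -> Message {
    let list: String = format!("...something big, with commas...");
    let items: Vec<String> = list
        .split(",")
        .map(|s| s.trim())
        .filter(|s| s.len() > 5)
        .map(|s| s.to_owned()) // the extra copies Rust forces on us
        .collect();
    Message { list, items }
}
```

This compiles because `items` no longer borrows from `list` — but at the cost of exactly the small-string allocations that the `&str` version was so good at avoiding.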
Dada’s permissions use places, not lifetimes
Dada does things a bit differently. The first thing is that, when you create a reference, the resulting type names the place that the data was borrowed from, not the lifetime of the reference. So the type annotation for items would say ref[list] String1 (at least, if you wanted to write out the full details rather than leaving it to the type inferencer):
let list: given String = "...something big, with commas..."
let items: given Vec[ref[list] String] = list
.split(",")
.map(_.trim()) // strip whitespace
.filter(_.len() > 5)
// ------- I *think* this is the syntax I want for closures?
// I forget what I had in mind, it's not implemented.
.collect()
I’ve blogged before about how I would like to redefine lifetimes in Rust to be places as I feel that a type like ref[list] String is much easier to teach and explain: instead of having to explain that a lifetime references some part of the code, or what have you, you can say that “this is a String that references the variable list”.
But what’s also cool is that named places open the door to more flexible borrows. In Dada, if you wanted to package up the list and the items, you could build a Message type like so:
class Message(
list: String
items: Vec[ref[self.list] String]
// ---------
// Borrowed from another field!
)
// As before:
let list: String = "...something big, with commas..."
let items: Vec[ref[list] String] = list
.split(",")
.map(_.strip()) // strip whitespace
.filter(_.len() > 5)
.collect()
// Create the message, this is the fun part!
let message = Message(list.give, items.give)
Note that last line – Message(list.give, items.give). We can create a new Message and move list into it along with items, which borrows from list. Neat, right?
OK, so let’s back up and talk about how this all works.
References in Dada are the default
Let’s start with syntax. Before we tackle the Message example, I want to go back to the Character example from previous posts, because it’s a bit easier for explanatory purposes. Here is some Dada code that declares a class Character, creates an owned value of it, and then gets a few references into it.
class Character(
name: String,
klass: String,
hp: u32,
)
let ch: Character = Character("Tzara", "Dadaist", 22)
let p: ref[ch] Character = ch
let q: ref[p] String = p.name
The first thing to note is that, in Dada, the default when you name a variable or a place is to create a reference. So let p = ch doesn’t move ch, as it would in Rust, it creates a reference to the Character stored in ch. You could also explicitly write let p = ch.ref, but that is not preferred. Similarly, let q = p.name creates a reference to the value in the field name. (If you wanted to move the character, you would write let ch2 = ch.give, not let ch2 = ch as in Rust.)
Notice that I said let p = ch “creates a reference to the Character stored in ch”. In particular, I did not say “creates a reference to ch”. That’s a subtle choice of wording, but it has big implications.
References in Dada are not pointers
The reason I wrote that let p = ch “creates a reference to the Character stored in ch” and not “creates a reference to ch” is because, in Dada, references are not pointers. Rather, they are shallow copies of the value, very much like how we saw in the previous post that a shared Character acts like an Arc<Character> but is represented as a shallow copy.
Clearly, the Dada representation takes up more memory on the stack. But note that it doesn’t duplicate the memory in the heap, which tends to be where the vast majority of the data is found.
Dada talks about values not references
This gets at something important. Rust, like C, makes pointers first-class. So given x: &String, x refers to the pointer and *x refers to its referent, the String.
Dada, like Java, goes another way. x: ref String is a String value – including in its memory representation! The difference between a given String, a shared String, and a ref String is not in their memory layout – all of them are the same – but in whether they own their contents.2
So in Dada, there is no *x operation to go from “pointer” to “referent”. That doesn’t make sense. Your variable always contains a string, but the permissions you have to use that string will change.
In fact, the goal is that people don’t have to learn the memory representation as they learn Dada, you are supposed to be able to think of Dada variables as if they were all objects on the heap, just like in Java or Python, even though in fact they are stored on the stack.3
Rust does not permit moves of borrowed data
In Rust, you cannot move values while they are borrowed. So code like this, which moves ch into ch1 while name is still borrowed from ch, only compiles if name is never used again:

let ch = Character { ... };
let name = &ch.name; // create reference
let ch1 = ch;        // ERROR: cannot move while borrowed
let name1 = name;    // use reference again
…but Dada can
There are two reasons that Rust forbids moves of borrowed data:
References are pointers, so those pointers may become invalidated. In the example above, name points to the stack slot for ch, so if ch were to be moved into ch1, that makes the reference invalid.
The type system would lose track of things. Internally, the Rust borrow checker has a kind of “indirection”. It knows that ch is borrowed for some span of the code (a “lifetime”), and it knows that the lifetime in the type of name is related to that lifetime, but it doesn’t really know that name is borrowed from ch in particular.4
Neither of these apply to Dada:
Because references are not pointers into the stack, but rather shallow copies, moving the borrowed value doesn’t invalidate them. They remain valid.
Because Dada’s types reference actual variable names, we can modify them to reflect moves.
Dada tracks moves in its types
OK, let’s revisit that Rust example that was giving us an error. When we convert it to Dada, we find that it type checks just fine:
class Character(...) // as before
let ch: given Character = Character(...)
let name: ref[ch.name] String = ch.name
// -- originally it was borrowed from `ch`
let ch1 = ch.give
// ------- but `ch` was moved to `ch1`
let name1: ref[ch1.name] String = name
// --- now it is borrowed from `ch1`
Woah, neat! We can see that when we move from ch into ch1, the compiler updates the types of the variables around it. So actually the type of name changes to ref[ch1.name] String. And then when we move from name to name1, that’s totally valid.
In PL land, updating the type of a variable from one thing to another is called a “strong update”. Obviously things can get a bit complicated when control-flow is involved, e.g., in a situation like this:
let ch = Character(...)
let ch1 = Character(...)
let name = ch.name
if some_condition_is_true() {
// On this path, the type of `name` changes
// to `ref[ch1.name] String`, and so `ch`
// is no longer considered borrowed.
ch1 = ch.give
ch = Character(...) // not borrowed, we can mutate
} else {
// On this path, the type of `name`
// remains unchanged, and `ch` is borrowed.
}
// Here, the types are merged, so the
// type of `name` is `ref[ch.name, ch1.name] String`.
// Therefore, `ch` is considered borrowed here.
Renaming lets us call functions with borrowed values
OK, let’s take the next step. Let’s define a Dada function that takes an owned value and another value borrowed from it, like the name, and then call it:
fn character_and_name(
ch1: given Character,
name1: ref[ch1] String,
) {
// ... does something ...
}
We could call this function like so, as you might expect:
let ch = Character(...)
let name = ch.name
character_and_name(ch.give, name)
So…how does this work? Internally, the type checker type-checks a function call by creating a simpler snippet of code, essentially, and then type-checking that. It’s like desugaring but only at type-check time. In this simpler snippet, there are a series of let statements to create temporary variables for each argument. These temporaries always have an explicit type taken from the method signature, and they are initialized with the values of each argument:
// type checker "desugars" `character_and_name(ch.give, name)`
// into more primitive operations:
let tmp1: given Character = ch.give
// --------------- -------
// | taken from the call
// taken from fn sig
let tmp2: ref[tmp1.name] String = name
// --------------------- ----
// | taken from the call
// taken from fn sig,
// but rewritten to use the new
// temporaries
If this type checks, then the type checker knows you have supplied values of the required types, and so this is a valid call. Of course there are a few more steps, but that’s the basic idea.
Notice what happens if you supply data borrowed from the wrong place:
let ch = Character(...)
let ch1 = Character(...)
character_and_name(ch, ch1.name)
// --- wrong place!
This will fail to type check because you get:
let tmp1: given Character = ch.give
let tmp2: ref[tmp1.name] String = ch1.name
// --------
// has type `ref[ch1.name] String`,
// not `ref[tmp1.name] String`
Class constructors are “just” special functions
So now, if we go all the way back to our original example, we can see how the Message example worked:
class Message(
list: String
items: Vec[ref[self.list] String]
)
Basically, when you construct a Message(list, items), that’s “just another function call” from the type system’s perspective, except that self in the signature is handled carefully.
This is modeled, not implemented
I should be clear, this system is modeled in the dada-model repository, which implements a kind of “mini Dada” that captures what I believe to be the most interesting bits. I’m working on fleshing out that model a bit more, but it’s got most of what I showed you here.5 For example, here is a test that you get an error when you give a reference to the wrong value.
The “real implementation” is lagging quite a bit, and doesn’t really handle the interesting bits yet. Scaling it up from model to real implementation involves solving type inference and some other thorny challenges, and I haven’t gotten there yet – though I have some pretty interesting experiments going on there too, in terms of the compiler architecture.6
This could apply to Rust
I believe we could apply most of this system to Rust. Obviously we’d have to rework the borrow checker to be based on places, but that’s the straightforward part. The harder bit is that &T is a pointer in Rust, and we cannot readily change that. However, for many use cases of self-references, this isn’t as important as it sounds. Often, the data you wish to reference lives in the heap, and so the pointer isn’t actually invalidated when the original value is moved.
Consider our opening example. You might imagine Rust allowing something like this:
In this case, the str data is heap-allocated, so moving the string doesn’t actually invalidate the &str value (it would invalidate an &String value, interestingly).
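You can see that heap stability concretely in today’s Rust: moving a String moves only the (pointer, length, capacity) triple on the stack, while the heap buffer an &str would point into stays put. A small sketch:

```rust
/// Returns the heap buffer address of a String before and after a move.
/// Moving the String copies its stack triple to a new location, but the
/// heap allocation it points at never changes.
fn addr_before_after() -> (*const u8, *const u8) {
    let list: String = String::from("...something big, with commas...");
    let before = list.as_ptr(); // address of the heap buffer
    let moved = list;           // move: only the stack triple moves
    let after = moved.as_ptr(); // same heap allocation
    (before, after)
}
```

An &str into that buffer would still be valid after the move – the borrow checker just has no way to know that today.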
In Rust today, the compiler doesn’t know all the details of what’s going on. String has a Deref impl and so it’s quite opaque whether str is heap-allocated or not. But we are working on various changes to this system in the Beyond the & goal, most notably the Field Projections work. There is likely some opportunity to address this in that context, though to be honest I’m behind in catching up on the details.
I’ll note in passing that Dada unifies str and String into one type as well. I’ll talk in detail about how that works in a future blog post. ↩︎
This is kind of like C++ references (e.g., String&), which also act “as if” they were a value (i.e., you write s.foo(), not s->foo()), but a C++ reference is truly a pointer, unlike a Dada ref. ↩︎
This goal was in part inspired by a conversation I had early on within Amazon, where a (quite experienced) developer told me, “It took me months to understand what variables are in Rust”. ↩︎
As a teaser, I’m building it in async Rust, where each inference variable is a “future” and use “await” to find out when other parts of the code might have added constraints. ↩︎
This post is an expanded version of a presentation I gave at the 2025 WebAssembly CG meeting in Munich.
WebAssembly has come a long way since its first release in 2017. The first version of WebAssembly was already a great fit for low-level languages like C and C++, and immediately enabled many new kinds of applications to efficiently target the web.
The additions since then have allowed many more languages to efficiently target WebAssembly. There’s still more important work to do, like stack switching and improved threading, but WebAssembly has narrowed the gap with native in many ways.
Yet, it still feels like something is missing that’s holding WebAssembly back from wider adoption on the Web.
There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web. For all of the new language features, WebAssembly is still not integrated with the web platform as tightly as it should be.
This leads to a poor developer experience, which pushes developers to only use WebAssembly when they absolutely need it. Oftentimes JavaScript is simpler and “good enough”. This means its users tend to be large companies with enough resources to justify the investment, which then limits the benefits of WebAssembly to only a small subset of the larger Web community.
Solving this issue is hard, and the CG has been focused on extending the WebAssembly language. Now that the language has matured significantly, it’s time to take a closer look at this. We’ll go deep into the problem, before talking about how WebAssembly Components could improve things.
What makes WebAssembly second-class?
At a very high level, the scripting part of the web platform is layered like this:
WebAssembly can directly interact with JavaScript, which can directly interact with the web platform. WebAssembly can access the web platform, but only by using the special capabilities of JavaScript. JavaScript is a first-class language on the web, and WebAssembly is not.
This wasn’t an intentional or malicious design decision; JavaScript is the original scripting language of the Web and co-evolved with the platform. Nonetheless, this design significantly impacts users of WebAssembly.
What are these special capabilities of JavaScript? For today’s discussion, there are two major ones:
Loading of code
Using Web APIs
Loading of code
WebAssembly code is unnecessarily cumbersome to load. Loading JavaScript code is as simple as just putting it in a script tag:
<script src="script.js"></script>
WebAssembly is not supported in script tags today, so developers need to use the WebAssembly JS API to manually load and instantiate code.
let bytecode = fetch(import.meta.resolve('./module.wasm'));
let imports = { ... };
let { exports } =
await WebAssembly.instantiateStreaming(bytecode, imports);
The exact sequence of API calls to use is arcane, and there are multiple ways to perform this process, each of which has different tradeoffs that are not clear to most developers. This process generally just needs to be memorized or generated by a tool for you.
Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox. This proposal lets developers import WebAssembly modules from JS code using the familiar JS module system.
import { run } from "/module.wasm";
run();
In addition, it allows a WebAssembly module to be loaded directly from a script tag using type="module".
This streamlines the most common patterns for loading and instantiating WebAssembly modules. However, while this mitigates the initial difficulty, we quickly run into the real problem.
Using Web APIs
Using a Web API from JavaScript is as simple as this:
console.log("hello, world");
For WebAssembly, the situation is much more complicated. WebAssembly has no direct access to Web APIs and must use JavaScript to access them.
The same single-line console.log program requires the following JavaScript file:
// We need access to the raw memory of the Wasm code, so
// create it here and provide it as an import.
let memory = new WebAssembly.Memory(...);
function consoleLog(messageStartIndex, messageLength) {
// The string is stored in Wasm memory, but we need to
// decode it into a JS string, which is what DOM APIs
// require.
let messageMemoryView = new Uint8Array(
memory.buffer, messageStartIndex, messageLength);
let messageString =
new TextDecoder().decode(messageMemoryView);
// Wasm can't get the `console` global, or do
// property lookup, so we do that here.
return console.log(messageString);
}
// Pass the wrapped Web API to the Wasm code through an
// import.
let imports = {
"env": {
"memory": memory,
"consoleLog": consoleLog,
},
};
let { instance } =
await WebAssembly.instantiateStreaming(bytecode, imports);
instance.exports.run();
And the following WebAssembly file:
(module
;; import the memory from JS code
(import "env" "memory" (memory 0))
;; import the JS consoleLog wrapper function
(import "env" "consoleLog"
(func $consoleLog (param i32 i32))
)
;; export a run function
(func (export "run")
(local $messageStartIndex i32)
(local $messageLength i32)
;; create a string in Wasm memory, store in locals
...
;; call the consoleLog method
local.get $messageStartIndex
local.get $messageLength
call $consoleLog
)
)
Code like this is called “bindings” or “glue code” and acts as the bridge between your source language (C++, Rust, etc.) and Web APIs.
This glue code is responsible for re-encoding WebAssembly data into JavaScript data and vice versa. For example, when returning a string from JavaScript to WebAssembly, the glue code may need to call a malloc function in the WebAssembly module and re-encode the string at the resulting address, after which the module is responsible for eventually calling free.
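On the Wasm side, that protocol usually boils down to a pair of exported allocation functions that the glue code can call. A minimal sketch in Rust (the export names are illustrative, not what embind or wasm-bindgen actually emit):

```rust
use std::alloc::{alloc, dealloc, Layout};

// Glue code calls this to get a buffer inside Wasm linear memory,
// copies the re-encoded string bytes into it, then passes the
// pointer and length on to the module.
#[no_mangle]
pub extern "C" fn buffer_alloc(len: usize) -> *mut u8 {
    let layout = Layout::from_size_align(len.max(1), 1).unwrap();
    unsafe { alloc(layout) }
}

// Whoever received the buffer is responsible for eventually
// releasing it, with the same length it was allocated with.
#[no_mangle]
pub extern "C" fn buffer_free(ptr: *mut u8, len: usize) {
    let layout = Layout::from_size_align(len.max(1), 1).unwrap();
    unsafe { dealloc(ptr, layout) }
}
```

Every string crossing the boundary pays for an allocation, a copy, and a later free – which is where much of the runtime cost discussed below comes from.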
This is all very tedious, formulaic, and difficult to write, so it is typical to generate this glue automatically using tools like embind or wasm-bindgen. This streamlines the authoring process, but adds complexity to the build process that native platforms typically do not require. Furthermore, this build complexity is language-specific; Rust code will require different bindings from C++ code, and so on.
Of course, the glue code also has runtime costs. JavaScript objects must be allocated and garbage collected, strings must be re-encoded, structs must be deserialized. Some of this cost is inherent to any bindings system, but much of it is not. This is a pervasive cost that you pay at the boundary between JavaScript and WebAssembly, even when the calls themselves are fast.
This is what most people mean when they ask “When is Wasm going to get DOM support?” It’s already possible to access any Web API with WebAssembly, but it requires JavaScript glue code.
Why does this matter?
From a technical perspective, the status quo works. WebAssembly runs on the web and many people have successfully shipped software with it.
From the average web developer’s perspective, though, the status quo is subpar. WebAssembly is too complicated to use on the web, and you can never escape the feeling that you’re getting a second class experience. In our experience, WebAssembly is a power user feature that average developers don’t use, even if it would be a better technical choice for their project.
The average developer experience for someone getting started with JavaScript is something like this:
There’s a nice gradual curve where you use progressively more complicated features as the scope of your project increases.
By comparison, the average developer experience for someone getting started with WebAssembly is something like this:
You immediately must scale “the wall” of wrangling the many different pieces to work together. The end result is often only worth it for large projects.
Why is this the case? There are several reasons, and they all directly stem from WebAssembly being a second class language on the web.
1. It’s difficult for compilers to provide first-class support for the web
Any language targeting the web can’t just generate a Wasm file, but also must generate a companion JS file to load the Wasm code, implement Web API access, and handle a long tail of other issues. This work must be redone for every language that wants to support the web, and it can’t be reused for non-web platforms.
Upstream compilers like Clang/LLVM don’t want to know anything about JS or the web platform, and not just for lack of effort. Generating and maintaining JS and web glue code is a specialty skill that is difficult for already stretched-thin maintainers to justify. They just want to generate a single binary, ideally in a standardized format that can also be used on platforms besides the web.
2. Standard compilers don’t produce WebAssembly that works on the web
The result is that support for WebAssembly on the web is often handled by third-party unofficial toolchain distributions that users need to find and learn. A true first-class experience would start with the tool that users already know and have installed.
This is, unfortunately, many developers’ first roadblock when getting started with WebAssembly. They assume that if they just have rustc installed and pass a --target=wasm flag, they’ll get something they can load in a browser. You may be able to get a WebAssembly file that way, but it will not have any of the required platform integration. If you figure out how to load the file using the JS API, it will fail for mysterious and hard-to-debug reasons. What you really need is the unofficial toolchain distribution which implements the platform integration for you.
3. Web documentation is written for JavaScript developers
The web platform has incredible documentation compared to most tech platforms. However, most of it is written for JavaScript. If you don’t know JavaScript, you’ll have a much harder time understanding how to use most Web APIs.
A developer wanting to use a new Web API must first understand it from a JavaScript perspective, then translate it into the types and APIs that are available in their source language. Toolchain developers can try to manually translate the existing web documentation for their language, but that is a tedious and error prone process that doesn’t scale.
4. Calling Web APIs can still be slow
If you look at all of the JS glue code for the single call to console.log above, you’ll see that there is a lot of overhead. Engines have spent a lot of time optimizing this, and more work is underway. Yet this problem still exists. It doesn’t affect every workload, but it’s something every WebAssembly user needs to be careful about.
Benchmarking this is tricky, but we ran an experiment in 2020 to precisely measure the overhead that JS glue code has in a real world DOM application. We built the classic TodoMVC benchmark in the experimental Dodrio Rust framework and measured different ways of calling DOM APIs.
Dodrio was perfect for this because it computed all the required DOM modifications separately from actually applying them. This allowed us to precisely measure the impact of JS glue code by swapping out the “apply DOM change list” function while keeping the rest of the benchmark exactly the same.
We tested two different implementations:
“Wasm + JS glue”: A WebAssembly function which reads the change list in a loop, and then asks JS glue code to apply each change individually. This is the performance of WebAssembly today.
“Wasm only”: A WebAssembly function which reads the change list in a loop, and then uses an experimental direct binding to the DOM which skips JS glue code. This is the performance of WebAssembly if we could skip JS glue code.
The duration to apply the DOM changes dropped by 45% when we were able to remove JS glue code. DOM operations can already be expensive; WebAssembly users can’t afford to pay a 2x performance tax on top of that. And as this experiment shows, it is possible to remove the overhead.
5. You always need to understand the JavaScript layer
The state of the art for WebAssembly on the web is that every language builds their own abstraction of the web platform using JavaScript. But these abstractions are leaky. If you use WebAssembly on the web in any serious capacity, you’ll eventually hit a point where you need to read or write your own JavaScript to make something work.
This adds a conceptual layer which is a burden for developers. It feels like it should just be enough to know your source language, and the web platform. Yet for WebAssembly, we require users to also know JavaScript in order to be a proficient developer.
How can we fix this?
This is a complicated technical and social problem, with no single solution. We also have competing priorities for what is the most important problem with WebAssembly to fix first.
Let’s ask ourselves: In an ideal world, what could help us here?
What if we had something that was:
A standardized self-contained executable artifact
Supported by multiple languages and toolchains
Which handles loading and linking of WebAssembly code
Which supports Web API usage
If such a thing existed, languages could generate these artifacts and browsers could run them, without any JavaScript involved. This format would be easier for languages to support and could potentially exist in standard upstream compilers, runtimes, toolchains, and popular packages without the need for third-party distributions. In effect, we could go from a world where every language re-implements the web platform integration using JavaScript, to sharing a common one that is built directly into the browser.
It would obviously be a lot of work to design and validate a solution! Thankfully, we already have a proposal with these goals that has been in development for years: the WebAssembly Component Model.
What is a WebAssembly Component?
For our purposes, a WebAssembly Component defines a high-level API that is implemented with a bundle of low-level WebAssembly code. It’s a standards-track proposal in the WebAssembly CG that’s been in development since 2021.
We feel that WebAssembly Components have the potential to give WebAssembly a first-class experience on the web platform, and to be the missing link described above.
How could they work?
Let’s try to re-create the earlier console.log example using only WebAssembly Components and no JavaScript.
NOTE: The interactions between WebAssembly Components and the web platform have not been fully designed, and the tooling is under active development.
Take this as an aspiration for how things could be, not a tutorial or promise.
The first step is to specify which APIs our application needs. This is done using an IDL called WIT. For our example, we need the Console API. We can import it by specifying the name of the interface.
component {
    import std:web/console;
}
The std:web/console interface does not exist today, but would hypothetically come from the official WebIDL that browsers use for describing Web APIs. This particular interface might look like this:
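A minimal sketch, assuming only the simple string-logging case; the interface, its name, and the signature are all hypothetical:

```wit
// Hypothetical WIT for std:web/console, covering only the
// string-logging case from the earlier example. Illustrative only.
interface console {
    log: func(message: string);
}
```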
And that’s it! The browser would automatically load the component, bind the native web APIs directly (without any JS glue code), and run the component.
This is great if your whole application is written in WebAssembly. However, most WebAssembly usage is part of a “hybrid application” which also contains JavaScript. We also want to simplify this use case. The web platform shouldn’t be split into “silos” that can’t interact with each other. Thankfully, WebAssembly Components also address this by supporting cross-language interoperability.
Let’s create a component that exports an image decoder for use from JavaScript code. First we need to write the interface that describes the image decoder:
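A possible sketch in WIT, with names chosen to mirror the JavaScript usage later in this post; `list<u8>` stands in for a real stream type, and everything here is hypothetical:

```wit
// Hypothetical WIT for the image decoder. Names mirror the JavaScript
// usage shown later; this is a sketch, not a finished interface.
interface image-lib {
    record pixel {
        r: u8,
        g: u8,
        b: u8,
        a: u8,
    }

    resource image {
        // Decode an image from raw bytes (a stand-in for a stream).
        from-stream: static func(bytes: list<u8>) -> image;
        // Read the pixel at (x, y).
        get: func(x: u32, y: u32) -> pixel;
    }
}
```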
Once we have that, we can write the component in any language that supports components. The right language will depend on what you’re building or what libraries you need to use. For this example, I’ll leave the implementation of the image decoder as an exercise for the reader.
The component can then be loaded in JavaScript as a module. The image decoder interface we defined is accessible to JavaScript, and can be used as if you were importing a JavaScript library to do the task.
import { Image } from "image-lib.wasm";
let byteStream = (await fetch("/image.file")).body;
let image = await Image.fromStream(byteStream);
let pixel = image.get(0, 0);
console.log(pixel); // { r: 255, g: 255, b: 0, a: 255 }
Next Steps
As it stands today, we think that WebAssembly Components would be a step in the right direction for the web. Mozilla is working with the WebAssembly CG to design the WebAssembly Component Model. Google is also evaluating it at this time.
If you’re interested in trying this out, learn to build your first component and run it in the browser using Jco or from the command line using Wasmtime. The tooling is under heavy development, and contributions and feedback are welcome. If you’re interested in the in-development specification itself, check out the component-model proposal repository.
WebAssembly has come a long way since it was first released in 2017. I think the best is yet to come if we’re able to turn it from a “power user” feature into something that average developers can benefit from.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Overall, a bit more noise than usual this week, but mostly a slight improvement,
with several low-level optimizations to MIR and LLVM IR building landing. Also
fewer commits landed than usual, mostly due to GitHub CI issues during the week.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
This is actually just Rust adding support for C++-style duck-typed templates, and the long and mostly-irrelevant information contained in the ICE message is part of the experience.
In the latest desktop version of Firefox, you’ll find an AI controls section where you can turn off AI features entirely — or decide which ones stay on. Here’s how to set things up the way you want.
But first, what AI features can you manage in Firefox?
Menu bar > Firefox > Settings (or Preferences) > AI Controls
Turn on Block AI enhancements
Or block specific features
You can also manage AI features individually. Block link previews? Up to you. Change your mind on translations? You can turn them on any time, while keeping all other AI features blocked.
To block specific features:
Menu bar > Firefox > Settings (or Preferences) > AI Controls
Find the AI feature > dropdown menu > Blocked
In the dropdown menu, Available means you can see and use the feature; Enabled means you’ve opted in to use it; and Blocked means it’s hidden and can’t be used.
Choose the AI features you want to use
Enable image alt text in PDFs to improve accessibility. Keep AI-enhanced tab group suggestions if they’re useful. Block anything that isn’t — the decision is yours.
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 148 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.
In Firefox 148, a WebDriver bug was fixed by a contributor:
Cross-site scripting (XSS) remains one of the most prevalent vulnerabilities on the web. The new standardized Sanitizer API provides a straightforward way for web developers to sanitize untrusted HTML before inserting it into the DOM. Firefox 148 is the first browser to ship this standardized security enhancing API, advancing a safer web for everyone. We expect other browsers to follow soon.
An XSS vulnerability arises when a website inadvertently lets attackers inject arbitrary HTML or JavaScript through user-generated content. With this attack, an attacker could monitor and manipulate user interactions and continually steal user data for as long as the vulnerability remains exploitable. XSS has a long history of being notoriously difficult to prevent and has ranked among the top three web vulnerabilities (CWE-79) for nearly a decade.
Firefox has been deeply involved in solutions for XSS from the beginning, starting with spearheading the Content-Security-Policy (CSP) standard in 2009. CSP allows websites to restrict which resources (scripts, styles, images, etc.) the browser can load and execute, providing a strong line of defense against XSS. Despite a steady stream of improvements and ongoing maintenance, CSP did not gain sufficient adoption to protect the long tail of the web as it requires significant architectural changes for existing web sites and continuous review by security experts.
The Sanitizer API is designed to help fill that gap by providing a standardized way to turn malicious HTML into harmless HTML — in other words, to sanitize it. The setHTML() method integrates sanitization directly into HTML insertion, providing safety by default. Here is an example of sanitizing a simple piece of unsafe HTML:
document.body.setHTML(`<h1>Hello my name is <img src="x"
onclick="alert('XSS')">`);
This sanitization will allow the HTML <h1> element while removing the embedded <img> element and its onclick attribute, thereby eliminating the XSS attack and resulting in the following safe HTML:
<h1>Hello my name is</h1>
Developers can opt into stronger XSS protections with minimal code changes by replacing error-prone innerHTML assignments with setHTML(). If the default configuration of setHTML() is too strict (or not strict enough) for a given use case, developers can provide a custom configuration that defines which HTML elements and attributes should be kept or removed. To experiment with the Sanitizer API before introducing it on a web page, we recommend exploring the Sanitizer API playground.
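For instance, a custom configuration might take the shape of an allowlist. The option names below follow the in-progress Sanitizer API draft and may change, so treat this as a sketch rather than a stable API:

```javascript
// Hypothetical allowlist configuration for setHTML(). The option names
// ("elements", "attributes") follow the Sanitizer API draft and may
// change as the spec evolves.
const config = {
  sanitizer: {
    elements: ["h1", "p", "a", "ul", "li"], // keep only these elements
    attributes: ["href", "title"],          // keep only these attributes
  },
};

// In a browser, a call like the following would keep the paragraph and
// link while stripping the inline event handler:
// document.body.setHTML('<p onclick="alert(1)">hi <a href="/x">x</a></p>', config);
```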
For even stronger protections, the Sanitizer API can be combined with Trusted Types, which centralize control over HTML parsing and injection. Once setHTML() is adopted, sites can enable Trusted Types enforcement more easily, often without requiring complex custom policies. A strict policy can allow setHTML() while blocking other unsafe HTML insertion methods, helping prevent future XSS regressions.
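As an illustration, such a deployment might send a CSP header along these lines; the policy name app-policy is made up, while require-trusted-types-for and trusted-types are the actual CSP directives:

```
Content-Security-Policy: require-trusted-types-for 'script'; trusted-types app-policy
```

With this header in place, unsafe sinks like innerHTML only accept values produced by the listed policy, while setHTML() continues to work because it sanitizes its input.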
The Sanitizer API enables an easy replacement of innerHTML assignments with setHTML() in existing code, introducing a new, safer default to protect users from XSS attacks on the web. Firefox 148 supports the Sanitizer API as well as Trusted Types, which creates a safer web experience. Adopting these standards will allow all developers to prevent XSS without the need for a dedicated security team or significant implementation changes.
Righty-ho, I’m back from Rust Nation, and busily horrifying my teenage daughter with my (admittedly atrocious) attempts at doing an English accent1. It was a great trip with a lot of good conversations and some interesting observations. I am going to try to blog about some of them, starting with some thoughts spurred by Jon Seager’s closing keynote, “Rust Adoption At Scale with Ubuntu”.
There are many chasms out there
For some time now I’ve been debating with myself, has Rust “crossed the chasm”? If you’re not familiar with that term, it comes from a book that gives a kind of “pop-sci” introduction to the Technology Adoption Life Cycle.
The answer, of course, is it depends on who you ask. Within Amazon, where I have the closest view, the answer is that we are “most of the way across”: Rust is squarely established as the right way to build at-scale data planes or resource-aware agents and it is increasingly seen as the right choice for low-level code in devices and robotics as well – but there remains a lingering perception that Rust is useful for “those fancy pants developers at S3” (or wherever) but a bit overkill for more average development3.
On the other hand, within the realm of Safety Critical Software, as Pete LeVasseur wrote in a recent rust-lang blog post, Rust is still scrabbling for a foothold. There are a number of successful products but most of the industry is in a “wait and see” mode, letting the early adopters pave the path.
“Crossing the chasm” means finding “reference customers”
The big idea that I at least took away from reading Crossing the Chasm and other references on the technology adoption life cycle is the need for “reference customers”. When you first start out with something new, you are looking for pioneers and early adopters that are drawn to new things:
What an early adopter is buying [..] is some kind of change agent. By being the first to implement this change in the industry, the early adopters expect to get a jump on the competition. – from Crossing the Chasm
But as your technology matures, you have to convince people with a lower and lower tolerance for risk:
The early majority want to buy a productivity improvement for existing operations. They are looking to minimize discontinuity with the old ways. They want evolution, not revolution. – from Crossing the Chasm
So what is most convincing to people to try something new? The answer is seeing that others like them have succeeded.
You can see this at play in both the Amazon example and the Safety Critical Software example. Clearly seeing Rust used for network services doesn’t mean it’s ready to be used in your car’s steering column4. And even within network services, seeing a group like S3 succeed with Rust may convince other groups building at-scale services to try Rust, but doesn’t necessarily persuade a team to use Rust for their next CRUD service. And frankly, it shouldn’t! They are likely to hit obstacles.
Ubuntu is helping Rust “cross the (user-land linux) chasm”
All of this was on my mind as I watched the keynote by Jon Seager, the VP of Engineering at Canonical, the company behind Ubuntu. Similar to Lars Bergstrom’s epic keynote from years past on Rust adoption within Google, Jon laid out a pitch for why Canonical is adopting Rust that was at once visionary and yet deeply practical.
“Visionary and yet deeply practical” is pretty much the textbook description of what we need to cross from early adopters to early majority. We need folks who care first and foremost about delivering the right results, but are open to new ideas that might help them do that better; folks who can stand on both sides of the chasm at once.
Jon described how Canonical focuses their own development on a small set of languages: Python, C/C++, and Go, and how they had recently brought in Rust and were using it as the language of choice for new foundational efforts, replacing C, C++, and (some uses of) Python.
Ubuntu is building the bridge across the chasm
Jon talked about how he sees it as part of Ubuntu’s job to “pay it forward” by supporting the construction of memory-safe foundational utilities. Jon meant support both in terms of finances – Canonical is sponsoring the Trifecta Tech Foundation’s work on sudo-rs and ntpd-rs and sponsoring the uutils org’s work on coreutils – and in terms of reputation. Ubuntu can take on the risk of doing something new, prove that it works, and then let others benefit.
Remember how the Crossing the Chasm book described early majority people? They are “looking to minimize discontinuity with the old ways”. And what better way to do that than to have drop-in utilities that fit within their existing workflows.
The challenge for Rust: listening to these new adopters
With new adoption comes new perspectives. On Thursday night I was at dinner5 organized by Ernest Kissiedu6. Jon Seager was there along with some other Rust adopters from various industries, as were a few others from the Rust Foundation and the open-source project.
Ernest asked them to give us their unvarnished takes on Rust. Jon made the provocative comment that we needed to revisit our policy around having a small standard library. He’s not the first to say something like that, it’s something we’ve been hearing for years and years – and I think he’s right! Though I don’t think the answer is just to ship a big standard library. In fact, it’s kind of a perfect lead-in to (what I hope will be) my next blog post, which is about a project I call “battery packs”7.
To grow, you have to change
The broader point though is that shifting from targeting “pioneers” and “early adopters” to targeting “early majority” sometimes involves some uncomfortable changes:
Transition between any two adoption segments is normally excruciatingly awkward because you must adopt new strategies just at the time you have become most comfortable with the old ones. [..] The situation can be further complicated if the high-tech company, fresh from its marketing success with visionaries, neglects to change its sales pitch. [..] The company may be saying “state-of-the-art” when the pragmatist wants to hear “industry standard”. – Crossing the Chasm (emphasis mine)
Not everybody will remember it, but in 2016 there was a proposal called the Rust Platform. The idea was to bring in some crates and bless them as a kind of “extended standard library”. People hated it. After all, they said, why not just add dependencies to your Cargo.toml? It’s easy enough. And to be honest, they were right – at least at the time.
I think the Rust Platform is a good example of something that was a poor fit for early adopters, who want the newest thing and don’t mind finding the best crates, but which could be a great fit for the Early Majority.8
Anyway, I’m not here to argue for one thing or another in this post, but more for the concept that we have to be open to adapting our learned wisdom to new circumstances. In the past, we were trying to bootstrap Rust into the industry’s consciousness – and we have succeeded.
The task before us now is different: we need to make Rust the best option not just in terms of “what it could be” but in terms of “what it actually is” – and sometimes those are in tension.
Another challenge for Rust: turning adoption into investment
Later in the dinner, the talk turned, as it often does, to money. Growing Rust adoption also comes with growing needs placed on the Rust project and its ecosystem. How can we connect the dots? This has been a big item on my mind, and I realize in writing this paragraph how many blog posts I have yet to write on the topic, but let me lay out a few interesting points that came up over this dinner and at other recent points.
Investment can mean contribution, particularly for open-source orgs
First, there are more ways to offer support than $$. For Canonical specifically, as they are an open-source organization through and through, what I would most want is to build stronger relationships between our organizations. Take the Rust for Linux developers: early on, Rust maintainers were prioritizing and fixing bugs on behalf of RfL devs, but more and more, RfL devs are fixing things themselves, with Rust maintainers serving as mentors. This is awesome!
Money often comes before a company has adopted Rust, not after
Second, there’s an interesting trend about $$ that I’ve seen crop up in a few places. We often think of companies investing in the open-source dependencies that they rely upon. But there’s an entirely different source of funding, and one that might be even easier to tap, which is to look at companies that are considering Rust but haven’t adopted it yet.
For those “would be” adopters, there are often individuals in the org who are trying to make the case for Rust adoption – these individuals are early adopters, people with a vision for how things could be, but they are trying to sell to their early majority company. And to do that, they often have a list of “table stakes” features that need to be supported; what’s more, they often have access to some budget to make these things happen.
This came up when I was talking to Alexandru Radovici, the Foundation’s Silver Member Director, who said that many safety-critical companies have money they’d like to spend to close various gaps in Rust, but they don’t know how to spend it. Jon’s investments in Trifecta Tech and the uutils org have the same character: he is looking to close the gaps that block Ubuntu from using Rust more.
Conclusions…?
Well, first of all, you should watch Jon’s talk. “Brilliant”, as the Brits have it.
But my other big thought is that this is a crucial time for Rust. We are clearly transitioning in a number of areas from visionaries and early adopters towards that pragmatic majority, and we need to be mindful that doing so may require us to change some of the way that we’ve always done things. I liked this paragraph from Crossing the Chasm:
To market successfully to pragmatists, one does not have to be one – just understand their values and work to serve them. To look more closely into these values, if the goal of visionaries is to take a quantum leap forward, the goal of pragmatists is to make a percentage improvement–incremental, measurable, predictable progress. [..] To market to pragmatists, you must be patient. You need to be conversant with the issues that dominate their particular business. You need to show up at the industry-specific conferences and trade shows they attend.
Re-reading Crossing the Chasm as part of writing this blog post has really helped me square where Rust is – for the most part, I think we are still crossing the chasm, but we are well on our way. I think what we see is a consistent trend now where we have Rust champions who fit the “visionary” profile of early adopters successfully advocating for Rust within companies that fit the pragmatist, early majority profile.
Open source can be a great enabler to cross the chasm…
It strikes me that open source is just an amazing platform for doing this kind of marketing. Unlike a company, we don’t have to do everything ourselves. We can leverage the fact that open source helps those who help themselves: find those visionary folks in industries that could really benefit from Rust, bring them into the Rust orbit, and then (most important!) support and empower them to adapt Rust to their needs.
…but only if we don’t get too “middle school” about it
This last part may sound obvious, but it’s harder than it sounds. When you’re embedded in open source, it seems like a friendly place where everyone is welcome. But the reality is that it can be a place full of cliques and “oral traditions” that “everybody knows”9. People coming in with an idea can get shut down for using the wrong word. They can readily mistake the, um, “impassioned” comments of a random contributor (or perhaps just a troll…) for the official word from project leadership. It only takes one rude response to turn somebody away.
What Rust needs most is empathy
So what will ultimately help Rust the most to succeed? Empathy in Open Source. Let’s get out there, find out where Rust can help people, and make it happen. Exciting times!
I am famously bad at accents. My best attempt at posh British sounds more like Apu from the Simpsons. I really wish I could pull off a convincing Greek accent, but sadly no. ↩︎
Another of my pearls of wisdom is “there is nothing more permanent than temporary code”. I used to say that back at the startup I worked at after college, but years of experience have only proven it more and more true. ↩︎
Russel Cohen and Jess Izen gave a great talk at last year’s RustConf about what our team is doing to help teams decide if Rust is viable for them. But since then another thing having a big impact is AI, which is bringing previously unthinkable projects, like rewriting older systems, within reach. ↩︎
I have no idea if there is code in a car’s steering column, for the record. I assume so by now? For power steering or some shit? ↩︎
Or am I supposed to call it “tea”? Or maybe “supper”? I can’t get a handle on British mealtimes. ↩︎
Ernest is such a joy to be around. He’s quiet, but he’s got a lot of insights if you can convince him to share them. If you get the chance to meet him, take it! If you live in London, go to the London Rust meetup! Find Ernest and introduce yourself. Tell him Niko sent you and that you are supposed to say how great he is and how you want to learn from the wisdom he’s accrued over the years. Then watch him blush. What a doll. ↩︎
The Battery Packs proposal I want to talk about is similar in some ways to the Rust Platform, but decentralized and generally better in my opinion– but I get ahead of myself! ↩︎
Betteridge’s Law of Headlines has it that “Any headline that ends in a question mark can be answered by the word no”. Well, Niko’s law of open-source2 is that “nobody actually knows anything that ’everybody’ knows”. ↩︎
To answer that question, we first need to understand how complex writing or maintaining a web browser is.
A "modern" web browser is:
a network stack,
an HTML[1] parser,
an image[2] decoder,
a JavaScript[3] interpreter/compiler,
a user interface,
integration with the underlying OS[4],
and all the other things I'm currently forgetting.
Of course, all the above points interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.
In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 million lines of code[5] for Firefox. Once the code is written, it needs to be compiled[6] and tested[6]. This requires machines that run the operating systems the browser ships on (as of this day, Mozilla officially ships on Linux, Microslop Windows and Mac OS X; community builds for *BSD do exist and are maintained). You also need engineers to maintain the compile (build) infrastructure.
Once the engineers responsible for the releases[7] have decided which code and features are mature enough, they start assembling the bits of code and, like the engineers before them, build, test and ship the results to the people using said web browser.
When I was employed at Mozilla (the company that makes Firefox), around 900+ engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day: that's 1,872,000 hours of engineering brain power spent every year on making Firefox versions (900 × 8 × 5 × 52; it's actually less, because I have not taken vacations into account). On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.
The current browsing landscape looks dark. There are currently 3 choices of rendering engine: KHTML-based browsers, Blink-based ones and Gecko-based ones. 90+% of the market is dominated by KHTML/Blink-based browsers (Blink is a fork of KHTML, by way of WebKit). This leads to less standards work: the major engine implements a feature and the others need to play catch-up to stay relevant. This already happened in the 2000s, when IE dominated the browser landscape[8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). This also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari. And if there's a little glitch, they can still ship, because of market share.
Firefox's code base dates back to 1998, when embedding software was not really a thing, given all the platforms that were to be supported. Firefox is very hard to embed (e.g. to use as a software library and add stuff on top). I know that for a fact, because both Camino and Thunderbird are embedding Gecko.
In the last few years, Mozilla has been irking the people I connect with, who are very privacy focused and do not look kindly on what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two reasons:
To keep web standards open, so anyone can implement a web browser / web services.
To have enough traffic to be able to pay all the engineers working on Gecko.
Now that I've explained a few important things, let's answer the question: "Are Mozilla's forks any good?"
I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with what maintaining a fork requires:
following security patches and porting said patches,
following development and maintaining their branch, with changes coming from all over the place.
If you are comfortable with that, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill those features you don't like using some `about:config` magic.
Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and with some privacy focus?
Keep using Mozilla Nightly
Give servo a try
[1] HTML is interpreted code; that's why it needs to be parsed and then rendered.
[2] In order to draw an image or a photo on a screen, you need to be able to encode or decode it. Many file formats are available.
[4] Operating systems need, at the very least, to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s you had more systems, IRIX for example. You still have some portions of the Mozilla code base that support these long-dead systems.
Various issues with debugging Rust code are often mentioned as one of the biggest challenges that annoy Rust developers. While it is definitely possible to debug Rust code today, there are situations where it does not work well enough, and the quality of debugging support also varies a lot across different debuggers and operating systems.
In order for Rust to have truly stellar debugging support, it should ideally:
Support several versions (!) of different debuggers (such as GDB, LLDB or CDB) across multiple operating systems.
Implement debugger visualizers that are able to produce quality presentation of most Rust types.
Provide first-class support for debugging async code.
Allow evaluating Rust expressions in the debugger.
Rust is not quite there yet, and it will take a lot of work to reach that level of debugger support. Furthermore, it is also challenging to ensure that debugging Rust code keeps working well, across newly released debugger versions, changes to internal representation of Rust data structures in the standard library and other things that can break the debugging experience.
We already have some plans to start improving debugging support in Rust, but it would also be useful to understand the current debugging struggles of Rust developers. That is why we have prepared the Rust Debugging Survey, which should help us find specific challenges with debugging Rust code.
Filling out the survey should take you approximately 5 minutes, and the survey is fully anonymous. We will accept submissions until Friday, March 13th, 2026. After the survey ends, we will evaluate the results and post key insights on this blog.
We would like to thank Sam Kellam (@hashcatHitman) who did a lot of great work to prepare this survey.
We invite you to fill the survey, as your responses will help us improve the Rust debugging experience. Thank you!
TLDR: No one could agree what ‘sovereignty’ means, but (almost) everyone agreed that AI cannot be controlled by a few dominant companies.
This past week, droves of AI experts and enthusiasts descended on New Delhi, bringing their own agendas, priorities, and roles in the debate to the table.
I scored high for my ability to move between cars, rickshaws and foot for transport (mostly thanks to colleagues), but low for being prepared with snacks. So, based on my tightly packed agenda combined with high hunger levels, here’s my read out:
The same script, different reactions
As with any global summit, the host government made the most of having the eyes of the world and deep pockets of AI investors in town. While some press were critical of India seeking deals and investments, it wasn’t notable – or outside of the norm.
What was notable, and indeed reflected in the voluntary agreements, were the key themes that drove conversations, including the democratisation of AI, access to resources, and the vital role of open source in driving the benefits of AI. These topics were prominent in the Summit sessions and side events throughout the week.
In the name of innovation, regulation has become a faux pas
The EU has become a magnet for criticism given its recent attempts to regulate AI. I’m not going to debate this here, but it’s clear that the EU AI Act (AIA) is being deployed and PRed quite expertly as a cautionary tale. While healthy debate around regulation is absolutely critical, much of the public commentary surrounding the AIA (and not just at this Summit) has been factually incorrect. Interrogate this reality by all means — we live in complex times — but it’s hard not to see invalid criticisms as a strategic PR effort by those who are philosophically (and financially) opposed to governance. There is certainly plenty to question in the AIA, but the reality is much more nuanced than critics suggest.
What’s more likely to kill a start up: the cost of compliance, or the concentration of market power in the hands of a few dominant players? It’s true that regulation can absolutely create challenges. However, it is also worth looking at whether the greater obstacle is the control a small number of tech companies hold. A buy-out as an exit is great for many start-ups, but if that is now the most hopeful option, it raises important questions about the long-term health and competitiveness of the larger tech ecosystem.
A note of optimism: developing global frameworks on AI may still seem like a pipe dream in today’s macro political climate, but ideas around like-minded powers working together and building trust make me think that alignment beyond pure voluntary principles may be something we see grow. Frequent references to the Hiroshima Process as a middle ground were notable.
AI eats the world
There were pervasive assumptions that bigger — and then bigger still — is the inevitable direction of AI deployment, with hyperscale seen as the only viable path forward, in terms of inputs needed. However, the magnitude of what’s required to fuel the current LLM-focused market structure met a global majority-focused reality: hyperscaling isn’t sustainable. There were two primary camps at the Summit — the haves and the rest of us — and while the Summit brought them together, the gulf between them continues to grow.
Open source has to win
At the first AI Safety Summit in the UK, the concept of open source AI was vilified as a security risk. At the France AI Action Summit, the consensus began to shift meaningfully. At the India AI Impact Summit, we saw undeniable recognition of the vital role that open source plays in our collective AI future.
With proprietary systems, winning means owning. With open source approaches, winning means we’re not just renting AI from a few companies and countries: we’re working collectively to build, share, secure and inspect AI systems in the name of economic growth and the public interest. Before the Paris Summit, this was a difficult vision to push for, but after New Delhi, it’s clear that open source is on the right side of history. Now, it’s time for governments to build out their own strategies to promote and procure this approach.
Consolidation ≠ Competition
Global South discussions made one message clear: dependency-oriented partnerships are not true partnerships, and they’re not a long-term bet. Many countries are becoming more vocal that they want autonomy over their data and choice in their suppliers, to lessen harmful impacts on citizens and increase their capacity to govern responsibly.
That is not today’s reality.
I was, however, encouraged to find that attendees were far less starry-eyed over big tech than at previous Summits. The consensus was that a select few companies owning and controlling AI meets no one’s definition of sovereignty.
Despite agreement amongst the majority, addressing market concentration remained an elephant in the room. The narrative deployed against regulation became a blanket mantra, applied to anything from AI governance to competition action. However, it fails to address the fact that the AI market is already skewed toward a small number of powerful companies, and that traditional competition rules that act only after problems arise (and often through long legal processes) are not enough to keep up with fast-paced digital industries.
Some participants were downbeat and questioned if it was too late. The challenge is in demonstrating that it isn’t. There is no single approach. But we know that concentration can be countered with a mix of technical and legal interventions. Options can be sweeping, or lighter touch and surgical in their focus. We are currently seeing a wave of countries pass, draft, debate and consider new ex ante rules, providing learnings, data and inspiration.
It’s important that we watch this space.
Whose safety are we talking about exactly?
The India AI Impact Summit has been criticised for letting safety discussions fall off the radar. That’s not necessarily true. Instead of focusing on the view that AI is a cause for human annihilation, discussions focused on impacts that we can evidence now: on language, culture, bias, online safety, inclusion, and jobs.
Less headline-grabbing, less killer robots, far more human.
The path forward
It’s difficult to know if these Summits will continue in the long term. There is a lot of fuss, expense, marketing, diplomacy, traffic and word salads involved. However, the opportunity to convene world leaders, businesses, builders, engineers, civil society and academics in one place, for what we are constantly reminded is a transformational technology, feels needed. Tracking progress on voluntary commitments over time might be illustrative. And while many of the top sessions are reserved for the few, witnessing the diverse debates this past week gives me hope that there is an opportunity for the greater voice to shape AI to be open, competitive and built for more than just the bottom line.
We are happy to announce that the Rust Project will again be participating in Google Summer of Code (GSoC) 2026, same as in the previous two years. If you're not eligible or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, this should contain some useful information and links.
Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.
The organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to discuss project ideas with mentors. Later, they will send project proposals for the projects that they found the most interesting. If their project proposal is accepted, they will embark on a several months long journey during which they will try to complete their proposed project under the guidance of an assigned mentor.
We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals. We would also like to bring your attention to our GSoC AI policy.
You can start discussing the project ideas with Rust Project mentors and maintainers immediately, but you might want to keep the following important dates in mind:
The project proposal application period starts on March 16, 2026. From that date you can submit project proposals into the GSoC dashboard.
The project proposal application period ends on March 31, 2026 at 18:00 UTC. Take note of that deadline, as there will be no extensions!
If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.
Our GSoC contributors were quite successful in the past two years (2024, 2025), so we are excited to see what this year's GSoC will bring! We hope that participants in the program can improve their skills, but we would also love for this to bring new contributors to the Project and increase awareness of Rust in general. Like last year, we expect to publish blog posts in the future with updates about our participation in the program.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Several pull requests introduced (usually very small) regressions across the board this week. On the
other hand, #151380 provided a nice performance win in the inference engine.
I would also like to bring attention to #152375,
which improved the parallel frontend. It is not shown in this report, because we don't yet have
many benchmarks for the parallel frontend, but this PR seemingly improved the check (wall-time)
performance with multiple frontend threads on several real-world crates by 5-10%!
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
Clearly there is such a thing as too much syntactic sugar (as one of my professors put it, "syntactic sugar causes semantic cancer"), but at the same time also clearly some syntactic sugar is worth having.
WebNN is emerging as a portable, browser-friendly inference API.
But LLMs hit a hard wall: dynamic inputs.
Autoregressive transformers fundamentally mutate state at runtime. KV cache
tensors evolve at every step, sequence lengths vary with prompts, and shape
expressions flow through
operators like Shape, Gather, Concat, Reshape, and Expand.
Today, this does not map cleanly to WebNN’s static-graph constraints.
At step 1, KV cache length is 1. At step 512, KV cache length is 512.
That is not a static graph.
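A toy sketch of that mismatch (illustrative only, not WebNN code): the KV cache's sequence dimension grows by one token per decode step, so no single static shape describes every iteration.

```rust
// Illustrative only: the shape of a KV cache tensor across decode steps.
// The middle (sequence) dimension grows by one token per step, which is
// why a single static graph cannot describe the whole decode loop.
fn kv_cache_shape(step: usize, num_heads: usize, head_dim: usize) -> [usize; 3] {
    [num_heads, step, head_dim]
}

fn main() {
    assert_eq!(kv_cache_shape(1, 8, 64), [8, 1, 64]);     // step 1
    assert_eq!(kv_cache_shape(512, 8, 64), [8, 512, 64]); // step 512
}
```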
Why this matters
If this is not solved, WebNN stays limited to fixed-shape demos for many LLM
use cases.
For real product workloads, people need variable prompt lengths, efficient
token-by-token decode, and stable latency as context grows. Without that, local
browser LLM UX degrades quickly and teams default back to heavier alternatives.
The issue captures exactly what we see in practice:
Vision models with runtime-determined resolution
Speech/decoder models where KV cache grows by one token per step
LLMs with arbitrary prompt lengths and dynamic cache shapes
In other words: modern inference workloads.
The ONNX Runtime workaround (and why it hurts)
ONNX Runtime WebNN has had to work around this limitation by routing dynamic-shape
parts away from WebNN and into WASM execution paths.
It works, but performance is terrible for autoregressive generation because you
bounce between fast backend code and slow fallback code in the hottest part of
the loop.
This architecture can make demos pass, but it creates significant performance
penalties in real autoregressive workloads.
In preliminary runs, keeping decode on one backend avoids the repeated fallback
round-trips that dominate token latency.
So instead of accepting that, we decided to push WebNN support further.
I started this in Malta
I started prototyping this while on vacation in Malta.
What began as a small experiment quickly turned into deep changes across three
repositories: converter internals, runtime shape validation, and KV-cache
plumbing.
The work happened across:
webnn-graph: ONNX lowering and dynamic-input metadata support
rustnn (tarek-flexible-input): dynamic dimensions in graph/runtime + checked execution
pywebnn: Python demos and loading-from-Hub workflows
I also made sure to surface rustnn through Python (pywebnn) very early.
Most ML engineers live in Python land, and I wanted this work to be reachable by
that ecosystem immediately: easier model validation, easier parity checks against
transformers, and faster feedback from people who already ship models every day.
What changed in practice
The key was to support bounded dynamic dimensions end to end.
Why bounded and not fully dynamic? Because many backends still need strong
compile-time guarantees for validation, allocation, and kernel planning.
Fully unbounded shapes are hard to optimize and hard to validate safely.
Bounded dynamic dimensions are the practical compromise: keep symbolic runtime
flexibility, but define a maximum range so memory and execution planning remain
deterministic.
This allows the full autoregressive decode loop to stay inside the WebNN
backend, without bouncing into slower fallback paths.
This is also better than common alternatives:
Padding everything to worst-case shapes wastes memory and compute
Re-exporting one graph per shape explodes complexity
Falling back dynamic parts to WASM in hot decode loops kills throughput
For example, you can bound a sequence dimension to 2048 tokens: large enough
for real prompts, still finite for backend planning and allocation.
In rustnn, tensor dimensions can now be static values or dynamic descriptors
with a name and a max size, then checked at runtime:
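The post does not show the exact rustnn types, so here is a hypothetical Rust sketch of what a bounded dynamic dimension with a runtime check could look like. Names like `Dim` and `check` are illustrative assumptions, not the real API from the tarek-flexible-input branch.

```rust
// Hypothetical sketch: a dimension is either a fixed size, or a named
// symbolic size with a declared upper bound (the "bounded" part).
#[derive(Debug)]
enum Dim {
    Static(usize),
    Dynamic { name: &'static str, max: usize },
}

impl Dim {
    // Validate a concrete runtime size against the declared dimension,
    // so allocation and kernel planning can assume the bound holds.
    fn check(&self, actual: usize) -> Result<(), String> {
        match self {
            Dim::Static(n) if *n == actual => Ok(()),
            Dim::Static(n) => Err(format!("expected {n}, got {actual}")),
            Dim::Dynamic { max, .. } if actual <= *max => Ok(()),
            Dim::Dynamic { name, max } => {
                Err(format!("dim `{name}`: {actual} exceeds bound {max}"))
            }
        }
    }
}

fn main() {
    // A sequence dimension bounded to 2048 tokens, as in the example above.
    let seq = Dim::Dynamic { name: "seq_len", max: 2048 };
    assert!(seq.check(512).is_ok());   // a real prompt length: fine
    assert!(seq.check(4096).is_err()); // exceeds the declared bound
    assert!(Dim::Static(3).check(3).is_ok());
}
```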
In webnn-graph, ONNX conversion can preserve unresolved input dynamics while
still lowering shape-driving expressions needed by WebNN.
That lets us keep flexibility where possible while still emitting valid WebNN
graphs.
SmolLM-135M converted and running
With flexible inputs supported end to end, SmolLM-135M converts cleanly: no
shape rewriting hacks, no per-length exports, no WASM fallback in the decode
loop. The artifacts are published here:
The demo can also run a transformers baseline and fail fast on divergence:
if args.compare_transformers:
    hf_generated, hf_text, hf_prompt_ids = run_transformers_baseline(...)
    ...
    if generated_text != hf_text:
        print("[ERROR] WebNN and transformers generated different text output")
        sys.exit(1)
Correctness checks against transformers are critical. Performance
improvements mean nothing if generation diverges.
Lessons learned
Fully unbounded dynamic shapes are rarely necessary for practical decode loops
Bounded flexibility captures most real workloads while keeping backends sane
Python exposure (pywebnn) accelerates model validation and ecosystem feedback
What is next
Flexible inputs will likely be important if WebNN is to support real LLM workloads.
Static graphs alone are not enough for modern inference. Bounded flexibility is
the pragmatic bridge.
And while this work pushes WebNN forward, we are also giving a lot of love to
the TensorRT backend these days, because high-performance local inference matters
just as much as API design.
OK, let’s talk about sharing. This is the first of Dada blog posts where things start to diverge from Rust in a deep way and I think the first where we start to see some real advantages to the Dada way of doing things (and some of the tradeoffs I made to achieve those advantages).
We are shooting for a GC-like experience without GC
Let’s start with the goal: earlier, I said that Dada was like “Rust where you never have to type as_ref”. But what I really meant is that I want a GC-like experience–without the GC.
We are shooting for a “composable” experience
I also often use the word “composable” to describe the Dada experience I am shooting for. Composable means that you can take different things and put them together to achieve something new.
Obviously Rust has many composable patterns – the Iterator APIs, for example. But what I have found is that Rust code is often very brittle: there are many choices when it comes to how you declare your data structures and the choices you make will inform how those data structures can be consumed.
Running example: Character
Defining the Character type
Let’s create a type that we can use as a running example throughout the post: Character. In Rust, we might define a Character like so:
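A minimal sketch of that definition (the fields mirror the Dada version shown later in the post; note that `class` is a perfectly fine field name in Rust, while Dada has to dodge its `class` keyword):

```rust
// Sketch of the running-example type; fields match the Dada Character
// (name, class, hp) defined later in the post.
struct Character {
    name: String,
    class: String,
    hp: u32,
}

fn main() {
    let ch = Character { name: "Tzara".into(), class: "Dadaist".into(), hp: 22 };
    assert_eq!(ch.hp, 22);
    println!("{} the {}", ch.name, ch.class);
}
```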
So far, so good. Now suppose I want to share that same Character struct so it can be referenced from a lot of places without deep copying. To do that, I am going to put it in an Arc:
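Sketching that out (field names as in the running example; wrapping in `Arc` is the "freeze" step):

```rust
use std::sync::Arc;

struct Character {
    name: String,
    class: String,
    hp: u32,
}

fn main() {
    // Build the character up imperatively while it is uniquely owned...
    let mut ch = Character { name: String::new(), class: String::new(), hp: 22 };
    ch.name.push_str("Tzara");
    ch.class.push_str("Dadaist");

    // ...then "freeze" it behind an Arc so it can be referenced from
    // many places without deep copying.
    let ch: Arc<Character> = Arc::new(ch);
    let another_handle = Arc::clone(&ch); // cheap: just bumps a ref-count
    assert_eq!(another_handle.name, "Tzara");
    assert_eq!(ch.hp, 22);
}
```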
OK, cool! Now I have a Character that is readily sharable. That’s great.
Rust is composable here, which is cool, we like that
Side note but this is an example of where Rust is composable: we defined Character once in a fully-owned way and we were able to use it mutably (to build it up imperatively over time) and then able to “freeze” it and get a read-only, shared copy of Character. This gives us the advantages of an imperative programming language (easy data construction and manipulation) and the advantages of a functional language (immutability prevents bugs when things are referenced from many disjoint places). Nice!
Creating and Arc’ing the Character
Now, suppose that I have some other code, written independently, that just needs to store the character’s name. That code winds up copying the name into a lot of different places. So, just like we used Arc to let us cheaply reference a single character from multiple places, it uses Arc so it can cheaply reference the character’s name from multiple places:
struct CharacterSheetWidget {
    // Use `Arc<String>` and not `String` because
    // we wind up copying this into many different
    // places and we don't want to deep clone
    // the string each time.
    name: Arc<String>,
    // ... assume more fields here ...
}
OK. Now comes the rub. I want to create a character-sheet widget from our shared character:
fn create_character_sheet_widget(ch: Arc<Character>) -> CharacterSheetWidget {
    CharacterSheetWidget {
        // FIXME: Huh, how do I bridge this gap?
        // I guess I have to do this.
        name: Arc::new(ch.name.clone()),
        // ... assume more fields here ...
    }
}
Shoot, that’s frustrating! What I would like to do is write name: ch.name.clone() or something similar (actually I’d probably like to just write ch.name, but anyhow) and get back an Arc<String>. But I can’t do that. Instead, I have to deeply clone the string and allocate a new Arc. Of course any subsequent clones will be cheap. But it’s not great.
Rust often gives rise to these kinds of “impedance mismatches”
I often find patterns like this arise in Rust: there’s a bit of an “impedance mismatch” between one piece of code and another. The solution varies, but it’s generally something like
clone some data – it’s not so big anyway, screw it (that’s what happened here).
refactor one piece of code – e.g., modify the Character class to store an Arc<String>. Of course, that has ripple effects, e.g., we can no longer write ch.name.push_str(...) anymore, but have to use Arc::get_mut or something.
invoke some annoying helper – e.g., write opt.as_ref() to convert from an &Option<String> to a Option<&String> or write a &**r to convert from a &Arc<String> to a &str.
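Concretely, the helper dance in that last bullet looks like this:

```rust
use std::sync::Arc;

fn main() {
    // &Option<String> -> Option<&String>, via as_ref
    let opt: Option<String> = Some("Tzara".to_string());
    let name_ref: Option<&String> = opt.as_ref();
    assert_eq!(name_ref.map(|s| s.len()), Some(5));

    // &Arc<String> -> &str, via two explicit derefs
    let arc_name: Arc<String> = Arc::new("Dada".to_string());
    let r: &Arc<String> = &arc_name;
    let s: &str = &**r;
    assert_eq!(s, "Dada");
}
```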
The goal with Dada is that we don’t have that kind of thing.
Sharing is how Dada copies
So let’s walk through how that same Character example would play out in Dada. We’ll start by defining the Character class:
class Character(
name: String,
klass: String, # Oh dang, the perils of a class keyword!
hp: u32,
)
Just as in Rust, we can create the character and then modify it afterwards:
class Character(name: String, klass: String, hp: u32)
let ch: given Character = Character("", "", 22)
# ----- remember, the "given" permission
# means that `ch` is fully owned
ch.name!.push("Tzara")
ch.klass!.push("Dadaist")
# - and the `!` signals mutation
The .share operator creates a shared object
Cool. Now, I want to share the character so it can be referenced from many places. In Rust, we created an Arc, but in Dada, sharing is “built-in”. We use the .share operator, which will convert the given Character (i.e., fully owned character) into a shared Character:
class Character(name: String, klass: String, hp: u32)
let ch = Character("", "", 22)
ch.name!.push("Tzara")
ch.klass!.push("Dadaist")
let ch1: shared Character = ch.share
# ------ -----
# The `share` operator consumes `ch`
# and returns the same object, but now
# with *shared* permissions.
shared objects can be copied freely
Now that we have a shared character, we can copy it around:
class Character(name: String, klass: String, hp: u32)
# Create a shared character to start
let ch1 = Character("Tzara", "Dadaist", 22).share
# -----
# Create another shared character
let ch2 = ch1
Sharing propagates from owner to field
When you have a shared object and you access its field, what you get back is a shared (shallow) copy of the field:
class Character(...)
# Create a `shared Character`
let ch: shared Character = Character("Tristan Tzara", "Dadaist", 22).share
# ------ -----
# Extracting the `name` field gives a `shared String`
let name: shared String = ch.name
# ------
Propagation using a Vec
To drill home how cool and convenient this is, imagine that I have a Vec[String]:
let v: given Vec[String] = ["Hello", "Dada"]
and then I share it with v.share. What I get back is a shared Vec[String]. And when I access the elements of that, I get back a shared String:
let v = ["Hello", "Dada"].share
let s: shared String = v[0]
This is as if one could take an Arc<Vec<String>> in Rust and get out an Arc<String>.
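For contrast, a sketch of what the nearest Rust equivalent requires today:

```rust
use std::sync::Arc;

fn main() {
    let v: Arc<Vec<String>> = Arc::new(vec!["Hello".to_string(), "Dada".to_string()]);

    // Indexing gives a &String borrowed from the Vec...
    let s: &String = &v[0];
    assert_eq!(s, "Hello");

    // ...but getting an owned Arc<String> out needs a deep clone of the
    // string plus a fresh Arc allocation; there is no cheap Arc-to-Arc path.
    let owned: Arc<String> = Arc::new(v[0].clone());
    assert_eq!(*owned, "Hello");
}
```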
How sharing is implemented
So how is sharing implemented? The answer lies in a not-entirely-obvious memory layout. To see how it works, let’s walk through how a Character would be laid out in memory:
# Character type we saw earlier.
class Character(name: String, klass: String, hp: u32)
# String type would be something like this.
class String {
buffer: Pointer[char]
initialized: usize
length: usize
}
Here Pointer is a built-in type that is the basis for Dada’s unsafe code system.1
Layout of a given Character in memory
Now imagine we have a Character like this:
let ch = Character("Duchamp", "Dadaist", 22)
The character ch would be laid out in memory something like this (focusing just on the name field):
Let’s talk this through. First, every object is laid out flat in memory, just like you would see in Rust. So the fields of ch are stored on the stack, and the name field is laid out flat within that.
Each object that owns other objects begins with a hidden field, _flag. This field indicates whether the object is shared or not (in the future we’ll add more values to account for other permissions). If the field is 1, the object is not shared. If it is 2, then it is shared.
Heap-allocated objects (i.e., using Pointer[]) begin with a ref-count before the actual data (actually this is at the offset of -4). In this case we have a Pointer[char] so the actual data that follows are just simple characters.
Layout of a shared Character in memory
If I were to instead create a shared character:
let ch1 = Character("Duchamp", "Dadaist", 22).share
# -----
The memory layout would be the same, but the flag field on the character is now 2:
Now imagine that we created two copies of the same shared character:
let ch1 = Character("Duchamp", "Dadaist", 22).share
let ch2 = ch1
What happens is that we will copy all the fields of ch1 and then, because _flag is 2, we will increment the ref-counts for the heap-allocated data within:
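In Rust terms, that shallow copy plus ref-count bump is roughly what cloning a struct of Arc-held fields does:

```rust
use std::sync::Arc;

fn main() {
    // The heap-allocated data behind the character (here, just the name).
    let name = Arc::new(String::from("Duchamp"));

    // ch1: inline fields sit "flat", heap data sits behind a ref-count.
    let ch1 = (Arc::clone(&name), 22u32);
    assert_eq!(Arc::strong_count(&name), 2);

    // Copying ch1 copies the inline fields and bumps the ref-count of the
    // heap data, which is the "shallow" work Dada's shared copy performs.
    let ch2 = (Arc::clone(&ch1.0), ch1.1);
    assert_eq!(Arc::strong_count(&name), 3);
    assert_eq!(*ch2.0, "Duchamp");
}
```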
“Sharing propagation” is one example of permission propagation
This post showed how shared values in Dada work and showed how the shared permission propagates when you access a field. Permissions are how Dada manages object lifetimes. We’ve seen two so far
the given permission indicates a uniquely owned value (T, in Rust-speak);
the shared permission indicates a copyable value (Arc<T> is the closest Rust equivalent).
In future posts we’ll see the ref and mut permissions, which roughly correspond to & and &mut, and talk about how the whole thing fits together.
Dada is more than a pretty face
This is the first post where we started to see a bit more of Dada’s character. Reading over the previous few posts, you could be forgiven for thinking Dada was just a cute syntax atop familiar Rust semantics. But as you can see from how shared works, Dada is quite a bit more than that.
I like to think of Dada as “opinionated Rust” in some sense. Unlike Rust, it imposes some standards on how things are done. For example, every object (at least every object with a heap-allocated field) has a _flag field. And every heap allocation has a ref-count.
These conventions come at some modest runtime cost. My rule is that basic operations are allowed to do “shallow” operations, e.g., toggling the _flag or adjusting the ref-counts on every field. But they cannot do “deep” operations that require traversing heap structures.
In exchange for adopting conventions and paying that cost, you get “composability”, by which I mean that permissions in Dada (like shared) flow much more naturally, and types that are semantically equivalent (i.e., you can do the same things with them) generally have the same layout in memory.
Remember that I have not implemented all this, I am drawing on my memory and notes from my notebooks. I reserve the right to change any and everything as I go about implementing. ↩︎
The first Mobile Progress Report of 2026 provides a high-level overview of our mobile plans and priorities for the coming year.
Android
Our primary focus this year revolves around a better user experience and includes a major push to improve quality. We want to make the app stable, reduce our bugs, and speed up our development process. To do this, we have to make some big changes and improvements to the app’s basic structure and database. We’re moving toward modern Android standards, which includes using technologies like Compose and creating a single, consistent design system for our User Interface.
But we don’t want a year of just code fixes; we know we need to add features, especially around messaging & notifications. We’re making sure we deliver features and improve the user experience along the way. It’s a tricky balance between making the app better for users and overhauling the inner workings. We think these changes are worth the investment because they’ll lead to a better app and, ultimately, a better experience for everyone. So the focus is better quality and simplifying the code to make us quicker.
Thunderbird has a new product – Thunderbird Pro – and as more of it comes online, we plan to connect the Android app to it.
Here are our priorities for the year. P1 is the top focus:
P1
Get the Android app into a state that’s easier to maintain
Improve the database structure
Message List & View improvements
Unified Account
Unified Design System (for both iOS and Android)
P2
HTML signatures
Looking into JMAP support
P3
Looking into Exchange support
Calendar exploration
iOS
The main thing we’re trying to do for iOS this year is successfully launch Version 1 of our app. That sounds simple, but it involves building a lot of complicated, low-level foundational things.
This quarter, we’re concentrating on finishing up the IMAP and SMTP pieces, getting our design system established, and building the basic UI so we can start using these pieces. After that, we’ll shift to implementing OAuth. This will stop users from having to use confusing processes, like creating an app token, and let them sign in easily through the standard account import process with a simple User Interface.
Once we have IMAP and OAuth ready, we’ll have the absolute bare minimum for a mail app, allowing users to send and receive email. But there are other features you’d expect in a mail app, like mailboxes, signatures, rich text viewing, attachment handling, and the compose experience. We’ve already made great progress on the underlying functionality, and we have a clear vision of what needs to be implemented to make this successful.
Our key priorities for iOS are:
P1
Account creation flow
IMAP support
Full email writing and reading experience
P2
JMAP support
HTML signatures
It’s exciting to see the momentum that the iOS app is gaining and to get a clearer picture of what we need to do for the Android app to simplify things. We are getting farther on fewer, more targeted goals. I look forward to communicating with you over the next few months and sharing the progress that we are making.
Mozilla is headed to New Delhi, India for the India AI Impact Summit 2026 next week with a message: Open Source is the path to both economic and digital sovereignty. Participating in dozens of events across the weeklong global forum, Mozilla leaders will make the case that a different kind of AI future is possible, and that global action is urgently needed to build a global AI ecosystem firmly grounded in the public interest.
“We’re at a crossroads for AI, where the world can continue to rent from a few big global corporations, or can take back control,” said Mark Surman, president of Mozilla. “To build national resilience and lower costs for their domestic stakeholders, countries should leave India ready to meaningfully invest in open source AI as a transformational solution.”
As part of the India AI Impact Summit 2026, Mozilla is curating three official events that showcase its work to help build a more decentralized AI ecosystem. These include panels on the state of competition in AI, how open source AI can operationalize digital sovereignty, and the launch of a new convenings program aimed at Bollywood artists and filmmakers to explore the future of creativity in the age of AI. Mozilla will also host a community party for open source AI developers and founders.
At the Summit, Mozilla will speak to growing concerns about consolidation in the AI landscape, with just a few large global corporations essentially renting access to AI to the rest of the world. As more powerful AI systems are built and controlled behind closed platforms, users are left with little visibility into how these systems work and almost no ability to shape or govern them. This concentration of control means a small number of companies can decide who gets access to advanced AI, on what terms, and at what cost, with far-reaching consequences for innovation, public institutions, and digital sovereignty.
Mozilla believes this growing concentration of AI control threatens innovation, fair competition and public accountability. To counter this, Mozilla is investing in people, products, and organizations working to build a more open and human-centered AI ecosystem. Open source AI is a key part of this effort, providing practical, real-world infrastructure that allows systems to be inspected, improved locally, and governed in the public interest.
In partnership with G5A, Mozilla Foundation will also launch “The Origin of Thought” at the Summit — a new initiative to explore the intersections of culture and AI. The program will convene Bollywood artists, filmmakers, technologists, and cultural practitioners to help shape a nuanced, multi-faceted understanding of AI’s impact on our lives — not just as a tool but as a potential cultural force. The first taster session, held at The Oddbird Theatre in New Delhi, will feature filmmakers Nikkhil Advani and Shakun Batra.
“Creativity has always been how people make sense of change, long before policy frameworks or product roadmaps catch up. As AI reshapes how culture is made, shared, and remembered, we need spaces that slow the conversation down enough to ask what we’re protecting and why,” said Nabiha Syed, executive director of Mozilla Foundation. “The Origin of Thought brings artists, technologists, and decision-makers together to look beyond efficiency and novelty, and toward the human stakes of this moment. This work reflects our belief that imagination is a critical safeguard, not a luxury. When we center creative voices, we make it possible to build technologies that expand opportunity, dignity, and livelihoods.”
Through its participation at the India AI Impact Summit 2026, Mozilla will reaffirm its commitment to building an AI future that is open, accountable, and shaped by the societies it is meant to serve, and to ensuring that open source AI remains a central pillar of global AI governance and innovation.
“Technology should adapt to humanity, not the other way around,” said Raffi Krikorian, CTO of Mozilla. “Across our entire portfolio, Mozilla is working to decentralize the future of AI — from building new tools for developers to supporting creators to building all levels of the open source AI ecosystem. India, with its incredible community of founders, startups, and developers, knows firsthand that open source AI is where innovation is going next.”
The crates.io team will no longer publish a blog post each time a malicious crate is detected or reported. In the vast majority of cases to date, these notifications have involved crates that have no evidence of real world usage, and we feel that publishing these blog posts is generating noise, rather than signal.
We will always publish a RustSec advisory when a crate is removed for containing malware. You can subscribe to the RustSec advisory RSS feed to receive updates.
Crates that contain malware and are seeing real usage or exploitation will still get both a blog post and a RustSec advisory. We may also notify via additional communication channels (such as social media) if we feel it is warranted.
Recent crates
Since we are announcing this policy change now, here is a retrospective summary of the malicious crates removed between our last blog post and today:
polymarket-clients-sdk: we were notified on February 6th by Socket that this crate was attempting to exfiltrate credentials by impersonating the polymarket-client-sdk crate. Advisory: RUSTSEC-2026-0010.
polymarket-client-sdks: we were notified on February 13th that this crate was attempting to exfiltrate credentials by impersonating the polymarket-client-sdk crate. Advisory: RUSTSEC-2026-0011.
In all cases, the crates were deleted, the user accounts that published them were immediately disabled, and reports were made to upstream providers as appropriate.
Thanks
Once again, our thanks go to Matthias, Socket, and the reporter of polymarket-client-sdks for their reports. We also want to thank Dirkjan Ochtman from the secure code working group, Emily Albini from the security response working group, and Walter Pearce from the Rust Foundation for aiding in the response.
The Interop Project is a cross-browser initiative to improve web compatibility in areas that offer the most benefit to both users and developers.
The group, including Apple, Google, Igalia, Microsoft, and Mozilla, takes proposals of features that are well defined in a sufficiently stable web standard, and have good test suite coverage. Then, we come up with a subset of those proposals that balances web developer priorities (via surveys and bug reports) with our collective resources.
We focus on features that are well represented in Web Platform Tests, since the pass rate is how we measure progress; you can track it on the Interop dashboard.
Once we have an agreed set of focus areas, we use those tests to track progress in each browser throughout the year. And after that, we do it all again!
But, before we talk about 2026, let’s take a look back at Interop 2025…
Interop 2025
Firefox started Interop 2025 with a score of 46, so we’re really proud to finish the cycle on 99. But the number that really matters is the overall Interop score, which is a combined score for all four browsers – and the higher this number is, the fewer developer hours are lost to frustrating browser differences.
That’s the headline-grabbing part, but in my experience, it’s way more frustrating when a feature is claimed to be supported, but doesn’t work as expected. That’s why Interop 2025 also focused on improving the reliability of existing features like WebRTC, CSS Flexbox, CSS Grid, Pointer Events, CSS backdrop-filter, and more.
But it’s not just about passing tests
With some focus areas, in particular CSS Anchor Positioning and the Navigation API, we noticed that it was possible to achieve a good score on the tests while having inconsistent behavior compared to other browsers.
In some cases this was due to missing tests, but in some cases the tests contradicted the spec. This usually happens when tests are written against a particular implementation, rather than the specified behavior.
I experienced this personally before I joined Mozilla – I tried to use CSS Anchor Positioning back when it was only shipping in Chrome and Safari, and even with simple use-cases, the results were wildly inconsistent.
Although it caused delays in these features landing in Firefox, we spent time highlighting these problems by filing issues against the relevant specs, and ensured they got priority in their working groups. As a result, specs became less ambiguous, tests were improved, and browser behavior became more reliable for developers.
Okay, that’s enough looking at the past. Let’s move on to…
Interop 2026
Over 150 proposals were submitted for Interop 2026. We looked through developer feedback on the issues themselves, and developer surveys like The State of HTML and The State of CSS. As an experiment for 2026, we at Mozilla also invited developers to stack-rank the proposals, and we used the results in combination with the other data to compare developer preferences between individual features – this is something we want to expand on in the future.
After carefully examining all the proposals, the Interop group has agreed on 20 focus areas (formed of 33 proposals) and 4 investigation areas. See the Interop repository for the full list, but here are the highlights:
New features
As with 2025, part of the effort is about bringing new features to all browser engines.
Scroll-driven animations allow you to drive animations based on the user’s scroll position. This replaces heavy JavaScript solutions that run on the main thread.
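As a rough sketch of the CSS involved (the class name and colors here are illustrative, not from any particular site), a reading-progress bar can be driven entirely by the document's scroll position, with no JavaScript:

```css
/* A bar that fills horizontally as the page scrolls */
@keyframes fill-up {
  from { transform: scaleX(0); }
  to   { transform: scaleX(1); }
}

.reading-progress {
  position: fixed;
  top: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: left;
  animation: fill-up linear;
  /* Drive the animation from the root scroller instead of the clock */
  animation-timeline: scroll(root);
}
```

Because the timeline is the scroll position rather than time, the animation's progress tracks scrolling directly, and supporting browsers can run it off the main thread.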
WebTransport provides a low-level API over HTTP/3, allowing for multiple unidirectional streams, and optional out-of-order delivery. This is a modern alternative to WebSockets.
CSS container style queries allow you to apply a block of styles depending on the computed values of custom properties on the nearest container. This means, for example, you can have a simple --theme property that impacts a range of other properties.
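For example, a style query keyed off that kind of `--theme` property might look like the following sketch (the property name and selectors are illustrative):

```css
/* Every element is a style container by default,
   so no container-type declaration is needed */
.promo-section { --theme: dark; }

@container style(--theme: dark) {
  /* Applies to cards whose nearest container has --theme: dark */
  .card {
    background: #1c1c1c;
    color: #eee;
  }
}
```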
JavaScript Promise Integration for Wasm allows WebAssembly to asynchronously ‘suspend’, waiting on the result of an external promise. This simplifies the compilation of languages like C/C++ which expect APIs to run synchronously.
CSS attr() has been supported across browsers for over 15 years, but only for pseudo-element content. For Interop 2026, we’re focusing on more recent changes that allow attribute values to be used in most CSS values (with URLs being an exception).
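A hedged sketch of the newer syntax (the attribute name and selector are illustrative): `attr()` can now supply typed values, with a fallback, outside of `content`:

```css
/* Read data-color as a <color>, falling back to gray
   when the attribute is missing or invalid */
.swatch {
  background-color: attr(data-color type(<color>), gray);
}
```

This would pick up markup such as `<div class="swatch" data-color="tomato"></div>`.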
CSS custom highlights let you register a bunch of DOM ranges as a named highlight, which you can style via the ::highlight(name) pseudo-element. The styling is limited, but it means these ranges can span between elements, don’t impact layout, and don’t disrupt things like text selection.
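The styling side looks like this sketch (the highlight name is illustrative; the DOM ranges themselves are registered from JavaScript via `CSS.highlights.set()` and the `Highlight` constructor):

```css
/* Style every range registered under the name "search-results",
   regardless of which elements the ranges span */
::highlight(search-results) {
  background-color: yellow;
  color: black;
}
```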
Scoped Custom Element Registries allow different parts of your DOM tree (such as a shadow root) to use a different set of custom elements definitions, meaning the same tag name can refer to different custom elements depending on where they are in the DOM.
CSS shape() is a reimagining of path() that, rather than using SVG path syntax, uses a CSS syntax, allowing for mixed units and calc(). In practice, this makes it much easier to design responsive clip-paths and offset-paths.
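As a rough illustration of the syntax (the coordinates are arbitrary), note how CSS units and calc() mix freely in a way SVG path syntax never allowed:

```css
/* A banner clipped with a notch cut from the bottom edge;
   the 1.5rem inset scales with the root font size */
.banner {
  clip-path: shape(
    from 0 0,
    line to 100% 0,
    line to 100% calc(100% - 1.5rem),
    line to 50% 100%,
    line to 0 calc(100% - 1.5rem),
    close
  );
}
```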
Like in previous years, the backbone of Interop is in improving the reliability of existing features, removing frustrations for web developers.
In 2026, we’ll be focusing these efforts on particular edge cases in:
Range headers & form data in fetch
The Navigation API
CSS scroll snap
CSS anchor positioning
Same-document View Transitions
JavaScript top-level await
The event loop
WebRTC
CSS user-select
CSS zoom
Some of these are carried over from 2025 focus areas, as shortcomings in the tests and specs were fixed, but too late to be included in Interop 2025.
Again, these are less headline-grabbing than the shiny new features, but it’s these edge cases where we web developers lose hours of our time. Frustrating, frustrating hours.
Interop investigations
Sometimes, we see a focus area proposal that’s clearly important, but doesn’t fit the requirements of Interop. This is usually because the tests for the feature aren’t sufficient, are in the wrong format, or browsers are missing automation features that are needed to make the feature testable.
In these cases, we identify what’s missing, and set up an investigation area.
For Interop 2026, we’re looking at…
Accessibility. This is a continuation of work in 2025. Ultimately, we want browsers to produce consistent accessibility trees from the same DOM and CSS, but before we can write tests for this, we need to improve our testing infrastructure.
Mobile testing. Another continuation from 2025. In particular, in 2026, we want to figure out an approach for testing viewport changes caused by dynamic UI, such as the location bar and virtual keyboard.
JPEG XL. The current tests for this are sparse. Existing decoders have more comprehensive test suites, but we need to figure out how these relate to browsers. For example, progressive rendering is an important feature for developers, but how and when browsers should do this (to avoid performance issues) is currently being debated.
WebVTT. This feature allows for text to be synchronised to video content. The investigation is to go through the test suite and ensure it’s fit for purpose, and amend it where necessary.
It begins… again
The selected focus areas mean we’ve committed to more work compared to the other browsers, which is quite the challenge being the only engine that isn’t owned by billionaires. But it’s a challenge we’re happy to take on!
Together with other members of the Interop group, we’re looking forward to delivering features and fixes over the next year. You can follow along with the progress of all browsers on the Interop dashboard.
If your favorite feature is missing from Interop 2026, that doesn’t mean it won’t be worked on. JPEG XL is a good example of this. The current test suite meant it wasn’t a good fit for Interop 2026, but we’ve challenged the JPEG XL team at Google Research to build a memory-safe decoder in Rust, which we’re currently experimenting with in Firefox, as is Chrome.
Interop isn’t the limit of what we’re working on, but it is a cross-browser commitment.
If you’re interested in details of features as they land in Firefox, and discussions of future features from spec groups, you can follow us on:
Here at Mozilla, we work tirelessly to bring our products to a global audience. Pontoon, our in-house translation management system, plays a central role: it’s where volunteer localizers translate Firefox, Thunderbird, SUMO, and other Mozilla products.
Recently, we have been working to unify localization tools and give localizers and developers smoother, more streamlined workflows. This is why we are excited to introduce Pontoon’s new Translation Search feature, where everyone can search for strings across all projects and locales at Mozilla.
What is the Translation Search Feature?
Translation Search is Pontoon’s latest way to access our extensive collection of translations, built up through years of localization work by fellow Mozillians. This new feature allows localizers to search for strings across all projects and all locales at Mozilla. Inspired by the functionality of Transvision, it is intended to be a suitable replacement and includes many of the features that localizers rely on.
Let’s go through some of its features and how they can apply to your localization workflows.
Searching for Translations
Pontoon’s new Translation Search, where users can search for strings through all projects and locales.
Searching for strings in Translation Search is simple. Similar to how Transvision operates, you can search within a specific project and locale, as well as filter by string identifiers, case sensitivity and whole word search. Unlike Transvision, Pontoon Translation Search covers all products localized at Mozilla. It is also completely integrated with other Pontoon elements, including Translate, such that you can seamlessly navigate to Translate after your search.
Transvision’s search functionality, which we intend to replace.
Finding Entity Translations
Pontoon’s new Entity Translations page, which lists available translations for a source string.
If you want to get a better picture of a source string with different translations, go to the Entity Translations page. By clicking the “All Locales” button in Translation Search for a string, you can see the source string translated into every available locale. This is useful for comparing similar locale translations and getting a broader picture of a string’s context. This feature is intended to replace Transvision’s translation list for a particular entity.
Transvision’s string translation list, which we intend to replace.
Searching from Firefox Address Bar
If you have the latest version of the Pontoon Add-on installed, you can now search for strings directly from the address bar in Firefox: select Pontoon from the list of search engines, or start typing pontoon and press TAB.
How to use Translation Search with Pontoon Add-on.
Pontoon Translation Search is Live!
This feature is available now and ready for you to use. Head over to pontoon.mozilla.org/search to try out the new search experience and streamline your localization work! If you have any questions or concerns about Translation Search, do not hesitate to contact us on Matrix or file an issue. You can also consult the documentation here.
The Rust team has published a new point release of Rust, 1.93.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.93.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
What's in 1.93.1
Rust 1.93.1 resolves three regressions that were introduced in the 1.93.0 release.
Revert an update to wasm-related dependencies, fixing file descriptor leaks on the wasm32-wasip2 target. This only affects the rustup component for this target, so downstream toolchain builds should check their own dependencies too.
Contributors to 1.93.1
Many people came together to create Rust 1.93.1. We couldn't have done it without all of you. Thanks!
Welcome back to the Thunderbird blog and the first post of 2026! We’re rested, recharged, and ready to keep our community updated on all of our progress on the desktop and mobile clients and with Thunderbird Pro!
Hello again from the Thunderbird development team! After a restful and reflective break over the December holidays, the team returned recharged and ready to take on the mountain of priorities ahead.
To everyone we met during the recent FOSDEM weekend, thank you! The conversations, encouragement, and thoughtful feedback you shared were genuinely energizing, and many of your insights are already helping us better understand the real-world challenges you’re facing. The timing couldn’t have been better, as FOSDEM provided a strong early-year boost of inspiration, collaboration, and perspective.
FOSDEM – Collaboration, learning and real conversations
This year, a larger contingent of the Thunderbird team joined our Mozilla colleagues in Brussels for an intense and rewarding FOSDEM weekend. Across talks, hallway chats, and long discussions at the Thunderbird booth, we dug into standards, shared hard-won lessons, debated solutions, and explored what’s next for open communication tools.
The highlight, as always, was meeting users face-to-face. Hearing your stories about what’s working, what’s painful, and what you’d like to see next continues to be one of the most motivating parts of our work.
Several recurring themes stood out in these discussions, and we’re keen to help move the needle on some of the bigger, knottier challenges, including:
Unblocking OAuth issues for Microsoft Exchange
Enterprise management feature support and extension
Add-ons and feature requests to enable the continued move to FOSS solutions for many European institutions and orgs
These conversations don’t end when FOSDEM does but help shape our priorities for the months ahead, and we’re grateful to everyone who took the time to stop by, ask questions, or share their experiences.
Exchange Email Support
After releasing Exchange support for email to the Monthly release channel, we’ve had some great feedback and helpful diagnosis of edge case problems, which we’ve been prioritizing for the past few weeks.
Work completed during this period includes:
Concurrency, queuing and prioritization of requests
Classless folder handling
Subfolder copy/move operations
Starring a message (which proved to be far more painful than imagined)
Custom OAuth configuration support
Work on supporting the Graph API protocol for email is moving steadily through the early stages, with these basic components already shipped to Daily:
Initial scaffolding & rust crates
Account Hub changes to support the addition of Graph protocol
The team met in person following FOSDEM and has planned out work to allow the new Account Hub UX to be used as the default experience in our next Extended Support Release this summer, which will ensure users benefit from changes we’ve made to enable custom OAuth settings and configuration specific to Microsoft Exchange.
Follow progress in the meta bugs for the last few pieces of phase 3 and telemetry, as well as the work we’ve defined to enable an interim experience for users setting up Thunderbird for the first time.
Calendar UI Rebuild
The new Calendar UI work has advanced at a good pace in recent weeks and the team met in person to break the work apart into chunks which have been prioritized alongside some of the “First Time User Experience” milestones. The team has recently:
Completed sprint planning for upcoming milestones
Assigned tasks and estimated work for the next 2 milestones
Continued preparation for adopting Redux-based state management during the “Event Add/Edit” milestone
Maintenance, Upstream adaptations, Recent Features and Fixes
Over the past couple of months, a significant portion of the team’s time has gone into responding to upstream changes that have impacted build stability, test reliability, and CI. Sheriffing continues to be a challenge, with frequent breakages requiring careful investigation to separate upstream regressions from Thunderbird-specific changes.
Alongside this ongoing maintenance work, we’ve also benefited greatly from contributions across the wider development community. Thanks to that collective effort, a steady stream of fixes and improvements has landed.
More broadly and focusing on our roadmap, the last two months have seen solid progress on Fluent migrations, as well as planning and early groundwork for rolling out Redux and the Design System more widely across the codebase.
Support from the community and team has resulted in some notable patches landing in recent weeks, with the following of particular help:
If you would like to see new features as they land, and help us find early bugs, you can try running Daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.
We may already be a few weeks into 2026, but it’s never too late to say Happy New Year! As we dive into the year ahead, we also want to take a moment to reflect on everything we accomplished together in the second half of 2025.
In this recap, you’ll find highlights from community campaigns, major platform updates and product launches, as well as forum and Knowledge Base data that offer a clearer picture of how the community performed. We hope this snapshot helps celebrate what went well, while also shining a light on areas where we can continue to grow together.
Let’s jump in!
Highlights
The biggest highlight from H2 2025 was probably our first Ask a Fox event, which brought contributors from both Firefox and Thunderbird together to respond to user questions in real time. Our total contributor count rose by 41.6%, and we hit the highest weekly reply rate of the year during the event. The energy, collaboration, and collective impact of that week highlighted the power of coordinated community action, so much so that we’re now exploring ways to make it a recurring event.
We launched the Semantic History Search campaign in September, which reaffirmed our community’s strong interest in shaping early-stage features. Contributor participation in this testing phase showed once again how eager our communities are to help shape the future of Firefox.
The SUMO engineering team has also introduced Machine Translation (MT) for Knowledge Base articles that we first announced in August, with Italian and Spanish enabled later in September. This marked the start of a long-term effort to improve scalability and freshness of support content, particularly in core locales, where we’ve seen content freshness improve from 38% to 96%. At the same time, we recognize that this shift brought significant changes to long-established contributor workflows. We understand the rollout hasn’t been without its bumps. For many in the community, it raised concerns about quality, trust, and the role of contributors in shaping localized content. These are important conversations and we remain committed to listening carefully, learning from your feedback, and making the transition as transparent and collaborative as possible. We sincerely appreciate the patience, feedback, and ongoing dedication of everyone who has helped us navigate this complex change together.
In October, we also helped host a Reddit AMA with Firefox leadership, and the response from the community was overwhelmingly positive. The thread generated 184 comments and 218 upvotes, making it our most engaging AMA on Reddit to date. This level of interaction reflects a strong appetite for direct and transparent conversations with users. The AMA not only created space for meaningful dialogue but also surfaced valuable insights that can inform future product decisions. Building on this momentum, we’re committed to hosting more AMAs in the future to continue strengthening that connection.
Earlier in the year, we made the tough call to sunset Social Support and Mobile Store Support, a decision made to focus our energy and resources more intentionally on the Community Forums. While we know this transition wasn’t easy for everyone, the impact became clear by year-end: reply rates rebounded from 49.9% in H2 2024 to an average of 62% (May–December 2025), an increase of roughly 12 percentage points. This shift signaled that our community didn’t just adapt, but rallied, making the forum stronger, faster, and more effective than it had been in months.
We wrapped the year with a celebration of identity and collaboration. Kit, our charming new mascot, made its debut in November. And shortly after, we announced that Mozilla Connect officially joined forces with the SUMO team. The alignment felt natural, uniting support and feedback under one community umbrella.
Community stats
Forum Support
General stats
The forum continues to show encouraging signs of community growth and maturity. Most notably, the solve rate more than doubled, jumping by 124% to reach 11.9%. This is a clear signal that contributors are not only engaging with users, but successfully resolving their issues at a much higher rate. We also saw a 9.8% increase in OP reply rate, suggesting stronger two-way engagement between users and contributors.
Improvements in speed reinforce this trend: first response time dropped by nearly 30%, while time to resolution was cut in half, falling by 48.9%. Combined with a 12.7% rise in reply rate and 15.7% more threads being actively supported, these results point to a community that’s not just growing, but becoming more efficient, responsive, and impactful.
With the automatic spam moderation introduced in the first half of the year, we’ve seen fewer total questions come in during H2 2025, but higher-quality interactions overall. This shift suggests a more focused and intentional support environment. Taken together, these trends suggest that it’s time to elevate solve rate from a “nice to have” to a core success metric: a meaningful reflection of contributor expertise, community trust, and the maturity of our Community Forums.
Total valid questions: 14,803 (-20%)
Reply rate: 63.5% (+12.67%)
Solve rate: 11.9% (+124%)
Total responses posted: 14,191 (+20.4%)
Total threads interacted: 9,599 (+15.7%)
Average first response time: 22.4 hours (-29.8%)
Average time to resolution: 2.55 days (-48.9%)
Total new registrations: 455k (+0.5%)
Total contributors: 963 (-1.3%)
Helpful rate: 60.8% (+3.92%)
OP reply rate: 27% (+9.8%)
Top forum contributors
All credit for this impressive performance goes to our incredible forum community, who continue to raise the bar with each quarter. Their dedication, consistency, and responsiveness are what make these results possible.
We’re proud to highlight the top 3 contributors on the English forum, along with the leading contributors across other locales, a true reflection of the global impact of our support network.
We’ve seen an impressive uptick in article contributions in the second half of 2025, with 925 revisions submitted to the English Knowledge Base, a 21.9% increase compared to the previous period. This continued growth reflects not just dedication, but real momentum in the community’s drive to keep our Knowledge Base fresh and helpful for users worldwide.
This level of participation directly supports our broader direction towards improving and streamlining the content request workflow. As we continue investing in clearer processes and better documentation, it’s clear that contributors are willing to step up when the pathway to impact is well defined.
Total en-US revisions: 925 (+21.9%)
Total articles: 248 (+2.9%)
Total revisions reviewed: 821 (+19.9%)
Total revisions approved: 778 (+17.7%)
Total authors: 97
Total reviewers: 19 (+11.8%)
Top KB contributors
The numbers may show progress, but the real story is the people behind them. Behind these revisions, 97 unique contributors stepped in to create the updates and 19 reviewers helped guide their contributions. And here’s just a glimpse of the top 5 contributors:
The localization community delivered an outstanding performance in H2 2025, despite undergoing significant changes. We saw 4,807 non-English revisions submitted, a 37.5% increase, covering 2,664 articles across locales (+29.7%). In total, 4,259 revisions were approved and 4,296 reviewed, reflecting consistent contributor dedication to quality and accuracy. Most notably, 314 unique contributors stepped up to author content, representing a 54.7% increase from earlier this year.
These results are especially meaningful given the rollout of Machine Translation in August, a major shift in localization workflow that understandably sparked concern and discussion across the community. Adjusting to MT required both flexibility and trust, and we’re grateful that many contributors responded by showing up in full force. Your continued involvement ensured that translations remained thoughtful, context-aware, and aligned with Mozilla’s values of openness and quality. This success is a testament to the strength, resilience, and care of our contributor base, and we’re deeply grateful for your ongoing contribution.
Participate in ongoing discussions on the Contributor Forum to catch up on the latest updates and share your input.
Drop by our Matrix channel for more casual chats with fellow contributors.
Attend Our Monthly Community Call
Every month, we host a community call to share updates about Firefox and community activities. Watch past recordings from 2026!
Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. Don’t feel pressured to turn on your camera or speak if you’re not comfortable. You can also:
Submit your questions ahead of time via the Contributor Forum or Matrix
Lurk silently and absorb the updates—your presence is still valued!
Stay Informed
Follow the SUMO Blog for the latest community news and updates.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
This week we saw quite a few improvements. The largest comes from adding two targeted with_capacity calls in #151929.
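As a generic illustration of the kind of change involved (not the actual compiler patch in #151929), pre-sizing a `Vec` with `with_capacity` replaces a series of growth reallocations with a single up-front allocation:

```rust
// Pre-sizing avoids repeated reallocation while the Vec grows.
fn collect_squares(n: usize) -> Vec<usize> {
    let mut v = Vec::with_capacity(n); // one allocation up front
    for i in 0..n {
        v.push(i * i); // never triggers a reallocation
    }
    v
}

fn main() {
    let v = collect_squares(4);
    assert_eq!(v, [0, 1, 4, 9]);
    assert!(v.capacity() >= 4); // with_capacity guarantees at least n slots
}
```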
Another source of multiple improvements is the ongoing migration away from using external files to store diagnostic messages.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
Let’s continue with working through Dada. In my previous post, I introduced some string manipulation. Let’s start talking about permissions. This is where Dada will start to resemble Rust a bit more.
Class struggle
Classes in Dada are one of the basic ways that we declare new types (there are also enums, we’ll get to that later).
The most convenient way to declare a class is to put the fields in parentheses. This implicitly declares a constructor at the same time:
class Point(x: u32, y: u32) {}
This is in fact sugar for a more Rust like form:
class Point {
x: u32
y: u32
fn new(x: u32, y: u32) -> Point {
Point { x, y }
}
}
And you can create an instance of a class by calling the constructor:
let p = Point(22, 44) // sugar for Point.new(22, 44)
Mutating fields
You can mutate the fields of p as you would expect:
p.x += 1
p.x = p.y
Read by default
In Dada, the default when you declare a parameter is that you are getting read-only access:
fn print_point(p: Point) {
print("The point is {p.x}, {p.y}")
}
let p = Point(22, 44)
print_point(p)
If you attempt to mutate the fields of a read-only parameter, you get an error.
If you declare a parameter with !, then it becomes a mutable reference to a class instance from your caller:
fn translate_point(point!: Point, x: u32, y: u32) {
point.x += x
point.y += y
}
In Rust, this would be like point: &mut Point. When you call translate_point, you also put a ! to indicate that you are passing a mutable reference:
let p = Point(22, 44) # Create point
print_point(p) # Prints 22, 44
translate_point(p!, 2, 2) # Mutate point
print_point(p) # Prints 24, 46
As you can see, when translate_point modifies p.x, that changes p in place.
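For comparison, here is a Rust sketch of the same function, spelled with the `&mut Point` analogy the post draws:

```rust
struct Point { x: u32, y: u32 }

// Rust spelling of Dada's `point!: Point`: an explicit mutable reference.
fn translate_point(point: &mut Point, x: u32, y: u32) {
    point.x += x;
    point.y += y;
}

fn main() {
    let mut p = Point { x: 22, y: 44 };
    translate_point(&mut p, 2, 2); // the caller marks mutability with &mut
    assert_eq!((p.x, p.y), (24, 46)); // p was changed in place
}
```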
Moves are explicit
If you’re familiar with Rust, that last example may be a bit surprising. In Rust, a call like print_point(p) would move p, giving ownership away. Trying to use it later would give an error. That’s because the default in Dada is to give a read-only reference, like &p in Rust (this gives the right intuition but is also misleading; we’ll see in a future post that references in Dada are different from Rust in one very important way).
If you have a function that needs ownership of its parameter, you declare that with given:
fn take_point(p: given Point) {
// ...
}
And on the caller’s side, you call such a function with .give:
let p = Point(22, 44)
take_point(p.give)
take_point(p.give) # <-- Error! Can't give twice.
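The Rust analogue of given is plain pass-by-value, which moves ownership into the callee; a sketch:

```rust
struct Point { x: u32, y: u32 }

// Rust analogue of Dada's `p: given Point`: the function takes ownership.
fn take_point(p: Point) -> u32 {
    p.x + p.y
}

fn main() {
    let p = Point { x: 22, y: 44 };
    assert_eq!(take_point(p), 66); // p is moved into the call here
    // take_point(p); // error[E0382]: use of moved value: `p`
}
```

The difference is that Rust moves silently at the call site, while Dada requires the caller to write .give explicitly.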
Comparing with Rust
It’s interesting to compare some Rust and Dada code side-by-side:
Rust
Dada
vec.len()
vec.len()
map.get(&key)
map.get(key)
vec.push(element)
vec!.push(element.give)
vec.append(&mut other)
vec!.append(other!)
message.send_to(&channel)
message.give.send_to(channel)
Design rationale and objectives
Convenient is the default
The most convenient things are the shortest and most common. So we make reads the default.
Everything is explicit but unobtrusive
The . operator in Rust can do a wide variety of things depending on the method being called. It might mutate, move, create a temporary, etc. In Dada, these things are all visible at the call site, but they are unobtrusive.
This actually dates from Dada’s “gradual programming” days – after all, if you don’t have type annotations on the method, then you can’t decide whether foo.bar() should take a shared or mutable borrow of foo. So we needed a notation where everything is visible at the call site and explicit.
Postfix operators play more nicely with others
Dada tries hard to avoid prefix operators like &mut, since they don’t compose well with . notation.
I’m hearing this question asked a lot lately, both within Mozilla and from others in the industry. You come up with a plan for implementing some feature, put your best estimate on how long it will take, and then you get pushback from folks several levels removed from the project, along the lines of “Wouldn’t this be faster if you used AI?” or “Can’t Claude Code do most of this?”.
Following on my Fun with Dada post, this post is going to start teaching Dada. I’m going to keep each post short – basically just what I can write while having my morning coffee.1
You have the right to write code
Here is a very first Dada program
println("Hello, Dada!")
I think all of you will be able to guess what it does. Still, there is something worth noting even in this simple program:
“You have the right to write code. If you don’t write a main function explicitly, one will be provided for you.” Early on I made the change to let users omit the main function and I was surprised by what a difference it made in how light the language felt. Easy change, easy win.
Convenient is the default
Here is another Dada program
let name = "Dada"
println("Hello, {name}!")
Unsurprisingly, this program does the same thing as the last one.
“Convenient is the default.” Strings support interpolation (i.e., {name}) by default. In fact, that’s not all they support, you can also break them across lines very conveniently. This program does the same thing as the others we’ve seen:
let name = "Dada"
println("
Hello, {name}!
")
When you have a " immediately followed by a newline, the leading and trailing newline are stripped, along with the “whitespace prefix” from the subsequent lines. Internal newlines are kept, so something like this:
let name = "Dada"
println("
Hello, {name}!
How are you doing?
")
would print
Hello, Dada!
How are you doing?
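The stripping rule can be sketched as a small Rust helper (a hypothetical dedent, not part of Dada's actual implementation): drop the leading and trailing newline, then remove the common whitespace prefix from the remaining lines.

```rust
// Sketch of Dada's multi-line string stripping rule.
fn dedent(s: &str) -> String {
    let s = s.strip_prefix('\n').unwrap_or(s); // drop leading newline
    let s = s.strip_suffix('\n').unwrap_or(s); // drop trailing newline
    // Measure the common whitespace prefix, ignoring blank lines.
    let prefix = s
        .lines()
        .filter(|l| !l.trim().is_empty())
        .map(|l| l.len() - l.trim_start().len())
        .min()
        .unwrap_or(0);
    s.lines()
        .map(|l| if l.trim().is_empty() { "" } else { &l[prefix..] })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let name = "Dada";
    let text = format!("\n    Hello, {name}!\n    How are you doing?\n");
    assert_eq!(dedent(&text), "Hello, Dada!\nHow are you doing?");
}
```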
Just one familiar String
Of course you could also annotate the type of the name variable explicitly:
let name: String = "Dada"
println("Hello, {name}!")
You will find that it is String. This in and of itself is not notable, unless you are accustomed to Rust, where the type would be &'static str. This is of course a perennial stumbling block for new Rust users, but more than that, I find it to be a big annoyance – I hate that I have to write "Foo".to_string() or format!("Foo") everywhere that I mix constant strings with strings that are constructed.
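A small Rust sketch of the friction described above, where literals and constructed strings have different types and need explicit conversion to mix:

```rust
// In Rust, a string literal is `&'static str`, not `String`, so mixing
// constant and constructed strings requires explicit conversion.
fn greet(name: &str) -> String {
    format!("Hello, {name}!")
}

fn main() {
    let literal: &'static str = "Dada";      // the type of a string literal
    let owned: String = literal.to_string(); // conversion needed for a String
    assert_eq!(greet(&owned), "Hello, Dada!");
}
```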
Similar to most modern languages, strings in Dada are immutable. So you can create them and copy them around:
let name: String = "Dada"
let greeting: String = "Hello, {name}"
let name2: String = name
Next up: mutation, permissions
OK, we really just scratched the surface here! This is just the “friendly veneer” of Dada, which looks and feels like a million other languages. Next time I’ll start getting into the permission system and mutation, where things get a bit more interesting.
My habit is to wake around 5am and spend the first hour of the day doing “fun side projects”. But for the last N months I’ve actually been doing Rust stuff, like symposium.dev and preparing the 2026 Rust Project Goals. Both of these are super engaging, but all Rust and no play makes Niko a dull boy. Also a grouchy boy. ↩︎
Waaaaaay back in 2021, I started experimenting with a new programming language I call “Dada”. I’ve been tinkering with it ever since and I just realized that (oh my gosh!) I’ve never written even a single blog post about it! I figured I should fix that. This post will introduce some of the basic concepts of Dada as it is now.
Before you get any ideas, Dada isn’t fit for use. In fact the compiler doesn’t even really work because I keep changing the language before I get it all the way working. Honestly, Dada is more of a “stress relief” valve for me than anything else1 – it’s fun to tinker with a programming language where I don’t have to worry about backwards compatibility, or RFCs, or anything else.
That said, Dada has been a very fertile source of ideas that I think could be applicable to Rust. And not just for language design: playing with the compiler is also what led to the new salsa design2, which is now used by both rust-analyzer and Astral’s ty. So I really want to get those ideas out there!
I took a break, but I’m back baby!
I stopped hacking on Dada about a year ago3, but over the last few days I’ve started working on it again. And I realized, hey, this is a perfect time to start blogging! After all, I have to rediscover what I was doing anyway, and writing about things is always the best way to work out the details.
Dada started as a gradual programming experiment, but no longer
Dada has gone through many phases. Early on, the goal was to build a gradually typed programming language that I thought would be easier for people to learn.
The idea was that you could start writing without any types at all and just execute the program. There was an interactive playground that would let you step through and visualize the “borrow checker” state (what Dada calls permissions) as you go. My hope was that people would find that easier to learn than working with the type checker.
At the same time, I found myself unconvinced that the gradually typed approach made sense. What I wanted was that when you executed the program without type annotations, you would get errors at the point where you violated a borrow. And that meant that the program had to track a lot of extra data, kind of like miri does, and it was really only practical as a teaching tool. I still would like to explore that, but it also felt like it was adding a lot of complexity to the language design for something that would only be of interest very early in a developer’s journey4.
Therefore, I decided to start over and, this time, focus just on the static type checking part of Dada.
Dada is like a streamlined Rust
Dada today is like Rust but streamlined. The goal is that Dada has the same basic “ownership-oriented” feel of Rust, but with a lot fewer choices and nitty-gritty details you have to deal with.5
Rust often has types that are semantically equivalent, but different in representation. Consider &Option<String> vs Option<&String>: both of them are equivalent in terms of what you can do with them, but of course Rust makes you carefully distinguish between them. In Dada, they are the same type. Dada also makes &Vec<String>, &Vec<&String>, &[String], &[&str], and many other variations all the same type too. And before you ask, it does it without heap allocating everything or using a garbage collector.
To put it pithily, Dada aims to be “Rust where you never have to call as_ref()”.
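For illustration, here is the Rust-side distinction that Dada erases (the helper inner_len is hypothetical):

```rust
// In Rust, &Option<String> and Option<&String> are distinct types that
// must be converted explicitly with as_ref(); Dada treats them as one type.
fn inner_len(opt: &Option<String>) -> Option<usize> {
    let as_inner_ref: Option<&String> = opt.as_ref(); // explicit conversion
    as_inner_ref.map(|s| s.len())
}

fn main() {
    assert_eq!(inner_len(&Some("Dada".to_string())), Some(4));
    assert_eq!(inner_len(&None), None);
}
```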
Dada has a fancier borrow checker
Dada also has a fancier borrow checker, one which already demonstrates much of “the borrow checker within”, although it doesn’t have view types. Dada’s borrow checker supports internal borrows (e.g., you can make a struct that has fields that borrow from other fields) and it supports borrow checking without lifetimes. Much of this stuff can be brought to Rust, although I did tweak a few things in Dada that made some aspects easier.
Dada targets WebAssembly natively
Somewhere along the line in refocusing Dada, I decided to focus exclusively on building WebAssembly components. Initially I felt like targeting WebAssembly would be really convenient:
WebAssembly is like a really simple and clean assembly language, so writing the compiler backend is easy.
WebAssembly components are explicitly designed to bridge between languages, so they solve the FFI problem for you.
With WASI, you even get a full featured standard library that includes high-level things like “fetch a web page”. So you can build useful things right off the bat.
WebAssembly and on-demand compilation = compile-time reflection almost for free
But I came to realize that targeting WebAssembly has another advantage: it makes compile-time reflection almost trivial. The Dada compiler is structured in a purely on-demand fashion. This means we can compile one function all the way to WebAssembly bytecode and leave the rest of the crate untouched.
And once we have the WebAssembly bytecode, we can run that from inside the compiler! With wasmtime, we have a high quality JIT that runs very fast. The code is even sandboxed!
So we can have a function that we compile and run during execution and use to produce other code that will be used by other parts of the compilation step. In other words, we get something like miri or Zig’s comptime for free, essentially. Woah.
Wish you could try it? Me too!
Man, writing this blog post made ME excited to play with Dada. Too bad it doesn’t actually work. Ha! But I plan to keep plugging away on the compiler and get it to the point of a live demo as soon as I can. Hard to say exactly how long that will take.
In the meantime, to help me rediscover how things work, I’m going to try to write up a series of blog posts about the type system, borrow checker, and the compiler architecture, all of which I think are pretty interesting.
Yes, I relax by designing new programming languages. Doesn’t everyone? ↩︎
Designing a new version of salsa so that I could write the Dada compiler in the way I wanted really was an epic yak shave, now that I think about it. ↩︎
I lost motivation as I got interested in LLMs. To be frank, I felt like I had to learn enough about them to understand if designing a programming language was “fighting the last war”. Having messed a bunch with LLMs, I definitely feel that they make the choice of programming language less relevant. But I also think they really benefit from higher-level abstractions, even more than humans do, and so I like to think that Dada could still be useful. Besides, it’s fun. ↩︎
And, with LLMs, that period of learning is shorter than ever. ↩︎
Of course this also makes Dada less flexible. I doubt a project like Rust for Linux would work with Dada. ↩︎
In the Mozilla Android team, we want engineers to talk with Product and UX more and hash out ideas sooner. Prototype an idea, then discuss the feature's merits. For this, we built TryFox to make it easier for everyone to get the latest version of Firefox Nightly or install a "try" build with a link to the build on our CI servers.
The downside of using a Try build is that you sometimes have to uninstall your existing version of Nightly before you can install another one, and that means losing app data. If you are doing this on your daily driver, you typically don’t want to lose that data. A quick workaround is to add a temporary patch on top of your stack of commits that changes the App ID suffix (and optionally the application name) so that there is no conflict.
Here is an example diff from a patch that changed an animation, so having it install alongside what we shipped let you compare them easily:
diff --git a/mobile/android/fenix/app/build.gradle b/mobile/android/fenix/app/build.gradle
index 019fdb7ab4..772cc6cc3d 100644
--- a/mobile/android/fenix/app/build.gradle
+++ b/mobile/android/fenix/app/build.gradle
@@ -127,7 +127,7 @@
debug {
shrinkResources = false
minifyEnabled = false
- applicationIdSuffix ".fenix.debug"
+ applicationIdSuffix ".fenix.debug_animator"
resValue "bool", "IS_DEBUG", "true"
pseudoLocalesEnabled = true
}
diff --git a/mobile/android/fenix/app/src/main/res/values/static_strings.xml b/mobile/android/fenix/app/src/main/res/values/static_strings.xml
index 4f6703eb35..1a57988477 100644
--- a/mobile/android/fenix/app/src/main/res/values/static_strings.xml
+++ b/mobile/android/fenix/app/src/main/res/values/static_strings.xml
@@ -4,7 +4,7 @@
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<resources xmlns:moz="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools">
<!-- Name of the application -->
- <string name="app_name" translatable="false">Firefox Fenix</string>
+ <string name="app_name" translatable="false">Firefox Fenix (animator)</string>
<string name="firefox" translatable="false">Firefox</string>
<!-- Preference for developers -->
In the future, we can add mach try support to do this automatically for any push.
My name is Oliver Chan, though I am mostly known by my username Olvcpr423. I’m from China, and I speak Mandarin and Cantonese. I have been contributing to Mozilla localization in Simplified Chinese since 2020.
Getting Started
Q: How did you first get involved in localization, and what led you to Mozilla?
A: My localization journey actually began with Minecraft back in 2018, when I was 13. I was an avid player of this globally popular game. Similar to Mozilla, its developer uses a crowdsourcing platform to let players localize the game themselves. I joined the effort and quickly realized that I had a strong interest in translation. More importantly, I found myself eager to use my skills to help bridge language gaps, so that more people could enjoy content from different languages easily.
Firefox was the first Mozilla product I ever used. I started using it relatively late, in 2020, and my connection with Firefox began thanks to my uncle. Although I was aware that Firefox had a long history, I didn’t yet understand what made it special. I gradually learned about its unique features and position as I explored further, and from then on, Firefox became my primary browser.
Later that same year, I noticed a typo while using Firefox and suggested a fix on Pontoon (I honestly can’t recall how I found Pontoon at the time). That small contribution marked the beginning of my journey as a Mozilla localizer. I believe many people’s localization journeys also start by correcting a single typo.
Working on Mozilla Products
Q: Which Mozilla projects do you enjoy working on the most, and why?
A: Firefox, absolutely. For one thing, it’s my favorite piece of software, which makes working on it personally meaningful. More importantly, Firefox has a massive Chinese user base, which gives me a strong sense of responsibility to provide the best possible language support for my fellow speakers. On top of that, Firefox’s mission as the last independent browser gives me extra motivation when working on its localization.
Aside from Firefox, Common Voice has been the most impactful project I’ve localized for Mozilla. It collects voices from a diverse range of speakers to build a publicly available voice dataset, which I think is especially valuable in this era. And honestly, working on the text for a voice-collection platform is a wonderful experience, isn’t it? 😀
Thunderbird is another project I find especially rewarding. It is popular on Linux, and localizing it means supporting many users who rely on it for everyday communication, which I consider vital work.
Q: How does regularly using these products influence how you approach localization?
A: Regular usage is essential for localization teams (like us) that lack dedicated LQA processes and personnel. Without routinely using the product, it’s easy to overlook issues that only become apparent in context, such as translations that don’t fit the context or layout problems.
Since we also lack a centralized channel to gather feedback from the broader community, we have to do our best to identify as many issues as we can ourselves. We also actively monitor social media and forums for user complaints related to localization. In addition, whenever I come across a screenshot of an unfamiliar interface, I take it as an opportunity to check for potential issues.
Community & Collaboration
Q: How does the Chinese localization community collaborate in practice?
A: In practice, besides myself, there is only one other active member on Pontoon for our locale. While the workload is still manageable, we do need to seriously think about recruiting new contributors and planning for succession to ensure sustainability.
That said, our community is larger than what you see on Pontoon alone. We have a localization group chat where many members stay connected. Although they may not actively contribute to Pontoon — some work on SUMO or MDN, some are regular users, while others are less active nowadays — I can always rely on them for insightful advice whenever I encounter tricky issues or need to make judgment calls. Oftentimes, we make collective decisions on key terminology and expressions to reflect community consensus.
Q: How do you coordinate translation, review, and testing when new strings appear?
A: Recently, our locale hit 60,000 strings — a milestone well worth celebrating. Completing the translation of such a massive volume has been a long-term effort, built on nearly two decades of steady, cumulative work by successive contributors. I’d like to take this opportunity to thank each and every one of them.
As for coordination, we don’t divide work by product — partly because all products already have a high completion level, and the number of products and new strings is still manageable. In practice, we treat untranslated strings a bit like Whac-A-Mole: whenever new strings appear, anyone available just steps in to translate them. Testing is also a duty we all share.
For review, we follow a cross-review principle. We avoid approving our own suggestions and instead leave them for peers to review. This helps reduce errors and encourages discussion, ensuring we arrive at the best possible translations.
Q: Did anyone mentor you when you joined the community, and how do you support new contributors today?
A: When I first joined Mozilla localization, I wasn’t familiar with the project’s practices or consensus. The locale manager 你我皆凡人 helped me greatly by introducing them. For several years, they were almost the only active proofreader for our locale, and I’d like to take this opportunity to pay tribute to their long-term dedication.
Today, when reviewing suggestions from newcomers, if a translation doesn’t yet meet the approval standard, I try my best to explain the issues through comments and encourage them to keep contributing, rather than simply rejecting their work — which could easily discourage them and dampen their enthusiasm.
Q: What do you think is most important for keeping the community sustainable over time?
A: It’s all about the people. Without people, there is no community. We need fresh blood to ensure we don’t face a succession crisis. At the moment, recruiting from within the Mozilla ecosystem (MDN or SUMO) is the most immediate approach, but I won’t give up on trying to draw in more people from the broader community.
Continuity of knowledge is also important. We mentor newcomers so they understand how the project works, along with its best practices and historical context. Documentation becomes necessary as time passes or the community grows; it ensures knowledge is preserved over time and prevents “institutional amnesia” as people come and go.
Background, Skills & Personal Lens
Q: What’s your background outside localization, and how does it shape your approach to translation?
A: I’m currently a student majoring in accounting. While accounting and software localization may seem worlds apart, I believe they share similar characteristics. The IFRS (International Financial Reporting Standards) identifies six qualitative characteristics of accounting information, and with a slight reinterpretation, I find that they also apply surprisingly well to localization and translation. For example:
Relevance: translations should help users use the product smoothly and as expected
Faithful representation: translations should reflect the original meaning and nuance, without being constrained by literal form
Verifiability: translations should be reasonable to any knowledgeable person
Timeliness: translations should be delivered promptly
Understandability: translations should be easy to comprehend
Comparability: translations should stay consistent with existing strings and industry standards
On a personal level, I developed qualities like prudence and precision through localization long before I started my degree, which gave me a head start in accounting. In turn, what I’ve learned through my studies has helped me perform even better in localization. It’s a somewhat interesting interplay.
Q: Besides translation, what else have you gained through localization?
A: I knew very little about Web technologies before I started localizing for Mozilla. Through working on Firefox localization, I gradually developed a solid understanding of Web technologies and gained deeper insight into how the Web works.
Fun Facts
Q: Any fun or unexpected facts you’d like to share about yourself?
A: My connection with Firefox began thanks to my uncle. One day, he borrowed my computer and complained that Firefox wasn’t installed — it had always been his go-to browser. So I decided to give it a try and installed it on my machine. That was how my journey with Firefox began.
I love watching anime, especially Bocchi the Rock! and the band Kessoku Band featured in the series. I also enjoy listening to Anisongs and Vocaloid music, particularly songs voiced by Hatsune Miku and Luo Tianyi. And while I enjoy watching football matches, I’m not very good at playing football myself!
This year I was lucky again and was able to attend FOSDEM. It turned out to be more of a social conference than a technical one for me: I had a bunch of really great conversations with peers and with users of Firefox. I was there to staff the Mozilla booth. The idea was to engage people and have them fill in a bingo card; in exchange they might go home with a T-shirt, a baseball cap, or a pair of socks. Most of the people I saw came by on Saturday afternoon and Sunday morning. Some complained about AI, but not as many as I was expecting. Explaining why, and that https://techcrunch.com/2026/02/02/firefox-will-soon-let-you-block-all-of-its-generative-ai-features/ would soon be available, helped them understand and conclude that they could keep Firefox as their main browser. Our sticker stock melted like snow in the sun. The people from mozilla.ai had some pretty interesting discussions with users who came by the booth.
When the FOSDEM schedule was published, I was excited to see that the Mozilla room had been renamed the web browser room. Inclusion done the right way, and the best way to push for an open web. That dev room was located in the room that historically served the Mozilla community back in 2004/2005/2006/2007 ... Unfortunately, I woke up 30 minutes past midnight on Saturday and was unable to get back to sleep. The sessions I had intended to watch fell exactly at the time when tiredness hit and I wanted to sleep. The same was true for the other room I was interested in: the BSD dev room.
Last but not least, as I had helped organize the Search dev room, a very nice recap was posted on LinkedIn. I was the MC in that room. It was a lot of fun and I learned a lot.
This year the conference was a social event for me. I met plenty of “old” and not-so-old friends. I counted 33 people, not counting my previous manager and her daughter, and I know I missed at least 3 more. I had very nice conversations with many of them; it really was a pleasure to meet and interact.
The highlight of this FOSDEM was seeing the Sun SPARCstation 4 on one of the stands.
Hello and welcome to another issue of This Week in Rust!
Rust is a programming language empowering everyone to build reliable and efficient software.
This is a weekly summary of its progress and community.
Want something mentioned? Tag us at
@thisweekinrust.bsky.social on Bluesky or
@ThisWeekinRust on mastodon.social, or
send us a pull request.
Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the
implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a
call-for-testing label to your RFC along with a comment providing testing instructions and/or
guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Overall a positive week for instruction counts (~1% improvement on
check/debug/opt/doc builds). Cycle counts and memory usage remain broadly
unchanged across the week though.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Please remember to add a link to the event too.
Email the Rust Community Team for access.
In C++, the muscle memory you develop over time is avoidant. You learn not to do certain things. It's a negative memory, not in a pejorative sense, but in the sense that you have to remember what not to do rather than what to do: a list of patterns to avoid, of traps to dodge. And this list keeps growing, because the language doesn't prevent you from falling into the traps, you just have to remember they exist.
In Rust, muscle memory is constructive. You learn patterns that are inherently correct. You don't have to remember what to avoid because the compiler won't let you do it. Instead of thinking "I must remember not to leave the door open", you learn to build a door that closes by itself.
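As one example of such a self-closing door in Rust, a lock guard releases its mutex automatically when it goes out of scope, so there is nothing to remember to avoid:

```rust
use std::sync::Mutex;

// A "door that closes by itself": the guard returned by lock() releases
// the mutex when it is dropped, so you cannot forget to unlock.
fn increment(counter: &Mutex<u32>) {
    let mut guard = counter.lock().unwrap(); // door opens
    *guard += 1;
} // guard dropped here: door closes on its own

fn main() {
    let counter = Mutex::new(0);
    increment(&counter);
    increment(&counter);
    assert_eq!(*counter.lock().unwrap(), 2);
}
```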
Since joining the Firefox Performance team as a Software Engineering Intern back in May, I’ve been working on improving performance profiles. Firefox developers need performance profiles that are readable, readily accessible, and automated. This makes it easier to identify performance regressions and bugs introduced by patches, as well as to diagnose and understand existing performance behavior. Let’s take a closer look at two notable improvements to our performance profiling pipelines that strive toward these goals:
Simpleperf Profiles for Firefox for Android Startup Tests (Bug 1969490)
With mobile performance being a major priority, we currently run Firefox for Android startup tests in CI with our mozperftest framework and use the Android CPU profiler Simpleperf to measure performance. However, the raw performance data produced by Simpleperf in these tests wasn’t human-readable. To address this, we added an automated pipeline in mozperftest that processes Simpleperf’s output into readable profiles that can be viewed in the Firefox Profiler.
To do this, we introduced two powerful toolchains to CI to process profiles: samply and Firefox Profiler’s symbolicator-cli.
samply is a cross-platform, Rust-based CPU profiler that, among other things, can convert performance data from profilers like Simpleperf to Firefox Profiler’s standard profile format and host a local symbol server for symbolication.
symbolicator-cli allows us to leverage Firefox Profiler’s backend and symbolicate the call stacks (i.e., resolve function addresses to their corresponding function names) from performance profiles using debug symbols hosted by samply.
This “toolchain approach” for symbolication is powerful. Updating our CI symbolication in-tree is as simple as bumping the Git revisions of the toolchains, allowing for easy updates to support the latest profile format version and rapid adoption of new Firefox Profiler features.
With these two toolchains, we updated mozperftest to be able to automatically produce symbolicated Simpleperf profiles when running tests. To use this feature with startup tests in CI, simply run:
./mach try fuzzy --full
and run any startup job suffixed with -profiling (e.g. perftest-android-hw-a55-aarch64-shippable-startup-fenix-newssite-applink-startup-profiling). The resulting Simpleperf profiles can immediately be inspected in the Firefox Profiler by clicking the “Open in Firefox Profiler” link next to the job’s profile artifact in Treeherder (below).
Viewing symbolicated Simpleperf profiles
Alternatively, you can select a mobile startup job in Treeherder and click the “Generate Performance Profile” button under the Performance tab to re-trigger the job with Simpleperf profiling.
Generating symbolicated Simpleperf profiles
Better Symbolication for Test Harnesses (Bug 1970961)
Firefox’s test harnesses — Raptor, Talos, Mochitest, and XPCShell — all use the mozgeckoprofiler module to generate and symbolicate profiles in CI. This is done using --gecko-profile for Raptor/Talos tests or --profiler for XPCShell/Mochitest tests.
However, mozgeckoprofiler’s symbolication implementation was outdated and did not support newer features of the Firefox Profiler such as inline call stacks, source view, and assembly view — features that are useful for debugging and pinpointing regressions.
Profile with no symbolication
Profile with outdated symbolication (no inline function support or source view)
Profile with updated symbolication (support for inline functions and source view)
We have now updated this symbolication pipeline with the same toolchain approach used in mozperftest. This patch not only allows for richer profiles with mozgeckoprofiler but also paves the way for a more unified and maintainable approach to symbolication across our Firefox test platforms.
What’s Next?
These patches are part of an ongoing effort to make performance profiling easier, more standardized, and more useful for Firefox developers. Here are a few related bugs to keep an eye on:
Bug 1978586: Raptor’s Speedometer 3 mobile tests can be more insightful if they provide performance profiles generated with Simpleperf. We should be able to leverage our mozgeckoprofiler patch for the symbolication step of this patch.
Bug 2010311: Automatically generating reports that compare symbolicated profiles from patch changes with what’s in-tree would make it much easier for engineers to spot sources of performance regressions (layout, garbage collection, just-in-time compilation, etc.).
If you were running into any problems last week opening links from other applications, specifically with Firefox being foregrounded but not opening the URL, this should now be fixed in Nightly and Beta (bug 2010535, fixed by Mossop). Please file a bug if you’ve updated your browser and the bug is still happening.
Dão & Moritz continued their work on the new separate search bar, adding an ‘x’ to clear the input, respecting the browser.search.openintab preference, and matching the search history behavior of the legacy bar. This new version of the separate search bar is enabled by default in Nightly.
Rob Wu investigated and fixed an issue that can prevent langpacks from being staged successfully as part of Firefox version updates (landed in 148, and will be included in a 147 dot release) – Bug 2006489
Greg Stoll introduced a proper localized strings mapping table for Add-on Gated Site Permissions, a change needed as part of the WebSerial DOM API work – Bug 1826747
WebExtensions Framework
Rachel Rusiecki contributed a nice cleanup of the WebExtensions internals by removing the extensions.manifestV3.enabled rollout pref – Bug 1804921
Emilio investigated and fixed a drag and drop issue hit by WebExtensions action popup pages, a regression introduced in Firefox 146 (by Bug 1933181) and fixed in Firefox 148 and 147 – Bug 2007274
WebExtension APIs
Piro (TreeStyleTab extension developer) contributed a fix for an unexpected browser.tabs.create rejection hit when openerTabId is the tab id of a discarded tab – Bug 1762249
Fixed an issue where the extension event page could be suspended while a downloads.download API call was waiting for user input through the file chooser – Bug 2005953, Bug 2005963
Fixed an issue hit by the tabs API (and TreeStyleTab as a side effect of that) on builds where sync-about-blank is enabled (currently only Nightly builds) – Bug 2004525
Fixed an issue where data set through browser.sessions.setTabValue was not preserved when the tab was moved between windows – Bug 2002643
Fixed an issue with declarativeNetRequest initialization at startup when one extension using declarativeNetRequest does not have any static DNR rules dataset declared in its manifest – Bug 2006233
Arai contributed changes needed to allow declarativeNetRequest rules to apply successfully to cached web request resources – Bug 1949623
AI Window
Assistant response markdown rendering with ProseMirror – Bug 2001504
Alexandra Borovova updated the reset behavior of the emulation.setGeolocationOverride and emulation.setScreenOrientationOverride commands to align with the spec changes. With this update, calling one of these commands to reset an override on, e.g., a browsing context resets only that override; if an override is also set for a user context related to that browsing context, that override is applied instead.
Alexandra Borovova fixed user prompts open and close events to reference the correct context ID in case prompts are being opened from iframes on desktop and Android.
eslint-env comments are being removed, as ESLint v9 does not support them (use eslint-file-globals.config.mjs instead). ESLint v10 (currently in RC) will raise errors for them.
More eslint-plugin-jsdoc rules have been enabled across the whole tree. These are the ones relating to valid-jsdoc. A few remain, but will need work by teams to fix the failures.
Linux users with new installs of Firefox were experiencing an issue where newtab was appearing blank (amongst other bugs). This appears to be related to content sandboxing and the XDG base directory support that was recently added for Linux builds. Emilio Cobos Álvarez is working on a fix in this bug.
In the meantime, we’ve disabled all train-hop XPIs on Beta and Release for Linux builds. They will fall back to the built-in versions of New Tab instead.
Finn has been working through his onboarding bug list:
bug 1947638, switching about:preferences to open the profile selector window in a dialog, not a subdialog
bug 1950247, improve a11y by making headings on the edit profile page actually headings
bug 2001276, excluding the ignoredSharedPrefs list when creating a new profile
Mossop also continued making behavior more consistent across the toolkit profile service and the selectable profile service, fixed bug 2004345 – ensuring that, if a toolkit profile has a selectable profile group, we don’t allow that toolkit profile to be deleted from about:profiles. Instead we warn the user.
Search and Navigation
Address Bar
Moritz fixed a multi-second jank when dragging large text over tabs or the address bar.
Drew and Daisuke are starting the work on standardizing the UI for the various address bar result types (in-flight: 2010176, 2010177, 2010184) and their result menus (2010168, 2010171, 2010172).
Advance notice that this week we are planning on landing a change to the search service that converts it from an XPCOM service to a JavaScript singleton. This is part of the work to remove the XPCOM interfaces, as the service hasn’t been accessed from C++ for a while.
This will help reduce development overhead of needing to do full builds for interface changes.
Other interesting fixes
The browser.urlbar.switchTabs.searchAllContainers preference has been removed.
The ESC key no longer saves modified data in the Edit bookmark dialog when it is accessed from the star icon.
UX Fundamentals
We’re disabling the felt-privacy error pages in Firefox 148 while we sort out a few small issues. We’re aiming to bring them back in Firefox 149 with the new error page UI for all errors.
AI controls showing the option to block AI enhancements.
AI is changing the web, and people want very different things from it. We’ve heard from many who want nothing to do with AI. We’ve also heard from others who want AI tools that are genuinely useful. Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls.
Starting with Firefox 148, which rolls out on Feb. 24, you’ll find a new AI controls section within the desktop browser settings. It provides a single place to block current and future generative AI features in Firefox. You can also review and manage individual AI features if you choose to use them. This lets you use Firefox without AI while we continue to build AI features for those who want them.
One place to manage your AI preferences
Firefox offers AI features to enhance everyday browsing. These features are optional, and they’re easy to turn on or off.
At launch, AI controls let you manage these features individually:
Translations, which help you browse the web in your preferred language.
Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
AI-enhanced tab grouping, which suggests related tabs and group names.
Link previews, which show key points before you open a link.
AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.
You can choose to use some of these and not others. If you don’t want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle. When it’s toggled on, you won’t see pop-ups or reminders to use existing or upcoming AI features.
Once you set your AI preferences in Firefox, they stay in place across updates. You can also change them whenever you want.
Firefox AI controls overview.
The browser that gives you a say
AI controls give you more say in how you move across the web.
We believe choice is more important than ever as AI becomes a part of people’s browsing experiences. What matters to us is giving people control, no matter how they feel about AI.
If you’d like to try AI controls early, they’ll be available first in Firefox Nightly. We’d love to hear what you think on Mozilla Connect.
Performance issues in Python often don’t look like bugs.
They don’t crash, they don’t fail tests, and they don’t stand out in code review.
They just quietly turn into cliffs when the input size grows.
This post is about one such performance fix in transformers, what it revealed,
and a small experiment that came out of it: LoopSleuth, a local LLM-powered
complexity scanner.
It Started With a Tokenizer Converter
While working on transformers, I fixed a performance issue in
convert_slow_tokenizer.py
that took a tokenizer conversion step from 4 minutes down to ~1 second
when running on very large vocabularies (100k+ tokens).
The Test That Surfaced It
This started when CI flagged test_voxtral_tokenizer_converts_from_tekken as
the slowest test in the suite.
The test loads mistralai/Voxtral-Mini-3B-2507 and forces the fallback path to
TokenizersBackend.
That fallback triggers the slow→fast tokenizer conversion step — and that
conversion was doing repeated .index() lookups inside a sort key, turning
large vocabularies into a performance cliff.
The root cause was a classic scaling trap.
The Original Pattern
# BEFORE (simplified excerpt)
for rank, token in enumerate(bpe_ranks):
    local = sorted(
        local,
        key=lambda x: (
            bpe_ranks.index(x[0]),
            bpe_ranks.index(x[1]),
        ),
    )
(Simplified excerpt — the key issue is the repeated .index() inside the sort
key.)
At first glance this looks harmless.
But list.index() is O(n).
And the real killer is that it happens inside a sorted() key function.
Sorting local means computing the key for every element, and each key performs
two linear searches through bpe_ranks: sorted() calls the key function once
per element (O(m)), and each key calls .index() twice (O(n)), so the total
becomes O(m·n) — often a scaling trap when m and n are both large.
The Fix
# AFTER (reduces key computation from O(n) to O(1))
token_to_rank = {token: rank for rank, token in enumerate(bpe_ranks)}
for rank, token in enumerate(bpe_ranks):
    local = sorted(
        local,
        key=lambda x: (
            token_to_rank[x[0]],
            token_to_rank[x[1]],
        ),
    )
The optimization is simple:
replace repeated linear searches with constant-time dictionary lookups
This doesn’t eliminate all sorting work (the outer loop still sorts repeatedly),
but it removes the quadratic lookup cost that was dominating runtime.
The takeaway wasn’t just “use dicts” — it was that asymptotic traps often hide
in perfectly valid Python idioms.
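To make the gap concrete, here is a small micro-benchmark sketch of my own (synthetic data, not the actual transformers code) that compares the two sort-key strategies on a toy vocabulary:

```python
import timeit

# Hypothetical micro-benchmark: repeated list.index() inside a sort key
# versus a precomputed dict lookup, on a synthetic "vocabulary".
n = 20_000
bpe_ranks = [f"tok{i}" for i in range(n)]
pairs = [(bpe_ranks[i], bpe_ranks[n - 1 - i]) for i in range(0, n, 100)]

def with_index():
    # Each key computation performs two O(n) scans of bpe_ranks.
    return sorted(pairs, key=lambda x: (bpe_ranks.index(x[0]), bpe_ranks.index(x[1])))

token_to_rank = {token: rank for rank, token in enumerate(bpe_ranks)}

def with_dict():
    # Each key computation is two O(1) dictionary lookups.
    return sorted(pairs, key=lambda x: (token_to_rank[x[0]], token_to_rank[x[1]]))

print("index():", timeit.timeit(with_index, number=1))
print("dict:   ", timeit.timeit(with_dict, number=1))
```

Both functions produce the same ordering; only the key computation cost differs, and the gap widens as n grows.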
Could This Have Been Caught Automatically?
After landing that fix, I kept wondering:
How many other places in the codebase have the exact same pattern?
This wasn’t a correctness issue:
everything worked
tests passed
the slowdown only appeared at scale
And none of the linting tools I normally rely on flagged it.
Ruff’s PERF rules catch obvious constructs like unnecessary list copies, but
they don’t reason about .index() inside a sort key.
In theory, a linter could detect patterns like:
repeated .index() inside loops
.index() inside sort keys
nested iteration over growing structures
But most rule-based linters avoid making strong claims about asymptotic
complexity.
That’s a reasonable trade-off: linters are fast, deterministic, and low-noise —
but they often miss scaling issues unless you add very specific custom rules.
This is where I started wondering whether an LLM could help fill the gap.
Scanning Transformers With Claude
As an experiment, I ran Claude Code over the repository with one question:
Find quadratic complexity patterns similar to the tokenizer converter bug.
The result was surprisingly useful.
It scanned ~3,000 Python functions across the codebase in a few minutes and
flagged ~20 instances of the same anti-pattern:
.index() inside loops
.index() inside sort keys
nested iteration patterns with superlinear blow-up at scale
About half were genuine hot-path candidates; others were technically quadratic
but not performance-critical in practice.
Instead of running a massive model in the cloud, I wanted to know:
could a small local model catch these patterns?
could we build something closer to a linter?
could we automate complexity review?
That’s how I ended up hacking together a small prototype I called LoopSleuth.
Why Rust + llama.cpp?
My first instinct was to build this as a Python script on top of
transformers itself.
But I wanted this experiment to be:
fast startup time
easy CI binary distribution
no Python runtime dependency
easy to integrate into tooling
A single static binary makes it easy to drop into CI, like Ruff.
And honestly, I also wanted an excuse to explore the Rust ecosystem that powers
tools like Ruff and Ty.
So LoopSleuth is written in Rust and uses:
rustpython-parser to extract functions
llama.cpp bindings for local inference
In practice, a small model like Qwen2.5-Coder 3B (Q4) already gives
surprisingly good results for this narrow task.
LoopSleuth: A Small Complexity Scanner
LoopSleuth is a CLI tool that:
parses Python modules
extracts functions (each function is analyzed in isolation: signature + body, without full module context)
sends each function to a local LLM
asks a focused question:
Does this contain patterns that may scale quadratically?
If the model answers “QUADRATIC”, it also asks for an optimization suggestion.
This framing treats complexity as a heuristic warning (like a linter) rather
than a mathematical proof.
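The per-function extraction step can be approximated in a few lines with Python's standard ast module (my own sketch; LoopSleuth itself uses rustpython-parser in Rust):

```python
import ast

def extract_functions(source: str) -> list[str]:
    """Return the source text of each function in a module, so each one
    can be analyzed in isolation (signature + body, no module context)."""
    tree = ast.parse(source)
    return [
        ast.get_source_segment(source, node)
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]

module = "def f(xs):\n    return [xs.index(x) for x in xs]\n"
print(extract_functions(module)[0])
```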
How It Works
The prompt is deliberately simple and constrained:
Classify this function as OK or QUADRATIC.
Look for list.index(), nested loops, or linear operations inside loops.
Return only one word: OK or QUADRATIC.
This makes the model focus on structural patterns rather than trying to perform
full dataflow analysis, and the constrained output format makes parsing reliable.
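The classification step can be sketched like this (a hedged approximation: `generate` stands in for a local llama.cpp inference call, not the real binding API):

```python
# Hypothetical sketch of the constrained classification step.
PROMPT = (
    "Classify this function as OK or QUADRATIC.\n"
    "Look for list.index(), nested loops, or linear operations inside loops.\n"
    "Return only one word: OK or QUADRATIC.\n\n"
)

def classify(code: str, generate) -> str:
    raw = generate(PROMPT + code).strip().upper()
    # Constrained output: anything that isn't clearly QUADRATIC is treated
    # as OK, which keeps parsing trivial and failure-safe.
    return "QUADRATIC" if "QUADRATIC" in raw else "OK"

# Example with a stubbed model:
print(classify("def f(xs):\n    return [xs.index(x) for x in xs]",
               lambda prompt: " quadratic "))  # → QUADRATIC
```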
Because it’s a CLI, it can be used in a few practical ways:
as a local complexity scanner during development
as a lightweight pre-pass before calling a large cloud model (reducing token usage)
as a GitHub Action on pull requests to catch patches that introduce quadratic behavior
Why Not Just Use Existing Linters?
Before building anything, I tried the usual suspects.
Tools like Ruff, Pylint, and performance-focused plugins can catch a lot:
Pylint warns about string concatenation in loops (consider-using-join)
Ruff has PERF rules inspired by Perflint
But none of the linters I tried really caught the pattern that triggered this
whole experiment:
repeated .index() lookups inside loops
.index() inside sort key functions
nested iteration patterns that only become problematic at scale
These tools are excellent at enforcing specific rules, but they generally don’t
try to answer:
“Does this function scale quadratically with input size?”
That gap is what made the LLM approach interesting to explore.
A Quick Comparison
One thing I wanted to sanity-check early was whether existing linters would
catch the same issues.
So I built a small test file with a handful of intentionally quadratic
functions (nested loops, .remove() in loops, string concatenation, etc.) and
ran:
LoopSleuth
Ruff (with --select ALL)
Pylint
The results were pretty stark:
| Tool | Detects .index() in loop? | Reports complexity? |
| --- | --- | --- |
| Ruff | ❌ | ❌ |
| Pylint | ❌ | ❌ |
| LoopSleuth | ✅ | ✅ (heuristic) |
LoopSleuth flagged all 5 quadratic functions, while Ruff and Pylint flagged
plenty of style and quality issues but neither directly reported algorithmic
complexity problems.
This isn’t really a criticism of those tools — they’re simply not designed for
that job.
To be clear, there may be ways to approximate some of these checks with custom
rules or plugins, and linters remain the first line of defense for code quality.
LoopSleuth is just exploring a different axis: scaling behavior.
Still an Experiment
LoopSleuth is not a replacement for linters.
It’s a small experiment.
Traditional linters like Ruff or Pylint excel at catching specific code smells.
But most scaling bugs don’t come from a single construct.
They come from composition:
nested iteration
repeated membership checks
linear operations inside loops
Rule-based linters struggle to infer:
“this .index() is inside a hot path”
“this loop is over the same input size”
“this becomes O(n²) at scale”
LLMs, even small ones, can often reason about these patterns more directly.
That said, LoopSleuth runs against isolated Python functions one by one, which
means it doesn’t yet understand:
cross-function context
runtime sizes
whether a loop is actually hot in practice
Limitations
Like any heuristic tool, LoopSleuth has trade-offs:
False positives:
small fixed-size loops that never scale
code in non-hot paths
patterns that look quadratic but have early exits
False negatives:
complexity hidden across function calls
indirect iteration patterns
subtle algorithm choices
The accuracy depends heavily on prompt design and model choice.
Important: LoopSleuth is a screening tool, not a replacement for profiling
or benchmarking. It flags patterns that may cause issues, but only real
measurements can confirm actual performance problems.
More broadly, I’m interested in whether this approach can extend beyond
complexity analysis to other classes of performance issues.
One direction would be to build a small library of prompts for:
repeated tensor conversions
hidden CPU/GPU sync points
accidental re-tokenization
And in an ideal world, we could fine-tune a small model (like Qwen2.5-Coder 3B)
to specialize on this kind of performance reasoning.
What’s Next
If this experiment proves useful, here are some directions worth exploring:
AST-based prefiltering to skip obviously safe functions
Caching inference results to avoid re-analyzing unchanged code
Training on real perf bugs from issue trackers and PRs
GitHub Actions integration to catch regressions in CI
Right now LoopSleuth is a proof of concept, but these extensions could make it
practical for real codebases.
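The AST-based prefiltering idea, for instance, could look something like this sketch (my own construction, not LoopSleuth code): cheaply skip functions with no loop construct before spending any LLM inference on them.

```python
import ast

# Loop-like constructs worth sending to the model; everything else is
# skipped, since a loop-free function can't be quadratic on its own.
LOOP_NODES = (ast.For, ast.AsyncFor, ast.While, ast.comprehension)

def might_scale(func_source: str) -> bool:
    """True if the function contains any loop-like construct,
    including comprehensions."""
    return any(isinstance(node, LOOP_NODES)
               for node in ast.walk(ast.parse(func_source)))

print(might_scale("def f(xs):\n    for x in xs:\n        xs.index(x)\n"))  # → True
print(might_scale("def g(x):\n    return x + 1\n"))  # → False
```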
Conclusion
LoopSleuth started as a simple question:
Could we catch quadratic complexity bugs automatically?
The answer is: not perfectly.
But even a small local model can spot surprising amounts of hidden O(n²)
behavior.
And as codebases grow — especially ones like transformers — performance traps
scale with them.
LoopSleuth is a small experiment toward making complexity visible earlier.
If you have examples of hidden scaling bugs or want to contribute detection
patterns, I’d love to collect them as test cases. Feel free to try it locally or
open an issue.
Welcome to the Q4 2025 edition of the Firefox Security & Privacy Newsletter.
Security and privacy are foundational to Mozilla’s manifesto and central to how we build Firefox. In this edition, we highlight key security and privacy work from Q4 2025, organized into the following areas:
Firefox Product Security & Privacy — new security and privacy features and integrations in Firefox
Core Security — platform-level security and hardening efforts
Community Engagement — updates from our security research and bug bounty community
Web Security & Standards — advancements that help websites better protect their users from online threats
Preface
Note: Some of the bugs linked below might not be accessible to the general public and are restricted to specific work groups. We de-restrict fixed security bugs after a grace period, once the majority of our user population has received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.
Firefox Product Security & Privacy
Functional Privacy. Firefox empowers users with control and choice - including the option for maximum privacy protections. Yet, our commitment lies in targeting online tracking by default in ways that ensure the web continues to function accurately and smoothly. With a focus on this important balance, our protections have blocked more than 1 trillion tracking attempts, while reported site compatibility issues have been driven down to an all-time low of 500, compared to 1,100 in Q1 of 2025.
Improved page redirect prevention: Firefox now blocks top-level redirects from iframes. This new prevention mechanism aligns Firefox behaviour with other browsers and protects users against so-called malvertising attacks.
Improved protections against navigational cross-site tracking: Navigational tracking is used to track users across different websites using browser navigations. Bounce tracking is a type of navigational tracking that “bounces” user navigations through an intermediary tracking site. Firefox’s Bounce Tracking Protection already protects against this tracking vector. And Firefox 145 uplevels this by also eliminating cache access for these intermediate redirect pages.
Global Privacy Control (GPC): Following Firefox’s lead as the first major browser to do this, Thunderbird has now also replaced the legacy “Do Not Track” (DNT) signal with Global Privacy Control (GPC). This new control has the much-needed legal footing to clearly communicate a user’s “do-not-sell-or-share preference,” and other browsers are expected to follow soon.
Warning prompts for digital identity requests: When a webpage attempts to open a digital wallet app using custom URL schemes such as openid4vp, mdoc, mdoc-openid4vp, or haip, Firefox on Desktop and Android (Firefox 145 and newer) now displays clear warning prompts that explain what’s happening and give users control.
Core Security
Certificate Transparency (CT) on Android: Certificate Transparency enables rapid detection of unauthorized or fraudulent SSL/TLS certificates. CT has been available in Firefox Desktop since Firefox 136 and is now also available on Android starting with Firefox 145.
Post-Quantum Cryptography (ML-KEM): ML-KEM is a next-generation public-key cryptosystem designed to resist attacks from large-scale quantum computers. Post-quantum (PQ) cryptography with ML-KEM support shipped in Firefox 132 for Desktop. Support is now also available on Android starting with Firefox 145 and in WebRTC starting with Firefox 146.
Community Engagement
Mozilla and Firefox at the 39th Chaos Communication Congress (39C3): Teams from Firefox Security, Privacy, Networking, Thunderbird, and Public Policy collaborated to raise awareness of their work and gather direct community feedback. A clear highlight was the popularity of our swag, with our folks distributing 1,000 Fox Ears. The high level of engagement was further sustained by a dedicated community meetup and an impromptu AMA session, which drew attention from over 100 people.
Firefox Bug Bounty Hall of Fame: We just updated the Hall of Fame, which credits all of the skillful security researchers that strive to keep Firefox secure as of Q4 2025. If you also want to contribute to Firefox security, please look at our Bug Bounty pages.
Web Security & Standards
Integrity-Policy: Firefox 145 has added support for the Integrity-Policy response header. The header allows websites to ensure that only scripts with an integrity attribute will load. Errors will be logged to the console, with support for the Reporting API coming in early 2026.
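As an illustration (treat the exact field value as an assumption based on the current draft syntax), a site can block scripts that lack integrity metadata with a response header along these lines:

```
Integrity-Policy: blocked-destinations=(script)
```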
Compressed Elliptic Curve Points in WebCrypto: Firefox 146 adds support for compressed elliptic curve points in WebCrypto. This reduces public key sizes by nearly half, saving bandwidth and storage, while still allowing the full point to be reconstructed mathematically. With this addition, Firefox now leads in WebCrypto web platform test coverage.
Going Forward
As a Firefox user, you automatically benefit from the security and privacy improvements described above through Firefox’s regular automatic updates. If you’re not using Firefox yet, you can download it to enjoy a fast, secure browsing experience — while supporting Mozilla’s mission of a healthy, safe, and accessible web for everyone.
We’d like to thank everyone who helps make Firefox and the open web more secure and privacy-respecting.
Working on the mobile Firefox team gives you the opportunity to touch many different parts of the browser space. You often need to test the interaction between web content and the application's integration with another component: say, for example, a site registering for a WebPush subscription and Firefox using Firebase Cloud Messaging to deliver the encrypted message to the end user. Hunting around for an example to validate that everything is fine and dandy takes time.
Sometimes a simple test site for your use case is helpful for initial validation or comparison against other browsers.
Below is a list of tests that I've used in the past (in no particular order):
Push notifications require a server to send a notification to the client (not the same as a WebNotification), so you can use this WebPush test site for validating just that.
There are Too Many™ different prompt and input element types. The MDN docs have the best collection of all of them.
Forms and Autocomplete
There are various form types and various heuristics to trigger completion options, so they deserve their own section. The more (test sites) the merrier!
Sign-up and login forms behave differently, so they are handy to test separately. For example, autofilling a generated password is useful on a registration form but not on a login one.
Make your own
If you need to make your own, try to write out the code yourself so you can understand the reduced test case. If it's not straightforward, try using the Testcase Reducer by Thomas Wisniewski.
Comments
With an account on the Fediverse or Mastodon, you can respond to this post. Since Mastodon is decentralized, you can use your existing account hosted by another Mastodon server or compatible platform if you don't have an account on this one. Known non-private replies are displayed below.
AI is here, and has started to define how we search, create, communicate — and how the web itself works. Some of you love AI, but want it to work better for yourselves and society. Some of you hate it, and don’t want any of it.
We get it.
We also know, as Mozilla, that the future is being decided now. The big tech players are racing to lock down and control AI, and make sure it works on their terms, not ours.
Updates on what’s new and coming with our core products, Firefox and Thunderbird.
A look at how Mozilla is investing in open source AI and privacy-preserving tech.
A snapshot of our financials, and how we allocate resources to balance mission and money.
Stories from people across Mozilla and our community who are building tools, products, and movements that push AI in a better direction
And, a commitment to giving you a choice in everything we do — including the option to say no to AI altogether.
All of this is guided by Mozilla’s double bottom line: advancing the public interest and building sustainable businesses. This model lets us invest patiently, say no to extractive approaches, and support ecosystems that would otherwise struggle to exist.
A vision for what comes next
The future of AI — and the future of the web — is ours to define. We want that future to be one where humanity thrives and technology helps out.
If you believe the future of AI should be human-centered, transparent, and open, we invite you to explore the report, share with your community and build that future with us.
Huge thanks to :arai for working on this feature! It’s currently not enabled by default but will be soon. It can be enabled through window.toggleDarkMode().
[arai-a] Add a menu to copy the Marker Table as text (#5732)
[arai-a] Do not apply sticky tooltip on double click (#5754)
[Markus Stange] Allow seeing different assembly code for the same function (#5349)
[fatadel] Align double-click behavior of stack chart with flame graph (#5782)
[Markus Stange] Add a Focus Self transform (#5774)
[Markus Stange] Fix “scroll to hotspot” functionality in the source view + assembly view (#5759)
[Nazım Can Altınova] Enable the Turkish locale in production (#5786)
Who will build the next version of the web? Mozilla wants to make it more likely that it’s you. We are committing time and resources to bring experienced builders into Mozilla for a short, programmed period, to work with our New Products leaders to build tools and products for the next version of the web.
A different program from a different kind of company
Our mission at Mozilla is to ensure the internet is a global public resource, open and accessible to all. We know that there are a lot of gifted, experienced and thoughtful technologists, designers, and builders who care as deeply about the internet as we do – but who seek a different environment for exploring what’s possible than they might find across the rest of the tech industry.
Pioneers is intentionally structured to make it possible for those who don’t typically get the opportunity to create new products to participate. The program is paid, flexible (i.e. you can do it part-time if needed), and bounded. We’re not asking you to gamble your livelihood in order to explore how we can improve the internet.
This matters to me
My own career advanced the most dramatically in moments when change was piling on top of change and most people couldn’t grasp the compounding effects of these shifts. That’s why I stepped up to start an independent blogging company back in 2002 (Gizmodo) and again in 2004 (Engadget).
It’s also why, a lifetime later, I joined Mozilla to lead New Products, where I’ve had the good fortune of supporting the development of meaningful new Mozilla tools like Solo, Tabstack, 0DIN, and an enterprise version of Firefox.
Changing the game
We’ve designed Pioneers to make space for technologists — professionals comfortable working across code, product, and systems — to collaborate with Mozilla on foundational ideas for AI and the web in a way that reflects these shared values.
We’re looking for people to work with; this is not a contest for ideas, and you don’t apply with a pitch deck. Our vision:
Pioneers are paid. Participants receive compensation for their time and work.
It’s flexible, designed so participants can be in the program and continue to work on existing commitments. You don’t have to put your life on hold.
It’s hands-on. Builders work closely with Mozilla leaders to prototype and pressure-test concepts.
It’s bounded. The program is time-limited and focused, with clear expectations on both sides.
It’s real. Some ideas will move forward inside Mozilla. Some will not – and they’ll still be valuable. If it makes sense, there will be an opportunity for you to join Mozilla full-time to bring your concept to market.
Applications are open Monday, Jan. 26 and close Monday, Feb. 16, 2026.
Pioneers isn’t an accelerator, and it isn’t a traditional residency. It’s a way to explore foundational ideas for AI and the web in a high-trust environment, with the possibility of continuing that work at Mozilla.
If this sounds like the kind of work you want to do, we want to hear from you. Hopefully, by reading to the end of this post, you’re either thinking of applying yourself — or know someone who should. I encourage you to check out (and share) Mozilla Pioneers, thanks!
Shout-out to new contributor Lorenz A, who fixed almost 70 bugs over the past few weeks! Most of this work was modernizing some of our DevTools code to use ES6 classes (example)
Split View has been enabled by default in Nightly! You can right click on a tab to add it to a split view, and from there select the other tab you’d like to view in the split. Or, multi-select 2 tabs with Ctrl/Cmd, and choose “Open in Split View” from the tab context menu
@rejects for indicating that an async (or promise-returning) function may reject. This is not standard JSDoc, and TypeScript doesn’t have an equivalent, so for now it gives us a way to at least document these expectations.
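As a sketch of how the tag reads in practice, here is a hypothetical helper documented with @rejects (the function, its name, and its behavior are illustrative, not actual Firefox code):

```javascript
/**
 * Looks up a stored preference value.
 *
 * @param {string} name
 *        The preference to look up.
 * @returns {Promise<string>}
 *          Resolves with the stored value.
 * @rejects {Error}
 *          If no preference with that name exists.
 */
async function getPref(name) {
  // Toy backing store, just for illustration.
  const store = { theme: "dark" };
  if (!(name in store)) {
    // This is the failure mode the @rejects tag documents above.
    throw new Error(`Unknown pref: ${name}`);
  }
  return store[name];
}
```

The value is purely documentary: callers can see at a glance that `await getPref(...)` needs rejection handling, even though no tooling enforces it.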
Quick update this week – OS Integration intern Nishu is traveling a long road to add support for storing profiles in the secure macOS App Group container (bug 1932976); over the break she fixed
Daisuke fixed multiple address bar bugs, including broken “switch to [tab group]” behaviour, persisted search terms, and a missing unified search button in private new tabs (2002936, 1968218, 1961568)
Jeremy Swinarton aligned the tab note editor to spec in Tab note content textarea spec, refining textarea sizing, focus/blur save behavior, and keyboard shortcuts for consistent editing and better a11y across platforms.
Stephen Thompson added a one-click entry point in hover previews via Add note button to tab hover preview, surfacing Tab Notes in the preview tooltip (behind notes and hover-preview prefs) with full keyboard focusability and theme-aware iconography.
Stephen Thompson hooked History API updates in Update canonical URL for tab note on pushState to recompute the canonical URL on pushState/replaceState/popstate, preventing stale or misplaced notes during SPA navigations.
Last year brought a wealth of new features and fixes to Firefox on Linux. Besides numerous improvements and bug fixes, I want to highlight some major achievements: HDR video playback support, reworked rendering for fractionally scaled displays, and asynchronous rendering implementation. All this progress was enabled by advances in the Wayland compositor ecosystem, with new features implemented by Mutter and KWin.
HDR
The most significant news on the Wayland scene is HDR support, tracked by Bug 1642854. It’s disabled by default but can be enabled in recent Wayland compositors using the gfx.wayland.hdr preference at about:config (or by gfx.wayland.hdr.force-enabled if you don’t have an HDR display).
HDR mode uses a completely different rendering path, similar to the rendering used on Windows and macOS. It’s called native rendering or composited rendering, and it places specific application layers directly into the Wayland compositor as subsurfaces.
The first implementation was done by Robert Mader (presented at FOSDEM), and I unified the implementation for HDR and non-HDR rendering paths as a new WaylandSurface object.
The Firefox application window is actually composited from multiple subsurfaces layered together. This design allows HDR content like video frames to be sent directly to the screen while the rest of the application (controls and HTML page) remains in SDR mode. It also enables power-efficient rendering when video frames are decoded on the graphics card and sent directly to the screen (zero-copy playback). In fullscreen mode, this rendering is similar to mpv or mplayer playback and uses minimal power resources.
I also received valuable feedback from AMD engineers who suggested various improvements to HDR playback. We removed unnecessary texture creation over decoded video frames (they’re now displayed directly as wl_buffers without any GL operations) and implemented wl_buffer recycling as mpv does.
For HDR itself (since composited rendering is available for any video playback), Firefox on Wayland uses the color-management-v1 protocol to display HDR content on screen, along with the BT.2020 video color space and PQ color transfer function. It uses 10-bit color, so you need VP9 profile 2 to decode it in hardware. Firefox also implements software decoding and direct upload to dmabuf frames as a fallback.
The basic HDR rendering implementation is complete, and we’re now in the testing and bug-fixing phase. Layered rendering is quite tricky as it involves rapid wl_surface mapping/unmapping and quick wl_buffer switches, which are difficult to handle properly. HDR rendering of scaled surfaces is still missing—we need fractional-scale-v2 for this (see below), which allows positioning scaled subsurfaces directly in device pixels. We also need to test composited/layered rendering for regular web page rendering to ensure it doesn’t drain your battery. You’re very welcome to test it and report any bugs you find.
Fractional scale
The next major work was done for fractional scale rendering, which shipped in Firefox 147.0. We updated the rendering pipeline and widget sizing to support fractionally scaled displays (scales like 125%, etc.). This required reworking the widget size code to strictly upscale window/surface sizes and coordinates and never downscale them, as downscaling introduces rounding errors.
Another step was identifying the correct rounding algorithm for Wayland subsurfaces and implementing it. Wayland doesn’t define rounding for subsurfaces, only for toplevel windows, so we’re in a gray area here. I was directed to Stable rounding by Michel Daenzer. It’s used by Mutter and Sway, so Firefox implements it for those two compositors while using a different implementation for KWin. This may be updated to use the fractional-scale-v2 protocol when it becomes available.
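The core idea behind edge-based (“stable”) rounding can be sketched as follows. This is a minimal illustration of the general technique, not Firefox’s actual implementation:

```javascript
// Sketch of edge-based rounding for fractional scaling (illustrative,
// not Firefox code). Rounding each edge position, rather than the
// width itself, keeps adjacent surfaces free of gaps and overlaps.
function scaleRect(x, width, scale) {
  const left = Math.round(x * scale);
  const right = Math.round((x + width) * scale);
  // Width is derived from the rounded edges, so two rectangles that
  // share an edge in logical pixels still share it in device pixels.
  return { left, width: right - left };
}

// Two adjacent 100-logical-pixel rectangles at 125% scale:
const a = scaleRect(0, 100, 1.25);   // left 0,   width 125
const b = scaleRect(100, 100, 1.25); // left 125, width 125
```

Rounding widths independently (e.g. `Math.round(33 * 1.25)` for each of two adjacent 33-pixel rectangles) can accumulate error and open one-pixel seams; rounding the shared edge once guarantees both neighbors agree on it.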
Fractional scaling is enabled by default, and you should see crisp and clear output regardless of your desktop environment or screen scale.
Asynchronous rendering
Historically, Firefox disabled and re-enabled the rendering pipeline for scale changes, window create/destroy events, and hide/show sequences. This stems from Wayland’s architecture, where a Wayland surface is deleted when a window becomes invisible or is submitted to the compositor with mismatched size/scale (e.g., 111 pixels wide at 200% scale).
Such rendering disruptions cause issues with multi-threaded rendering—they need to be synchronized among threads, and we must ensure surfaces with the wrong scale aren’t sent to the screen, as this leads to application crashes due to protocol errors.
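To illustrate the mismatch described above with a hedged sketch (not actual Wayland code): with the legacy integer wl_buffer scale, the submitted buffer’s pixel size must divide evenly by the scale, so an odd width at 200% scale is a fatal protocol error, whereas wp_viewport decouples buffer size from displayed size.

```javascript
// Illustrative validity check, mirroring the legacy integer
// buffer-scale rule: buffer dimensions must be exact multiples of
// the scale, or the compositor raises a protocol error.
function bufferScaleIsValid(bufferWidth, bufferHeight, scale) {
  return bufferWidth % scale === 0 && bufferHeight % scale === 0;
}

bufferScaleIsValid(222, 300, 2); // → true  (111 logical px at 200%)
bufferScaleIsValid(111, 300, 2); // → false (111 px cannot be 200%-scaled)
```
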
Firefox 149.0 (recent nightly) has a reworked Wayland painting pipeline (Bug 1739232) for both EGL and software rendering. Scale management was moved from wl_buffer fixed scale to wp_viewport, which doesn’t cause protocol errors when size/scale doesn’t match (producing only blurred output instead of crashes).
We also use a clever technique: the rendering wl_surface / wl_buffer / EGLWindow is created right after window creation and before it’s shown, allowing us to paint to it offscreen. When a window becomes visible, we only attach the wl_surface as a subsurface (making it visible) and remove the attachment when it’s hidden. This allows us to keep painting and updating the backbuffer regardless of the actual window status, and the synchronized calls can be removed.
This brings speed improvements when windows are opened and closed, and Linux rendering is now synchronized with the Windows and macOS implementations.
… and more
Other improvements include a screen lock update for audio playback, which allows the screen to dim but prevents sleep when audio is playing. We also added asynchronous Wayland object management to ensure we cleanly remove Wayland objects without pending callbacks, along with various stability fixes.
And there are even more challenges waiting for us Firefox Linux hackers:
Wayland session restore (session-restore-v1) to restore Firefox windows to the correct workspace and position.
Implement drag and drop for the Firefox main window, and possibly add a custom Wayland drag and drop handler to avoid Gtk3 limitations and race conditions.
Utilize the fractional-scale-v2 protocol when it becomes available.
Investigate using xdg-positioner directly instead of Gtk3 widget positioning to better handle popups.
Vulkan video support via the ffmpeg decoder to enable hardware decoding on NVIDIA hardware.
And of course, we should plan properly before we even start. Ready, Scrum, Go!
From designers to writers, multi-media producers and more — if you perform creative work on a computer there’s a good chance you can find a browser extension to improve your process. Here’s a mix of practical Firefox extensions for a wide spectrum of creative cases…
Extensions for visual artists, animators & designers
Awesome Screenshot & Screen Recorder
There are a lot of screenshot and recording tools out there, but few offer the sweet combination of intuitive control and a deep feature set like Awesome Screenshot & Screen Recorder.
An ideal tool if you do a lot of screen recording for things like tutorials, the extension also integrates with your computer’s microphone should you need a voice component.
The easily accessible pop-up menu puts you in control of everything, including the screenshot feature (full page, selected area, or just the visible part). You can also annotate screenshots with text and graphics, blur unwanted images, highlight sections, and more.
Save and share everything with just a couple quick mouse clicks.
Image Max URL
Find a great image online, but it’s too small or the resolution is poor? No problem. Image Max URL can help you find a better version or even the original.
Scouring a database of more than 10,000 websites (including most social media sites, news outlets, WordPress sites, and various image hosting services), Image Max URL will search for any image’s original version and, short of that, look for high-res alternatives.
Font Finder
Every designer has seen a beautiful font in the wild and thought — I need that font for my next project! But how to track it down? Try Font Finder.
Investigating your latest favorite font doesn’t require a major research project anymore. Font Finder gives you quick point-and-click access to:
Typography analysis. Font Finder reveals all relevant typographical characteristics like color, spacing, alignment, and of course font name.
Copy information. Any portion of the font analysis can be copied to a clipboard for convenient pasting anywhere.
Inline editing. All font characteristics (e.g. color, size, type) on an active element can be changed directly on the page.
Search by Image
If you’re a designer who scours the web looking for images to use in your work, but gets bogged down researching aspects like intellectual property ownership or subject matter context, you might consider an image search extension like Search by Image.
If you’re unfamiliar with the concept of image search, it works like text-based search, except your search starts with an image instead of a word or phrase. The Search by Image extension leverages the power of 30+ image search engines like TinEye, Google, Bing, Yandex, Getty Images, Pinterest, and others. This tool can be an incredible time saver when you can’t afford any guesswork about the images you want to repurpose.
Search by Image makes it simple to find the origins of almost any image you encounter on the web.
Extended Color Management
Built in partnership between Mozilla and Industrial Light & Magic, this niche extension performs an invaluable function for animation teams working remotely. Extended Color Management calibrates colors on Firefox so animators working from different home computer systems (which might display colors differently based on their operating systems) can trust the whole team is looking at the same exact shades of color through Firefox.
Like other browsers, Firefox by default utilizes color management (i.e. the optimization of color and brightness) from the distinct operating systems of the computers it runs on. The problem for professional animators working remotely is that they’re likely collaborating from different operating systems — and seeing slight but critically different variations in color rendering. Extended Color Management simply disables the default color management tools so animators with different operating systems are guaranteed to see the same versions of all colors, as rendered by Firefox.
Measure-it
What a handy tool for designers and developers — Measure-it lets you draw a ruler across any web page to get precise dimensions in pixels.
Access the ruler from a toolbar icon or keyboard shortcut. Other customization features include setting overlay colors, background opacity, and pop-up characteristics.
LanguageTool
More than just a spell checker, LanguageTool also…
Recognizes common misuses of similar sounding words (e.g. there/their, your/you’re)
Works on social media sites and email
Offers alternate phrasing and style suggestions for brevity and clarity
Please note LanguageTool’s full feature set is free during a 14-day trial period, then payment is required.
Dark Background and Light Text
If you spend all day (and maybe many nights) staring at a screen to scribe away, Dark Background and Light Text may ease strain on your eyes.
By default the extension flips the colors of every web page you visit, so light backgrounds become dark and dark text becomes light. But all color combinations are customizable, freeing you to adjust everything to taste. You can also set exceptions for certain websites that have a native look you prefer.
Dictionary Anywhere
It’s annoying when you have to navigate away from a page just to check a word definition elsewhere. Dictionary Anywhere fixes that by giving you instant access to word definitions without leaving the page you’re on.
Just double-click any word to get a pop-up definition right there on the page. Available in English, French, German, and Spanish. You can even save and download word definitions for later offline reference.
Dictionary Anywhere — no more navigating away from a page just to get a word check.
LeechBlock NG
Concentration is key for productive writing. Block time-wasting websites with LeechBlock NG.
This self-discipline aid lets you select websites that Firefox will restrict during time parameters you define — hours of the day, days of the week, or general time limits for specific sites. Even cooler, LeechBlock NG lets you block just portions of websites (for instance, you can allow yourself to see YouTube video pages but block YouTube’s homepage, which sucks you down a new rabbit hole every time!).
Gyazo
If your writing involves a fair amount of research and cataloging content, consider Gyazo for a better way to organize all the stuff you clip and save on the web.
Clip entire web pages or just certain elements, save images, take screenshots, mark them up with notes, and much more. Everything you clip is automatically saved to your Gyazo account, making it accessible across devices and collaborative teams.
With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.
We hope one of these extensions improves your creative output on Firefox! Explore more great media extensions on addons.mozilla.org.
Servo 0.0.4 and our December nightly builds now support multiple windows (@mrobinson, @mukilan, #40927, #41235, #41144)!
This builds on features that landed in Servo’s embedding API last month.
We’ve also landed support for several web platform features, both old and new:
‘contrast-color()’ in CSS color values (@webbeef, #41542)
For better compatibility with older web content, we now support vendor-prefixed CSS properties like ‘-moz-transform’ (@mrobinson, #41350), as well as window.clientInformation (@Taym95, #41111).
When using servoshell on Windows, you can now see --help and log output, as long as servoshell was started in a console (@jschwe, #40961).
Servo diagnostics options are now accessible in servoshell via the SERVO_DIAGNOSTICS environment variable (@atbrakhi, #41013), in addition to the usual -Z / --debug= arguments.
Servo’s devtools now partially support the Network > Security tab (@jiang1997, #40567), allowing you to inspect some of the TLS details of your requests.
We’ve also made it compatible with Firefox 145 (@eerii, #41087), and use fewer IPC resources (@mrobinson, #41161).
We now use the system root certificates by default (@Narfinger, @mrobinson, #40935, #41179), on most platforms.
If you don’t want to trust the system root certificates, you can instead continue to use Mozilla’s root certificates with --pref network_use_webpki_roots.
As always, you can also add your own root certificates via Opts::certificate_path (--certificate-path=).
Servo, the main handle for controlling Servo, is now cloneable for sharing within the same thread (@mukilan, @mrobinson, #41010).
To shut down Servo, simply drop the last Servo handle or let it go out of scope.
Servo::start_shutting_down and Servo::deinit have been removed (@mukilan, @mrobinson, #41012).
We can now evict entries from our HTTP cache (@Narfinger, @gterzian, @Taym95, #40613), rather than having it grow forever (or get cleared by an embedder).
about:memory now tracks SVG-related memory usage (@d-kraus, #41481), and we’ve fixed memory leaks in <video> and <audio> (@tharkum, #41131).
We’ve fixed a crash that occurs when <link rel="shortcut icon"> has an empty ‘href’ attribute, which affected chiptune.com (@webbeef, #41056), and we’ve also fixed crashes in:
Thanks again for your generous support!
We are now receiving 7110 USD/month (+10.5% over November) in recurring donations.
This helps us cover the cost of our speedy CI and benchmarking servers and one of our latest Outreachy interns, and fund maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 30 GitHub users (+2 over November) that depend on Servo are sponsoring us there.
If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support.
A big thanks from Servo to our newest Bronze Sponsors: Anthropy, Niclas Overby, and RxDB!
If you’re interested in this kind of sponsorship, please contact us at join@servo.org.
Servo developers Martin Robinson (@mrobinson) and Delan Azabani (@delan) will also be attending FOSDEM 2026, so it would be a great time to come along and chat about Servo!
YouTube wants you to experience YouTube in prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the display that reveals the number of thumbs-down Dislikes a video has, but with Return YouTube Dislike you can bring back the brutal truth.
“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”
YouTube High Definition
Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer.
In addition to automatic HD, YouTube High Definition can…
Customize video player size
HD support for clips embedded on external sites
Specify your ideal resolution (4k – 144p)
Set a preferred volume level
Also automatically plays the highest quality audio
YouTube NonStop
So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message.
Works on YouTube and YouTube Music. Now you’re free to navigate away from the YouTube tab for as long as you like and never worry about music interruption again.
YouTube Screenshot Button
If you take a lot of screenshots on YouTube, then the aptly titled YouTube Screenshot Button is worth your time.
You’ll find a “Screenshot” button conveniently located on the control panel of videos, or at the top of the screen on Shorts (or you can use custom keystrokes), so it’s always easy to snap a quick shot. Set preferences to automatically download screenshots as JPEG or PNG files.
Unhook
Instant serenity for YouTube! Unhook strips away unwanted distractions like the promotional sidebar, end-screen suggestions, trending tab, and much more.
More than two dozen customization options make this an essential extension for anyone seeking escape from YouTube rabbit holes. You can even hide notifications and live chat boxes.
“This is the best extension to control YouTube usage, and not let YouTube control you.”
PocketTube
If you subscribe to a lot of YouTube channels, PocketTube is a fantastic way to organize all your subscriptions by themed collections.
Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos,” etc. Other key features include…
Add custom icons to easily identify channel collections
Customize your feed so you just see videos you haven’t watched yet and prioritize videos from certain channels
Integrates seamlessly with YouTube homepage
Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler
PocketTube keeps your channel collections neatly tucked away to the side.
AdBlocker for YouTube
It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube.
The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube.
SponsorBlock
It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way.
Leveraging crowdsourced information to locate precisely where interruptive sponsored segments appear in videos, SponsorBlock automatically skips them, drawing on its ever-growing database of videos. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension).
SponsorBlock can also learn to skip non-music portions of music videos, as well as intros and outros. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled.
We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
Happy New Year!
What’s new or coming up in Firefox desktop
Preferences updates for 148
A new set of strings intended for inclusion in the preferences page of 148 landed in Pontoon on January 16. These strings, focused on controls for AI features, landed ahead of the UX and functionality implementation, so they are not currently testable. They should become testable in Nightly and Beta within the coming week.
Split view coming in 149
A new feature, called “split view”, is coming to Firefox 149. This feature and its related strings started landing at the end of 2025. You can test the feature now in Nightly: just right-click a tab and select “Add Split View”. (If the option isn’t showing in your Nightly, open about:config and ensure “browser.tabs.splitView.enabled” is set to true.)
What’s new or coming up in mobile
Android onboarding testing updates
It is now possible to test the onboarding experience in Firefox for Android without using a simulator or wiping your existing data. We are currently waiting for engineers to update the default configuration to align with the onboarding experience in Firefox 148 and newer. We hope this update will land in time for the release of 148, and we will communicate the change via Pontoon as soon as that’s available.
In the meantime, please review the updated testing documentation to see how to trigger the onboarding flow. Note that some UI elements will display string identifiers instead of translations until the configuration is updated.
Firefox for iOS localization screenshots
We heard your feedback about the screenshot process for Firefox for iOS. Thanks to everyone who answered the survey at the end of last year.
Screenshots are now available as a gallery for each locale. There is no longer a need to download and decompress a local zip file. You can browse the current screenshots for your locale, and use the links at the top to review the full history or compare changes between runs (generated roughly every two weeks).
A reminder that links to testing environments and instructions are always available from the project header in Pontoon.
What’s new or coming up in web projects
Firefox.com
We’re planning some changes to how content is managed on firefox.com, and these updates will have an impact on our existing localization workflows. Once the details are finalized, we’ll share more information and notify you directly in Pontoon.
What’s new or coming up in Pontoon
Pontoon infrastructure update
Behind the scenes, Pontoon has recently completed a major migration from Heroku to Google Cloud Platform. While this change should be largely invisible to localizers in day-to-day use, it brings noticeable improvements in performance, reliability, and scalability, helping ensure a smoother experience as contributor activity continues to grow. Huge thanks go to our Cloud Engineering partners for supporting this effort over the past months and helping make this important milestone possible.
Friends of the Lion
Image by Elio Qoshi
Since relaunching the contributor spotlight blog series, we’ve published two more stories highlighting the people behind our localization work.
We featured Robb, a professional translator from Romania, whose love for words and desire to help her mom keep up with modern technology have grown into a day-to-day commitment to making products and technology accessible in language that everyday people can understand.
We also spotlighted Andika from Indonesia, a long-time open source contributor who joined the localization community to ensure Firefox and other products feel natural and accessible for Indonesian-speaking users. His steady, long-term commitment to quality speaks volumes about the impact of thoughtful localization.
We’ll be continuing this series and are always looking for contributors to feature. You can help us find the next localizer to spotlight by nominating one of your fellow community members. We’d love to hear from you!
Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!
Mozilla has always believed that technology should empower people.
That belief shaped the early web, when browsers were still new and the idea of an open internet felt fragile. Today, the technology is more powerful, more complex, and more opaque, but the responsibility is the same. The question isn’t whether technology can do more. It’s whether it helps people feel capable, informed, and in control.
As we build new products at Mozilla today, that question is where we start.
I joined Mozilla to lead New Products almost one year ago this week because this is one of the few places still willing to take that responsibility seriously. Not just in what we ship, but in how we decide what’s worth building in the first place — especially at a moment when AI, platforms, and business models are all shifting at once.
Our mission — and mine — is to find the next set of opportunities for Mozilla and help shape the internet that all of us want to see.
Writing up to users
One of Mozilla’s longest-held principles is respect for the people who use our products. We assume users are thoughtful. We accept skepticism as a given (it forces product development rigor — more on that later). And we design accordingly.
That respect shows up not just in how we communicate, but in the kinds of systems we choose to build and the role we expect people to play in shaping them.
You can see this in the way we’re approaching New Products work across Mozilla today: Our current portfolio includes tools like Solo, which makes it easy for anyone to own their presence on the web; Tabstack, which helps developers enable agentic experiences; 0DIN, which pools the collective expertise of over 1400 researchers from around the globe to help identify and surface AI vulnerabilities; and an enterprise version of Firefox that treats the browser as critical infrastructure for modern work, not a data collection surface.
None of this is about making technology simpler than it is. It’s about making it legible. When people understand the systems they’re using, they can decide whether those systems are actually serving them.
Experimentation that respects people’s time
Mozilla experiments. A lot. But we try to do it without treating talent and attention as an unlimited resource. Building products that users love isn’t easy and requires us to embrace the uncertainty and ambiguity that comes with zero-to-one exploration.
Every experiment should answer a real question. It should be bounded. And it should be clear to the people interacting with it what’s being tested and why. That discipline matters, especially now. When everything can be prototyped quickly, restraint becomes part of the craft.
Fewer bets, made deliberately. A willingness to stop when something isn’t working. And an understanding that experimentation doesn’t have to feel chaotic to be effective.
Creating space for more kinds of builders
Mozilla has always believed that who builds is just as important as what gets built. But let’s be honest: The current tech landscape often excludes a lot of brilliant people, simply because the system is focused on only rewarding certain kinds of outcomes.
We want to unlock those meaningful ideas by making experimentation more practical for people with real-world perspectives. We’re focused on lowering the barriers to building — because we believe that making tech more inclusive isn’t just a nice-to-have, it’s how you build better products.
A practical expression of this approach
One expression of this philosophy is a new initiative we’ll be sharing more about soon: Mozilla Pioneers.
Pioneers isn’t an accelerator, and it isn’t a traditional residency. It’s a structured, time-limited way for experienced builders to work with Mozilla on early ideas without requiring them to put the rest of their lives on hold.
The structure is intentional. Pioneers is paid. It’s flexible. It’s hands-on. And it’s bounded. Participants work closely with Mozilla engineers, designers, and product leaders to explore ideas that could become real Mozilla products — or could simply clarify what shouldn’t be built.
Some of that work will move forward. Some won’t. Both outcomes are valuable. Pioneers exists because we believe that good ideas don’t only come from founders or full-time employees, and that meaningful contribution deserves real support.
Applications open Jan. 26. For anyone interested (and I hope that’s a lot of you), please follow us, share, and apply. In the meantime, know that what’s ahead is just one more example of how we’re trying to build with intention.
Looking ahead
Mozilla doesn’t pretend to have all the answers. But we’re clear about our commitments.
As we build new products, programs, and systems, we’re choosing clarity over speed, boundaries over ambiguity, and trust that compounds over time instead of short-term gains.
The future of the internet won’t be shaped only by what technology can do — but by what its builders choose to prioritize. Mozilla intends to keep choosing people.
The Rust team is happy to announce a new version of Rust, 1.93.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.93.0 with:

rustup update stable
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.93.0 stable
Update bundled musl to 1.2.5
The various *-linux-musl targets now all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le, which previously bundled musl 1.2.3. This update comes with several fixes and improvements, as well as a breaking change that affects the Rust ecosystem.
For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver, which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.
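To make the connection concrete, here is a minimal sketch of a program that exercises name resolution. When built for a musl target (e.g. x86_64-unknown-linux-musl) and statically linked, the lookup below goes through musl's built-in DNS resolver rather than glibc's NSS machinery; the host name is just an example.

```rust
use std::net::ToSocketAddrs;

fn main() -> std::io::Result<()> {
    // In a statically linked musl binary, this resolution is handled by
    // musl's built-in stub resolver -- the component that received major
    // improvements in musl 1.2.4 and fixes in 1.2.5.
    for addr in "localhost:443".to_socket_addrs()? {
        println!("{addr}");
    }
    Ok(())
}
```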
Allow the global allocator to use thread-local storage
Rust 1.93 adjusts the internals of the standard library to permit global allocators written in Rust to use std's thread_local! and std::thread::current without re-entrancy concerns, by using the system allocator instead.
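As a sketch of what this unlocks (the allocator and counter below are hypothetical illustrations, not std API): a global allocator that keeps a per-thread allocation count in a thread_local!, a pattern that previously risked re-entrancy during TLS initialization.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::cell::Cell;

// Hypothetical example: a global allocator that tracks per-thread
// allocation counts via `thread_local!` -- the pattern Rust 1.93's
// std changes make safe from re-entrancy hazards.
struct CountingAlloc;

thread_local! {
    static ALLOCS: Cell<u64> = const { Cell::new(0) };
}

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Touching TLS here is the interesting part.
        ALLOCS.with(|c| c.set(c.get() + 1));
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

fn main() {
    let v = vec![1, 2, 3]; // heap allocation, bumps the counter
    drop(v);
    println!("thread allocations: {}", ALLOCS.with(|c| c.get()));
}
```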
cfg on asm! lines
Previously, if individual parts of a section of inline assembly needed to be cfg'd, the full asm! block would need to be repeated with and without that section. In 1.93, cfg can now be applied to individual statements within the asm! block.
asm!( // or global_asm! or naked_asm!
    "nop",
    #[cfg(target_feature = "sse2")]
    "nop",
    // ...
    #[cfg(target_feature = "sse2")]
    a = const 123, // only used on sse2
);
Time flies! Six months have passed since our last crates.io development update, so it's time for another one. Here's a summary of the most notable changes and improvements made to crates.io over the past six months.
Security Tab
Crate pages now have a new "Security" tab that displays security advisories from the RustSec database. This allows you to quickly see if a crate has known vulnerabilities before adding it as a dependency.
The tab shows known vulnerabilities for the crate along with the affected version ranges.
This feature is still a work in progress, and we plan to add more functionality in the future. We would like to thank the OpenSSF (Open Source Security Foundation) for funding this work and Dirkjan Ochtman for implementing it.
Trusted Publishing Enhancements
In our July 2025 update, we announced Trusted Publishing support for GitHub Actions. Since then, we have made several enhancements to this feature.
GitLab CI/CD Support
Trusted Publishing now supports GitLab CI/CD in addition to GitHub Actions. This allows GitLab users to publish crates without managing API tokens, using the same OIDC-based authentication flow.
Note that this currently only works with GitLab.com. Self-hosted GitLab instances are not supported yet. The crates.io implementation has been refactored to support multiple CI providers, so adding support for other platforms like Codeberg/Forgejo in the future should be straightforward. Contributions are welcome!
Trusted Publishing Only Mode
Crate owners can now enforce Trusted Publishing for their crates. When enabled in the crate settings, traditional API token-based publishing is disabled, and only Trusted Publishing can be used to publish new versions. This reduces the risk of unauthorized publishes from leaked API tokens.
Blocked Triggers
The pull_request_target and workflow_run GitHub Actions triggers are now blocked from Trusted Publishing. These triggers have been responsible for multiple security incidents in the GitHub Actions ecosystem and are not worth the risk.
Source Lines of Code
Crate pages now display source lines of code (SLOC) metrics, giving you insight into the size of a crate before adding it as a dependency. This metric is calculated in a background job after publishing using the tokei crate. It is also shown on OpenGraph images:
Thanks to XAMPPRocky for maintaining the tokei crate!
Publication Time in Index
A new pubtime field has been added to crate index entries, recording when each version was published. This enables several use cases:
Cargo can implement cooldown periods for new versions in the future
Cargo can replay dependency resolution as if it were a past date, though yanked versions remain yanked
Services like Renovate can determine release dates without additional API requests
Thanks to Rene Leonhardt for the suggestion and Ed Page for driving this forward on the Cargo side.
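For illustration, a sparse-index entry with the new field might look like the following. The surrounding fields follow the documented index entry format (entries are single-line JSON; pretty-printed here for readability); the crate name, checksum, and the exact timestamp encoding of pubtime are assumptions.

```json
{
  "name": "example-crate",
  "vers": "1.2.3",
  "deps": [],
  "cksum": "d2c95b622c9f155f31f9083babe3a58ffb0e6fed36a55c4cd6f47cc36039d86b",
  "features": {},
  "yanked": false,
  "pubtime": "2026-01-15T12:34:56Z"
}
```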
Svelte Frontend Migration
At the end of 2025, the crates.io team evaluated several options for modernizing our frontend and decided to experiment with porting the website to Svelte. The goal is to create a one-to-one port of the existing functionality before adding new features.
This migration is still considered experimental and is a work in progress. Using a more mainstream framework should make it easier for new contributors to work on the frontend. The new Svelte frontend uses TypeScript and generates type-safe API client code from our OpenAPI description, so types flow from the Rust backend to the TypeScript frontend automatically.
Thanks to eth3lbert for the helpful reviews and guidance on Svelte best practices. We'll share more details in a future update.
Miscellaneous
These were some of the more visible changes to crates.io over the past six months, but a lot has happened "under the hood" as well.
Cargo user agent filtering: We noticed that download graphs were showing a constant background level of downloads even for unpopular crates due to bots, scrapers, and mirrors. Download counts are now filtered to only include requests from Cargo, providing more accurate statistics.
HTML emails: Emails from crates.io now support HTML formatting.
Encrypted GitHub tokens: OAuth access tokens from GitHub are now encrypted at rest in the database. While we have no evidence of any abuse, we decided to improve our security posture. The tokens were never included in the daily database dump, and the old unencrypted column has been removed.
Source link: Crate pages now display a "Browse source" link in the sidebar that points to the corresponding docs.rs page. Thanks to Carol Nichols for implementing this feature.
Fastly CDN: The sparse index at index.crates.io is now served primarily via Fastly to conserve our AWS credits for other use cases. In the past month, static.crates.io served approximately 1.6 PB across 11 billion requests, while index.crates.io served approximately 740 TB across 19 billion requests. A big thank you to Fastly for providing free CDN services through their Fast Forward program!
OpenGraph image improvements: We fixed emoji and CJK character rendering in OpenGraph images, which was caused by missing fonts on our server.
Background worker performance: Database indexes were optimized to improve background job processing performance.
CloudFront invalidation improvements: Invalidation requests are now batched to avoid hitting AWS rate limits when publishing large workspaces.
Feedback
We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!
(“This Week in Data” is a series of blog posts that the Data Team at Mozilla is using to communicate about our work. Posts in this series could be release notes, documentation, hopes, dreams, or whatever: so long as it’s about data.)
I’ve erased the y-axis because the absolute values don’t actually matter for this discussion, but this is basically a sparkline plot of active users of Firefox Desktop for 2025. The line starts and ends basically at the same height but wow does it have a lot of ups and downs between.
I went looking at this shape recently while trying to estimate the costs of continuing to collect Legacy Telemetry in Firefox Desktop. We’re at the point in our migration to Glean where you really ought to start removing your Legacy Telemetry probes unless you have some ongoing analyses that depend on them. I was working out a way to get a back-of-the-envelope dollar figure to scare teams into prioritizing such removals sooner rather than later.
Our ingestion metadata (how many bytes were processed by which pieces of the pipeline) only goes back sixty days, and I was worried that basing my cost estimate on numbers from December 2025 would make them unusually low compared to “a normal month”.
But what’s “normal”? Which of these months could be considered “normal” by any measure? I mean:
January: Beginning-of-year holiday slump
February: Only twenty-eight days long
March: Easter (sometimes), DST begins
April: Easter (sometimes), something that really starts suppressing activity
May: What’s with that big rebound in the second half?
June: Last day of school
July: School’s out, Northern Hemisphere Summer means less time on the ‘net and more time touching grass
August: Typical month for vacations in Europe
September: Back-to-school
October: Maybe “normal”?
November: US Thanksgiving
December: End-of-year holiday slump
October and maybe May are perhaps the closest things we have to “normal” months, and by being the only “normal”-ish months that makes them rather abnormal, don’t you think?
Now, I’ve been lying to you with data visualization here. If you’re exceedingly clever you’ll notice that, in the sparkline plot above, not only did I take the y-axis labels off, I didn’t start the y-axis at 0 (we had far more than zero active users of Firefox Desktop at the end of August, after all). I chose this to be illustrative of the differences from month to month, exaggerating them for effect. But if you look at, say, the Monthly Active Users (now combined Mobile + Desktop) on data.firefox.com it paints a rather more sedate picture, doesn’t it:
This isn’t a 100% fair comparison as data.firefox.com goes back years, and I stretched 2025 to be the same width, above… but you see what data visualization choices can do to help or hinder the story you’re hoping to tell.
At any rate, I hope you found it as interesting as I did to learn that December’s abnormality makes it just as “normal” as the rest of the months for my cost estimation purposes.
After introducing Debian packages for Firefox Nightly, we’re now excited to extend that to RPM-based distributions.
Just like with the Debian packages, switching to Mozilla’s RPM repository allows Firefox to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:
Better performance thanks to our advanced compiler-based optimizations,
Updates as fast as possible because the .rpm management is integrated into Firefox’s release process,
Hardened binaries with all security flags enabled during compilation,
No need to create your own .desktop file.
To install Firefox Nightly, follow these steps:
If you are on Fedora (41+) or any other distribution using dnf5 as the package manager:
It is worth noting that the firefox-nightly package will not conflict with your distribution’s Firefox package if you have it installed; you can have both at the same time!
Adding language packs
If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):
sudo dnf install firefox-nightly-l10n-fr
You can list the available languages with the following command:
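A likely form of that command, assuming dnf's standard search subcommand (the exact invocation is an assumption), is:

```shell
dnf search firefox-nightly-l10n
```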
Edit (2026-02-27): Bug 2009927 has been addressed. Packages are now signed and the instructions have been updated to include the GPG key (fingerprint: 14F26682D0916CDD81E37B6D61B7B526D98F0353).
Modern computer displays have gained more colorful capabilities in recent years with High Dynamic Range (HDR) being a headline feature. These displays can show vibrant shades of red, purple and green that were outside the capability of past displays, as well as higher brightness for portions of the displayed videos.
We are happy to announce that Firefox is gaining support for HDR video on Windows, now enabled in Firefox Nightly 148. This is experimental for the time being, as we want to gather feedback on what works and what does not across varied hardware in the wild before we deploy it for all Firefox users broadly. HDR video has already been live on macOS for some time now, and is being worked on for Wayland on Linux.
To get the full experience, you will need an HDR display, and the HDR feature needs to be turned on in Windows (Settings -> Display Settings) for that display. This release also changes how HDR video looks on non-HDR displays in some cases: this used to look very washed out, but it should be improved now. Feedback on whether this is a genuine improvement is also welcome. Popular streaming websites may be checking for this HDR capability, so they may now offer HDR video content to you, but only if HDR is enabled on the display.
We are actively working on HDR support for other web functionality such as WebGL, WebGPU, Canvas2D and static images, but have no current estimates on when those features will be ready: this is a lot of work, and relevant web standards are still in flux.
Note for site authors: Websites can use the CSS video-dynamic-range functionality to make separate HDR and SDR videos available for the same video element. This functionality detects if the user has the display set to HDR, not necessarily whether the display is capable of HDR mode. Displaying an HDR video on an SDR display is expected to work reasonably but requires more testing – we invite feedback on that.
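As a sketch of that pattern (file names are placeholders, and how widely the media attribute on <source> inside <video> is honored varies by browser, so treat this as an assumption about how a site might wire it up):

```html
<video controls>
  <!-- Offered when the user's display is set to HDR mode -->
  <source src="movie-hdr.mp4" media="(video-dynamic-range: high)">
  <!-- SDR fallback -->
  <source src="movie-sdr.mp4">
</video>
```

The media query here uses the CSS video-dynamic-range media feature mentioned above, which matches the display's current mode rather than its theoretical capability.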
Notes and limitations:
Some streaming sites offer HDR video only if the page is on an HDR-enabled display at the time the page is loaded. Refreshing the page will update that status if you have enabled/disabled HDR mode on the display or moved the window to another display with different capabilities. On the other hand, you can use this behavior to make side-by-side comparisons of HDR and non-HDR versions of a video on these streaming sites if that interests you.
Some streaming sites do not seem to offer HDR video to Firefox users at this time. This is not necessarily a problem with the HDR video functionality in Firefox; they may simply use codecs we do not currently support.
Viewing videos in HEVC format on Windows may require obtaining ‘HEVC Video Extensions’ format support from the Microsoft Store. This is a matter of codec support and not directly related to HDR, but some websites may use this codec for HDR content.
If you wish to not be offered HDR video by websites, you can set the pref ‘layout.css.video-dynamic-range.allows-high’ to false in about:config; we may decide to add this pref to the general browser settings if there is interest. Local files and websites that only offer HDR videos will still play in HDR if the encoding is HDR.
If you wish to experiment with the previous ‘washed out’ look for HDR video, you can set the pref ‘gfx.color_management.hdr_video’ to false. This is unlikely to be useful, but if you find you need to use it for some reason we would like to know (file a bug on Bugzilla).
No attempt has been made to read and use HDR metadata in video streams at this time. Windows seems to do something smart with tonemapping for this in our testing, but we will want to implement full support as in other browsers.
On the technical side: we’re defining HDR video as video using the BT2020 colorspace with the Perceptual Quantizer (PQ) transfer function defined in BT2100. In our observations, all HDR video on the web uses this exact combination of colorspace and transfer function, so we assume all BT2020 video is PQ as a matter of convenience. We’ve been making this assumption for a few years on macOS already. The ‘washed out’ HDR video look arose from using the stock BT2020 transfer function rather than PQ, as well as the use of a BGRA8 overlay. Now we use the RGB10A2 format if the colorspace is BT2020, as HDR requires at least 10 bits to match the quality of SDR video. Videos are assumed to be opaque (alpha channel not supported): we’re not aware of any use of transparency in videos in the wild. It would be interesting to know if that feature is used anywhere.
This blog post is written both as a heads-up to embedders of SpiderMonkey and an explanation of why the changes are coming.
As an embedder of SpiderMonkey one of the decisions you have to make is whether or not to provide your own implementation of the job queue.
The responsibility of the job queue is to hold pending jobs for Promises, which in the HTML spec are called ‘microtasks’. For embedders, the status quo of 2025 was two options:
Call JS::UseInternalJobQueues, and then at the appropriate point for your embedding, call JS::RunJobs. This uses an internal job queue and drain function.
Subclass and implement the JS::JobQueue type, storing and invoking your own jobs. An embedding might want to do this if they wanted to add their own jobs, or had particular needs for the shape of jobs and data carried alongside them.
The goal of this blog post is to indicate that SpiderMonkey’s handling of Promise jobs is changing over the next little while, and explain a bit of why.
If you’ve chosen to use the internal job queue, almost nothing should change for your embedding. If you’ve provided your own job queue, read on:
What’s Changing
The actual type of a job from the JS engine is changing to be opaque.
The responsibility for actually storing the Promise jobs is moving from the embedding to the engine, even in the case of an embedding-provided JobQueue.
As a result of (1), the interface to run a job from the queue is also changing.
I’ll cover this in a bit more detail, but a good chunk of the interface discussed is in MicroTask.h (this link is to a specific revision because I expect the header to move).
For most embeddings the changes turn out to be very mechanical. If you have specific challenges with your embedding please reach out.
Job Type
The type of a JS Promise job has been a JSFunction, and thus invoked with JS::Call. The job type is changing to an opaque type; the external interface to this type will be JS::Value (typedef’d as JS::GenericMicroTask).
This means that if you’re an embedder who had been storing your own tasks in the same queue as JS tasks you’ll still be able to, but you’ll need to use the queue access APIs in MicroTask.h. A queue entry is simply a JS::Value and so an arbitrary C address can be stored in it as a JS::PrivateValue.
Jobs now are split into two types: JSMicroTasks (enqueued by the JS engine) and GenericMicroTasks (possibly JS engine provided, possibly embedding provided).
Storage Responsibility
It used to be that if an embedding provided its own JobQueue, we’d expect it to store the jobs and trace the queue. Now that the queue lives inside the engine, the model is changing to one where the embedding must ask the JS engine to store any jobs it produces outside of promises if it would like to share the job queue.
Running Micro Tasks
The basic loop of microtask execution now looks like this:
JS::Rooted<JSObject*> executionGlobal(cx);
JS::Rooted<JS::GenericMicroTask> genericTask(cx);
JS::Rooted<JS::JSMicroTask> jsTask(cx);
while (JS::HasAnyMicroTasks(cx)) {
  genericTask = JS::DequeueNextMicroTask(cx);
  if (JS::IsJSMicroTask(genericTask)) {
    jsTask = JS::ToMaybeWrappedJSMicroTask(genericTask);
    executionGlobal = JS::GetExecutionGlobalFromJSMicroTask(jsTask);
    {
      AutoRealm ar(cx, executionGlobal);
      if (!JS::RunJSMicroTask(cx, jsTask)) {
        // Handle job execution failure in the same way a
        // JS::Call failure would have been handled.
      }
    }
    continue;
  }
  // Handle embedding jobs as appropriate.
}
The abstract separation of the execution global is required to handle cases with many compartments and complicated realm semantics (aka a web browser).
An example
In order to see roughly what the changes would look like, I attempted to patch GJS, the GNOME JS embedding which uses SpiderMonkey.
The patch is here. It doesn’t build due to other incompatibilities I found, but this is the rough shape of a patch for an embedding. As you can see, it’s fairly self-contained, with not too much work to be done.
Why Change?
In a word, performance. The previous form of Promise job management is very heavyweight with lots of overhead, causing performance to suffer.
The changes made here allow us to make SpiderMonkey quite a bit faster for dealing with Promises, and unlock the potential to get even faster.
How do the changes help?
Well, perhaps the most important change here is making the job representation opaque. This allows us to use pre-existing objects as stand-ins for the jobs, which means that rather than having to allocate a new object for every job (which is costly), we can sometimes allocate nothing at all, simply enqueuing an existing object with enough information to run the job.
Owning the queue will also allow us to choose the most efficient data structure for JS execution, potentially changing opaquely in the future as we find better choices.
Empirically, changing from the old microtask queue system to the new in Firefox led to an improvement of up to 45% on Promise heavy microbenchmarks.
Is this it?
I do not think this is the end of the story for changes in this area. I plan further investment. Aspirationally I would like this all to be stabilized by the next ESR release which is Firefox 153, which will ship to beta in June, but only time will tell what we can get done.
Future changes I can predict are things like
Renaming JS::JobQueue which is now more of a ‘jobs interface’
Renaming the MicroTask header to be less HTML specific
However, I can also imagine making more changes in the pursuit of performance.
At Mozilla, we’ve long believed that technology can be built differently — not only more openly, but more responsibly, more inclusively, and more in service of the people who rely on it. As AI reshapes nearly every layer of the internet, those values are being tested in real time.
At the Mozilla Festival 2025 in Barcelona, from Nov. 7–9, we brought together 50 founders from 30 companies across our portfolio to grapple with some of the most pressing questions in technology today: How do we build AI that is trustworthy and governable? How do we protect privacy at scale? What does “better social” look like after the age of the global feed? And how do we ensure that the future of technology is shaped by people and communities far beyond today’s centers of power?
Over three days of panels, talks, and hands-on sessions, founders shared not just what they’re building, but what they’re learning as they push into new terrain. What emerged is a vivid snapshot of where the industry is heading — and the hard choices required to get there.
Open source as strategy, not slogan
A major theme emerging across conversations with our founders was that open source is no longer a “nice to have.” It’s the backbone of trust, adoption, and long‑term resilience in AI, and a critical pillar for the startup ecosystem. But these founders aren’t naïve about the challenges. Training frontier‑scale models costs staggering sums, and the gravitational pull of a few dominant labs is real. Yet companies like Union.ai, Jozu, and Oumi show that openness can still be a moat — if it’s treated as a design choice, not a marketing flourish.
Their message is clear: open‑washing won’t cut it. True openness means clarity about what’s shared — weights, data, governance, standards — and why. It means building communities that outlast any single company. And it means choosing investors who understand that open‑source flywheels take time to spin up.
Community as the real competitive edge
Across November’s sessions, founders returned to a simple truth: community is the moat. Flyte’s growth into a Linux Foundation project, Jozu’s push for open packaging standards, and Lelapa’s community‑governed language datasets all demonstrate that the most durable advantage isn’t proprietary code — it’s shared infrastructure that people trust.
Communities harden technology, surface edge cases, and create the kind of inertia that keeps systems in place long after competitors appear. But they also require care: documentation, governance, contributor experience, and transparency. As one founder put it, “You can’t build community overnight. It’s years of nurturing.”
Ethics as infrastructure
One of the most powerful threads came from Lelapa AI, which reframes data not as raw material to be mined but as cultural property. Their licensing model, inspired by Māori data sovereignty, ensures that African languages — and the communities behind them — benefit from the value they create. This is openness with accountability, a model that challenges extractive norms and points toward a more equitable AI ecosystem.
It’s a reminder that ethical design isn’t a layer on top of technology — it’s part of the architecture.
The real competitor: fear
Founders spoke candidly about the biggest barrier to adoption: fear. Enterprises default to hyperscalers because no one gets fired for choosing the biggest vendor. Overcoming that inertia requires more than values. It requires reliability, security features, SSO, RBAC, audit logs — the “boring” but essential capabilities that make open systems viable in real organizations.
In other words, trust is built not only through ideals but through operational excellence.
A blueprint for builders
Across all 16 essays, a blueprint started to emerge for founders and startups committed to building responsible technology and open source AI:
Design openness as a strategic asset, not a giveaway.
Invest in community early, even before revenue.
Treat data ethics as non‑negotiable, especially when working with marginalized communities.
Name inertia as a competitor, and build the tooling that makes adoption feel safe.
Choose aligned investors, because misaligned capital can quietly erode your mission.
Taken together, the 16 essays in this report point to something larger than any single technology or trend. They show founders wrestling with how AI is governed, how trust is earned, how social systems can be rebuilt at human scale, and how innovation looks different when it starts from Lagos or Johannesburg instead of Silicon Valley.
The future of AI doesn’t have to be centralized, extractive, or opaque. The founders in this portfolio are proving that openness, trustworthiness, diversity, and public benefit can reinforce one another — and that competitive companies can be built on all four.
We hope you’ll dig into the report, explore the ideas these founders are surfacing, and join us in backing the people building what comes next.
My whole stream in the past months has been about AI coding. From skeptical
engineers who say it creates unmaintainable code, to enthusiastic (or scared)
engineers who say it will replace us all, the discourse is polarized. But I’ve
been more interested in a different question: what does AI coding actually cost,
and what does it actually save?
I recently had Claude help me with a substantial refactoring task: splitting a
monolithic Rust project into multiple workspace repositories with proper
dependency management. The kind of task that’s tedious, error-prone, and
requires sustained attention to detail across hundreds of files. When it was
done, I asked Claude to analyze the session: how much it cost, how long it took,
and how long a human developer would have taken.
The answer surprised me. Not because AI was faster or cheaper (that’s expected),
but because of how much faster and cheaper.
The Task: Repository Split and Workspace Setup
The work involved:
Planning and researching the codebase structure
Migrating code between three repositories
Updating thousands of import statements
Configuring Cargo workspaces and dependencies
Writing Makefiles and build system configuration
Setting up CI/CD workflows with GitHub Actions
Updating five different documentation files
Running and verifying 2300+ tests
Creating branches and writing detailed commit messages
This is real work. Not a toy problem, not a contrived benchmark. The kind of multi-day slog that every engineer has faced: important but tedious, requiring precision but not creativity.
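For readers who haven’t set up a Cargo workspace, the end state of a split like this looks roughly as follows. The crate names here are invented for illustration; the post doesn’t name the actual repositories.

```toml
# Root Cargo.toml of one of the split repositories (illustrative).
# A workspace lets several crates share one lockfile and one build tree.
[workspace]
members = ["core-lib", "cli", "bindings"]
resolver = "2"

# Shared dependency versions, which member crates can inherit with
# `serde = { workspace = true }` in their own Cargo.toml files.
[workspace.dependencies]
serde = "1"
```

Centralizing versions this way is a large part of what “proper dependency management” means in a multi-crate split: one place to bump a version instead of one per repository.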
The Numbers
AI Execution Time
Total: approximately 3.5 hours across two sessions, at a marginal cost of roughly $5.
This is the marginal execution cost for this specific task. It doesn’t include my Claude subscription, the time I spent iterating on prompts and reviewing output, or the risk of having to revise or fix AI-generated changes. For a complete accounting, you’d also need to consider those factors, though for this task they were minimal.
Human Developer Time Estimate
Conservative estimate: 2-3 days (16-24 hours)
This is my best guess based on experience with similar tasks, but it comes with uncertainty. A senior engineer deeply familiar with this specific codebase might work faster. Someone encountering similar patterns for the first time might work slower. Some tasks could be partially templated or parallelized across a team.
Breaking down the work:
Planning and research (2-4 hours): Understanding codebase structure, planning dependency strategy, reading PyO3/Maturin documentation
Testing and debugging (3-5 hours): Running test suites, fixing unexpected failures, verifying tests pass, testing on different platforms
Git operations and cleanup (1-2 hours): Creating branches, writing commit messages, final verification
Even if we’re generous and assume a very experienced developer could complete this in 8 hours of focused work, the time and cost advantages remain substantial. The economics don’t depend on the precise estimate.
Savings: approximately 85-90% time reduction, approximately 99% marginal cost reduction
These numbers compare execution time and per-task marginal costs. They don’t capture everything (platform costs, review time, long-term maintenance implications), but they illustrate the scale of the difference for this type of systematic refactoring work.
Why AI Was Faster
The efficiency gains weren’t magic. They came from specific characteristics of how AI approaches systematic work:
No context switching fatigue. Claude maintained focus across three repositories simultaneously without the cognitive load that would exhaust a human developer. No mental overhead from jumping between files, no “where was I?” moments after a break.
Instant file operations. Reading and writing files happens without the delays of IDE loading, navigation, or search. What takes a human seconds per file took Claude milliseconds.
Pattern matching without mistakes. Updating thousands of import statements consistently, without typos, without missing edge cases. No ctrl-H mistakes, no regex errors that you catch three files later.
Parallel mental processing. Tracking multiple files at once without the working memory constraints that force humans to focus narrowly.
Documentation without overhead. Generating comprehensive, well-structured documentation in one pass. No switching to a different mindset, no “I’ll document this later” debt.
Error recovery. When workspace conflicts or dependency issues appeared, Claude fixed them immediately without the frustration spiral that can derail a human’s momentum.
Commit message quality. Detailed, well-structured commit messages generated instantly. No wrestling with how to summarize six hours of work into three bullet points.
What Took Longer
AI wasn’t universally faster. Two areas stood out:
Initial codebase exploration. Claude spent time systematically understanding the structure before implementing. A human developer might have jumped in faster with assumptions (though possibly paying for it later with rework).
User preference clarification. Some back-and-forth on git dependencies versus crates.io, version numbering conventions. A human working alone would just make these decisions implicitly based on their experience.
These delays were minimal compared to the overall time savings, but they’re worth noting. AI coding isn’t instantaneous magic. It’s a different kind of work with different bottlenecks.
The Economics of Coding
Let me restate those numbers because they still feel surreal:
85-90% time reduction
99% marginal cost reduction
For this type of task, these are order-of-magnitude improvements over solo human execution. And they weren’t achieved through cutting corners or sacrificing immediate quality. The tests passed, the documentation was comprehensive, the commits were well-structured, the code compiled cleanly.
That said, tests passing and documentation existing are necessary but not sufficient signals of quality. Long-term maintainability, latent bugs that only surface later, or future refactoring friction are harder to measure immediately. The code is working, but it’s too soon to know if there are subtle issues that will emerge over time.
This creates strange economics for a specific class of work: systematic, pattern-based refactoring with clear success criteria. For these tasks, the time and cost reductions change how we value engineering effort and prioritize maintenance work.
I used to avoid certain refactorings because the payoff didn’t justify the time investment. Clean up import statements across 50 files? Update documentation after a restructure? Write comprehensive commit messages? These felt like luxuries when there was always more pressing work.
But at $5 marginal cost and 3.5 hours for this type of systematic task, suddenly they’re not trade-offs anymore. They’re obvious wins. The economics shift from “is this worth doing?” to “why haven’t we done this yet?”
What This Doesn’t Mean
Before the “AI will replace developers” crowd gets too excited, let me be clear about what this data doesn’t show:
This was a perfect task for AI. Systematic, pattern-based, well-scoped, with clear success criteria. The kind of work where following existing patterns and executing consistently matters more than creative problem-solving or domain expertise.
AI did not:
Design the architecture (I did)
Decide on the repository structure (I did)
Choose the dependency strategy (we decided together)
Understand the business context (I provided it)
Know whether the tests passing meant the code was correct (I validated)
The task was pure execution. Important execution, skilled execution, but execution nonetheless. A human developer would have brought the same capabilities to the table, just slower and at higher cost.
Where This Goes
I keep thinking about that 85-90% time reduction for this specific type of task. Not simple one-liners where AI already shines, but systematic maintenance work with high regularity, strong compiler or test feedback, and clear end states.
Tasks with similar characteristics might include:
Updating deprecated APIs across a large codebase
Migrating from one framework to another with clear patterns
Standardizing code style and patterns
Refactoring for testability where tests guide correctness
Adding comprehensive logging and monitoring
Writing and updating documentation
Creating detailed migration guides
Many maintenance tasks are messier: ambiguous semantics, partial test coverage, undocumented invariants, organizational constraints. The economics I observed here don’t generalize to all refactoring work. But for the subset that is systematic and well-scoped, the shift is significant.
All the work that we know we should do but often defer because it doesn’t feel like progress. What if the economics shifted enough for these specific tasks that deferring became the irrational choice?
I’m not suggesting AI replaces human judgment. Someone still needs to decide what “good” looks like, validate the results, understand the business context. But if the execution of systematic work becomes 10x cheaper and faster, maybe we stop treating certain categories of technical debt like unavoidable burdens and start treating them like things we can actually manage.
The Real Cost
There’s one cost the analysis didn’t capture: my time. I wasn’t passive during those 3.5 hours. I was reading Claude’s updates, reviewing file changes, answering questions, validating decisions, checking test results.
I don’t know exactly how much time I spent, but it was less than the 3.5 hours Claude was working. Maybe 2 hours of active engagement? The rest was Claude working autonomously while I did other things.
So the real comparison isn’t 3.5 AI hours versus 16-24 human hours. It’s 2 hours of human guidance plus 3.5 hours of AI execution versus 16-24 hours of human solo work. Still a massive win, but different from pure automation.
This feels like the right model: AI as an extremely capable assistant that amplifies human direction rather than replacing human judgment. The economics work because you’re multiplying effectiveness, not substituting one for the other.
Final Thoughts
Five dollars marginal cost. Three and a half hours. For systematic refactoring work that would have taken me days and cost hundreds or thousands of dollars in my time.
These numbers make me think differently about certain kinds of work. About how we prioritize technical debt in the systematic, pattern-based category. About what “too expensive to fix” really means for these specific tasks. About whether we’re approaching some software maintenance decisions with outdated economic assumptions.
I’m still suspicious of broad claims that AI fundamentally changes how we work. But I’m less suspicious than I was. When the economics shift this dramatically for a meaningful class of tasks, some things that felt like pragmatic trade-offs start to look different.
The tests pass. The documentation is up to date. And I paid less than the cost of a fancy coffee drink.
Maybe the skeptics and the enthusiasts are both right. Maybe AI doesn’t replace developers and maybe it does change some things meaningfully. Maybe it just makes certain kinds of systematic work cheap enough that we can finally afford to do them right.
What About Model and Pricing Changes?
One caveat worth noting: these economics depend on Claude Sonnet 4.5 at January 2026 pricing. Model pricing can change, model performance can regress or improve with updates, tool availability can shift, and organizational data governance constraints might limit what models you can use or what tasks you can delegate to them.
For individuals and small teams, this might not matter much in the short term. For larger organizations making long-term planning decisions, these factors matter. The specific numbers here are a snapshot, not a guarantee.
References
Claude Code - The AI coding assistant used for this project
This is another post in our series covering what we learned through the Vision Doc process. In our first post, we described the overall approach and what we learned about doing user research. In our second post, we explored what people love about Rust. This post goes deep on one domain: safety-critical software.
When we set out on the Vision Doc work, one area we wanted to explore in depth was safety-critical systems: software where malfunction can result in injury, loss of life, or environmental harm. Think vehicles, airplanes, medical devices, industrial automation. We spoke with engineers at OEMs, integrators, and suppliers across automotive (mostly), industrial, aerospace, and medical contexts.
What we found surprised us a bit. The conversations kept circling back to a single tension: Rust's compiler-enforced guarantees address much of what Functional Safety Engineers and Software Engineers in these spaces spend their time preventing, but once you move beyond prototyping into the higher-criticality parts of a system, the ecosystem support thins out fast. There is no MATLAB/Simulink Rust code generation. There is no OSEK or AUTOSAR Classic-compatible RTOS written in Rust or with first-class Rust support. The tooling for qualification and certification is still maturing.
Quick context: what makes software "safety-critical"
If you've never worked in these spaces, here's the short version. Each safety-critical domain has standards that define a ladder of integrity levels: ISO 26262 in automotive, IEC 61508 in industrial, IEC 62304 in medical devices, DO-178C in aerospace. The details differ, but the shape is similar: as you climb the ladder toward higher criticality, the demands on your development process, verification, and evidence all increase, and so do the costs.1
This creates a strong incentive for decomposition: isolate the highest-criticality logic into the smallest surface area you can, and keep everything else at lower levels where costs are more manageable and you can move faster.
We'll use automotive terminology in this post (QM through ASIL D) since that's where most of our interviews came from, but the patterns generalize. These terms represent increasing levels of safety-criticality, with QM being the lowest and ASIL D being the highest. The story at low criticality looks very different from the story at high criticality, regardless of domain.
Rust is already in production for safety-critical systems
Before diving into the challenges, it is worth noting that Rust is not just being evaluated in these domains. It is deployed and running in production.
We spoke with a principal firmware engineer working on mobile robotics systems certified to IEC 61508 SIL 2:
"We had a new project coming up that involved a safety system. And in the past, we'd always done these projects in C using third party stack analysis and unit testing tools that were just generally never very good, but you had to do them as part of the safety rating standards. Rust presented an opportunity where 90% of what the stack analysis stuff had to check for is just done by the compiler. That combined with the fact that now we had a safety qualified compiler to point to was kind of a breakthrough." -- Principal Firmware Engineer (mobile robotics)
We also spoke with an engineer at a medical device company deploying IEC 62304 Class B software to intensive care units:
"All of the product code that we deploy to end users and customers is currently in Rust. We do EEG analysis with our software and that's being deployed to ICUs, intensive care units, and patient monitors." -- Rust developer at a medical device company
"We changed from this Python component to a Rust component and I think that gave us a 100-fold speed increase." -- Rust developer at a medical device company
These are not proofs of concept. They are shipping systems in regulated environments, going through audits and certification processes. The path is there. The question is how to make it easier for the next teams coming through.
Rust adoption is easiest at QM, and the constraints sharpen fast
At low criticality, teams described a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what you ship. One architect at an automotive OEM told us:
"We can use any crate [from crates.io] [..] we have to take care to prepare the software components for production usage." -- Architect at Automotive OEM
But at higher levels, third-party dependencies become difficult to justify. Teams either rewrite, internalize, or strictly constrain what they use. An embedded systems engineer put it bluntly:
"We tend not to use 3rd party dependencies or nursery crates [..] solutions become kludgier as you get lower in the stack." -- Firmware Engineer
Some teams described building escape hatches, abstraction layers designed for future replacement:
"We create an interface that we'd eventually like to have to simplify replacement later on [..] sometimes rewrite, but even if re-using an existing crate we often change APIs, write more tests." -- Team Lead at Automotive Supplier (ASIL D target)
Even teams that do use crates from crates.io described treating that as a temporary accelerator, something to track carefully and remove from critical paths before shipping:
"We use crates mainly for things in the beginning where we need to set up things fast, proof of concept, but we try to track those dependencies very explicitly and for the critical parts of the software try to get rid of them in the long run." -- Team lead at an automotive software company developing middleware in Rust
In aerospace, the "control the whole stack" instinct is even stronger:
"In aerospace there's a notion of we must own all the code ourselves. We must have control of every single line of code." -- Engineering lead in aerospace
This is the first big takeaway: a lot of "Rust in safety-critical" is not just about whether Rust compiles for a target. It is about whether teams can assemble an evidence-friendly software stack and keep it stable over long product lifetimes.
The compiler is doing work teams used to do elsewhere
Many interviewees framed Rust's value in terms of work shifted earlier and made more repeatable by the compiler. This is not just "nice," it changes how much manual review you can realistically afford. Much of what was historically process-based enforcement through coding standards like MISRA C and CERT C becomes a language-level concern in Rust, checked by the compiler rather than external static analysis or manual review.
"Roughly 90% of what we used to check with external tools is built into Rust's compiler." -- Principal Firmware Engineer (mobile robotics)
We heard variations of this from teams dealing with large codebases and varied skill levels:
"We cannot control the skill of developers from end to end. We have to check the code quality. Rust by checking at compile time, or Clippy tools, is very useful for our domain." -- Engineer at a major automaker
Even on smaller teams, the review load matters:
"I usually tend to work on teams between five and eight. Even so, it's too much code. I feel confident moving faster, a certain class of flaws that you aren't worrying about." -- Embedded systems engineer (mobile robotics)
Closely related: people repeatedly highlighted Rust's consistency around error handling:
"Having a single accepted way of handling errors used throughout the ecosystem is something that Rust did completely right." -- Automotive Technical Lead
For teams building products with 15-to-20-year lifetimes and "teams of teams," compiler-enforced invariants scale better than "we will just review harder."
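To make the "work shifted into the compiler" point concrete, here is a minimal sketch (all names are illustrative, not from any codebase in the interviews). In C, an ignored error code is exactly the kind of thing MISRA tooling or review has to catch; in Rust, ignoring a `Result` produces a compiler warning, and matching on one must cover every case.

```rust
// Illustrative sketch: error handling the compiler enforces.
// SensorError and read_scaled are hypothetical names.

#[derive(Debug, PartialEq)]
enum SensorError {
    OutOfRange,
}

// Result is #[must_use]: a caller that silently drops the return value
// gets an unused_must_use warning, unlike a dropped C error code.
fn read_scaled(raw: u16) -> Result<f32, SensorError> {
    if raw > 4095 {
        // Assumes a 12-bit ADC; purely for illustration.
        return Err(SensorError::OutOfRange);
    }
    Ok(raw as f32 / 4095.0)
}

fn main() {
    // A match over Result must handle both arms; forgetting the error
    // path is a compile error, not a review finding.
    match read_scaled(5000) {
        Ok(v) => println!("value = {v}"),
        Err(e) => println!("error = {e:?}"),
    }
    assert_eq!(read_scaled(5000), Err(SensorError::OutOfRange));
    assert!(read_scaled(4095).is_ok());
}
```

This is the mechanism behind the "90% of what external tools checked" quote: the rule isn't written in a coding standard and audited later, it is unrepresentable-or-flagged at compile time.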
Teams want newer compilers, but also stability they can explain
A common pattern in safety-critical environments is conservative toolchain selection. But engineers pointed out a tension: older toolchains carry their own defect history.
"[..] traditional wisdom is that after something's been around and gone through motions / testing then considered more stable and safer [..] older compilers used tend to have more bugs [and they become] hard to justify" -- Software Engineer at an Automotive supplier
Rust's edition system was described as a real advantage here, especially for incremental migration strategies that are common in automotive programs:
"[The edition system is] golden for automotive, where incremental migration is essential." -- Software Engineer at major Automaker
In practice, "stability" is also about managing the mismatch between what the platform supports and what the ecosystem expects. Teams described pinning Rust versions, then fighting dependency drift:
"We can pin the Rust toolchain, but because almost all crates are implemented for the latest versions, we have to downgrade. It's very time-consuming." -- Engineer at a major automaker
For safety-critical adoption, "stability" is operational. Teams need to answer questions like: What does a Rust upgrade change, and what does it not change? What are the bounds on migration work? How do we demonstrate we have managed upgrade risk?
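In concrete terms, the pinning teams described usually lives in two small files. This is a sketch with invented names and an arbitrary version, not a recommendation:

```toml
# rust-toolchain.toml — pins the exact toolchain for every developer
# and CI runner, so "what compiler built this?" has one answer.
[toolchain]
channel = "1.75.0"   # illustrative version

# Cargo.toml — declares the crate's minimum supported Rust version.
[package]
name = "safety-component"   # hypothetical crate name
version = "0.1.0"
edition = "2021"
rust-version = "1.75"
```

Cargo refuses to build dependencies that declare a `rust-version` newer than the active toolchain, and newer Cargo releases can factor MSRV into dependency resolution — which is exactly where the "downgrade crates by hand" pain described above comes from when the ecosystem doesn't declare or respect MSRVs.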
Target support matters in practical ways
Safety-critical software often runs on long-lived platforms and RTOSs. Even when "support exists," there can be caveats. Teams described friction around targets like QNX, where upstream Rust support exists but with limitations (for example, QNX 8.0 support is currently no_std only).2
This connects to Rust's target tier policy: the policy itself is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime."
"I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using. That's simply not acceptable. If you want to invest in some technology, you want to have a certain reliability." -- Senior software engineer at a major automaker
core is the spine, and it sets expectations
In no_std environments, core becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit.
A lot of Rust's safety leverage lives there: Option and Result, slices, iterators, Cell and RefCell, atomics, MaybeUninit, Pin. But we also heard a consistent shape of gaps: many embedded and safety-critical projects want no_std-friendly building blocks (fixed-size collections, queues) and predictable math primitives, but do not want to rely on "just any" third-party crate at higher integrity levels.
"Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core." -- Principal Firmware Engineer (mobile robotics)
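The kind of `no_std`-friendly building block teams described wanting can often be written from `core` alone. Here is an illustrative sketch of a fixed-capacity stack built only on arrays and `Option` — no heap, no third-party crate. (In a real embedded build the crate would carry `#![no_std]`; it is omitted here so the example runs as ordinary Rust.)

```rust
// Illustrative fixed-capacity stack using only core building blocks.
// N is a compile-time capacity; push cannot allocate and cannot panic.
struct FixedStack<T: Copy + Default, const N: usize> {
    buf: [T; N],
    len: usize,
}

impl<T: Copy + Default, const N: usize> FixedStack<T, N> {
    fn new() -> Self {
        Self { buf: [T::default(); N], len: 0 }
    }

    // On overflow the item is handed back to the caller, who decides
    // what "full" means for the system — no hidden allocation.
    fn push(&mut self, item: T) -> Result<(), T> {
        if self.len == N {
            return Err(item);
        }
        self.buf[self.len] = item;
        self.len += 1;
        Ok(())
    }

    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        Some(self.buf[self.len])
    }
}

fn main() {
    let mut s: FixedStack<u32, 2> = FixedStack::new();
    assert!(s.push(1).is_ok());
    assert!(s.push(2).is_ok());
    assert_eq!(s.push(3), Err(3)); // capacity exhausted, caller decides
    assert_eq!(s.pop(), Some(2));
}
```

The point is not that every team should hand-roll collections — it is that `core` makes this small, auditable, and free of the dependency-justification problem at higher integrity levels.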
Async is appealing, but the long-run story is not settled
Some safety-critical-adjacent systems are already heavily asynchronous: daemons, middleware frameworks, event-driven architectures. That makes Rust's async story interesting.
But people also expressed uncertainty about ecosystem lock-in and what it would take to use async in higher-criticality components. One team lead developing middleware told us:
"We're not sure how async will work out in the long-run [in Rust for safety-critical]. [..] A lot of our software is highly asynchronous and a lot of our daemons in the AUTOSAR Adaptive Platform world are basically following a reactor pattern. [..] [C++14] doesn't really support these concepts, so some of this is lack of familiarity." -- Team lead at an automotive software company developing middleware in Rust
And when teams look at async through an ISO 26262 lens, the runtime question shows up immediately:
"If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262." -- Team lead at an automotive software company developing middleware in Rust
Async is not "just a language feature" in safety-critical contexts. It pulls in runtime choices, scheduling assumptions, and, at higher integrity levels, the question of what it would mean to certify or qualify the relevant parts of the stack.
Recommendations
Find ways to help the safety-critical community support their own needs. Open source helps those who help themselves. The Ferrocene Language Specification (FLS) shows this working well: it started as an industry effort to create a specification suitable for safety-qualification of the Rust compiler, companies invested in the work, and it now has a sustainable home under the Rust Project with a team actively maintaining it.3
Contrast this with MC/DC coverage support in rustc. Earlier efforts stalled due to lack of sustained engagement from safety-critical companies.4 The technical work was there, but without industry involvement to help define requirements, validate the implementation, and commit to maintaining it, the effort lost momentum. A major concern was that the MC/DC code added maintenance burden to the rest of the coverage infrastructure without a clear owner. Now in 2026, there is renewed interest in doing this the right way: companies are working through the Safety-Critical Rust Consortium to create a Rust Project Goal in 2026 to collaborate with the Rust Project on MC/DC support. The model is shared ownership of requirements, with primary implementation and maintenance done by companies with a vested interest in safety-critical, done in a way that does not impede maintenance of the rest of the coverage code.
The remaining recommendations follow this pattern: the Safety-Critical Rust Consortium can help the community organize requirements and drive work, with the Rust Project providing the deep technical knowledge of Rust Project artifacts needed for successful collaboration. The path works when both sides show up.
Establish ecosystem-wide MSRV conventions. The dependency drift problem is real: teams pin their Rust toolchain for stability, but crates targeting the latest compiler make this difficult to sustain. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce this friction. This would require coordination between the Rust Project (potentially the release team) and the broader ecosystem, with the Safety-Critical Rust Consortium helping to articulate requirements and adoption patterns.
Turn "target tier policy" into a safety-critical onramp. The friction we heard is not about the policy being unclear, it is about translating "tier" into practical decisions. A short, target-focused readiness checklist would help: Which targets exist? Which ones are no_std only? What is the last known tested OS version? What are the top blockers? The raw ingredients exist in rustc docs, release notes, and issue trackers, but pulling them together in one place would lower the barrier. Clearer, consolidated information also makes it easier for teams who depend on specific targets to contribute to maintaining them. The Safety-Critical Rust Consortium could lead this effort, working with compiler team members and platform maintainers to keep the information accurate.
Document "dependency lifecycle" patterns teams are already using. The QM story is often: use crates early, track carefully, shrink dependencies for higher-criticality parts. The ASIL B+ story is often: avoid third-party crates entirely, or use abstraction layers and plan to replace later. Turning those patterns into a reusable playbook would help new teams make the same moves with less trial and error. This seems like a natural fit for the Safety-Critical Rust Consortium's liaison work.
Define requirements for a safety-case friendly async runtime. Teams adopting async in safety-critical contexts need runtimes with appropriate quality and process artifacts for standards like ISO 26262. Work is already happening in this space.5 The Safety-Critical Rust Consortium could lead the effort to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.
Treat interop as part of the safety story. Many teams are not going to rewrite their world in Rust. They are going to integrate Rust into existing C and C++ systems and carry that boundary for years. Guidance and tooling to keep interfaces correct, auditable, and in sync would help. The compiler team and lang team could consider how FFI boundaries are surfaced and checked, informed by requirements gathered through the Safety-Critical Rust Consortium.
"We rely very heavily on FFI compatibility between C, C++, and Rust. In a safety-critical space, that's where the difficulty ends up being, generating bindings, finding out what the problem was." -- Embedded systems engineer (mobile robotics)
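The boundary those teams carry for years has a simple basic shape. Here is an illustrative sketch of a Rust function exported to C callers; the function name is hypothetical, and a real project would typically generate the matching C header with a tool such as cbindgen rather than keeping it in sync by hand.

```rust
// Illustrative FFI export. #[no_mangle] keeps the symbol name stable
// for the linker; extern "C" fixes the calling convention.
//
// Safety-relevant design choice: nothing here can panic, so no panic
// can unwind across the FFI boundary (undefined behavior in older
// Rust, an abort in newer ones).
#[no_mangle]
pub extern "C" fn sat_add_u16(a: u16, b: u16) -> u16 {
    a.saturating_add(b)
}

fn main() {
    // A C caller would declare:
    //   uint16_t sat_add_u16(uint16_t a, uint16_t b);
    assert_eq!(sat_add_u16(65_000, 10_000), u16::MAX);
    assert_eq!(sat_add_u16(1, 2), 3);
}
```

Keeping that Rust signature and the C declaration in agreement — automatically, with evidence — is the "tooling to keep interfaces correct, auditable, and in sync" the recommendation calls for.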
Conclusion
To sum up the main points in this post:
Rust is already deployed in production for safety-critical systems, including mobile robotics (IEC 61508 SIL 2) and medical devices (IEC 62304 Class B). The path exists.
Rust's defaults (memory safety, thread safety, strong typing) map directly to much of what Functional Safety Engineers spend their time preventing. But ecosystem support thins out as you move toward higher-criticality software.
At low criticality (QM), teams use crates freely and harden later. At higher levels (ASIL B+), third-party dependencies become difficult to justify, and teams rewrite, internalize, or build abstraction layers for future replacement.
The compiler is doing work that used to require external tools and manual review. Much of what was historically process-based enforcement through standards like MISRA C and CERT C becomes a language-level concern, checked by the compiler. That scales better than "review harder" for long-lived products with large teams, and it helps engineers in these domains feel more secure in the systems they ship.
Stability is operational: teams need to explain what upgrades change, manage dependency drift, and map target tier policies to their platform reality.
Async is appealing for middleware and event-driven systems, but the runtime and qualification story is not settled for higher-criticality use.
We make six recommendations: find ways to help the safety-critical community support their own needs, establish ecosystem-wide MSRV conventions, create target-focused readiness checklists, document dependency lifecycle patterns, define requirements for safety-case friendly async runtimes, and treat C/C++ interop as part of the safety story.
Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful. The goal is to make Rust's strengths more accessible in environments where correctness and safety are not optional.
If you're curious about how rigor scales with cost in ISO 26262, this Feabhas guide gives a good high-level overview. ↩
Over a year ago, we introduced an updated version of the sidebar that offers easy access to multiple tools – bookmarks, history, tabs from other devices, and a selection of chatbots – all in one place. As the new version has gained popularity and we plan our future work, we have made a decision to retire the older version in 2026.
Old sidebar version
Updated sidebar version
We know that changes like this can be disruptive – especially when they affect established workflows you rely on every day. While use of the older version has been declining, it remains a familiar and convenient tool for many – especially long-time Firefox users who have built workflows around it.
Unfortunately, supporting two versions means dividing the time and attention of a very small team. By focusing on a single updated version, we can fix issues more quickly, incorporate feedback more efficiently, and deliver new features more consistently for everyone. For these reasons, in 2026, we will focus on improving the updated sidebar to provide many of the conveniences of the older version, then transition everyone to the updated version.
Here’s what to expect:
Starting with Firefox Nightly 148, we have turned on the new sidebar by default for Nightly users. The new default will remain Nightly-only for a few releases to allow us to implement planned improvements and existing community requests, and to collect additional feedback.
In Q2 2026, all users of the older version in release will be migrated to the updated sidebar. After the switch, for a period of time, we will keep the option to return to the older version to support folks who may be affected by bugs we fail to discover during Nightly testing. During this period, you will still be able to temporarily switch back to the old sidebar by going to Firefox Settings > General > Browser Layout and unchecking the Show sidebar option.
In Q3 2026, we will fully retire the original sidebar and remove the associated pref as we complete the transition.
Our goal is to make our transition plans transparent and implement suggested improvements that are feasible within the new interaction model, while preserving the speed and flexibility that long-time sidebar users value. Several implemented and planned improvements to the updated sidebar were informed by your feedback, and we expect that to continue throughout the transition.
If you’d like to share what functionality you’ve been missing in the new sidebar and what challenges you’ve experienced when you tried to adopt it, please share your thoughts in this Mozilla Connect thread or file a bug in Bugzilla’s Sidebar component, so your feedback can continue shaping Firefox.
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 147 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.
In Firefox 147, two WebDriver bugs were fixed by contributors:
Implemented the input.fileDialogOpened event, which is emitted whenever a file picker is triggered by the content page, for instance after clicking on an input with type="file".
An AI coding policy was published in the Firefox source docs.
Suhaib Mujahid built an MCP server to facilitate the integration of AI assistants with the Firefox development tooling. It enables AI assistants to search using Searchfox, read Bugzilla bugs and Phabricator revisions, access Firefox source documentation, and streamline patch review workflows.
Suhaib Mujahid extended the test selection system to work with local changes, enabling AI assistants to leverage our ML-based test selection for automatic identification of relevant tests, allowing them to iterate faster during development.
Suhaib Mujahid implemented improvements to the Review Helper tool to improve the accuracy of suggested review comments.
Bugzilla
Thanks to Kohei, when a user enters a comment on the show bug page, the comment now appears instantly without a page reload. (see Bug 1993761)
Thanks to external contributor Logan Rosen for updating Bugzilla to use a newer version of libcmark-gfm which will solve some issues with rendering of Markdown in comments. (see Bug 1802047)
Build System and Mach Environment
The dependency on Makefile.in has been reduced. The path is still long, but it’s getting a bit closer (see Bug 847009)
Faster configure step thanks to faster warning flag checks (see Bug 1985940)
Alex Hochheiden upgraded the JavaScript minifier from jsmin to Terser and enabled minification for pdf.js to improve loading performance.
Alex Hochheiden optimized glean-gradle-plugin and NimbusGradlePlugin configuration. Gained ~10s configuration time speedup and ~200MB disk space saved.
Firefox-CI, Taskcluster and Treeherder
Your CI tasks are going to start faster! After many changes large and small, the entire Release Engineering team is proud to announce that the decision task is now as fast as the best record from 2019, and on autoland it is faster than ever before. We intend to beat the record on try as well, with a few more patches close to landing.
Windows tests now start twice as fast! Thanks to improvements in how we provision Windows machines in the cloud, Yaraslau Kurmyza and RelOps cut startup delays dramatically. Since December 9th, getting a Windows worker ready takes a third of the time it used to, which has cut Windows test wait times in half.
Ever wondered if your try-push scheduled the right tasks? Treeherder now shows unscheduled jobs too. Hit s to toggle visibility and cut down CI guesswork!
Abhishek Madan made various performance improvements to the decision tasks, totalling around a 25% improvement.
Abhishek Madan switched Decision tasks to a faster worker-type
Andrew Halberstadt kicked off the CI migration from hg.mozilla.org to GitHub.
Matt Boris added the finishing touches on D2G (Docker Worker to Generic Worker translation layer) to enable Julien Cristau to begin rolling changes out to L3 pools.
Lint, Static Analysis and Code Coverage
New include linter, available via mach lint -l includes. Unused MFBT and standard C++ headers are reported.
Calixte added support for the pdfium jbig2 decoder compiled in wasm in order to replace the pure JS version.
Firefox Translations
(Bug 1975487, 1994794, 1995403) Erik Nordin shipped significant improvements to the Translations experience when translating web pages between left-to-right and right-to-left languages.
(Bug 1967758) Erik Nordin improved the algorithm for page-language detection, centralizing the behavior in the parent process, instead of creating a separate language detector instance per content process.
Evgeny Pavlov trained a Traditional Chinese translation model
Sergio Ortiz Rojas trained an English-to-Vietnamese translation model
Evgeny Pavlov created new evaluation dashboards with expanded metrics, datasets and LLM explanations
Zeid added support for short hash when querying git2hg commit maps in Lando.
Connor Sheehan implemented uplift requests as background jobs, providing many improvements to the uplift request workflow in Lando:
Merge conflict detection at job completion time, instead of at landing time.
Uplift to multiple trains at once, with failure notification emails that provide step-by-step commands to resolve the conflict and re-submit.
Uplift assessment form linking workflow to avoid re-submitting the same form when manually resolving merge conflicts for an uplift.
Connor Sheehan made it possible to select individual commits in the stack for uplift, instead of always uplifting the parent commits for a given revision.
Connor Sheehan added a new uplift assessment linking view and hooked it into moz-phab uplift, removing a few steps between submitting an uplift request and opening the form for submission or linking to the new request.
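The short-hash support in the git2hg lookup above boils down to unique-prefix resolution. A minimal sketch of that idea, with a made-up mapping table (real Lando queries a stored commit map):

```python
# Hypothetical git -> hg commit map; the hashes are illustrative only.
git2hg = {
    "a1b2c3d4e5f6a7b8": "9f8e7d6c5b4a3210",
    "a1b2ffff00001111": "1234567890abcdef",
    "deadbeefcafef00d": "0fedcba987654321",
}

def resolve(prefix):
    """Resolve a short git hash to its hg counterpart.

    Raises LookupError if the prefix is unknown or matches more
    than one full hash (i.e. it is ambiguous).
    """
    matches = [full for full in git2hg if full.startswith(prefix)]
    if len(matches) != 1:
        raise LookupError(f"{prefix!r}: {len(matches)} matches")
    return git2hg[matches[0]]
```

The key design point is that a short hash is only valid when it is unambiguous, so the lookup must scan for all candidates rather than stopping at the first match.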
moz-phab had several new releases.
Mathew Hodson restored the --upstream argument to moz-phab submit.
Jujutsu support saw improvements to moz-phab patch, better handling of working-copy changes, and a minimum jj version bump to 0.33.
moz-phab uplift saw a few changes to enable better integration with the Lando-side changes.
Connor Sheehan added clonebundle buckets in the us-east1 GCP region to improve clone times in CI.
Julien Cristau added the new tags as Mercurial branches to mozilla-unified.
Julien Cristau and Olivier Mehani took steps to reduce OOM issues on the hg push server.
Julien Cristau resolved a Kafka issue by pruning try heads and resolving issues with try heads alerting, and Greg Cox increased the storage in Kafka in support of the mitigation.
Greg Cox implemented staggered auto-updating with reboots on the load balancers in front of hg.mozilla.org.
In an increasingly siloed internet landscape, WebRTC directly connects human voices and faces. The technology powers Audio/Video calling, conferencing, live streaming, telehealth, and more. We strive to make Firefox the client that best serves humans during those experiences.
Expanding Simulcast Support
Simulcast allows a single WebRTC video to be simultaneously transmitted at differing qualities. Some codecs can efficiently encode the streams simultaneously. Each viewer can receive the video stream that gives them the best experience for their viewing situation, whether that be using a phone with a small screen and shaky cellular link, or a desktop with a large screen and wired broadband connection. While Firefox has supported a more limited set of simulcast scenarios for some time, this year we put a lot of effort into making sure that even more of our users using even more services can get those great experiences.
We have added simulcast capabilities for H.264 and AV1. This, along with adding support for the dependency descriptor header (including for H.264), increases the number of services that can take advantage of simulcast while using Firefox.
Codec Support
Dovetailing the simulcast support, we now support more codecs doing more things on more platforms! This includes turning on AV1 support by default and adding temporal layer support for H.264. Additionally, a number of behind-the-scenes changes were made. For our users, this means a more uniform experience across devices.
Media Capture
We have improved camera resolution and frame-rate adaptation on all platforms, as well as OS-integrated screen capture on macOS. Users will have a smoother experience when joining calls, with streams that are better suited to their devices. This means smoother video and a consistent aspect ratio.
DataChannel
Improving the reliability, performance, and compatibility of our DataChannel implementation has been a focus this year. DataChannels can now be run on workers, keeping data processing off of the main thread. This was enabled by a major refactoring effort, migrating our implementation to dcsctp.
Web Compatibility
We targeted a number of areas where we could improve compatibility with the broad web of services that our users rely on.
Bug 1329847 Implement RTCDegradationPreference related functions
Bug 1835077 Support RTCEncodedAudioFrameMetadata.contributingSources
Bug 1972657 SendKeyFrameRequest Should Not Reject Based on Transceiver State
Summary
2025 has been an exciting and busy year for WebRTC in Firefox. We have broadly improved web compatibility throughout the WebRTC technology stack, and we are looking forward to another impactful year in 2026.
Mozilla is pleased to announce that Amy Keating has joined Mozilla as Chief Business Officer (CBO).
In this role, Amy will work across the Mozilla family of organizations, and alongside other business leaders, including the Mozilla Corporation’s CBO Brad Smallwood — spanning products, companies, investments, grants, and new ventures — to help ensure we are not only advancing our mission but also financially sustainable and operationally rigorous. The core of this job: making investments that push the internet in a better direction.
Keating takes on this role at a pivotal moment for Mozilla and for the responsible technology ecosystem. As Mozilla pursues a new portfolio strategy centered on building an open, trustworthy alternative to today’s closed and concentrated AI ecosystem, the organization has embraced a double bottom line economic model: one that measures success through mission impact and commercial performance. Delivering on that model requires disciplined business leadership at the highest level.
“Mozilla’s mission has never been more urgent — but mission alone isn’t enough to bring about the change we want to see in the world,” said Mark Surman, President of the Mozilla Foundation. “To build real alternatives in AI and the web, we need to be commercially successful, sustainable, and able to invest at scale. Our double bottom line depends on it. Amy is a proven, visionary business leader who understands how to align values with viable, ambitious business strategy. She will help ensure Mozilla can grow, thrive, and influence the entire marketplace.”
This role is a return to Mozilla for Keating, who previously was Mozilla Corporation’s Chief Legal Officer. Keating has also served on the Boards of Mozilla Ventures and the Mozilla Foundation. Most recently, Keating held senior leadership roles at Glean and Planet Labs, and previously spent nearly a decade across Google and Twitter. She returns to Mozilla with 20 years of professional experience advising and operating in technology organizations. In these roles — and throughout her career — she has focused on building durable businesses grounded in openness, community, and long-term impact.
“Mozilla has always been creative, ambitious, and deeply rooted in community,” said Amy Keating. “I’m excited to return at a moment when the organization is bringing its mission and its assets together in new ways — and to help build the operational and business foundation that allows our teams and portfolio organizations to thrive.”
As Chief Business Officer, Amy brings an investment and growth lens to Mozilla, supporting Mozilla’s portfolio of mission-driven companies and nonprofits, identifying investments in new entities aligned with the organization’s strategy, and helping to strengthen Mozilla’s leadership in creating an economic counterbalance to the players now dominating a closed AI ecosystem.
This work is critical not only to Mozilla’s own sustainability, but to its ability to influence markets and shape the future of AI and the web in the public interest.
“I’m here to move with speed and clarity,” said Keating, “and to think and act at the scale of our potential across the Mozilla Project.”
Read more here about Mozilla’s next era. Read here about Mozilla’s new CTO, Raffi Krikorian.
In our work on Firefox MacOS accessibility we routinely run into highly nuanced bugs in our accessibility platform API. The tree structure, an object attribute, the sequence of events, or an event payload is just off enough that we see a pronounced difference in how an AT like VoiceOver behaves. When we compare our API against other browsers like Safari or Chrome, we notice small differences that have outsized user impacts.
In cases like that, we need to dive deep. XCode’s Accessibility Inspector shows a limited subset of the API, but web engines implement a much larger set of attributes that are not shown in the inspector. This includes an advanced, undocumented, text API. We also need a way to view and inspect events and their payloads so we can compare the sequence to other implementations.
Since we started getting serious about MacOS accessibility in Firefox in 2019, we have cobbled together an ad hoc set of Swift and Python scripts to examine our work. It slowly coalesced and formalized into a Python client library for MacOS accessibility called pyax.
Recently, I put some time into making pyax not just a Python library, but a nifty command line tool for quick and deep diagnostics. There are several subcommands I’ll introduce here. And I’ll leave the coolest for last, so hang on.
pyax tree
This very simply dumps the accessibility tree of the given application. But hold on, there are some useful flags you can use to drill down to the issue you are looking for:
--web
Only output the web view’s subtree. This is useful if you are troubleshooting a simple web page and don’t want to be troubled with the entire application.
--dom-id
Dump the subtree of the given DOM ID. This obviously is only relevant for web apps. It allows you to cut the noise and only look at the part of the page/app you care about.
--attribute
By default the tree dumper only shows you a handful of core attributes. Just enough to tell you a bit about the tree. You can include more obscure attributes by using this argument.
--all-attributes
Print all known attributes of each node.
--list-attributes
List all available attributes on each node in the tree. Sometimes you don’t even know what you are looking for and this could help.
Implementation note: An app can provide an attribute without advertising its availability, so don’t rely on this alone.
--list-actions
List supported actions on each node.
--json
Output the tree in a JSON format. This is useful with --all-attributes to capture and store a comprehensive state of the tree for comparison with other implementations or other deep dives.
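Since pyax is also a Python library, you can pair --json with a small script to post-process a dump. A minimal sketch that shells out to the CLI using the flags documented above (the positional app argument and the JSON shape, including a "children" key, are assumptions here, and "org.mozilla.firefox" is just an example target):

```python
import json
import subprocess

def dump_tree(app="org.mozilla.firefox"):
    """Run `pyax tree` with the documented flags and parse the JSON dump.

    Assumes `pyax` is on PATH; the exact invocation is an assumption
    based on the flags described above.
    """
    out = subprocess.run(
        ["pyax", "tree", app, "--web", "--all-attributes", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def count_nodes(node):
    """Count nodes in a dumped tree, assuming a `children` list key."""
    return 1 + sum(count_nodes(c) for c in node.get("children", []))
```

Capturing a full --all-attributes dump like this before and after a patch, then diffing the two JSON files, is a quick way to spot regressions.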
pyax observe
This is a simple event logger that allows you to output events and their payloads. It takes most of the arguments above, like --attribute and --list-actions.
In addition:
--event
Observe specific events. You can provide this argument multiple times for more than one event.
--print-info
Print the bundled event info.
pyax inspect
For visually inclined users, this command allows them to hover over the object of interest, click, and get a full report of its attributes, subtree, or any other useful information. It takes the same arguments as above, and more! Check out --help.
Getting pyax
Run pip install pyax[highlight] and it’s all yours. Please contribute with code, documentation, or good vibes (keep your vibes separate from the code).
Also known as “you can just put whatever you want in a jitdump you know?”
When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.
There’s a decent and fairly common format called jitdump, which originated in perf but has since been adopted in more places. The basic thrust of the parts we care about is: you have names associated with address ranges.
Of course, the basic range you’d expect to name is “function foo() was compiled to bytes 0x1000-0x1400”.
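The core mechanic, resolving a sampled instruction pointer against those named ranges, can be sketched with a simple interval lookup. The addresses and names below are made up, and this models the idea rather than the actual jitdump binary layout:

```python
import bisect

# Hypothetical (name, start, end) entries, the way jitdump-style
# "code load" records would describe them. Sorted by start address.
ranges = [
    ("foo",     0x1000, 0x1400),
    ("bar",     0x1400, 0x1800),
    ("ic_stub", 0x2000, 0x2100),
]
starts = [start for _, start, _ in ranges]

def symbolize(addr):
    """Map a sampled address to the name of the range containing it."""
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0:
        name, start, end = ranges[i]
        if start <= addr < end:
            return name
    # Unknown code: the profiler can only shrug and show the address.
    return hex(addr)
```

Any address falling in a gap (or before the first range) comes back as a raw hex string, which is exactly the "profiler will shrug" behavior you get without a jitdump.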
Suppose you get that working. You might get a profile that looks like this one.
This profile is pretty useful: You can see from the flame chart what execution tier created the code being executed, you can see code from inline caches etc.
Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation are fairly ‘template’ style. That is to say, we emit (relatively) stable chunks of code for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes in the case of Ion, our top-tier compiler.
What if we looked more closely at that?
Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on entry and pops it off on exit. I went through and added AutoCreatedBy to each of the LIR ops’ codegen methods (e.g. CodeGenerator::visit*). Then I rigged up our jitdump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy as the ‘function name’ for each sequence of instructions generated while the AutoCreatedBy was live.
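A rough Python analogue of that chaining (the real AutoCreatedBy is a C++ RAII class inside SpiderMonkey; the method names and label format here are illustrative):

```python
import contextlib

creator_stack = []

@contextlib.contextmanager
def created_by(name):
    """Push a creator label for the duration of a code-gen scope,
    mimicking AutoCreatedBy's RAII push on entry / pop on exit."""
    creator_stack.append(name)
    try:
        yield
    finally:
        creator_stack.pop()

emitted = []  # the range names we would write into the jitdump

def emit_range(func_name):
    # Name the range as the function name plus the whole creator chain.
    emitted.append(func_name + ": " + "/".join(creator_stack))

# Nested scopes produce a chained label, e.g. during codegen of a call:
with created_by("visitCallGeneric"):
    with created_by("emitCallNative"):
        emit_range("foo")
```

Because the context managers nest, the label captures the full path through codegen at the moment the bytes were emitted, which is what makes the inverted-call-tree views later in the post possible.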
While it doesn’t look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree.
For example, we spend 1.9% of the profiled time in visitHasShape for a single self-hosted function, which is basically:
Ok so that proves out the value. What if we just say... hmmm. I actually want to aggregate across all compilation; ignore the function name, just tell me the compilation path here.
Even more interesting (easier to interpret) is the inverted call tree:
So across the whole program, we’re spending basically 5% of the time doing guardShape. I think that’s a super interesting slicing of the data.
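That aggregation, dropping the function name and keying on the compilation path alone, can be sketched as follows (the sample labels are made up, following the "function: creator/chain" naming scheme above):

```python
from collections import Counter

# Hypothetical profiler samples, each labeled the way the jitdump
# naming scheme above labels ranges: "function: creator/chain".
samples = [
    "foo: visitCallGeneric/emitCallNative",
    "bar: visitGuardShape",
    "baz: visitGuardShape",
    "foo: visitGuardShape",
    "quux: visitHasShape",
]

# Aggregate across all compilations: strip the function name and
# count samples per compilation path.
by_path = Counter(label.split(": ", 1)[1] for label in samples)

for path, count in by_path.most_common():
    print(f"{count / len(samples):5.1%}  {path}")
```

With real data, this is the view that surfaces program-wide costs like guardShape that no single function's profile would reveal.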
Is it actionable? I don’t know yet. I haven’t opened any bugs really on this yet; a lot of the highlighted code is stuff where it’s not clear that there is a faster way to do what’s being done, outside of engine architectural innovation.
The reason to write this blog post is basically to share that, man, we can slice-and-dice our programs in so many interesting ways. I’m sure there’s more to think of. For example, not shown here was an experiment: I added AutoCreatedBy inside a single macro-assembler method set (around barriers) to see if I could actually measure GC barrier cost (it’s low on the benchmarks I checked).
So yeah. You can just... put stuff in your JIT dump file.
Edited to Add: I should mention this code is nowhere. Given I don’t entirely know how actionable this ends up being, and the code quality is subpar, I haven’t even pushed this code. Think of this as an inspiration, not a feature announcement.
The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you’re given.
I think we can do better. Making that happen is now central to what Mozilla is doing.
What we did for the web
Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible — dropping Internet Explorer’s market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.
There’s a reason the browser is called a “user agent.” It was designed to be on your side — blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.
Now AI is becoming the new intermediary. It’s what I’ve started calling “Layer 8” — the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.
The question we have to ask is straightforward: Whose side will your new user agent be on?
Why closed systems are winning (for now)
We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you’re a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing — it all comes bundled together in a package that just works. I understand the appeal firsthand, because I’ve made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.
The open-source AI ecosystem is a different story. It’s powerful and advancing rapidly, but it’s also deeply fragmented — models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don’t have to spare. This is the core challenge we face, and it’s important to name it clearly: What we’re dealing with isn’t a values problem where developers are choosing convenience over principle. It’s a developer experience problem. And developer experience problems can be solved.
The ground is already shifting
We’ve watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway — not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn’t match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.
AI has the potential to follow the same path — but only if someone builds it. And several shifts are already reshaping the landscape:
Small models have gotten remarkably good. 1 to 8 billion parameters, tuned for specific tasks — and they run on hardware that organizations already own;
The economics are changing too. As enterprises feel the constraints of closed dependencies, self-hosting is starting to look like sound business rather than ideological commitment (companies like Pinterest have attributed millions of dollars in savings to migrating to open-source AI infrastructure);
Governments want control over their supply chain. Governments are becoming increasingly unwilling to depend on foreign platforms for capabilities they consider strategically important, driving demand for sovereign systems; and,
Consumer expectations keep rising. People want AI that responds instantly, understands their context, and works across their tools without locking them into a single platform.
The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn’t win by being more principled than the alternatives. Openness wins when it becomes the better deal — cheaper, more capable, and just as easy to use.
Where the cracks are forming
If openness is going to win, it won’t happen everywhere at once. It will happen at specific tipping points — places where the defaults haven’t yet hardened, where a well-timed push can change what becomes normal. We see four.
The first is developer experience. Developers are the ones who actually build the future — every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that’s where most of the building is happening. But developers don’t want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they’ll build the open ecosystem themselves.
The second is data. For a decade, the assumption has been that data is free to scrape — that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it’s used and a share in the value it creates. We’re moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there’s still a chance to build it right.
The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.
The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open — through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.
What an open stack could look like
Today’s dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next — data improves models, models improve applications, applications generate more data that only the platform can use. It’s a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don’t build on the platform; you build inside it.
There’s another path. The combination of Linux, Apache, MySQL, and PHP won because it became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.
We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:
Open developer interfaces at the top. SDKs, guardrails, workflows, and orchestration that don’t lock you into a single vendor;
Open data standards underneath. Provenance, consent, and portability built in by default, so you know where your training data came from and who has rights to it;
An open model ecosystem below that. Smaller, specialized, interchangeable models that you can inspect, tune to your values, and run where you need them; and
Open compute infrastructure at the foundation. Distributed and federated hardware across cloud and edge, not routed through a handful of hyperscalers.
Pieces of this stack already exist — good ones, built by talented people. The task now is to fill in the gaps, connect what’s there, and make the whole thing as easy to use as the closed alternatives. That’s the work.
Why open source matters here
If you’ve followed Mozilla, you know the Manifesto. For almost 20 years, it’s guided what we build and how — not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:
Human agency. In a world of AI agents, it’s more important than ever that technology lets people shape their own experiences — and protects privacy where it matters most;
Decentralization and open source. An open, accessible internet depends on innovation and broad participation in how technology gets created and used. The success of open-source AI, built around transparent community practices, is critical to making this possible; and
Balancing commercial and public benefit. The direction of AI is being set by commercial players. We need strong public-benefit players to create balance in the overall ecosystem.
Open-source AI is how these principles become real. It’s what makes plurality possible — many intelligences shaped by many communities, not one model to rule them all. It’s what makes sovereignty possible — owning your infrastructure rather than renting it. And it’s what keeps the door open for public-benefit alternatives to exist alongside commercial ones.
What we’ll do in 2026
The window to shape these defaults is still open, but it won’t stay open forever. Here’s where we’re putting our effort — not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.
Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack — model routing, evaluation, guardrails, memory, orchestration — into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.
Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.
Learn from real deployments. Strategy that isn’t grounded in practical experience is just speculation, so we’re deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.
Invest in the ecosystem. We’re not just building; we’re backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can’t do everything ourselves, and we shouldn’t try. The goal is to put resources behind the people and teams already doing the work.
Show up for the community. The open-source AI ecosystem is vast, and it’s hard to know what’s working, what’s hype, and where the real momentum is building. We want to be useful here. We’re launching a newsletter to track what’s actually happening in open AI. We’re running meetups and hackathons to bring builders together. We’re fielding developer surveys to understand what people actually need. And at MozFest this year, we’re adding a dedicated developer track focused on open-source AI. If you’re doing important work in this space, we want to help it find the people who need to see it.
Are you in?
Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it — we just want to help it succeed. There’s a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.
We kept the web open not by asking anyone’s permission, but by building something that worked better than the alternatives. We’re ready to do that again.
So: Are you in?
If you’re a developer building toward an open source AI future, we want to work with you. If you’re a researcher, investor, policymaker, or founder aligned with these goals, let’s talk. If you’re at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist — that keeps everyone honest.
The future of intelligence is being set now. The question is whether you’ll own it, or rent it.
We’re launching a newsletter to track what’s happening in open-source AI — what’s working, what’s hype, and where the real momentum is building. Sign up here to follow along as we build.
Read more here about our emerging strategy, and how we’re rewiring Mozilla for the era of AI.
While nearly half of all Firefox users have installed an add-on, it’s safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…
Falling Snow Animated Theme
Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.
Privacy Badger
The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.
Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage “supercookies,” canvas fingerprinting, and other sneaky tracking methods.
Adaptive Tab Bar Color
Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you’re visiting.
It’s beautifully simple and sublime. No setup required, but you’re free to make subtle adjustments to color contrast patterns and assign specific colors for websites.
Rainy Spring Sakura by MaDonna
Created by one of the most prolific theme designers in the Firefox community, MaDonna, we love Rainy Spring Sakura’s bucolic mix of calming colors.
It’s like instant Zen mode for Firefox.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.
Other Firefox users seem to agree…
“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”
As is tradition, we’re wrapping up 2025 for Mozilla’s localization efforts and offering a sneak peek at what’s in store for 2026 (you can find last year’s blog post here).
Pontoon’s metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.
30 projects and 469 locales (+100 compared to 2024) set up in Pontoon.
5,019 new user registrations
1,190 active users, submitting at least one translation, on average 233 users per month (+5% Year-over-Year)
551,378 submitted translations (+18% YoY)
472,195 approved translations (+22% YoY)
13,002 new strings to translate (-38% YoY).
The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase of Fluent strings alone). That is not surprising, given the amount of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.
As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).
Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla’s localization over the last 12 months — or plan to do so in 2026. There is always space for new contributors!
Pontoon Development
A significant part of the work on Pontoon in 2025 isn’t immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.
One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step to reduce complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more “Bookmarks saved: %1$s” instead of “%1$s bookmarks saved”) and allows languages to provide more natural-sounding translations.
In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.
Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.
Finally, we’ve been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we’re actively working toward feature parity with Transvision to better support automation and integrations.
Community
Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.
This doesn’t mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla’s localization work possible.
Internally, these spotlights have played an important role in advocating on behalf of the community. By bringing real voices and contributions to the forefront, we’ve helped reinforce the message that investing in people — not just tools — is essential to the long-term health of Mozilla’s localization ecosystem.
What’s coming in 2026
As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.
This excerpt comes from last year’s blog post, and while it took longer than expected, the good news is that we’re finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.
In parallel, we “silently” launched the Mozilla Language Portal, a unified hub that reflects Mozilla’s unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.
On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon’s translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.
Last but not least, we plan to keep investing in our community. While we don’t know yet what that will look like in practice, keep an eye on this blog for updates.
If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!
Thank you!
As we look toward 2026, we’re grateful for the people who make Mozilla’s localization possible. Through shared effort and collaboration, we’ll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.
VStarcam is an important brand of cameras based on the PPPP protocol. Unlike the LookCam cameras I looked into earlier, these are often being positioned as security cameras. And they in fact do a few things better like… well, like having a mostly working authentication mechanism. In order to access the camera one has to know its administrator password.
So much for the theory. When I looked into the firmware of the cameras I discovered a surprising development: over the past years this protection has been systematically undermined. Various mechanisms have been added that leak the access password, and in several cases these cannot be explained as accidents. The overall tendency is clear: for some reason VStarcam really wants access to their customers’ passwords.
A reminder: “P2P” functionality based on the PPPP protocol means that these cameras will always communicate with and be accessible from the internet, even when located on a home network behind NAT. Short of installing a custom firmware, this can only be addressed by configuring the network firewall to deny internet access.
Contents
How to recognize affected cameras
Not every VStarcam camera has “VStarcam” printed on the side. I have seen reports of VStarcam cameras being sold under the brand names Besder, MVPower, AOMG, OUSKI, and there are probably more.
Most cameras should be recognizable by the app used to manage them. Any camera managed by one of these apps should be a VStarcam camera: Eye4, EyeCloud, FEC Smart Home, HOTKam, O-KAM Pro, PnPCam, VeePai, VeeRecon, Veesky, VKAM, VsCam, VStarcam Ultra.
Downloading the firmware
VStarcam cameras have a mechanism to deliver firmware updates (LookCam cameras prove that this shouldn’t be taken for granted). The app managing the camera will request update information from an address like http://api4.eye4.cn:808/firmware/1.2.3.4/EN where 1.2.3.4 is the firmware version. If a firmware update is available the response will contain a download server and a download path. The app sends these to the device which then downloads and installs the updated firmware.
Both requests are performed over plain HTTP, and this is already the first issue. If an attacker can produce a manipulated response on the network that either the app or the device is connected to, they will be able to install a malicious update on the camera. The former is particularly problematic, as the camera owner may connect to an open WiFi or similarly untrusted network while away from home.
The last part of a firmware version is a build number which is ignored for the update requests. The first part is a vendor ID where only a few options seem relevant (I checked 10, 48 and 66). The rest of the version number can be easily enumerated. Many firmware branches don’t have an active update, and when they do some updates won’t download because the servers in question appear no longer operational. Still, I found 380 updates this way.
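The enumeration described above can be sketched as a small script. This is a hypothetical probe, not the author’s actual tooling: the URL pattern comes from the post, while the version ranges and the “non-empty response means an update exists” heuristic are assumptions.

```python
import itertools
import urllib.request

# URL pattern from the post; the build number (last version part) is
# ignored by the update server, so 0 works for probing.
UPDATE_URL = "http://api4.eye4.cn:808/firmware/{version}/EN"

def candidate_versions(vendor_ids=(10, 48, 66), second=range(256), third=range(256)):
    # Build "vendor.X.Y.0" version strings to try. The ranges for the
    # middle two parts are guesses, not documented limits.
    for vendor, s, t in itertools.product(vendor_ids, second, third):
        yield f"{vendor}.{s}.{t}.0"

def check_update(version, timeout=5):
    # Returns the raw update metadata, or None if no update is offered
    # or the server is unreachable.
    try:
        with urllib.request.urlopen(UPDATE_URL.format(version=version), timeout=timeout) as resp:
            body = resp.read().decode("utf-8", "replace")
    except OSError:
        return None
    return body or None
```

Iterating `candidate_versions()` and calling `check_update()` on each would reproduce the kind of survey described here, at the cost of a few hundred thousand HTTP requests.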
I managed to unpack all but one of these updates. Firmware version 10.1.110.2 wasn’t for a camera but rather some device with an HDMI connector and without any P2P functionality – probably a Network Video Recorder (NVR). Firmware version 10.121.160.42 wasn’t using PPPP but something called NHEP2P and an entirely different application-level protocol. Ten updates weren’t updating the camera application but only the base system. This left 367 firmware versions for this investigation.
Caveats of this survey
I do not own any VStarcam hardware, nor would it be feasible to investigate hundreds of different firmware versions with real hardware. The results of this article are based solely on reverse engineering, emulation, and automated analysis via running Ghidra in headless mode. While I can easily emulate a PPPP server, doing the same for the VStarcam cloud infrastructure isn’t possible; I simply don’t know how it behaves. Similarly, the firmware’s interaction with hardware had to be left out of the emulation. While I’m still quite confident in my results, these limitations could introduce errors.
More importantly, I checked only a limited number of firmware versions manually. Most of them were checked automatically, and I typically only looked at a few lines of decompiled code that my scripts extracted. There is potential for false negatives here; I expect that there are more issues with VStarcam firmware than what’s listed here.
VStarcam’s authentication approach
When an app communicates with a camera, it sends commands like GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888. Despite the looks of it, these aren’t HTTP requests passed on to a web server. Instead, the firmware handles these in the function P2pCgiParamFunction, which doesn’t even attempt to parse the request. The processing code looks for substrings like check_user.cgi to identify the command (so you’d better not set check_user.cgi as your access password). Parameter extraction works via similar substring matching.
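To illustrate why that matters, here is a simplified model of substring-based dispatch. This is not the firmware’s actual code, just a sketch of the pattern: the handler looks for known command names anywhere in the raw request instead of parsing the path and query string, so a password that happens to contain a command name derails the dispatcher.

```python
# Simplified model of substring-based request dispatch (illustrative only).
KNOWN_COMMANDS = ["check_user.cgi", "get_online_log.cgi", "clear_log.cgi"]

def dispatch(raw_request: str) -> str:
    # Substring match anywhere in the request, not proper path parsing.
    for command in KNOWN_COMMANDS:
        if command in raw_request:
            return command
    return "unknown"

def get_param(raw_request: str, name: str) -> str:
    # Naive substring-based parameter extraction, in the same spirit.
    marker = name + "="
    start = raw_request.find(marker)
    if start < 0:
        return ""
    start += len(marker)
    end = raw_request.find("&", start)
    return raw_request[start:] if end < 0 else raw_request[start:end]
```

With this scheme, a request for some other endpoint whose password parameter contains “check_user.cgi” would be misidentified as a check_user.cgi command, which is exactly the hazard noted above.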
It’s worth noting that these cameras have a very peculiar authentication system which VStarcam calls “dual authentication.” Here is how the Eye4 application describes it:
The dual authentication mechanism is a measure to upgrade the whole system security
The device will double check the identity of the visitor and does not support the old version of app.
Considering the security risk of possible leakage, the plaintext password mode of the device was turned off and ciphertext access was used.
After the device is added for the first time, it will not be allowed to be added for a second time, and it will be shared by the person who has added it.
I’m not saying that this description is utter bullshit but there is a considerable mismatch with the reality that I can observe. The VStarcam firmware cannot accept anything other than plaintext passwords. Newer firmware versions employ obfuscation on the PPPP level, but this hardly deserves the name “ciphertext”.
What I can see is: once a device is enrolled into dual authentication, the authentication is handled by function GetUserPri_doubleVerify rather than GetUserPri. There isn’t a big difference between the two, both will try the credentials from the loginuse/loginpas parameters and fall back to the user/pwd credentials pair. Function GetUserPri_doubleVerify merely checks a different password.
From the applications I get the impression that the dual authentication password is automatically generated and probably not even shared with the user but stored in their cloud account. This is an improvement over the regular password that defaults to 888888 and allowed these cameras to be enrolled into a botnet. But it’s still a plaintext password used for authentication.
There is a second aspect to dual authentication. When dual authentication is used, the app is supposed to make a second authentication call to eye4_authentication.cgi. The loginAccount and loginToken parameters here appear to belong to the user’s cloud account, apparently meant to make sure that only the right user can access a device.
Yet in many firmware versions I’ve seen, the eye4_authentication.cgi request always succeeds. The function meant to perform a web request is simply hardcoded to return the success code 200. Other firmware versions actually make a request to https://verification.eye4.cn, yet this server also seems to produce a 200 response regardless of what parameters I try. It seems that VStarcam never made this feature work the way they intended it to.
None of this stopped VStarcam from boasting on their website merely a year ago:
You can certainly count on anything saying “financial grade encryption” being bullshit. I have no idea where AES comes into the picture here, I haven’t seen it being used anywhere. Maybe it’s their way of saying “we use TLS when connecting to our cloud infrastructure.”
Endpoint protection
A reasonable approach to authentication is: authentication is required before any requests unrelated to authentication can be made. This is not the approach taken by VStarcam firmware. Instead, some firmware versions decide for each endpoint individually whether authentication is necessary. Other versions put a bunch of endpoints outside of the code enforcing authentication.
The calls explicitly excluded from authentication differ by firmware version but are for example: get_online_log.cgi, show_prodhwfg.cgi, ircut_test.cgi, clear_log.cgi, alexa_ctrl.cgi, server_auth.cgi. For most of these it isn’t obvious why they should be accessible to unauthenticated users. But get_online_log.cgi caught my attention in particular.
Unauthenticated log access
So a request like GET /get_online_log.cgi?enable=1 can be sent to a camera without any authentication. This isn’t a request that any of the VStarcam apps seem to support, so what does it do?
Despite the name, this isn’t a download request; it rather sets a flag for the current connection. The logic behind this involves many moving parts, including a Linux kernel module, but the essence is this: whenever the application logs something via the LogSystem_WriteLog function, the application won’t merely print it to stderr and write it to the log file on the SD card but will also send it to any connection that has this flag set.
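The fan-out just described can be modeled in a few lines. This is not the firmware’s actual code (which spans the application and a kernel module); it is a hedged sketch of the behavior: every message logged is mirrored to every connection that has set the flag.

```python
import sys

# Illustrative model of the log fan-out, not actual firmware code.
class Connection:
    def __init__(self):
        self.log_enabled = False  # set via get_online_log.cgi?enable=1
        self.received = []        # messages pushed to this peer

    def send(self, message: str):
        self.received.append(message)

connections = []

def log_system_write_log(message: str):
    # Local logging (stderr here standing in for stderr + SD-card file)...
    print(message, file=sys.stderr)
    # ...plus mirroring to every connection that enabled the flag.
    for conn in connections:
        if conn.log_enabled:
            conn.send(message)
```

An attacker’s connection with the flag set would then passively receive every subsequent log line, including any request parameters logged verbatim.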
What does the application log? Lots and lots of stuff. On average, VStarcam firmware has around 1500 such logging calls. For example, it could log security tokens:
Reminder: these requests contain the authentication password as a parameter. So an attacker can connect to a vulnerable device, request logs, and wait for the legitimate device owner to connect. Once they do, their password will show up in the logs – voilà, the attacker has access now.
VStarcam appears to be at least somewhat aware of this issue because some firmware versions contain code “censoring” password parameters prior to logging:
But that’s only the beginning of the story of course.
Explicit password leaking via logs
In addition to the logging calls where the password leaks as a (possibly unintended) side effect, some logging calls are specifically designed to write the device password to the log. For example, the function GetUserPri, which handles authentication when dual authentication isn’t enabled, will often do something like this on a failed login attempt:
These aren’t the parameters of a received login attempt but rather what the parameters should look like for the request to succeed. And if the attacker enabled log access for their connection, they will get the device credentials handed to them on a silver platter – without even having to wait for the device owner to connect.
If dual authentication is enabled, function GetUserPri_doubleVerify often contains a similar call:
LogSystem_WriteLog("web.c","GetUserPri_doubleVerify",536,0,"pri[%d] system OwnerPwd[%s] app Pwd[%s]",pri,gOwnerPassword,gAppPassword);
Log uploading
What got me confused at first were the firmware versions that would log the “correct” password on failed authentication attempts but lacked the capability for unauthenticated log access. When I looked closer I found the function DoSendLogToNodeServer. The firmware receives a “node configuration” from a server which includes a “push IP” and the corresponding port number. It then opens a persistent TCP connection to that address (unencrypted of course), so that DoSendLogToNodeServer can send messages to it.
Despite the name, this function doesn’t upload all of the application logs. There are only three to four DoSendLogToNodeServer calls in the firmware versions I looked at, and two are invariably found in the function P2pCgiParamFunction, in code running on the first failed authentication attempt:
This is sending both the failed authentication request and the correct passwords to a VStarcam server. So while the password isn’t being leaked here to everybody who knows how to ask, it’s still being leaked to VStarcam themselves. And anybody who is eavesdropping on the device’s traffic of course.
A few firmware versions have log upload functionality in a function called startUploadLogToServer; here, all logging output really is uploaded to the server. This one isn’t called unconditionally, however, but rather enabled by the setLogUploadEnable.cgi endpoint. An endpoint which, you guessed it, can be accessed without authentication. But at least these firmware versions don’t seem to have any explicit password logging, only the “regular” logging of requests.
Password-leaking backdoor
With some considerable effort all of the above could be explained as debugging functionality which was mistakenly shipped to production. VStarcam wouldn’t be the first company to fail to realize that functionality labeled “for debugging purposes only” will still be abused if released with the production build of their software. But I found yet another password leak which can only be described as a backdoor.
At some point VStarcam introduced a second version of their get_online_log.cgi API. When that second version is requested the device will respond with something like:
result=0;
index=12345678;
str=abababababab;
The result=0 part is typical and indicates that authentication (or the lack thereof in this case) was successful. The other two values are unusual, and eventually I decided to check what they were about. As it turned out, str is a hex-encoded version of the device password after it was XOR’ed with a random byte. And index is an obfuscated representation of that byte.
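Decoding this doesn’t require knowing how index hides the key byte. Since a single-byte XOR has only 256 possible keys, one can simply try all of them and keep the decodings that look like printable passwords – a hedged sketch, as the index obfuscation scheme itself is not reproduced here:

```python
def decode_candidates(hex_str: str) -> dict:
    # `hex_str` is the `str` value from the get_online_log.cgi v2 response:
    # the device password, XOR'ed with one random byte, then hex-encoded.
    # We brute-force the key byte instead of decoding `index`.
    data = bytes.fromhex(hex_str)
    results = {}
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        if all(32 <= b < 127 for b in decoded):  # printable ASCII only
            results[key] = decoded.decode("ascii")
    return results
```

For a six-character password the candidate list stays short, and the real password (often still the 888888 default) is usually obvious at a glance.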
I can only explain it like this: somebody at VStarcam thought that leaking passwords via log output was too obvious, people might notice. So they decided to expose the device password in a more subtle way, one that only they knew how to decode (unless somebody notices this functionality and spends two minutes studying it in the firmware).
Mind you, even though this is clearly a backdoor I’m still not ruling out incompetence. Maybe VStarcam made a large enough mess with their dual authentication that their customer support needs to recover device access on a regular basis. However, they do have device reset functionality that should normally be used for this scenario.
In the end, for their customers it doesn’t matter what the intention was. The result is a device that cannot be trusted with protecting access. For a security camera this is an unforgivable flaw.
Establishing a timeline
Now we are coming to the tough questions. Why do some firmware versions have this backdoor functionality while others don’t? When was this introduced? In what order? What is the current state of affairs?
You might think that after compiling the data on 367 firmware versions the answers would be obvious. But the data is so inconsistent that drawing any conclusions is really difficult. Thing is, we aren’t dealing with a single evolving codebase here. We aren’t even dealing with two codebases or a dozen of them. 367 firmware versions are 367 different codebases. These codebases are related, they share some code here and there, but they are all being developed independently.
I’ve seen this development model before. What VStarcam appears to be doing is: for every new camera model they take some existing firmware and fork it. They adjust that firmware for the new hardware, they probably add new features as well. None of this work makes it into the original firmware unless it is explicitly backported. And since VStarcam is maintaining hundreds of firmware variants, the older ones are usually only receiving maintenance changes if any at all.
To make this mess complete, VStarcam’s firmware version numbers don’t make any sense at all. And I don’t mean the fact that VStarcam releases the same camera under 30 different model names, so there is no chance of figuring out the model to firmware version mapping. It’s also the firmware version numbers themselves.
As I’ve already mentioned, the last part of the firmware version is the build number, increased with each release. The first part is the vendor ID: firmware versions starting with 48 are VStarcam’s global releases whereas 66 is reserved for their Russian distributor (or rather was, I think). Current VStarcam firmware is usually released with vendor ID 10, however, standing for… who knows, VeePai maybe? This leaves the two version parts in between, and I couldn’t find any logic here whatsoever. Like, firmware versions sharing the third part of the version number would sometimes be closely related, but only sometimes. At the same time the second part of the version number is supposed to represent the camera model, but that’s clearly not always correct either.
I ended up extracting all the logging calls from all the firmware versions and using that data to calculate a distance between every firmware version pair. I then fed this data into GraphViz and asked it to arrange the graph for me. It gave me the VStarcam spiral galaxy:
Click the image above to see the larger and slightly interactive version (it shows additional information when the mouse pointer is at a graph node). The green nodes are the ones that don’t allow access to device logs. Yellow are the ones providing unauthenticated log access, always logging incoming requests including their password parameters. The orange ones have additional logging that exposes the correct password on failed authentication attempts – or they call the DoSendLogToNodeServer function to send the correct password to a VStarcam server. The red ones have the backdoor in the get_online_log.cgi API leaking passwords. Finally, pink are the ones which pretend to improve things by censoring parameters of logged requests – yet all of these, without exception, leak the password via the backdoor in the get_online_log.cgi API.
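The post doesn’t spell out the distance metric used over the extracted logging calls. As a hedged illustration, one plausible choice would be the Jaccard distance between the sets of logging-call strings found in each pair of firmware versions:

```python
def jaccard_distance(calls_a: set, calls_b: set) -> float:
    # Distance between two firmware versions based on the sets of logging
    # calls extracted from each: 0.0 for identical sets, 1.0 for disjoint.
    # This is an assumed metric, not necessarily the one used for the graph.
    if not calls_a and not calls_b:
        return 0.0
    shared = len(calls_a & calls_b)
    total = len(calls_a | calls_b)
    return 1.0 - shared / total
```

Feeding the resulting pairwise distances into a force-directed layout (as GraphViz does) naturally pulls closely related firmware versions into the clusters visible in the graph.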
Note: Firmware version 10.165.19.37 isn’t present in the graph because it is somehow based on an entirely different codebase with no relation to the others. It would be red in the graph however, as the backdoor has been implemented here as well.
Not only does this graph show the firmware versions as clusters, it’s also possible to approximately identify the direction of time for each cluster. Let’s add cluster names and time arrows to the image:
Of course this isn’t a perfect representation of the original data, and I wasn’t sure whether it could be trusted. Are these clusters real or merely an artifact produced by the graph algorithm? I verified things manually and could confirm that the clusters are in fact distinctly different on the technical level, particularly when considering the update format:
Clusters A and B represent firmware for ARM processors. I’m unsure what caused the gap between the two clusters, but cluster A contains firmware from the years 2019 and 2020, while cluster B mostly covers 2021 and 2022. Development pretty much stopped here, the only exception being the four red firmware versions, which are recent. Updates use the “classic” ZIP format here.
Cluster C covers years 2019 to 2022. Quite remarkably, in these years the firmware from this cluster moved from ARM processors and LiteOS to MIPS processors and Linux. The original updates based on VStarcam Pack System were replaced by the VeePai-branded ZIP format and later by Ingenic updates with LZO compression. All that happened without introducing significant changes to the code but rather via incremental development.
Cluster D contains firmware for the MIPS processors from years 2022 and 2023. Updates are using the VeePai-branded ZIP format.
Cluster E formed around 2023, and there is still some development being done here. It uses MIPS processors like cluster D, yet the update format is different (what I called VeePai updates in my previous blog post).
Cluster F has seen continuous development since approximately 2022; this is firmware based on Ingenic’s MIPS hardware and the most active branch of VStarcam development. Originally the VeePai-branded ZIP format was used for updates; this was later transitioned to Ingenic updates with LZO compression and finally to the same format with jzlcma compression.
With the firmware versions ordered like this I could finally make some conclusions about the introduction of the problematic features:
Unauthenticated log access via the get_online_log.cgi API was introduced in cluster B around 2022.
Logging the correct password on failed attempts was introduced independently in cluster C. In fact, some firmware versions had this in 2020 already.
In 2021 cluster C also introduced the DoSendLogToNodeServer function, sending the correct password to a VStarcam server on the first failed login attempt.
Unauthenticated log access and logging the correct password appear to have been combined in cluster D in 2023.
Cluster E initially also adopted the approach of exposing log access and logging the device password on failed attempts, adding the sending of the correct password to a VStarcam server to the mix. However, starting in 2024, firmware versions with the get_online_log.cgi backdoor started popping up here, and these have all other password leaks removed. They even censor passwords in logged request parameters. Either there were security considerations at play, or the other ways to expose the password were considered unnecessary at this point and too obvious.
Cluster F also introduced logging the device password on failed attempts around 2023. This cluster appears to be the origin of the get_online_log.cgi backdoor; it was introduced here around 2024. Unlike in cluster E, this backdoor didn’t replace the existing password leaks but merely complemented them. In fact, while cluster F initially “censored” parameters so that logged requests wouldn’t leak passwords, this measure appears to have been dropped later in 2024. Current cluster F firmware tends to have all the issues described in this post simultaneously. Whatever security considerations may have driven the changes in cluster E, the people in charge of cluster F clearly disagreed.
The impact
So, how bad is it? Knowing the access password unlocks the camera’s main functionality: audio and video recordings. But these cameras have been known for vulnerabilities allowing execution of arbitrary commands. Also, newer cameras have an API that will start a telnet server with hardcoded and widely known administrator credentials (older cameras had this telnet server running by default). So we have to assume that a compromised camera could become part of a botnet or be used as a starting point for attacks against a network.
But this requires accessing the camera first, and most VStarcam cameras won’t be exposed to the internet directly. They will only be reachable via the PPPP protocol, and for that the attackers would need to know the device ID. How would they get it?
There are a number of ways, most of which I’ve already discussed before. For example, anybody who was briefly connected to your network could have collected the device IDs of your cameras. The script to do that won’t currently work with newer VStarcam cameras because these obfuscate the traffic at the PPPP level, but the necessary adjustments aren’t exactly complicated.
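Collecting device IDs on a local network boils down to broadcasting a PPPP LAN search packet and reading the replies. Below is a minimal sketch based on publicly documented PPPP research – the port and packet constants come from that research and may not match every camera generation, and the reply handling is deliberately simplified (real replies need parsing to extract the device ID fields):

```python
import socket
import struct

PPPP_PORT = 32108  # commonly documented PPPP control port

def build_lan_search() -> bytes:
    # PPPP packets start with magic byte 0xF1; 0x30 is the LAN search
    # message type, followed by a 16-bit big-endian payload length (zero).
    return struct.pack(">BBH", 0xF1, 0x30, 0)

def discover(timeout: float = 2.0) -> list[bytes]:
    """Broadcast a LAN search and collect raw replies.

    Devices on the network answer with packets embedding their device ID
    (prefix, serial number and verification code)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(build_lan_search(), ("255.255.255.255", PPPP_PORT))
    replies = []
    try:
        while True:
            data, _addr = sock.recvfrom(1024)
            replies.append(data)
    except socket.timeout:
        pass
    return replies
```

The point is how little effort this takes: a four-byte broadcast packet is all that’s needed to make every camera on the network announce itself.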
PPPP networks still support “supernodes,” devices that help route traffic. Back in 2019 Paul Marrapese abused that functionality to register a rogue supernode and collect device IDs en masse. There is no indication that this trick stopped working, and the VStarcam networks are likely susceptible as well.
Users also tend to leak their device IDs themselves, by posting screenshots or videos of the app’s user interface. At first glance this is less problematic with the O-KAM Pro app, because this one displays only a vendor-specific device ID (it looks similar to a PPPP device ID but has seven digits and only four letters in the verification code). That is, until you notice that the app uses a public web API to translate vendor-specific device IDs into PPPP device IDs.
Anybody who can intercept some PPPP traffic can extract the device IDs from it. Even when VStarcam networks obfuscate the traffic rather than transmitting in plaintext, the static keys are well known, so removing the obfuscation isn’t hard.
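To illustrate why static-key obfuscation offers no real protection, here is a sketch assuming a simple repeating-key XOR transform. The actual VStarcam scheme differs in its details and the key below is a made-up placeholder, but the principle is the same: with a fixed key baked into every device and app, anybody who extracts the key once can strip the obfuscation from all traffic:

```python
# Hypothetical static key for illustration; real keys are baked
# into the firmware and the companion app.
STATIC_KEY = b"\x13\x37\xca\xfe"

def deobfuscate(data: bytes, key: bytes = STATIC_KEY) -> bytes:
    """Repeating-key XOR: applying it a second time restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# XOR is symmetric, so the same function "obfuscates" as well:
packet = b"\xf1\x30\x00\x00"
scrambled = deobfuscate(packet)
assert deobfuscate(scrambled) == packet
```

Obfuscation like this only stops casual observers; it is not encryption, because there is no per-session secret involved.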
And finally, simply guessing device IDs is still possible. With only 5 million possible verification codes for each device ID and servers not implementing any rate limiting, brute-force attacks are quite realistic.
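To put that number into perspective, a back-of-the-envelope calculation. The 5 million figure is from this post; the guess rate is an assumption for illustration (with no rate limiting, even a single attacker could plausibly sustain it):

```python
CODE_SPACE = 5_000_000  # possible verification codes per device ID (from the post)
GUESS_RATE = 1_000      # guesses per second – assumed, since servers don't rate-limit

worst_case_hours = CODE_SPACE / GUESS_RATE / 3600
average_hours = worst_case_hours / 2  # on average, half the space is searched

print(f"worst case: {worst_case_hours:.1f} h, average: {average_hours:.1f} h")
# worst case: 1.4 h, average: 0.7 h
```

In other words, at an assumed thousand guesses per second a single device ID falls in well under two hours – and nothing stops an attacker from running many such searches in parallel.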
Let’s not forget the elephant in the room however: VStarcam themselves know all the device IDs of course. Not just that, they know which devices are active and where. With a password they can access the cameras of interest to them (or their government) anytime.
Coordinated disclosure attempt
Given the intentional nature of these issues, I was unsure how to deal with this. I mean, what’s the point of reporting vulnerabilities to VStarcam that they are clearly aware of? In the end I decided to give them a chance to address the issues before they become public knowledge.
However, all I found was VStarcam boasting about their ISO 27001:2022 compliance. My understanding is that this requires them to have a dedicated person responsible for vulnerability management, but they are not obliged to list any security contact that can be reached from outside the company – and so they don’t. I ended up emailing all company addresses I could find, asking whether there is any way to report security issues to them.
I haven’t received any response – an experience that, as far as I can tell, other people have had with VStarcam before. So I went with my initial publication schedule rather than waiting 90 days as I normally would.
Recommendations
Whatever motives VStarcam had to backdoor their cameras, the consequence for the customers is: these cameras cannot be trusted. Their access protection should be considered compromised. Even with firmware versions shown as green on my map, there is no guarantee that I haven’t missed something or that these will still be green after the next update.
If you want to keep using a VStarcam camera, the only safe way to do it is to disconnect it from the internet. The camera doesn’t have to be disconnected physically; internet routers often have a way to prohibit internet traffic to and from particular devices. My router, for example, has this feature under parental controls.
Of course this means that you will only be able to control your camera while connected to the same network. It might be possible to explicitly configure port forwarding for the camera’s RTSP port, allowing you to access at least the video stream from outside. Just make sure that your RTSP password isn’t known to VStarcam.
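For reference, RTSP streams are opened via a URL of the following shape. This is a generic sketch: 554 is the standard RTSP port, but the stream path is camera-specific and the one below is only a placeholder – check your camera’s documentation for the real one:

```python
from urllib.parse import quote

def rtsp_url(host: str, user: str, password: str,
             port: int = 554, path: str = "/live/ch0") -> str:
    """Build an RTSP URL. Credentials are percent-encoded so that
    special characters don't break the URL. The default path is a
    placeholder, not a known VStarcam stream path."""
    return (f"rtsp://{quote(user, safe='')}:{quote(password, safe='')}"
            f"@{host}:{port}{path}")

print(rtsp_url("192.168.1.50", "admin", "p@ss:word"))
# rtsp://admin:p%40ss%3Aword@192.168.1.50:554/live/ch0
```

Such a URL can then be opened in any RTSP-capable player. Keep in mind that plain RTSP sends credentials unencrypted, which is another reason to keep this traffic inside your own network or a VPN.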