Mozilla Thunderbird: The Official Thunderbird Podcast Is Here

Welcome to the debut episode of the Thunderbird podcast, which we’re affectionately calling the ThunderCast! It’s an inside look at the making of Thunderbird, alongside community-driven conversations with our friends in the open-source world. We can’t wait for you to listen! 

ThunderCast is making its way to all your favorite podcast players. You can currently subscribe on Spotify, Amazon Music, YouTube, or by using this RSS feed.

Highlights from Episode 1

  • What to expect on future episodes of ThunderCast
  • We’re hiring!
  • Is Thunderbird still part of Mozilla?
  • Alex starts a band, Ryan is building a keyboard, Jason’s island adventures
  • 4 years of “invisible work” to prepare for Supernova
  • Thunderbird on Android… and iOS

Chapter Markers

The ThunderCast includes chapters for podcast players that support the feature. If yours does not, here are some timestamps to help you navigate the episode:

  • (00:00) – ThunderCast: What To Expect
  • (02:07) – Meet Ryan
  • (08:55) – Meet Alex
  • (12:24) – Loving Your Work
  • (18:07) – Meet Jason
  • (21:06) – Geeking Out
  • (31:37) – Mozilla + Thunderbird: A history lesson
  • (41:22) – Supernova: Setting The Stage
  • (56:28) – “Email is not broken”
  • (01:05:23) – K-9 Mail → Thunderbird Android
  • (01:16:36) – Closing comments

The post The Official Thunderbird Podcast Is Here appeared first on The Thunderbird Blog.

Niko Matsakis: Must move types

Rust has lots of mechanisms that prevent you from doing something bad. But, right now, it has NO mechanisms that force you to do something good1. I’ve been thinking lately about what it would mean to add “must move” types to the language. This is an idea that I’ve long resisted, because it represents a fundamental increase to complexity. But lately I’m seeing more and more problems that it would help to address, so I wanted to try and think what it might look like, so we can better decide if it’s a good idea.

Must move?

The term ‘must move’ type is not standard. I made it up. The more usual name in PL circles is a “linear” type, which means a value that must be used exactly once. The idea of a must move type T is that, if some function f has a value t of type T, then f must move t before it returns (modulo panic, which I discuss below). Moving t can mean either calling some other function that takes ownership of t, returning it, or — as we’ll see later — destructuring it via pattern matching.

Here are some examples of functions that move the value t. You can return it…

fn return_it<T>(t: T) -> T {
    t // moves `t` back to the caller
}

…call a function that takes ownership of it…

fn send_it<T>(t: T, channel: std::sync::mpsc::Sender<T>) {
    channel.send(t).unwrap(); // takes ownership of `t`
}

…or maybe call a constructor function that takes ownership of it (which would usually mean you must “recursively” move the result)…

fn return_opt<T>(t: T) -> Option<T> {
    Some(t) // moves `t` into the option
}

Doesn’t Rust have “linear types” already?

You may have heard that Rust’s ownership and borrowing is a form of “linear types”. That’s not really true. Rust has affine types, which means a value that can be moved at most once. But we have nothing that forces you to move a value. For example, I can write the consume function in Rust today:

fn consume<T>(t: T) {
    /* look ma, no .. nothin' */
}

This function takes a value t of (almost, see below) any type T and…does nothing with it. This is not possible with linear types. If T were linear, we would have to do something with t — e.g., move it somewhere. This is why I call linear types must move.

What about the destructor?

“Hold up!”, you’re thinking, “consume doesn’t actually do nothing with t. It drops t, executing its destructor!” Good point. That’s true. But consume isn’t actually required to execute the destructor; you can always use forget to avoid it2:

fn consume<T>(t: T) {
    std::mem::forget(t); // the destructor never runs
}

If it weren’t possible to “forget” values, destructors would mean that Rust had a linear type system, but even then, only in a technical sense. In particular, destructors would be a required action, but of a limited form: they can’t, for example, take arguments. Nor can they be async.
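To make the affine-not-linear point concrete, here is a small runnable sketch (the `Canary` type and `DROPPED` flag are my own illustration, not from the post) showing that `forget` suppresses a destructor that an ordinary drop would run:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether any Canary destructor has run.
static DROPPED: AtomicBool = AtomicBool::new(false);

struct Canary;

impl Drop for Canary {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    // `forget` takes ownership of the value but never runs its destructor.
    std::mem::forget(Canary);
    assert!(!DROPPED.load(Ordering::SeqCst));

    // An ordinary drop does run the destructor.
    drop(Canary);
    assert!(DROPPED.load(Ordering::SeqCst));
}
```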

What about Sized?

There is one other detail about the consume function worth mentioning. When I write fn consume<T>(t: T), that is actually shorthand for saying “any type T that is Sized”. In other words, the fully elaborated “do nothing with a value” function looks like this:

fn consume<T: Sized>(t: T) {
    /* look ma, no .. nothin' */
}

If you don’t want this default Sized bound, you write T: ?Sized. The leading ? means “maybe Sized” — i.e., now T can be any type, whether it be sized (e.g., u32) or unsized (e.g., [u32]).

This is important: a where-clause like T: Foo narrows the set of types that T can be, since now it must be a type that implements Foo. The “maybe” where-clause T: ?Sized (we don’t accept other traits here) broadens the set of types that T can be, by removing default bounds.
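A runnable illustration of this broadening effect (the function names here are mine, invented for the example):

```rust
use std::mem::size_of_val;

// The default bound: `T` must be `Sized`, so this cannot accept `str` or `[u32]`.
fn by_value<T>(t: T) -> T {
    t
}

// `?Sized` broadens the set of types: `T` may now also be an unsized type
// like `str` or `[u32]`, as long as it sits behind a reference.
fn len_in_bytes<T: ?Sized>(t: &T) -> usize {
    size_of_val(t)
}

fn main() {
    assert_eq!(len_in_bytes("hello"), 5); // T = str, an unsized type
    assert_eq!(len_in_bytes(&[1u32, 2, 3][..]), 12); // T = [u32], also unsized
    assert_eq!(by_value(42u32), 42); // T = u32, sized
}
```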

So how would “must move” work?

You might imagine that we could encode “must move” types via a new kind of bound, e.g., T: MustMove. But that’s actually backwards. The problem is that “must move” types are actually a superset of ordinary types — after all, if you have an ordinary type, it’s still ok to write a function that always moves it. But it’s also ok to have a function that drops it or forgets it. In contrast, with a “must move” type, the only option is to move it. This implies that what we want is a ? bound, not a normal bound.

The notation I propose is ?Drop. The idea is that, by default, every type parameter T is assumed to be droppable, meaning that you can always choose to drop it at any point. But an M: ?Drop parameter is not necessarily droppable. You must ensure that a value of type M is moved somewhere else.

Let’s see a few examples to get the idea of it. To start, the identity function, which just returns its argument, could be declared with ?Drop:

fn identity<M: ?Drop>(m: M) -> M {
    m // OK — moving `m` to the caller
}

But the consume function could not:

fn consume<M: ?Drop>(m: M) {
    // ERROR: `m` is not moved
}

You might think that the version of consume which calls mem::forget is sound — after all, forget is declared like so

fn forget<T>(t: T) {
    /* compiler magic to avoid dropping */
}

Therefore, if consume were to call forget(m), wouldn’t that count as a move? The answer is yes, it would, but we still get an error. This is because forget is not declared with ?Drop, and therefore there is an implicit T: Drop where-clause:

fn consume<M: ?Drop>(m: M) {
    forget(m); // ERROR: `forget` requires `M: Drop`, which isn’t known to hold
}

Declaring types to be ?Drop

Under this scheme, all structs and types you declare would be droppable by default. If you don’t implement Drop explicitly, the compiler adds an automatic Drop impl for you that just recursively drops your fields. But you could explicitly declare your type to be ?Drop by using a negative impl:

pub struct Guard {
    value: u32,
}

impl !Drop for Guard { }

When you do this, the type becomes “must move” and any function which has a value of type Guard must move it somewhere else. You might wonder then how you ever terminate — the answer is that one way to “move” the value is to unpack it with a pattern. For example, Guard might declare a log method:

impl Guard {
    pub fn log(self, message: &str) {
        let Guard { value } = self; // moves `self`
        println!("{value} = {message}");
    }
}

This plays nicely with privacy: if your type has private fields, only functions within that module will be able to destructure it; everyone else must (eventually) discharge their obligation to move by invoking some function within your module.
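Here is a sketch of how that might look, using the hypothetical !Drop opt-in from above (none of this syntax exists in Rust today, and the module layout is my own invention):

```
mod guard {
    pub struct Guard {
        value: u32, // private field
    }

    impl !Drop for Guard { } // hypothetical: `Guard` is now "must move"

    impl Guard {
        pub fn new(value: u32) -> Guard {
            Guard { value }
        }

        pub fn log(self, message: &str) {
            // Only code in this module can destructure `Guard`:
            let Guard { value } = self;
            println!("{value} = {message}");
        }
    }
}

fn caller() {
    let g = guard::Guard::new(22);
    // `g` can be neither dropped nor destructured here; the only way to
    // discharge the move obligation is to hand it back to the `guard` module.
    g.log("done");
}
```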

Interactions between “must move” and control-flow

“Must move” values interact with control-flow constructs like the ? operator. Consider the Guard type from the previous section, and imagine I have a function like this one…

fn execute(t: Guard) -> Result<(), std::io::Error> {
    let s: String = read_file("message.txt")?; // ERROR: `t` is not moved on the error path
    t.log(&s);
    Ok(())
}

This code would not compile. The problem is that the ? in read_file may return with an Err result, in which case the call to t.log would not execute! This is a good error, in the sense that it is helping us ensure that the log call to Guard is invoked, but you can imagine that it’s going to interact with other things. To fix the error, you should do something like this…

fn execute(t: Guard) -> Result<(), std::io::Error> {
    match read_file("message.txt") {
        Ok(s) => {
            t.log(&s); // `t` is moved
            Ok(())
        }
        Err(e) => {
            t.log("error"); // now `t` is moved
            Err(e)
        }
    }
}

Of course, you could also opt to pass back the t value to the caller, making it their problem.

Conditional “must move” types

Talking about types like Option and Result — it’s clear that we are going to want to be able to have types that are conditionally must move — i.e., must move only if their type parameter is “must move”. That’s easy enough to do:

enum Option<T: ?Drop> {
    None,
    Some(T),
}

Some of the methods on Option work just fine:

impl<T: ?Drop> Option<T> {
    pub fn map<U: ?Drop>(self, op: impl FnOnce(T) -> U) -> Option<U> {
        match self {
            Some(t) => Some(op(t)),
            None => None,
        }
    }
}

Other methods would require a Drop bound, such as unwrap_or:

impl<T: ?Drop> Option<T> {
    pub fn unwrap_or(self, default: T) -> T
    where
        T: Drop,
    {
        match self {
            // OK
            None => default,

            // Without the `T: Drop` bound, we are not allowed to drop `default` here.
            Some(v) => v,
        }
    }
}

“Must move” and panic

One very interesting question is what to do in the case of panic. This is tricky! Ordinarily, a panic will unwind all stack frames, executing destructors. But what should we do for a ?Drop type that doesn’t have a destructor?

I see a few options:

  • Force an abort. Seems bad.
  • Deprecate and remove unwinding, limit to panic=abort. A more honest version of the previous one. Still seems bad, though dang would it make life easier.
  • Provide some kind of fallback option.

The last one is most appealing, but I’m not 100% sure how it works. It may mean that the “must move” opt-in should not be impl !Drop but rather impl MustMove, or something like that, which would provide a method that is invoked in the case of panic (this method could, of course, choose to abort). The idea of fallback might also be used to permit cancellation with the ? operator or other control-flow drops (though I think we definitely want types that don’t permit cancellation in those cases).

“Must move” and trait objects

What do we do with dyn? I think the answer is that dyn Foo defaults to dyn Foo + Drop, and hence requires that the type be droppable. To create a “must move” dyn, we could permit dyn Foo + ?Drop. To make that really work out, we’d have to have self methods to consume the dyn (though today you can do that via self: Box<Self> methods).

Uses for “must move”

Contrary to best practices, I suppose, I’ve purposefully kept this blog post focused on the mechanism of “must move” and not talked much about the motivation. This is because I’m not really trying to sell anyone on the idea, at least not yet; I just wanted to sketch some thoughts about how we might achieve it. That said, let me indicate why I am interested in “must move” types.

First, async drop: right now, you cannot have destructors in async code that perform awaits. But this means that async code is not able to manage cleanup in the same way that sync code does. Take a look at the status quo story about dropping database handles to get an idea of the kinds of problems that arise. Adding async drop itself isn’t that hard, but what’s really hard is guaranteeing that types with async drop are not dropped in sync code, as documented at length in Sabrina Jewson’s blog post. This is precisely because we currently assume that all types are droppable. The simplest way to achieve “async drop” then would be to define a trait trait AsyncDrop { async fn async_drop(self); } and then make the type “must move”. This will force callers to eventually invoke async_drop(x).await. We might want some syntactic sugar to handle ? more easily, but that could come later.
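Putting those pieces together, the sketch might look something like this (the DbHandle type is hypothetical, as is all of the ?Drop/!Drop machinery and async fn in traits as written here):

```
trait AsyncDrop {
    async fn async_drop(self);
}

struct DbHandle { /* ... */ }

impl !Drop for DbHandle { } // hypothetical: `DbHandle` is "must move"

impl AsyncDrop for DbHandle {
    async fn async_drop(self) {
        // flush buffers, close the connection, etc., with `.await` allowed
    }
}

async fn use_handle(h: DbHandle) {
    // ... use `h` ...
    h.async_drop().await; // callers are forced to clean up asynchronously
}
```

Because DbHandle cannot simply be dropped, sync code has no way to dispose of it silently; the obligation to call async_drop surfaces in the type system.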

Second, parallel structured concurrency. As Tyler Mandry elegantly documented, if we want to mix parallel scopes and async, we need some way to have futures that cannot be forgotten. The way I think of it is like this: in sync code, when you create a local variable x on your stack, you have a guarantee from the language that its destructor will eventually run, unless you move it. In async code, you have no such guarantee, as your entire future could just be forgotten by a caller. “Must move” types (combined with some kind of fallback for panic) give us a tool to solve this problem, by having the future type be ?Drop — this is effectively a principled way to integrate completion-style futures that must be fully polled.

Finally, “liveness conditions writ large”. As I noted in the beginning, Rust’s type system today is pretty good at letting you guarantee “safety” properties (“nothing bad happens”), but it’s much less useful for liveness properties (“something good eventually happens”). Destructors let you get close, but they can be circumvented. And yet I see liveness properties cropping up all over the place, often in the form of guards or cleanup that really ought to happen. Any time you’ve ever wanted to have a destructor that takes an argument, that applies. This comes up a lot in unsafe code, in particular. Being able to “log” those obligations via “must move” types feels like a really powerful tool that will be used in many different ways.

Parting thoughts

This post sketches out one way to get “true linear” types in Rust, which I’ve dubbed “must move” types. I think I would call this the ?Drop approach, because the basic idea is to allow types to “opt out” from being “droppable” (in which case they must be moved). This is not the only approach we could use. One of my goals with this blog post is to start collecting ideas for different ways to add linear capabilities, so that we can compare them with one another.

I should also address the obvious “elephant in the room”. The Rust type system is already complex, and adding “must move” types will unquestionably make it more complex. I’m not sure yet whether the tradeoff is worth it: it’s hard to judge without trying the system out. I think there’s a good chance that “must move” types live “on the edges” of the type system, through things like guards and so forth that are rarely abstracted over. I think that when you are dealing with concrete types, like the Guard example, must move types won’t feel particularly complicated. It will just be a helpful lint saying “oh, by the way, you are supposed to clean this up properly”. But where pain will arise is when you are trying to build up generic functions — and of course just in the sense of making the Rust language that much bigger. Things like ?Sized definitely make the language feel more complex, even if you never have to interact with them directly.

On the other hand, “must move” types definitely add value in the form of preventing very real failure modes. I continue to feel that Rust’s goal, above all else, is “productive reliability”, and that we should double down on that strength. Put another way, I think that the complexity that comes from reasoning about “must move” types is, in large part, inherent complexity, and I feel ok about extending the language with new tools for that. We saw this with the interaction with the ? operator — no doubt it’s annoying to have to account for moves and cleanup when an error occurs, but it’s also a key part of building a robust system, and destructors don’t always cut it.

  1. Well, apart from the “must use” lint. 

  2. Or create an Rc cycle, if that’s more your speed. 

The Servo Blog: Making it easier to contribute to Servo

Back in January, flaky tests were a serious problem for Servo’s development. Each build failure caused by flaky tests would delay merging a pull request by over two hours, and some changes took as many as seven tries to merge! But since then, we’ve made a bunch of improvements to how we run tests, which should make contributing to Servo a lot easier.

What is a flaky test?

Servo is tested against the Web Platform Tests, a suite of over 30,000 tests shared with all of the major web engines. Each test can pass, fail, crash, or time out, and if a test has subtests, each subtest can have its own result. Passing is not always the expected outcome: for example, we would expect most tests for unimplemented features to fail.

Flaky tests are tests that yield the expected outcome sometimes and an unexpected outcome other times, causing intermittent build failures. Tests can be flaky due to how they were written, or problems with the machines that run those tests, but often they flake due to Servo bugs. Regardless of the cause, we want to avoid letting flaky tests affect people doing unrelated work.

Faster build times

Making builds faster doesn’t directly make tests less flaky, but it does reduce the delays that flaky tests can cause.

Our main try and merge builds often took three or four hours to complete, because our GitHub org was limited to 20 concurrent runners. Since we also split the Web Platform Tests into 20 concurrent jobs, some of those jobs would almost always get starved by other jobs, like Windows unit tests or nightly WPT updates.

We reached out to GitHub about this, and they were kind enough to increase our free runner limit to 60 concurrent jobs, cutting our build times to a consistent two hours.

In the future, it may be worth adding some caching of the Cargo and target directories across builds, but the slowest parts of our builds by far are the Windows and macOS jobs. While neither of them run the Web Platform Tests yet, even just compiling and running unit tests takes over 90 minutes, making them almost always the critical path.

We are hoping this will improve with initiatives like GitHub’s upcoming “XL” macOS runners, and in the longer term it may be worth setting up some dedicated runners of our own.

Support for multiple expectations

We were previously only able to handle flaky tests by marking them as intermittent, that is, creating an issue with the test name in the title and the label I-intermittent. This means we treat any result as expected when deciding whether or not the build should succeed, which is a very coarse approach, and it means the list of intermittent tests isn’t version controlled.

But as of #29339, we can now give tests a set of expected outcomes in the metadata files! Note that the typical outcome, if any, should go first, but the order doesn’t really matter in practice.

# tests/wpt/metadata/path/to/test.html.ini
  [subtest that only fails]
    expected: FAIL

  [subtest that occasionally times out]
    expected: [PASS, TIMEOUT]

In the future, it may be worth migrating the existing intermittent issues to expectations like this.

Retrying tests with unexpected results

Sometimes the causes of flakiness can affect many or even all tests, like bugs causing some reftest screenshots to be completely white, or overloaded test runners causing some tests to time out.

Thanks to #29370, we now retry tests that yield unexpected results. If a test yields the expected result on the second try, we ignore it when deciding whether or not the build should succeed. This can make builds a little slower, but it should be outweighed by our recent improvements to build times.

In the future, it may be worth adopting some more advanced retry techniques. For example, Chromium’s retry strategy includes retrying the entire “shard” of tests to reproduce the test environment more accurately, and retrying tests both with and without the pull request to help “exonerate” the changes. These techniques require considerably more resources though, and they are generally only viable if we can fund our own dedicated test runners.

Result comments

As of #29315, when a try or merge build finishes, we now post a comment on the pull request with a clear breakdown of the unexpected results:

  • Flaky unexpected results are those that were unexpected at first, but expected on retry
  • Stable unexpected results that are known to be intermittent are those that were unexpected, but ignored due to being marked as intermittent
  • Stable unexpected results are those that caused the build to fail

Intermittent dashboard

To ensure that flaky tests can be discovered and fixed even if they are mitigated by retries, we’ve created an intermittent dashboard that all unexpected results get reported to.

Each result includes the test and subtest, the expected and actual outcomes, any test output, plus metadata like the commit and a link to the build. You can filter the data to a specific test or field value, and the dashboard automatically points out when all of the visible results have something in common, which can help us analyse the failures and identify patterns.

For example, here we can see that all of the unexpected failures for one of the HTML parsing tests have the same assertion failure on the same subtest, but are not limited to one pull request:

screenshot of intermittent dashboard, filtered by test (/html/syntax/parsing/DOMContentLoaded-defer.html) and actual outcome (FAIL)

In the future, we plan to further develop the dashboard, including adding more interesting views of the data like:

  • which tests flake the most (within some recent period like 30 days)
  • which tests are starting to flake (newly seen or quickly spiking)
  • which tests are marked as intermittent but haven’t flaked recently

Niko Matsakis: Temporary lifetimes

In today’s lang team design meeting, we reviewed a doc I wrote about temporary lifetimes in Rust. The current rules were established in a blog post I wrote in 2014. Almost a decade later, we’ve seen that they have some rough edges, and in particular can be a common source of bugs for people. The Rust 2024 Edition gives us a chance to address some of those rough edges. This blog post is a copy of the document that the lang team reviewed. It’s not a proposal, but it covers some of what works well and what doesn’t, and includes a few sketchy ideas towards what we could do better.


Rust’s rules on temporary lifetimes often work well but have some sharp edges. The 2024 edition offers us a chance to adjust these rules. Since those adjustments change the times when destructors run, they must be done over an edition.

Design principles

I propose the following design principles to guide our decision.

  • Independent from borrow checker: We need to be able to figure out when destructors run without consulting the borrow checker. This is a slight weakening of the original rules, which required that we knew when destructors would run without consulting results from name resolution or type check.
  • Shorter is more reliable and predictable: In general, we should prefer shorter temporary lifetimes, as that results in more reliable and predictable programs.
    • Editor’s note: A number of people in the lang team questioned this point. The reasoning is as follows. First, a lot of the problems in practice come from locks that are held longer than expected. Second, problems that come from temporaries being dropped too early tend to manifest as borrow check errors. Therefore, they don’t cause reliability issues, but rather ergonomic ones.
  • Longer is more convenient: Extending temporary lifetimes where we can do so safely gives more convenience and is key for some patterns.
    • Editor’s note: As noted in the previous bullet, our current rules sometimes give temporary lifetimes that are shorter than what the code requires, but these generally surface as borrow check errors.

Equivalences and anti-equivalences

The rules should ensure that E and (E), for any expression E, result in temporaries with the same lifetimes.

Today, the rules also ensure that E and {E}, for any expression E, result in temporaries with the same lifetimes, but this document proposes dropping that equivalence as of Rust 2024.

Current rules

When are temporaries introduced?

Temporaries are introduced when there is a borrow of a value-producing expression (often called an “rvalue”). Consider an example like &foo(); in this case, the compiler needs to produce a reference to some memory somewhere, so it stores the result of foo() into a temporary local variable and returns a reference to that.

Often the borrows are implicit. Consider a function get_data() that returns a Vec<T> and a call like get_data().is_empty(): because is_empty() is declared with &self on [T], this will store the result of get_data() into a temporary, invoke deref to get a &[T], and then call is_empty.

Default temporary lifetime

Whenever a temporary is introduced, the default rule is that the temporary is dropped at the end of the innermost enclosing statement; this rule is sometimes summarized as “at the next semicolon”. But the definition of statement involves some subtlety.
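The “next semicolon” rule can be observed directly with a Weak reference. This runnable snippet (mine, not from the document) relies on the temporary Rc being dropped at the end of the let statement:

```rust
use std::rc::{Rc, Weak};

fn weak_from_temporary() -> Weak<String> {
    // `Rc::new(..)` is a value-producing expression that gets borrowed, so it
    // is stored in a temporary that is dropped at the end of this statement.
    let weak = Rc::downgrade(&Rc::new(String::from("temp")));
    weak
}

fn main() {
    // The only strong reference lived in the temporary, so upgrading fails.
    assert!(weak_from_temporary().upgrade().is_none());
}
```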

Block tail expressions. Consider a Rust block:

{
    stmt[0];
    ...
    stmt[n];
    tail_expression
}

Temporaries created in a statement stmt[i] will be dropped once that statement completes. But the tail expression is not considered a statement, so temporaries produced there are dropped at the end of the statement that encloses the block. For example, given get_data and is_empty as defined in the previous section, and a statement let x = foo({get_data().is_empty()});, the vector will be freed at the end of the let.

Conditional scopes for if and while. if and while expressions and if guards (but not match or if let) introduce a temporary scope around the condition. So any temporaries from expr in if expr { ... } would be dropped before the { ... } executes. The reasoning here is that all of these contexts produce a boolean and hence it is not possible to have a reference into the temporary that is still live. For example, given if get_data().is_empty(), the vector must be safe to drop before entering the body of the if. This is not true for a case like match get_data().last() { Some(x) => ..., None => ... }, where the x would be a reference into the vector returned by get_data().
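This condition scope is easy to check with a Mutex (a small runnable sketch; the names are mine):

```rust
use std::sync::Mutex;

fn bump_if_positive(lock: &Mutex<i32>) {
    // The `if` condition gets its own temporary scope, so the MutexGuard
    // returned by `lock()` is dropped before the body runs...
    if *lock.lock().unwrap() > 0 {
        // ...which is why taking the lock again here does not deadlock.
        *lock.lock().unwrap() += 1;
    }
}

fn main() {
    let lock = Mutex::new(3);
    bump_if_positive(&lock);
    assert_eq!(*lock.lock().unwrap(), 4);
}
```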

Function scope. The tail expression of a function block (e.g., the expression E in fn foo() { E }) is not contained by any statement. In this case, we drop temporaries from E just before returning from the function, and thus fn last() -> Option<&Datum> { get_data().last() } fails the borrow check (because the temporary returned by get_data() is dropped before the function returns). Importantly, this function scope ends after local variables in the function are dropped. Therefore, this function…

fn foo() {
    let x = String::new();
    vec![].is_empty()
}
…is effectively desugared to this…

fn foo() {
    let tmp;
    {
        let x = String::new();
        { tmp = vec![]; &tmp }.is_empty()
    } // x dropped here
} // tmp dropped here

Lifetime extension

In some cases, temporary lifetimes are extended from the innermost statement to the innermost block. The rules for this are currently defined syntactically, meaning that they do not consider types or name resolution. The intuition is that we extend the lifetime of the temporary for an expression E if it is evident that this temporary will be stored into a local variable. Consider the trivial example:

let t = &foo();

Here, foo() is a value expression, and hence &foo() needs to create a temporary so that we can have a reference. But the resulting &T is going to be stored in the local variable t. If we were to free the temporary at the next ;, this local variable would be immediately invalid. That doesn’t seem to match the user intent. Therefore, we extend the lifetime of the temporary so that it is dropped at the end of the innermost block. This is the equivalent of:

let tmp;
let t = { tmp = foo(); &tmp };

We can extend this same logic to compound expressions. Consider:

let t = (&foo(), &bar());

we will expand this to

let tmp1;
let tmp2;
let t = { tmp1 = foo(); tmp2 = bar(); (&tmp1, &tmp2) };

The exact rules are given by a grammar in the code and also covered in the reference. Rather than define them here I’ll just give some examples. In each case, the &foo() temporary is extended:

let t = &foo();

// Aggregates containing a reference that is stored into a local:
let t = Foo { x: &foo() };
let t = (&foo(), );
let t = [&foo()];

// Patterns that create a reference, rather than `&`:
let ref t = foo();

Here are some cases where temporaries are NOT extended:

let f = some_function(&foo()); // could be `fn some_function(x: &Vec<T>) -> bool`, may not need extension

struct SomeTupleStruct<T>(T);
let f = SomeTupleStruct(&foo()); // looks like a function call

Patterns that work well in the current rules

Storing temporary into a local

struct Data<'a> {
    data: &'a [u32], // use a slice to permit subslicing later
}

fn initialize() {
    let d = Data { data: &[1, 2, 3] };
    //                   ^^^^^^^^^ extended temporary
}

impl Data<'_> {
    fn process(&mut self) {
        self.data = &self.data[1..]; // subslice
    }
}

Reading values out of a lock/refcell

The current rules allow you to do atomic operations on locks/refcells conveniently, so long as they don’t return references to the data. This works great in a let statement (there are other cases below where it works less well).

let result = cell.borrow_mut().do_something();
// `cell` is not borrowed here

Error-prone cases with today’s rules

Today’s rules sometimes give lifetimes that are too long, resulting in bugs at runtime.

Deadlocks because of temporary lifetimes in matches

One very common problem is deadlocks (or panics, for ref-cell) when mutex locks occur in a match scrutinee:

match lock.lock().data.clone() {
    //   ------ returns a temporary guard
    Data { .. } => {
        lock.lock(); // deadlock
    }
} // <-- lock() temporary dropped here

Ergonomic problems with today’s rules

Today’s rules sometimes give lifetimes that are too short, resulting in ergonomic failures or confusing error messages.

Call parameter temporary lifetime is too short (RFC 66)

Somewhat surprisingly, the following code does not compile:

fn get_data() -> Vec<u32> { vec![1, 2, 3] }

fn main() {
    let last_elem = get_data().last();
    drop(last_elem); // just a dummy use
}

This fails because the Vec returned by get_data() is stored into a temporary so that we can invoke last, which requires &self, but that temporary is dropped at the ; (as this case doesn’t fall under the lifetime extension rules).
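Until some form of extension exists, the usual workaround is to name the temporary so the vector gets the enclosing block’s lifetime. A runnable version of that fix:

```rust
fn get_data() -> Vec<u32> {
    vec![1, 2, 3]
}

fn main() {
    // Binding the Vec to a local extends its life to the end of the block,
    // so the reference returned by `last` stays valid.
    let data = get_data();
    let last_elem = data.last();
    assert_eq!(last_elem, Some(&3));
}
```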

RFC 66 proposed a rather underspecified extension to the temporary lifetime rules to cover this case; loosely speaking, the idea was to extend the lifetime extension rules to extend the lifetime of temporaries that appear in function arguments if the function’s signature is going to return a reference from that argument. So, in this case, the signature of last indicates that it returns a reference from self:

impl<T> [T] {
    fn last(&self) -> Option<&T> { ... }
}

and therefore, since E.last() is being assigned to last_elem, we would extend the lifetime of any temporaries in E (the value for self). Ding Xiang Fei has been exploring how to actually implement RFC 66 and has made some progress, but it’s clear that we need to settle on the exact rules for when temporary lifetime extension should happen.

Even assuming we created some rules for RFC 66, there can be confusing cases that wouldn’t be covered. Consider this statement:

let l = get_data().last().unwrap();
drop(l); // ERROR

Here, the unwrap call has a signature fn(Option<T>) -> T, which doesn’t contain any references. Therefore, it does not extend the lifetimes of temporaries in its arguments. The argument here is the expression get_data().last(), which creates a temporary to store get_data(). This temporary is then dropped at the end of the statement, and hence l winds up pointing to dead memory.

Statement-like expressions in tail position

The original rules assumed that changing E to {E} should not change when temporaries are dropped. This has the counterintuitive result that introducing a block doesn’t constrain the stack lifetime of temporaries. It is also surprising for blocks whose tail expressions are “statement-like” (e.g., match), because these can be used as statements without a ;, and thus users may not have a clear picture of whether they are an expression producing a value or a statement.

Example. The following code does not compile:

struct Identity<A>(A);
impl<A> Drop for Identity<A> {
    fn drop(&mut self) { }
}

fn main() {
    let x = 22;
    match Identity(&x) {
        //------------ creates a temporary that can be matched
        _ => {}
    } // <-- this is considered a trailing expression by the compiler
} // <-- temporary is dropped after this block executes

Because of the way that the implicit function scope works, and the fact that this match is actually the tail expression in the function body, this is effectively desugared to something like this:

struct Identity<A>(A);
impl<A> Drop for Identity<A> {
    fn drop(&mut self) { }
}

fn main() {
    let tmp;
    {
        let x = 22;
        match {tmp = Identity(&x); tmp} {
            _ => {}
        }
    } // <-- `x` is dropped here...
} // <-- ...but `tmp`, which borrows `x`, is dropped only here (error)
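For contrast, a variant of the example that does compile today: adding a ; turns the match into a statement, so the temporary is dropped at the ; rather than at the end of the function (the demo wrapper is added here just so the result can be checked):

```rust
struct Identity<A>(A);
impl<A> Drop for Identity<A> {
    fn drop(&mut self) {}
}

fn demo() -> u32 {
    let x = 22;
    match Identity(&x) {
        _ => {}
    }; // the `;` makes the match a statement, so the temporary
       // (which borrows `x`) is dropped here rather than after `demo`
    x
}

fn main() {
    assert_eq!(demo(), 22);
}
```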

Lack of equivalence between if and match

The current rules distinguish temporary behavior for if/while from match/if-let. As a result, code like this compiles and executes fine:

if lock.lock().something { // grab lock, then release
    lock.lock(); // OK to grab lock again
}

but very similar code using a match gives a deadlock:

if let true = lock.lock().something {
    lock.lock(); // Deadlock
}

// or

match lock.lock().something {
    true => lock.lock(), // Deadlock
    false => (),
}

Partly as a result of this lack of equivalence, we have had a lot of trouble doing desugarings for things like let-else and if-let expressions.
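The usual workaround today is to copy the field out in its own statement before matching, so the guard is dropped early. A sketch using std’s Mutex (the post’s lock() is pseudocode; std’s returns a Result<MutexGuard, _>, hence the unwrap()s, and Data/check are hypothetical names):

```rust
use std::sync::Mutex;

struct Data { something: bool }

// Copy the field out in its own statement; the guard temporary is
// dropped at that `;`, so locking again in the match body is safe.
fn check(lock: &Mutex<Data>) -> bool {
    let something = lock.lock().unwrap().something;
    match something {
        true => lock.lock().unwrap().something, // no deadlock
        false => false,
    }
}

fn main() {
    let lock = Mutex::new(Data { something: true });
    assert!(check(&lock));
}
```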

Named block

Tail expressions aren’t the only way to “escape” a value from a block: the same applies to breaking with a named label, but labeled breaks don’t benefit from lifetime extension. The following example, therefore, fails to compile:

fn main() {
    let x = 'a: {
        break 'a &vec![0]; // ERROR
    };
}

Note that a tail-expression based version does compile today:

fn main() {
    let x = { &vec![0] };
}

Proposed properties to focus discussion

To focus discussion, here are some named examples we can use that capture key patterns.

Examples of behaviors we would ideally preserve:

  • read-locked-field: let x: Event = ref_cell.borrow_mut().get_event(); releases borrow at the end of the statement (as today)
  • obvious aggregate construction: let x: Event = Event { x: &[1, 2, 3] } stores [1, 2, 3] in a temporary with block scope
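As a sanity check, the read-locked-field behavior can be observed today with std’s RefCell; Event, Source, and get_event are hypothetical stand-ins for the post’s types:

```rust
use std::cell::RefCell;

#[derive(Clone, Debug, PartialEq)]
struct Event(u32);

struct Source { event: Event }
impl Source {
    fn get_event(&self) -> Event { self.event.clone() }
}

fn main() {
    let ref_cell = RefCell::new(Source { event: Event(1) });
    // The borrow_mut() guard is a temporary dropped at the `;` ...
    let x: Event = ref_cell.borrow_mut().get_event();
    // ... so the cell can be borrowed again immediately afterwards:
    assert!(ref_cell.try_borrow_mut().is_ok());
    assert_eq!(x, Event(1));
}
```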

Examples of behavior that we would like, but which we don’t have today, resulting in bugs/confusion:

  • match-locked-field: match data.lock().unwrap().data { ... } releases lock before match body executes
  • if-match-correspondence: if <expr> {}, if let true = <expr> {}, and match <expr> { true => .. } all behave the same with respect to temporaries in <expr> (unlike today)
  • block containment: {<expr>} must not create any temporaries that extend past the end of the block (unlike today)
  • tail-break-correspondence: {<expr>} and 'a: { break 'a <expr> } should be equivalent

Examples of behavior that we would like, but which we don’t have today, resulting in ergonomic pain (these cases may not be achievable without violating the previous ones):

  • last: let x = get_data().last(); (the canonical RFC66 example) will extend lifetime of data to end of block; also covers (some) new methods like let x: Event<'_> = Event::new(&[1, 2, 3])
  • last-unwrap: let x = get_data().last().unwrap(); (extended form of the above) will extend lifetime of data to end of block
  • tuple struct construction: let x = Event(&[1, 2, 3])

Tightest proposal

The proposal with minimal confusion would be to remove syntactic lifetime extension and tighten default lifetimes in two ways:

Tighten block tail expressions. Have temporaries in the tail expression of a block be dropped when returning from the block. This ensures block containment and tail-break-correspondence.

Tighten match scrutinees. Drop temporaries from match/if-let scrutinees before performing the match, so that they are gone by the time the body of the match runs. This ensures match-locked-field and if-match-correspondence and avoids the deadlock footgun.

In short, temporaries would always be dropped at the innermost statement, match/if/if-let/while scrutinee, or block.

Things that no longer build

There are three cases that build today which will no longer build with this minimal proposal:

  • let x = &vec![] no longer builds, nor does let x = Foo { x: &[1, 2, 3] }. Both of them create temporaries that are dropped at the end of the let.
  • match &foo.borrow_mut().parent { Some(ref p) => .., None => ... } no longer builds, since temporary from borrow_mut() is dropped before entering the match arms.
  • {let x = {&vec![0]}; ...} no longer builds, as a result of tightening block tail expressions. Note however that other examples, e.g. the one from the section “statement-like expressions in tail position”, would now build successfully.
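Each of these cases has a mechanical fix: name the temporary with an explicit let. A minimal sketch of the first case, which compiles both today and under the tightened rules:

```rust
fn main() {
    // Instead of `let x = &vec![0]` (rejected under the proposal),
    // name the temporary explicitly:
    let tmp = vec![0];
    let x = &tmp;
    assert_eq!(x.len(), 1);
}
```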

The core proposal also does nothing to address RFC66-like patterns, tuple struct construction, etc.

Extension option A: Do What I Mean

One way to address the limitations of the core proposal would be to extend it with more “DWIM”-like options. For example, we could extend the “lifetime extension rules” to cover match expressions.

Lifetime extension for let statements, as today. To allow let x = &vec![] to build, we can restore today’s lifetime extension rules.

  • Pro: things like this will build
let x = Foo { 
    data: &get_data()
    //     ---------- stored in a temporary that outlives `x`
};
  • Con: the following example would build again, which leads to a (perhaps surprising) panic. That said, I’ve never seen a case like this in the wild; the confusion always occurs with match.
use std::cell::RefCell;

struct Foo<'a> {
    data: &'a u32,
}

fn main() {
    let cell = RefCell::new(22);
    let x: Foo<'_> = Foo {
        data: &*cell.borrow_mut(),
    };
    *cell.borrow_mut() += 1; // <-- panic
}

Scope extension for match scrutinees. To allow match &foo.borrow_mut().parent { Some(ref x) => ... } to work, we could include scope extension rules similar to the ones used with let initializers (i.e., if we can see that a ref is taken into the temporary, then extend its lifetime, but otherwise do not).

  • Pro: match &foo.borrow_mut().parent { .. } works as it does today.
  • Con: Syntactic extension rules can be approximate, so e.g. match (foo(), bar().baz()) { (Some(ref x), y) => .. } would likely keep the temporary returned by bar(), even though it is not referenced.

RFC66-like rules. Use some heuristic rules to determine, from a function signature, when the return type includes data from the arguments. If the return type of a function f references a generic type or lifetime parameter that also appears in some argument i, and the function call f(a0, ..., ai, ..., an) appears in some position with an extended temporary lifetime, then ai will also have an extended temporary lifetime (i.e., any temporaries created in ai will persist until end of enclosing block / match expression).

  • Pro: Patterns like let x = E where E is get_data().last(), get_data().last().unwrap(), TupleStruct(&get_data()), or SomeStruct::new(&get_data()) would all allocate a temporary for get_data() that persists until the end of the enclosing block. This occurs because each function’s return type references a lifetime or type parameter that also appears in the relevant argument.
  • Con: Complex rules imply that let x = locked_vec.lock().last() would also extend lock lifetime to end-of-block, which users may not expect.
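The surprise can be reproduced today by emulating the extended temporary lifetime with an explicit let, using std’s Mutex in place of the post’s hypothetical locked_vec API:

```rust
use std::sync::Mutex;

fn main() {
    let locked_vec = Mutex::new(vec![1u32, 2, 3]);

    // Emulate the extended temporary lifetime with an explicit `let`:
    // the guard now lives to the end of the block ...
    let guard = locked_vec.lock().unwrap();
    let x = guard.last();
    assert_eq!(x, Some(&3));

    // ... so the lock is still held here:
    assert!(locked_vec.try_lock().is_err());
}
```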

Extension option B: “Anonymous lets” for extended temporary lifetimes

Allow expr.let as an operator that means “introduce a let to store this value inside the innermost block, just before the current statement, and replace this expression with a reference to it”. So for example:

let x = get_data().let.last();

would be equivalent to

let tmp = get_data();
let x = tmp.last();

Question: Do we keep some amount of implicit extension? For example, should let x = &vec![] keep compiling, or do you have to do let x = &vec![].let?

Parting notes

Editor’s note: As I wrote at the start, this was an early document to prompt discussion in a meeting (you can see notes from the meeting here). It’s not a full proposal. That said, my position when I started writing was different than where I landed. Initially I was going to propose more of a “DWIM”-approach, tweaking the rules to be tighter in some places, more flexible in others. I’m still interested in exploring that, but I am worried that the end result will just be people having very little idea when their destructors run. For the most part, you shouldn’t have to care about that, but it is sometimes quite important. That leads me to: let’s have some simple rules that can be explained on a postcard and work “pretty well”, and some convenient way to extend lifetimes when you want it. The .let syntax is interesting but ultimately probably too confusing to play this role.

Oh, and a note on editions: I didn’t say it explicitly, but we can make changes to temporary lifetime rules over an edition by rewriting code where necessary to use explicit lets or (if we add one) some other explicit notation. The result would be code that runs on all editions with the same semantics.

This Week In RustThis Week in Rust 486

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is duplicate, a proc macro crate for easy parametric code duplication.

Thanks to Anton Fetisov for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

391 pull requests were merged in the last week

Rust Compiler Performance Triage

A fairly mixed week, with several significant improvements and a few significant regressions. On average, this week saw a slight increase in compile times.

Triage done by @simulacrum. Revision range: 8f9e09ac..0058748

4 Regressions, 6 Improvements, 4 Mixed; 2 of them in rollups. 39 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-03-15 - 2023-04-12 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The Rust compiler is a thousand unit tests that you don't have to write

Someone, likely Ian Purton on the Cloak blog

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Hacks.Mozilla.OrgMozilla Launches Responsible AI Challenge

The last few months it has become clear that AI is no longer our future, but our present. Some of the most exciting ideas for the future of both the internet and the world involve AI solutions. This didn’t happen overnight, decades of work have gone into this moment. Mozilla has been working to make sure that the future of AI benefits humanity in the right ways by investing in the creation of trustworthy AI.

We want entrepreneurs and builders to join us in creating a future where AI is developed through this responsible lens. That’s why we are relaunching our Mozilla Builders program with the Responsible AI Challenge.

At Mozilla, we believe in AI: in its power, its commercial opportunity, and its potential to solve the world’s most challenging problems. But now is the moment to make sure that it is developed responsibly to serve society. 

If you want to build (or are already building) AI solutions that are ambitious but also ethical and holistic, the Mozilla Builder’s Responsible AI Challenge is for you. We will be inviting the top nominees to join a gathering of the brightest technologists, community leaders and ethicists working on trustworthy AI to help get your ideas off the ground. Participants will also have access to mentorship from some of the best minds in the industry, the ability to meet key contributors in this community, and an opportunity to win some funding for their project.

Mozilla will be investing $50,000 into the top applications and projects, with a grand prize of $25,000 for the first-place winner. 

Up for the challenge?

For more information, please visit the Responsible AI Challenge website.

Applications open on March 30, 2023.

The post Mozilla Launches Responsible AI Challenge appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogEmail protection just got easier in Firefox

If you’re already one of the many people who use Firefox Relay to save your real email address from trackers and spammers, then we’ve got a timesaver for you. We are testing a new way for Firefox Relay users to access their email masks directly from Firefox on numerous sites.

Since its launch, Firefox Relay has blocked more than 2.1 million unwanted emails from people’s inboxes while keeping real email addresses safe from trackers across the web. We’re always listening to our users, and one of the most-requested features is having Firefox Relay directly within the Firefox browser. And if you don’t already use Firefox Relay, you can always sign up.

How to use your Firefox Relay email masks in Firefox 

In the physical world, we limit sharing our home address. Yet, in the online world, we’re constantly asked for our email address and we freely share it with almost every site we come across. It’s our Firefox Relay users who think twice before sharing their email address, using email masks instead of their real email address to keep their personal information safe.

So, when a Firefox Relay user visits some sites in the Firefox browser and is prompted to sign up and share their email address, they can use one of their Firefox Relay email masks or create a new one. See how it works:

<figcaption class="wp-element-caption">Use a Firefox Relay email mask or create a new one</figcaption>

We hope to expand to more sites and to all Firefox users later this year. 

Additionally, Firefox Relay users can also opt out of this new feature so that they’re no longer prompted to use an email mask when they come across the pop-up. If they want to manage their Firefox Relay email address masks, they can visit their dashboard on the Firefox Relay site.

Thousands of users have signed up for our smart, easy solution that hides their real email address to help protect their identity. Wherever you go online, Mozilla’s trusted products and services can help you feel safer knowing that you have privacy protection for your everyday online life. 

If you don’t have Firefox Relay, you can subscribe today from the Firefox Relay site.

Start protecting your email inbox today

Sign up for Firefox Relay

The post Email protection just got easier in Firefox appeared first on The Mozilla Blog.

The Mozilla BlogFirefox Android’s new privacy feature, Total Cookie Protection, stops companies from keeping tabs on your moves

In case you haven’t heard, there’s an ongoing conversation happening about your personal data. 

Earlier this year, United States President Biden said in his State of the Union address that there needs to be stricter limits on the personal data that companies collect. Additionally, a recent survey found that most people said they’d like to control the data that companies collect about them, yet they don’t understand how online tracking works nor do they know what they can do about it. Companies are now trying and testing ways to anonymize the third-party cookies that track people on the web or get consent for each site or app that wants to track people’s behavior across the web. 

These days, who can you trust with your personal data? Mozilla. We have over a decade of anti-tracking work with products and features that protect people, their privacy and their online activity. Today, we’re announcing the official rollout of one of our strongest privacy features, Total Cookie Protection, to automatically block cross-site tracking on Firefox Android. 

Yes, companies gather your data when you go from site to site

Before we talk about Total Cookie Protection, let’s talk about cross-site tracking. These days our in-person transactions like shopping for groceries or buying gifts for friends have now become commonplace online. What people may not be aware of are the other transactions happening behind the scenes. 

For example, as you’re shopping for a gift and going from site to site looking for the right one, your activity is being tracked without your consent. Companies use a specific cookie known as the third-party cookie, which gathers information about you and your browsing behavior and tracks you when you go from site to site. Companies use the information to build profiles and help them make ads targeted at convincing you to purchase, like resurfacing an item you were shopping for. So Mozilla created the feature Total Cookie Protection to block companies from gathering information about you and your browsing behavior.

<figcaption class="wp-element-caption">Total Cookie Protection stops cookies from tracking you around the web</figcaption>

Your freedom from cross-site tracking now available on Firefox Android 

Meet Firefox’s Total Cookie Protection, which stops cookies from tracking you around the web and is now available on Firefox Android. Last year, Firefox rolled out our strongest privacy feature, Total Cookie Protection across Windows, Mac and Linux. Total Cookie Protection works by maintaining a separate “cookie jar” for each website you visit. Any time a website, or third-party content embedded in a website, deposits a cookie in your browser, Firefox Android confines that cookie to the cookie jar assigned to that website. This way, no other websites can reach into the cookie jars that don’t belong to them and find out what the other websites’ cookies know about you. Now, you can say goodbye to those annoying ads following you and reduce the amount of information that companies gather about you whenever you go online.

Firefox’s Total Cookie Protection covers you across all your devices

Whether you’re browsing at your desk or your phone, now you’ll get Firefox’s strongest privacy protection to date. Firefox will confine cookies to the site where they were created, thus preventing tracking companies from using these cookies to track your browsing from site to site. To seamlessly work across your devices, sign up for a free Firefox Account. You’ll be able to easily pick up your last open tab between your devices. Bonus: You can also access your saved passwords from your other devices by signing up for a free Firefox Account.

Get your most secure and private Firefox Android today!

Download Firefox Android

For more on Firefox:

The post Firefox Android’s new privacy feature, Total Cookie Protection, stops companies from keeping tabs on your moves appeared first on The Mozilla Blog.

Niko MatsakisTo async trait or just to trait

One interesting question about async fn in traits is whether or not we should label the trait itself as async. Until recently, I didn’t see any need for that. But as we discussed the question of how to enable “maybe async” code, we realized that there would be some advantages to distinguishing “async traits” (which could contain async functions) from sync traits (which could not). However, as I’ve thought about the idea more, I’m more and more of the mind that we should not take this step — at least not now. I wanted to write a blog post diving into the considerations as I see them now.

What is being proposed?

The specific proposal I am discussing is to require that traits which include async functions are declared as async traits…

// The "async trait" (vs just "trait") would be required
// to have an "async fn" (vs just a "fn").
async trait HttpEngine {
    async fn fetch(&mut self, url: Url) -> Vec<u8>;
}

…and when you reference them, you use the async keyword as well…

fn load_data<H>(h: &mut impl async HttpEngine, urls: &[Url]) {
    //                       ----- just writing `impl HttpEngine`
    //                             would be an error
}

This would be a change from the support implemented in nightly today, where any trait can have async functions.

Why have “async traits” vs “normal” traits?

When authoring an async application, you’re going to define traits like HttpEngine that inherently involve async operations. In that case, having to write async trait seems like pure overhead. So why would we ever want it?

The answer is that not all traits are like HttpEngine. We can call HttpEngine an “always async” trait — it will always involve an async operation. But a lot of traits are “maybe async” — they sometimes involve async operations and sometimes not. In fact, we can probably break these down further: you have traits like Read, which involve I/O but have a sync and async equivalent, and then you have traits like Iterator, which are orthogonal from I/O.

Particularly for traits like Iterator, the current trajectory will result in two nearly identical traits in the stdlib: Iterator and AsyncIterator. These will be mostly the same apart from AsyncIterator having an async next function, and perhaps some more combinators. It’s not the end of the world, but it’s also not ideal, particularly when you consider that we likely want more “modes”, like a const Iterator, a “sendable” iterator, perhaps a fallible iterator (one that returns results), etc. This is of course the problem often referred to as the “color problem”, from Bob Nystrom’s well-known “What color is your function?” blog post, and it’s precisely what the “keyword generics” initiative is looking to solve.
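To make the duplication concrete, here is a sketch of such a pair, with the async fn hand-desugared to a boxed future so it compiles on today’s stable Rust (SyncIter, AsyncIter, Counter, and the no-op waker are all illustrative, not proposed APIs; the Item: 'static bound just keeps the boxed-future signature simple):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The near-duplicate pair: `next` is the only real difference.
#[allow(dead_code)]
trait SyncIter {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

trait AsyncIter {
    type Item: 'static; // keeps the boxed-future signature simple
    // stand-in for `async fn next(&mut self) -> Option<Self::Item>`
    fn next(&mut self) -> Pin<Box<dyn Future<Output = Option<Self::Item>> + '_>>;
}

struct Counter(u32);

impl AsyncIter for Counter {
    type Item = u32;
    fn next(&mut self) -> Pin<Box<dyn Future<Output = Option<u32>> + '_>> {
        Box::pin(async move {
            if self.0 < 3 {
                self.0 += 1;
                Some(self.0)
            } else {
                None
            }
        })
    }
}

// Minimal no-op waker so the future can be polled without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut c = Counter(0);
    // The async body never awaits, so a single poll completes it:
    assert_eq!(c.next().as_mut().poll(&mut cx), Poll::Ready(Some(1)));
    assert_eq!(c.next().as_mut().poll(&mut cx), Poll::Ready(Some(2)));
}
```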

Requiring an async keyword ensures consistency between “maybe” and “always” async traits…

It’s not really clear what a full solution to the “color problem” looks like. But whatever it is, it’s going to involve having traits with multiple modes. So instead of Iterator and AsyncIterator, we’ll have the base definition of Iterator and then a way to derive an async version, async Iterator. We can then call Iterator a “maybe async” trait, because it might be sync but it might be async. We might declare a “maybe async” trait using an attribute, like this:

#[maybe(async)]
trait Iterator {
    type Item;

    // Because of the #[maybe(async)] attribute,
    // the async keyword on this function means “if
    // this trait is in async mode, then this is an
    // async function”:
    async fn next(&mut self) -> Option<Self::Item>;
}

Now imagine I have a function that reads urls from some kind of input stream. This might be an async fn that takes an impl async Iterator as argument:

async fn read_urls(urls: impl async Iterator<Item = Url>) {
    //                        ----- specify async mode
    while let Some(u) = urls.next().await {
        //                          ----- needed because this is an async iterator
        ...
    }
}

But now let’s say I want to combine this (async) iterator of urls and use an HttpEngine (our “always async” trait) to fetch them:

async fn fetch_urls(
    urls: impl async Iterator<Item = Url>,
    engine: impl HttpEngine,
) {
    while let Some(u) = urls.next().await {
        let data = engine.fetch(u).await;
        ...
    }
}

There’s nothing wrong with this code, but it might be a bit surprising that I have to write impl async Iterator but I just write impl HttpEngine, even though both traits involve async functions. I can imagine that it would sometimes be hard to remember which traits are “always async” versus which ones are only “maybe async”.

…which also means traits can go from “always” to “maybe” async without a major version bump.

There is another tricky bit: imagine that I am authoring a library and I create a “always async” HttpEngine trait to start:

trait HttpEngine {
    async fn fetch(&mut self, url: Url) -> Vec<u8>;
}

but then later I want to issue a new version that offers a sync and an async version of HttpEngine. I can’t add a #[maybe(async)] to the trait declaration because, if I do so, then code using impl HttpEngine would suddenly be getting the sync version of the trait, whereas before they were getting the async version.

In other words, unless we force people to declare async traits up front, then changing a trait from “always async” to “maybe async” is a breaking change.

But writing async Trait for traits that are always async is annoying…

The points above are solid. But there are some flaws. The most obvious is that having to write async for every trait that uses an async function is likely to be pretty tedious. I can easily imagine that people writing async applications are going to use a lot of “always async” traits and I imagine that, each time they write impl async HttpEngine, they will think to themselves, “How many times do I have to tell the compiler this is async already?! We get it, we get it!!”

Put another way, the consistency argument (“how will I remember which traits need to be declared async?”) may not hold water in practice. I can imagine that for many applications the only “maybe async” traits are the core abstractions coming from libraries, like Iterator, and most of the other code is just “always async”. So actually it’s not that hard to remember which is which.

…and it’s not clear that traits will go from “always” to “maybe” async anyway…

But what about semver violations? Well, if my thesis above is correct, then it’s also true that there will be relatively few traits that need to go from “always async” to “maybe async”. Moreover, I imagine most libraries will know up front whether they expect to be sync or not. So maybe it’s not a big deal that this is a breaking change.

…and trait aliases would give a workaround for “always -> maybe” transitions anyway…

So, maybe it won’t happen in practice, but let’s imagine that we did define an always async HttpEngine and then later want to make the trait “maybe async”. Do we absolutely need a new major version of the crate? Not really, there is a workaround. We can define a new “maybe async” trait — let’s call it HttpFetch and then redefine HttpEngine in terms of HttpFetch:

// This is a trait alias. It’s an unstable feature that I would like to stabilize.
// Even without a trait alias, though, you could do this with a blanket impl.
trait HttpEngine = async HttpFetch;

#[maybe(async)]
trait HttpFetch {
    async fn fetch(&mut self, url: Url) -> Vec<u8>;
}

This obviously isn’t ideal: you wind up with two names for the same underlying trait. Maybe you deprecate the old one. But it’s not the end of the world.
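For reference, here is a sketch of the blanket-impl version of the trick in sync mode (maybe-async traits don’t exist yet, so the async mode is omitted; Dummy and load are hypothetical names, and url is simplified to &str):

```rust
// The new "real" trait:
trait HttpFetch {
    fn fetch(&mut self, url: &str) -> Vec<u8>;
}

// The old name is kept as a shell trait that forwards to the new one
// via a blanket impl:
trait HttpEngine: HttpFetch {}
impl<T: HttpFetch + ?Sized> HttpEngine for T {}

struct Dummy;
impl HttpFetch for Dummy {
    fn fetch(&mut self, _url: &str) -> Vec<u8> { vec![0xde, 0xad] }
}

// Existing code written against `HttpEngine` keeps compiling:
fn load(engine: &mut impl HttpEngine) -> Vec<u8> {
    engine.fetch("https://example.com")
}

fn main() {
    assert_eq!(load(&mut Dummy), vec![0xde, 0xad]);
}
```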

…and requiring async composes poorly with supertraits and trait aliases…

Actually, that last example brings up an interesting point. To truly ensure consistency, it’s not enough to say that “traits with async functions must be declared async”. We also need to be careful what we permit in trait aliases and supertraits. For example, imagine we have a trait UrlIterator that has an async Iterator as a supertrait…

trait UrlIterator: async Iterator<Item = Url> { }

…now people could write functions that take an impl UrlIterator, but it will still require await when you invoke its methods. So we didn’t really achieve consistency after all. The same thing would apply with a trait alias like trait UrlIterator = async Iterator<Item = Url>.

It’s possible to imagine a requirement like “to have a supertrait that is async, the trait must be async”, but — to me — that feels non-compositional. I’d like to be able to declare a trait alias trait A = … and have the … be any sort of trait bounds, whether they’re async or not. It feels funny to have the async propagate out of the … and onto the trait alias A.

…and, while this decision is hard to reverse, it can be reversed.

So, let’s say that we were to stabilize the ability to add async functions to any trait. And then later we find that we actually want to have maybe async traits and that we wish we had required people to write async explicitly all the time, because consistency and semver. Are we stuck?

Well, not really. There are options here. For example, we might make it possible (but not required) to write async, and then lint and warn when people don’t. Perhaps in another edition, we would make it mandatory. This is basically what we did with the dyn keyword. Then we could declare that changing a trait from always-async to maybe-async is not considered worthy of a major version, because people’s code that follows the lints and warnings will not be affected. If we had transitioned so that all code in the new edition required an async keyword even for “always async” traits, we could let people declare a trait to be “maybe async but only in the new edition”, which would avoid all breakage entirely.

In any case, I don’t really want to do those things. It’d be embarrassing and confusing to stabilize SAFIT and then decide that “oh, no, you have to declare traits to be async”. I’d rather we just think through the arguments now and make a call. But it’s always good to know that, just in case you’re wrong, you have options.

My (current) conclusion: YAGNI

So which way to go? I think the question hinges a lot on how common we expect “maybe async” code to be. My expectation is that, even if we do support it, “maybe async” will be fairly limited. It will mostly apply to (a) code like Iterator that is orthogonal from I/O and (b) core I/O primitives like the Read trait or the File type. If we’re especially successful, then crates like reqwest (which currently offers both a sync and async interface) would be able to unify those into one. But application code I expect to largely be written to be either sync or async.

I also think that it’ll be relatively unusual to go from “always async” to “maybe async”. Not impossible, but unusual enough that either making a new major version or using the “renaming” trick will be fine.

For this reason, I lean towards NOT requiring async trait, and instead allowing async fn to be added to any trait. I am still hopeful we’ll add “maybe async” traits as well, but I think there won’t be a big problem of “always async” traits needing to change to maybe async. (Clearly we are going to want to go from “never async” to “maybe async”, since there are lots of traits like Iterator in the stdlib, but that’s a non-issue.)

The other argument in favor is that it’s closer to what we do today. There are lots of people using #[async_trait] and I’ve never heard anyone say “it’s so weird that you can write T: HttpEngine and don’t have to write T: async HttpEngine”. At minimum, if we were going to change to requiring the “async” keyword, I would want to give that change some time to bake on nightly before we stabilized it. This could well delay stabilization significantly.

If, in contrast, you believed that lots of code was going to be “maybe async”, then I think you would probably want the async keyword to be mandatory on traits. After all, since most traits are maybe async anyway, you’re going to need to write it a lot of the time.

  1. I can feel you fixating on the #[maybe(async)] syntax. Resist the urge! There is no concrete proposal yet. 

Tiger OakesTricks for easier right-to-left CSS styling

Using CSS custom properties for elements without logical properties.

The Mozilla BlogThe Depths of Wikipedia creator on finding the goofy corners of the web

A photo shows Annie Rauwerda smiling. Surrounding illustration shows an icon for a sheet of paper and a dialogue box that reads "LOL".<figcaption class="wp-element-caption">Annie Rauwerda is the creator of Depths of Wikipedia, which highlights weird and unexpected entries from the online encyclopedia. Credit: Fuzheado, CC BY-SA 4.0 via Wikimedia Commons / Nick Velazquez / Mozilla</figcaption>

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later, and what sites and forums shaped them.

This month we chat with Annie Rauwerda, the woman behind Depths of Wikipedia, which highlights weird and unexpected entries from the online encyclopedia. The project started as an Instagram account and has since expanded to Twitter, TikTok and a live comedy show. We talk to her about her obsession with memes, editing Wikipedia pages and finding the goofy corners of the internet. 

What is your favorite corner of the internet? 

I really like goofy corners of the web, like this archive of computer mice and this archive of rotating food. I love Discord so much. I love Twitter most of the time. And I love Wikipedia, particularly the timeline of the far future. 

What is an internet deep dive that you can’t wait to jump back into?

I’m obsessed with finding the origin of meme images. I recently tracked down the hot pink bitch named Breakfast and the Wikipedia high five couple. I spend a lot of time on the lost media subreddit.

What is the one tab you always regret closing?

I edit Wikipedia a lot, and occasionally I come across a guideline or gadget that’s super helpful but hard to find (Wikipedia’s guidelines are a total labyrinth). Thank god for command + shift + t!!

What can you not stop talking about on the internet right now? 

I’ve been really into crossword construction! And making silly Venn diagrams!

What was the first online community you engaged with?

When I was in elementary school, my dad quit his job to be a stay-at-home parent and started pouring tons of energy into being the coolest dad ever. I grew up in Grand Rapids, Michigan, where there’s a ton of snow in the winter, and he would make massive snow castles. It was right as YouTube was getting big, and he posted a video of the fort. We got comments from people around the world. It was so cool!

What articles and videos are in your Pocket waiting to be read/watched right now?

I love all the features in Quanta. And I’ve been meaning to finish The Curse of Xanadu in Wired from 1995. 

If you could create your own corner of the internet, what would it look like?

You know how people create extensive, time-intensive projects dedicated to random things like bread tags or candy cross-sections or notes left in library books or etymology? Probably something like that. I love archives, especially silly archives!

Wikipedia turned 22 this year. What about it keeps contributors like you so dedicated to editing and maintaining its pages?

There are a lot of canned answers I could give about a shared commitment to free knowledge, and I’m sure that’s part of it, but the real reason I edit is that it’s pretty fun. You get addicted.

Annie Rauwerda has been running Depths of Wikipedia since March 2020, when she started it as a sophomore at the University of Michigan. She’s also a writer whose work has appeared in Input Magazine and Slate.

Save and discover the best articles, stories and videos on the web

Get Pocket

The post The Depths of Wikipedia creator on finding the goofy corners of the web appeared first on The Mozilla Blog.

The Mozilla BlogReal talk: Did your 5-year-old just tease you about having too many open tabs?

An illustration shows various internet icons surrounding an internet browser window that reads, "Firefox and YouGov parenting survey."<figcaption class="wp-element-caption">Credit: Nick Velazquez / Mozilla</figcaption>

No one ever wanted to say “tech-savvy toddler” but here we are. It’s not like you just walked into the kitchen one morning and your kid was sucking on a binky and editing Wikipedia, right? Wait, really? It was pretty close to that? Well, for years there’s been an ongoing conversation on internet usage in families’ lives, and in 2020, the pandemic made us come face-to-face with that elephant in the room, the internet. There was no way around it. We went online for everything from virtual classrooms for kids, playing video games with friends, conducting video meetings with co-workers, and of course, streaming movies and TV shows. The internet’s role in our lives became a more permanent fixture in our family. It’s about time we gave it a rethink.

We conducted a survey with YouGov to get an understanding of how families use the internet in the United States, Canada, France, Germany and the United Kingdom. In November, we shared a preview with top insights from the report which included:

  • Many parents believe their kids have no idea how to protect themselves online. About one in three parents in France and Germany don’t think their child “has any idea on how to protect themselves or their information online.” In the U.S., Canada and the U.K., about a quarter of parents feel the same way.
  • U.S. parents spend the most time online compared to parents in other countries, and so do their children. Survey takers in the U.S. reported an average of seven hours of daily internet use via web browsers, mobile apps and other means. Asked how many hours their children spend online on a typical day, U.S. parents said an average of four hours. That’s compared to two hours of internet use among children in France, where parents reported spending about five hours online every day. No matter where a child grows up, they spend more time online per day as they get older. 
  • Yes, toddlers use the web. Parents in North America and Western Europe reported introducing their kids to the internet some time between two and eight years old.  North America and the U.K. skew younger, with kids first getting introduced online between two and five for about a third of households.  Kids are introduced to the internet in France and Germany when they are older, between eight to 14 years old.

Today, we’re sharing more of the report, as well as our insights of what the numbers are telling us. Below is a link to the report:

An illustration reads: The Tech Talk

Toddlers, tablets, and the ‘Tech Talk’

Download our report

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

The post Real talk: Did your 5-year-old just tease you about having too many open tabs? appeared first on The Mozilla Blog.

The Mozilla BlogAd blocker roundup: 6 ad blockers to improve your internet experience

Ad blockers are a specific type of browser extension (i.e. software that adds new features or functionality to Firefox). Using ad blockers, you can eliminate distraction and frustration associated with online ads popping up across your internet travels. Here are six of our favorite ad blockers that make the web a whole lot easier to enjoy. 

uBlock Origin

A gold standard among ad blockers, uBlock Origin is extremely efficient at stopping all types of internet ads, including video pre-rolls and pop-ups. It works great by default, but also affords users plenty of customization options should you want to tweak your content filters. 

AdBlocker Ultimate

AdBlocker Ultimate is also very capable at removing all varieties of internet ads. There are no “acceptable” ads or whitelisted advertisers. The extension also blocks many trackers and helps detect malware.

AdGuard AdBlocker 

AdGuard AdBlocker is a highly effective ad blocker that works well on Facebook and YouTube. It also smartly allows certain types of ads by default, such as search ads, since those may actually be helpful to your searches, as well as “self-promotion” ads (e.g. special deals on site-specific shopping platforms like “50% off today only!” sales). 


Ghostery

Block ads and most hidden browser trackers by default with Ghostery. This slick extension also scores points for being very intuitive and easy to operate. It’s simple to set Ghostery’s core features, like enabling or disabling Enhanced Ad Blocking and Anti-tracking. 

Popup Blocker (strict)

Popup Blocker (strict) blocks all pop-up requests from any website by default. However, a handy notification window gives you the control to accept, reject or open pop-ups as you please. 

Webmail Ad Blocker

Use Webmail Ad Blocker to clean up your web-based email by removing ads littering the inboxes of Gmail, Hotmail, Yahoo Mail and more. 

Take control of your online advertising experience by checking out our latest Ad Blockers Collection. And if you’re really into add-ons, follow our Mozilla Add-ons Blog and consider building your own extension to add to the collection!

Get Firefox

Get the browser that protects what’s important

The post Ad blocker roundup: 6 ad blockers to improve your internet experience appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: February Progress Report

While K-9 Mail is developed in the open, following its development on GitHub can be somewhat tedious for a casual observer. So we’re trying something new and summarizing the most notable things that happened in the past month as we head down the road to Thunderbird for Android.

If you missed the exciting news last summer, K-9 Mail is now part of the Thunderbird family, and we’re working steadily on transforming it into Thunderbird for Android. If you want to learn more, check out the Android roadmap, this blog post, and this FAQ.

New Full-Time Developer 🎉

As already announced on Mastodon, in February Wolf Montwé joined the team. He is working full-time on K-9 Mail development.

What We’ve Been Working On

Message view redesign

In July 2022 ByteHamster proposed a change to the message view header. cketti’s decision to take a more holistic approach sent us on a months-long journey redesigning this screen in close cooperation with the Thunderbird design team. A first version finally shipped with K-9 Mail v6.505 (beta) at the start of February. The UI has since been refined based on user feedback.

<figcaption class="wp-element-caption">Refreshed Message View</figcaption>
<figcaption class="wp-element-caption">Message Details</figcaption>

The next stable release will most likely ship with what is included in the latest beta version. But during our design sessions we’ve looked at many other improvements, e.g. selecting which remote images to load (or not load), attachment handling, and more. So expect smaller updates to this screen in the future.

Message list

We started making small changes to the message list screen. It’s mostly about text alignment and whitespace. But we’ve also enlarged the click areas for the contact image and the star. That should make it much less likely that you accidentally open a message when you meant to select or star it.

We also added three different message list density settings: compact, default, relaxed.

<figcaption class="wp-element-caption">Message List Density Settings</figcaption>
<figcaption class="wp-element-caption">Compact Message List Density</figcaption>
<figcaption class="wp-element-caption">Default Message List Density</figcaption>
<figcaption class="wp-element-caption">Relaxed Message List Density</figcaption>

A first version of these changes can be found in K-9 Mail v6.509 (beta). We’re looking forward to getting your feedback on this.

Bug fixes

Most of the bugs we fixed in February were related to newly added functionality. We also fixed a couple of (rare) crashes that we received via the Google Play Developer Console. Nothing too exciting.


The post Thunderbird for Android / K-9 Mail: February Progress Report appeared first on The Thunderbird Blog.

The Rust Programming Language BlogAnnouncing Rust 1.68.0

The Rust team is happy to announce a new version of Rust, 1.68.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.68.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.68.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.68.0 stable

Cargo's sparse protocol

Cargo's "sparse" registry protocol has been stabilized for reading the index of crates, along with infrastructure at https://index.crates.io/ for those published in the primary crates.io registry. The prior git protocol (which is still the default) clones a repository that indexes all crates available in the registry, but this has started to hit scaling limitations, with noticeable delays while updating that repository. The new protocol should provide a significant performance improvement when accessing crates.io, as it will only download information about the subset of crates that you actually use.

To use the sparse protocol with crates.io, set the environment variable CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse, or edit your .cargo/config.toml file to add:

[registries.crates-io]
protocol = "sparse"

The sparse protocol is currently planned to become the default for crates.io in the 1.70.0 release in a few months. For more information, please see the prior announcement on the Inside Rust Blog, as well as RFC 2789 and the current documentation in the Cargo Book.

Local Pin construction

The new pin! macro constructs a Pin<&mut T> from a T expression, anonymously captured in local state. This is often called stack-pinning, but that "stack" could also be the captured state of an async fn or block. This macro is similar to some crates, like tokio::pin!, but the standard library can take advantage of Pin internals and temporary lifetime extension for a more expression-like macro.

/// Runs a future to completion.
fn block_on<F: Future>(future: F) -> F::Output {
    let waker_that_unparks_thread = todo!();
    let mut cx = Context::from_waker(&waker_that_unparks_thread);
    // Pin the future so it can be polled.
    let mut pinned_future = pin!(future);
    loop {
        match pinned_future.as_mut().poll(&mut cx) {
            Poll::Pending => thread::park(),
            Poll::Ready(result) => return result,
        }
    }
}

In this example, the original future will be moved into a temporary local, referenced by the new pinned_future with type Pin<&mut F>, and that pin is subject to the normal borrow checker to make sure it can't outlive that local.

Default alloc error handler

When allocation fails in Rust, APIs like Box::new and Vec::push have no way to indicate that failure, so some divergent execution path needs to be taken. When using the std crate, the program will print to stderr and abort. As of Rust 1.68.0, binaries which include std will continue to have this behavior. Binaries which do not include std, only including alloc, will now panic! on allocation failure, which may be further adjusted via a #[panic_handler] if desired.

In the future, it's likely that the behavior for std will also be changed to match that of alloc-only binaries.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

  • As previously announced, Android platform support in Rust is now targeting NDK r25, which corresponds to a minimum supported API level of 19 (KitKat).

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.68.0

Many people came together to create Rust 1.68.0. We couldn't have done it without all of you. Thanks!

This Week In RustThis Week in Rust 485

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Rust Nation 2023
Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is man-in-the-middle-proxy, a - surprise! - man in the middle proxy.

Thanks to Emanuele Em for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

376 pull requests were merged in the last week

Rust Compiler Performance Triage

A really quiet week with almost all regressions being due to noise in benchmarks that show "bimodality" in codegen that can cause swings in performance from one change to the other. The only true performance change was a two-line change by @nnethercote to remove a redundant function call which led to a 0.3% improvement in performance across roughly 15 benchmarks.

Triage done by @rylev. Revision range: 31f858d9..8f9e09ac


| (instructions:u)           | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)   | -     | -              | 0     |
| Regressions ❌ (secondary) | 2.0%  | [1.2%, 2.8%]   | 8     |
| Improvements ✅ (primary)  | -0.4% | [-0.7%, -0.2%] | 7     |
| Improvements ✅ (secondary)| -1.0% | [-1.8%, -0.1%] | 31    |
| All ❌✅ (primary)         | -0.4% | [-0.7%, -0.2%] | 7     |

7 Regressions, 8 Improvements, 2 Mixed; 7 of them in rollups. 35 artifact comparisons made in total.

Full report

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New and Updated RFCs
  • No New or Updated RFCs were created this week.
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-03-08 - 2023-04-05 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

(…) as much as i dislike the cargo-geiger concept, the name … kind of works

unsafe is a lot like uranium. it’s just one more metal ore you can process, refine, and machine. it doesn’t combust in atmosphere, it doesn’t corrode or make weird acids. unless you go out of your way to make it dangerous you don’t even have to worry about critical masses. you can work with it pretty normally most of the time

but if you don’t know exactly what it is, what it does, and how to work with it, it will cause mysterious illnesses that only crop up long after you’ve stopped touching it

Alexander Payne on /r/rust

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlySurf with more Perf(ormance) – These Weeks in Firefox: Issue 133


Highlights

  • The DevTools team has improved Pretty Printing performance in the Debugger by ~30%! This improvement is available in Beta and currently slated to go out in Firefox 111.
  • Mak has changed the frecency (the URL bar ranking algorithm) recalculation to happen on idle, rather than immediately when bookmarks are added/removed, or when visits are removed. This allows for more performant recalculation during large operations!
  • Daisuke has completed the conversion of bookmarks from the old Places notifications to new ones, and has finally removed the nsINavBookmarksObserver interface. The new notifications, originally designed with the help of Doug Thayer from the Perf Team, are much more performant and detailed, and will improve the performance of history and bookmarks consumers.
  • Our WebExtensions team has made it easier for extension authors to migrate from Manifest V2 to Manifest V3:
    • Bug 1811443 (landed in Firefox 111) introduced an optional “background.type” manifest property that can be set to either “classic” or “module”. When set to “module”, all the background scripts are loaded as ES Modules in the auto-generated background and event pages
  • All of the Colorways are now available as themes at addons.mozilla.org. If there was one you never got to try, now’s your chance!

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • CanadaHonk [:CanadaHonk]
  • Itiel
  • portiawuu
  • Razvan Cojocaru
  • steven w

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Colorway built-in themes migration:
    • A brief mention about the Colorway themes migrating to AMO is in the Firefox 110 release notes
    • Colorway closet section in about:addons themes list view has been disabled along with the last active colorways collection reaching its expire date – Bug 1801044
    • Enabling Colorway built-in themes to be migrated to themes hosted on addons.mozilla.org (gated by an about:config pref, disabled by default) – Bug 1806701, Bug 1810231
    • QA verification of the Colorway migration has been completed, and it is now enabled by default and riding the Firefox 111 release train (Bug 1808589)
WebExtensions Framework
  • As part of fixing a regression (originally introduced in Firefox 96 by Bug 1675456) in Bug 1792559, we landed an initial (partial) fix for the issue with extension exceptions raised from a WebExtensions API event listener being missing from the add-on debugging toolbox (which technically is logged in the Browser Console when the multiprocess mode is enabled, but logged without the expected source url and line number from the extension code raising the exception in the error stack trace)
    • Bug 1792559 is a partial fix because, without further changes, we can’t include the full error stack from the original exception (in particular when the extension code is throwing “undefined” or an object that is not an instance of “Error”). Bug 1810582 is tracking further follow-ups to achieve a more comprehensive fix.
  • Bug 1808459 made sure that if the “extensions.getAddons.showPane” pref is set to “false” then clicking the extensions button when there is no extension installed is going to open the about:addons “extensions list” view instead of opening the about:addons “recommendation” view (which is disabled by the pref).
    • Thanks to Malte Jürgens for contributing the fix!
  • Bug 1795213 fixed a Fluent error logged when the add-on install popup was being disabled (as a side note: besides the fluent error being logged, there wasn’t any issue from a user perspective)
WebExtension APIs
  • As part of the ongoing work on the declarativeNetRequest API:
    • Bug 1811947 introduced a permission warning for the “declarativeNetRequestFeedback” permission
    • Bug 1816319 exposed the API namespace properties for the DYNAMIC_RULESET_ID and SESSION_RULESET_ID constants

Developer Tools

  • Christian Sonne landed a patch to display subgrid in the auto-complete suggestions for grid-template-* and grid properties in the Rules view (bug)
    • The CSS rule pane from the Inspector DevTool is shown, with a "grid-template" rule selected. "subgrid" is shown as an item in an autocomplete panel.

      CSS Subgrid is the bees knees, and now it’s easier to use in the Rules pane of the Inspector.

  • Emilio fixed an issue that would leave the page in a weird state when closing Responsive Design Mode (bug)
  • Arai fixed eager evaluation so it does not assume getters are effect-free anymore (bug)
  • Nicolas added support for “Change array by copy” methods in eager evaluation (bug)
  • Julian fixed the “Disable javascript” feature (bug)
  • Alex made it possible (again) to select or click html response in the Network panel (bug)
  • Hubert is working on the Debugger “Search in all files” feature:
    • We now show results from minified and pretty-printed tabs (bug), as well as matches from node_modules folder  (bug)
    • We don’t search within files that are blackboxed anymore (bug)
    • A side-by-side comparison of the DevTools Debugger search result pane for Firefox 110 and 111. In the former, a search query is showing no results. In the latter, several results are appearing because they match strings contained within the source files.

      This should make it much easier to find what you’re looking for!


WebDriver BiDi
  • Sasha added support for two new commands script.addPreloadScript (bug) and script.removePreloadScript (bug) to schedule scripts guaranteed to run before any other content script (used by test frameworks to preload helpers for instance).
  • Sasha fixed a bug which prevented us from using preload scripts and events simultaneously (bug)
  • Henrik released a new version of geckodriver v0.32.2 which fixes a semver issue introduced with the previous version (bug).

ESMification status

Migration Improvements (CalState LA Project)


Performance Tools (aka Firefox Profiler)

  • Improved timer markers make it easier to see what timers are running when and at which frequency. Add the Timer thread to the thread filter to see them. Example profile
    • The marker chart from the Firefox Profiler UI is shown, with a large series of timers being displayed in the char. The markers show the timer duration as text on top of the marker.

      That’s a lot of timers! This should help us hunt down timers that don’t need to happen so frequently, which will save on CPU time.

Search and Navigation

Storybook / Reusable components

Mitchell BakerExpanding Mozilla’s Boards in 2023

As Mozilla reaches its 25th anniversary this year, we’re working hard to set up our ‘next chapter’ — thinking bigger and being bolder about how we can shape the coming era of the internet. We’re working to expand our product offerings, creating multiple options for consumers, audiences and business models. We’re growing our philanthropic and advocacy work that promotes trustworthy AI. And, we’re creating two new Mozilla companies: Mozilla.ai, to develop a trustworthy open source AI stack, and Mozilla Ventures, to invest in responsible tech companies. Across all of this, we’ve been actively recruiting new leaders who can help us build Mozilla for this next era.

With all of this in mind, we are seeking three new members for the Mozilla Foundation Board of Directors. These Board members will help grow the scope and impact of the Mozilla Project overall, working closely with the Boards of the Mozilla Corporation, Mozilla.ai, and Mozilla Ventures. At least one of the new Board members will play a central role in guiding the work of the Foundation’s charitable programs, which focuses on movement building and trustworthy AI.

What is the role of a Mozilla board member?

I’ve written in the past about the role of the Board of Directors at Mozilla.

At Mozilla, our board members join more than just a board; they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for executives to do their jobs. We feel differently. We work openly and transparently, and want Board members to be part of the team and part of the community.

It’s worth noting that Mozilla is an unusual organization. As I wrote in our most recent annual report:

Mozilla is a rare organization. We’re activists for a better internet, one where individuals and societies benefit more from the effects of technology, and where competition brings consumers choices beyond a small handful of integrated technology giants.

We’re activists who champion change by building alternatives. We build products and compete in the consumer marketplace. We combine this with advocacy, policy, and philanthropic programs connecting to others to create change. This combination is rare.

It’s important that our Board members understand all this, including why we build consumer products and why we have a portfolio of organizations playing different roles. It is equally important that the Boards of our commercial subsidiaries understand why we run charitable programs within Mozilla Foundation that complement the work we do to develop products and invest in responsible tech companies.

What are we looking for?

At the highest level, we are seeking people who can help our global organization grow and succeed — and who ensure that we advance the work of the Mozilla Manifesto over the long run. Here is the full job description:

There are a variety of qualities that we seek in all Board members, including a cultural sense of Mozilla and a commitment to an open, transparent, community driven approach. We are also focused on ensuring the diversity of the Board, and fostering global perspectives.

As we recruit, we typically look to add specific skills or domain expertise to the Board. Current examples of areas where we’d like to add expertise include:

  1. Mission-based business — experience creating, running or overseeing organizations that combine public benefit and commercial activities towards a mission.
  2. Global, public interest advocacy – experience leading successful, large-scale public interest advocacy organizations with online mobilization and shaping public discourse on key issues at the core.
  3. Effective ‘portfolio’ organizations – experience running or overseeing organizations that include a number of divisions, companies or non-profits under one umbrella, with an eye to helping the portfolio add up to more than the sum of its parts.

Finding the right people who match these criteria and who have the skills we need takes time. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.

We want your suggestions

We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, please send them our way. We will use real discretion with the names you send us.

Mitchell BakerIn Memoriam: Gervase Markham

Gerv was Mozilla’s first intern.  He arrived in the summer of 2001, when Mozilla staff was still AOL employees.  It was a shock that AOL had allocated an intern to the then-tiny Mozilla team, and we knew instantly that our amazingly effective volunteer in the UK would be our choice.

When Gerv arrived a few things about him jumped out immediately.  The first was a swollen, shiny, bright pink scar on the side of his neck.  He quickly volunteered that the scar was from a set of surgeries for his recently discovered cancer.  At the time Gerv was 20 or so, and had less than a 50% chance of reaching 35.  He was remarkably upbeat.

The second thing that immediately became clear was Gerv’s faith, which was the bedrock of his response to his cancer.  As a result the scar was a visual marker that led straight to a discussion of faith. This was the organizing principle of Gerv’s life, and nearly everything he did followed from his interpretation of how he should express his faith.

Eventually Gerv felt called to live his faith by publicly judging others in politely stated but damning terms.  His contributions to expanding the Mozilla community would eventually become shadowed by behaviors that made it more difficult for people to participate.  But in 2001 all of this was far in the future.

Gerv was a wildly active and effective contributor almost from the moment he chose Mozilla as his university-era open source project.  He started as a volunteer in January 2000, doing QA for early Gecko builds in return for plushies, including an early program called the Gecko BugAThon.  (With gratitude to the Internet Archive for its work archiving digital history and making it publicly available.)

Gerv had many roles over the years, from volunteer to mostly-volunteer to part-time, to full-time, and back again.  When he went back to student life to attend Bible College, he worked a few hours a week, and many more during breaks.  In 2009 or so, he became a full-time employee and remained one until early 2018 when it became clear his cancer was entering a new and final stage.

Gerv’s work varied over the years.  After his start in QA, Gerv did trademark work and a ton of FLOSS licensing work, supported Thunderbird and Bugzilla, handled Certificate Authority and policy work, and set up the MOSS grant program, to name a few areas. Gerv had a remarkable ability to get things done.  In the early years, Gerv was also an active ambassador for Mozilla, and many Mozillians found their way into the project during this period because of Gerv.

Gerv’s work life was interspersed with a series of surgeries and radiation as new tumors appeared. Gerv would methodically inform everyone he would be away for a few weeks, and we would know he had some sort of major treatment coming up.

Gerv’s default approach was to see things in binary terms — yes or no, black or white, on or off, one or zero.  Over the years I worked with him to moderate this trait so that he could better appreciate nuance and the many “gray” areas on complex topics.  Gerv challenged me, infuriated me, impressed me, enraged me, surprised me.  He developed a greater ability to work with ambiguity, which impressed me.

Gerv’s faith did not have ambiguity, at least none that I ever saw.  Gerv was crisp.  He had very precise views about marriage, sex, gender and related topics.  He was adamant that his interpretation was correct, and that his interpretation should be encoded into law.  These views made their way into the Mozilla environment.  They have been traumatic and damaging, both to individuals and to Mozilla overall.

The last time I saw Gerv was at FOSDEM, Feb 3 and 4.   I had seen Gerv only a few months before in December and I was shocked at the change in those few months.  Gerv must have been feeling quite poorly, since his announcement about preparing for the end was made on Feb 16.  In many ways, FOSDEM is a fitting final event for Gerv — free software, in the heart of Europe, where impassioned volunteer communities build FLOSS projects together.

To memorialize Gerv’s passing, it is fitting that we remember all of Gerv —  the full person, good and bad, the damage and trauma he caused, as well as his many positive contributions.   Any other view is sentimental.  We should be clear-eyed, acknowledge the problems, and appreciate the positive contributions.  Gerv came to Mozilla long before we were successful or had much to offer besides our goals and our open source foundations.  As Gerv put it, he’s gone home now, leaving untold memories around the FLOSS world.

Mitchell BakerBusting the myth that net neutrality hampers investment

This week I had the opportunity to share Mozilla’s vision for an Internet that is open and accessible to all with the audience at MWC Americas.

I took this opportunity because we are at a pivotal point in the debate between the FCC, companies, and users over the FCC’s proposal to roll back protections for net neutrality. Net neutrality is a key part of ensuring freedom of choice to access content and services for consumers.

Earlier this week Mozilla’s Heather West wrote a letter to FCC Chairman Ajit Pai highlighting how net neutrality has fueled innovation in Silicon Valley and can do so still across the United States.

The FCC claims these protections hamper investment and are bad for business. And they may vote to end them as early as October. Chairman Pai calls his rule rollback “restoring internet freedom” but that’s really the freedom of the 1% to make decisions that limit the rest of the population.

At Mozilla we believe the current rules provide vital protections to ensure that ISPs don’t act as gatekeepers for online content and services. Millions of people commented on the FCC docket, including those who commented through Mozilla’s portal, arguing that removing these core protections will hurt consumers and small businesses alike.

Mozilla is also very much focused on the issues preventing people from coming online beyond the United States. Before addressing the situation in the U.S., journalist Rob Pegoraro asked me what we discovered in the research we recently funded in seven other countries into the impact of zero rating on Internet use:

(Video courtesy: GSMA)

If you happen to be in San Francisco on Monday 18th September please consider joining Mozilla and the Internet Archive for a special night: The Battle to Save Net Neutrality. Tickets are available here.

You’ll be able to watch a discussion featuring former FCC Chairman Tom Wheeler; Representative Ro Khanna; Mozilla Chief Legal and Business Officer Denelle Dixon; Amy Aniobi, Supervising Producer, Insecure (HBO); Luisa Leschin, Co-Executive Producer/Head Writer, Just Add Magic (Amazon); Malkia Cyril, Executive Director of the Center for Media Justice; and Dane Jasper, CEO and Co-Founder of Sonic. The panel will be moderated by Gigi Sohn, Mozilla Tech Policy Fellow and former Counselor to Chairman Wheeler. It will discuss how net neutrality promotes democratic values, social justice and economic opportunity, what the current threats are, and what the public can do to preserve it.


Mozilla ThunderbirdMeet The Team: Alex Castellani, Product Design Manager

Welcome to a brand new feature called “Meet The Team!” In this ongoing series of conversations, I want to introduce you to the people behind the software you use every day. When and why did they fall in love with technology? What does their day-to-day schedule look like? What attracts them to open-source projects? And what’s their playlist of choice when hacking away on Thunderbird?

Let’s kick it off by getting to know the person leading the charge on Thunderbird’s modern redesign: Product Design manager, Alex Castellani.

Alex’s Origin Story

I’ve always been fascinated by learning about the initial spark that ignited someone’s love for technology, open source, or programming. Everyone loves a good origin story, right? So that’s one of the first questions I’m asking every member of the Thunderbird team.

<figcaption class="wp-element-caption">In his own words: Listen to Alex share how he fell in love with web design and programming.</figcaption>

At the young age of 12, after his father taught him how to paint, Alex initially wanted to be a comic book artist and attend art school. But a few years later, this wondrous new thing called the Internet distracted him.

“I started doing some web design very loosely, and I was participating in a web design forum where I was doing the design and other people were doing the development,” Alex says. “But I was getting very upset about the developers not recreating my design perfectly! Because I am who I am, I said to myself ‘I’m sure I can do better than them, so I will teach myself.’ So I started learning PHP and HTML — by the way, at that time CSS didn’t even have floating elements.”

It didn’t take long for Alex to fall in love with coding. But his journey took another step forward thanks to, of all things, an Italian-language Star Trek role-playing game.

“I was fascinated by how they were doing online activities and multiple people chatting,” Alex says. “I found out it was built on PHP and MySQL, and I wanted to learn how to build it myself as well. And yeah, I’ve never had a full night’s sleep ever since!”

Pricey, Proprietary Tools: A Gateway to Open-Source

Alex’s fondness for coding and design would eventually lead him to discover the concept of open source. But first, he learned a hard lesson about what “proprietary” meant in the real world.

<figcaption class="wp-element-caption">In his own words: The pricey proprietary tools that led Alex to open source</figcaption>

“After High School, I wanted to be an architect,” Alex says. “So I went to university — and dropped out after 3 months because I hated it so much.”

Alex came from what he describes as a blue collar family, and while he didn’t live an uncomfortable life, certain things were unaffordable luxuries.

“I couldn’t afford a phone,” Alex says. “I couldn’t even afford textbooks, especially old, important architecture textbooks that cost hundreds of Euros. So I tried to sneak out, grab them from the library and photocopy them. But that wasn’t even allowed because these restrictive policies were in place.”

Alex tells me he felt shut out. Blocked from accessing the tools he needed.

“And even little things like, you need this specific type of paper, this specific pencil, this specific compass,” he explains. “You need all these tools that are so expensive…”

After coming home from university, a glimmer of light appeared. A closer look at the burgeoning internet revealed things like website developers sharing HTML source code for the sites he visited, and freely available documentation showing how to write CSS.

“I’m not forced to pay thousands of dollars to learn these things that are just out there,” he says. “That influenced a lot in my way of thinking. Since day one, everything that I coded and everything that I built, I always put the source code online for free. I never thought about other people stealing it. I thought other people could learn from it!”

A Typical Week At Thunderbird?

Alex manages the entire front-end team, which means a typical week involves lots of meetings. But each day starts at 7am when he rolls out of bed and walks his dog. Then, before enjoying some breakfast, he sits down and filters through hundreds of emails to see what he can contribute to. He points out that since we’re a global, remote team, it’s important for him to answer any European-based messages so that the senders have a response to wake up to.

Those meetings though? They definitely couldn’t just be an email. Right now, as Thunderbird 115 is being built, the team is looking at the entire user interface, every single pixel, every single tab, discussing what doesn’t work and what needs to be fixed. They’re evaluating various mockups to see what can be applied. They have regular design sessions and generally work toward making Supernova awesome.

After the meetings are over, Alex typically does patch reviews, then dedicates some time to coding.

And yes, those coding sessions have a very specific soundtrack: Eastern European, female-fronted heavy metal.

What Excites Alex Most About Thunderbird Supernova?

As the person heading up UX and UI design, you might think Alex is a little biased when it comes to naming his favorite Supernova feature. And you’d be right! But when pressed to name a very specific favorite feature about the upcoming Thunderbird 115 in July, Alex points to customization.

<figcaption class="wp-element-caption">In his own words: Is Thunderbird REALLY that customizable?</figcaption>

“A lot of our current users put Thunderbird on a pedestal for how customizable it is, but it’s actually not,” Alex exclaims. “Thunderbird is customizable in a manner that you can hide a panel, change some buttons in the toolbars, and change the theming.”

“But what if I want my reading list in my message pane to be like a vertical layout, with only three lines of preview text, or zero lines of preview text,” he asks. “What if I want my calendar to have tiny dot colors and not block colors? And I want to hide the notification icons, and not be shown which ones are recurring? What if I want to collapse my Saturday and Sunday to smaller chunks instead of hiding them entirely? What if I want to see the subject line larger, and I don’t even want to see the ‘From’ and ‘To’ labels? I don’t need to see those, it’s been like that for 20 years. Let’s hide them!”

Alex explains that he — and by extension the entire team — wants the Thunderbird experience to feel more personal, and more suited to what individual users want and expect. “That’s why we needed to rebuild it from scratch, but make it behave exactly like the old version,” he says.

“Now that we rebuilt, we can customize it much more, and we can offer the user much more flexibility to shape it the way they want.”

Alex Castellani, Thunderbird product design manager

How Can The Community Help You?

We’ve written previously about all the ways our community — and any open source project’s community — can contribute without knowing how to code. So I asked Alex what our community could do to have a direct, positive impact on his specific role at Thunderbird.


“What I love about our community is they are extremely involved and they care about everything that we do,” Alex says. “So in general, using more Beta and using more Daily. Be aware though that Daily is literally alpha software and could crash. But testing the operating system, integrations, the usability and accessibility of the whole interface is something that’s really important.”

The post Meet The Team: Alex Castellani, Product Design Manager appeared first on The Thunderbird Blog.

Wladimir PalantVeraport: Inside Korea’s dysfunctional application management

Note: This article is also available in Korean.

As discussed before, South Korea’s banking websites demand installation of various so-called security applications. At the same time, we’ve seen that these applications like TouchEn nxKey and IPinside lack auto-update functionality. So even in case of security issues, it is almost impossible to deliver updates to users timely.

And that’s only two applications. Korea’s banking websites typically expect around five applications, and different websites require different applications. That’s a lot of applications to install and to keep up-to-date.

Luckily, the Veraport application by Wizvera will take care of that. This application will automatically install everything necessary to use a particular website. And it will also install updates if deemed necessary.

Laptop with Veraport logo on the left, three web servers on the right. First server is labeled “Initiating server,” the arrow going from it to the laptop says “Get policy from banking.example.” Next web server is labeled “Policy server,” the arrow pointing from the laptop to it says “Installation policy?” and the arrow back “Install app.exe from download.example.” The final web server is labeled “Download server” and an arrow points to it from the laptop saying “Give me app.exe.”

If this sounds like a lot of power: that’s because it is. And so Veraport already made the news as the vehicle of an attack by North Korean hackers.

Back then everybody was quick to shift the blame to the compromised web servers. I now took a deeper dive into how Veraport works and came to the conclusion: its approach is inherently dangerous.

As of the Veraport version released on February 28, all the reported security issues seem to be fixed. Getting users to update will take a long time however. Also, the dangerous approach of allowing Veraport customers to distribute arbitrary software remains of course.

Summary of the findings

Veraport signs the policy files determining which applications are to be installed from where. While the cryptography here is mostly sane, the approach suffers from a number of issues:

  • One root certificate still used for signature validation uses MD5 hashing and a 1024-bit RSA key. Such certificates have been deprecated for over a decade.
  • HTTPS connections for downloads are not enforced. Even when HTTPS is used, the server certificate is not validated.
  • Integrity of downloaded files is not validated correctly. Application signature validation is trivially circumvented, and while hash-based validation is possible this functionality is essentially unused.
  • Even if integrity validation weren’t easily circumvented, Veraport leaves the choice to the user as to whether to proceed with a compromised binary.
  • Download and installation of an application can be triggered without user interaction and without any visible clues.
  • Individual websites (e.g. banking) are still responsible for software distribution and will often offer outdated applications, potentially with known security issues.
  • Each and every Veraport customer is in possession of a signing certificate that, if compromised, can sign arbitrary malicious policies.
  • There is no revocation mechanism to withdraw known leaked signing certificates or malicious policies.

In addition to that, Veraport’s local web server contains vulnerabilities amounting to persistent Cross-Site Scripting (XSS) among other things. It will expose the full list of the processes running on the user’s machine to any website asking. For security applications it will also expose the application version.

Finally, Veraport is also built on top of a number of outdated open-source libraries with known vulnerabilities. For example, it uses OpenSSL 1.0.2j (released 2016) for its web server and for signature validation. OpenSSL vulnerabilities are particularly well-documented: there are at least 3 known high-severity and 13 known moderate-severity vulnerabilities for this version.

The local web server itself is mongoose 5.5 (released in 2014). And parsing of potentially malicious JSON data received from websites is done via JsonCpp 0.5.0 (released 2010). Yes, that’s almost 13 years old. Yes, current version is JsonCpp 1.9.5 which has seen plenty of security improvements.

How banking websites distribute applications

Login websites of South Korean banks run JavaScript code from SDKs belonging to various so-called security applications. Each such SDK will first check whether the application is present on the user’s computer. If it isn’t, the typical action is redirecting the user to a download page.

Screenshot of a page titled “Install Security Program.” Below it the text “To access and use services on Busan Bank website, please install the security programs. If your installation is completed, please click Home Page to move to the main page. Click [Download Integrated Installation Program] to start automatica installation. In case of an error message, please click 'Save' and run the downloaded the application.” Below that text the page suggests downloading “Integrated installation (Veraport)” and five individual applications.

This isn’t the software vendor’s download page but rather the bank’s page. It lists all the various applications required and expects you to download them. Typically, the bank’s web server doubles as the download server for the application. Some of the software vendors don’t even have their own download servers.

So it probably comes as no surprise that all banks distribute different versions of the applications, often years behind the current release. Also, it’s very common to find an outdated and hopefully unused installation page. Downloading the application from this page will usually still work, but it will be up to a decade old.

For example, until a few weeks ago the Citibank Korea web page would distribute a TouchEn nxKey version from 2020, well behind the then-current release. But if you accidentally got the wrong download page, you would be downloading TouchEn nxKey from 2015.

And while the Busan Bank website for example claims to have software packages for Linux and macOS users, these either aren’t actually downloadable or turn out to be Windows software. The one Linux package which can be downloaded is from 2015 and relies on NPAPI, which isn’t supported by modern browsers.

Obviously, users cannot be expected to deal with this entire mess. And that’s why banks typically also offer something called “integrated installation.” This means downloading Wizvera Veraport application and letting it do everything necessary.

How Wizvera Veraport works

If you expect Veraport to know where to get the latest version of each application and when to update them: that’s of course not it. Instead, Veraport merely automates the task of installing applications. It does exactly what the user would do: downloads each application (usually from the bank’s servers) and runs the installer.


So your banking website will connect to Veraport’s local web server and send it a JSONP request. It will use a command like getAxInfo to make it download an installation policy from some (typically its own) website:

send_command("getAxInfo", {
  "configure": {
    "domain": "http://banking.example/",
    "axinfourl": "http://banking.example/wizvera/plinfo.html",
    "type": "normal",
    "language": "eng",
    "browser": "Chrome/",
    "forceinstall": "TouchEnNxKeyNo",
    /* … */
  }
});

This will make Veraport download and validate a policy file from http://banking.example/wizvera/plinfo.html which is essentially an XML file looking like this:

  <createDate>2022/02/18 17:04:35</createDate>
  <object type="Must">
    <objectMIMEType>file:%ProgramFiles(x86)%\RaonSecure\TouchEn nxKey\TKMain.dll</objectMIMEType>
    <downloadURL>TouchEn_nxKey_Installer_32bit_MLWS_nonadmin.exe /silence</downloadURL>
    …
  </object>
  <object type="Must">
    …
  </object>

This indicates that TouchEn nxKey is a required application. The objectMIMEType entry allows Veraport to recognize whether the application is currently installed. If it isn’t, the downloadURL and backupURL entries are to be used to download the installer.

Once this data is processed, the initiating website can tell Veraport to open its user interface.


What happens next depends on the type configuration parameter passed before. In “manage” mode Veraport will allow the user to choose which applications should be installed. Removing already installed applications is theoretically also possible but didn’t work when I tried. In other modes such as “normal” Veraport will automatically start downloading and installing whatever applications are considered necessary.

Protection against malicious policies

Veraport places no restrictions on initiating servers. Any website can communicate with its local web server, so any website can initiate software installation. What is stopping malicious websites from abusing this functionality to install malware then?

This time it isn’t (only) obfuscation. The policy file needs to be cryptographically signed. And the signer has to be verified by one of Veraport’s two certification authorities hardcoded in the application. So in theory, only Veraport customers can sign such policies, and malicious websites can only attempt to abuse legitimate policies.

Veraport then further restricts which websites are allowed to host such policies. That’s the allowedDomains entry in the policy file above. From what I could tell, the web address parsing here works correctly and doesn’t allow circumvention.

If the policy file contains relative paths under downloadURL and backupURL (very common), these are resolved relative to the location of the policy file. In principle, these security mechanisms combined make sure that even abusing a legitimate policy cannot trigger downloads from untrusted locations.
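The relative-path resolution described above follows standard URL semantics; a minimal sketch (illustrative code with example URLs and file names, not Veraport’s actual implementation):

```javascript
// Resolve a (possibly relative) downloadURL against the policy file's
// location, mirroring the behavior described above. Illustrative only.
function resolveDownloadUrl(policyUrl, downloadUrl) {
  return new URL(downloadUrl, policyUrl).href;
}

// A relative path inherits the policy file's origin, so whoever serves
// the policy file also controls where the installer comes from:
console.log(resolveDownloadUrl(
  "http://banking.example/wizvera/plinfo.html",
  "setup.exe"
)); // http://banking.example/wizvera/setup.exe
```

An absolute downloadURL would be used unchanged, which is why restricting policy hosting locations matters so much here.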

Holes in the protection

While the measures above provide some basic protection, there are many holes that malicious actors could abuse.

Lack of data protection in transit

Veraport generally does not enforce HTTPS connections. Any of the connections can use unencrypted HTTP, including software downloads. In fact, I’ve not seen a policy using anything other than unencrypted HTTP for the AhnLab Safe Transaction download. When connected to an untrusted network, this download could be replaced by malware.

It is no different with applications that are downloaded via a relative path. While a malicious website cannot (easily) manipulate a policy file, it can initiate a policy download via an unencrypted HTTP connection. All downloads indicated by relative paths will be downloaded via an unencrypted HTTP connection then.

Using HTTPS connections consistently wouldn’t quite solve the issue however. In my tests, Veraport didn’t verify server identity even for HTTPS connections. So even if application download were to happen from https://download.example, on an untrusted network a malicious server could pose as download.example and Veraport would accept it.

Overly wide allowedDomains settings

It seems that Wizvera provides their customers with a signing key, and these sign their policy files themselves. I don’t know whether these customers are provided with any guidelines on how to choose secure settings, particularly when it comes to allowedDomains.

Even looking at the Citibank example above, the * wildcard means that each and every subdomain can host the policy file. In connection with relative download paths, each subdomain has the potential to distribute malware. And I suspect that there are many such subdomains, some of which might be compromised e.g. via subdomain takeover. The other option is outright hacking a website, something that apparently already happened in 2020.

I’ve only looked at a few policy files, and this blanket whitelisting of the company’s domain is present in all of them. One was worse however: it also listed multiple IP ranges as allowed. Now I don’t know why a South Korean company would put IP ranges belonging to Abbott Laboratories and US Department of Defense on this list, but they did. And they also listed

Who has the signing keys?

Obviously, Wizvera customers can sign any policy file they like. So if they want to allow some website to install malicious.exe – there is nothing stopping them. They only need a valid signing certificate; the use of this certificate isn’t restricted to their own website. And the Wizvera website lists many customers:

A large list of company logos including Citibank in a section titled “Customers and Partners” in Korean.

Hopefully, all these customers realize the kind of power they have been granted and keep the private key of their signing certificate somewhere very secure. If this key falls into the wrong hands, it could be abused to sign malicious policies.

There are many ways for a private key to leak. The company could get hacked, something that happens way too often. They might leak data via an insufficiently protected source code repository or the like. A disgruntled employee might accept a bribe or straight out sell the key to someone malicious. Or some government might ask the company to hand over the key, particularly likely for multinational companies.

And if that happens, Wizvera won’t be able to do anything to prevent abuse. Even if a signing certificate is known to be compromised, the application has no certificate revocation mechanism. There is also no mechanism to block known malicious policies as long as these have a valid signature. The only way to limit the damage would be distributing a new Veraport version, a process that without any autoupdate functionality takes years.

The certification authorities

And that’s abuse of legitimate signing certificates. But what if somebody manages to create their own signing certificate?

That’s not entirely unrealistic, thanks to Veraport accepting two certification authorities:

Authority name Validity Key size Signature algorithm
axmserver 2008-11-16 to 2028-11-11 1024 MD5
VERAPORT Root CA 2020-07-21 to 2050-07-14 2048 SHA256

It seems that the older of the two certification authorities was used exclusively until 2020. Using a 1024 bit key and an MD5 signature was long deprecated at this point, with browsers having started to phase out such certification authorities a decade earlier. Yet here we are in the year 2023, and this certification authority is still accepted by Veraport.

Mind you, to my knowledge nobody has managed to factorize a 1024 bit RSA key yet. Neither did anyone succeed in generating a collision with a given MD5 signature. But both of these scenarios became realistic enough that browsers took preventive measures more than a decade ago.

And speaking of outdated technology, Microsoft’s requirements for certification authorities say:

Root certificates must expire no more than 25 years after the date of application for distribution.

The reason for this requirement: the longer a certification authority is around, the more outdated its technological base and the more likely it is to be compromised. So Veraport might want to reconsider its newer certification authority’s 30-year life span.

Combining the holes into an exploit

A successful Veraport exploit would be launched from a malicious website. When a user visits it, it would trigger installation of a malicious application without providing any clues to the user. That’s what my proof of concept demonstrated.

Using an existing policy file from a malicious website

As mentioned before, policy files have to be hosted by a particular domain. While this restriction isn’t easily circumvented, one doesn’t need to hack a banking website. Instead, I considered a situation where the network connection isn’t trusted, e.g. open WiFi.

If you connect to someone’s WiFi, they can direct you to their web page. It could look like a captive portal but in fact attempt to exploit Veraport, e.g. by using the existing signed policy file of some banking website.

And since they control the network, they can tell your computer that the policy file’s domain resolves to an IP address under their control. As mentioned above, Veraport won’t realize that this server isn’t the real one, no matter what.

So then the malicious website can trigger Veraport’s automatic installation:

A window titled “Wizvera updater” saying: “DelphinoG3 is installing…”

Running a malicious binary

As mentioned before, relative download paths are resolved relative to the policy file. So that “DelphinoG3” application is downloaded from the same host as the policy file, meaning that it actually comes from a server controlled by the attacker.

But a malicious application won’t install, at least not immediately:

A message box saying: It’s wrong signature for DelfinoG3 [0x800B0100,0x800B0100], Disregards this error and continues a progress?

With this cryptic message, chances are good that the user will click “OK” and allow the malicious application to execute. That’s why security-sensitive decisions should never be left to the user. But what signature is this message even referring to?

The policy files have the option to do hash-based verification for the downloads. But for every website I checked this functionality is unused:


So this is rather about regular code signing. And code-signed malware isn’t unheard of.

But wait, the signature doesn’t even have to be valid! I tried uploading a self-signed executable to the server. And Veraport allowed this one to run without any complaints!

Console window titled “delfino-g3.exe”, on top of it a message box saying “Hi.”

I know, my minimal “malicious” application isn’t very impressive. But real malware would dig deep down into the system at this point. Maybe it would start encrypting files, maybe it would go spying on you, or maybe it would “merely” wait for your next banking session in order to rob you of all your money.

Keep in mind that this application is now running elevated and can change anything in the system. And the user didn’t even have to accept an elevation prompt (one that would warn about the application’s invalid signature) – Veraport itself is running elevated to avoid displaying elevation prompts for each individual installer.

Removing visual clues

Obviously, the user might grow concerned when confronted with a Veraport installation screen out of the blue. Luckily for the attackers, that installation screen is highly configurable.

For example, a malicious website could pass in a configuration like the following:

  send_command("getAxInfo", {
    "configure": {
      "type": "normaldesc",
      "addinfourl": "http://malicious.example/addinfo.json",
      …
    }
  });

The addinfo.json file can change names and descriptions for the downloads arbitrarily, making certain that the user doesn’t grow too suspicious:

Screenshot of the Veraport window listing a number of applications with names like “Not TouchEn nxKey” and “Not IPinside LWS.” Description is always “Important security update, urgent!”

But manipulating texts isn’t even necessary. The bgurl configuration parameter sets the background image for the entire window. What if this image has the wrong size? Well, the Veraport window will be resized accordingly. And if it is a 1x1 pixel image? Well, an invisible window it is. Mission completed: no visual clues.

Information leak: Local applications

One interesting command supported by the Veraport server is checkProcess: give it an application name, and it returns information on the process if this application is currently running. And what does it do if given * as application name?

let processes = send_command("checkProcess", "*");

Well, output of a trivial web page:

The processes running on your computer, followed by a list of process names and identifiers, e.g. 184 msedgewebview2.exe

That’s definitely a more convenient way of learning what applications the user is running than the complicated search via IPinside.

For security applications, the getPreDownInfo command provides additional information. It will process a policy file and check which applications are already installed. By taking the policy files from multiple websites I got a proof of concept that would check a few dozen different applications:

Following security applications have been found: DelfinoG3-multi64, ASTx-multi64, TouchEnNxKey-multi64, UriI3GM-multi64, ASTx-multi64, INISAFEWebEX-multi64, MAGIC-PKI 22.0.8811.0

With this approach even producing version numbers, it is ideal for malicious websites to prepare their attack: find out which outdated security software the user has installed and choose the vulnerabilities to exploit.

Web server vulnerabilities

HTTP Response Splitting

As we’ve seen, Veraport’s local web server responds to a number of commands. But it also has a redirect endpoint that sends you on to whatever address is passed as a parameter. No, I don’t know what this is being used for.

Testing this endpoint showed that no validation of the redirect address is being performed. Veraport will even happily accept newline characters in the address, resulting in HTTP Response Splitting, a vulnerability class that has gone almost extinct since all libraries generating HTTP responses started prohibiting newline characters in header names or values. But Veraport isn’t using any such library.

So a redirect request with an encoded newline followed by Cookie: a=b in the redirect address will result in the response:

HTTP/1.1 302 Found
Cookie: a=b

We’ve successfully injected an HTTP header to set a cookie for the local server’s origin. And by rendering the Location header invalid, one can even prevent the redirect and serve arbitrary content instead. A suitably crafted request will result in the response:

HTTP/1.1 302 Found
Content-Type: text/html


Google Chrome will in fact run this script in the context of the local server’s origin, so we have a reflected Cross-site scripting (XSS) vulnerability here. Mozilla Firefox on the other hand protects against such attacks – content of a 302 response is never rendered.
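The splitting mechanics can be reproduced with a toy response builder (illustrative code, not Veraport’s actual server):

```javascript
// A naive server interpolates the redirect target verbatim into a header.
function buildRedirect(location) {
  return `HTTP/1.1 302 Found\r\nLocation: ${location}\r\n\r\n`;
}

// A newline in the redirect address injects an extra header line:
const response = buildRedirect("http://example.com\r\nCookie: a=b");
console.log(response.includes("\r\nCookie: a=b\r\n")); // true

// The standard mitigation, applied by virtually all HTTP libraries:
// reject CR/LF characters in header values outright.
function buildRedirectSafe(location) {
  if (/[\r\n]/.test(location)) throw new Error("newline in header value");
  return `HTTP/1.1 302 Found\r\nLocation: ${location}\r\n\r\n`;
}
```

This CR/LF check is exactly what common HTTP response libraries enforce, which is why this vulnerability class has become so rare.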

Persistent XSS via a service worker

Reflected XSS on the local server isn’t very useful to potential attackers. So maybe we can turn it into persistent XSS? That’s possible by installing a service worker.

The hurdles for installing a service worker are intentionally being set very high. Let’s see:

  • HTTPS has to be used.
  • Code has to be running within the target scope.
  • Service worker file needs to use JavaScript MIME type.
  • Service worker file has to be within the target scope.

Veraport uses HTTPS for the local web server for some reason, and we’ve already found a way to run code in that context. So the first two conditions are met. As to the other two, it should be possible to use HTTP Response Splitting to get JavaScript code with a valid MIME type. But there is a simpler way.

The Veraport server communicates via JSONP, remember? So a request passing {} as data and hi as the callback name results in a JavaScript file like the following:

hi({"res": 1});

The use of JSONP is discouraged and has been discouraged for a very long time. But if an application needs to use JSONP, it is recommended that it validates the callback name, e.g. only allowing alphanumerics.
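Such a whitelist check takes only a couple of lines; a sketch of a safe JSONP responder (illustrative, not Wizvera’s code):

```javascript
// Render a JSONP response, but only for harmless callback names.
function renderJsonp(callback, payloadJson) {
  // Allow alphanumerics and underscores only; a name like
  // "alert(document.domain)//" is rejected instead of echoed into JS.
  if (!/^[A-Za-z0-9_]+$/.test(callback)) {
    throw new Error("invalid callback name");
  }
  return `${callback}(${payloadJson});`;
}

console.log(renderJsonp("hi", '{"res": 1}')); // hi({"res": 1});
```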

Guess what: Veraport performs no such validation. Meaning that a malicious callback name can inject arbitrary code into this JavaScript data. For example, the callback name alert(document.domain)// would result in the following JavaScript code:

alert(document.domain)//hi({"res": 1});

And there is the JavaScript file with arbitrary code which can be registered as a service worker. In my proof of concept, the service worker would handle all requests to the local server. It would then serve up a phishing page:

Screenshot of the browser window titled “Citibank Internet Banking,” with the local server as page address. The page is a login page titled “Welcome. Please Sign On. More secure local sign-in! Brought you by Veraport.”

This is what you would see at that address no matter how you got there. A service worker persists and handles requests until it is unregistered or replaced by another service worker, surviving even browser restarts and clearing browser data.

Reporting the issues

I’ve compiled the issues I found into six reports in total. As with other South Korean security applications, I submitted these reports via the KrCERT vulnerability report form.

While this form is generally unreliable and will often produce an error message, this time it outright refused to accept the two most important reports. Apparently, something in the text of the reports triggered a web application firewall.

I tried tweaking the text, to no avail. I even compiled a list of words present only in these reports but not in my previous reports, still no luck. In the end, I used the form on December 3rd, 2022 to send in four reports, and asked via email about the remaining two.

Two days later I received a response asking me to submit the issues via email which I immediately did. This response also indicated that my previous reports were received multiple times. Apparently, each time the vulnerability submission form errors out, it actually adds the report to the database and merely fails sending email notifications.

On January 5th, 2023 KrCERT notified me about forwarding my reports to Wizvera – at least the four submitted via the vulnerability form. As to the reports submitted via email, for a while I was unsure whether Wizvera received those as I received no communication on those.

But this wasn’t the last I heard from KrCERT. On February 6th I received an unsolicited email from them inviting me to a bug bounty program:

Screenshot of a Korean-language email from KrCERT. The “To” field contains a list of censored email addresses.

Yes, the email addresses of all recipients were listed in the “To” field. They leaked the email addresses of 740 security researchers here.

According to the apology email which came two hours later they actually made the same mistake in a second email as well, with a total of 1,490 people affected. This email also suggested a mitigation measure: reporting the incident to KISA, the government agency that KrCERT belongs to.

What is fixed

I did not receive any communication from Wizvera, but I accidentally stumbled upon their download server. And this server had a text file with a full change history. That’s how I learned that a new Veraport version is out:

2864,3864 (2023-01-26) 취약점 수정적용 (vulnerability fixes applied)

In my tests, this Veraport version resolved some issues but not others. In particular, installing a malicious application still worked with a minimal change – one merely had to download the policy via an HTTP rather than an HTTPS connection.

So on February 22 I sent an email to KrCERT asking them to forward my comments to Wizvera, which they did on the same day. As a result, various additional changes have been implemented in Veraport and released on February 28 according to the change history.

Altogether, all the directly exploitable issues seem to have been addressed. In particular, server identity is now being validated for HTTPS connections. Also, Veraport will automatically upgrade HTTP downloads to HTTPS. So untrusted networks can no longer mess with installations.
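The HTTP-to-HTTPS upgrade amounts to rewriting the URL scheme before downloading; a sketch of the idea (illustrative code with an example URL, not Veraport’s actual implementation):

```javascript
// Upgrade plain-HTTP download addresses to HTTPS before fetching,
// so an on-path attacker can no longer swap out the installer.
function upgradeToHttps(url) {
  const parsed = new URL(url);
  if (parsed.protocol === "http:") {
    parsed.protocol = "https:";
  }
  return parsed.href;
}

console.log(upgradeToHttps("http://download.example/app.exe"));
// https://download.example/app.exe
```

Note that this only helps because server identity is now being validated as well; without that check, the upgraded connection could still be intercepted.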

Window size is no longer determined by the background image, so the application window can no longer be hidden this way. Websites also cannot change application descriptions any more.

The redirect endpoint has been removed from Veraport’s local server, and the JSONP endpoint now restricts callback names to a set of allowed characters.

OpenSSL has been updated to version 1.0.2u in one Veraport branch and to version 1.1.1t in the other. The latter is actually current and has no known vulnerabilities.

According to the changelog, JsonCpp 0.10.7 is being used now. While this version has been released in 2016, using newer versions should be impossible as long as the application is being compiled with Visual Studio 2008.

Veraport also addressed the issues mentioned in my blog post on TLS security. The certification authority is being generated during installation now. In addition to that, the application allows communicating on port 16116 without TLS.

Interestingly, I also learned that abusing checkProcess to list running processes is a known issue which has been resolved in 2021 already:

2860, 3860 (2021-10-29)
- checkProcess 가 아무런 결과를 리턴하지 않도록 수정 (modified checkProcess so that it no longer returns any results)

In my defense: when I tested Veraport, there was no way of telling what the current version was. Even on 2023-03-01, a month after Wizvera presumably notified their customers about a release fixing security issues, only three out of ten websites (chosen by their placement in the Google results) offered the latest Veraport version for download. That didn’t mean that existing users updated, merely that users got this version if they decided to reinstall Veraport. But even by this measure, seven out of ten websites were lagging behind by years.

[Table: Veraport version offered for download by ten websites, by release year of the offered version: 2021, 2023, 2021, 2021, 2020, 2023, 2013, 2021, 2020, 2023]

Remaining issues

Application signature validation was still broken in the first fixed Veraport version. Presumably, that’s still the case in the latest version, but verifying is complicated. This is no longer a significant issue since the connection integrity can be trusted now.

While checkProcess is no longer available, the getPreDownInfo command is still accessible in the latest Veraport version. So any website can still see what security applications are installed. Merely the version numbers have been censored and are no longer usable.

It seems that even the latest Veraport version still uses the eight-year-old mongoose 5.5 library for its local web server; this one hasn’t been upgraded.

None of the conceptual issues have been addressed of course, these are far more complicated to solve. Veraport customers still have the power to force installation of arbitrary applications, including outdated and malicious software. And they aren’t restricted to their own website but can sign a policy file for any website.

A compromised signing certificate of a Veraport customer still cannot be revoked, and neither is it possible to revoke a known malicious policy. Finally, the outdated root certificate (1024 bits, MD5) is still present in the application.

Niko Matsakis: Trait transformers (send bounds, part 3)

I previously introduced the “send bound” problem, which refers to the need to add a Send bound to the future returned by an async function. This post continues my tour of the available solutions. This post covers “Trait Transformers”. This proposal arose from a joint conversation between myself, Eric Holk, Yoshua Wuyts, Oli Scherer, and Tyler Mandry. It’s a variant of Eric Holk’s inferred async send bounds proposal as well as the work that Yosh/Oli have been doing in the keyword generics group. Those posts are worth reading as well, lots of good ideas there.1

Core idea: the trait transformer

A transformer is a way for a single trait definition to define multiple variants of that trait. For example, where T: Iterator means that T implements the Iterator trait we know and love, T: async Iterator means that T implements the async version of Iterator. Similarly, T: Send Iterator means that T implements the sendable version of Iterator (we’ll define both the “sendable version” and “async version” more precisely, don’t worry).

Transformers can be combined, so you can write T: async Send Iterator to mean “the async, sendable version”. They can also be distributed, so you can write T: async Send (Iterator + Factory) to mean the “async, sendable” version of both Iterator and Factory.

There are 3 proposed transformers:

  • async
  • const
  • any auto trait

The set of transformers is defined by the language and is not user extensible. This could change in the future, as transformers can be seen as a kind of trait alias.

The async transformer

The async transformer is used to choose whether functions are sync or async. It can only be applied to traits that opt-in by specifying which methods should be made into sync or async. Traits can opt-in either by declaring the async transformer to be mandatory, as follows…

async trait Fetch {
    async fn fetch(&mut self, url: Url) -> Data;
}

…or by making it optional, in which case we call it a “maybe-async” trait…

#[maybe(async)]
trait Iterator {
    type Item;

    #[maybe(async)]
    fn next(&mut self) -> Self::Item;

    fn size_hint(&self) -> Option<(usize, usize)>;
}

Here, the trait Iterator is the same Iterator we’ve always had, but async Iterator refers to the “async version” of Iterator, which means that it has an async next method (but still has a sync method size_hint).

(For the time being, maybe-async traits cannot have default methods, which avoids the need to deal with “maybe-async” code. This can change in the future.)

Trait transformer as macros

You can think of a trait transformer as being like a fancy kind of macro. When you write a maybe-async trait like Iterator above, you are effectively defining a template from which the compiler can derive a family of traits. You could think of the #[maybe(async)] annotation as a macro that derives two related traits, so that…

#[maybe(async)]
trait Iterator {
    type Item;

    #[maybe(async)]
    fn next(&mut self) -> Self::Item;

    fn size_hint(&self) -> Option<(usize, usize)>;
}

…would effectively expand into two traits, one with a sync next method and one with an async version…

trait Iterator { fn next(&mut self) -> Self::Item; ... }
trait AsyncIterator { async fn next(&mut self) -> Self::Item; ... }

…and when you have a where-clause like T: async Iterator, the compiler would transform it to T: AsyncIterator. In fact, Oli and Yosh implemented a procedural macro crate that does more-or-less exactly this.

The idea with trait transformers though is not to literally do expansions like the ones above, but rather to build those mechanisms into the compiler. This makes them more efficient, and also paves the way for us to have code that is generic over whether or not it is async, or expand the list of modifiers. But the “macro view” is useful to have in mind.

Always async traits

When a trait is declared like async trait Fetch, it only defines an async version, and it is an error to request the sync version like T: Fetch, you must write T: async Fetch.

Defining an async method without being always-async or maybe-async is disallowed:

trait Fetch {
    async fn fetch(&mut self, url: Url) -> Data; // ERROR
}

Forbidding traits of this kind means that traits can move from “always async” to “maybe async” without a breaking change. See the frequently asked questions for more details.

The const transformer

The const transformer works similarly to async. One can write

trait Compute {
    #[maybe(const)]
    fn a(&mut self);

    fn b(&mut self);
}

and then if you write T: const Compute it means that a must be a const fn but b need not be. Similarly one could write const trait Compute to indicate that the const transformer is mandatory.

The auto-trait transformer

Auto-traits can be used as a transformer. This is permitted on any (maybe) async trait or on traits that explicitly opt-in by defining #[maybe(Send)] variants. The default behavior of T: Send Foo for some trait Foo is that…

  • T must be Send
  • the future returned by any async method in Foo must be Send
  • the value returned by any RPITIT method must be Send2

Per these rules, given:

#[maybe(async)]
trait Iterator {
    type Item;

    #[maybe(async)]
    fn next(&mut self) -> Self::Item;
}

writing T: async Send Iterator would be equivalent to:

  • T: async Iterator<next(): Send> + Send

using the return type notation.

The #[maybe(Send)] annotation can be applied to associated types or functions…

trait IntoIterator {
    #[maybe(Send)]
    type IntoIter;

    type Item;
}

…in which case writing T: Send IntoIterator would expand to T: IntoIterator<IntoIter: Send> + Send.

Frequently asked questions

How is this different from eholk’s Inferred Async Send Bounds?

Eric’s proposal was similar in that it permitted T: async(Send) Foo as a similar sort of “macro” to get a bound that included Send bounds on the resulting futures. In that proposal, though, the “send bounds” were tied to the use of async sugar, which means that you could no longer consider async fn to be sugar for a function returning -> impl Future. That seemed like a bad thing, particularly since the explicit -> impl Future syntax is the only way to write an async fn that doesn’t capture all of its arguments.

How is this different from the keyword generics post?

Yosh and Oli posted a keyword generics update that included notation for “maybe async” traits (they wrote ?async) along with some other things. The ideas in this post are very similar to those, the main difference is treating Send as an independent transformer, similar to the previous question.

Should the auto-trait transformer be specific to each auto-trait, or generic?

As written, the auto-trait transformer is specific to a particular auto-trait, but it might be useful to be able to be generic over multiple (e.g., if you are maybe Send, you likely want to be maybe Send-Sync too, right?). You could imagine writing #[maybe(auto)] instead of #[maybe(Send)], but that’s kind of confusing, because an “always-auto” trait (i.e., an auto trait like Send) is quite a different thing from a “maybe-auto” trait (i.e., a trait that has a “sendable version”). OTOH users can’t define their own auto traits and likely will never be able to. Unclear.

Why make auto-trait transformer be opt-in?

You can imagine letting T: Send Foo mean T: Foo + Send for all traits Foo, without requiring Foo to be declared as maybe(Send). The problem is that this would mean that customizing the Send version of a trait for the first time is a semver breaking change, and so must be done at the same time the trait is introduced. This implies that no existing trait in the ecosystem could customize its Send version. Seems bad.

Will you permit async methods without the async transformer? Why or why not?

No. The following trait…

trait Http {
    async fn fetch(&mut self); // ERROR
}

…would get an error like “cannot use async in a trait unless it is declared as async or #[maybe(async)]”. Ensuring that people write T: async Http and not just T: Http means that the trait can become “maybe async” later without breaking those clients. Otherwise, people would have to remember (when writing async code) whether a trait is “maybe async” or “always async” so they would know whether to write T: async Http (for maybe-async traits) or T: Http (for always-async ones). This way, if the trait has async methods, you write async.

Why did you label methods in a #[maybe(async)] trait as #[maybe(async)] instead of async?

In the examples, I wrote maybe(async) traits like so:

#[maybe(async)]
trait Iterator {
    type Item;

    #[maybe(async)]
    fn next(&mut self) -> Self::Item;
}

Personally, I rather prefer the idea that inside a #[maybe(async)] block, you define the trait as it were always async…

#[maybe(async)]
trait Iterator {
    type Item;

    async fn next(&mut self) -> Self::Item;
}

…but then the async gets removed when used in a sync context. However, I changed it because I couldn’t figure out the right way to permit #[maybe(Send)] in this scenario. I can also imagine that it’s a bit confusing to write async fn when you mean “maybe async”.

Why use an annotation (#[..]) like #[maybe(async)] instead of a keyword?

I don’t know, because ?async is hard to read, and we’ve got enough keywords? I’m open to bikeshedding here.

Do we still want return type notation?

Yes, RTN is useful for giving more precise specification of which methods should return send-futures (you may not want to require that all async methods are send, for example). It’s also needed internally by the compiler anyway as the “desugaring target” for the Send transformer.

Can we allow #[maybe] on types/functions?

Maybe!3 That’s basically full-on keyword generics. This proposal is meant as a stepping stone. It doesn’t permit code or types to be generic over whether they are async/send/whatever, but it does permit us to define multiple versions of a trait. To the language, it’s effectively a kind of macro, so that e.g. a single trait definition #[maybe(async)] trait Iterator effectively defines two traits, Iterator and AsyncIterator, and the T: async Iterator notation is being used to select the second one. (This is only an example, I don’t mean that users would literally be able to reference an AsyncIterator trait.)

What order are transformers applied?

Transformers must be written according to this grammar

Trait := async? const? Path* Path

where x? means optional x, x* means zero or more x, and the traits named in Path* must be auto-traits. The transformers (if present) are applied in order, so first things are made async, then const, then sendable. (I’m not sure if both async and const make any sense?)

Can auto-trait transformers let us generalize over Rc/Arc?

Yosh at some point suggested that we could think of “send” or “not send” as another application of keyword generics, and that got me very excited. It’s a known problem that people have to define two versions of their structs (see e.g. the im and im-rc crates). Maybe we could permit something like

struct Shared<T> {
    /* either Rc<T> or Arc<T>, depending */
}

and then permit variables of type Shared<u32> or Send Shared<u32>. The keyword generics proposals are already exploring the idea of structs whose types vary depending on whether they are async or not, so this fits in.


This post covered “trait transformers” as a possible solution to the “send bounds” problem. Trait transformers are not exactly an alternative to the return type notation proposed earlier; they are more like a complement, in that they make the easy cases easy, while effectively providing a convenient desugaring to uses of return type notation.

The full set of solutions thus far are…

  • Return type notation (RTN)
    • Example: T: Fetch<fetch(): Send>
    • Pros: flexible and expressive
    • Cons: verbose
  • eholk’s inferred async send bounds
    • Example: T: async(Send) Fetch
    • Pros: concise
    • Cons: specific to async notation, doesn’t support -> impl Future functions; requires RTN for completeness
  • trait transformers (this post)
    • Example: T: async Send Fetch
    • Pros: concise
    • Cons: requires RTN for completeness
  1. I originally planned to have part 3 of this series simply summarize those posts, in fact, but I consider Trait Transformers an evolution of those ideas, and close enough that I’m not sure separate posts are needed. 

  2. It’s unclear if Send Foo should always convert RPITIT return values to be Send, but it is clear that we want some way to permit one to write -> impl Future in a trait and have that be Send iff async methods are Send

  3. See what I did there? 

Mozilla Performance Blog: Ancient Bug Discovered in the Visual Metrics Processing Script

Recently, we had an odd regression in our page load tests. You can find that bug here. The regression was quite large, but when a developer investigated, they found that the regression looked more like an improvement.

Below you can see an example of this; note how the SpeedIndex of the Before video shows up before the page is loaded. You can also see that in the After video the page seems to load faster (fewer frames with gray boxes) even though we have a higher SpeedIndex.


Side-by-side of a regression that is actually an improvement



These are side-by-side videos that were recently introduced by Alexandru Ionescu; you can read more about them here. They are very useful when you’re trying to visually understand, or find, the exact change that might have caused a regression or improvement. I used this command to generate them locally:

./mach perftest-tools side-by-side \
  --base-revision 10558e1ac8914875aae6b559ec7f7eba667a77a7 \
  --new-revision 57727bb29f591c86fd30724b9ec8ebcb4417a3e2 \
  --base-platform test-linux1804-64-shippable-qr/opt \
  --test-name browsertime-tp6-firefox-youtube


When I ran the visual metrics processing for these videos locally with verbose mode enabled (with -vvvv) I found that the visual progress in the Before video jumped immediately to 49% on the first visual change! You can find those numbers in this bug comment.

I spent some time looking into the bit of code that calculates the progress percentage above on a frame-by-frame basis. This progress is calculated as the difference between the current frame’s histogram and the final frame’s histogram, using the initial (start) frame as the baseline of the progress measurement. The original code takes an optimized approach to calculating the differences of the histograms that only considers channel intensity values that changed. It’s difficult to say if this is worthwhile given that we generally have a small number of frames to process at this point. These frames have been de-duplicated, so at this stage we only have unique frames that should be visually different from each other (when looked at sequentially). That said, this isn’t the issue. The actual issue lies in the usage of a “slop”, another word for fuzz.

In other areas of the code we find fuzz being used extensively, and for good reason. From what I’ve been able to find, this variance in our pixel intensities comes from an archiving process that the videos undergo. Regardless of the exact reason(s), it needs to be handled appropriately in the processing script. However, this variance is primarily visible in the pixel intensities. If we break apart the pixel intensity into its components and analyze them separately, then we’ll find the fuzz appearing as some X% variance between the histograms. If we assume a 10% variance in pixel intensities, then this means we would need to incorporate ±10% of the histogram pixel values (or bins) around a given pixel value. But what portion of those ±10% of histogram bins should be incorporated? Should the entire range be incorporated? Should we only look at the minimum between the current value and the other value? The latter is what the original visual metrics processing script has chosen to do.

While it’s a reasonable approach, some questions arise about the validity of the results: how do you know the bins you’re looking at have variance, and how do you know you’ve incorporated the correct amount of the other bins? However, the biggest, and in my opinion most illuminating, question for why using a fuzz-based methodology here is highly problematic is: how do you know you’re not mixing colours together? For example, a pure red colour with rgb(240, 0, 0) would end up being mixed with a light grey with rgb(245, 245, 245) in the original implementation. If we’re currently mixing up large ranges of values like this, then the fuzz implementation isn’t working as intended, and that’s what we were seeing in the video above. The grey boxes are mixing with the other colours/bins to produce a higher progress percentage at an earlier time, which gives us a lower overall SpeedIndex score. The proper way of having a fuzz at this stage would be to look at the actual individual pixel intensities instead of breaking them up into channel components. Given that that would be a difficult change to make, since it would involve changing how the histograms work, I implemented a simpler approach that only takes the differences of the histograms and doesn’t incorporate any fuzz into the implementation. I based it on an implementation of frame progress in the speedline JS module.
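The exact-difference idea can be sketched as follows. This is a hedged illustration of the speedline-style calculation, not the actual Browsertime code; the function name and the flat-list histogram format are mine.

```python
def frame_progress(start_hist, current_hist, end_hist):
    """Visual progress of a frame, with no fuzz applied.

    Each histogram is a flat list of per-bin pixel counts. Progress is
    the fraction of the start->end histogram distance already covered.
    """
    # Total distance between the first and last frames' histograms.
    total = sum(abs(e - s) for s, e in zip(start_hist, end_hist))
    if total == 0:
        return 100.0  # start and end frames are identical
    # Distance still remaining between this frame and the last frame.
    remaining = sum(abs(e - c) for c, e in zip(current_hist, end_hist))
    return 100.0 * (1 - remaining / total)

start = [100, 0, 0]  # e.g. all pixels in bin 0 (gray boxes)
end   = [0, 0, 100]  # all pixels in bin 2 (final page)
mid   = [50, 0, 50]  # halfway between the two
print(frame_progress(start, mid, end))  # 50.0
```

Because each bin is compared only with itself, a gray bin can never "lend" progress to a nearby red bin, which is exactly the mixing problem the fuzz-based version ran into.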

Locally, I was able to fix the regression, and the results looked much better. The change has landed in the main Browsertime repository and it’s also been fixed in mozilla-central. In the process of performing the update, I ran a performance metric comparison using ./mach try perf to determine what kind of changes we could expect from this and found that overall we should see a reduction in the variance of our metrics. This fix will also regress most of the metrics since the progress is going to be lower at an earlier time in most, if not all, tests.

Something interesting to note is that this ancient bug has existed for over 8 years, or since the beginning of time relative to the file! It was introduced in the first commit that added the visual-metrics processing code here. While this fix is a small change, it has a large impact because all of the *Index metrics depend on this progress. We hope to continue finding improvements like this, or target some larger ones in the future such as reworking how the fuzz factor is used across the script since it also produces issues in other areas.

If you have any questions about this work, feel free to reach out to us in the #perftest channel on Matrix!



The Mozilla Blog: How to talk to kids about finding community online

An illustration shows icons for a book, a paintbrush, angle brackets and a video game joystick. Credit: Nick Velazquez

Charnaie Gordon holds up a book in front of a bookshelf. The book cover reads: "The world was hers for the reading."
Charnaie Gordon is the content creator behind Here Wee Read, a blog that advocates for diversity and inclusivity in children’s literature. She’s also the author of several children’s books, including “Lift Every Voice and Change: A Celebration of Black Leaders and the Words that Inspire Generations.” You can follow her on Instagram.

I’ve always attributed my love of reading to Oprah Winfrey. I used to rush home from school so I could watch her daytime talk show at 4 o’clock. She didn’t have a book club back then, but I remember her talking about her love of reading and how books were her escape as a child.

Seeing someone who looked like me with such a huge platform was powerful. It’s part of  why I wanted to share my own love of reading through my blog, Here Wee Read. The internet helped me find not only a platform, but a community of readers who are as passionate about diverse literature as I am. Now that I have two young children, I’m showing them that while the internet isn’t perfect, it’s a great place to find inspiration, make connections and grow their worlds. 

I want parents to know that the internet is not a thing to be feared, but a place where everyone – even our kids – can find community. 

The internet is a great connector

I started Here Wee Read in 2015 with no intentions of being a content creator or an influencer. I did it because I wanted to share my love of books that featured diverse characters and stories with others. 

I wanted to help make sure that others – no matter their color, background or ability – feel included in literature that we see on bookshelves.

Turns out that I wasn’t alone. Over the past seven years, my blog, as well as social media, has connected me with so many amazing and talented people around the world whom I wouldn’t have met otherwise. This led me to become a children’s book author and editor myself: In 2019, a publisher approached me to co-author a picture book, “A Friend Like You.” Since then, I’ve published five picture books, with more on the way in 2023 and 2024.

How we use the internet as a family

I’ve been reading out loud to my children daily since they were born, and I’m proud that I’ve imparted the same love of reading in them. In the summer of 2018, when my kids were four and five, our family thought about how we could help put more inclusive books on shelves across America. We came up with the idea of donating books to each state. Soon, what started as a family project became a nonprofit organization called 50 States 50 Books, which has been providing free diverse books around the country.

My kids ship books for our nonprofit. They also create social media content and reply to messages on our accounts (all with my guidance of course). Not only have my kids helped create a literary community online, they’re practicing their communication and business skills too. 

How to engage your kids

These days, the internet is a constant part of our lives, including our children. My experience has taught me that it’s full of opportunities to build community.

If your kids are turning to the internet more than you’d like, don’t focus on what can go wrong. Think about the opportunities instead. It’s likely that children are exploring their interests online, and finding others who share them. Talk to your kids about what they like to do on the internet and ask questions.

Are they making new friends outside of their school? Are they learning something new? Can they join an online group or club to develop their passions or skills? Do they want to support a cause they care about? 

Show you are genuinely interested in understanding how they want to spend their time. Discuss specific content they may see or create.

Once you understand the value of the internet to your child, define boundaries together – whether that’s screen time limits or what they could or couldn’t share online. Use this time as an opportunity to express that when done responsibly, using the internet isn’t just fun but can be used to make the world a better place. 

Acknowledge the challenges of being online

Like anything else, being online has pitfalls. It helps to discuss these with your kids, too. 

Older children who constantly post online can feel the pressure to keep it up. This can lead to burnout, loss of creativity and mental health challenges. Stress the importance of stepping away and taking breaks when they need to. 

Help them understand that what they post can have consequences for themselves and others. Being online can make people targets for hurtful messages or cyberbullying. So while the internet is a good place to express ourselves, acknowledge that the internet isn’t perfect. In my experience, one of the best things parents and caregivers can do is address any problematic encounters, behavior or content as soon as possible. Remind them to always be kind. 

There are risks associated with using the internet, but knowing how to navigate it safely can help your young ones build resilience. 

How to talk to kids about finding community online

Encourage creativity. There are many free apps that let kids express their creativity – whether through music, photos, videos, coloring, coding, making funny voices or creating their own movies. Their creations can lead them to like-minded peers both online and offline. 

Talk about their dreams. Regardless of our ambitions for our children or their own goals, technology will likely be a big part of their adult lives. Set boundaries and teach them about important things like online privacy, but also give them the freedom to explore and make connections based on their passions. Let them troubleshoot, emphasizing that you’re there for them when they need your help. Kids can feel empowered to learn from the treasure trove of resources (including brilliant minds from around the world) that is the internet, and these can help them become the adults they want to be.

Discuss ways to embody your values as a family online. As a parent, I like to set an example for my children. I show them that whatever I do online reflects my values, so I use my platform to spread awareness for causes I care about. I include my kids in my efforts and together, we connect with other families that have similar beliefs and values.

Stepping foot into the online world and finding community there can be exciting for children. Instead of instilling fear, stay engaged as a parent or caregiver. The internet can be better, but that requires efforts from people who want to create a better world. Let your children be a part of that.

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

Talk to your kids about online safety

Get tips

The post How to talk to kids about finding community online appeared first on The Mozilla Blog.

Spidermonkey Development Blog: JavaScript Import maps, Part 2: In-Depth Exploration

We recently shipped import maps in Firefox 108, and you might be wondering what this new feature is and what problems it solves. In the previous post we introduced what import maps are and how they work; in this article we are going to explore the feature in depth.

Explanation in-depth

Let’s explain the terms first. The string literal "app.mjs" in the above examples is called a Module Specifier in ECMAScript, and the map which maps "app.mjs" to a URL is called a Module Specifier Map.

An import map is an object with two optional items:

  • imports, which is a module specifier map.
  • scopes, which is a map of URLs to module specifier maps.

So an import map could be thought of as

  • A top-level module specifier map called “imports”.
  • A map of module specifier maps called “scopes”, which can override the top-level module specifier map according to the location of the referrer.

If we put it into a graph

Module Specifier Map:
  | Module Specifier |      URL        |
  |  ......          | ......          |
  |  foo             | |

Import Map:
  imports: Top-level Module Specifier Map
    | URL        | Module Specifier Map   |
    | /          | ...                    |

  scopes: Sub-directories Module Specifier Map
    | URL        | Module Specifier Map   |
    | /subdir1/  | ...                    |
    | /subdir2/  | ...                    |

Validation of entries when parsing the import map

The format of the import map text has some requirements:

  • A valid JSON string.
  • The parsed JSON string must be a JSON object.
  • The imports and scopes must be JSON objects as well.
  • The values in scopes must be JSON objects since they should be the type of Module Specifier Maps.

Failing to meet any one of the above requirements will result in a failure to parse the import map, and a SyntaxError/TypeError will be thrown.1
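The four requirements can be modeled as a small validation routine. This is an illustrative Python sketch, not the browser's implementation; ValueError stands in for the browser's SyntaxError.

```python
import json

def validate_import_map(text):
    """Check the structural requirements of an import map.

    Raises ValueError (standing in for SyntaxError) on invalid JSON,
    and TypeError when a value has the wrong shape."""
    try:
        parsed = json.loads(text)          # 1. must be a valid JSON string
    except json.JSONDecodeError as e:
        raise ValueError(f"SyntaxError: {e}")
    if not isinstance(parsed, dict):       # 2. must be a JSON object
        raise TypeError("import map must be a JSON object")
    for key in ("imports", "scopes"):      # 3. imports/scopes must be objects
        if key in parsed and not isinstance(parsed[key], dict):
            raise TypeError(f"'{key}' must be a JSON object")
    # 4. every value in scopes must itself be a module specifier map
    for url, specifier_map in parsed.get("scopes", {}).items():
        if not isinstance(specifier_map, dict):
            raise TypeError(f"scope '{url}' must map to a JSON object")
    return parsed

validate_import_map('{"imports": {"foo": "/js/foo.js"}}')  # passes
```

Running it on `'{"imports": "NOT_AN_OBJECT"}'` raises the TypeError case, mirroring the browser examples below.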

<!-- In the HTML document -->
<script>
window.onerror = (e) => {
  // SyntaxError will be thrown.
};
</script>
<script type="importmap">
NOT_A_VALID_JSON_STRING
</script>

<!-- In another HTML document -->
<script>
window.onerror = (e) => {
  // TypeError will be thrown.
};
</script>
<script type="importmap">
{
  "imports": "NOT_AN_OBJECT"
}
</script>

After the validation of JSON is done, parsing the import map will check whether the values (URLs) in the Module specifier maps are valid.

If the map contains an invalid URL, the value of the entry in the module specifier map will be marked as invalid. Later when the browser is trying to resolve the module specifier, if the resolution result is the invalid value, the resolution will fail and throw a TypeError.

<!-- In the HTML document -->
<script type="importmap">
{
  "imports": {
    "foo": "INVALID URL"
  }
}
</script>

// Notice that TypeError is thrown when trying to resolve the specifier
// with an invalid URL, not when parsing the import map.
import("foo").catch((err) => {
  // TypeError will be thrown.
});

Resolution precedence

When the browser is trying to resolve the module specifier, it will find out the most specific Module Specifier Map to use, depending on the URL of the referrer.

The precedence order of the Module Specifier Maps from high to low is

  1. scopes
  2. imports

After the most specific Module Specifier Map is determined, resolution will iterate the parsed module specifier map to find the best match for the module specifier:

  1. The entry whose key equals the module specifier.
  2. The entry whose key has the longest common prefix with the module specifier provided the key ends with a trailing slash ‘/’.
<!-- In the HTML document -->
<script type="importmap">
{
  "imports": {
    "a/": "/js/test/a/",
    "a/b/": "/js/dir/b/"
  }
}
</script>

// In a module script.
import foo from "a/b/c.js"; // will import "/js/dir/b/c.js"

Notice that although the first entry "a/" in the import map could be used to resolve "a/b/c.js", there is a better match "a/b/" below, since it shares a longer common prefix with the module specifier. So "a/b/c.js" will be resolved to "/js/dir/b/c.js", instead of "/js/test/a/b/c.js".
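The best-match rule can be sketched as a small Python model of the resolution step (an illustration of the idea, not the spec algorithm; the function name is mine):

```python
def resolve(specifier, specifier_map):
    """Resolve a module specifier against a module specifier map.

    An exact key match wins; otherwise the longest key that ends with
    '/' and prefixes the specifier is used, with the remainder of the
    specifier appended to that key's URL."""
    if specifier in specifier_map:
        return specifier_map[specifier]
    # Longest-prefix match among keys ending with a trailing slash.
    best = None
    for key in specifier_map:
        if key.endswith("/") and specifier.startswith(key):
            if best is None or len(key) > len(best):
                best = key
    if best is None:
        return None  # specifier is left to normal URL resolution
    return specifier_map[best] + specifier[len(best):]

imports = {"a/": "/js/test/a/", "a/b/": "/js/dir/b/"}
print(resolve("a/b/c.js", imports))  # /js/dir/b/c.js
```

With the map from the example above, "a/x.js" still falls back to the shorter "a/" prefix, resolving to "/js/test/a/x.js".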

Details can be found in Resolve A Module Specifier Specification.

Limitations of import maps

Currently, import maps have some limitations that may be lifted in the future:

  • Only one import map is supported
    • Processing the first import map script tag will disallow the following import maps from being processed. Those import map script tags won’t be parsed and the onError handlers will be called. Even if the first import map fails to parse, later import maps won’t be processed.
  • External import maps are not supported. See issue 235.
  • Import maps won’t be processed if module loading has already started. The module loading includes the following module loads:
    • Inline/External module load.
    • Static import/Dynamic import of Javascript modules.
    • Preload the module script in <link rel="modulepreload">.
  • Not supported for workers/worklets. See issue 2.

Common problems when using import maps

There are some common problems when you use import maps incorrectly:

  • Invalid JSON format
  • The module specifier cannot be resolved, but the import map seems correct: This is one of the most common problems when using import maps. The import map tag needs to be loaded before any module load happens, check the Limitations of import maps section above.
  • Unexpected resolution
    • See the Resolution precedence part above, and check if there is another specifier key that takes higher precedence than the specifier key you thought.

Specification link

The specification can be found in import maps.


Many thanks to Jon Coppeard, Yulia Startsev, and Tooru Fujisawa for their contributions to the modules implementations and code reviews on the import maps implementation in Spidermonkey. In addition, great thanks to Domenic Denicola for clarifying and explaining the specifications, and to Steven De Tar for coordinating this project.

Finally, thanks to Yulia Startsev, Steve Fink, Jon Coppeard, and Will Medina for reading a draft version of this post and giving their valuable feedback.


  1. If it isn’t a valid JSON string, a SyntaxError will be thrown. Otherwise, if the parsed strings are not of type JSON objects, a TypeError will be thrown. 

Jan-Erik Rediger: Five-year Moziversary

I can't believe it's already my fifth Moziversary. It's been 5 years now since I joined Mozilla as a Telemetry engineer, I blogged every year since then: 2019, 2020, 2021, 2022. As I'm writing this I'm actually off on vacation (and will be for another week or so) and also it's super early here. Nonetheless it's time to look back and forward.

So what have I been up to in the past year? My team changed again. We onboarded Perry and Bruno and when Mike left we got Alessio as the manager of us all. In September we finally met again at the Mozilla All Hands in Hawaii. Not everyone was there, but it was great to meet those that were. I also went to the Berlin office more often. It's still good to have that other place to work from.

I didn't add any new posts to the "This Week in Glean" series and it was effectively retired. I still believe that openly communicating about our work is very useful and would like to see more of that again. Maybe I'll find some topics to write about this year (and I have some drafts lying around I should finish). One major piece of work in the past year was migrating Glean to use UniFFI. That shipped and I'm proud we rolled it out. Beyond that I spent large parts of my time supporting our users (other Mozilla applications, mostly mobile), fixing bugs and slowly tackling some feature improvements.

And what is for the next year? I'm in the process of handing over my Glean SDK tech lead role to Travis. After over 2 years I feel it's the right time to give up some responsibility and decision power over the project. I believe that sharing responsibilities and empowering others to fill tech lead positions is an overwhelmingly good and important thing.

This shift will free me up to expand my work into other places. I'm staying with the same team and of course a major part of my work will be on the SDK regardless, but I also hope to expand my knowledge about our data systems end-to-end and have a high-level view and opinion about it. For the most part Glean feels "complete", but of course there's always feature requests, use cases we want to support better and improvements to make. My list of little things I would like to improve keeps growing, but now also gets new items beyond just the SDK.

To the next year and beyond!

Thank you

Thanks to my team mates Alessio, Chris, Travis, Perry and Bruno and also thanks to the bigger data engineering team within Mozilla. And thanks to all the other people at Mozilla I work with.

This Week In Rust: This Week in Rust 484

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is goku, a HTTP load tester.

Thanks to Joaquín Caro for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

381 pull requests were merged in the last week

Rust Compiler Performance Triage

Some noisy benchmarks impeded performance review this week. There was a notable improvement to a broad range of primary benchmarks, first from PR #108440, which revised the encodable proc macro to handle the discriminant separately from its fields, and second from PR #108375, which inlined a number of methods that had only a single caller. Both of these PRs were authored by the same contributor; many thanks, Zoxc!

Triage done by @pnkfelix. Revision range: 3fee48c1..31f858d9

5 Regressions, 4 Improvements, 6 Mixed; 6 of them in rollups. 39 artifact comparisons made in total.

Full report

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-03-01 - 2023-03-29 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

You've probably come across unsafe. So "unsafe" is a keyword that sort of unlocks super powers and segfaults.

Arthur Cohen during FOSDEM '23

Thanks to blonk for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog: An Axios tech reporter on her favorite corners of the internet

Ashley Gold covers big tech and regulators as a tech and policy reporter at Axios.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later, and what sites and forums shaped them.

This month we chat with Ashley Gold, who covers big tech and regulators as a tech and policy reporter at Axios. She’ll be moderating a panel at SXSW in Austin, Texas called “Open Innovation: Breaking The Default Together,” featuring members from Mozilla’s innovation team.

What is your favorite corner of the internet? 

My favorite corner of the internet is where I can read really good, insightful, funny TV, movie and music reviews. I love cultural criticism. I love TV recaps. I love album reviews. I love movie reviews. I love watching or listening to something and immediately being able to go on the internet and see what other people thought about it, and whether they came to the same conclusions as me and what they rated it and their analysis. I just love communities talking about a show or an album or movie together.

What is an internet deep dive that you can’t wait to jump back into?

 I love any deep dive that’s about a very niche internet personality that only people who are very online would know. I love when there’s a deep dive into their life behind their Twitter account or something like that. I get way too interested in that kind of stuff. 

I also really love a good long read, not necessarily about something that happened or a news event but about a person themselves. Profiles on people that I’m interested in, I’ll always click on

What is the one tab you always regret closing?

At Axios, where I work, we have a document that lays out all of our axioms, which is what we start our little paragraphs with. And every time I exit out, I realize I did it again.

What can you not stop talking about on the internet right now? 

Any content that has to do with Rihanna and her pregnancy and her first baby. She just gave an interview to British Vogue. I had a baby a year-and-a-half ago and hearing Rihanna say the same exact thing that I did right after I had a baby just cracks me up because she’s a billionaire superstar and she’s having the same experience I did. So I’m loving anything that has to do with Rihanna, this current era.

What was the first online community you engaged with?

I had this Xanga [page] when I was in, maybe, sixth grade. I definitely wrote some things in my Xanga that were way too personal to put on the internet. A middle school boyfriend I had at the time read it and figured out I didn’t really like him. So I had some explaining to do.

What articles and videos are in your Pocket waiting to be read/watched right now?

I’m in the middle of finishing a long profile on The New York Times’ executive editor, Joe Kahn. I just finished up The New York Times Magazine’s profile on SZA. And I look forward to reading an article from Tim Alberta on Michigan State University.

If you could create your own corner of the internet, what would it look like?

It would be a lot of pop culture, music, movies and TV content. Makeup tips, workout tips, tips for new parents and generally a lot of positivity. The internet has too much negativity, but I do think it’s still a good place for people to come together, find like-minded people, get tips on everyday life and live with intention.

What do you hope people will have a better understanding of from listening to the SXSW panel you’re moderating?

I hope that people understand that, even though their lives seem to be dominated by the biggest companies in the world and their products and their offerings, there are other organizations out there working on interesting groundbreaking technology that aren’t the biggest companies in the world. It might just take some challenging of our assumptions about what makes a successful company and whether that’s dependent on profit for those kinds of companies to really break through and have their products used by everyone.

Join Ashley, along with Peter van Hardenberg of Ink & Switch and Imo Udom and Liv Erickson of Mozilla, at their panel “Open Innovation: Breaking The Default Together” at SXSW in Austin, Texas on March 14. You can read Ashley’s work on Axios and follow her on Twitter at @ashleyrgold.

Save and discover the best articles, stories and videos on the web

Get Pocket

The post An Axios tech reporter on her favorite corners of the internet appeared first on The Mozilla Blog.

The Mozilla BlogCommon Sense Media’s ultimate guide to parental controls

A child smiles while using a tablet computer. Credit: Nick Velazquez / Mozilla

Do you need parental controls? What are the options? Do they really work? Here’s everything you need to know about the wide array of parental control solutions, from OS settings to monitoring apps to network hardware.

Even if you’ve talked to your kids about screen-time limits and responsible online behavior, it’s still really tough to manage what they do when you’re not there (and even when you are). Parental controls can support you in your efforts to keep your kids’ internet experiences safe, fun, and productive. They work best when used openly and honestly in partnership with your kids.

Figuring out what kind of parental control is best is entirely based on your own family’s needs. Some families can get by with simple, free browser settings to filter inappropriate content. Some families need help clamping down on screen time. Some folks are cool with spot-checks on their kids’ devices. Wherever you are in your search, this guide can help you make sense of the wide array of options for managing your family’s devices. Find the answers to parents’ most frequently asked questions about parental controls.

What are the best parental controls if I want to:

Block websites. If you just want to limit what your kids can search for, your best option is to enable Google SafeSearch in whichever browser or browsers you use. First, you need to make sure your browsers use Google as their default search engine, and then you need to turn on SafeSearch. This is a good precaution to take as soon as your kids start going online and you want to make sure they don’t accidentally stumble across something yucky.

Block websites and filter content. If you want to prevent access to specific websites and limit your kid’s exposure to inappropriate content such as mature games or porn, you can use the parental controls that are built into your device’s operating system. Every major operating system — Microsoft’s Windows, Apple’s Mac OS, and even Amazon’s Fire — offers settings to keep kids from accessing stuff you don’t want them to see. To get the benefits, you need to use the most updated version of the operating system, and each user has to log in under his or her profile. The settings apply globally to everything the computer accesses. Each works differently and has its own pros and cons. This is the best solution if your kids are younger and are primarily using a home device. Check out each one’s features: Microsoft, Apple, Amazon.

Block websites, filter content, impose time limits, see what my kids are doing. A full-featured, third-party parental control service such as Bark, Qustodio or NetNanny will give you a lot of control over all of your kid’s devices (the ones they use at home as well as their phones). These can be pricey (especially if you have several kids to monitor), but the cost includes constant device monitoring, offering you visibility into how kids are using their devices. These parental controls can only keep track of accounts that they know your kid is using, and for some apps, you’ll need your kid’s password in order to monitor activity. If your kid creates a brand-new profile on Instagram using a friend’s computer without telling you, for example, the parental controls won’t cover that account.

Monitor my kid’s phone. To keep tabs on your tween or teen’s phone, your best bet is to download an app to monitor text messages, social networks, emails, and other mobile functions — try Bark, Circle, TeenSafe, or WebWatcher. These are especially helpful if you’re concerned about potentially risky conversations or iffy topics your kid might be engaging in. Bark, for example, notifies you when it detects “alert” words, such as “drugs.” To monitor social media, you’ll need your kid’s account information, including passwords.

Track my kid’s location. You can use GPS trackers such as Find My Friends and FamiSafe to stay abreast of your kid’s whereabouts. Your kid’s phone needs to be on for these to work, though.

Manage all devices on the network, limit screen time, filter content, turn off Wi-Fi. There are both hardware and software solutions to control your home network and your home Wi-Fi. To name a few popular ones: OpenDNS is a download that works with your existing router (the device that brings the internet into your home) to filter internet content. Circle Home Plus is a device and subscription service that pairs with your existing router and lets you pause access to the internet, create time limits, and add content filters to all devices on your home network (including Wi-Fi devices), plus manage phones and tablets outside the home. Some internet service providers such as Comcast and Verizon offer parental control features that apply to all devices on the network, too. Network solutions can work for families with kids of different ages; however, mucking around in your network and Wi-Fi settings can be challenging, and the controls may not apply when kids are on a different network.

What are the best parental control options for iOS phones and tablets?

If you have an iPhone or iPad, Apple’s Screen Time settings let you manage nearly every aspect of your kid’s iOS device, including how much time kids spend on individual apps and games and what they download. You can even turn the phone off for specified periods, such as bedtime. There are two ways to enable Screen Time: You can either set it up on your kid’s device and password-protect the settings, or you can set up Family Sharing through your Apple account and manage the features from your phone.

What are the best parental control options for Android devices?

Android devices can vary a lot in what they offer, so check your device’s settings to see what options you have. If your kid is under 13, you can download Google’s Family Link to track and control online activity, including text messaging and social media, using your own phone. (You can use Family Link on teens’ devices, but you can’t lock the settings.) You can also use Android’s Digital Wellbeing settings, which are built into the latest version of the OS. These can help kids become more mindful of the time they’re spending online — and hopefully help them cut down. You’ll want to help your kid enable the settings that will be most useful on the road to self-regulation.

Can I set parental controls in specific apps, such as Snapchat and TikTok?

In addition to blocking specific people, most social media apps let you disable features that could pose some risks for kids. For example, you may be able to turn off chat, restrict conversation to friends only, and hide your child’s profile so that it won’t show up in search results. Some apps go a step further by letting users control their own use of the app. Instagram’s Your Activity feature, for example, shows you how much time you’ve spent in the app and lets you set limits for yourself. YouTube has a similar feature that reminds users to take a break. TikTok even allows parents to set limits and remotely manage their kids’ TikTok account from their phone using its Family Pairing feature. To find out if your kids’ favorite apps offer any types of restrictions, go to the app’s settings section (usually represented by the gear icon). Unless an app offers passcode protection for its settings (and most don’t), your kid can easily reverse them.

Do I need to worry about my kid disabling parental controls?

Yes, kids can undo parental controls. In fact, the directions on how to get around them are easily available on the internet. Depending on your software, you may get a notification that the parental control was breached — or not. Kids can figure out all sorts of ingenious methods to keep doing what they want to be doing — talking to friends, staying up late playing Fortnite, and watching videos you don’t want them to see. If you notice something fishy such as a steep drop-off in your parental control notifications, Wi-Fi or data activity after you’ve turned off the network, or anything else that indicates the parental control isn’t working the way it’s supposed to, your kid may have figured out how to get around it. It could be for another reason, though, since parental controls can be affected by system updates, power outages, and other technical issues.

Will my kid know that I’m using parental controls?

It really depends on the type of controls you install and the devices you have. Some parental controls can be installed without your kids knowing, but Common Sense Media doesn’t recommend it (unless you have a really serious issue with your kid and you need to monitor discreetly). In fact, be cautious with companies that promise covert monitoring, as they tend to prey on parents’ fears. Parental control companies that encourage open dialogue will most likely be more helpful anyway, because at some point you’ll need to discuss what you find. And that’s a lot easier to do if your kid already knows you’re monitoring them. If you decide to use parental controls, talk to your kids about why you’re using them (to help keep them safe) and how your ultimate goal is for them to learn how to interact online responsibly and regulate their own usage independently.

Common Sense Media is the nation’s leading nonprofit organization dedicated to improving the lives of all kids and families by providing the trustworthy information, education, and independent voice they need to thrive in the 21st century.

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post Common Sense Media’s ultimate guide to parental controls appeared first on The Mozilla Blog.

Mozilla Performance BlogImproving the Test Selection Experience with Mach Try Perf

If you’ve ever tried to figure out what performance tests you should run to target some component, and got lost in the nomenclature of our CI task names, then you’re not alone!

The current naming for performance tests that you’ll find when you run ./mach try fuzzy can look like this: test-android-hw-a51-11-0-aarch64-shippable-qr/opt-browsertime-tp6m-essential-geckoview-microsoft-support. The main reason these task names are so convoluted is that we run so many different variant combinations of the same test across multiple platforms and browsers. For those of us who are familiar with it, it’s not too complex. But for people who don’t see these daily, it can be overwhelming to try to figure out what tests they should be running, or even where to start in terms of asking questions. This leads to hesitancy in taking the initiative to do performance testing. In other words, our existing system is not fun or intuitive to use, which prevents people from taking performance into consideration in their day-to-day work.




In May of 2022, the Performance team had a work week in Toronto, and we brainstormed how we could fix this issue. The original idea was essentially to build a web page and/or improve the try chooser usage (you can find the bug for all of this ./mach try perf work here). However, given that developers were already used to the mach try fuzzy interface, it made little sense for us to build something new for developers to have to learn. So we decided to re-use the fzf interface from ./mach try fuzzy. I worked with Andrew Halberstadt [:ahal] to build an “alpha” set of changes first, which revealed two issues: (i) running hg through the Python subprocess module results in some interesting behaviours, and (ii) our perf selector changes had too much of an impact on the existing ./mach try fuzzy code. From there, I refactored the code for our fzf usage to make it easier to use in our perf selector, and so that we don’t impact existing tooling with our changes.

The hg issue we had was quite interesting. One feature of ./mach try perf is that it performs two pushes by default, one for your changes, and another for the base/parent of your patch without changes. We do this because comparisons with mozilla-central can sometimes result in people comparing apples to oranges due to minor differences in branch-specific setups. This double-push lets us produce a direct Perfherder (or PerfCompare) link in the console after running ./mach try perf to easily, and quickly know if a patch had any impact on the tests.

This was easier said than done! We needed to find a method that would allow us to both parse the logs and print out these lines in real time, so that the user could see there was something happening. At first, I tried the obvious method of parsing logs when I would trigger the push-to-try method, but I quickly ran into all sorts of issues, like logs not being displayed and freezing. Digging into the push-to-try code in an effort to get hg logs parsed, I started off trying to run the script with the check_call and run methods from the subprocess module. The check_call method caused hg to hang, and with the newer run method, the logs were output far too slowly. It looked like the tool was frozen, and this was a prime candidate for corrupting a repository. I ended up settling on using Popen because it gave us the best speed, even though it was still slower than the original ./mach try fuzzy. I suspect that this issue stems from how hg protects the logging they do, and you can find a bit more info about that in this bug comment.
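
The approach that ended up working can be sketched roughly like this. The command below is a stand-in for illustration; the real code wraps the hg push-to-try invocation:

```python
import subprocess

def run_streaming(cmd):
    """Run a command, echoing its output line by line as it arrives,
    while also collecting the lines so they can be parsed afterwards."""
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave stderr so nothing is lost
        text=True,
        bufsize=1,  # line-buffered
    )
    lines = []
    for line in proc.stdout:  # yields lines as the child produces them
        print(line, end="")   # show progress to the user in real time
        lines.append(line)
    proc.wait()
    return proc.returncode, lines

# After the push finishes, the collected lines can be scanned for the
# URL that feeds the comparison link printed in the console.
code, lines = run_streaming(
    ["echo", "remote: view your push at https://example.org/push/1"]
)
links = [l for l in lines if "https://" in l]
```

Iterating over `proc.stdout` is what keeps the output flowing; waiting for the process to finish before reading, as `run` and `check_call` encourage, is where the freezing behaviour comes from.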

Outside of the issues I hit with log parsing, the core category-building aspect of this tool went through a major rewrite one month after we landed the initial patches because of some unexpected issues. These issues existed by design, since we couldn’t tell exactly what combinations of variants could exist in Taskcluster. However, as Kash Shampur [:kshampur] rightly pointed out: “it’s disappointing to see so many tests available but none of them run any tests!”. I spent some time thinking about this issue and completely rewrote the core categorization code to use the current mechanism, which involves building decision matrices whose dimensions are the suites, apps, platforms, variant combinations, and categories (essentially, a 5-dimensional matrix). This made the code much simpler to read and maintain, because it allowed us to move any and all specific/non-generalized code out of the core code. This problem was surprisingly well suited for matrix operations, especially when we consider how apps, platforms, suites, and variants interact with each other and the categories. Instead of explicitly coding all of these combinations/restrictions into a method, we can put them into a matrix and use OR/AND operations on matching elements (indices) across some dimensions to modify them. Using this matrix, we can figure out exactly which categories we should display to the user given their input by simply looking for all entries that are true/defined in the matrix.
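
As a rough illustration of the idea (all names below are made up, and the real matrix covers far more suites, apps, platforms, and variants), restrictions become matrix operations instead of special-cased code:

```python
import numpy as np

# Dimensions of the decision matrix:
# suites x apps x platforms x variants x categories.
suites = ["raptor", "talos"]
apps = ["firefox", "chrome"]
platforms = ["linux", "android"]
variants = ["no-variant", "fission"]
categories = ["Pageload", "Benchmarks"]

# Start from "every combination exists", then carve out restrictions.
matrix = np.ones((len(suites), len(apps), len(platforms),
                  len(variants), len(categories)), dtype=bool)

# Hypothetical restriction: chrome doesn't run on android.
matrix[:, apps.index("chrome"), platforms.index("android"), :, :] = False
# Hypothetical restriction: the Benchmarks category isn't offered on android.
matrix[:, :, platforms.index("android"), :, categories.index("Benchmarks")] = False

def categories_for(platform):
    """Return the categories that still have at least one runnable
    combination on the given platform."""
    p = platforms.index(platform)
    alive = matrix[:, :, p, :, :].any(axis=(0, 1, 2))
    return [c for c, keep in zip(categories, alive) if keep]
```

Selecting what to display then reduces to slicing the matrix on the user’s input and keeping every category with at least one true entry, which is what lets the non-generalized rules stay out of the core code.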


Mach Try Perf


Mach Try Perf Demo

A (shortened) demo of the mach try perf tool.


The ./mach try perf tool was initially released as an “alpha” version in early November. After a major rewrite of the core code of the selector, and some testing, we’re making it more official, and widely known this month! From this point on, the recommended approach for doing performance testing in CI is to use this tool.

You can find all the information you need about this tool in our PerfDocs page for it here. It’s very simple to use; simply call ./mach try perf and it’ll bring up all the pre-built categories for you to select from. What each category is for should be very clear as they use simple names like Pageload for the pageload tests, or Benchmarks for benchmark tests. There are a number of categories there that are open for modifications/improvements, and you are free to add more if you want to!

If you are wondering where a particular test is, use --no-push to output a list of the tasks selected from a particular selection (this will improve in the future). Check out the --help for more options that you can use (for instance, Android, Chrome, and Safari tests are hidden behind a flag). The --show-all option is also very useful if the categories don’t contain the tests you want. It will let you select directly from the familiar ./mach try fuzzy interface using the full task graph.

Testimonials (or famous last words?) from people who have tested it out:

./mach try perf is really nice. I was so overusing that during the holiday season, but managed to land a rather major scheduling change without (at least currently) known perf regressions.
– smaug
really enjoying ./mach try perf so far, great job
– denispal
./mach try perf looks like exactly the kind of tool I was looking for!
– asuth
If you have any questions about this, please check the FAQ in the docs linked above, or reach out to us in the #perftest channel on Matrix.

Wladimir PalantLastPass breach update: The few additional bits of information

Half a year after the LastPass breach started in August 2022, information on it remains sparse. It took until December 2022 for LastPass to admit losing their users’ partially encrypted vault data. This statement was highly misleading, e.g. making wrong claims about the protection level provided by the encryption. Some of the failures to protect users only became apparent after some time, such as many accounts being configured with a dangerously low password iterations setting; the company hasn’t admitted these failures to this day.

Screenshot of an email with the LastPass logo. The text: Dear LastPass Customer, We recently notified you that an unauthorized party was able to gain access to a third-party cloud-based storage service which is used by LastPass to store backups. Earlier today, we posted an update to our blog with important information about our ongoing investigation. This update includes details regarding our findings to date, recommended actions for our customers, as well as the actions we are currently taking.

Despite many questions being raised, LastPass maintained strict radio silence after December. That changed yesterday, when they published an article with details of the breach. If you were hoping to get answers: nope. If you look closely, the article again carefully avoids making definitive statements. There is very little to learn here.

TL;DR: The breach was helped by a lax security policy, an employee was accessing critical company data from their home computer. Also, contrary to what LastPass claimed originally, business customers using Federated Login Services are very much affected by this breach. In fact, the attackers might be able to decrypt company data without using any computing resources on bruteforcing master passwords.

Update (2023-02-28): I found additional information finally explaining the timeline here. So the breach affects LastPass users who had an active LastPass account between August 20 and September 16, 2022. The “Timeline of the breach” section has been rewritten accordingly.

The compromised senior DevOps engineer

According to LastPass, only four DevOps engineers had access to the keys required to download and decrypt LastPass backup data from Amazon Web Services (AWS). These keys were stored in the LastPass’ own corporate LastPass vault, with only these four people having access.

The attackers learned about that when they initially compromised LastPass in August 2022. So they specifically targeted one of these DevOps engineers and infected their home computer with a keylogger. Once this engineer used their home computer to log into the corporate LastPass vault, the attackers were able to access all the data.

While LastPass makes it sound like the employee’s fault, one has to ask: what kind of security policies allowed an employee to access highly critical company assets from their home computer? Was this kind of access sanctioned by the company? And if yes, e.g. as part of the Bring Your Own Device (BYOD) policy – what kind of security measures were in place to prevent compromise?

Also, in another transparent attempt to shift blame LastPass mentions a vulnerability in a third-party media software which was supposedly used for this compromise. LastPass does not mention either the software or the vulnerability, yet I highly doubt that the attackers burned a zero-day vulnerability. LastPass would certainly mention it if they did, as it supports their case of being overrun by highly sophisticated attackers.

However, Ars Technica quotes an anonymous source claiming that the software in question was Plex media server. Plex has two known vulnerabilities potentially allowing remote code execution: CVE-2019-19141 and CVE-2018-13415. The former is unlikely to have been exploited because it requires an authenticated attacker, which leaves us with a vulnerability from 2018.

And that certainly explains why LastPass wouldn’t mention the specific vulnerability used. Yes, allowing an employee to access company secrets from a computer where they also run a Plex version that is at least four years old and directly accessible from the internet – that’s pretty damning.

Update (2023-03-02): Dan Goodin, the journalist behind the article above, got a definitive statement from LastPass confirming my speculations:

We can confirm that the engineer was running an earlier, unpatched version of Plex Media Server on the engineer’s home computer. This was not a zero-day attack.

Update (2023-03-05): According to PCMag, Plex learned that the vulnerability abused here was actually CVE-2020-5741 from 2020. That would mean that the attackers already had admin access to the media server. How they gained admin access is unknown.

Timeline of the breach

Other than that, we learn fairly little from the LastPass statement. In particular, this doesn’t really help understand the timeline:

the threat actor […] was actively engaged in a new series of reconnaissance, enumeration, and exfiltration activities aligned to the cloud storage environment spanning from August 12, 2022 to October 26, 2022.

As it turns out, another recently published document is more specific:

The threat actor was able to copy five of the Binary Large Objects (BLOBs) database shards that were dated: August 20, 2022, August 30, 2022, August 31, 2022, September 8, 2022, and September 16, 2022. This took place between September 8 - 22, 2022. LastPass accounts created after these dates are not affected.

So in the initial breach in August 2022 the attackers compromised an employee’s company laptop to steal some source code and internal information. They used the information to compromise the aforementioned senior DevOps engineer’s home computer. This way they gained access to LastPass’ backup storage, and between September 8 and 22 they’ve been copying data.

And we finally know which users are affected: the ones who had active LastPass accounts between August 20 and September 16, 2022. Anyone who deleted their account before that time span or created their account after it isn’t affected.

That’s finally something specific. Too bad that it took almost half a year to get there.

Bad news for business customers

Back in December, LastPass had good news for business customers:

The threat actor did not have access to the key fragments stored in customer Identity Provider’s or LastPass’ infrastructure and they were not included in the backups that were copied that contained customer vaults. Therefore, if you have implemented the Federated Login Services, you do not need to take any additional actions.

As people pointed out, Super Admin accounts cannot be federated. So even businesses implementing Federated Login Services should have taken a closer look at their Super Admin accounts. That’s another issue LastPass has failed to admit so far.

But that isn’t the biggest issue. As Chaim Sanders noticed, LastPass’ recently published recommendations for business customers directly contradict their previous statements:

The K2 component was exfiltrated by the threat actor as it was stored in the encrypted backups of the LastPass MFA/Federation Database for which the threat actor had decryption keys.

As Chaim Sanders explains, business accounts using Federated Login Services are using a “hidden master password” consisting of the parts K1 and K2. And now we learn that K2 was stored without any protection in the backups that the attackers exfiltrated – just like URLs in the vault data.

But at least the K1 component is safe, since that one is stored with the company, right? Well, it didn’t leak in the breach. However, Chaim Sanders points out that this part is identical for the entire company and can be trivially extracted by any employee.

So the attackers can compromise any of the company’s employees, similarly to how they compromised LastPass’ DevOps engineer. And they will get the K1 component, enabling them to decrypt the LastPass data for the entire company. No need to throw lots of computing resources on bruteforcing here.
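
To make the issue concrete, here is a tiny sketch. The exact way LastPass combines the two components isn’t spelled out here, so a simple XOR stands in for the real derivation:

```python
from secrets import token_bytes

def combine(k1: bytes, k2: bytes) -> bytes:
    # Stand-in for the real derivation: XOR the two key components.
    return bytes(a ^ b for a, b in zip(k1, k2))

k1 = token_bytes(32)  # company-wide component, identical for every employee
k2 = token_bytes(32)  # per-user component, stored in the exfiltrated backups

# With the leaked K2 values in hand, one employee's K1 is enough to
# reconstruct the hidden master password of every user in the company.
hidden_master_password = combine(k1, k2)
```

Note what is missing from this picture: no per-user secret that stayed out of the attackers’ reach, and no slow key derivation standing between them and the vault key.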

Just read the full article by Chaim Sanders, it’s really bad news for any company using LastPass. And to make matters worse, LastPass makes resetting K1 very complicated.

Any security improvements?

While the LastPass statement goes to great lengths explaining how they want to prevent data from leaking again in the same way, something is suspiciously missing: improvements for the encryption of customer data. It’s great that LastPass wants to make exfiltrating their data harder in future, but why not make this data useless to the attackers?

Two issues would have been particularly easy to fix:

  1. The new master password policy introduced in 2018 is not being enforced for existing accounts. So while new accounts need long master passwords, my old test account still goes with eight characters.
  2. The password iterations setting hasn’t been updated for existing accounts, leaving some accounts configured with 1 iteration despite the default being 100,100 since 2018. My test account in particular is configured with 5,000 iterations which, quite frankly, shouldn’t even be a valid setting.
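
To see why the iterations setting matters so much, here is a small sketch using Python’s standard hashlib. LastPass derives the vault key with PBKDF2-HMAC-SHA256, salted with the account email; the password and timing loop below are illustrative only:

```python
import hashlib
import time

def derive_key(master_password: str, email: str, iterations: int) -> bytes:
    """PBKDF2-HMAC-SHA256 key derivation, as used for LastPass vault keys."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode(),
        email.encode(),  # LastPass uses the account email as the salt
        iterations,
        dklen=32,
    )

# The cost of testing one password guess scales linearly with the
# iteration count: an account left at 1 iteration is 600,000 times
# cheaper to brute-force than one at the current default.
for iterations in (1, 5_000, 100_100, 600_000):
    start = time.perf_counter()
    derive_key("correct horse battery staple", "user@example.com", iterations)
    print(f"{iterations:>7} iterations: {time.perf_counter() - start:.4f}s per guess")
```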

The good news: when I logged into LastPass today, I could see the Security Dashboard indicating new messages for me.

Screenshot of the LastPass menu. Security Dashboard has a red dot on its icon.

Going there, I get a message warning me about my weak master password:

Screenshot of a LastPass message titled “Master password alert.” The message text says: “Master password strength: Weak (50%). For your protection, change your master password immediately.” Below it a red button titled “Change password.”

Judging by a web search, this isn’t a new feature but has been there for a while. It’s quite telling that I only noticed this message when I went there specifically looking for it. This approach is quite different from forcing users to set a strong master password, which is what LastPass should have done if they wanted to protect all users.

And the password iterations? LastPass has recently increased the default to 600,000 iterations. But this is once again for new accounts only.

There is no automatic password iterations update for my test account. There isn’t even a warning message. As far as LastPass is concerned, everything seems to be just fine. And even for business users, LastPass currently tells admins to update the setting manually, once again promising an automated update at some point in the future.

Mike HoyeNever Work In Theory, Spring 2023

Indulge me for a minute; I’d like to tell you about a conference I’m helping organize, and why. But first, I want to tell you a story about measuring things, and the tools we use to do that.

Specifically, I want to talk about thermometers.

Even though a rough understanding of basic principles of the tool we now call a thermometer are at least two thousand years old, for centuries the whole idea that you could measure temperature at all was fantastical. The entire idea was absurd; how could you possibly measure an experience as subjective and ethereal as temperature?

the Dalence Thermometer, one of the rare scientific apparatuses notable for closely resembling an angry man sneaking a bong rip.

Even though you could demonstrate the basic principles involved in ancient Greece with nothing more than glass tubes and a fire, the question itself was nonsense, like asking how much a poem weighs, or how much water you could pour out of a sunset.

It was more than 1600 years between the earliest known glass-tube demonstrations and Santorio Santorio‘s decision to put a ruler to the side of one of those glass tubes; it was most of a century after that before Carlo Renaldini went ahead and tried Christiaan Huygens‘ idea that the freezing and boiling points of water be used as the anchor points of a linear scale. (Sir Isaac Newton followed that up with a proposal that the increments of that gradient be “12”, a decision I’m glad we didn’t stick with. Anders Celsius’ idea was better.)

The first tools we’d recognize as “modern thermometers” – using mercury, one of those unfortunately-reasonable-at-the-time decisions that have had distressing long-term consequences – were invented by Fahrenheit in 1714. More tragically, he proposed the metric that bears his name, but: the tool worked, and if there’s one thing in tech that we all know and fear, it’s that there’s nothing quite as permanent as something temporary that works.

By 1900, Henry Bolton – author of “The Evolution Of The Thermometer, 1592-1743” – had described this long evolution as “encumbered with erroneous statements that have been reiterated with such dogmatism that they have received the false stamp of authority”, a phrase that a lot of us in tech, I suspect, find painfully familiar.

Today, of course, outside of the most extreme margins – things get pretty dicey down in the quantum froth around absolute zero and when your energy densities are way up past the plasmas – these questions are behind us. Thermometers are real, temperatures can be very precisely measured, and that has enabled a universe of new possibilities across physics and chemistry and through metallurgy to medicine to precision manufacturing, too many things to mention.

The practice of computation, as a field, is less than a century old. We sometimes measure things we can measure, usually the things that are easiest to measure, but at the intersection of humans and computers, the most important part of the exercise, this field is still deeply & dogmatically superstitious. The false stamps of authority are everywhere.

I mean, look at this. Look at it. Tell me that isn’t kabbalist occultism, delivered via PowerPoint.

This is where we are, but we can do better.

On Tuesday, April 25, and Wednesday, April 26, It Will Never Work in Theory is running our third live event: a set of lightning talks from leading software engineering researchers on immediate, actionable results from their work.

I want to introduce you to the people building the thermometers of modern software engineering.

Some of last year’s highlights include the introduction of novel techniques like Causal Fairness Testing, supercharging DB test suites with SQLancer and two approaches for debugging neural nets, and none of these are hypothetical future someday ideas. These are tools you can start using now. That’s the goal.

And it should be a lot of fun, I hope to see you there.

Never Work In Theory:

The event page:

Mozilla Performance BlogThe Firefox Profiler team was at FOSDEM 2023

The Free and Open source Software Developers’ European Meeting (FOSDEM) 2023 took place on the 4th and 5th of February. This was the first in-person FOSDEM since 2020, and for this reason, coming back to the good ol’ ULB building felt very special. The event was just like we left it in 2020: lots of people, queues in front of the most popular rooms, queues for the food trucks, mud, booths, many many developer rooms and talks to see. It was just like a reunion between old friends.

As the Profiler team is very distributed, just like the rest of Mozilla, it has also been great seeing each other again, experiencing this event together, and strengthening our relationships around some carbonade flamande, meatballs, waffles, and (edible) mushrooms.

The Firefox Profiler was very much represented there, with no less than 5 talks in 3 different rooms!

Here is a quick overview of these talks as well as links to the slides and videos.

Using the Firefox Profiler for web performance analysis, by Julien Wajsberg

The talk took place in the JavaScript room, at the very last slot on Sunday.

This was mostly an introduction talk about the Firefox Profiler. Julien talked about what a profiler is, described how to capture a profile, and showed how to navigate in the Firefox Profiler UI like a pro. He explained that measuring is always better than guessing in the performance world.

You can find the slides by following this link.

And here is the full video:

What’s new with the Firefox Profiler, by Nazım Can Altınova

This talk took place in the Mozilla room, on Saturday afternoon.

Nazım described all the new things that happened in the past year: new importers, power profiling, and other improvements to the UI.

You can look at the slides by following this link.

Power Profiling with the Firefox Profiler, by Florian Quèze

This talk took place in the Energy room, on Saturday afternoon.

Florian explained the process that led him to implement this feature directly in the Firefox Profiler. Then he went on to show how to use it, and even shared some examples and findings thanks to this new feature.

You can find the slides by following this link.

And here is the video:

Florian gave another talk related to the energy use in Firefox, mentioning the Firefox Profiler among other things. You can read more about this talk on its dedicated page.

Firefox Profiler beyond the web, by Johannes Bechberger

This talk happened in the Mozilla room, on Saturday afternoon.

In this talk, our contributor Johannes shared how he came to contribute to the Firefox Profiler for his own use case: implementing a full-featured profiler for Java programs, based on Java Flight Recorder (JFR), and integrating it fully in IntelliJ IDEA.

You can watch the video below:

Johannes also shared his work extensively in two posts on his blog where you can read more.

Other talks

The team also went to see some interesting talks that you might want to check out:


The Profiler team was very glad to be part of this edition. We felt lucky that we could share so much content with the public, and we’re sharing those here in the hope that they might be useful to more people.

As always, if you have any questions or feedback, please feel free to reach out to our team on the Firefox Profiler channel on Matrix; we’d be glad to get feedback and suggestions from you.

The Talospace ProjectFirefox 110 on POWER

Firefox 110 is out, with graphics performance improvements like GPU-accelerated 2D canvas and faster WebGL, and the usual under the hood updates. The record's still broken and bug 1775202 still is too, so you'll either need this patch — but this time without the line containing desktop_capture/desktop_capture_gn, since that's gone in the latest WebRTC update — or put --disable-webrtc in your .mozconfig if you don't need WebRTC at all. I also had to put #pragma GCC diagnostic ignored "-Wnonnull" into js/src/irregexp/imported/ for optimized builds to complete on this Fedora 37 system and I suspect this is a gcc bug; you may not need it if you're not using gcc 12.2.1 or build with clang. Finally, I trimmed yet another patch from the PGO-LTO diff, so use the new one for Firefox 110 and the .mozconfigs from Firefox 105.

Patrick ClokePython str Collection Gotchas

We have been slowly adding Python type hints [1] to Synapse and have made great progress (see some of our motivation). Through this process we have learned a lot about Python and type hints. One bit that was unexpected is that many of the abstract base classes representing groups of str instances also match an individual str instance. This has resulted in more than one real bug for us [2]: a function which has a parameter of type Collection[str] was called with a str, for example [3]:

from typing import Collection

def send(event: Event, destinations: Collection[str]) -> None:
    """Send an event to a set of destinations."""
    for destination in destinations:
        ...  # Do some HTTP.

def create_event(sender: str, content: str, room: Room) -> None:
    """Create & send an event."""
    event = Event(sender, content)
    send(event, "")

The correct version should call send with a list of destinations instead of a single one. The “s” at the end of “destinations” takes on quite a bit of importance! See the fix:

@@ -7,5 +7,5 @@
   def create_event(sender: str, content: str, room: Room) -> None:
       """Create & send an event."""
       event = Event(sender, content)
-      send(event, "")
+      send(event, [""])

A possible solution is to redefine the destinations parameter as a List[str], but this forces the caller to convert a set or tuple to a list (meaning iterating, allocating memory, etc.) or maybe to use a cast(...) (and thus lose some of the protections from type hints). As a team we have a desire to keep the type hints of function parameters as broad as possible.

Put another way, str is an instance of Collection[str], Container[str], Iterable[str], and Sequence[str]. [4] [5]
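You can verify this at runtime with the abstract base classes in collections.abc:

```python
from collections.abc import Collection, Container, Iterable, Sequence

s = "abc"
# A lone str satisfies every one of the "collection of str" ABCs...
assert isinstance(s, Collection)
assert isinstance(s, Container)
assert isinstance(s, Iterable)
assert isinstance(s, Sequence)
# ...because iterating a str yields its characters, themselves length-1 strs.
assert list(s) == ["a", "b", "c"]
```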

Since our type hints are only used internally we do not need to worry too much about accepting exotic types and eventually came up with StrCollection:

from typing import AbstractSet, List, Tuple, Union

# Collection[str] that does not include str itself; str being a Sequence[str]
# is very misleading and results in bugs.
StrCollection = Union[Tuple[str, ...], List[str], AbstractSet[str]]

This covers lists, tuples, sets, and frozen sets of str. The one case it does not seem to cover well is using a dictionary as an Iterable[str]; the easy workaround there is to call keys() on it to explicitly convert it to a KeysView, which inherits from Set.
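As an illustration (this send helper is a simplified stand-in for the example, not Synapse’s actual function):

```python
from typing import AbstractSet, List, Tuple, Union

StrCollection = Union[Tuple[str, ...], List[str], AbstractSet[str]]

def send(destinations: StrCollection) -> int:
    """Pretend to contact every destination; return how many we reached."""
    return len(list(destinations))

per_server_state = {"a.example": 1, "b.example": 2}
# A dict is Iterable[str] but not a StrCollection; calling .keys() yields a
# KeysView, which counts as an AbstractSet[str] and is therefore accepted.
reached = send(per_server_state.keys())
```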

[1]Looking at the commits to mypy.ini is probably the best way to see progress.
[2]matrix-org/synapse#14809 is our tracking issue for fixing this, although matrix-org/synapse#14880 shows a concrete bug fix which occurred.
[3]This is heavily simplified, but hopefully illustrates the bug!
[4]And Reversible[str], but I don’t think I have ever seen that used and I think it less likely to introduce a bug.
[5]bytes doesn’t have quite the same issue, but it might be surprising that bytes is an instance of Collection[int], Container[int], Iterable[int], and Sequence[int]. I find this less likely to introduce a bug.

Spidermonkey Development BlogJavaScript Import maps, Part 1: Introduction

We recently shipped import maps in Firefox 108 and this article is the first in a series that describes what they are and the problems they can solve. In this first article, we will go through the background and basics of import maps and follow up with a second article explaining more details of import maps.

Background: JavaScript Modules

If you don’t know JavaScript modules, you can read the MDN docs for JavaScript Modules first, and there are also some related articles on Mozilla Hacks like ES6 In Depth: Modules and ES modules: A cartoon deep-dive. If you are already familiar with them, then you are probably familiar with static import and dynamic import. As a quick refresher:

<!-- In a module script you can do a static import like so: -->
<script type="module">
import lodash from "/node_modules/lodash-es/lodash.js";
</script>

<!-- In a classic or a module script,
you can do a dynamic import like so: -->
<script>
import("/node_modules/lodash-es/lodash.js");
</script>
Notice that in both static import and dynamic import above, you need to provide a string literal with either the absolute path or the relative path of the module script, so the host can know where the module script is located.

This string literal is called the Module Specifier in the ECMAScript specification [1].

One subtle thing about the Module Specifier is that each host has its own module resolution algorithm to interpret the module specifier. For example, Node.js has its own Resolver Algorithm Specification, whereas browsers have their Resolve A Module Specifier Specification. The main difference between the two algorithms is the resolution of the bare specifier, which is a module specifier that is neither an absolute URL nor a relative URL. Before continuing to explain bare specifiers, we need to know some history first.

History: Modules between Node.js and ECMAScript

When Node.js v4 was released, it adopted an existing server-side JavaScript module convention called “CommonJS” as its module system, which had various ways to import a module. For example:

  • Using a relative path or an absolute path.
  • Using a core module name, like require("http")
  • Using file modules.
  • Using folders as modules.

Details can be found in Node.js v4.x modules documentation.

Later, when ECMAScript Modules were merged into the HTML specification, only relative URLs and absolute URLs were allowed. Bare specifiers were excluded at that time (see HTML PR 443) because CommonJS was originally designed for server side applications instead of web browsers and bare specifiers could cause some security concerns and would require a more complex design in other web standards.

After ECMAScript Modules became an official standard, Node.js wanted to ship support for them so they added an implementation in Node.js v12 modules. This implementation also borrowed from CommonJS including the concept of a bare specifier. See import specifier from Node.js documentation.

Resolving a bare specifier

The following code will import a built-in module 'lodash' in Node.js. However, it won’t work for browsers that don’t support import maps unless you use a transpiler like webpack or Babel.

// Import a bare specifier 'lodash'.
// Valid on Node.js, but for browsers that don't support Import maps,
// it will fail.
import lodash from 'lodash';

This is a pretty common issue for web developers: they want to use a JavaScript module in their website, but it turns out the module is a Node.js module so they now need to spend time to transpile it.

Import maps are designed to reduce the friction of resolving module specifiers between different JavaScript runtimes like Node.js and browsers. It not only saves us from using bundlers like webpack or Babel but also gives us the ergonomics of bare specifiers while ensuring that the security properties of URLs are preserved. This is what the proposal does at a fundamental level for most use cases.

Introduction to import maps

Let’s explain what import maps are and how you should use them in your web apps.

Module Specifier remapping

With import maps now supported in Firefox, you can do the following:

<!-- In a module script. -->
<script type="module">
import lodash from "lodash";
</script>

<!-- In a classic or module script. -->
<script>
import("lodash");
</script>
To make the resolution of lodash work in browsers, we need to provide the location of the module 'lodash'. This is where “Import maps” come into play.

To create an import map, you need to add a script tag whose type is “importmap” to your HTML document [2]. The body of the script tag is a JSON object that maps module specifiers to URLs.

<!-- In the HTML document -->
<script type="importmap">
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash.js"
  }
}
</script>

When the browser tries to resolve a Module Specifier, it will check if an import map exists in the HTML document and try to get the corresponding URL of the module specifier. If it doesn’t find one, it will try to resolve the specifier as a URL.

In our example with the "lodash" library, it will try to find the entry whose key is "lodash", and get the value "/node_modules/lodash-es/lodash.js" from the import map.

What about more complex use cases? For example, browsers cache files by their URL so your websites will load faster. But what if we update our module? Its URL has not changed, so browsers may keep serving the stale cached copy. In this case, we would have to do “cache busting”. That is, we rename the file we are loading: the name will be appended with the hash of the file’s content. In the above example, lodash.js could become lodash-1234abcd.js, where the "1234abcd" is the hash of the content of lodash.js.
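Computing such a name is simple. Here is a small illustrative sketch (the hashed_name helper and the 8-character digest length are choices made for this example, not something the import maps proposal prescribes):

```python
import hashlib
from pathlib import Path

def hashed_name(path: str) -> str:
    """Return a cache-busted filename, e.g. lodash.js -> lodash-1234abcd.js."""
    p = Path(path)
    # Hash the file's content so the name changes whenever the content does.
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}-{digest}{p.suffix}"
```

Build tools typically do this for every asset and emit the matching import map in the same pass.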

<!-- Static import -->
<script type="module">
import lodash from "/node_modules/lodash-es/lodash-1234abcd.js";
</script>

<!-- Dynamic import -->
<script>
import("/node_modules/lodash-es/lodash-1234abcd.js");
</script>
This is quite a pain to do by hand! Instead of modifying all the files that would import the cached module script, you could use import maps to keep track of the hashed module script so you only have to modify it once and can use the module specifier in multiple places without modification.

An import map example to map the module specifier to the actual cached file in the HTML document:

<script type="importmap">
{
  "imports": {
    "lodash": "/node_modules/lodash-es/lodash-1234abcd.js"
  }
}
</script>

Prefix remapping via a trailing slash ‘/’

Import maps also allow you to remap the prefix of the module specifier, provided that the entry in the import map ends with a trailing slash ‘/’.

<!-- In the HTML document. -->
<script type="importmap">
{
  "imports": {
    "app/": "/js/app/"
  }
}
</script>

<!-- In a module script. -->
<script type="module">
import foo from "app/foo.js";
</script>

In this example, there is no entry "app/foo.js" in the import map. However, there’s an entry "app/" (notice that it ends with a slash ‘/’), so the "app/foo.js" will be resolved to "/js/app/foo.js".
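The matching logic can be sketched roughly like this (a simplified model written for illustration, not the exact specification algorithm, which also normalizes specifiers to URLs):

```python
def resolve(specifier, import_map):
    """Roughly how a specifier is resolved against an import map."""
    imports = import_map.get("imports", {})
    # 1. An exact entry wins.
    if specifier in imports:
        return imports[specifier]
    # 2. Otherwise, the longest matching key ending with "/" remaps the prefix.
    prefixes = [key for key in imports
                if key.endswith("/") and specifier.startswith(key)]
    if prefixes:
        longest = max(prefixes, key=len)
        return imports[longest] + specifier[len(longest):]
    # 3. With no mapping, the specifier must itself be a resolvable URL.
    return specifier
```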

This feature is quite useful when the module contains several sub-modules, or when you’re about to test multiple versions of the external module. For example, the import map below contains two sub-modules: feature and app. And for the app sub-module, we choose version 4.0. If developers want to use another version of "app", they can simply change the URL in the "app/" entry.

<!-- In the HTML document -->
<script type="importmap">
{
  "imports": {
    "feature/": "/js/module/feature/",
    "app/": "/js/app@4.0/"
  }
}
</script>

Sub-folders need different versions of the external module.

Import maps provide another mapping called “scopes”. It allows you to use a specific mapping table according to the URL of the module script. For example,

<!-- In the HTML document. -->
<script type="importmap">
{
  "scopes": {
    "/foo/": {
      "app.mjs": "/js/app-1.mjs"
    },
    "/bar/": {
      "app.mjs": "/js/app-2.mjs"
    }
  }
}
</script>
In this example, the scopes map has two entries:

  1. "/foo/" → A Module specifier map which maps "app.mjs" to "/js/app-1.mjs".
  2. "/bar/" → A Module specifier map which maps "app.mjs" to "/js/app-2.mjs".

For the module scripts located in "/foo/", the "app.mjs" will be resolved to "/js/app-1.mjs", whereas for those located in "/bar/", "app.mjs" will be resolved to "/js/app-2.mjs".

// In /foo/foo.js
import app from "app.mjs"; // Will import "/js/app-1.mjs"
// In /bar/bar.js
import app from "app.mjs"; // Will import "/js/app-2.mjs"
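A rough sketch of this scope selection (again a simplified model for illustration, ignoring spec details such as URL normalization and fallback through outer scopes):

```python
def resolve_scoped(specifier, referrer_url, import_map):
    """Pick a mapping based on where the importing module lives."""
    for scope_prefix, mapping in import_map.get("scopes", {}).items():
        # A scope applies when the referrer's URL starts with the scope prefix.
        if referrer_url.startswith(scope_prefix) and specifier in mapping:
            return mapping[specifier]
    # Fall back to the top-level "imports" table, then to the raw specifier.
    return import_map.get("imports", {}).get(specifier, specifier)
```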

This covers the basics of import maps: their historical background, how to use them, and what problems they are trying to solve. In the following article we will explain more details of import maps, including the validation of the entries in the module specifier maps, the resolution precedence in import maps, and common problems when you use import maps.


  1. In Node.js, it’s called import specifier, but in ECMAScript, ImportSpecifier has a different meaning. 

  2. Currently, external import maps are not supported, so you can only specify the import map in an HTML document. 

Firefox UXPeople do use Add to Home Screen

An iPhone in hand with the thumb near the Add to Home Screen item in the share menu.

Last week Apple added a bunch of capabilities for web apps added to an iPhone or iPad home screen. This includes the ability for 3rd party browsers, like Firefox, to offer Add to Home Screen from the share menu and to open home screen bookmarks if it’s the default browser. I’d love to see us add this to our iOS app. It looks like a contributor did some investigation and this might be easy.

As I was reading about this news I saw that the commentary around it repeated an often-heard assumption that says, as Jeremy Keith puts it, it’s a “fact that adding a website to the home screen remains such a hidden feature that even power users would be forgiven for not knowing about it.” No one ever seems to cite a study that shows this. I always see this written as if it is indeed a statement of fact. But it just so happens that recently we were testing some prototypes on iOS (unrelated to web apps) and we needed participants to add them to their home screens. Of the ten people we talked to, four were familiar with this flow and had saved various things this way. When I mentioned this to others on the UX team, a few shared similar stories.

So four of ten people in a user test – what does that tell us? It tells us that it’s something that at least some regular people do, and that it’s not a hidden power-user feature. More than that, it’s a good reminder to check your assumptions.

Mozilla Performance BlogAnnouncing side-by-side videos for performance regressions

Early in the fall, I was talking about integrating the side-by-side tool in Continuous Integration (CI). This started as a script for generating a side-by-side comparison between two page load test runs, emphasizing the visual differences. Using this tool, you can see more clearly which metrics have regressed and how they might be experienced by a user. The first milestone was to have a Minimum Viable Product (MVP).

What is the side-by-side comparison?

The page load tests measure the performance of Firefox and competitor browsers and run in Taskcluster (our CI); the results are then visualized in Treeherder. Perfherder is our tool for catching performance regressions. When a regression bug is filed, the author of the regressing commit is needinfo-ed and asked for a fix. Sometimes the issue is not obvious and additional debugging information is needed, in which case the side-by-side comparison is very useful for previewing the impact the regression would have on the end-user.

Side-by-side integration

Last summer, we refactored the script so it can be used as a PyPI library in mozilla-central. Then we added it as a perftest-tools command, so we can run it with mach perftest-tools side-by-side. Then we integrated it into CI so we can generate comparisons on demand. In December last year, we launched automatic generation of side-by-side video comparisons on regressions.


We can use this tool in three ways:

  1. Locally via ./mach perftest-tools side-by-side.
  2. On CI, as an action task
    1. either by specifying the repo and revision to compare against
    2. or leaving the fields as they are and letting the tool search for the nearest revision the job instance ran on
  3. Automatically on every browsertime page load regression.

How to generate a side-by-side video locally

The local command will download the video results of the specified test and generate the comparisons for the cold and warm modes, such as MP4 videos, GIF animations, and GIF slow motion animations.

The command for generating the side-by-side comparison locally

The FFMPEG codec working

The expected output of the side-by-side comparison

We currently recommend running the tool on Linux, as this is the platform we used in development, and in our CI environment.

How to generate a side-by-side video in CI

If you want to generate a comparison against two revisions for a particular test that ran on autoland, you don’t have to push to try. You need to go to the Treeherder job view, select a browsertime tp6 page load test, and either trigger the comparison from the action task or run it locally as explained in the previous section – whichever is more convenient for you. The results of the task triggered on CI will be available as artifacts in the side-by-side job.

Details Panel of Amazon page load job

Custom Action menu of the Amazon page load job

side-by-side Action Task

The comparison is triggered from the revision that caused the regression (called the “after” revision) because the regression bugs are filed for the commits that caused it, so the side-by-side comparison will show the “before” test on the left and the “after” test on the right.

Currently, there are project and revision parameters available for running the comparison. The first one is usually left as it is, because most comparisons are made on revisions that triggered performance alerts on autoland, and our recommendation is not to compare tests across two different projects/repositories unless you know what you’re doing. If you want to compare two tests from try, you’ll have to copy the before revision, go to the after revision on try, select the page load test you want to compare, go to Custom Action… from the Details Panel menu, and paste the before revision into the revision field, typing try instead of autoland.

side-by-side Action Task on try

Automatic side-by-side videos for every regression

Automatic Backfilling Report

Ebay page load test and associated side-by-side comparison job


This is one of the most impactful features because it improves the developer experience. Every time Perfherder detects a performance regression, an alert is created and the perf sheriff bot does the backfills automatically. A backfill triggers the missing jobs for the regressed test so the performance sheriff can identify the regressor. After this happens, the side-by-side comparison is triggered with the backfills, and by the time the regression bug gets to the author of the regressing commit, the comparison is ready to be visualized. What makes the side-by-side comparison valuable is that it shows the visual impact of the regression. Sometimes the numbers alone don’t give a feel for the impact of the performance changes, but visualizing the generated comparison helps determine how the end-user perceives it.

What’s next?

As I was saying, this is an MVP of the side-by-side tool that mostly covers the backend work. The future plans are to improve the Treeherder Job View so that the comparison can be triggered more easily, to embed the videos in various places, and to include more details about the comparison job in the Details panel; development will continue according to the needs of making the developer experience as smooth as possible.

For more details or questions, you can find us in the #perftools channel on Element.

This Week In RustThis Week in Rust 483

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is Darkbird, a high-concurrency real-time in-memory database.

Thanks to DanyalMh for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

396 pull requests were merged in the last week

Rust Compiler Performance Triage

Overall a fairly positive week, with few noise-related regressions or improvements and many benchmarks showing significant improvements. The one large regression is limited to documentation builds and has at least a partial fix already planned.

Other wins this week include an average improvement of around 1% in maximum memory usage of optimized builds, and a 2% average reduction in compiled binary sizes. These are fairly significant wins for these metrics.

Triage done by @simulacrum. Revision range: 9bb6e60..3fee48c1

3 Regressions, 3 Improvements, 3 Mixed; 2 of them in rollups

45 artifact comparisons made in total

Full report

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-02-22 - 2023-03-22 🦀

North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

It’s enjoyable to write Rust, which is maybe kind of weird to say, but it’s just the language is fantastic. It’s fun. You feel like a magician, and that never happens in other languages.

Parker Timmerman cited in a TechnologyReview article

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Wladimir PalantSouth Korea’s banking security: Intermediate conclusions

Note: This article is also available in Korean.

A while back I wrote my first overview of South Korea’s unusual approach to online security. After that I published two articles on specific applications. While I’m not done yet, this is enough information to draw some intermediate conclusions.

The most important question is: all the security issues and bad practices aside, does this approach to banking security make sense? Do these applications have the potential to make people more secure when rolled out mandatorily nation-wide?

Message stating: [IP Logger] program needs to be installed to ensure safe use of the service. Do you want to move to the installation page?

TL;DR: I think that the question above can be answered with a clear “no.” The approaches make little sense given actual attack scenarios, they tend to produce security theater rather than actual security. And while security theater can sometimes be useful, the issues in question have proper solutions.

Endpoint protection

The probably least controversial point here is: users’ devices need protection, ideally preventing them from being compromised. So when a user accesses their bank, their computer should really be theirs, with nobody secretly watching over their shoulder. Over time, two types of applications emerged with the promise to deliver that: antivirus and firewall.

But Microsoft has you covered there. Starting with Windows 7, there is a very effective built-in firewall (Windows Firewall) and a decent built-in antivirus (Windows Defender). So you are protected out of the box, and installing a third-party antivirus application will not necessarily make you safer. In fact, these antivirus applications way too often end up weakening the protection.

Of course, I have no idea how good AhnLab’s antivirus is. Maybe it is really good, way better than Windows Defender. Does it mean that it makes sense for South Korean banking websites to force installation of AhnLab Safe Transaction?

Well, most of the time AhnLab Safe Transaction sits idly in the background. It only activates when you are on a banking website. In other words: it will not prevent your computer from being compromised, as a malware infection doesn’t usually happen on a banking website. It will merely attempt to save the day when it is already too late.

Keyboard protection

And speaking of “too late,” I see a number of “security” applications in South Korea attempting to protect keyboard input. The idea here is: yes, the computer is already compromised. But we’ll encrypt keyboard input between the keyboard and the website, so that the malicious application cannot see it.

I took a closer look at TouchEn nxKey, which is one such solution. The conclusion here was:

So whatever protection nxKey might provide, it relies on attackers who are unaware of nxKey and its functionality. Generic attacks may be thwarted, but it is unlikely to be effective against any attacks targeting specifically South Korean banks or government organizations.

And this isn’t because they did such a bad job (even though they did). As a general rule, you cannot add software to magically fix a compromised environment. Whatever you do, the malicious software active in this environment can always implement countermeasures.

We’ve already seen this two decades ago when banking trojans became a thing and would steal passwords. Some websites decided to use on-screen keyboards, so that the password would not be entered via a physical keyboard.

The banking trojans adapted quickly: in addition to merely recording the keys pressed, they started recording mouse clicks along with a screenshot of the area around the mouse cursor. And at that point on-screen keyboards became essentially useless. Yet they are still common in South Korea.

Just to state this again: once a computer is compromised, it cannot be helped. The only solution is multi-factor authentication. In banking context this means that the transaction details always need to be confirmed on a separate and hopefully unaffected device.

IP address detection

Two decades ago I was a moderator of an online chat. Most chat visitors would behave, but some were trolls only looking to annoy other people. I would ban the trolls’ IP address, but they would soon come back with a different IP address.

Twenty years later I see South Korean banks still struggling with the same inadequate protection measures. Rather than finding new ways, they continue fighting anonymous proxies and VPNs. As a result, they demand that customers install IPinside, a privacy-invasive de-anonymization tool.

Quite frankly, I’m not even certain which exact threat this addresses, assuming it addresses a threat at all rather than merely serving as an excuse to collect more data about their customers.

Banks generally don’t care about IP addresses when limiting login attempts. After three unsuccessful login attempts the account is locked; this common practice ensures that guessing login credentials is impracticable.

Is the goal maybe recognizing someone using stolen login credentials? But that’s also something best addressed by multi-factor authentication. Banking trojans learned to avoid such geo-blocking a long time ago: they simply use the victim’s compromised computer both to exfiltrate login credentials and to apply them for a malicious transaction. As far as the bank can see, the transaction comes from the computer belonging to the legitimate owner of the account.

Or is the goal actually preventing attacks against vulnerabilities of the banking website itself, allowing the bank to recognize the source of an attack and block it? But accessing banking websites prior to login doesn’t require IPinside, so it has no effect there. And once the malicious actors are logged in, the bank can recognize and lock the compromised account.

Certificate-based logins

One peculiarity of the South Korean market is the prevalence of certificate-based logins, something that was apparently mandated for online banking at one point but no longer is. There are still applications to manage these certificates and to transfer them between devices.

Now certificate-based logins are something that browsers supported out of the box for a long time (“client-authenticated TLS handshake”). Yet I’ve seen very few websites actually use this feature and none in the past five years. The reason is obvious: this is a usability nightmare.

While regular people understand passwords pretty well, certificates are too complicated. The necessity to back up certificates and to move them to all devices used makes them particularly error-prone.

At the same time they don’t provide additional value in the banking context. While certificates are theoretically much harder to guess than passwords, this has no practical relevance if an account is locked after three guessing attempts. And using certificates in addition to passwords doesn’t work as proper two-factor authentication: there is no independent device to show the transaction details on, so one cannot know what is being confirmed with the certificate.

However, if one really wanted to secure important accounts with a hardware token instead of a password, browsers have supported the WebAuthn protocol for a while now. No third-party applications are required for that.

Edit (2023-02-20): I forgot one scenario here. What if a user is lured to a malicious look-alike of their legitimate banking website? With password-based logins, this website will have stolen the user’s login credentials. Certificates on the other hand cannot be stolen this way.

Yet the malicious website could use the user’s login attempt to log into the legitimate banking website in the background. The browser’s built-in TLS handshake mechanism effectively prevents such attacks, but from what I’ve seen the South Korean custom applications don’t. Whether certificates still offer some value then depends on how hard it will be to trick the user into signing a malicious banking transfer instead of their intended one. I don’t know that yet.

Software distribution

Even without any security issues, the mere number of applications users are supposed to install is causing considerable issues. One application required by every bank in the country? Well, probably manageable. Ten applications which you might need depending on the website, and where you have to keep the right version in mind? Impossible for regular users to navigate.

Add to this that software vendors completely delegated the software distribution to the banks, who have no experience with distributing software securely. So when security software is being downloaded from banking websites, it’s often years behind the latest version and I’ve also seen plain HTTP (unencrypted) downloads. Never mind abandoned download pages still distributing ancient software.

This is already playing out badly with my disclosures. While the software vendors still have to develop fixes for the security issues I reported, they have no proper way of distributing updates once done. They will need to ask each of the banks using the software, and quite a few are bound to delay this even further because their website cannot work with the latest software version. And even then, users will still need to install the update manually.

Now if all these applications were actually necessary, one option to deal with this would be adding efficient auto-update functionality, similar to the one implemented in web browsers. No matter how old the version installed by the user, it would soon contact the vendor’s (secure) update server and install the very latest version. And banks would need to implement processes allowing them to stay compatible with this latest version, staying with outdated and potentially vulnerable software would not be an option.

Of course, that’s not the solution South Korea went with. Instead they got Veraport: an application meant to automate management of multiple security applications. And it is still the banks determining what needs to be installed and when it should be updated. Needless to say that it didn’t really make this mess any smaller, but it did get abused by North Korean hackers.

Mike Hoye: Modern Problems Require Modern Solutions


Over on Mastodon I asked: “What modern utilities should be a standard part of a modern unixy distro? Why? I’ve got jq, pandoc, tldr and a few others on my list, but I’d love to know others.”

Here’s what came back; I’ve roughly grouped them into two categories: new utilities and improvements on the classics.

In no particular order, the new kids on the block:

  • htop, “a cross-platform interactive process viewer”. An htop-like utility called bottom also got some votes.
  • As an aside about htop: one commenter noted that they run htop on a non-interactive TTY, something like Ctrl-Alt-F11; so do I, and it’s great, but you must not do this on security-critical systems. You can kill processes through htop, which gives you a choice of signals to issue, and on most machines running systemd, init responds to SIGRTMIN+1 by dropping back into rescue mode, and that’s a backstage pass to a root shell. I have used this to recover a personal device from an interrupted upgrade that broke PAM. You must never do this on a machine that matters.

  • tmux, a terminal multiplexer. Some people mentioned screen, the classic tool in this space, but noted that it’s getting pretty long in the tooth and tmux is a pure improvement.
  • HTTPie, a CURL-adjacentish command-line HTTP client for testing and debugging web APIs.
  • glow, a markdown-on-the-command-line tool that looks great. Lowdown is also interesting.
  • fzf, a command-line “fuzzy finder” that a few people suggested.
  • tldr – simplified man pages with practical examples. The world has needed this for a long time.
  • Datamash: Gnu, I know, but an interesting command-line-math tool.
  • zsh + OhMyZsh + Alacritty: this trifecta came up a lot and it looks pretty amazing.
  • VisiData: a tabular data visualization multitool.
  • jq and jid are both fantastic tools for inspecting and manipulating JSON.
  • Tree: shows you the tree structure of directories, a bit like microdosing on Midnight Commander from back in the day.
  • Gron, a tool for making JSON greppable.
  • ncdu, friend of htop and a nice disk usage display for the terminal.
  • duc, also a nice drive-use visualizer.
  • rclone, a cloud-storage data-moving multitool.
  • csvkit: if you spend a lot of time working with comma-separated values, accept no substitutes.
  • matplotlib: the upgrade over gnuplot you’ve been waiting for.
  • xidel: this looks like jq-for-html, and I’m intrigued.
  • The moreutils collection.
  • nushell: A structured-data pipeline-building shell. This looks amazing.

Improvements on “classic” tools and utilities:

  • duf, a better df.
  • ripgrep, a line-oriented search tool that recursively searches the current directory for a regex pattern; described as a better grep.
  • sd, a better sed.
  • fd, a better find.
  • atool, a set of scripts that wrap common compressed-file-format handlers.
  • bat, a “better cat”.
  • lsd and exa, both new takes on the venerable ls.
  • There’s also zoxide: an interesting update to, of all things, cd!
  • Not really a new thing but a quality of life improvement: the “ducks” alias.
  • ag, the “silver searcher”. “Fast ack”.

So, there you go. Life in the terminal is still improving here in 2023, it’s great to see.

Update, 22/Feb/2023:

  • ijq, an “interactive jq”.
  • Broot: better navigation of directory trees.
  • dust: “du on steroids”.
  • dyff: diff for yaml.
  • miller, a CSV multitool.
  • LazyDocker and LazyGit, CLI improvements for Docker and Git respectively.
  • procs: a replacement for ps written in Rust.
  • mcfly: replaces the usual ctrl-r shell-history search handler with a more powerful tool, super cool.

SpiderMonkey Development Blog: SpiderMonkey Newsletter (Firefox 110-111)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 110 and 111 Nightly release cycles.

🛠️ RISC-V backend

SpiderMonkey now has a JIT/Wasm backend for the 64-bit RISC-V architecture! This port was contributed by PLCT Lab and they’ll also be maintaining it going forward. Adding a backend for a new platform is a lot of work so we’re grateful to them for making SpiderMonkey run well on this exciting new architecture.

🚀 Performance

We’re working on improving performance for popular web frameworks such as React. We can’t list all of these improvements here, but the list below covers some of this work.

  • We added an optimization for property accesses inside a for-in loop. This lets us avoid slower megamorphic property lookups in React and other frameworks.
  • We optimized megamorphic property gets/sets more.
  • We optimized atomization more to avoid flattening ropes in certain cases.
  • We landed more improvements for our GC’s parallel marking implementation. We’re currently performing some experiments to evaluate its performance.
  • We fixed some performance issues with the ARM64 fast paths for truncating doubles.
  • We added some fast paths for objects/arrays to structured clone reading.
  • We added support for optimizing more relational comparison types with CacheIR.

⚙️ Modernizing JS modules

We’re working on improving our implementation of modules. This includes supporting modules in Workers, adding support for Import Maps, and ESMification (replacing the JSM module system for Firefox internal JS code with standard ECMAScript modules).

  • As of this week, there are more ESM modules than JSM modules 🎉. See the AreWeESMifiedYet website for the status of ESMification.
  • We’ve landed some large changes to the DOM Worker code to add support for modules. We’re now working on extending this support to shared workers and enabling it by default.
  • We continue to improve our modules implementation to be more efficient and easier to work with.

⚡ Wasm GC

High-level programming languages currently need to bring their own GC if they want to run on WebAssembly. This can result in memory leaks because it cannot collect cycles that form with the browser. The Wasm GC proposal adds struct and array types to Wasm so these languages can use the browser’s GC instead.

  • We landed enough changes to be able to run a Dart Wasm GC benchmark. We then profiled this benchmark and fixed various performance problems.
  • We optimized struct and array allocation.
  • We implemented a constant-time algorithm for downcasting types.
  • We used signal handlers to remove more null pointer checks.

💾 Robust Caching

We’re working on better (in-memory) caching of JS scripts based on the new Stencil format. This will let us integrate better with other resource caches used in Gecko and might also allow us to potentially cache JIT-related hints in the future.

The team is currently working on removing the dependency on JSContext for off-thread parsing. This will make it easier to integrate with browser background threads and will let us further simplify and optimize the JS engine.

  • A lot of changes landed the past weeks to stop using JSContext in the bytecode emitter, the parser, and many other data structures.

📚 Miscellaneous

  • We improved our perf jitdump support by annotating more trampoline code.
  • We added profiler markers for discarding JIT code.
  • We fixed some devtools problems with eager evaluation of getters and async functions.

Mozilla Thunderbird: Thunderbird 115 Supernova Preview: The New Folder Pane

In our last blog post, we announced that we’re rebuilding the Thunderbird UI from scratch, with the first results coming to Thunderbird 115 “Supernova” this July. We also explained why it’s necessary to begin removing some technical and interface debt, and to modernize things in order to sustain the project for decades to come. That post may have caused you to worry that Thunderbird 115’s interface would be radically different, would ship with fewer customization options, and that you’d have to relearn how to use the application.

Nothing could be further from the truth! In this post — and in future Supernova previews — we want to put all those worries to rest by showing you how Thunderbird 115 will be intuitive and welcoming to new users, while remaining familiar and comfortable for veteran users.

<figcaption class="wp-element-caption">Product Design Manager Alex Castellani takes you on a guided tour of the new Thunderbird folder pane.</figcaption>

Today we’re going to take a look at the new Thunderbird folder pane. That’s the section on the left of the application that displays all of your mail accounts, feed accounts, chat accounts, and local folders.

Folder Pane: Thunderbird 102 vs Thunderbird 115

Here is what the folder pane looks like right now, in Thunderbird 102:

The Thunderbird folder pane in version 102, showing local folders, mail accounts, and subfolders. <figcaption class="wp-element-caption">Thunderbird 102 Folder Pane</figcaption>

Now, let’s see the new folder pane that’s coming in Thunderbird 115. Don’t worry, we’ll explain the new design and the new buttons further down.

<figcaption class="wp-element-caption">Thunderbird 115 Folder Pane — with Unified Folder Mode and relaxed density</figcaption>

See how roomy and breathable that is? See all the white space that helps prevent cognitive overload? This will feel familiar to users who’ve only used webmail in the past.

Wait, wait! Before you get angry and close your browser tab, let’s take an additional look at the Thunderbird 115 folder pane, right next to the existing Thunderbird 102 folder pane:

<figcaption class="wp-element-caption">Thunderbird 115 Folder Pane with Unified Folder Mode disabled and default density</figcaption>

Hmm, that looks identical to the current folder pane! What’s going on here? The above iteration of the Thunderbird 115 folder pane simply has Unified Folder mode turned off, and the density set to default instead of relaxed.

It’s exactly what you’re already used to!

Different People, Different Needs

We understand that many of you love the traditional, compact Thunderbird UI that presents much more information at a glance. We also know that many of our users dislike all that information being so cramped and squished together.

So, who’s right? Everyone is right! One of the benefits of rebuilding the Thunderbird interface from scratch is that we can better tailor the application to satisfy different people with different needs.

New Feature: The Folder Pane Header

Some users rely on the toolbar, shown just below, for their action buttons. That area near the top of Thunderbird has always been the default location for the main actions in your current tab.

<figcaption class="wp-element-caption">The toolbar in Thunderbird 102</figcaption>

But others prefer to completely remove all buttons from the toolbar, and rely exclusively on the menu bar to access options and features. A different set of users might completely hide both the menu bar and toolbar and interact exclusively with shortcuts.

These situations are just a few examples of how different users like to change the interface to feel more productive. That’s why we’re planning to offer more easily discoverable contextual options for specific areas.

That’s where the new Folder Pane Header enters the picture:

Using a primary button to highlight the most important action in the current context (like writing a message) is a common UX paradigm that helps new users focus on simple, common actions.

Adding that button in the new folder pane makes it easily accessible for users that rely on assistive technologies, or who navigate exclusively with a keyboard.

In the same area, we added a button to fetch messages from the server. Just in case users want to force the syncing process.

On the right, an accessible “meatball” menu button will allow users to:

  • Change the layout of the pane
  • Switch folder modes
  • Show or hide local folders and tags
  • …and many more options that normally would be hidden inside some submenu of the menu bar.

What if you don’t care about these new buttons and don’t want them? Are they just a waste of space in your workflow? You can simply hide the entire area with one click, and that preference will be remembered forever in your profile.

New Feature: Tags and Local Folder Options

Younger users have become used to using simpler interfaces. They’ve never used a “Local Folder” and probably don’t even know what that is. So, we’re offering a simple option to turn the Local Folders display on or off.

You might be familiar with Tags, which are basically labels that filter your email. Tags behave a lot like virtual folders: if you select a Tag, you end up with the subset of messages that have that tag, presented like a folder with the same name as the tag.

We understand that some users might prefer having the Tags button in the toolbar, or not using tags at all. Meanwhile, others might rely heavily on tags. That’s why we’re adding the option to show them in the new folder pane.

As you’d expect, you’ll be able to re-order all these sections to suit your own preferences and workflows. And if nothing in this entire post appeals to you, rest assured that all of it is completely optional in Thunderbird 115!

For users who don’t consider the current Thunderbird interface comfortable, we’re confident these new features will make you feel right at home — with the added benefits Thunderbird brings compared to traditional webmail: privacy, customization, no ads, and absolutely no selling of your data.

Thunderbird 115 “Supernova” launches this July. A beta will be available for you to try by mid-April.

The post Thunderbird 115 Supernova Preview: The New Folder Pane appeared first on The Thunderbird Blog.

Niko Matsakis: Return type notation (send bounds, part 2)

In the previous post, I introduced the “send bound” problem, which refers to the need to add a Send bound to the future returned by an async function. I want to start talking about some of the ideas that have been floating around for how to solve this problem. I consider this a bit of an open problem, in that I think we know a lot of the ingredients, but there is a bit of a “delicate balance” to finding the right syntax and so forth. To start with, though, I want to introduce Return Type Notation, which is an idea that Tyler Mandry and I came up with for referring to the type returned by a trait method.

Recap of the problem

If we have a trait HealthCheck that has an async function check

trait HealthCheck {
    async fn check(&mut self, server: Server);
}

…and then a function that is going to call that method check but in a parallel task…

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,

…we don’t currently have a way to say that the future returned by calling H::check() is Send. The where clause H: HealthCheck + Send says that the type H must be Send, but it says nothing about the future that gets returned from calling check.

Core idea: A way to name “the type returned by a function”

The core idea of return-type notation is to let you write where-clauses that apply to <H as HealthCheck>::check(..), which means “any return type you can get by calling check as defined in the impl of HealthCheck for H”. This notation is meant to be reminiscent of the fully qualified notation for associated types, e.g. <T as Iterator>::Item. Just as we usually abbreviate associated types to T::Item, you would also typically abbreviate return type notation to H::check(..). The trait name is only needed when there is ambiguity.

Here is an example of how start_health_check would look using this notation:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
    H::check(..): Send, // <— return type notation

Here the where clause H::check(..): Send means “the type(s) returned when you call H::check must be Send”. Since async functions return a future, this means that future must implement Send.

More compact notation

Although it has not yet been stabilized, RFC #2289 proposed a shorthand way to write bounds on associated types; something like T: Iterator<Item: Send> means “T implements Iterator and its associated type Item implements Send”. We can apply that same sugar to return-type notations:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck<check(..): Send> + Send + 'static,
    //             ^^^^^^^^^^^^^^^

This is more concise, though also clearly kind of repetitive. (When I read it, I think “how many dang times do I have to write Send?” But for now we’re just trying to explore the idea, not evaluate its downsides, so let’s hold off on that thought.)
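The associated-type half of this sugar can already be tried: associated type bounds like T: Iterator<Item: Debug> are now available on stable Rust (since 1.79). A runnable sketch of our own (the function name and test data are invented for illustration):

```rust
use std::fmt::Debug;

// `Item: Debug` nested directly inside the `Iterator` bound is the
// RFC #2289 sugar for `T: Iterator, T::Item: Debug`.
fn debug_all<T: Iterator<Item: Debug>>(iter: T) -> Vec<String> {
    iter.map(|x| format!("{x:?}")).collect()
}

fn main() {
    assert_eq!(debug_all([1, 2].into_iter()), vec!["1", "2"]);
}
```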

Futures capture their arguments

Note that the where clause we wrote was

H::check(..): Send

and not

H::check(..): Send + 'static

Moreover, if we were to add a 'static bound, the program would not compile. Why is that? The reason is that async functions in Rust desugar to returning a future that captures all of the function’s arguments:

trait HealthCheck {
    // async fn check(&mut self, server: Server);
    fn check<'s>(&'s mut self, server: Server) -> impl Future<Output = ()> + 's;
    //           ^^^^^^^^^^^^                                                ^^
    //           The future captures `self`, so it requires the lifetime bound `'s`
}

Because the future being returned captures self, and self has type &'s mut Self, the future returned must capture 's. Therefore, it is not 'static, and so the where-clause H::check(..): Send + 'static doesn’t hold for all possible calls to check, since you are not required to give an argument of type &'static mut Self.
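The capture behavior is easy to observe with inherent async methods on stable Rust. In this sketch of ours (Checker and Server are invented names), the by-reference method returns a future that borrows self and therefore cannot be 'static, while a by-value variant can; and since futures are lazy, a dropped, never-polled future leaves the counter untouched:

```rust
struct Server;

struct Checker {
    count: u32,
}

impl Checker {
    // Borrows `self`: the returned future captures `&mut self`,
    // so it is only valid for that borrow's lifetime, never `'static`.
    async fn check(&mut self, _server: Server) {
        self.count += 1;
    }

    // Takes `self` by value: the future owns all of its captures,
    // so it satisfies a `'static` bound.
    async fn check_owned(mut self, _server: Server) -> u32 {
        self.count += 1;
        self.count
    }
}

// Compile-time witness that a value satisfies a `'static` bound.
fn assert_static<T: 'static>(t: T) -> T {
    t
}

fn demo() -> u32 {
    let mut c = Checker { count: 0 };

    let fut = c.check(Server);
    // assert_static(fut); // ERROR: `fut` borrows `c`, so it is not `'static`
    drop(fut);
    let count_after_drop = c.count; // still 0: the body never ran

    let fut2 = assert_static(c.check_owned(Server)); // fine: no borrows captured
    drop(fut2);

    count_after_drop
}

fn main() {
    assert_eq!(demo(), 0);
}
```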

RTN with specific parameter types

Most of the time, you would use RTN to bound all possible return values from the function. But sometimes you might want to be more specific, and talk just about the return value for some specific argument types. As a silly example, we could have a function like

fn call_check_with_static<H>(h: &'static mut H)
where
    H: HealthCheck + 'static,
    H::check(&'static mut H, Server): 'static,

This function has a generic parameter H that is 'static and it gets a &'static mut H as argument. The where clause H::check(&'static mut H, Server): 'static then says: if I call check with the argument &'static mut H, it will return a 'static future. In contrast to the previous section, where we were talking about any possible return value from check, this where-clause is true and valid.

Desugaring RTN to associated types

To understand what RTN does, it’s best to think of the desugaring from async functions to associated types. This desugaring is exactly how Rust works internally, but we are not proposing to expose it to users directly, for reasons I’ll elaborate in a bit.

We saw earlier how an async fn desugars to a function that returns impl Future. Well, in a trait, returning impl Future can itself be desugared to a trait with a (generic) associated type:

trait HealthCheck {
    // async fn check(&mut self, server: Server);
    type Check<'s>: Future<Output = ()> + 's;
    fn check<'s>(&'s mut self, server: Server) -> Self::Check<'s>;
}

When we write a where-clause like H::check(..): Send, that is then effectively a bound on this hidden associated type Check:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
    for<'a> H::Check<'a>: Send, // <— equivalent to `H::check(..): Send`
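This desugaring can be hand-written on stable Rust with a generic associated type and a boxed future. The sketch below is our own (all names invented); note that real GATs additionally require a where Self: 's bound that the illustration above omits, and that we check the Send property at a single call site rather than with the higher-ranked for<'a> form:

```rust
use std::future::Future;
use std::pin::Pin;

struct Server;

// Hand-written version of the hidden associated type described above.
trait HealthCheck {
    type Check<'s>: Future<Output = ()> + 's
    where
        Self: 's;
    fn check<'s>(&'s mut self, server: Server) -> Self::Check<'s>;
}

struct Checker {
    count: u32,
}

impl HealthCheck for Checker {
    // Boxing makes the future type nameable; `+ Send` is this impl's
    // choice, which is exactly what a `Send` bound on `Check` would test.
    type Check<'s>
        = Pin<Box<dyn Future<Output = ()> + Send + 's>>
    where
        Self: 's;

    fn check<'s>(&'s mut self, _server: Server) -> Self::Check<'s> {
        Box::pin(async move {
            self.count += 1;
        })
    }
}

// Compile-time witness that a value is `Send`.
fn require_send<T: Send>(t: T) -> T {
    t
}

fn demo() -> u32 {
    let mut c = Checker { count: 0 };
    let fut = require_send(c.check(Server)); // `Check<'_>: Send` holds here
    drop(fut); // never polled, so the body never runs
    c.count
}

fn main() {
    assert_eq!(demo(), 0);
}
```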

Generic methods

It is also possible to have generic async functions in traits. Imagine that instead of HealthCheck taking a specific Server type, we wanted to accept any type that implements the trait ServerTrait:

trait HealthCheckGeneric {
    async fn check_gen<S: ServerTrait>(&mut self, server: S);
}

We can still think of this trait as desugaring to a trait with an associated type:

trait HealthCheckGeneric {
    // async fn check_gen<S>(&mut self, server: S) where S: ServerTrait,
    type CheckGen<'s, S: ServerTrait>: Future<Output = ()> + 's;
    fn check_gen<'s, S: ServerTrait>(&'s mut self, server: S) -> Self::CheckGen<'s, S>;
}

But if we want to write a where-clause like H::check_gen(..): Send, this would require us to support higher-ranked trait bounds over types and not just lifetimes:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheckGeneric + Send + 'static,
    for<'a, S> H::CheckGen<'a, S>: Send, // <—
    //      ^ for all types S…

As it happens, this sort of where-clause is something the types team is working on in our new solver design. I’m going to skip over the details, as it’s kind of orthogonal to the topic of how to write Send bounds.

One final note: just as you can specify a particular value for the argument types, you should be able to use turbofish to specify the value for generic parameters. So something like H::check_gen::<MyServer>(..): Send would mean “whenever you call check_gen on H with S = MyServer, the return type is Send”.

Using RTN outside of where-clauses

So far, all the examples I’ve shown you for RTN involved a where-clause. That is the most important context, but it should be possible to write RTN types any place you write a type. For the most part, this is just fine, but using the .. notation outside of a where-clause introduces some additional complications. Think of H::check — the precise type that is returned will depend on the lifetime of the first argument. So we could have one type H::check(&'a mut H, Server) and the return value would reference the lifetime 'a, but we could also have H::check(&'b mut H, Server), and the return value would reference the lifetime 'b. The .. notation really names a range of types. For the time being, I think we would simply say that .. is not allowed outside of a where-clause, but there are ways that you could make it make sense (e.g., it might be valid only when the return type doesn’t depend on the types of the parameters).

“Frequently asked questions”

That sums up our tour of the “return-type-notation” idea. In short:

  • You can write bounds like <T as Trait>::method(..): Send in a where-clause to mean “the method method from the impl of Trait for T returns a value that is Send, no matter what parameters I give it”.
  • Like an associated type, this would more commonly be written T::method(..), with the trait automatically determined.
  • You could also specify precise types for the parameters and/or generic types, like T::method(U, V).

Let’s dive into some of the common questions about this idea.

Why not just expose the desugared associated type directly?

Earlier I explained how H::check(..) would work by desugaring it to an associated type. So, why not just have users talk about that associated type directly, instead of adding a new notation for “the type returned by check”? The main reason is that it would require us to expose details about this desugaring that we don’t necessarily want to expose.

The most obvious detail is “what is the name of the associated type” — I think the only clear choice is to have it have the same name as the method itself, which is slightly backwards incompatible (since one can have a trait with an associated type and a method that has the same name), but easy enough to do over an edition.

We would also have to expose what generic parameters this associated type has. This is not always so simple. For example, consider this trait:

trait Dump {
    async fn dump(&mut self, data: &impl Debug);
}

If we want to desugar this to an associated type, what generics should that type have?

trait Dump {
    type Dump<...>: Future<Output = ()> + ...;
    //        ^^^ how many generics go here?
    fn dump(&mut self, data: &impl Debug) -> Self::Dump<...>;
}

This function has two sources of “implicit” generic parameters: elided lifetimes and the impl Trait argument. One desugaring would be:

trait Dump {
    type Dump<'a, 'b, D: Debug>: Future<Output = ()> + 'a + 'b;
    fn dump<'a, 'b, D: Debug>(&'a mut self, data: &'b D) -> Self::Dump<'a, 'b, D>;
}

But, in this case, we could also have a simpler desugaring that uses just one lifetime parameter (this isn’t always the case):

trait Dump {
    type Dump<'a, D: Debug>: Future<Output = ()> + 'a;
    fn dump<'a, D: Debug>(&'a mut self, data: &'a D) -> Self::Dump<'a, D>;
}

Regardless of how we expose the lifetimes, the impl Trait argument also raises interesting questions. In ordinary functions, the lang-team generally favors not including impl Trait arguments in the list of generics (i.e., they can’t be specified by turbofish, their values are inferred from the argument types), although we’ve not reached a final decision there. That seems inconsistent with exposing the type parameter D.

All in all, the appeal of the RTN is that it skips over these questions, leaving the compiler room to desugar in any of the various equivalent ways. It also means users don’t have to understand the desugaring, and can just think about the “return value of check”.

Should H::check(..): Send mean that the future is Send, or the result of the future?

Some folks have pointed out that H::check(..): Send seems like it refers to the value you get from awaiting check, and not the future itself. This is particularly true since our async function notation doesn’t write the future explicitly, unlike (say) C# or TypeScript (in those languages, an async fn must return a task or promise type). This is true, and it will likely be a source of confusion — but it’s also consistent with how async functions work. For example:

trait Get {
    async fn get(&mut self) -> u32;
}

async fn bar<G: Get>(g: &mut G) {
    let f = g.get(); // f: impl Future<Output = u32>, not a u32
}

In this code, even though g.get() is declared to return u32, f is a future, not an integer. Writing G::get(..): Send thus talks about the future, not the integer.
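With async fn in traits on stable Rust (1.75+), this distinction is directly observable. A small sketch of our own mirroring the Get example (Counter is an invented implementor):

```rust
use std::future::Future;

trait Get {
    async fn get(&mut self) -> u32;
}

struct Counter(u32);

impl Get for Counter {
    async fn get(&mut self) -> u32 {
        self.0 += 1;
        self.0
    }
}

// Type-level witness: only accepts a future yielding u32, not a plain u32.
fn assert_future<F: Future<Output = u32>>(f: F) -> F {
    f
}

fn demo() -> u32 {
    let mut c = Counter(0);
    // Although `get` is declared `-> u32`, the call yields a future.
    let fut = assert_future(c.get());
    drop(fut); // never awaited, so the body never runs
    c.0
}

fn main() {
    assert_eq!(demo(), 0);
}
```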

Isn’t RTN kind of verbose?

Interesting fact: when I talk to people about what is confusing in Rust, the trait system ranks as high or higher than the borrow checker. If we take another look at our motivating example, I think we can start to see why:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck<check(..): Send> + Send + 'static,

That where-clause basically just says “H is safe to use from other threads”, but it requires a pretty dense bit of notation! (And, of course, also demonstrates that the borrow checker and the trait system are not independent things, since ’static can be seen as a part of both, and is certainly a common source of confusion.) Wouldn’t it be nice if we had a more compact way to say that?

Now imagine you have a trait with a lot of methods:

trait AsyncOps {
    async fn op1(self);
    async fn op2(self);
    async fn op3(self);
}

Under the current proposal, to create an AsyncOps that can be (fully) used across threads, one would write:

fn do_async_ops<A>(async_ops: A, server: Server)
where
    A: AsyncOps<op1(..): Send, op2(..): Send, op3(..): Send> + Send + 'static,

You could use a trait alias (if we stabilized them) to help here, but still, this seems like a problem!

But maybe that verbosity is useful?

Indeed! RTN is a very flexible notation. To continue with the AsyncOps example, we could write a function that says “the future returned by op1 must be Send, but not the others”, which would be useful for a function like this:

async fn do_op1_in_parallel(a: impl AsyncOps<op1(..): Send + 'static>) {
    //                                       ^^^^^^^^^^^^^^^^^^^^^^^
    //                                       Return value of `op1` must be Send and 'static
}

Is RTN limited to async fn in traits?

All my examples have focused on async fn in traits, but we can use RTN to name the return types of any function anywhere. For example, given a function like get:

fn get() -> impl FnOnce() -> u32 {
    move || 22
}

we could allow you to write get() to name the closure type that is returned:

fn foo() {
    let c: get() = get();
    let d: u32 = c();
}

This seems like it would be useful for things like iterator combinators, so that you can say things like “the iterator returned by calling map is Send”.
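A runnable version of that snippet in today's Rust shows the gap RTN would fill: the closure type exists, but it can only be recovered by inference, never written down:

```rust
fn get() -> impl FnOnce() -> u32 {
    move || 22
}

fn main() {
    // Today `c`'s type must be inferred; with RTN one could write
    // `let c: get() = get();` instead.
    let c = get();
    let d: u32 = c();
    assert_eq!(d, 22);
}
```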

Why do we have to write ..?

OK, nobody asks this, but I do sometimes feel that writing .. just seems silly. We could say that you just write H::check(): Send to mean “for all parameters”. (In the case where the method has no parameters, then “for all parameters” is satisfied trivially.) That doesn’t change anything fundamental about the proposal but it lightens the “line noise” aspect a tad:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck<check(): Send> + Send + 'static,

It does introduce some ambiguity. Did the user mean “for all parameters” or did they forget that check() has parameters? I’m not sure how this confusion is harmful, though. The main way I can see it coming about is something like this:

  • check() initially has zero parameters, and the user writes check(): Send.
  • In a later version of the program, a parameter is added, and now the meaning of check changes to “for all parameters” (although, as we noted before, that was arguably the meaning before).

There is a shift happening here, but what harm can it do? If the check still passes, then check(T): Send is true for any T. If it doesn’t, the user gets an error and has to add an explicit type for this new parameter.

Can we really handle this in our trait solver?

As we saw when discussing generic methods, handling this feature in its full generality is a bit much for our trait solver today. But we could begin with a subset – for example, the notation can only be used in where-clauses and only for methods that are generic over lifetime parameters and not types. Tyler and I worked out a subset we believe would be readily implementable.


Conclusion

This post introduced return-type notation, an extension to the type grammar that allows you to refer to the return type of a trait method, and covered some of the pros/cons. Here is a rundown:


Pros:
  • Extremely flexible notation that lets us say precisely which methods must return Send types, and even lets us go into detail about which argument types they will be called with.
  • Avoids having to specify a desugaring to associated types precisely. For example, we don’t have to decide how to name that type, nor do we have to decide how many lifetime parameters it has, or whether impl Trait arguments become type parameters.
  • Can be used to refer to return values of things beyond async functions.


Cons:
  • New concept for users to learn — now they have associated types as well as associated return types.
  • Verbose even for common cases; doesn’t scale up to traits with many methods.

Mozilla ThunderbirdWhy We’re Rebuilding The Thunderbird Interface From Scratch

The future of Thunderbird

Thunderbird is quickly approaching its 20th anniversary as a standalone email client. And as we get closer to this year’s release of Thunderbird 115 “Supernova” we’re hearing a certain question more and more often:

“Why does Thunderbird look so old, and why does it take so long to change?”

~ A notable percentage of Thunderbird users

It’s certainly a valid one, so let’s spend some time answering it!

As Thunderbird’s Product Design manager, I have some good insights into what’s happening and where things are going. Consider this article (and the companion video below) the first painting in a more complete mural showing where Thunderbird is headed, and why some of the things we’re doing might seem counterintuitive.

Some of the talking points below might be divisive. They might touch a nerve. But we believe in being transparent and open about both our past and our future.

<figcaption class="wp-element-caption">Watch our companion video, hosted by Alex, which goes into even more detail. </figcaption>

3 Objectives For The Next 3 Years

Before we really dig in, let’s start with the future. We believe it’s a bright one!

With this year’s release of Thunderbird 115 “Supernova,” we’re doing much more than just another yearly release. It’s a modernized overhaul of the software, both visually and technically. Thunderbird is undergoing a massive rework from the ground up to get rid of the technical and interface debt accumulated over the past 10 years.

This is not an easy task, but it’s necessary to guarantee the sustainability of the project for the next 20 years.

Simply “adding stuff on top” of a crumbling architecture is not sustainable, and we can’t keep ignoring it.

Throughout the next 3 years, the Thunderbird project is aiming at these primary objectives:

  • Make the code base leaner and more reliable, rewrite ancient code, remove technical debt.
  • Rebuild the interface from scratch to create a consistent design system, and develop and maintain an adaptable, extremely customizable user interface.
  • Switch to a monthly release schedule.

Inside those objectives there are hundreds of very large steps that need to happen, and achieving everything will require a lot of time and resources.

Thunderbird: An Old, Fragile LEGO Tower

<figcaption class="wp-element-caption">Photo by Mourizal Zativa on Unsplash</figcaption>

What’s all this stuff about “technical debt?” Why does it need to be rebuilt? Let’s talk about how we got here, and shed some light on the complicated history of Thunderbird’s development.

Thunderbird is a monolithic application that has been developed by thousands of people over the course of two decades. Making major changes — as we’re doing with Supernova — requires very careful consideration.

As you’re reading this, it might help to imagine Thunderbird as an enormous Lego tower you’ve built. But years later, you realize the crucial center piece serving as the foundation is the wrong shape. If you replace just that piece, the whole tower will crumble. This means you have to slowly remove the blocks above it to keep the tower from collapsing. Then, once you reach that center piece, you replace it, and add back the pieces you removed with slightly different pieces.

Why? Because the original pieces don’t fit anymore.

How Is Thunderbird Made?

Thunderbird is literally a bunch of code running on top of Firefox. All the tabs and sections you see in our applications are just browser tabs with a custom user interface.

We love using Firefox as our base architecture, because it leverages all the very good stuff within. Things like cross-platform support, the Gecko web renderer, the SpiderMonkey JavaScript engine, and so on.

The Firefox logo + the Thunderbird logo

In doing so, Thunderbird can tag along with Firefox’s release cycle, inherit security patches, benefit from extensions support, and much more.

Obviously there’s more complexity to it, including a lot of C++, JS, CSS, and XHTML to ensure everything works properly. Using a solid base architecture like Firefox is the perfect starting point.

Unfortunately, this approach comes with a hefty cost.

Keep in mind that Thunderbird is currently being actively developed by a bit more than a dozen core developers. Firefox has hundreds of developers constantly changing and improving things on a daily basis.

So, you can imagine how many times per week things suddenly break in Thunderbird because a C++ interface was renamed, or an API was deprecated, or a build library was upgraded. Keeping up with the upstream changes is not a simple task, and on some occasions it takes up most of our days.

“Is Thunderbird Dead?”

That cost — and what I’ll talk about next — is why Thunderbird has accumulated an enormous amount of “technical debt” to pay off.

Throughout the years, Mozilla’s focus shifted a lot, with fewer and fewer resources invested into the development of Thunderbird. On July 6, 2012, the Mozilla Foundation announced that it would no longer be focused on innovations for Thunderbird, and that future Thunderbird development would transition to a community-driven model.

This meant that community members and external contributors would be in charge of developing and supporting Thunderbird.

This decision was both a blessing and curse.

The blessing: it sparked a fire of support and contributions inside the community, allowing passionate contributors to submit code and improve Thunderbird in areas they cared about. Many features and customization options were introduced because a lot of community members started sharing and proposing their ideas to improve Thunderbird. The community grew, and the project became a solid example of real software democracy!

The curse: coordinating efforts across a volunteer community was challenging. Plus, there weren’t enough resources to ensure the long-term success and sustainability of an open-source software project.

Our Community Saved Thunderbird, But…

The Thunderbird community absolutely kept the project alive all of those years. Millions of active users, contributors, donors, and supporters have dedicated hours and hours of their free time in order to guarantee a usable and useful tool for so many. And they did a great job — something we’re eternally grateful for.

Our community responded and adapted to the scenario they found, and they tried to make the best of it.

Since Thunderbird was being contributed to by many volunteer contributors with varying tastes, it ended up with an inconsistent user interface and no coherent user experience.

Moreover, the lack of constant upstream synchronization with Firefox meant that, for months at a time, Thunderbird could not be built and released.

The more time passed without a proper development structure, the more difficult it became to keep up with the technology innovations and improvements from competitors. Thunderbird now lacked a proper organization behind it. It lacked development oversight, a cohesive vision, and a roadmap. It lacked full-time employees with specific expertise.

And all of that contributed to a question that grew louder and louder as the years went by: “Is Thunderbird dead?”

MZLA Technologies and Community Culture Shock

Today, Thunderbird is wholly owned by MZLA Technologies, a subsidiary of Mozilla Foundation, and it’s actively developed and maintained by a growing group of paid employees. We have a proper organization, a roadmap, and people in charge of making smart decisions and defining directions.

This shift, which happened slowly between 2017 and 2020, was a bit of a shock for our community. Now, additions or changes need to be approved by core developers and designers. A stricter roadmap and list of features gets the priority during every release cycle, and external contributions are rejected if they’re not up to the standard of quality and visual direction of the project.

This sudden shift in the way Thunderbird is handled and developed created a “walled garden” feeling. This caused many community members to feel rejected or alienated by a project they spent hours on.

This is absolutely understandable, but it was necessary. 

Still Open, And Still Open Source

At Thunderbird, we strive to remain open, welcoming, and collaborative as much as possible.

We constantly advocate for an open process, starting from the initial roadmap ideas, releasing early mock-ups and changes to our community, as well as keeping our entire source code open and accessible.

Even though we’re very upfront and honest about the direction of the project, the decision making process happens during internal meetings. It’s driven by the people in charge, like a normal company. The lead developer, lead designer, project and product manager, senior engineers, etc, make the final decisions.

We always listen and incorporate the feedback from the community, and we try to balance what we know is needed with what our users and external contributors want. But you can’t make everyone happy; trying to do so can actually dilute and devalue your product.

The toughest thing is changing the perception that “we” (the core developers) don’t care about the community, that we just do things to upset them, or that we change things just because it’s “trendy”.

That couldn’t be more wrong.

What To Expect Going Forward

In 2023, Thunderbird is sustainable, with a healthy donation flow, more services in development to increase our revenue stream (stay tuned!), and an ever-growing team of developers and designers bringing their expertise to the table.

The technical debt is slowly disappearing from the source code, thanks to the outstanding work of many core developers who are implementing modern paradigms, documenting a consistent coding style, and removing the crusty old code that only creates problems.

Improvements to the UI and UX will continue for the next 2 years, with the objective of creating an interface that can adapt to everyone’s needs. A modern-looking UI arrives first in version 115 this July, offering a simple and clean interface for new users, along with more customization options and a flexible, adaptable interface that lets veteran users keep the familiarity they love.

A renewed attention to usability and accessibility is now part of our daily development process, guaranteeing easy discoverability of all the powerful features, as well as full compatibility with assistive technologies to make Thunderbird usable by everyone.

And yes, absolutely: the constant addition of new features that some of our competitors have had for years, as well as the creation of some amazing and innovative solutions that will improve everyone’s experience.

Everything, as usual, wrapped around an open and ethical process, with a constant attention to our community, and a renewed passion to innovate and grow, to make Thunderbird the best personal and professional communication application out there!

Thanks for taking the ride with us. And thank you for trusting us. We welcome your feedback in the comments below.

The post Why We’re Rebuilding The Thunderbird Interface From Scratch appeared first on The Thunderbird Blog.

The Rust Programming Language BlogAnnouncing Rust 1.67.1

Firefox NightlyLet’s Give You More Control(s) – These Weeks in Firefox: Issue 132


  • Dao has enabled the URL bar result menu in Nightly! This gives you more control over the results that appear.
  • Screenshot of the URL bar, showing an option to remove a suggested result.

    You can remove hamster dance from your history, but not from your heart.

  • The improved video controls for Picture-in-Picture have been enabled in Nightly
    • We have introduced the playhead scrubber, timestamp, seek forward and backward buttons, and the fullscreen button.
    • Screenshot of the PiP window showing the new controls, including playhead scrubber, timestamp, seek forward and backward buttons, and the fullscreen button.
    • Please file bugs here if you find any!
  • Thanks to Christian Sonne, it’s now easier to copy output from console.table into a spreadsheet
  • The DevTools team has tweaked the console output to be more legible when the console is narrow
  • Screenshot from the Devtools console, showing old wrapping behavior in v110 on top, and updated wrapping behavior in v111 on bottom.
  • Thanks to Gregory Pappas, who has given the WebExtension API a few enhancements to help Firefox be more compatible with Chrome WebExtensions that use that API!

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]
  • Itiel

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  •  As follow-ups related to the extensions button and panel:
    • Cleaned up code and CSS that is now unused on all channels (Bug 1801540, Bug 1799009), fixed the extension icon size used when extension action buttons are pinned to the toolbar (Bug 1811128), and fixed the positioning of the extension install doorhanger (Bug 1813275)
  • As part of enhancements related to Origin Controls, starting from Firefox 111 the user will be able to allow manifest_version 3 extensions to get access to a website origin loaded in a tab for a single visit (Bug 1805523)
  • Emilio fixed a small visual regression related to the private browsing checkbox included in the post-install dialog (fixed in Firefox 111 and Firefox 110 by Bug 1812445, initially regressed in Firefox 108 by Bug 1790616)
WebExtension APIs

Developer Tools

  • Kernp25 fixed an error that was happening on tab unloading (bug)
  • Zacnomore improved the layout of the pseudoclass toggles in the inspector (bug)
    • Screenshot of Devtools style rules panel, showing old and new layouts of the pseudoclass filter panel.
  • Thanks to Ian from the SpiderMonkey team who helped us fix a case where breaking on exception would pause at the wrong line in the debugger when faulty code was called inside a for..of loop (bug)
  • Hubert added a title for debugger search results from dynamic sources (eval, new Function(), …) (bug)
    • Screenshot of Devtools debugger, showing old and new treatment of debugger search results from dynamic sources.
  • Hubert also added a performance test so we can track perceived performance of debugger search (bug)
  • We added autocomplete in rule view for color level 4 formats (color, lab, lch, oklab, oklch) when layout.css.more_color_4.enabled is true (bug, bug)
  • Alex fixed an issue with the Browser toolbox’s debugger being broken when a frame was selected in the iframe dropdown (bug)
  • Alex migrated most of our server actors to proper ES classes (bug, bug)
  • Julian fixed a bug that prevented inspecting elements on some system pages (bug)
WebDriver BiDi
  • Sasha updated our vendored version of puppeteer (bug) which allows us to run Puppeteer BIDI protocol unit tests in CI (bug)
  • Henrik finalized (de-)serialization support for WebElement and ShadowRoot (bug, bug)
  • GeckoDriver was running with Fission disabled in some scenarios (e.g. in Selenium). Henrik fixed the issue (bug) and we released a new version of GeckoDriver, 0.32.1 (bug)

ESMification status

  • Small increases this week, but various patches are in progress.
  • ESMified status:
    • browser: 46.7%
    • toolkit: 38.3%
    • Total: 47.1% (up from 46.5%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)


Search and Navigation

  • Daisuke fixed an issue so that page titles are shown in the urlbar when the whole URL is typed @ 1791657
  • James set up default engines for Friulian and Sardinian @ 1807791
  • Dale added telemetry and a minimum character match for QuickActions @ 1812152
  • Mandy fixed the weather results wrapping incorrectly @ 1811556
  • Mark fixed search engine names so they update correctly when switching languages @ 1781768
  • Oliver fixed the placeholder so it is shown correctly in private browsing @ 1792816

Storybook / Reusable components

  • Itiel fixed an RTL issue with moz-toggle (Bug 1813590)
  • Tim landed patches to replace “learn more” links in about:addons with moz-support-link (Bug 1804695)
  • Tim removed the redundant support link implementation from about:addons (Bug 1809458)
  • Hanna enabled support for writing .mdx and .md based stories in our Storybook (Bug 1805573)
  • We’ve set up a very quiet (so far) matrix room for reusable components

Wladimir PalantWeakening TLS protection, South Korean style

Note: This article is also available in Korean.

Normally, when you navigate to your bank’s website you have little reason to worry about impersonations. The browser takes care of verifying that you are really connected to the right server, and that your connection is safely encrypted. It will indicate this by showing a lock icon in the address bar.

So even if you are connected to a network you don’t trust (such as open WiFi), nothing can go wrong. If somebody tries to impersonate your bank, your browser will notice and refuse to connect.

Screenshot of an error message: Did Not Connect: Potential Security Issue. Firefox detected a potential security threat and did not continue to because this website requires a secure connection.

This is achieved by means of a protocol called Transport Layer Security (TLS). It relies on a number of trusted Certification Authorities (CAs) to issue certificates to websites. These certificates allow websites to prove their identity.

When investigating South Korea’s so-called security applications I noticed that all of them add their own certification authorities that browsers have to trust. This weakens the protection provided by TLS considerably, as misusing these CAs allows impersonating any website towards a large chunk of South Korean population. This puts among other things the same banking transactions at risk that these applications are supposed to protect.

Which certification authorities are added?

After doing online banking on your computer in South Korea, it’s worth taking a look at the trusted certification authorities of your computer. Most likely you will see names that have no business being there. Names like iniLINE, Interezen or Wizvera.

Screenshot of the Windows “Trusted Root Certification Authorities” list. Among names like GTE CyberTrust or Microsoft, also iniLINE and Interezen are listed.

None of these are normally trusted. They have rather been added to the operating system’s storage by the respective applications. These applications also add their certification authorities to Firefox which, unlike Google Chrome or Microsoft Edge, won’t use operating system’s settings.

So far I found the following certification authorities being installed by South Korean applications:

Name Installing application(s) Validity Serial number
ASTxRoot2 AhnLab Safe Transaction 2015-06-18 to 2038-06-12 009c786262fd7479bd
iniLINE CrossEX RootCA2 TouchEn nxKey 2018-10-10 to 2099-12-31 01
INTEREZEN CA Interezen IPInside Agent 2021-06-09 to 2041-06-04 00d5412a38cb0e4a01
LumenSoft CA KeySharp CertRelay 2012-08-08 to 2052-07-29 00e9fdfd6ee2ef74fc
WIZVERA-CA-SHA1 Wizvera Veraport 2019-10-23 to 2040-05-05 74b7009ee43bc78fce69 73ade1da8b18c5e8725a
WIZVERA-CA-SHA2 Wizvera Veraport, Wizvera Delfino 2019-10-23 to 2040-05-05 20bbeb748527aeaa25fb 381926de8dc207102b71

And these certification authorities will stay there until removed manually. The applications’ uninstallers won’t remove them.

They are also enabled for all purposes. So one of these authorities being compromised will not merely affect web server identities but also application or email signatures for example.

Update (2023-02-19): Someone compiled a more comprehensive list of these certificates.

Will a few more certification authorities really hurt?

If you look at the list of trusted certification authorities, there are more than 50 entries on it anyways. What’s the problem if a few more are added?

Running a Certificate Authority is a huge responsibility. Anyone with access to the private key of a trusted certification authority will be able to impersonate any website. Criminals and governments around the world would absolutely love to have this power. The former need it to impersonate your bank for example, the latter to spy on you undetected.

That’s why there are strict rules for certification authorities, making sure the access to the CA’s private key is restricted and properly secured. Running a certification authority also requires regular external audits to ensure that all the security parameters are still met.

Now with these South Korean applications installing their own Certificate Authorities on so many computers in South Korea, they become a huge target for hackers and governments alike. If a private key for one of these Certificate Authorities is compromised, TLS will provide very little protection in South Korea.

How do AhnLab, RaonSecure, Interezen, Wizvera deal with this responsibility? Do they store the private keys in a Hardware Security Module (HSM)? Are these in a secure location? Who has access? What certificates have been issued already? We have no answer to these questions. There are no external audits, no security practices that they have to comply with.

So people are supposed to simply trust these companies to keep the private key secure. As we’ve already seen from my previous articles however, they have little expertise in keeping things secure.

How could this issue be solved?

The reason for all these certificate authorities seems to be: the applications need to enable TLS on their local web server. Yet no real certificate authority will issue a certificate for a local web server, so these applications have to add their own.

If a certificate for the local web server is all they need, there is a simple solution. Instead of adding the same CA on all computers, it should be a different CA for each computer.

So the applications should do the following during the installation:

  1. Generate a new (random) certificate authority and the corresponding private key.
  2. Import this CA into the list of trusted certification authorities on the computer.
  3. Generate a certificate for the local web server and sign it with this CA. The application can now use it for its local web server.
  4. Destroy the private key of the CA.

In fact, Initech CrossWeb Ex V3 seems to do exactly that. You can easily recognize it because the displayed validity starts at the date of the installation. While it also installs its certificate authority, this one is valid for one computer only and thus unproblematic.

Oh, and one more thing to be taken care of: any CAs added should be removed when the application is uninstalled. Currently none of the applications seem to do it.

Alex VincentIntroducing Motherhen: Gecko-based applications from scratch

Mozilla‘s more than just Firefox. There’s Thunderbird for e-mail, BlueGriffon for creating web pages, and ye olde SeaMonkey Application Suite. Once upon a time, there was the ability to create custom front-ends on top of Firefox using the -app option. (It’s still there but not really supported.) Mozilla’s source code provides a rich ecosystem to build atop of.

With all that said, creating new Gecko-based applications has always been a challenge at best. There’s a surprising amount of high-quality software using a Chromium-based framework, Electron – and yes, I use some of them. (Visual Studio Code, in particular.) There should be a similar framework for Mozilla code.

Now there is: Motherhen, which I am releasing under the MPL 2 license. This is a GitHub template repository, meaning you can create a complete copy of the repository and start your own projects with it.

Motherhen is at release version 1.0, beta 2: it supports creating, building, running and packaging Mozilla applications only on Linux. On macOS, the mach package command doesn’t work. No one’s tried this on Windows yet. I need help with both of those, if someone’s willing.

Speaking of help, a big shout-out to TrickyPR from Pulse Browser for his contributions, especially with patches to Mozilla’s code to get this working and for developer tools support in progress!

<figcaption class="wp-element-caption">Motherhen screenshot</figcaption>

David TellerAbout Safety, Security and yes, C++ and Rust

Recent publications by Consumer Reports and the NSA have launched countless conversations in development circles about safety and its benefits.

In these conversations, I’ve seen many misunderstandings about what safety means in programming and how programming languages can implement, help or hinder safety. Let’s clarify a few things.

About:CommunityMeet us at FOSDEM 2023

Hello everyone,

It is that time of the year, and we are off to Brussels for FOSDEM 2023!

FOSDEM is a central appointment for the Open Source community.

This is the first year the conference will be back in person and Mozilla will be there, with a stand on the conference floor and many interesting talks in our DevRoom.

We are all looking forward to meeting up in person with developers and Open Source enthusiasts from all over Europe (and beyond).

The event will take place on the 4th and 5th of February, including more than 700 talks and 60 stands.

If you are there, come say hi at our stand or watch the streaming of our talks on the FOSDEM website!

Many mozillians that are going to FOSDEM will also be in this Matrix Room, so feel free to join and ask any questions.


The Mozilla Stand


Our stand will be in building K, level 2, and will be managed by many enthusiastic Mozillians. Come pick up a sticker and chat about all things Mozilla, including Firefox, MDN, Hubs, digital policy, and many other projects.


Mozilla DevRoom – UA2.220 (Guillissen)


The Mozilla DevRoom will take place on Saturday between 15:00 and 19:00. If you cannot make it, all the talks will be streamed during the event (click on the event link to find the streaming link).


15:00 – 15:30

Understanding the energy use of Firefox. With less power comes more sustainability – Florian Quèze


15:30 – 16:00

What’s new with the Firefox Profiler. Power tracks, UI improvements, importers – Nazım Can Altınova


16:00 – 16:30

Over a decade of anti-tracking work at Mozilla – Vincent Tunru


16:30 – 17:00

The Digital Services Act 101. What is it and why should you care – Claire Pershan


17:00 – 17:30

Cache The World. Adventures in A11Y Performance – Benjamin De Kosnik, Morgan Reschenberg


17:30 – 18:00

Firefox Profiler beyond the web. Using Firefox Profiler to view Java profiling data – Johannes Bechberger


18:00 – 18:30

Localize your open source project with Pontoon – Matjaž Horvat


18:30 – 19:00

The Road to Intl.MessageFormat – Eemeli Aro


Other Mozilla Talks


But that’s not all. There will also be other Mozilla-related talks around FOSDEM such as


We look forward to seeing you all.

Community Programs Team

Karl DubostBlade Runner 2023

Graffiti of a robot on a wall with buildings in the background.

Webcompat engineers will never be over their craft. I've seen things you people wouldn't believe. Large websites broken off the shoulder of developer tools. I watched Compat-beams glitter in the dark near the Interoperability Gate. All those moments will be lost in time, like tears in rain. Time to die.

In other news: Pushing Interop Forward in 2023

Now we are pleased to announce this year’s Interop 2023 project! Once again, we are joining with Bocoup, Google, Igalia, Microsoft, and Mozilla to move the interoperability of the web forward.


The Servo BlogServo 2023 Roadmap

As we move forward with our renewed project activity, we would like to share more details about our plans for 2023. We’ve recently published the Servo 2023 roadmap on the project wiki, and our community and governance and technical plans are outlined below.

Servo 2023 Roadmap. Project reactivation Q1-Q4. Project outreach Q1-Q4. Main dependencies upgrade Q1-Q3. Layout engine selection Q1-Q2. Progress towards basic CSS2 support Q3-Q4. Explore Android support Q3-Q4. Embeddable web engine experiments Q4.

Community and governance

We’re restarting all the usual activities, including PR triage and review, public communications about the project, and arranging TSC meetings. We will also make some outreach efforts in order to attract more collaborators, partners, and potential sponsors interested in working, participating, and funding the project.

Technical


We want to upgrade the main dependencies of Servo, like WebRender and Stylo, to get them up to date. We will also analyse the status of the two layout engines in Servo, and select one of them for continued development. Our plan is to then work towards basic CSS2 conformance.

Regarding platform support, we would like to explore the possibility of supporting Android. We would also like to experiment with making Servo a practical embeddable web rendering engine.

As with any software project, this roadmap will evolve over time, but we’ll keep you posted. We hope you’ll join us in making it happen.

Hacks.Mozilla.OrgAnnouncing Interop 2023

A key difference between the web and other platforms is that the web puts users in control: people are free to choose whichever browser best meets their needs, and use it with any website. This is interoperability: the ability to pick and choose components of a system as long as they adhere to common standards.

For Mozilla, interoperability based on standards is an essential element of what makes the web special and sets it apart from other, proprietary, platforms. Therefore it’s no surprise that maintaining this is a key part of our vision for the web.

However, interoperability doesn’t just happen. Even with precise and well-written standards it’s possible for implementations to have bugs or other deviations from the agreed-upon behavior. There is also a tension between the desire to add new features to the platform, and the effort required to go back and fix deficiencies in already shipping features.

Interoperability gaps can result in sites behaving differently across browsers, which generally creates problems for everyone. When site authors notice the difference, they have to spend time and energy working around it. When they don’t, users suffer the consequences. Therefore it’s no surprise that authors consider cross-browser differences to be one of the most significant frustrations when developing sites.

Clearly this is a problem that needs to be addressed at the source. One of the ways we’ve tried to tackle this problem is via web-platform-tests. This is a shared testsuite for the web platform that everyone can contribute to. This is run in the Firefox CI system, as well as those of other vendors. Whenever Gecko engineers implement a new feature, the new tests they write are contributed back upstream so that they’re available to everyone.

Having shared tests allows us to find out where platform implementations are different, and gives implementers a clear target to aim for. However, users’ needs are large, and as a result, the web platform is large. That means that simply trying to fix every known test failure doesn’t work: we need a way to prioritize and ensure that we strike a balance between fixing the most important bugs and shipping the most useful new features.

The Interop project is designed to help with this process, and enable vendors to focus their energies in the way that’s most helpful to the long term health of the web. Starting in 2022, the Interop project is a collaboration between Apple, Bocoup, Google, Igalia, Microsoft and Mozilla (and open to any organization implementing the web platform) to set a public metric to measure improvements to interoperability on the web.

Interop 2022 showed significant improvements in the interoperability of multiple platform features, along with several cross-browser investigations that looked into complex, under-specified, areas of the platform where interoperability has been difficult to achieve. Building on this, we’re pleased to announce Interop 2023, the next iteration of the Interop project.

Interop 2023

Like Interop 2022, Interop 2023 considers two kinds of platform improvement:

Focus areas cover parts of the platform where we already have a high quality specification and good test coverage in web-platform-tests. Therefore progress is measured by looking at the pass rate of those tests across implementations. “Active focus areas” are ones that contribute to this year’s scores, whereas “inactive” focus areas are ones from previous years where we don’t anticipate further improvement.

As well as calculating the test pass rate for each browser engine, we’re also computing the “Interop” score: how many tests are passed by all of Gecko, WebKit and Blink. This reflects our goal not just to improve one browser, but to make sure features work reliably across all browsers.

Investigations are for areas where we know interoperability is lacking, but can’t make progress just by passing existing tests. These could include legacy parts of the platform which shipped without a good specification or tests, or areas which are hard to test due to missing test infrastructure. Progress on these investigations is measured according to a set of mutually agreed goals.

Focus Areas

The complete list of focus areas can be seen in the Interop 2023 readme. This was the result of a consensus based process, with input from web authors, for example using the results of the State of CSS 2022 survey, and MDN “short surveys”. That process means you can have confidence that all the participants are committed to meaningful improvements this year.

Rather than looking at all the focus areas in detail, I’ll just call out some of the highlights.

CSS


Over the past several years CSS has added powerful new layout primitives — flexbox and grid, followed by subgrid — to allow sophisticated, easy to maintain, designs. These are features we’ve been driving & championing for many years, and which we were very pleased to see included in Interop 2022. They have been carried forward into Interop 2023, adding additional tests, reflecting the importance of ensuring that they’re totally dependable across implementations.

As well as older features, Interop 2023 also contains some new additions to CSS. Based on feedback from web developers we know that two of these in particular are widely anticipated: Container Queries and parent selectors via :has(). Both of these features are currently being implemented in Gecko; Container Queries are already available to try in prerelease versions of Firefox and are expected to be released in Firefox 110 later this month, whilst :has() is under active development. We believe that including these new features in Interop 2023 will help ensure that they’re usable cross-browser as soon as they’re shipped.

Web Apps

Several of the features included in Interop 2023 are those that extend and enhance the capability of the platform; either allowing authors to achieve things that were previously impossible, or improving the ergonomics of building web applications.

The Web Components focus area is about ergonomics; components allow people to create and share interactive elements that encapsulate their behavior and integrate into native platform APIs. This is especially important for larger web applications, and success depends on the implementations being rock solid across all browsers.

Offscreen Canvas and Web Codecs are focus areas which are really about extending the capabilities of the platform; allowing rich video and graphics experiences which have previously been difficult to implement efficiently using web technology.

Web Compatibility


Unlike the other focus areas, Web Compatibility isn’t about a specific feature or specification. Instead the tests in this focus area have been written and selected on the basis of observed site breakage, for example from browser bug reports or via webcompat.com. The fact that these bugs are causing sites to break immediately makes them a very high priority for improving interoperability on the web.

Investigations


Unfortunately not all interoperability challenges can be simply defined in terms of a set of tests that need to be fixed. In some cases we need to do preliminary work to understand the problem, or to develop new infrastructure that will allow testing.

For 2023 we’re going to concentrate on two areas in which we know that our current test infrastructure is insufficient: mobile platforms and accessibility APIs.

Mobile browsing interaction modes often create web development and interoperability challenges that don’t occur on desktop. For example, the browser viewport is significantly more dynamic and complex on mobile, reflecting the limited screen size. Whilst browser vendors have ways to test their own mobile browsers, we lack the shared infrastructure required to run mobile-specific tests in web-platform-tests and include the results in Interop metrics. The Mobile Testing investigation will look at plugging that gap.

Users who make use of assistive technology (e.g., screen readers) depend on parts of the platform that are currently difficult to test in a cross-browser fashion. The Accessibility Testing investigation aims to ensure that accessibility technologies are just as testable as other parts of the web technology stack and can be included in future rounds of Interop as focus areas.

Together these investigations reflect the importance of ensuring that the web works for everyone, irrespective of how they access it.


Interop 2023 Dashboard as of January 2023, showing an Interop score of 61, an Investigation Score of 0, and browser engine scores of 86 for Blink and WebKit and 74 for Gecko.

To follow progress on Interop 2023, see the dashboard on wpt.fyi. This gives detailed scores for each focus area, as well as overall progress on Interop and the investigations.

Mozilla & Firefox

The Interop project is an important part of Mozilla’s vision for a safe & open web where users are in control, and can use any browser on any device. Working with other vendors to focus efforts towards improving cross-browser interoperability is a big part of making that vision a reality. We also know how important it is to lead through our products, and look forward to bringing these improvements to Firefox and into the hands of users.

Partner Announcements

The post Announcing Interop 2023 appeared first on Mozilla Hacks - the Web developer blog.

Niko MatsakisAsync trait send bounds, part 1: intro

Nightly Rust now has support for async functions in traits, so long as you limit yourself to static dispatch. That’s super exciting! And yet, for many users, this support won’t yet meet their needs. One of the problems we need to resolve is how users can conveniently specify when they need an async function to return a Send future. This post covers some of the background on send futures, why we don’t want to adopt the solution from the async_trait crate for the language, and the general direction we would like to go. Follow-up posts will dive into specific solutions.

Why do we care about Send bounds?

Let’s look at an example. Suppose I have an async trait that performs some kind of periodic health check on a given server:

trait HealthCheck {
    async fn check(&mut self, server: &Server) -> bool;
}

Now suppose we want to write a function that, given a HealthCheck, starts a parallel task that runs that check every second, logging failures. This might look like so:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
{
    tokio::spawn(async move {
        while health_check.check(&server).await {
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
        emit_failure_log(&server).await;
    });
}

So far so good! So what happens if we try to compile this? If you try it yourself with the async_fn_in_trait feature gate, you should see a compilation error like so:

error: future cannot be sent between threads safely
   --> src/
15  |       tokio::spawn(async move {
    |  __________________^
16  | |         while health_check.check(&server).await {
17  | |             tokio::time::sleep(Duration::from_secs(1)).await;
18  | |         }
19  | |         emit_failure_log(&server).await;
20  | |     });
    | |_____^ future created by async block is not `Send`
    = help: within `[async block@src/ 20:6]`, the trait `Send` is not implemented for `impl Future<Output = bool>`

The error is saying that the future for our task cannot be sent between threads. But why not? After all, the health_check value is both Send and ’static, so we know that health_check is safe to send it over to the new thread. But the problem lies elsewhere. The error has an attached note that points it out to us:

note: future is not `Send` as it awaits another future which is not `Send`
   --> src/
16  |         while health_check.check(&server).await {
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^ await occurs here

The problem is that the call to check is going to return a future, and that future is not known to be Send. To see this more clearly, let’s desugar the HealthCheck trait slightly:

trait HealthCheck {
    // async fn check(&mut self, server: &Server) -> bool;
    fn check(&mut self, server: &Server) -> impl Future<Output = bool>;
    //                                      ^ Problem is here! This returns a future, but not necessarily a `Send` future.
}

The problem is that check returns an impl Future, but the trait doesn’t say whether this future is Send or not. The compiler therefore sees that our task is going to be awaiting a future, but that future might not be sendable between threads.

What does the async-trait crate do?

Interestingly, if you rewrite the above example to use the async_trait crate, it compiles. What’s going on here? The answer is that the async_trait proc macro uses a different desugaring. Instead of creating a trait that yields -> impl Future, it creates a trait that returns a Pin<Box<dyn Future + Send>>. This means that the future can be sent between threads; it also means that the trait is dyn-safe.
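For illustration, here is a rough hand-written sketch of that boxed desugaring. This is simplified and my own approximation, not the macro’s actual expansion; the Server and AlwaysHealthy types are placeholders:

```rust
use std::future::Future;
use std::pin::Pin;

struct Server;

// Sketch of the async_trait-style desugaring: the returned future is
// boxed and required to be `Send`.
trait HealthCheck {
    fn check<'a>(
        &'a mut self,
        server: &'a Server,
    ) -> Pin<Box<dyn Future<Output = bool> + Send + 'a>>;
}

struct AlwaysHealthy;

impl HealthCheck for AlwaysHealthy {
    fn check<'a>(
        &'a mut self,
        _server: &'a Server,
    ) -> Pin<Box<dyn Future<Output = bool> + Send + 'a>> {
        Box::pin(async { true })
    }
}

fn main() {
    let mut hc = AlwaysHealthy;
    let server = Server;
    let fut = hc.check(&server);
    // The property this buys us: the returned future is statically `Send`.
    fn assert_send<T: Send + ?Sized>(_: &T) {}
    assert_send(&fut);
}
```

Because the `Send` bound is baked into the return type, any future returned by any implementation can be moved between threads, which is why the spawn example compiles.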

This is a good answer for the async-trait crate, but it’s not a good answer for a core language construct, as it loses key flexibility. We want to support async in single-threaded executors, where the Send bound is irrelevant, and we also want to support async in no-std applications, where Box isn’t available. Moreover, we want to have key interop traits (e.g., Read) that can be used for all three of those applications at the same time. An approach like the one used in async-trait cannot support a trait that works for all three of those applications at once.

How would we like to solve this?

Instead of having the trait specify whether the returned future is Send (or boxed, for that matter), our preferred solution is to have the start_health_check function declare that it requires check to return a sendable future. Remember that start_health_check already included a where clause specifying that the type H was sendable across threads:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
    // —————  ^^^^^^^^^^^^^^ “sendable to another disconnected thread”
    //     |
    // Implements the `HealthCheck` trait

Right now, this where clause says two independent things:

  • H implements HealthCheck;
  • values of type H can be sent to an independent task, which is really a combination of two things
    • type H can be sent between threads (H: Send)
    • type H contains no references to the current stack (H: 'static)

What we want is to add syntax to specify an additional condition:

  • H implements HealthCheck and its check method returns a Send future

In other words, we don’t want just any type that implements HealthCheck. We specifically want a type that implements HealthCheck and returns a Send future.

Note the contrast to the desugaring approach used in the async_trait crate: in that approach, we changed what it means to implement HealthCheck to always require a sendable future. In this approach, we allow the trait to be used in both ways, but allow the function to say when it needs sendability or not.

The approach of “let the function specify what it needs” is very in-line with Rust. In fact, the existing where-clause demonstrates the same pattern. We don’t say that implementing HealthCheck implies that H is Send, rather we say that the trait can be implemented by any type, but allow the function to specify that H must be both HealthCheck and Send.

Next post: Let’s talk syntax

I’m going to leave you on a cliffhanger. This blog post set up the problem we are trying to solve: for traits with async functions, we need some kind of syntax for declaring that you want an implementation that returns Send futures, and not just any implementation. In the next set of posts, I’ll walk through our proposed solution to this, and some of the other approaches we’ve considered and rejected.

Appendix: Why does the returned future have to be send anyway?

Some of you may wonder why it matters that the future returned is not Send. After all, the only thing we are actually sending between threads is health_check — the future is being created on the new thread itself, when we call check. It is a bit surprising, but this is actually highlighting an area where async tasks are different from threads (and where we might consider future language extensions).

Async is intended to support a number of different task models:

  • Single-threaded: all tasks run in the same OS thread. This is a great choice for embedded systems, or systems where you have lightweight processes (e.g., Fuchsia1).
  • Work-dealing, sometimes called thread-per-core: tasks run in multiple threads, but once a task starts in a thread, it never moves again.
  • Work-stealing: tasks start in one thread, but can migrate between OS threads while they execute.

Tokio’s spawn function supports the final mode (work-stealing). The key point here is that the future can move between threads at any await point. This means that it’s possible for the future to be moved between threads while awaiting the future returned by check. Therefore, any data in this future must be Send.

This might be surprising. After all, the most common example of non-send data is something like a (non-atomic) Rc. It would be fine to create an Rc within one async task and then move that task to another thread, so long as the task is paused at the point of move. But there are other non-Send types that wouldn’t work so well. For example, you might make a type that relies on thread-local storage; such a type would not be Send because it’s only safe to use it on the thread in which it was created. If that type were moved between threads, the system could break.
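To make this concrete, here is a small self-contained sketch (my own illustration, not from the post) showing that holding a non-Send value like Rc across an .await point makes the whole future non-Send:

```rust
use std::rc::Rc;

// Trivial future used only to create an await point.
async fn tick() {}

fn main() {
    fn is_send<T: Send>(_: &T) {}

    // A `Send` value held across an await keeps the future `Send`:
    let ok = async {
        let x = 1u32;
        tick().await;
        x
    };
    is_send(&ok); // compiles fine

    // A non-`Send` value (`Rc`) held across an await makes it non-`Send`:
    let _not_ok = async {
        let rc = Rc::new(1u32);
        tick().await; // `rc` is live across this await point...
        *rc
    };
    // is_send(&_not_ok); // ...so uncommenting this line fails to compile
}
```

This is exactly the check that a work-stealing spawn performs: any data live across an await might be on a different thread the next time the future is polled.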

In the future, it might be useful to separate out types like Rc from other Send types. The distinguishing characteristic is that Rc can be moved between threads so long as all possible aliases are also moved at the same time. Other types are really tied to a specific thread. There’s no example in the stdlib that comes to mind, but it seems like a valid pattern for Rust today that I would like to continue supporting. I’m not sure yet the right way to think about that!

  1. I have finally learned how to spell this word without having to look it up! 💪 

The Rust Programming Language BlogAnnouncing Rustup 1.25.2

The rustup working group is announcing the release of rustup version 1.25.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.25.2 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.25.2

This version of rustup fixes a warning incorrectly saying that signature verification failed for Rust releases. The warning was due to a dependency of Rustup including a time-based check preventing the use of SHA-1 from February 1st, 2023 onwards.

Unfortunately Rust's release signing key uses SHA-1 to sign its subkeys, which resulted in all signatures being marked as invalid. Rustup 1.25.2 temporarily fixes the problem by allowing again the use of SHA-1.

Why is signature verification failure only a warning?

Signature verification is currently an experimental and incomplete feature included in rustup, as it's still missing crucial features like key rotation. Until the feature is complete and ready for use, its outcomes are only displayed as warnings without a way to turn them into errors.

This is done to avoid potentially breaking installations of rustup. Signature verification will error out on failure only once the design and implementation of the feature are finished.


Thanks again to all the contributors who made rustup 1.25.2 possible!

  • Daniel Silverstone (kinnison)
  • Pietro Albini (pietroalbini)

Hacks.Mozilla.OrgInterop 2022: Outcomes

Last March we announced the Interop 2022 project, a collaboration between Apple, Bocoup, Google, Igalia, Microsoft, and Mozilla to improve the quality and consistency of their implementations of the web platform.

Now that it’s 2023 and we’re deep into preparations for the next iteration of Interop, it’s a good time to reflect on how the first year of Interop has gone.

Interop Wins

Happily, Interop 2022 appears to have been a big success. Every browser has made significant improvements to their test pass rates in the Interop focus areas, and now all browsers are scoring over 90%. A particular success can be seen in the Viewport Units focus area, which went from 0% pass rate in all browsers to 100% in all browsers in less than a year. This almost never happens with web platform features!

Looking at the release version of browsers — reflecting what actually ships to users — Firefox started the year with a score of around 60% in Firefox 95 and reached 90% in Firefox 108, which was released in December. This reflects a great deal of effort put into Gecko, both in adding new features and improving the quality of implementation of existing features like CSS containment, which jumped from 85% pass rate to 98% with the improvements that were part of Firefox 103.

One of the big new web-platform features in 2022 was Cascade Layers, which first shipped as part of Firefox 97 in February. This was swiftly followed by implementations shipping in Chrome 99 and Safari 15.4, again showing the power of Interop to rapidly drive a web platform feature from initial implementation to something production-quality and available across browsers.

Another big win that’s worth highlighting was the progress of all browsers to >95% on the “Web Compatibility” focus area. This focus area consisted of a small set of tests from already implemented features where browser differences were known to cause problems for users (e.g. through bug reports to webcompat.com). In an environment where it’s easy to fixate on the new, it’s very pleasing to see everyone come together to clean up these longstanding problems that broke sites in the wild.

Other new features that have shipped, or become interoperable, as part of Interop 2022 have been written about in retrospectives by Apple and Google. There’s a lot of work there to be proud of, and I’d suggest you check out their posts.


Along with the “focus areas” based on counts of passing tests, Interop 2022 had three “investigations”, covering areas where there’s less clarity on what’s required to make the web interoperable, and progress can’t be characterized by a test pass rate.

The Viewport investigation resulted in multiple spec bugs being filed, as well as agreement with the CSSWG to start work on a Viewport Specification. We know that viewport-related differences are a common source of pain, particularly on mobile browsers; so this is very promising for future improvements in this area.

The Mouse and Pointer Events investigation collated a large number of browser differences in the handling of input events. A subset of these issues got tests and formed the basis for a proposed Interop 2023 focus area. There is clearly still more to be done to fix other input-related differences between implementations.

The Editing investigation tackled one of the most historically tricky areas of the platform, where it has long been assumed that complex tasks require the use of libraries that smooth over differences with bespoke handling of each browser engine. One thing that became apparent from this investigation is that IME input (used to input characters that can’t be directly typed on the keyboard) has behavioral differences for which we lack the infrastructure to write automated cross-browser tests. This Interop investigation looks set to catalyze future work in this area.

Next Steps

All the signs are that Interop 2022 was helpful in aligning implementations of the web and ensuring that users are able to retain a free choice of browser without running into compatibility problems. We plan to build on that success with the forthcoming launch of Interop 2023, which we hope will further push the state of the art for web developers and help web browser developers focus on the most important issues to ensure the future of a healthy open web.

The post Interop 2022: Outcomes appeared first on Mozilla Hacks - the Web developer blog.

Wladimir PalantPassword strength explained

The conclusion of my blog posts on the LastPass breach and on Bitwarden’s design flaws is invariably: a strong master password is important. This is especially the case if you are a target somebody would throw considerable resources at. But everyone else might still get targeted due to flaws like password managers failing to keep everyone on current security settings.

There is lots of confusion about what constitutes a strong password however. How strong is my current password? Also, how strong is strong enough? These questions don’t have easy answers. I’ll try my best to explain however.

If you are only here for recommendations on finding a good password, feel free to skip ahead to the Choosing a truly strong password section.

Where strong passwords are crucial

First of all, password strength isn’t always important. If your password is stolen as clear text via a phishing attack or a compromised web server, a strong password won’t help you at all.

In order to reduce the damage from such attacks, it’s way more important that you do not reuse passwords – each web service should have its own unique password. If your login credentials for one web service get into the wrong hands, these shouldn’t be usable to compromise all your other accounts e.g. by means of credential stuffing. And since you cannot possibly keep hundreds of unique passwords in your head, using a password manager (which can be the one built into your browser) is essential.

But this password manager becomes a single point of failure. Especially if you upload the password manager data to the web, be it to sync it between multiple devices or simply as a backup, there is always a chance that this data is stolen.

Of course, each password manager vendor will tell you that all the data is safely encrypted. And that you are the only one who can possibly decrypt it. Sometimes this is true. Often enough this is a lie however. And the truth is rather: nobody can decrypt your data as long as they are unable to guess your master password.

So that one password needs to be very hard to guess. A strong password.

Oh, and don’t forget enabling Multi-factor authentication (MFA) where possible regardless.

How password guessing works

When someone has your encrypted data, guessing the password it is encrypted with is a fairly straightforward process.

A flow chart starting with box 1 “Produce a password guess.” An arrow leads to a decision element 2 “Does this password work?” An arrow titled “No” leads to the original box 1. An arrow titled “Yes” leads to box 3 “Decrypt passwords.”

Ideally, your password manager made step 2 in the diagram above very slow. The recommendation for encryption is allowing at most 1,000 guesses per second on common hardware. This renders guessing passwords slow and expensive. Few password managers actually match this requirement however.

But password guesses will not be generated randomly. Passwords known to be commonly chosen like “Password1” or “Qwerty123” will be tested among the first ones. No amount of slowing down the guessing will prevent decryption of data if such an easy to guess password is used.

So the goal of choosing a strong password isn’t choosing a password including as many character classes as possible. It isn’t making the password look complex either. No, making it very long also won’t necessarily help. What matters is that this particular password comes up as far down as possible in the list of guesses.
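As a toy model (my own sketch, not from the article): the time to crack a password depends only on where it falls in the attacker’s guess list and on the guess rate.

```rust
// Illustrative model: guessing time = position in the candidate list
// divided by the guess rate. Complexity of appearance is irrelevant.
fn seconds_to_find(position_in_list: u64, guesses_per_second: u64) -> u64 {
    position_in_list / guesses_per_second
}

fn main() {
    // "Password1"-style guesses come first, so they fall almost instantly...
    assert_eq!(seconds_to_find(1_000, 1_000), 1);

    // ...while a password at position 2^40 holds out for decades, even at
    // the recommended cap of 1,000 guesses per second.
    let years = seconds_to_find(1u64 << 40, 1_000) / (3600 * 24 * 365);
    assert!(years > 30);
}
```

The position numbers here are hypothetical; real crackers order their lists by learned password patterns, which is precisely why a password that merely looks complex can sit near the top.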

The mathematics of guessing passwords

A starting point for password guessing are always passwords known from previous data leaks. For example, security professionals often refer to rockyou.txt: a list with 14 million passwords leaked in 2009 in the RockYou breach.

If your password is somewhere on this list, even at 1,000 guesses per second it will take at most 14,000 seconds (less than 4 hours) to find your password. This isn’t exactly a long time, and that’s already assuming that your password manager vendor has done their homework. As past experience shows, this isn’t an assumption to be relied on.

Since we are talking about computers here, the “proper” way to express large numbers is via powers of two. So we say: a password on the RockYou list has less than 24 bits of entropy, meaning that it will definitely be found after 2²⁴ (16,777,216) guesses. Each bit of entropy added to the password results in twice the guessing time.
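The arithmetic can be checked directly; the numbers below mirror the article’s RockYou example:

```rust
fn main() {
    // Entropy of a password drawn from a list of N candidates is log2(N) bits.
    let rockyou: f64 = 14_000_000.0;
    let bits = rockyou.log2();
    assert!(bits < 24.0); // less than 24 bits of entropy

    // Worst case at the recommended limit of 1,000 guesses per second:
    let hours = rockyou / 1000.0 / 3600.0;
    assert!(hours < 4.0); // the whole list is exhausted in under 4 hours
}
```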

But obviously the RockYou passwords are too primitive. Many of them wouldn’t even be accepted by a modern password manager. What about using a phrase from a song? Shouldn’t it be hard to guess because of its length already?

Somebody calculated (and likely overestimated) the number of available song phrases as 15 billion, so we are talking about at most 34 bits of entropy. This appears to raise the password guessing time to half a year.

Except: the song phrase you are going to choose won’t actually be at the bottom of any list. That’s already because you don’t know all the 30 million songs out there. You only know the reasonably popular ones. In the end it’s only a few thousand songs you might reasonably choose, and your date of birth might help narrow down the selection. Each song has merely a few dozen phrases that you might pick. You are lucky if you get to 20 bits of entropy this way.
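The same arithmetic applies to song phrases; the “realistic” figures below are my own illustrative guesses, not data from the article:

```rust
fn main() {
    // Upper bound from the article: ~15 billion candidate song phrases.
    let phrases: f64 = 15.0e9;
    assert!(phrases.log2() < 34.0); // at most 34 bits

    // More realistic (hypothetical numbers): a few thousand songs someone
    // actually knows, times a few dozen usable phrases per song.
    let realistic: f64 = 20_000.0 * 50.0;
    assert!(realistic.log2() < 20.0); // ~20 bits at best
}
```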

Estimating the complexity of a given password

Now it’s hard to tell how quickly real password crackers will narrow down on a particular password. One can however look at the patterns that went into a particular password and estimate how many bits each contributes to the result. Consider this XKCD comic:

An XKCD comic comparing the complexity of the passwords “Tr0ub4dor&3” and “correct horse battery staple”. Source: XKCD 936

An uncommon base word chosen from a dictionary with approximately 50,000 words contributes 16 bits. The capitalization at the beginning of the word on the other hand contributes only one bit because there are only two options: capitalizing or not capitalizing. There are common substitutions and some junk added at the end contributing a few more bits. But the end result is a rather unimpressive 28 bits, maybe a few more because the password creation scheme has to be guessed as well. So while this password looks complex, it isn’t actually strong.

The (unmaintained) zxcvbn library tries to automate this process. You can try it out on a webpage, it runs entirely in the browser and doesn’t upload your password anywhere. The guesses_log10 value in the result can be converted to bits: divide by 3 and multiply by 10.

For Tr0ub4dor&3 it shows guesses_log10 as 11. Calculating 11 ÷ 3 × 10 gives us approximately 36 bits.
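The underlying conversion is just a change of logarithm base, since log2(10) ≈ 3.32 (hence the rough “divide by 3, multiply by 10” rule); a quick sketch:

```python
import math

def guesses_log10_to_bits(guesses_log10: float) -> float:
    """Convert zxcvbn's guesses_log10 value to bits of entropy."""
    # 10**g guesses correspond to log2(10**g) = g * log2(10) bits.
    return guesses_log10 * math.log2(10)

bits = guesses_log10_to_bits(11)  # Tr0ub4dor&3: about 36.5 bits
```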

Note that zxcvbn is likely to overestimate password complexity, like it happened here. While this library knows some common passwords, it knows too few. And while it recognizes some English words, it won’t recognize some of the common word modifications. You cannot count on real password crackers being similarly unsophisticated.

How strong are real passwords?

So far we’ve only seen password creation approaches that max out at approximately 35 bits of entropy. My guess is that this is in fact the limit for almost any human-chosen password. Unfortunately, at this point it is only my guess. There isn’t a whole lot of information to either support or disprove it.

For example, Microsoft published a large-scale passwords study in 2007 that arrives at an average (not maximum) password strength of 40 bits. However, this study is methodically flawed and wildly overestimates password strength. In 2007 neither XKCD comic 936 nor zxcvbn existed. So the researchers calculated password strength by looking at the character classes used. Going by their method, “Password1!” is a perfect password, a whopping 63 bits strong. The zxcvbn estimate for the same password is merely 14 bits.

Another data point is the password strength indicator used for example on LastPass and Bitwarden registration pages. How strong are the passwords at the maximum strength?

Screenshot of a page titled “Create account.” The entered master password is “abcd efgh 1!” and the strength indicator below it is full.

Turns out, both these password managers use zxcvbn on their registration pages. And both will display a full strength bar for the maximum zxcvbn score: 4 out of 4. Which is assigned to any password that zxcvbn considers stronger than 33 bits.

Finally, there is another factor to consider: we aren’t very good at remembering complex passwords. A study from 2014 concluded that humans are capable of remembering passwords with 56 bits of entropy via a method the researchers called “spaced repetition.” Even using their method, half of the participants needed more than 35 login attempts in order to learn this password.

Given this, it’s reasonable to assume that in reality most people choose considerably weaker passwords: passwords that are still shown as “strong” by their password manager’s registration page, and that they can remember without a week of exercises.

Choosing a truly strong password

As I mentioned already, we are terrible at choosing strong passwords. The only realistic way to get a strong password is having it generated randomly.

But we are also very bad at remembering some gibberish mix of letters and digits. Which brings us to passphrases: sequences of multiple random words, much easier to remember at the same strength.

A typical way to generate such a passphrase would be diceware. You could use the EFF word list for five dice for example. Either use real dice or a website that will roll some fake dice for you.

Let’s say the result is ⚄⚀⚂⚅⚀. You look up 51361 in the dictionary and get “renovate.” This is the first word of your passphrase. Repeat the process to get the necessary number of words.
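For illustration, the dice procedure can be mimicked with Python’s secrets module (a sketch; the stub dictionary stands in for the full 7,776-entry EFF list):

```python
import secrets

# Tiny stand-in for the EFF five-dice word list (7,776 entries in reality).
WORDLIST = {
    "51361": "renovate",
    "11111": "abacus",
    "66666": "zoom",
}

def roll_word(wordlist):
    """Roll five dice and look the result up in the word list."""
    while True:
        key = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
        if key in wordlist:  # the real list has a word for every roll
            return wordlist[key]

def passphrase(words, wordlist):
    """Concatenate the requested number of independently rolled words."""
    return " ".join(roll_word(wordlist) for _ in range(words))
```

With the real list, each word is one of 7,776 equally likely choices, which is what makes the entropy easy to compute.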

Update (2023-01-31): If you want it more comfortable, the Bitwarden password generator will do all the work for you while using the same EFF word list (type has to be set to “passphrase”).

How many words do you need? As a “regular nobody,” you can probably feel confident if guessing your password takes a century on common hardware. While not impossible, decrypting your passwords will simply cost too much even on future hardware and won’t be worth it. Even if your password manager doesn’t protect you well and allows 1,000,000 guesses per second, a passphrase consisting of four words (51 bits of entropy) should be sufficient.

Maybe you are a valuable target however. If you hold the keys to lots of money or some valuable secrets, someone might decide to use more hardware for you specifically. You probably want to use at least five words then (64 bits of entropy). Even at a much higher rate of 1,000,000,000 guesses per second, guessing your password will take 900 years.

Finally, you may be someone of interest to a state-level actor. If you are an important politician, an opposition figure or a dissident of some kind, some unfriendly country might decide to invest lots of money in order to gain access to your data. A six-word passphrase (77 bits of entropy) should be out of reach even for those actors for the foreseeable future.
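These guessing times follow directly from the word list size; a quick verification of the numbers above (assuming the 7,776-word EFF list):

```python
import math

WORDS_IN_LIST = 7776                       # EFF five-dice word list
BITS_PER_WORD = math.log2(WORDS_IN_LIST)   # about 12.9 bits per word

def crack_years(words, guesses_per_second):
    """Worst-case time to try every passphrase of the given length."""
    return WORDS_IN_LIST ** words / guesses_per_second / (3600 * 24 * 365)

four_words = crack_years(4, 1_000_000)       # roughly 116 years: "a century"
five_words = crack_years(5, 1_000_000_000)   # roughly 900 years
```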

Firefox NightlyA Variety of Improvements At January’s End – These Weeks in Firefox: Issue 131


      • Before:

Screenshot of the about:logins page before new UI changes, where login updates were simply displayed as text at the bottom of the page.

      • After:

Screenshot of the about:logins page after new UI changes, where a new timeline visual is now visible at the bottom of the page to indicate when a login was last updated.

  • Picture-in-Picture updates:
    • kpatenio updated the Dailymotion wrapper, so captions should appear again on the PiP window
    • kpatenio resolved issues where PiP touch events changed playback while toggling PiP
    • Niklas fixed the Netflix wrapper when seeking forward or backward and scrubbing
    • Niklas increased the seek bar slider clickable area, making it easier to select the scrubber with the mouse
  • The DevTools team have updated our main highlighters to use less aggressive styling when prefers-reduced-motion is enabled (bug)

A screenshot of the DevTools' new highlighter appearing on the Wikipedia landing page when a user enables the setting prefers-reduced-motion.

  • There is a new context menu option for opening source view in Firefox Profiler. Thanks to our contributor Krishna Ravishankar!

Screenshot of a new Firefox Profiler context menu option, particularly for viewing a source file called Interpreter.cpp.

Friends of the Firefox team


  • [mconley] Introducing Jonathan Epstein (jepstein) who is coming to us from the Rally team as a new Engineering Manager! Welcome!

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • CanadaHonk [:CanadaHonk]
  • Gregory Pappas [:gregp]
  • Jonas Jenwald [:Snuffleupagus]
  • kernp25
  • Oriol Brufau [:Oriol]
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Oriol Brufau contributed a fix to the “tabs.move” API method when used to move multiple tabs into a different browser window – Bug 1809364
  • Gregory Pappas contributed a new “matchDiacritics” option to the (Firefox-specific) find API – Bug 1680606
  • All manifest_version 3 extensions that want to use the “webRequest.filterResponseData” API method will have to request the new “webRequestFilterResponse” permission (in addition to the “webRequest” and “webRequestBlocking” permissions that were already needed to get access to this API in manifest_version 2 extensions) – Bug 1809235
  • declarativeNetRequest API:
    • Constants representing the values used internally to enforce limits on the DNR rules that each extension is allowed to define and enable are now exposed as declarativeNetRequest API namespace properties – Bug 1809721
    • Update JSONSchema and tests to explicitly cover the expected default value set for the DNR rule condition property “isUrlFilterCaseSensitive”, which should be false per consensus reached in the WECG (WebExtensions Community Group) – Bug 1811498
  • As part of tweaks that aim to reduce the number of changes needed to port a Chrome manifest_version 3 extension to Firefox, in Firefox >= 110 the optional “extension_ids” property part of the manifest_version 3 “web_accessible_resources” manifest property can be set to an empty array – Bug 1809431
WebExtensions Framework
  • Extensions button and panel:
    • Cleanups for the remaining bits of the legacy implementation (which also covered the removal of the pref) – Bug 1799009, Bug 1801540
    • Introduction of a new “Origin Controls” string to be shown to the users in the extensions panel when an extension has access to the currently active tab but limited to the current visit (which will only be valid while the tab is not navigated) – Bug 1805523

Developer Tools

  • Thanks to Tom for grouping CSP warnings in the console (bug)

A screenshot showcasing more descriptive Content Security Policy, alias CSP, warnings on the browser console.

  • Thanks to rpl for fixing dynamic updates of the extension storage in the Storage panel (bug)
  • Alex fixed a recent regression for the Browser Toolbox in parent process mode, where several panels would start breaking when doing a navigation in the Browser (bug)
  • Alex also fixed several issues in the Debugger for sources created using `new Function` (eg bug)
  • Nicolas fixed several bugs for the autocomplete in the Browser Toolbox / Console, which could happen when changing context in the context selector (bug and bug)
WebDriver BiDi
  • Thanks to :CanadaHonk for fixing bugs or adding missing features in our CDP implementation (bug, bug, bug, bug, bug)
  • Henrik updated events of the browsingContext module (eg `domContentLoaded`, `load`, …) to provide a timestamp, which can be useful to collect page performance data (bug)
  • Sasha updated our vendored Puppeteer to version 18.0.0, which now includes a shared test expectation file, which means less maintenance for us and a better test coverage for Firefox on puppeteer side (bug and bug).
  • We implemented the network.responseCompleted event (bug) and updated our example web-client for webdriver BiDi to provide a simplified version of a network monitor.

ESMification status

  • ESMified status:
    • browser: 46.1%
      • Dropped a little bit because we removed a large number of sys.mjs files we didn’t need any more.
    • toolkit: 38.3%
      • Bilal has been working on migrating various actors.
    • Total: 46.54% (up from 46.0%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)


Performance Tools (aka Firefox Profiler)

  • Support zip files on windows. Thanks to our contributor Krishna Ravishankar!
  • Scroll item horizontally in the virtuallist, taking into account the fixed size.
  • Remove the “optimizations” field from the frame table. This should reduce our profile data size.
  • Allow pinning source code view files to specific git tags.
  • Enable screenshots on talos profiling jobs on treeherder.
  • Remove some Timestamp::Now calls when the profiler is not running.
  • Fix Firefox version inside the profile data.

Search and Navigation

Storybook / Reusable components

A screenshot of the about:addons page displaying details about an add-on called "Tree Style Tab" and showcasing moz-toggle components in use.

  • Bug 1809457 –  Our common stylesheet no longer conflicts with Storybook styles
  • Bug 1801927 – The “Learn more” links in the about:preferences#general tab have been updated to use `moz-support-link`
  • Bug 1803155 (Heading to autoland) – ./mach storybook install is going away in favor of automatically installing dependencies when ./mach storybook is run

The Talospace ProjectFirefox 109 on POWER

Firefox 109 is out with new support for Manifest V3 extensions, but without the passive-aggressive deceitful crap Google was pushing (yet another reason not to use Chrome). There are also modest HTML, CSS and JS improvements.

As before, linking still requires patching for bug 1775202 using this updated small change, or the browser won't link on 64-bit Power ISA (alternatively put --disable-webrtc in your .mozconfig if you don't need WebRTC). Otherwise the browser builds and runs fine with the LTO-PGO patch for Firefox 108 and the .mozconfigs from Firefox 105.

Mozilla Localization (L10N)L10n Report: January 2023 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Punjabi from Pakistan (pa-pk) was recently added to Pontoon.

New content and projects

What’s new or coming up in Firefox desktop

Firefox 111, shipping to release users on March 14, is going to include two new locales: Friulian (fur) and Sardinian (sc). Congratulations to the team for this achievement, it’s been a long time since we added new locales to release (Firefox 91).

A new locale is also available in Nightly, Saraiki (skr). Unfortunately, it’s currently blocked by missing information in the Unicode (CLDR) database that prevents the layout from being correctly displayed with right-to-left direction. If you want to help them, feel free to reach out to the locale manager.

In terms of content, one major feature coming is Cookie Banner Reduction, which will allow users to automatically reject all cookies in cookie banner requests. Several strings already landed over the last weeks, but expect some changes and instructions on how to test the feature (and different variations of messages used for testing).

What’s new or coming up in mobile

Just as for Firefox desktop, the v111 release ships on March 14 for all mobile projects, and also contains strings for the new Cookie Banner Reduction feature (see section above). Stay tuned for more information around that.

What’s new or coming up in web projects

The site is going to go through some transformation this year. It involves restructuring such as removing pages with duplicate information, consolidating other pages, redesigning the site, and rewriting some copy. Having said that, the effort involves several cross-functional teams. The impact of these changes on localization is expected in the second half of the year.

If your locales have some catching up to do, please continue working. Your work won’t go to waste, as it will be stored in the translation memory in Pontoon. Speaking of which, congratulations to the Saraiki (skr) team for completing the project. The site was recently launched on production.


Strings related to tools for reviewers and admins have been removed from Pontoon. The features used to be available to vetted contributors plus Mozilla staff and contractors in the production environment, but that is no longer the case. Since the localized strings can’t be reviewed in context by localizers, the team has decided to keep these strings from landing in Pontoon. Currently the feature is partially localized if your locale has done some or all of the work in the past.

Firefox Accounts

Behind the scenes, the Firefox Accounts team are in the process of refactoring a number of pages to use Fluent. This means we will see a number of strings reusing translations from older file formats with updated Fluent syntax. These strings are in the process of landing, but won’t be exposed until the rework is done, so it may be some time before strings can be reviewed in production.

Congratulations to Baurzhan of the Kazakh (kk) team for recently raising the completion rate of his locale from 30% to 100%. The Kazakh locale is already activated on staging and will soon be released to production.

What’s new or coming up in SUMO

  • What did SUMO accomplish in 2022? Check out our 2022 summary in this blog post.
  • Please join our discussion on how we would like to present ourselves in Mozilla.Social!
  • SUMO just redesigned our Contribute Page recently. Check out the news and the new page if you haven’t already!
  • The Android mobile team (Firefox for Android and Firefox Focus for Android) have decided to move to Bugzilla. If you’re a mobile contributor, make sure to direct users to the right place for bug report by referring them to this article.
  • Check out the SUMO Sprint for Firefox 109 to learn more about how you can help with this release.
  • Are you a KB or article localization contributor and experience issue with special characters when copying tags? Please chime in on the discussion thread or directly in the bug report (Thanks to Tim for filing that bug).
  • If you’re a Social Support or Mobile Store Support contributor, make sure to watch the contributor forum to get updates about queue stats every week. Kiki will post the update by the end of the week to make sure that you’re updated. Here’s the latest one from last week.

You can now learn more about Kitsune releases by following this Discourse topic.

What’s new or coming up in Pontoon

Changes to the Editor

Pontoon’s editor is undergoing improvements, thanks to some deeper data model changes. The “rich” editor is now able to work with messages with multiple selectors, with further improvements incoming as this work progresses.

As with all other aspects of Pontoon, please let us know if you’ve any comments on these changes as they are deployed.


We started evaluating the Pretranslation feature. Testing is currently limited to 2 locales, but we’ll start adding more when we reach a satisfactory level of quality and stability.

New contributions

Thanks to our army of awesome contributors for recent improvements to our codebase:

  • Willian made his first contributions to Pontoon, including upgrading our legacy jQuery library.
  • Tomás fixed a bug in the local setup, which was also his first contribution.
  • Vishal fixed several bugs in the Pretranslation feature, which he developed a while ago.


Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Image by Elio Qoshi

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Rust Programming Language BlogAnnouncing Rust 1.67.0

The Rust team is happy to announce a new version of Rust, 1.67.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.67.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.67.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.67.0 stable

#[must_use] effective on async fn

async functions annotated with #[must_use] now apply that attribute to the output of the returned impl Future. The Future trait itself is already annotated with #[must_use], so all types implementing Future are automatically #[must_use], which meant that previously there was no way to indicate that the output of the Future is itself significant and should be used in some way.

With 1.67, the compiler will now warn if the output isn't used in some way.

#[must_use]
async fn bar() -> u32 { 0 }

async fn caller() {
    bar().await;
}

warning: unused output of future returned by `bar` that must be used
 --> src/
  |
5 |     bar().await;
  |     ^^^^^^^^^^^
  |
  = note: `#[warn(unused_must_use)]` on by default

std::sync::mpsc implementation updated

Rust's standard library has had a multi-producer, single-consumer channel since before 1.0, but in this release the implementation is switched out to be based on crossbeam-channel. This release contains no API changes, but the new implementation fixes a number of bugs and improves the performance and maintainability of the implementation.

Users should not notice any significant changes in behavior as of this release.

Stabilized APIs

These APIs are now stable in const contexts:

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.67.0

Many people came together to create Rust 1.67.0. We couldn't have done it without all of you. Thanks!

Wladimir PalantIPinside: Korea’s mandatory spyware

Note: This article is also available in Korean.

On our tour of South Korea’s so-called security applications we’ve already taken a look at TouchEn nxKey, an application meant to combat keyloggers by … checks notes … making keylogging easier. Today I want to shed some light on another application that many people in South Korea had to install on their computers: IPinside LWS Agent by Interezen.

The stated goal of the application is retrieving your “real” IP address to prevent online fraud. I found however that it collects way more data. And while it exposes this trove of data to any website asking politely, it doesn’t look like it is all too helpful for combating actual fraud.

How does it work?

Similarly to TouchEn nxKey, the IPinside LWS Agent application also communicates with websites via a local web server. When a banking website in South Korea wants to learn more about you, it will make a JSONP request to localhost:21300. If this request fails, the banking website will deny entry and ask that you install IPinside LWS Agent first. So in South Korea running this application isn’t optional.

On the other hand, if the application is present the website will receive various pieces of data in the wdata, ndata and udata fields. Quite a bit of data actually:

Screenshot of a browser window with the address open. The response is a jQuery callback with some data including wdata, ndata and udata fields and base64-encoded values.

This data is supposed to contain your IP address. But even from the size of it, it’s obvious that it cannot be only that. In fact, there is a whole lot more data being transmitted.

What data is it?


Let’s start with wdata which is the most interesting data structure here. When decrypted, you get a considerable amount of binary data:

A hex dump with some binary data but also obvious strings like QEMU Harddisk or Gigabit Network Connection

As you can see from the output, I am running IPinside in a virtual machine. It even says VirtualBox at the end of the output, even though this particular machine is no longer running on VirtualBox.

Another obvious thing are the two hard drives of my virtual machine, one with the serial number QM00001 and another with the serial number abcdef. That F0129A45 is the serial number of the primary hard drive volume. You can also see my two network cards, both listed as Intel(R) 82574L Gigabit Network Connection. There is my keyboard model (Standard PS/2 Keyboard) and keyboard layout (de-de).

And if you look closely, you’ll even notice the byte sequences c0 a8 7a 01 (standing for, my gateway’s IP address), c0 a8 7a 8c (, the local IP address of the first network card) and c0 a8 7a 0a (, the local IP address of the second network card).

But there is way more. For example, that 65 (letter e) right before the hard drive information is the result of calling GetProductInfo() function and indicates that I’m running Windows 10 Home. And 74 (letter t) before it encodes my exact Windows version.

Information about running processes

One piece of the data is particularly interesting. Don’t you wonder where the firefox.exe comes from here? It indicates that the Mozilla Firefox process is running in the background. This information is transmitted despite the active application being Google Chrome.

See, websites give IPinside agent a number of parameters that determine the output produced. One such parameter is called winRemote. It’s mildly obfuscated, but after removing the obfuscation you get:


So banking websites are interested in whether you are running remote access tools. If a process is detected that matches one of these strings, the match is added to the wdata response.

And of course this functionality isn’t limited to searching for remote access tools. I replaced the winRemote parameter by AGULAAAAAAtmaXJlZm94LmV4ZQA= and got the information back whether Firefox is currently running. So this can be abused to look for any applications of interest.
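That replacement parameter is easy to take apart (a quick check; interpreting byte 7 as a length prefix is my own reading of the format):

```python
import base64

payload = base64.b64decode("AGULAAAAAAtmaXJlZm94LmV4ZQA=")
# The decoded bytes contain the process name in the clear, preceded by
# what looks like a length byte: 0x0b == 11 == len("firefox.exe").
name_length = payload[7]
name = payload[8:8 + name_length].decode("ascii")  # "firefox.exe"
```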

And even that isn’t the end of it. IPinside agent will match substrings as well! So it can tell you whether a process with fire in its name is currently running.

That is enough for a website to start searching your process list without knowing what these processes could be. I created a page that would start with the .exe suffix and do a depth-first search. The issue here was mostly the IPinside response being so slow, with each request taking half a second. I slightly optimized the performance by testing multiple guesses with one request and got a proof of concept page that would turn up a process name every 40-50 seconds:

Screenshot of a page saying: “Please wait, fetching your process list… Testing suffix oerver-svg.exe cortana.exe.” It also lists already found processes: i3gproc.exe asdsvc.exe wpmsvc.exe i3gmainsvc.exe

With sufficient time, this page could potentially enumerate every process running on the system.
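The search described above can be sketched as follows (a simplified reimplementation: the stand-in oracle function replaces the slow localhost queries, and the real agent even matches arbitrary substrings, while the sketch only needs suffix matches):

```python
# Stand-in for the process list that only the IPinside agent can see.
PROCESSES = {"firefox.exe", "i3gproc.exe"}

def oracle(suffix):
    """Does any process name end with this suffix?"""
    return any(name.endswith(suffix) for name in PROCESSES)

def enumerate_names(alphabet="abcdefghijklmnopqrstuvwxyz0123456789."):
    """Depth-first search, extending known-good suffixes one character."""
    found = set()
    stack = [".exe"]
    while stack:
        current = stack.pop()
        longer = [c + current for c in alphabet if oracle(c + current)]
        if longer:
            stack.extend(longer)
        elif oracle(current):
            found.add(current)  # cannot be extended: a complete name
    return found
```

Each oracle call corresponds to one request against the local server, which is why the real attack was limited by the half-second response time rather than by the search itself.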


The ndata part of the response is much simpler. It looks like this:


No, I didn’t mess up decoding the data. Yes, is really in the response. The idea here was actually to use (reverse tilde symbol) as a separator. But since my operating system isn’t Korean, the character encoding for non-Unicode applications (like IPinside LWS Agent) isn’t set to EUC-KR. The application doesn’t expect this and botches the conversion to UTF-8.

▚▚▚.▚▚▚.▚▚▚.▚▚▚ on the other hand was me censoring my public IP address. The application gets it by two different means. VD1NATIP appears to come from my home router.

HDATAIP on the other hand comes from a web server. Which web server? That’s determined by the host_info parameter that the website provides to the application. It is also obfuscated, the actual value is:

Only the first two parts appear to be used: the application makes a request to that server. One of the response headers is RESPONSE_IP. You guessed it: that’s your IP address as this web server sees it.

The application uses low-level WS2_32.DLL APIs here, probably as an attempt to prevent this traffic from being routed through some proxy server or VPN. After all, the goal is deanonymizing you.


Finally, there is udata where “u” stands for “unique.” There are several different output types here, this is type 13:

[52-54-00-A7-44-B5:1:0:Intel(R) 82574L Gigabit Network Connection];[52-54-00-4A-FD-6E:0:0:Intel(R) 82574L Gigabit Network Connection #2];$[QM00001:QEMU HARDDISK:];[abcdef:QEMU HARDDISK:];[::];[::];[::];

Once again a list of network cards and hard drives, but this time MAC addresses of the network cards are listed as well. Other output types are mostly the same data in different formats, except for type 30. This one contains a hexadecimal CPU identifier, representing 16 bytes generated by mashing together the results of 15 different CPUID calls.

How is this data protected?

So there is a whole lot of data which allows deanonymizing users, learning about the hardware and software they use, potentially facilitating further attacks by exposing which vulnerabilities are present on their systems. Surely this kind of data is well-protected, right? I mean: sure, every Korean online banking website has access to it. And Korean government websites. And probably more Interezen customers. But nobody else, right?

Well, the server under localhost:21300 doesn’t care who it responds to. Any website can request the data. But it still needs to know how to decode it.

When talking about wdata, there are three layers of protection being applied: obfuscation, compression and encryption. Yes, obfuscating data by XOR’ing it with a single random byte probably isn’t adding much protection. And compression doesn’t really count as protection either if people can easily find the well-known GPL-licensed source code that Interezen used without complying with the license terms. But there is encryption, and it is even using public-key cryptography!
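To see how little the XOR layer adds, here is a generic sketch of stripping such obfuscation (my own toy code, not Interezen’s; the zlib-style magic bytes are merely an example of a known prefix):

```python
def xor_single_byte(data, key):
    """XOR every byte with one key byte; applying it twice restores the data."""
    return bytes(b ^ key for b in data)

def recover_key(obfuscated, known_prefix):
    """Try all 256 possible keys, keep the one yielding the expected prefix."""
    for key in range(256):
        if xor_single_byte(obfuscated[:len(known_prefix)], key) == known_prefix:
            return key
    raise ValueError("no key matched")

blob = xor_single_byte(b"\x78\x9c...compressed data...", 0x5A)
key = recover_key(blob, b"\x78\x9c")  # 0x78 0x9c: a typical zlib header
```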

So the application only contains the public RSA key, that’s not sufficient to decrypt the data. The private key is only known to Interezen. And any of their numerous customers. Let’s hope that all these customers sufficiently protect this private key and don’t leak it to some hackers.

Otherwise RSA encryption can be considered secure even with moderately sized keys. Except… we aren’t talking about a moderately sized key here. We aren’t even talking about a weak key. We are talking about a 320 bits key. That’s shorter than the very first key factored in the RSA Factoring Challenge. And that was in April 1991, more than three decades ago. Sane RSA libraries don’t even work with keys this short.

I downloaded msieve and let it run on my laptop CPU, occupying a single core of it:

$ ./msieve 108709796755756429540066787499269637…

sieving in progress (press Ctrl-C to pause)
86308 relations (21012 full + 65296 combined from 1300817 partial), need 85977
sieving complete, commencing postprocessing
linear algebra completed 80307 of 82231 dimensions (97.7%, ETA 0h 0m)
elapsed time 02:36:55

Yes, it took me 2 hours and 36 minutes to calculate the private key on very basic hardware. That’s how much protection this RSA encryption provides.

When talking about ndata and udata, things look even more dire. The only protection layer here is encryption. No, not public-key cryptography but symmetric encryption via AES-256. And of course the encryption key is hardcoded in the application, there is no other way.

To add insult to injury, the application produces identical ciphertext on each run. At first I thought this to be the result of the deprecated ECB block chaining mode being used. But: no, the application uses CBC block chaining mode. But it fails to pass in an initialization vector, so the cryptography library in question always fills the initialization vector with zeroes.

Which is a long and winded way of saying: the encryption would be broken regardless of whether one can retrieve the encryption key from the application.
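The zero-IV problem is easy to demonstrate with a toy CBC implementation (XOR stands in for the block cipher here; real AES-CBC has exactly the same dependence on the IV):

```python
BLOCK = 16

def toy_cbc_encrypt(plaintext, key, iv):
    """CBC mode with XOR as a stand-in block cipher."""
    assert len(plaintext) % BLOCK == 0 and len(key) == len(iv) == BLOCK
    ciphertext, previous = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        mixed = bytes(a ^ b for a, b in zip(block, previous))  # CBC chaining
        previous = bytes(a ^ b for a, b in zip(mixed, key))    # "encryption"
        ciphertext += previous
    return ciphertext

key = b"K" * BLOCK
message = b"identical input!" * 2
# With the IV fixed to all zeroes, equal plaintexts encrypt identically.
c1 = toy_cbc_encrypt(message, key, bytes(BLOCK))
c2 = toy_cbc_encrypt(message, key, bytes(BLOCK))
```

A fresh random IV per message is what makes CBC ciphertexts differ even for identical inputs; a hardcoded all-zero IV throws that property away.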

To sum up: no, this data isn’t really protected. If the user has the IPinside LWS Agent installed, any website can access the data it collects. The encryption applied is worthless.

And the overall security of the application?

That web server the application runs on port 21300, what is it? Turns out, it’s their own custom code doing it, built on low-level network sockets functionality. That’s perfectly fine of course, who hasn’t built their own rudimentary web server using substring matches to parse requests and deployed it to millions of users?

Their web server still needs SSL support, so it relies on the OpenSSL library for that. Which library version? Why, OpenSSL 1.0.1j of course. Yes, it was released more than eight years ago. Yes, end of support for OpenSSL 1.0.1 was six years ago. Yes, there were 11 more releases on the 1.0.1 branch after 1.0.1j, with numerous vulnerabilities fixed, and not even these fixes made it into IPinside LWS Agent.

Sure, that web server is also single-threaded, why wouldn’t it be? It’s not like people will open two banking websites in parallel. Yes, this makes it trivial for a malicious website to lock up that server with long-running requests (denial-of-service attack). But that merely prevents people from logging into online banking and government websites, not a big deal.

Looking at how this server is implemented, there is code that essentially looks like this:

BYTE inputBuffer[8192];
char request[8192];
char debugString[8192];

memset(inputBuffer, 0, sizeof(inputBuffer));
memset(request, 0, sizeof(request));

int count = ssl_read(ssl, inputBuffer, sizeof(inputBuffer));
if (count <= 0)
    return;

memcpy(request, inputBuffer, count);

memset(debugString, 0, sizeof(debugString));
sprintf(debugString, "Received data from SSL socket: %s", request);

handle_request(request);

Can you spot the issues with this code?

Come on, I’m waiting.

Yes, I’m cheating. Unlike you I actually debugged that code and saw live just how badly things went here.

First of all, it can happen that ssl_read will produce exactly 8192 bytes and fill the entire buffer. In that case, inputBuffer won’t be null-terminated. And its copy in request won’t be null-terminated either. So attempting to use request as a null-terminated string in sprintf() or handle_request() will read beyond the end of the buffer. In fact, with the memory layout here it will continue into the identical inputBuffer memory area and then into whatever comes after it.

So the sprintf() call actually receives more than 16384 bytes of data, and its target buffer won’t be nearly large enough for that. But even if this data weren’t missing the terminating zero: taking an 8192 byte string, adding a bunch more text to it and trying to squeeze the result into an 8192 byte buffer isn’t going to work.

This isn’t an isolated piece of bad code. While researching the functionality of this application, I couldn’t help noticing several more stack buffer overflows and another buffer over-read. To my (very limited) knowledge of binary exploitation, these vulnerabilities cannot be turned into Remote Code Execution thanks to StackGuard and SafeSEH protection mechanisms being active and effective. If somebody more experienced finds a way around that however, things will get very ugly. The application has neither ASLR nor DEP protection enabled.

Some of these vulnerabilities can definitely crash the application however. I created two proof of concept pages which did so repeatedly. And that’s another denial-of-service attack, also effectively preventing people from using online banking in South Korea.

When will it be fixed?

I submitted three vulnerability reports to KrCERT on October 21st, 2022. By November 14th KrCERT confirmed forwarding all these reports to Interezen. I did not receive any communication after that.

Prior to this disclosure, a Korean reporter asked Interezen to comment. They confirmed receiving my reports but claimed that they only received one of them on January 6th, 2023. Supposedly because of that they plan to release their fix in February, at which point it would be up to their customers (meaning: banks and such) to distribute the new version to the users.

Like other similar applications, this software won’t autoupdate. So users will need to either download and install an update manually or perform an update via a management application like Wizvera Veraport. Neither is particularly likely unless banks start rejecting old IPinside versions and requiring users to update.

Does IPinside actually make banking safer?

Interezen isn’t merely providing the IPinside agent application. According to their self-description, they are a company that specializes in Big Data. They provide data collection and analysis services to numerous banks, insurance companies, and government agencies.

Screenshot of a website section titled: “Client Companies. With the number one products in this industry, INTEREZEN is providing the best services for more than 200 client companies.” Below it the logos of Woori Bank, Industrial Bank of Korea, KEB Hana Card, National Tax Service, MG Non-Life Insurance, Hyundai Card as well as a “View more” button.

Online I could find a manual from 2009 showing screenshots from Interezen’s backend solution. One can see all website visitors being tracked along with their data. Back in 2009 the application collected barely more than the IP addresses, but it can be assumed that the current version of this backend makes all the data provided by the agent application accessible.

Screenshot of a web interface listing requests for a specific date range. Some of the table columns are: date, webip, proxyip, natip, attackip<figcaption> Screenshot from IPinside 3.0 product manual </figcaption>

In addition to showing detailed information on each user, in 2009 this application was already capable of producing statistical overviews based e.g. on IP address, location, browser or operating system.

Screenshot of a web interface displaying user shares for Windows 98, Windows 2000, Windows 2003 and Windows XP<figcaption> Screenshot from IPinside 3.0 product manual </figcaption>

The goal here isn’t protecting users, it’s protecting banks and other Interezen customers. The idea is that a bank will have it easier to detect and block fraud or attacks if it has more information available to it. Fraudsters won’t simply be able to obfuscate their identities by using proxies or VPNs, banks will be able to block them regardless.

In fact, Interezen filed several patents in Korea for their ideas. The first one, patent 10-1005093 is called “Method and Device for Client Identification.” In the patent filing, the reason for the “invention” is the following (automatic translation):

The importance and value of a method for identifying a client in an Internet environment targeting an unspecified majority is increasing. However, due to the development of various camouflage and concealment methods and the limitations of existing identification technologies, proper identification and analysis are very difficult in reality.

It goes on to explain how cookies are insufficient and the user’s real IP address needs to be retrieved.

The patent 10-1088084 titled “Method and system for monitoring and cutting off illegal electronic-commerce transaction” expands further on the reasoning (automatic translation):

The present invention is a technology that enables real-time processing, which was impossible with existing security systems, in the detection/blocking of illegal transactions related to all e-commerce services through the Internet, and e-commerce illegal transactions that cannot but be judged as normal transactions with existing security technologies.

This patent also introduces the idea of forcing the users to install the agent in order to use the website.

But does the approach even work? Is there anything to stop fraudsters from setting up their own web server on localhost:21300 and feeding banking websites bogus data?

Ok, someone would have to reverse engineer the functionality of the IPinside LWS Agent application and reproduce it. I mean, it’s not that simple. It took me … checks notes … one work week, proof of concept creation included. Fraudsters certainly don’t have that kind of time to invest into deciphering all the various obfuscation levels here.

But wait, why even go there? A replay attack is far simpler, giving websites pre-recorded legitimate responses will just do. There is no challenge-handshake scheme here, no timestamp, nothing to prevent this attack. If anything, websites could recognize responses they’ve previously seen. But even that doesn’t really work: ndata and udata obfuscation has no randomness in it, the data is expected to be always identical. And wdata has only one random byte in its obfuscation scheme, that’s not sufficient to reliably distinguish legitimately identical responses from replayed ones.
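
For contrast, here is a minimal sketch of the kind of challenge-response scheme the protocol lacks. This is illustrative Python, not anything Interezen ships: the key handling and message format are my assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the server issues a fresh random nonce per request,
# and the agent mixes it into an HMAC over the response payload. A recorded
# (nonce, tag) pair then fails against any later nonce.

def make_challenge() -> bytes:
    return secrets.token_bytes(16)  # fresh per request

def sign_response(key: bytes, nonce: bytes, payload: bytes) -> bytes:
    return hmac.new(key, nonce + payload, hashlib.sha256).digest()

def verify_response(key: bytes, nonce: bytes, payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, nonce + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = secrets.token_bytes(32)
nonce = make_challenge()
tag = sign_response(key, nonce, b"wdata...")

assert verify_response(key, nonce, b"wdata...", tag)
# Replaying the old tag against a new challenge fails:
assert not verify_response(key, make_challenge(), b"wdata...", tag)
```

Even this toy scheme would force fraudsters to run the real agent (or a faithful reimplementation holding the key) instead of replaying canned responses.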

So it would appear that IPinside is massively invading people’s privacy, exposing way too much of their data to anybody asking, yet falling short of really stopping illegal transactions as they claim. Prove me wrong.

Firefox Nightly: New year, new updates to Firefox – These Weeks in Firefox: Issue 130


  • Thanks to Alex Poirot’s work on Bug 1410932, starting from Firefox 110, errors raised from WebExtensions content scripts should be visible in the related tab’s DevTools webconsole
  • Migrators for Opera, Opera GX and Vivaldi have been enabled by default and should hit release for Firefox 110 in February! Special thanks to Nolan Ishii and Evan Liang from CalState LA for their work there.
  • Various improvements to the Picture-in-Picture player window have landed – see the Picture-in-Picture section below for details.
    • Many of these improvements are currently gated behind a pref. Set `media.videocontrols.picture-in-picture.improved-video-controls.enabled` to true to check them out! You can file bugs here if you find any.
  • Firefox Profiler updates
    • Implement resizing columns in the TreeView (Merge PR #4204). This works in the Call Tree and the Marker Table that both use this component. Thanks Johannes Bechberger!
    • Add carbon metrics information to Firefox profiler (Merge PR #4372). Thanks Chris and Fershad!
  • Mark Banner fixed an issue with the default search engine being reset when the user upgrades to 108 if the profile was previously copied from somewhere else.

Friends of the Firefox team


  • [mconley] Welcome back mtigley!
  • [kpatenio] Welcome bnasar!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Thanks to Gregory Pappas’ contributions starting from Firefox 110:
    • tabs.getZoomSettings will properly support the “defaultZoomFactor” property (instead of always returning “1” as before) – Bug 1772166
    • a “close” action icon is now being shown next to the omnibox API’s deletable suggestions – Bug 1478095 (deletable suggestions have also been introduced recently, in Firefox 109 by Bug 1478095)
  • As part of the ongoing work on the declarativeNetRequest API: initial support for the Dynamic Ruleset has been introduced in Nightly 110 – Bug 1745764

Developer Tools

  • :jacksonwhale (new contributor) fixed a small CSS issue in RDM’s device dialog (bug)
  • :Oriol improved the way we display quotes to require less “escaping” (bug)
  • :Gijs fixed all the imports of sys.mjs modules in DevTools to use the proper names and APIs (bug)
  • :barret cleaned up a remaining usage of osfile.jsm in DevTools (bug)
  • Mark (:standard8) replaced all Cu.reportError calls with console.error (bug)
  • :arai fixed eager evaluation for expressions which can safely be considered as non-effectful (JSOp::InitAliasedLexical with hops == 0) (bug)
  • :ochameau removed the preference to switch back to the legacy Browser Toolbox (bug) and also removed the Browser Content Toolbox (bug).
    • The regular Browser Toolbox (and Browser Console) should now cover all your needs to debug the parent process and content processes (ask us if you have any trouble migrating from our Browser Content Toolbox workflows!).
  • :ochameau updated the version of the source-map library we use in-tree, which came with some performance improvements (bug)
WebDriver BiDi
  • :jdescottes implemented two events of the WebDriver BiDi network module: network.beforeRequestSent and network.responseStarted (bug and bug)
  • :whimboo added general support for serialization of platform objects (bug)
  • :whimboo migrated marionette’s element cache from the parent process to the content process which is the first step to be able to share element references between WebDriver BiDi and Classic (bug)
  • :sasha fixed the event subscription logic to allow consumers to subscribe for events on any context (bug)

ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

PDFs & Printing


Performance Tools (aka Firefox Profiler)

  • Various small UI changes
    • The initial selection and tree expansion in call trees is now better:
      • Procure a selection also when the components update (for example when changing threads) (PR #4382). Previously no selection was ever provided after the first load.
      • Skip idle nodes when procuring an initial selection in the call tree (PR #4383). Previously we would very often select an idle node, because that’s where the most samples were captured. Indeed threads are usually very idle, but we’re interested in the moments when they’re not.
    • Do not automatically hide tracks when comparing profiles (Merge PR #4384). Previously it was common that the computed diffing track was hidden by the auto-hide algorithm.
    • Handle copy gesture for flame graph and stack chart (PR #4392). Thanks Krishna Ravishankar!
  • Improved Chrome and Linux perf importers
    • Chrome importer: Add 1 to line and column numbers of cpu profile (Merge PR #4403). Thanks Khairul Azhar Kasmiran!
    • linux perf: fix parsing frames with whitespaces in the path (PR #4410). Thanks Joel Höner!
  • Text only
    • Add some more content to the home page, about Android profiling as well as opening files from 3rd party tools (PR #4360)
    • Prevent ctrl+wheel events in timeline (PR #4350)
    • Make more explicit the fact that MarkerPayload is nullable (PR #4368)
    • Sanitize URL and file-path properties in markers (Merge PR #4369). We didn’t use these properties before so this wasn’t a problem for current payloads, but future patches in Firefox want to use them, so it’s important to remove this possibly private data.
    • Unselect and scroll to top when clicking outside of the activity graph (Merge PR #4375)
    • Do not show a tooltip when the stack index of the hovered sample is null, instead of crashing (PR #4376)
    • Do not trigger transforms when searching in the stack chart (PR #4387)
    • Add development note on Flow (PR #4391). Thanks Khairul Azhar Kasmiran!
    • Scroll the network chart at mount time if there’s a selected item (PR #4385)
    • Add VSCode settings.json to bypass flow-bin `SHASUM256.txt.sign` check (PR #4393). Thanks Khairul Azhar Kasmiran!
    • Do not scroll the various views as a result of a pointer click (PR #4386)
    • Do not throw an error when browsertime provides null timestamps incorrectly (Merge PR #4399)
    • Make cause.time optional (PR #4408)
    • Using mouseTimePosition in Selection.js and added tests for that (Merge PR #3000). This is the second step of a work to show a vertical line indicating the time position from the mouse cursor in all chronological panels at the same time. Thanks Hasna Hena Mow!

Search and Navigation

Storybook / Reusable components

  • Our Storybook has been updated
    • mconley fixed the styling for the (in-progress) Migration Wizard component Bug 1806128
    • tgiles added the MozSupportLink, for easier SUMO page linking Bug 1770447
      • <a is="moz-support-link" support-page="my-feature"></a>
    • tgiles added an Accessibility panel in Storybook which runs some accessibility tests against components Bug 1804927
  • mstriemer extracted the panel-list element (menu) from about:addons
    • This isn’t a fully-fledged “Reusable Component” but it would be better than writing yet another menu 🙂 Bug 1765635
  • hjones updated the moz-toggle element to now be backed by a button, rather than a checkbox. Toggles/switches should not be “form-associated” and should instead perform an immediate action, similar to a button Bug 1804771

Will Kahn-Greene: Socorro Engineering: 2022 retrospective


2022 took forever. At the same time, it kind of flew by. 2023 is already moving along, so this post is a month late. Here's the retrospective of Socorro engineering in 2022.


Will Kahn-Greene: Bleach 6.0.0 release and deprecation

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

Bleach v6.0.0 released!

Bleach 6.0.0 cleans up some issues in linkify and with the way it uses html5lib so it's easier to reason about. It also adds support for Python 3.11 and cleans up the project infrastructure.

There are several backwards-incompatible changes, hence the 6.0.0 version.

I did some rough testing with a corpus of Standup messages data and it looks like bleach.clean is slightly faster with 6.0.0 than 5.0.0.

Using Python 3.10.9:

  • 5.0.0: bleach.clean on 58,630 items 10x: minimum 2.793s

  • 6.0.0: bleach.clean on 58,630 items 10x: minimum 2.304s

The other big change 6.0.0 brings with it is that it's now deprecated.

Bleach is deprecated

Bleach sits on top of html5lib which is not actively maintained. It is increasingly difficult to maintain Bleach in that context and I think it's nuts to build a security library on top of a library that's not in active development.

Over the years, we've talked about other options:

  1. find another library to switch to

  2. take over html5lib development

  3. fork html5lib and vendor and maintain our fork

  4. write a new HTML parser

  5. etc

With the exception of option 1, they greatly increase the scope of the work for Bleach. They all feel exhausting to me.

Given that, I think Bleach has run its course and this journey is over.

What happens now?


  1. Pass it to someone else?

    No, I won't be passing Bleach to someone else to maintain. Bleach is a security-related library, so making a mistake when passing it to someone else would be a mess. I'm not going to do that.

  2. Switch to an alternative?

    I'm not aware of any alternatives to Bleach. I don't plan to work on coordinating the migration for everyone from Bleach to something else.

  3. Oh my goodness--you're leaving us with nothing?

    Sort of.

I'm going to continue doing minimal maintenance:

  1. security updates

  2. support for new Python versions

  3. fixes for egregious bugs (begrudgingly)

I'll do that for at least a year. At some point, I'll stop doing that, too.

I think that gives the world enough time for either something to take Bleach's place, or for the Sanitizer web API to kick in, or for everyone to come to the consensus that they never really needed Bleach in the first place.

/images/bleach_deprecation.thumbnail.jpg <figcaption> Bleach. Tired. At the end of its journey. </figcaption>


Many thanks to Greg who I worked with on Bleach for a long while and maintained Bleach for several years. Working with Greg was always easy and his reviews were thoughtful and spot-on.

Many thanks to Jonathan who, over the years, provided a lot of insight into how best to solve some of Bleach's more squirrely problems.

Many thanks to Sam who was an indispensable resource on HTML parsing and sanitizing text in the context of HTML.

Where to go for more

For more specifics on this release, see here:

Documentation and quickstart here:

Source code and issue tracker here:

Wladimir Palant: Bitwarden design flaw: Server side iterations

In the aftermath of the LastPass breach it became increasingly clear that LastPass didn’t protect their users as well as they should have. When people started looking for alternatives, two favorites emerged: 1Password and Bitwarden. But do these do a better job at protecting sensitive data?

For 1Password, this question could be answered fairly easily. The secret key functionality decreases usability, requiring the secret key to be moved to each new device used with the account. But the fact that this random value is required to decrypt the data means that the encrypted data on 1Password servers is almost useless to potential attackers. It cannot be decrypted even for weak master passwords.

As to Bitwarden, the media mostly repeated their claim that the data is protected with 200,001 PBKDF2 iterations: 100,001 iterations on the client side and another 100,000 on the server. This being twice the default protection offered by LastPass, it doesn’t sound too bad. Except: as it turns out, the server-side iterations are designed in such a way that they don’t offer any security benefit. What remains are 100,000 iterations performed on the client side, essentially the same protection level as for LastPass.

Mind you, LastPass isn’t only being criticized for using a default iterations count that is three times lower than the current OWASP recommendation. LastPass also failed to encrypt all data, a flaw that Bitwarden doesn’t seem to share. LastPass also kept the iterations count for older accounts dangerously low, something that Bitwarden hopefully didn’t do either (Edit: yes, they did this, some accounts have considerably lower iteration count). LastPass also chose to downplay the breach instead of suggesting meaningful mitigation steps, something that Bitwarden hopefully wouldn’t do in this situation. Still, the protection offered by Bitwarden isn’t exactly optimal either.

Edit (2023-01-23): Bitwarden increased the default client-side iterations to 350,000 a few days ago. So far this change only applies to new accounts, and it is unclear whether they plan to upgrade existing accounts automatically. And today OWASP changed their recommendation to 600,000 iterations, it has been adjusted to current hardware.

Edit (2023-01-24): I realized that some of my concerns were already voiced in Bitwarden’s 2018 Security Assessment. Linked to it in the respective sections.

How Bitwarden protects users’ data

Like most password managers, Bitwarden uses a single master password to protect users’ data. The Bitwarden server isn’t supposed to know this password. So two different values are being derived from it: a master password hash, used to verify that the user is allowed to log in, and a key used to encrypt/decrypt the data.

A schema showing the master password being hashed with PBKDF2-SHA256 and 100,000 iterations into a master key. The master key is further hashed on the server side before being stored in the database. The same master key is turned into a stretched master key used to encrypt the encryption key, here no additional PBKDF2 is applied on the server side.<figcaption> Bitwarden password hashing, key derivation, and encryption. Source: Bitwarden security whitepaper </figcaption>

If we look at how Bitwarden describes the process in their security whitepaper, there is an obvious flaw: the 100,000 PBKDF2 iterations on the server side are only applied to the master password hash, not to the encryption key. This is pretty much the same flaw that I discovered in LastPass in 2018.
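
A minimal Python sketch of this layout makes the asymmetry visible. The parameter choices follow the whitepaper’s description; the salts and encodings are simplified here:

```python
import hashlib

# Sketch of the derivation layout described above (simplified, not the exact
# wire format). Note which value gets the server-side iterations.

password = b"correct horse battery staple"
email = b"user@example.com"  # the client-side salt

# Client side: 100,000 iterations produce the master key.
master_key = hashlib.pbkdf2_hmac("sha256", password, email, 100_000)

# The master key both protects the vault's encryption key and is hashed once
# more into the authentication hash that gets sent to the server.
master_password_hash = hashlib.pbkdf2_hmac("sha256", master_key, password, 1)

# Server side: another 100,000 iterations -- but only on the auth hash.
# The value that actually decrypts the vault (master_key) never gets them.
stored_hash = hashlib.pbkdf2_hmac(
    "sha256", master_password_hash, b"server-salt", 100_000
)
```

The server-side stretching only hardens the login check, not the encryption key, which is exactly the gap exploited below.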

What this means for decrypting the data

So what happens if some malicious actor happens to get a copy of the data, like it happened with LastPass? They will need to decrypt it. And for that, they will have to guess the master password. PBKDF2 is meant to slow down verifying whether a guess is correct.

Testing the guesses against the master password hash would be fairly slow: 200,001 PBKDF2 iterations here. But the attackers wouldn’t waste time doing that of course. Instead, for each guess they would derive an encryption key (100,000 PBKDF2 iterations) and check whether this one can decrypt the data.

This simple tweak removes all the protection granted by the server-side iterations and speeds up master password guessing considerably. Only the client-side iterations really matter as protection.
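
Sketched in Python, the attacker’s loop only ever pays the client-side cost. The decrypt-and-check step is simulated here by a direct key comparison; against a real stolen vault it would be an AES decryption plus a padding/integrity check:

```python
import hashlib

# Illustrative guessing loop: derive only the client-side key (100,000
# iterations) per guess. The server-side iterations never enter the picture.

def derive_key(guess: bytes, email: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", guess, email, 100_000)

def crack(target_key: bytes, email: bytes, guesses):
    for guess in guesses:
        # Stand-in for "try to decrypt the vault with this key".
        if derive_key(guess, email) == target_key:
            return guess
    return None

# Demo: a vault keyed from a weak password falls to a short guess list.
email = b"victim@example.com"
vault_key = derive_key(b"hunter2", email)
assert crack(vault_key, email, [b"letmein", b"hunter2"]) == b"hunter2"
```

Each guess costs 100,000 PBKDF2 iterations and not one more, regardless of what the server does on its side.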

What this means for you

The default protection level of LastPass and Bitwarden is identical. This means that you need a strong master password. And the only real way to get there is generating your password randomly. For example, you could generate a random passphrase using the diceware approach.

Using a dictionary for 5 dice (7776 dictionary words) and picking out four random words, you get a password with slightly over 50 bits of entropy. I’ve done the calculations for guessing such passwords: approximately 200 years on a single graphics card or $1,500,000.

This should be a security level sufficient for most regular users. If you are guarding valuable secrets or are someone of interest for state-level actors, you might want to consider a stronger password. Adding one more word to your passphrase increases the cost of guessing your password by factor 7776. So a passphrase with five words is already almost unrealistic to guess even for state-level actors.

All of this assumes that your KDF iterations setting is set to the default 100,000. Bitwarden will allow you to set this value as low as 5,000 without even warning you. This was mentioned as BWN-01-009 in Bitwarden’s 2018 Security Assessment, yet there we are five years later. Should your setting be too low, I recommend fixing it immediately. Reminder: current OWASP recommendation is 310,000.

Is Bitwarden as bad as LastPass?

So as it turns out, with the default settings Bitwarden provides exactly the same protection level as LastPass. This is only part of the story however.

One question is how many accounts have a protection level below the default configured. It seems that before 2018 Bitwarden’s default used to be 5,000 iterations. Then the developers increased it to 100,000 in multiple successive steps. When LastPass did that, they failed to upgrade existing accounts. I wonder whether Bitwarden also has older accounts stuck on suboptimal security settings.

The other aspect here is that Dmitry Chestnykh wrote about Bitwarden’s server-side iterations being useless in 2020 already, and Bitwarden should have been aware of it even if they didn’t realize how my research applies to them as well. On the other hand, using PBKDF2 with only 100,000 iterations isn’t a great default today. Still, Bitwarden failed to increase it in the past years, apparently copying LastPass as “gold standard” – and they didn’t adjust their PR claims either:

Screenshot of text from the Bitwarden website: The default iteration count used with PBKDF2 is 100,001 iterations on the client (client-side iteration count is configurable from your account settings), and then an additional 100,000 iterations when stored on our servers (for a total of 200,001 iterations by default). The organization key is shared via RSA-2048. The utilized hash functions are one-way hashes, meaning they cannot be reverse engineered by anyone at Bitwarden to reveal your master password. Even if Bitwarden were to be hacked, there would be no method by which your master password could be obtained.

Users have been complaining and asking for better key derivation functions since at least 2018. It was even mentioned as BWN-01-007 in Bitwarden’s 2018 Security Assessment. This change wasn’t considered a priority however. Only after the LastPass breach did things start moving, and it wasn’t Bitwarden’s core developers driving the change. Someone contributed the changes required for scrypt support and Argon2 support. The former was rejected in favor of the latter, and Argon2 will hopefully become the default (only?) choice at some point in the future.

Adding a secret key like 1Password would have been another option to address this issue. This suggestion has also been around since at least 2018 and accumulated a considerable amount of votes, but so far it hasn’t been implemented either.

On the bright side, Bitwarden clearly states that they encrypt all your vault data, including website addresses. So unlike with LastPass, any data lifted from Bitwarden servers will in fact be useless until the attackers manage to decrypt it.

How server-side iterations could have been designed

In case you are wondering whether it is even possible to implement server-side iterations mechanism correctly: yes, it is. One example is the onepw protocol Mozilla introduced for Firefox Sync in 2014. While the description is fairly complicated, the important part is: the password hash received by the server is not used for anything before it passes through additional scrypt hashing.
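
The essential property can be sketched in a few lines of Python (the parameters here are illustrative, not Mozilla’s exact values): the server stretches whatever hash it receives before storing or comparing it, so a database leak still forces attackers through the extra KDF.

```python
import hashlib
import os

# Sketch of onepw-style server-side hashing: the client-supplied auth hash
# passes through scrypt before it is used for anything.

def server_store(client_auth_hash: bytes, salt: bytes) -> bytes:
    return hashlib.scrypt(client_auth_hash, salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
# Client side (onepw uses a very low 1,000 PBKDF2 iterations, as noted below):
client_hash = hashlib.pbkdf2_hmac("sha256", b"master password", b"email-salt", 1000)
stored = server_store(client_hash, salt)

# The stored value is useless without redoing the scrypt work per guess.
assert stored != client_hash
```

Contrast this with Bitwarden’s scheme, where the value guarding the vault never passes through the server-side iterations at all.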

Firefox Sync has a different flaw: its client-side password hashing uses merely 1,000 PBKDF2 iterations, a ridiculously low setting. So if someone compromises the production servers rather than merely the stored data, they will be able to intercept password hashes that are barely protected. The corresponding bug report has been open for the past six years and is still unresolved.

The same attack scenario is an issue for Bitwarden as well. Even if you configure your account with 1,000,000 iterations, a compromised Bitwarden server can always tell the client to apply merely 5,000 PBKDF2 iterations to the master password before sending it to the server. The client has to rely on the server to tell it the correct value, and as long as low settings like 5,000 iterations are supported this issue will remain.
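
A client-side floor on the iteration count would blunt this downgrade attack. The following guard is purely hypothetical, not an existing Bitwarden feature:

```python
# Hypothetical client-side guard: refuse KDF settings below a floor, so a
# compromised server cannot request a hash protected by only 5,000 iterations.

MIN_PBKDF2_ITERATIONS = 310_000  # the OWASP figure cited in this article

def checked_iterations(server_supplied: int) -> int:
    if server_supplied < MIN_PBKDF2_ITERATIONS:
        raise ValueError(
            f"server requested {server_supplied} PBKDF2 iterations; "
            f"refusing anything below {MIN_PBKDF2_ITERATIONS}"
        )
    return server_supplied
```

As long as the protocol itself permits 5,000 iterations, only a check like this on the client can stop a malicious server from requesting a weakly protected hash.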

Niko Matsakis: Rust in 2023: Growing up

When I started working on Rust in 2011, my daughter was about three months old. She’s now in sixth grade, and she’s started growing rapidly. Sometimes we wake up to find that her clothes don’t quite fit anymore: the sleeves might be a little too short, or the legs come up to her ankles. Rust is experiencing something similar. We’ve been growing tremendously fast over the last few years, and any time you experience growth like that, there are bound to be a few rough patches. Things that don’t work as well as they used to. This holds both in a technical sense — there are parts of the language that don’t seem to scale up to Rust’s current size — and in a social one — some aspects of how the project runs need to change if we’re going to keep growing the way I think we should. As we head into 2023, with two years to go until the Rust 2024 edition, this is the theme I see for Rust: maturation and scaling.


In summary, these are (some of) the things I think are most important for Rust in 2023:

  • Implementing “the year of everywhere” so that you can make any function async, write impl Trait just about anywhere, and fully utilize generic associated types; planning for the Rust 2024 edition.
  • Beginning work on a Rust specification and integrating it into our processes.
  • Defining rules for unsafe code and smooth tooling to check whether you’re following them.
  • Supporting efforts to teach Rust in universities and elsewhere.
  • Improving our product planning and user feedback processes.
  • Refining our governance structure with specialized teams for dedicated areas, more scalable structure for broad oversight, and more intentional onboarding.

“The year of everywhere” and the 2024 edition

What do async-await, impl Trait, and generic parameters have in common? They’re all essential parts of modern Rust, that’s one thing. They’re also all, in my opinion, in a “minimum viable product” state. Each of them has some key limitations that make them less useful and more confusing than they have to be. As I wrote in “Rust 2024: The Year of Everywhere”, there are currently a lot of folks working hard to lift those limitations through a number of extensions:

None of these features are “new”. They just take something that exists in Rust and let you use it more broadly. Nonetheless, I think they’re going to have a big impact, on experienced and new users alike. Experienced users can express more patterns more easily and avoid awkward workarounds. New users never have to experience the confusion that comes from typing something that feels like it should work, but doesn’t.

One other important point: Rust 2024 is just around the corner! Our goal is to get any edition changes landed on master this year, so that we can spend the next year doing finishing touches. This means we need to put some effort into thinking ahead and planning what we can achieve.

Towards a Rust specification

As Rust grows, there is increasing need for a specification. Mara had a recent blog post outlining some of the considerations — and especially the distinction between a specification and standardization. I don’t see the need for Rust to get involved in any standards bodies — our existing RFC and open-source process works well. But I do think that for us to continue growing out the set of people working on Rust, we need a central definition of what Rust should do, and that we need to integrate that definition into our processes more thoroughly.

In addition to long-standing docs like the Rust Reference, the last year has seen a number of notable efforts towards a Rust specification. The Ferrocene language specification is the most comprehensive, covering the grammar, name resolution, and overall functioning of the compiler. Separately, I’ve been working on a project called a-mir-formality, which aims to be a “formal model” of Rust’s type system, including the borrow checker. And Ralf Jung has MiniRust, which is targeting the rules for unsafe code.

So what would an official Rust specification look like? Mara opened RFC 3355, which lays out some basic parameters. I think there are still a lot of questions to work out. Most obviously, how can we combine the existing efforts and documents? Each of them has a different focus and — as a result — a somewhat different structure. I’m hopeful that we can create a complementary whole.

Another important question is how to integrate the specification into our project processes. We’ve already got a rule that new language features can’t be stabilized until the reference is updated, but we’ve not always followed it, and the lang docs team is always in need of support. There are hopeful signs here: both the Foundation and Ferrocene are interested in supporting this effort.

Unsafe code

In my experience, most production users of Rust don’t touch unsafe code, which is as it should be. But almost every user of Rust relies on dependencies that do, and those dependencies are often the most critical systems.

At first, the idea of unsafe code seems simple. By writing unsafe, you gain access to new capabilities, but you take responsibility for using them correctly. But the more you look at unsafe code, the more questions come up. What does it mean to use those capabilities correctly? These questions are not just academic, they have a real impact on optimizations performed by the Rust compiler, LLVM, and even the hardware.
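As a small illustration of why "use those capabilities correctly" is subtle, consider this program. It is sound as written, but tiny variations of it hinge on exactly the aliasing and provenance rules in question:

```rust
fn write_first(v: &mut Vec<i32>) {
    let p = v.as_mut_ptr();
    // Sound: nothing else touches `v` between taking the pointer and
    // writing through it. But is it still sound if we `v.push(4)` in
    // between (the buffer may reallocate)? Or if we read `v[0]` while
    // `p` is live? Answering questions like these precisely is exactly
    // what an operational semantics for unsafe Rust has to nail down.
    unsafe { *p = 10; }
}

fn main() {
    let mut v = vec![1, 2, 3];
    write_first(&mut v);
    assert_eq!(v, vec![10, 2, 3]);
}
```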

Eventually, we want to get to a place where those who author unsafe code have clear rules to follow, as well as simple tooling to test if their code violates those rules (think cargo test --unsafe). Authors who want more assurance than dynamic testing can provide should have access to static verifiers that can prove their crate is safe — and we should start by proving the standard library is safe.

We’ve been trying for some years to build that world but it’s been ridiculously hard. Lately, though, there have been some breakthroughs. Gankra’s experiments with strict_provenance APIs have given some hope that we can define a relatively simple provenance model that will support both arbitrary unsafe code trickery and aggressive optimization, and Ralf Jung’s aforementioned MiniRust shows how a Rust operational semantics could look. More and more crates test with miri to check their unsafe code, and for those who wish to go further, the kani verifier can check unsafe code for UB (more formal methods tooling here).

I think we need a renewed focus on unsafe code in 2023. The first step is already underway: we are creating the opsem team. Led by Ralf Jung and Jakob Degen, the opsem team has the job of defining “the rules governing unsafe code in Rust”. It’s been clear for some time that this area requires dedicated focus, and I am hopeful that the opsem team will help to provide that.

I would like to see progress on dynamic verification. In particular, I think we need a tool that can handle arbitrary binaries. miri is great, but it can’t be used to test programs that call into C code. I’d like to see something more like valgrind or ubsan, where you can test your Rust project for UB even if it’s calling into other languages through FFI.

Dynamic verification is great, but it is limited by the scope of your tests. To get true reliability, we need a way for unsafe code authors to do static verification. Building static verification tools today is possible but extremely painful. The compiler’s APIs are unstable and a moving target. The stable MIR project proposes to change that by providing a stable set of APIs that tool authors can build on.

Finally, the best unsafe code is the unsafe code you don't have to write. Unsafe code provides infinite power, but people often have simpler needs that could be made safe with enough effort. Projects like cxx demonstrate the power of this approach. For Rust the language, safe transmute is the most promising such effort, and I'd like to see more of that.

Teaching Rust in universities

More and more universities are offering classes that make use of Rust, and recently many of these educators have come together in the Rust Edu initiative to develop shared teaching materials. I think this is great, and a trend we should encourage. It's helpful for the Rust community, of course, since it means more Rust programmers. I think it's also helpful for the students: much like learning a functional programming language, learning Rust requires incorporating different patterns and structure than other languages. I find my programs tend to be broken into smaller pieces, and the borrow checker forces me to be more thoughtful about which bits of context each function will need. Even if you wind up building your code in other languages, those new patterns will influence the way you work.

Stronger connections to teachers can also be a great source of data for improving Rust. If we understand better how people learn Rust and what they find difficult, we can use that to guide our priorities and look for ways to make it better. This might mean changing the language, but it might also mean changing the tooling or error messages. I'd like to see us set up some mechanism to feed insights from Rust educators, both in universities but also trainers at companies like Ferrous Systems or Integer32, into the Rust teams.

One particularly exciting effort here is the research being done at Brown University1 by Will Crichton and Shriram Krishnamurthi. Will and Shriram have published an interactive version of the Rust book that includes quizzes. As a reader, these quizzes help you check that you understood the section. But they also provide feedback to the book authors on which sections are effective. And they allow for "A/B testing", where you change the content of the book and see whether the quiz scores improve. Will and Shriram are also looking at other ways to deepen our understanding of how people learn Rust.

More insight and data into the user experience

As Rust has grown, we no longer have the obvious gaps in our user experience that there used to be (e.g., “no IDE support”). At the same time, it’s clear that the experience of Rust developers could be a lot smoother. There are a lot of great ideas of changes to make, but it’s hard to know which ones would be most effective. I would like to see a more coordinated effort to gather data on the user experience and transform it into actionable insights. Currently, the largest source of data that we have is the annual Rust survey. This is a great resource, but it only gives a very broad picture of what’s going on.

A few years back, the async working group collected “status quo” stories as part of its vision doc effort. These stories were immensely helpful in understanding the “async Rust user experience”, and they are still helping to shape the priorities of the async working group today. At the same time, that was a one-time effort, and it was focused on async specifically. I think that kind of effort could be useful in a number of areas.

I’ve already mentioned that teachers can provide one source of data. Another is simply going out and having conversations with Rust users. But I think we also need fine-grained data about the user experience. In the compiler team’s mid-year report, they noted (emphasis mine):

One more thing I want to point out: five of the ambitions checked the box in the survey that said “some of our work has reached Rust programmers, but we do not know if it has improved Rust for them.”

Right now, it's really hard to know even basic things, like how many users are encountering compiler bugs in the wild. We have to judge that by how many comments people leave on a GitHub issue. Meanwhile, Esteban personally scours Twitter to find out which error messages are confusing to people.2 We should look into better ways to gather data here. I'm a fan of (opt-in, privacy-preserving) telemetry, but I think there's a discussion to be had here about the best approach. All I know is that there has to be a better way.

Maturing our governance

In 2015, shortly after 1.0, RFC 1068 introduced the original Rust teams: libs, lang, compiler, infra, and moderation. Each team is an independent, decision-making entity, owning one particular aspect of Rust, and operating by consensus. The “Rust core team” was given the role of knitting them together and providing a unifying vision. This structure has been a great success, but as we’ve grown, it has started to hit some limits.

The first limiting point has been bringing the teams together. The original vision was that team leads—along with others—would be part of a core team that would provide a unifying technical vision and tend to the health of the project. It's become clear over time, though, that these are really two different jobs. Over this year, the various Rust teams, project directors, and existing core team have come together to define a new model for project-wide governance. This effort is being driven by a dedicated working group and I am looking forward to seeing that effort come to fruition this year.

The second limiting point has been the need for more specialized teams. One example near and dear to my heart is the new types team, which is focused on the type and trait system. This team has the job of diving into the nitty gritty on proposals like Generic Associated Types or impl Trait, and then surfacing the key details for broader-based teams like lang or compiler where necessary. The aforementioned opsem team is another example of this sort of team. I suspect we'll be seeing more teams like this.

There continues to be a need for us to grow teams that do more than coding. The compiler team prioritization effort, under the leadership of apiraino, is a great example of a vital role that allows Rust to function but doesn’t involve landing PRs. I think there are a number of other “multiplier”-type efforts that we could use. One example would be “reporters”, i.e., people to help publish blog posts about the many things going on and spread information around the project. I am hopeful that as we get a new structure for top-level governance we can see some renewed focus and experimentation here.


Seven years since Rust 1.0 and we are still going strong. As Rust usage spreads, our focus is changing. Where once we had gaping holes to close, it’s now more a question of iterating to build on our success. But the more things change, the more they stay the same. Rust is still working to empower people to build reliable, performant programs. We still believe that building a supportive, productive tool for systems programming — one that brings more people into the “systems programming” tent — is also the best way to help the existing C and C++ programmers “hack without fear” and build the kind of systems they always wanted to build. So, what are you waiting for? Let’s get building!

  1. In disclosure, AWS is a sponsor of this work. 

  2. To be honest, Esteban will probably always do that, whatever we do. 

The Rust Programming Language Blog

Officially announcing the types team

Oh hey, it's another new team announcement. But I will admit: if you follow the RFCs repository, the Rust zulip, or were particularly observant on the GATs stabilization announcement post, then this might not be a surprise for you. In fact, this "new" team was officially established at the end of May last year.

There are a few reasons why we're sharing this post now (as opposed to months before or...never). First, the team finished a three day in-person/hybrid meetup at the beginning of December and we'd like to share the purpose and outcomes of that meeting. Second, this announcement comes just around 7 months into the team's activity, and we'd love to share what we've accomplished within this time. Lastly, as we enter into the new year of 2023, it's a great time to share a bit of where we expect to head in this year and beyond.

Background - How did we get here?

Rust has grown significantly in the last several years, in many metrics: users, contributors, features, tooling, documentation, and more. As it has grown, the list of things people want to do with it has grown just as quickly. On top of powerful and ergonomic features, the demand for powerful tools such as IDEs or learning tools for the language has become more and more apparent. New compilers (frontend and backend) are being written. And, to top it off, we want Rust to continue to maintain one of its core design principles: safety.

All of these points highlight some key needs: to be able to know how the Rust language should work, to be able to extend the language and compiler with new features in a relatively painless way, to be able to hook into the compiler and be able to query important information about programs, and finally to be able to maintain the language and compiler in an amenable and robust way. Over the years, considerable effort has been put into these needs, but we haven't quite achieved these key requirements.

To extend a little, and put some numbers to paper, there are currently around 220 open tracking issues for language, compiler, or types features that have been accepted but are not completely implemented, of which about half are at least 3 years old and many are several years older than that. Many of these tracking issues have been open for so long not solely because of bandwidth, but because working on these features is hard, in large part because putting the relevant semantics in context of the larger language properly is hard; it's not easy for anyone to take a look at them and know what needs to be done to finish them. It's clear that we still need better foundations for making changes to the language and compiler.

Another number that might shock you: there are currently 62 open unsoundness issues. This sounds much scarier than it really is: nearly all of these are edges of the compiler and language that have been found by people who specifically poke and prod to find them; in practice these will not pop up in the programs you write. Nevertheless, these are edges we want to iron out.

The Types Team

Moving forward, let's talk about a smaller subset of Rust rather than the entire language and compiler. Specifically, the parts relevant here include the type checker - loosely, defining the semantics and implementation of how variables are assigned their type, trait solving - deciding what traits are defined for which types, and borrow checking - proving that Rust's ownership model always holds. All of these can be thought of cohesively as the "type system".
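All three components show up in a few lines of everyday Rust; a rough sketch:

```rust
fn main() {
    // Type checking: infer that `x: i32` from the literals and the addition.
    let x = 1 + 2;

    // Trait solving: resolve `to_string` through the `ToString` impl
    // that exists for every `Display` type, including i32.
    let s = x.to_string();

    // Borrow checking: the commented-out lines would be rejected,
    // because `r` would borrow `s` across the move of `s`:
    // let r = &s;
    // let s2 = s;
    // println!("{r}");

    assert_eq!(s, "3");
}
```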

As of RFC 3254, the above subset of the Rust language and compiler are under the purview of the types team. So, what exactly does this entail?

First, since around 2018, there existed the "traits working group", which had the primary goal of creating a performant and extensible definition and implementation of Rust's trait system (including the Chalk trait-solving library). As time progressed, and particularly in the latter half of 2021 into 2022, the working group's influence and responsibility naturally expanded to the type checker and borrow checker too - they are strongly linked and it's often hard to disentangle the trait solver from the other two. So, in some ways, the types team essentially subsumes the former traits working group.

Another relevant working group is the polonius working group, which primarily works on the design and implementation of the Polonius borrow-checking library. While the working group itself will remain, it is now also under the purview of the types team.

Now, although the traits working group was essentially folded into the types team, the creation of a team has some benefits. First, like the style team (and many other teams), the types team is not a top level team. It actually, currently uniquely, has two parent teams: the lang and compiler teams. Both teams have decided to delegate decision-making authority covering the type system.

The language team has delegated part of the design of the type system. However, importantly, this delegation covers less of the "feel" of the type system's features and more of how it "works", with the expectation that the types team will advise and bring concerns about new language extensions where required. (This division is not strongly defined, but the expectation is generally to err on the side of more caution.) The compiler team, on the other hand, has delegated the responsibility of defining and maintaining the implementation of the trait system.

One particular responsibility that has traditionally been shared between the language and compiler teams is the assessment and fixing of soundness bugs in the language related to the type system. These often arise from implementation-defined language semantics and have in the past required synchronization and input from both lang and compiler teams. In the majority of cases, the types team now has the authority to assess and implement fixes without the direct input from either parent team. This applies, importantly, for fixes that are technically backwards-incompatible. While fixing safety holes is not covered under Rust's backwards compatibility guarantees, these decisions are not taken lightly and generally require team signoff and are assessed for potential ecosystem breakage with crater. However, this can now be done under one team rather than requiring the coordination of two separate teams, which makes closing these soundness holes easier (I will discuss this more later.)

Formalizing the Rust type system

As mentioned above, a nearly essential element of the growing Rust language is to know how it should work (and to have this well documented). There are relatively recent efforts pushing for a Rust specification (like Ferrocene or this open RFC), but it would be hugely beneficial to have a formalized definition of the type system, regardless of its potential integration into a more general specification. In fact the existence of a formalization would allow a better assessment of potential new features or soundness holes, without the subtle intricacies of the rest of the compiler.

As far back as 2015, not long after the release of Rust 1.0, an experimental Rust trait solver called Chalk began to be written. The core idea of Chalk is to translate the surface syntax and ideas of the Rust trait system (e.g. traits, impls, where clauses) into a set of logic rules that can be solved using a Prolog-like solver. Then, once this set of logic and solving reaches parity with the trait solver within the compiler itself, the plan was to simply replace the existing solver. In the meantime (and continuing forward), this new solver could be used by other tools, such as rust-analyzer, where it is used today.
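As a rough sketch of that lowering (the exact Chalk syntax differs), a single impl corresponds to a Prolog-style Horn clause, and trait solving becomes a query against those clauses:

```rust
// Surface Rust: a trait, a generic struct, and a conditional impl.
trait Even {}
struct Pair<T>(T, T);
impl<T: Even> Even for Pair<T> {}
impl Even for u8 {}

// Chalk lowers the generic impl to (roughly) the logic rule:
//
//   forall<T> { Implemented(Pair<T>: Even) :- Implemented(T: Even) }
//
// and "does Pair<Pair<u8>>: Even hold?" becomes a Prolog-like query
// answered by applying that rule twice plus the base fact for u8.
fn is_even<T: Even>() -> bool {
    true
}

fn main() {
    assert!(is_even::<Pair<Pair<u8>>>());
}
```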

Now, given Chalk's age and the promises it had been hoped to be able to deliver on, you might be tempted to ask the question "Chalk, when?" - and plenty have. However, we've learned over the years that Chalk is likely not the correct long-term solution for Rust, for a few reasons. First, as mentioned a few times in this post, the trait solver is only but a part of a larger type system; and modeling how the entire type system fits together gives a more complete picture of its details than trying to model the parts separately. Second, the needs of the compiler are quite different than the needs of a formalization: the compiler needs performant code with the ability to track information required for powerful diagnostics; a good formalization is one that is not only complete, but also easy to maintain, read, and understand. Over the years, Chalk has tried to have both and it has so far ended up with neither.

So, what are the plans going forward? Well, first the types team has begun working on a formalization of the Rust type system, currently coined a-mir-formality. An initial experimental phase was written using PLT Redex, but a Rust port is in progress. There's lots to do still (including modeling more of the trait system, writing an RFC, and moving it into the rust-lang org), but it's already showing great promise.

Second, we've begun an initiative for writing a new trait solver in-tree. This new trait solver is more limited in scope than a-mir-formality (i.e. not intending to encompass the entire type system). In many ways, it's expected to be quite similar to Chalk, but leverage bits and pieces of the existing compiler and trait solver in order to make the transition as painless as possible. We do expect it to be pulled out-of-tree at some point, so it's being written to be as modular as possible. During our types team meetup earlier this month, we were able to hash out what we expect the structure of the solver to look like, and we've already gotten that merged into the source tree.

Finally, Chalk is no longer going to be a focus of the team. In the short term, it still may remain a useful tool for experimentation. As said before, rust-analyzer uses Chalk as its trait solver. It's also able to be used in rustc under an unstable feature flag. Thus, new ideas currently could be implemented in Chalk and battle-tested in practice. However, this benefit will likely not last long as a-mir-formality and the new in-tree trait solver get more usable and their interfaces become more accessible. All this is not to say that Chalk has been a failure. In fact, Chalk has taught us a lot about how to think about the Rust trait solver in a logical way and the current Rust trait solver has evolved over time to more closely model Chalk, even if incompletely. We expect to still support Chalk in some capacity for the time being, for rust-analyzer and potentially for those interested in experimenting with it.

Closing soundness holes

As brought up previously, a big benefit of creating a new types team with delegated authority from both the lang and compiler teams is the authority to assess and fix unsoundness issues mostly independently. However, a secondary benefit has actually just been better procedures and knowledge-sharing that allows the members of the team to get on the same page for what soundness issues there are, why they exist, and what it takes to fix them. For example, during our meetup earlier this month, we were able to go through the full list of soundness issues (focusing on those relevant to the type system), identify their causes, and discuss expected fixes (though most require prerequisite work discussed in the previous section).

Additionally, the team has already made a number of soundness fixes and has a few more in progress. I won't go into details; instead, here they are in list form:

As you can see, we're making progress on closing soundness holes. These sometimes break code, as assessed by crater. However, we do what we can to mitigate this, even when the code being broken is technically unsound.

New features

While it's not technically under the types team purview to propose and design new features (these fall more under lang team proper), there are a few instances where the team is heavily involved (if not driving) feature design.

These can be small additions, which are close to bug fixes. For example, this PR allows more permutations of lifetime outlives bounds than what compiled previously. Or, these PRs can be larger, more impactful changes, that don't fit under a "feature", but instead are tied heavily to the type system. For example, this PR makes the Sized trait coinductive, which effectively makes more cyclic bounds compile (see this test for an example).

There are also a few larger features and feature sets that have been driven by the types team, largely due to the heavy intersection with the type system. Here are a few examples:

  • Generic associated types (GATs) - The feature long predates the types team and is the only one in this list that has actually been stabilized so far. But due to heavy type system interaction, the team was able to navigate the issues that came on its final path to stabilization. See this blog post for much more details.
  • Type alias impl trait (TAITs) - Implementing this feature properly requires a thorough understanding of the type checker. This is close to stabilization. For more information, see the tracking issue.
  • Trait upcasting - This one is relatively small, but has some type system interaction. Again, see the tracking issue for an explanation of the feature.
  • Negative impls - This too predates the types team, but has recently been worked on by the team. There are still open bugs and soundness issues, so this is a bit away from stabilization, but you can follow here.
  • Return position impl traits in traits (RPITITs) and async functions in traits (AFITs) - These have only recently been possible with advances made with GATs and TAITs. They are currently tracked under a single tracking issue.
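Of these, GATs are the one you can already use on stable Rust; here is a minimal sketch of the classic motivating example, a "lending iterator" whose items borrow from the iterator itself:

```rust
// A GAT: the associated type `Item` is generic over the lifetime of
// the borrow of `self`, which an ordinary associated type can't express.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

struct Windows<'s> {
    data: &'s [i32],
    pos: usize,
    size: usize,
}

impl<'s> LendingIterator for Windows<'s> {
    type Item<'a>
        = &'a [i32]
    where
        Self: 'a;

    fn next(&mut self) -> Option<&[i32]> {
        if self.pos + self.size > self.data.len() {
            return None;
        }
        let w = &self.data[self.pos..self.pos + self.size];
        self.pos += 1;
        Some(w)
    }
}

fn main() {
    let data = [1, 2, 3, 4];
    let mut it = Windows { data: &data, pos: 0, size: 2 };
    let mut sums = Vec::new();
    while let Some(w) = it.next() {
        sums.push(w.iter().sum::<i32>());
    }
    assert_eq!(sums, vec![3, 5, 7]);
}
```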


To conclude, let's put all of this onto a roadmap. As always, goals are best when they are specific, measurable, and time-bound. For this, we've decided to split our goals into roughly 4 stages: summer of 2023, end-of-year 2023, end-of-year 2024, and end-of-year 2027 (6 months, 1 year, 2 years, and 5 years). Overall, our goals are to build a platform to maintain a sound, testable, and documented type system that can scale to new features needed by the Rust language. Furthermore, we want to cultivate a sustainable and open-source team (the types team) to maintain that platform and type system.

A quick note: some of the things here have not quite been explained in this post, but they've been included in the spirit of completeness. So, without further ado:

6 months

  • The work-in-progress new trait solver should be testable
  • a-mir-formality should be testable against the Rust test suite
  • Both TAITs and RPITITs/AFITs should be stabilized or on the path to stabilization.

EOY 2023

  • New trait solver replaces part of existing trait solver, but not used everywhere
  • We have an onboarding plan (for the team) and documentation for the new trait solver
  • a-mir-formality is integrated into the language design process

EOY 2024

EOY 2027

  • (Types) unsound issues resolved
  • Most language extensions are easy to do; large extensions are feasible
  • a-mir-formality passes 99.9% of the Rust test suite


It's an exciting time for Rust. As its userbase and popularity grows, the language does as well. And as the language grows, the need for a sustainable type system to support the language becomes ever more apparent. The project has formed this new types team to address this need and hopefully, in this post, you can see that the team has so far accomplished a lot. And we expect that trend to only continue over the next many years.

As always, if you'd like to get involved or have questions, please drop by the Rust zulip.

Will Kahn-Greene

Socorro: Schema based overhaul of crash ingestion: retrospective (2022)



2+ years

  • radically reduced risk of data leaks due to misconfigured permissions

  • centralized and simplified configuration and management of fields

  • normalization and validation performed during processing

  • documentation of data reviews, data caveats, etc

  • reduced risk of bugs when adding new fields--testing is done in CI

  • new crash reporting data dictionary with Markdown-formatted descriptions, real examples, relevant links


I've been working on Socorro (crash ingestion pipeline at Mozilla) since the beginning of 2016. During that time, I've focused on streamlining maintenance of the project, paying down technical debt, reducing risk, and improving crash analysis tooling.

One of the things I identified early on is how the crash ingestion pipeline was chaotic, difficult to reason about, and difficult to document. What did the incoming data look like? What did the processed data look like? Was it valid? Which fields were protected? Which fields were public? How do we add support for a new crash annotation? This was problematic for our ops staff, engineering staff, and all the people who used Socorro. It was something in the back of my mind for a while, but I didn't have any good thoughts.

In 2020, Socorro moved into the Data Org which has multiple data pipelines. After spending some time looking at how their pipelines work, I wanted to rework crash ingestion.

The end result of this project is that:

  1. the project is easier to maintain:

    • adding support for new crash annotations is done in a couple of schema files and possibly a processor rule

  2. risk of security issues and data breaches is lower:

    • typos, bugs, and mistakes when adding support for a new crash annotation are caught in CI

    • permissions are specified in a central location, changing permission for fields is trivial and takes effect in the next deploy, setting permissions supports complex data structures in easy-to-reason-about ways, and mistakes are caught in CI

  3. the data is easier to use and reason about:

    • normalization and validation of crash annotation data happens during processing and downstream uses of the data can expect it to be valid; further we get a signal when the data isn't valid which can indicate product bugs

    • schemas describing incoming and processed data

    • crash reporting data dictionary documenting incoming data fields, processed data fields, descriptions, sources, data gotchas, examples, and permissions

What is Socorro?

Socorro is the crash ingestion pipeline for Mozilla products like Firefox, Fenix, Thunderbird, and MozillaVPN.

When Firefox crashes, the crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the submitted crash report, processes it, and has tools for viewing and analyzing crash data.

State of crash ingestion at the beginning

The crash ingestion system was working and it was usable, but it was in a bad state.

  • Poor data management

    Normalization and validation of data was all over the codebase and not consistent:

    • processor rule code

    • AWS S3 crash storage code

    • Elasticsearch indexing code

    • Telemetry crash storage code

    • Super Search querying and result rendering code

    • report view and template code

    • signature report code and template code

    • crontabber job code

    • any scripts that used the data

    • tests -- many of which had bad test data so who knows what they were really testing

    Naive handling of minidump stackwalker output meant that changes in the stackwalker output largely went unnoticed, and there was no indication as to whether changed output created issues in the system.

    Further, since it was all over the place, there were no guarantees for data validity when downloading it using the RawCrash, ProcessedCrash, and SuperSearch APIs. Anyone writing downstream systems would also have to normalize and validate the data.

  • Poor permissions management

    Permissions were defined in multiple places:

    • Elasticsearch json redactor

    • Super Search fields

    • RawCrash API allow list

    • ProcessedCrash API allow list

    • report view and template code

    • Telemetry crash storage code

    • and other places

    We couldn't effectively manage permissions of fields in the stackwalker output because we had no idea what was there.

  • Poor documentation

    No documentation of crash annotation fields other than CrashAnnotations.yaml which didn't enforce anything in crash ingestion (process, valid type, data correctness, etc) and was missing important information like data gotchas, data review urls, and examples.

    No documentation of processed crash fields at all.

  • Making changes was high risk

Changing fields from public to protected was high risk because you had to find all the places the field might show up, which was intractable. Adding support for new fields often took multiple passes over several weeks because we'd miss things. Server errors happened with some regularity due to weirdness in crash annotation values affecting the Crash Stats site.

  • Tangled concerns across the codebase

    Lots of tangled concerns where things defined in one place affected other places that shouldn't be related. For example, the Super Search fields definition was acting as a "schema" for other parts of the system that had nothing to do with Elasticsearch or Super Search.

  • Difficult to maintain

    It was difficult to support new products.

    It was difficult to debug issues in crash ingestion and crash reporting.

    The Crash Stats webapp contained lots of if/then/else bits to handle weirdness in the crash annotation values. Nulls, incorrect types, different structures, etc.

    Socorro contained lots of vestigial code from half-done field removal, deprecated fields, fields that were removed from crash reports, etc. These vestigial bits were all over the code base. Discovering and removing these bits was time consuming and error prone.

The code for exporting data to Telemetry built the export data using a list of fields to exclude rather than a list of fields to include. This is backwards and impossible to maintain--we never should have been doing it that way. Further, it pulled data from the raw crash, for which we had no validation guarantees, causing issues downstream in the Telemetry import code.

There was no way to validate the data used in the unit tests, so a lot of it was invalid. That meant CI would pass, but we'd see errors in our stage and production environments.

  • Different from other similar systems

In 2020, Socorro was moved into Mozilla's Data Org, which had a set of standards and conventions for collecting, storing, analyzing, and providing access to data. Socorro didn't follow any of them, which made it difficult to work on, to connect with, and to staff. Things Data Org had that Socorro didn't:

    • a schema specifying fields, types, and documentation

    • data flow documentation

    • data review policy, process, and artifacts for data being collected and how to add new data

    • data dictionary for fields for users including documentation, data review urls, data gotchas

In summary, we had a system that took a lot of effort to maintain, wasn't serving our users' needs, and posed a high risk of a security/data breach.

Project plan

Many of these issues can be alleviated and reduced by moving to a schema-driven system where we:

  1. define a schema for annotations and a schema for the processed crash

  2. change crash ingestion and the Crash Stats site to use those schemas

When designing this schema-driven system, we should be thinking about:

  1. how easy is it to maintain the system?

  2. how easy is it to explain?

  3. how flexible is it for solving other kinds of problems in the future?

  4. what kinds of errors will likely happen when maintaining the system and how can we avert them in CI?

  5. what kinds of errors can happen and how much risk do they pose for data leaks? what of those can we avert in CI?

  6. how flexible is the system which needs to support multiple products potentially with different needs?

I worked out a minimal version of that vision that we could migrate to and then work with going forward.

The crash annotations schema should define:

  1. what annotations are in the crash report?

  2. which permissions are required to view a field

  3. field documentation (provenance, description, data review, related bugs, gotchas, analysis tips, etc)

The processed crash schema should define:

  1. what's in the processed crash?

  2. which permissions are required to view a field

  3. field documentation (provenance, description, related bugs, gotchas, analysis tips, etc)
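As a sketch of what a single entry in the processed crash schema might contain (the key names here are assumptions, not necessarily what Socorro's schema actually uses):

```yaml
# Illustrative processed crash schema fragment -- key names are assumptions.
crashing_thread:
  type: ["integer", "null"]
  description: Index of the crashing thread in the threads structure.
  source_annotation: CrashingThread
  permissions: public
  bugs:
    - https://bugzilla.mozilla.org/show_bug.cgi?id=1740397
```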

Then we make the following changes to the system:

  1. write a processor rule to copy, normalize, and validate data from the raw crash based on the processed crash schema

  2. switch the Telemetry export code to using the processed crash for data to export

  3. switch the Telemetry export code to using the processed crash schema for permissions

  4. switch Super Search to using the processed crash for data to index

  5. switch Super Search to using the processed crash schema for documentation and permissions

  6. switch Crash Stats site to using the processed crash for data to render

  7. switch Crash Stats site to using the processed crash schema for documentation and permissions

  8. switch the RawCrash, ProcessedCrash, and SuperSearch APIs to using the crash annotations and processed crash schemas for documentation and permissions

After doing that, we have:

  1. field documentation is managed in the schemas

  2. permissions are managed in the schemas

  3. data is normalized and validated once in the processor and everything uses the processed crash data for indexing, searching, and rendering

  4. adding support for new fields and changing existing fields is easier and problems are caught in CI
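As a rough illustration of change 1 above, a schema-driven processor rule might walk the schema, copy annotation values into the processed crash, and coerce them to the declared types. Everything in this sketch (the schema layout, the source_annotation key, the function name) is hypothetical, not Socorro's actual API:

```python
# Sketch of a schema-driven copy/normalize/validate step.
# All names here are hypothetical -- Socorro's real code differs.

CASTERS = {
    "integer": int,
    "number": float,
    "string": str,
    "boolean": lambda v: str(v).lower() in ("1", "true", "yes"),
}


def copy_from_raw_crash(schema, raw_crash):
    """Build a processed crash from a raw crash using the schema.

    Fields not in the schema are dropped; values that cannot be
    coerced to the schema type become None.
    """
    processed = {}
    for field, spec in schema["properties"].items():
        source = spec.get("source_annotation", field)
        if source not in raw_crash:
            continue
        caster = CASTERS.get(spec["type"], str)
        try:
            processed[field] = caster(raw_crash[source])
        except (TypeError, ValueError):
            processed[field] = None  # invalid data is normalized away
    return processed


schema = {
    "properties": {
        "crashing_thread": {"type": "integer", "source_annotation": "CrashingThread"},
        "product": {"type": "string"},
    }
}
raw = {"CrashingThread": "0", "product": "Firefox", "JunkField": "ignored"}
print(copy_from_raw_crash(raw_crash=raw, schema=schema))
# -> {'crashing_thread': 0, 'product': 'Firefox'}
```

Because everything downstream reads only the processed crash, this is the one place type weirdness has to be handled.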

Implementation decisions

Use JSON Schema.

Data Org at Mozilla uses JSON Schema for schema specification. The schema is written using YAML.

The metrics schema is used to define metrics.yaml files which specify the metrics being emitted and collected.

For example:
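Here is a hypothetical Glean-style metrics.yaml entry; the metric names, bug URLs, and email address are illustrative:

```yaml
# Hypothetical metrics.yaml entry -- names and URLs are illustrative.
browser.usage:
  total_uri_count:
    type: counter
    description: >
      Count of total non-unique URIs visited in a session.
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=0000000
    notification_emails:
      - nobody@example.com
    expires: never
```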

One long long long term goal for Socorro is to unify standards and practices with the Data Ingestion system. Towards that goal, it's prudent to build out the crash annotation and processed crash schemas using whatever we can take from the equivalent metrics schemas.

We'll additionally need to build out tooling for verifying, validating, and testing schema modifications to make ongoing maintenance easier.

Use schemas to define and drive everything.

We've got permissions, structures, normalization, validation, definition, documentation, and several other things related to the data and how it's used throughout crash ingestion spread out across the codebase.

Instead of that, let's pull it all together into a single schema and change the system to be driven from this schema.

The schema will include:

  1. structure specification

  2. documentation including data gotchas, examples, and implementation details

  3. permissions

  4. processing instructions

We'll have a schema for supported annotations and a schema for the processed crash.

We'll rewrite existing parts of crash ingestion to use the schema:

  1. processing

     1. use processing instructions to validate and normalize annotation data

  2. super search

     1. field documentation

     2. permissions

     3. remove all the normalization and validation code from indexing

  3. crash stats

     1. field documentation

     2. permissions

     3. remove all the normalization and validation code from page rendering

Only use processed crash data for indexing and analysis.

The indexing system has its own normalization and validation code since it pulls data to be indexed from the raw crash.

The crash stats code has its own normalization and validation code since it renders data from the raw crash in various parts of the site.

We're going to change this so that all normalization and validation happens during processing, the results are stored in the processed crash, and indexing, searching, and crash analysis only work on processed crash data.

By default, all data is protected.

By default, all data is protected unless it is explicitly marked as public. This has some consequences for the code:

  1. any data not specified in a schema is treated as protected

  2. all schema fields need to specify permissions for that field

  3. any data in a schema either:

     • is explicitly marked public, OR

     • lists the permissions required to view it

  4. for nested structures, any child field that is public has public ancestors

We can catch some of these issues in CI and need to write tests to verify them.

This is slightly awkward when maintaining the schema because it would be more reasonable to have "no permissions required" mean that the field is public. However, it's possible to accidentally not specify the permissions and we don't want to be in that situation. Thus, we decided to go with explicitly marking public fields as public.
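One way to catch these issues in CI is a test that walks the schema and flags any field that omits permissions or that is public under a protected parent. This is a minimal sketch; the schema layout (nested properties with a permissions list) is assumed:

```python
# Sketch of a CI check for the permission rules described above.
# The schema layout and permission names are assumptions for illustration.

def find_permission_errors(schema, path="", ancestors_public=True):
    """Return a list of schema paths that violate the permission rules."""
    errors = []
    for name, spec in schema.get("properties", {}).items():
        field_path = f"{path}.{name}" if path else name
        permissions = spec.get("permissions")
        if permissions is None:
            # Rule 2: every field must specify permissions explicitly.
            errors.append(f"{field_path}: no permissions specified")
            is_public = False
        else:
            is_public = permissions == ["public"]
            # Rule 4: a public child must have public ancestors.
            if is_public and not ancestors_public:
                errors.append(f"{field_path}: public field under protected parent")
        errors.extend(
            find_permission_errors(spec, field_path, ancestors_public and is_public)
        )
    return errors


schema = {
    "properties": {
        "product": {"permissions": ["public"]},
        "memory_info": {
            "permissions": ["crashstats.view_pii"],
            "properties": {
                "leaked": {"permissions": ["public"]},  # violates rule 4
                "size": {},                             # violates rule 2
            },
        },
    }
}
for error in find_permission_errors(schema):
    print(error)
```

Wiring a check like this into the test suite turns "accidentally public" from a production incident into a failing CI run.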

Work done

Phase 1: cleaning up

We had a lot of work to do before we could start defining schemas and changing the system to use those schemas.

  1. remove vestigial code (some of this work was done in other phases as it was discovered)

  2. fix signature generation

  3. fix Super Search

    • [bug 1624345]: stop saving random data to Elasticsearch crashstorage (2020-06)

    • [bug 1706076]: remove dead Super Search fields (2021-04)

    • [bug 1712055]: remove system_error from Super Search fields (2021-07)

    • [bug 1712085]: remove obsolete Super Search fields (2021-08)

    • [bug 1697051]: add crash_report_keys field (2021-11)

    • [bug 1736928]: remove largest_free_vm_block and tiny_block_size (2021-11)

    • [bug 1754874]: remove unused annotations from Super Search (2022-02)

    • [bug 1753521]: stop indexing items from raw crash (2022-02)

    • [bug 1762005]: migrate to lower-cased versions of Plugin* fields in processed crash (2022-03)

    • [bug 1755528]: fix flag/boolean handling (2022-03)

    • [bug 1762207]: remove hang_type (2022-04)

    • [bug 1763264]: clean up super search fields from migration (2022-07)

  4. fix data flow and usage

    • [bug 1740397]: rewrite CrashingThreadInfoRule to normalize crashing thread (2021-11)

    • [bug 1755095]: fix TelemetryBotoS3CrashStorage so it doesn't use Super Search fields (2022-03)

    • [bug 1740397]: change webapp to pull crashing_thread from processed crash (2022-07)

    • [bug 1710725]: stop using DotDict for raw and processed data (2022-09)

  5. clean up the raw crash structure

Phase 2: define schemas and all the tooling we needed to work with them

After cleaning up the code base, removing vestigial code, fixing Super Search, and fixing Telemetry export code, we could move on to defining schemas and writing all the code we needed to maintain the schemas and work with them.

  • [bug 1762271]: rewrite json schema reducer (2022-03)

  • [bug 1764395]: schema for processed crash, reducers, traversers (2022-08)

  • [bug 1788533]: fix validate_processed_crash to handle pattern_properties (2022-08)

  • [bug 1626698]: schema for crash annotations in crash reports (2022-11)

Phase 3: fix everything to use the schemas

That allowed us to fix a bunch of things:

  • [bug 1784927]: remove elasticsearch redactor code (2022-08)

  • [bug 1746630]: support new threads.N.frames.N.unloaded_modules minidump-stackwalk fields (2022-08)

  • [bug 1697001]: get rid of UnredactedCrash API and model (2022-08)

  • [bug 1100352]: remove hard-coded allow lists from RawCrash (2022-08)

  • [bug 1787929]: rewrite Breadcrumbs validation (2022-09)

  • [bug 1787931]: fix Super Search fields to pull permissions from processed crash schema (2022-09)

  • [bug 1787937]: fix Super Search fields to pull documentation from processed crash schema (2022-09)

  • [bug 1787931]: use processed crash schema permissions for super search (2022-09)

  • [bug 1100352]: remove hard-coded allow lists from ProcessedCrash models (2022-11)

  • [bug 1792255]: add telemetry_environment to processed crash (2022-11)

  • [bug 1784558]: add collector metadata to processed crash (2022-11)

  • [bug 1787932]: add data review urls for crash annotations that have data reviews (2022-11)

Phase 4: improve

With fields specified in schemas, we can write a crash reporting data dictionary:

  • [bug 1803558]: crash reporting data dictionary (2023-01)

  • [bug 1795700]: document raw and processed schemas and how to maintain them (2023-01)

Then we can finish:

Random thoughts

This was a very, very long-term project with many small steps and some really big ones. Trying to land a large project in one push is futile; the only way to do it successfully is to break it into a million small steps, each of which stands on its own and doesn't create urgency for getting the next step done.

Any time I changed field names or types, I'd have to do a data migration. Data migrations take 6 months to do because I have to wait for existing data to expire from storage. On the one hand, it's a blessing I could do migrations at all--you can't do this with larger data sets or with data sets where the data doesn't expire without each migration becoming a huge project. On the other hand, it's hard to juggle being in the middle of multiple migrations and sometimes the contortions one has to perform are grueling.

If you're working on a big project that's going to require changing data structures, figure out how to do migrations early with as little work as possible and use that process as often as you can.

Conclusion and where we could go from here

This was such a huge project that spanned years. It's so hard to finish projects like this because the landscape for the project is constantly changing. Meanwhile, being mid-project has its own set of complexities and hardships.

I'm glad I tackled it and I'm glad it's mostly done. There are some minor things to do, still, but this new schema-driven system has a lot going for it. Adding support for new crash annotations is much easier, less risky, and takes less time.

It took me about a month to pull this post together.

That's it!

That's the story of the schema-based overhaul of crash ingestion. There's probably some bits missing and/or wrong, but the gist of it is here.

If you have any questions or bump into bugs, I hang out in the #crashreporting channel. You can also write up a bug for Socorro.

Hopefully this helps. If not, let us know!

Mozilla ThunderbirdImportant: Thunderbird 102.7.0 And Microsoft 365 Enterprise Users


Update on January 31st:

We’re preparing to ship a 2nd build of Thunderbird 102.7.1 with an improved patch for the Microsoft 365 oAuth issue reported here. Our anticipated release window is before midnight Pacific Time, January 31.

Update on January 28th:

Some users still experienced issues with the solution to the authentication issue that was included in Thunderbird 102.7.1. A revised solution has been proposed and is expected to ship soon. We apologize for the inconvenience this has caused, and the disruption to your workflow. You can track this issue via Bug #1810760.

Update on January 20th:

Thunderbird 102.7.0 was scheduled to be released on Wednesday, January 18, but we decided to hold the release because of an issue detected which affects authentication of Microsoft 365 Business accounts.

A solution to the authentication issue will ship with version 102.7.1, releasing during the week of January 23. Version 102.7.0 is now available for manual download only, to allow unaffected users to update and benefit from the fixes it delivers.

Please note that automatic updates are currently disabled, and users of Microsoft 365 Business are cautioned not to update.

*Users who update and encounter difficulty can simply reinstall 102.6.1. Thunderbird should automatically detect your existing profile. However, you can launch the Profile Manager if needed by following these instructions.

On Wednesday, January 18, Thunderbird 102.7.0 will be released with a crucial change to how we handle OAuth2 authorization with Microsoft accounts. This may involve some extra work for users currently using Microsoft-hosted accounts through their employer or educational institution.

In order to meet Microsoft’s requirements for publisher verification, it was necessary for us to switch to a new Azure application and application ID. However, some of these accounts are configured to require administrators to approve any applications accessing email.

If you encounter a screen saying “Need admin approval” during the login process, please contact your IT administrators to approve the client ID 9e5f94bc-e8a4-4e73-b8be-63364c29d753 for Mozilla Thunderbird (it previously appeared to non-admins as “Mzla Technologies Corporation”).

We request the following permissions:

  • IMAP.AccessAsUser.All (Read and write access to mailboxes via IMAP.)
  • POP.AccessAsUser.All (Read and write access to mailboxes via POP.)
  • SMTP.Send (Send emails from mailboxes using SMTP AUTH.)
  • offline_access

(Please note that this change was previously implemented in Thunderbird Beta, but the Thunderbird 102.7.0 release introduces this change to our stable ESR release.)

The post Important: Thunderbird 102.7.0 And Microsoft 365 Enterprise Users appeared first on The Thunderbird Blog.